From zonexo at gmail.com Fri May 4 01:14:51 2007 From: zonexo at gmail.com (Ben Tay) Date: Fri, 4 May 2007 14:14:51 +0800 Subject: Using existing mpich for new built Message-ID: <804ab5d40705032314q31d53fd4gfa107831f8a06fc@mail.gmail.com> Hi, I just compiled a shared version of PETSc. I realised that I had forgotten to add --download-hypre=1 to the configure, so I need to rebuild PETSc. Initially I had used --download-mpich=1 to build mpich. Before the "make all test", I got: ... MPI: Includes: ['/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/include'] Library: ['/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/lib/libmpich.a', 'libnsl.a', 'librt.a'] ... I tried to "reuse" this library for the new build, but it failed. It says UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------- Shared libraries cannot be built using MPI provided. Either rebuild with --with-shared=0 or rebuild MPI with shared library support My options are --with-mpi-include=/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/include --with-mpi-lib=/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/lib/libmpich.a I also changed libmpich.a to libmpich.so, but the same error occurs. Is there any way I can use the mpich which I built earlier? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From knepley at gmail.com Fri May 4 08:07:05 2007 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 May 2007 08:07:05 -0500 Subject: Using existing mpich for new built In-Reply-To: <804ab5d40705032314q31d53fd4gfa107831f8a06fc@mail.gmail.com> References: <804ab5d40705032314q31d53fd4gfa107831f8a06fc@mail.gmail.com> Message-ID: When you rebuild, use the same option, --download-mpich=1, and it will do the correct thing. Matt On 5/4/07, Ben Tay wrote: > Hi, > > I just compiled a shared version of PETSc. I realised that I had forgotten to > add --download-hypre=1 to the configure, so I need to rebuild PETSc. > Initially I had used --download-mpich=1 to build mpich. > > Before the "make all test", I got: > > ... > > MPI: > Includes: > ['/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/include'] > Library: > ['/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/lib/libmpich.a', > 'libnsl.a', 'librt.a'] > > ... > > I tried to "reuse" this library for the new build, but it failed. It says > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > --------------------------------------------------------------------------------------- > Shared libraries cannot be built using MPI provided. > Either rebuild with --with-shared=0 or rebuild MPI with shared library > support > > My options are > --with-mpi-include=/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/include > --with-mpi-lib=/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/lib/libmpich.a > > I also changed libmpich.a to libmpich.so, but the same error occurs. Is there > any way I can use the mpich which I built earlier? > > Thanks. -- The government saving money is like me spilling beer. It happens, but never on purpose. 
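A minimal sketch of the rebuild Matt describes, assuming the same petsc-2.3.2-p8 source tree and the ./config/configure.py invocation style shown later in this thread. The --with-shared=1 flag and the absence of other options are assumptions; the option set should match whatever was used for the original shared build, with --download-hypre=1 added:

  # re-run configure with the same --download-mpich=1 plus the new hypre option, then rebuild
  ./config/configure.py --with-shared=1 --download-mpich=1 --download-hypre=1
  make all test

Because --download-mpich=1 is given again, configure manages the downloaded MPICH itself ("does the correct thing"), so no --with-mpi-include/--with-mpi-lib options are needed.
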
From bsmith at mcs.anl.gov Fri May 4 14:18:37 2007 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 4 May 2007 14:18:37 -0500 (CDT) Subject: Looking to hire more PETSc team developers Message-ID: PETSc Users, We are looking to hire one or more PETSc developers who would work on PETSc and one or two large application codes that use PETSc. So experience with PETSc and C or C++ programming would be a must. There would be a possibility of working at Argonne as an Argonne employee or possibly with a sub-contract. If you are interested, please let me know directly at bsmith at mcs.anl.gov and send a CV and some indication of your experience with PETSc. Thanks Barry

From zonexo at gmail.com Sun May 6 22:08:35 2007 From: zonexo at gmail.com (Ben Tay) Date: Mon, 7 May 2007 11:08:35 +0800 Subject: Using existing mpich for new built In-Reply-To: References: <804ab5d40705032314q31d53fd4gfa107831f8a06fc@mail.gmail.com> Message-ID: <804ab5d40705062008s7db17224u7996ee7d54446c05@mail.gmail.com> Hi, In that case, can I use this mpich library for other builds of PETSc on the same system? For example, I have 2 builds, one with an external solver like hypre and another without. Can I use the same mpich build for the 2nd PETSc after the 1st PETSc library is compiled? Similarly, can I reuse the blas/lapack compiled for the 1st build? What are the options? Thank you. On 5/4/07, Matthew Knepley wrote: > > When you rebuild, use the same option, --download-mpich=1, and it will do > the > correct thing. > > Matt > > On 5/4/07, Ben Tay wrote: > > Hi, > > > > I just compiled a shared version of PETSc. I realised that I had > forgotten to > > add --download-hypre=1 to the configure, so I need to rebuild > PETSc. > > Initially I had used --download-mpich=1 to build mpich. > > > > Before the "make all test", I got: > > > > ... > > > > MPI: > > Includes: > > ['/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/include'] > > Library: > > ['/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/lib/libmpich.a', > > 'libnsl.a', 'librt.a'] > > > > ... > > > > I tried to "reuse" this library for the new build, but it failed. It says > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > > --------------------------------------------------------------------------------------- > > Shared libraries cannot be built using MPI provided. > > Either rebuild with --with-shared=0 or rebuild MPI with shared library > > support > > > > My options are > > --with-mpi-include=/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/include > > --with-mpi-lib=/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3/lib/libmpich.a > > > > I also changed libmpich.a to libmpich.so, but the same error occurs. Is > there > > any way I can use the mpich which I built earlier? > > > > Thanks. > > > -- > The government saving money is like me spilling beer. It happens, but > never on purpose. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Sun May 6 23:07:34 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 6 May 2007 23:07:34 -0500 (CDT) Subject: Using existing mpich for new built In-Reply-To: <804ab5d40705062008s7db17224u7996ee7d54446c05@mail.gmail.com> References: <804ab5d40705032314q31d53fd4gfa107831f8a06fc@mail.gmail.com> <804ab5d40705062008s7db17224u7996ee7d54446c05@mail.gmail.com> Message-ID: On Mon, 7 May 2007, Ben Tay wrote: > In that case, can I use the this mpich library for other builds of PETSc on > the same system? For e.g., I have 2 builts, 1 with external solver like > hypre, another without. Can I use the same mpich built for the 2nd PETSc > after I had the 1st PETSc library compiled? > > Similarly, can I reuse the blas/lapack compiled for the 1st built? yeah - you could - but why bother? It just complicates things. I guess the reason you've used --download-mpich=1 [instead of installing mpich separately] is to keep the install process simple. So just do the same thing for the second install as well. ./config/configure.py --download-mpich=1 --download-f-blas-lapack=1 PETSC_ARCH=linux-nohypre ./config/configure.py --download-mpich=1 --download-f-blas-lapack=1 --download-hypre=1 PETSC_ARCH=linux-hypre Alternatively - if you don't want multiple copies of each of these packages, then install them first [separately] and then specify them to PETSc. ./config/configure.py --with-mpi-dir=/home/balay/soft/mpich --with-blas-lapack-dir=/home/balay/soft/blaslapack PETSC_ARCH=linux-nohypre ./config/configure.py --with-mpi-dir=/home/balay/soft/mpich --with-blas-lapack-dir=/home/balay/soft/fblaslapack --download-hypre=1 PETSC_ARCH=linux-hypre > What are the options? If you wish to reuse the install of externalpackages from one build over to the next one, you can do the following. [However you should keep track of this dependency when you attempt to rebuild stuff] ./config/configure.py --download-mpich=1 --download-f-blas-lapack=1 PETSC_ARCH=linux-nohypre ./config/configure.py --with-mpi-dir=`pwd`/externalpackages/mpich2-1.0.4p1/linux-nohypre --with-blas-lapack-dir=`pwd`/externalpackages/fblaslapack/linux-nohypre --download-hypre=1 PETSC_ARCH=linux-hypre Satish From nicolas.bathfield at chalmers.se Mon May 7 09:50:56 2007 From: nicolas.bathfield at chalmers.se (Nicolas Bathfield) Date: Mon, 7 May 2007 16:50:56 +0200 (CEST) Subject: HYPRE with multiple variables In-Reply-To: References: <26157.193.183.3.2.1176453900.squirrel@webmail.chalmers.se> <9329.193.183.3.2.1176464453.squirrel@webmail.chalmers.se> <8614E121-37D8-4AF3-A67A-A142D27B7B62@student.uu.se> <35010.129.16.81.46.1177515489.squirrel@webmail.chalmers.se> <462FBDCB.3010309@llnl.gov> Message-ID: <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> Hi Allison, Barry, and PETSc users, The nodal relaxation leads to a nice solution! Thanks a lot for your help on this! Even though I get a satisfactory result, I can not explain why hypre is performing what looks like several outer loops (see below for an abstract of the output from -pc_hypre_boomeramg_print_statistics). Here is the list of options I used: -pc_hypre_boomeramg_tol 1e-5 -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_print_statistics -pc_hypre_boomeramg_grid_sweeps_all 20 -pc_hypre_boomeramg_max_iter 6 -pc_hypre_boomeramg_relax_weight_all 0.6 -pc_hypre_boomeramg_outer_relax_weight_all 0.6 -pc_hypre_boomeramg_nodal_coarsen 1 I would expect the preconditioner to be executed once only (in my case this includes 6 V cycles). 
As you can see from the print_statistic option, boomeramg is exectuted several times (many more times than displyed here, it was too long to copy everything). Is this normal? If yes, why is it so? Best regards, Nicolas BoomerAMG SOLVER PARAMETERS: Maximum number of cycles: 6 Stopping Tolerance: 1.000000e-05 Cycle type (1 = V, 2 = W, etc.): 1 Relaxation Parameters: Visiting Grid: down up coarse Visiting Grid: down up coarse Number of partial sweeps: 20 20 1 Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 Point types, partial sweeps (1=C, -1=F): Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 Coarsest grid: 0 Relaxation Weight 0.600000 level 0 Outer relaxation weight 0.600000 level 0 Output flag (print_level): 3 AMG SOLUTION INFO: relative residual factor residual -------- ------ -------- Initial 4.430431e-03 1.000000e+00 Cycle 1 4.540275e-03 1.024793 1.024793e+00 Cycle 2 4.539148e-03 0.999752 1.024539e+00 Cycle 3 4.620010e-03 1.017814 1.042790e+00 Cycle 4 5.196532e-03 1.124788 1.172918e+00 Cycle 5 6.243043e-03 1.201386 1.409128e+00 Cycle 6 7.310008e-03 1.170905 1.649954e+00 ============================================== NOTE: Convergence tolerance was not achieved within the allowed 6 V-cycles ============================================== Average Convergence Factor = 1.087039 Complexity: grid = 1.000000 operator = 1.000000 cycle = 1.000000 BoomerAMG SOLVER PARAMETERS: Maximum number of cycles: 6 Stopping Tolerance: 1.000000e-05 Cycle type (1 = V, 2 = W, etc.): 1 Relaxation Parameters: Visiting Grid: down up coarse Visiting Grid: down up coarse Number of partial sweeps: 20 20 1 Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 Point types, partial sweeps (1=C, -1=F): Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 Coarsest grid: 0 Relaxation Weight 0.600000 level 0 Outer relaxation weight 0.600000 level 0 Output flag (print_level): 3 AMG SOLUTION INFO: relative residual factor residual -------- ------ -------- Initial 7.914114e-03 1.000000e+00 Cycle 1 8.320104e-03 1.051300 1.051300e+00 Cycle 2 7.767803e-03 0.933618 9.815126e-01 Cycle 3 6.570752e-03 0.845896 8.302575e-01 Cycle 4 5.836166e-03 0.888204 7.374377e-01 Cycle 5 6.593214e-03 1.129717 8.330956e-01 Cycle 6 8.191117e-03 1.242356 1.035001e+00 ============================================== NOTE: Convergence tolerance was not achieved within the allowed 6 V-cycles ============================================== Average Convergence Factor = 1.005750 Complexity: grid = 1.000000 operator = 1.000000 cycle = 1.000000 BoomerAMG SOLVER PARAMETERS: Maximum number of cycles: 6 Stopping Tolerance: 1.000000e-05 Cycle type (1 = V, 2 = W, etc.): 1 Relaxation Parameters: Visiting Grid: down up coarse Visiting Grid: down up coarse Number of partial sweeps: 20 20 1 Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 Point types, partial sweeps (1=C, -1=F): Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 Coarsest grid: 0 Relaxation Weight 0.600000 level 0 Outer relaxation weight 0.600000 
level 0 Output flag (print_level): 3 AMG SOLUTION INFO: relative residual factor residual -------- ------ -------- Initial 5.553016e-03 1.000000e+00 Cycle 1 6.417482e-03 1.155675 1.155675e+00 Cycle 2 7.101926e-03 1.106653 1.278931e+00 Cycle 3 7.077471e-03 0.996556 1.274527e+00 Cycle 4 6.382835e-03 0.901853 1.149436e+00 Cycle 5 5.392392e-03 0.844827 9.710744e-01 Cycle 6 4.674173e-03 0.866809 8.417359e-01 ============================================== NOTE: Convergence tolerance was not achieved within the allowed 6 V-cycles ============================================== Average Convergence Factor = 0.971694 Complexity: grid = 1.000000 operator = 1.000000 cycle = 1.000000 BoomerAMG SOLVER PARAMETERS: Maximum number of cycles: 6 Stopping Tolerance: 1.000000e-05 Cycle type (1 = V, 2 = W, etc.): 1 Relaxation Parameters: Visiting Grid: down up coarse Visiting Grid: down up coarse Number of partial sweeps: 20 20 1 Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 Point types, partial sweeps (1=C, -1=F): Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 Coarsest grid: 0 Relaxation Weight 0.600000 level 0 Outer relaxation weight 0.600000 level 0 Output flag (print_level): 3 AMG SOLUTION INFO: relative residual factor residual -------- ------ -------- Initial 3.846663e-03 1.000000e+00 Cycle 1 4.136662e-03 1.075390 1.075390e+00 Cycle 2 4.463861e-03 1.079097 1.160450e+00 Cycle 3 4.302262e-03 0.963798 1.118440e+00 Cycle 4 4.175328e-03 0.970496 1.085441e+00 Cycle 5 4.735871e-03 1.134251 1.231163e+00 Cycle 6 5.809054e-03 1.226607 1.510154e+00 ============================================== NOTE: Convergence tolerance was not achieved within the allowed 6 V-cycles ============================================== Average Convergence Factor = 1.071117 Complexity: grid = 1.000000 operator = 1.000000 cycle = 1.000000 > > Nicolas, > > I have added support for both 1 and/or 2 to petsc-dev > http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html > > The two new options are -pc_hypre_boomeramg_nodal_coarsen and > -pc_hypre_boomeramg_nodal_relaxation argument n indicates the levels which the SmoothNumLevels() sets. > > I have not tested the code so please let me know what problems you have. > > Allison, > > Thank you very much for the clarifications. > > Barry > > On Wed, 25 Apr 2007, Allison Baker wrote: > >> Hi Barry and Nicolas, >> >> To clarify, >> >> HYPRE_BoomerAMGSetNumFunctions(solver, int num_functions) tells AMG to >> solve a >> system of equations with the specified number of functions/unknowns. The >> default AMG scheme to solve a PDE system is the "unknown" approach. (The >> coarsening and interpolation are determined by looking at each >> unknown/function independently - therefore you can imagine that the >> coarse >> grids are generally not the same for each variable. This approach >> generally >> works well unless you have strong coupling between unknowns.) >> >> HYPRE_BOOMERAMGSetNodal(solver, int nodal ) tells AMG to coarsen such >> that >> each variable has the same coarse grid - sometimes this is more >> "physical" for >> a particular problem. The value chosen here for nodal determines how >> strength >> of connection is determined between the coupled system. I suggest >> setting >> nodal = 1, which uses a Frobenius norm. This does NOT tell AMG to use >> nodal >> relaxation. 
>> >> If you want to use nodal relaxation in hypre there are two choices: >> >> (1) If you call HYPRE_BOOMERAMGSetNodal, then you can additionally do >> nodal >> relaxation via the schwarz smoother option in hypre. I did not >> implement this >> in the Petsc interface, but it could be done easy enough. The following >> four >> functions need to be called: >> >> HYPRE_BoomerAMGSetSmoothType(solver, 6); >> HYPRE_BoomerAMGSetDomainType(solver, 1); >> HYPRE_BoomerAMGSetOverlap(solver, 0); >> HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); (Set >> num_levels >> to number of levels on which you want nodal smoothing, i.e. 1=just the >> fine >> grid, 2= fine grid and the grid below, etc. I find that doing nodal >> relaxation on just the finest level is generally sufficient.) Note that >> the >> interpolation scheme used will be the same as in the unknown approach - >> so >> this is what we call a hybrid systems method. >> >> (2) You can do both nodal smoothing and a nodal interpolation scheme. >> While >> this is currently implemented in 2.0.0, it is not advertised (i.e., >> mentioned >> at all in the user's manual) because it is not yet implemented very >> efficiently (the fine grid matrix is converted to a block matrix - and >> both >> are stored), and we have not found it to be as effective as advertised >> elsewhere (this is an area of current research for us)..... If you want >> to try >> it anyway, let me know and I will provide more info. >> >> Hope this helps, >> Allison >> >> >> Barry Smith wrote: >> > Nicolas, >> > >> > On Wed, 25 Apr 2007, Nicolas Bathfield wrote: >> > >> > >> > > Dear Barry, >> > > >> > > Using MatSetBlockSize(A,5) improved my results greatly. Boomemramg >> is now >> > > solving the system of equations. >> > > >> > >> > Good >> > >> > >> > > Still, the equations I solve are coupled, and my discretization >> scheme is >> > > meant for a non-segregated solver. As a consequence (I believe), >> boomeramg >> > > still diverges. >> > > >> > >> > How can "Boomeramg be now solving the system of equations" but also >> > diverge? I am so confused. >> > >> > >> > > I would therefore like to use the nodal relaxation in >> > > boomeramg (the hypre command is HYPRE_BOOMERAMGSetNodal) in order to >> > > couple the coarse grid choice for all my variables. >> > > >> > >> > I can add this this afternoon. >> > >> > I have to admit I do not understand the difference between >> > HYPRE_BOOMERAMGSetNodal() and hypre_BoomerAMGSetNumFunctions(). Do >> you? >> > >> > Barry >> > >> > > How can I achieve this from PETSc? >> > > >> > > Best regards, >> > > >> > > Nicolas >> > > >> > > >> > > > From PETSc MPIAIJ matrices you need to set the block size of the >> > > > matrix >> > > > with MatSetBlockSize(A,5) after you have called MatSetType() or >> > > > MatCreateMPIAIJ(). Then HYPRE_BoomerAMGSetNumFunctions() is >> > > > automatically >> > > > called by PETSc. >> > > > >> > > > Barry >> > > > >> > > > The reason this is done this way instead of as >> > > > -pc_hypre_boomeramg_block_size is the idea that hypre will >> use the >> > > > properties of the matrix it is given in building the >> preconditioner so >> > > > the user does not have to pass those properties in seperately >> directly >> > > > to hypre. >> > > > >> > > > >> > > > On Fri, 13 Apr 2007, Shaman Mahmoudi wrote: >> > > > >> > > > >> > > > > Hi, >> > > > > >> > > > > int HYPRE_BoomerAMGSetNumFunctions (.....) >> > > > > >> > > > > sets the size of the system of PDEs. 
>> > > > > >> > > > > With best regards, Shaman Mahmoudi >> > > > > >> > > > > On Apr 13, 2007, at 2:04 PM, Shaman Mahmoudi wrote: >> > > > > >> > > > > >> > > > > > Hi Nicolas, >> > > > > > >> > > > > > You are right. hypre has changed a lot since the version I >> used. >> > > > > > >> > > > > > I found this interesting information: >> > > > > > >> > > > > > int HYPRE_BOOMERAMGSetNodal(....) >> > > > > > >> > > > > > Sets whether to use the nodal systems version. Default is 0. >> > > > > > >> > > > > > Then information about smoothers: >> > > > > > >> > > > > > One interesting thing there is this, >> > > > > > >> > > > > > HYPRE_BoomerAMGSetDomainType(....) >> > > > > > >> > > > > > 0 - each point is a domain (default) >> > > > > > 1 each node is a domain (only of interest in systems AMG) >> > > > > > 2 .... >> > > > > > >> > > > > > I could not find how you define the nodal displacement >> ordering. But >> > > > > > >> > > > > it >> > > > > >> > > > > > should be there somewhere. >> > > > > > >> > > > > > I read the reference manual for hypre 2.0 >> > > > > > >> > > > > > With best regards, Shaman Mahmoudi >> > > > > > >> > > > > > >> > > > > > On Apr 13, 2007, at 1:40 PM, Nicolas Bathfield wrote: >> > > > > > >> > > > > > >> > > > > > > Dear Shaman, >> > > > > > > >> > > > > > > As far as I could understand, there is a BoomerAMG?s systems >> AMG >> > > > > > > >> > > > > version >> > > > > >> > > > > > > available. This seems to be exactly what I am looking for, >> but I >> > > > > > > >> > > > > just >> > > > > >> > > > > > > don't know how to access it, either through PETSc or >> directly. >> > > > > > > >> > > > > > > Best regards, >> > > > > > > >> > > > > > > Nicolas >> > > > > > > >> > > > > > > >> > > > > > > > Hi, >> > > > > > > > >> > > > > > > > You want to exploit the structure of the model? >> > > > > > > > As far as I know, boomeramg can not treat a set of rows or >> > > > > > > > blocks >> > > > > > > > >> > > > > as >> > > > > >> > > > > > > > a molecule, a so called block-smoother? >> > > > > > > > ML 2.0 smoothed aggregation does support it. >> > > > > > > > >> > > > > > > > With best regards, Shaman Mahmoudi >> > > > > > > > >> > > > > > > > On Apr 13, 2007, at 10:45 AM, Nicolas Bathfield wrote: >> > > > > > > > >> > > > > > > > >> > > > > > > > > Hi, >> > > > > > > > > >> > > > > > > > > I am solving the Navier-stokes equations and try to use >> Hypre >> > > > > > > > > as >> > > > > > > > > preconditioner. >> > > > > > > > > Until now, I used PETSc as non-segregated solver and it >> worked >> > > > > > > > > perfectly. >> > > > > > > > > Things got worse when I decided to use Boomeramg >> (Hypre). >> > > > > > > > > As I solve a system of PDEs, each cell is represented >> by 5 >> > > > > > > > > rows >> > > > > > > > > >> > > > > in my >> > > > > >> > > > > > > > > matrix (I solve for 5 variables). PETSc handles that >> without >> > > > > > > > > >> > > > > problem >> > > > > >> > > > > > > > > apparently, but the coarsening scheme of Boomeramg needs >> more >> > > > > > > > > >> > > > > input in >> > > > > >> > > > > > > > > order to work properly. >> > > > > > > > > >> > > > > > > > > Is there an option in PETSc to tell HYPRE that we are >> dealing >> > > > > > > > > >> > > > > with a >> > > > > >> > > > > > > > > system of PDEs? (something like: >> -pc_hypre_boomeramg_...) >> > > > > > > > > >> > > > > > > > > >> > > > > > > > > Thanks for your help. 
>> > > > > > > > > >> > > > > > > > > Best regards, >> > > > > > > > > >> > > > > > > > > Nicolas >> > > > > > > > > >> > > > > > > > > >> > > > > > > > > -- >> > > > > > > > > Nicolas BATHFIELD >> > > > > > > > > Chalmers University of Technology >> > > > > > > > > Shipping and Marine Technology >> > > > > > > > > phone: +46 (0)31 772 1476 >> > > > > > > > > fax: +46 (0)31 772 3699 >> > > > > > > > > >> > > > > > > > > >> > > > > > > > >> > > > > > > -- >> > > > > > > Nicolas BATHFIELD >> > > > > > > Chalmers University of Technology >> > > > > > > Shipping and Marine Technology >> > > > > > > phone: +46 (0)31 772 1476 >> > > > > > > fax: +46 (0)31 772 3699 >> > > > > > > >> > > > > > > >> > > >> > > >> > >> > >> >> > > -- Nicolas BATHFIELD Chalmers University of Technology Shipping and Marine Technology phone: +46 (0)31 772 1476 fax: +46 (0)31 772 3699 From bsmith at mcs.anl.gov Mon May 7 10:28:03 2007 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 May 2007 10:28:03 -0500 (CDT) Subject: HYPRE with multiple variables In-Reply-To: <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> References: <26157.193.183.3.2.1176453900.squirrel@webmail.chalmers.se> <9329.193.183.3.2.1176464453.squirrel@webmail.chalmers.se> <8614E121-37D8-4AF3-A67A-A142D27B7B62@student.uu.se> <35010.129.16.81.46.1177515489.squirrel@webmail.chalmers.se> <462FBDCB.3010309@llnl.gov> <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> Message-ID: > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 4.430431e-03 1.000000e+00 > Cycle 1 4.540275e-03 1.024793 1.024793e+00 > Cycle 2 4.539148e-03 0.999752 1.024539e+00 > Cycle 3 4.620010e-03 1.017814 1.042790e+00 > Cycle 4 5.196532e-03 1.124788 1.172918e+00 > Cycle 5 6.243043e-03 1.201386 1.409128e+00 > Cycle 6 7.310008e-03 1.170905 1.649954e+00 > Hmmm. This is not converging. > Even though I get a satisfactory result, I can not explain why hypre is > performing what looks like several outer loops (see below for an abstract > of the output from -pc_hypre_boomeramg_print_statistics). What are you using for PETSc KSP? Unless you use -ksp_max_it 1 or -ksp_type preonly PETSc will run the Krylov method until convergence. If you run with -ksp_monitor you can watch the residuals and -ksp_view for the solver options. Barry On Mon, 7 May 2007, Nicolas Bathfield wrote: > Hi Allison, Barry, and PETSc users, > > The nodal relaxation leads to a nice solution! Thanks a lot for your help > on this! > > Even though I get a satisfactory result, I can not explain why hypre is > performing what looks like several outer loops (see below for an abstract > of the output from -pc_hypre_boomeramg_print_statistics). > > Here is the list of options I used: > > -pc_hypre_boomeramg_tol 1e-5 > -pc_type hypre -pc_hypre_type boomeramg > -pc_hypre_boomeramg_print_statistics > -pc_hypre_boomeramg_grid_sweeps_all 20 > -pc_hypre_boomeramg_max_iter 6 > -pc_hypre_boomeramg_relax_weight_all 0.6 > -pc_hypre_boomeramg_outer_relax_weight_all 0.6 > -pc_hypre_boomeramg_nodal_coarsen 1 > > I would expect the preconditioner to be executed once only (in my case > this includes 6 V cycles). > As you can see from the print_statistic option, boomeramg is exectuted > several times (many more times than displyed here, it was too long to copy > everything). Is this normal? If yes, why is it so? 
> > Best regards, > > Nicolas > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 4.430431e-03 1.000000e+00 > Cycle 1 4.540275e-03 1.024793 1.024793e+00 > Cycle 2 4.539148e-03 0.999752 1.024539e+00 > Cycle 3 4.620010e-03 1.017814 1.042790e+00 > Cycle 4 5.196532e-03 1.124788 1.172918e+00 > Cycle 5 6.243043e-03 1.201386 1.409128e+00 > Cycle 6 7.310008e-03 1.170905 1.649954e+00 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 1.087039 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > > > > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 7.914114e-03 1.000000e+00 > Cycle 1 8.320104e-03 1.051300 1.051300e+00 > Cycle 2 7.767803e-03 0.933618 9.815126e-01 > Cycle 3 6.570752e-03 0.845896 8.302575e-01 > Cycle 4 5.836166e-03 0.888204 7.374377e-01 > Cycle 5 6.593214e-03 1.129717 8.330956e-01 > Cycle 6 8.191117e-03 1.242356 1.035001e+00 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 1.005750 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > > > > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest 
grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 5.553016e-03 1.000000e+00 > Cycle 1 6.417482e-03 1.155675 1.155675e+00 > Cycle 2 7.101926e-03 1.106653 1.278931e+00 > Cycle 3 7.077471e-03 0.996556 1.274527e+00 > Cycle 4 6.382835e-03 0.901853 1.149436e+00 > Cycle 5 5.392392e-03 0.844827 9.710744e-01 > Cycle 6 4.674173e-03 0.866809 8.417359e-01 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 0.971694 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > > > > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 3.846663e-03 1.000000e+00 > Cycle 1 4.136662e-03 1.075390 1.075390e+00 > Cycle 2 4.463861e-03 1.079097 1.160450e+00 > Cycle 3 4.302262e-03 0.963798 1.118440e+00 > Cycle 4 4.175328e-03 0.970496 1.085441e+00 > Cycle 5 4.735871e-03 1.134251 1.231163e+00 > Cycle 6 5.809054e-03 1.226607 1.510154e+00 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 1.071117 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > > > > > Nicolas, > > > > I have added support for both 1 and/or 2 to petsc-dev > > http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html > > > > The two new options are -pc_hypre_boomeramg_nodal_coarsen and > > -pc_hypre_boomeramg_nodal_relaxation > argument n indicates the levels which the SmoothNumLevels() sets. > > > > I have not tested the code so please let me know what problems you have. > > > > Allison, > > > > Thank you very much for the clarifications. > > > > Barry > > > > On Wed, 25 Apr 2007, Allison Baker wrote: > > > >> Hi Barry and Nicolas, > >> > >> To clarify, > >> > >> HYPRE_BoomerAMGSetNumFunctions(solver, int num_functions) tells AMG to > >> solve a > >> system of equations with the specified number of functions/unknowns. The > >> default AMG scheme to solve a PDE system is the "unknown" approach. (The > >> coarsening and interpolation are determined by looking at each > >> unknown/function independently - therefore you can imagine that the > >> coarse > >> grids are generally not the same for each variable. This approach > >> generally > >> works well unless you have strong coupling between unknowns.) 
> >> > >> HYPRE_BOOMERAMGSetNodal(solver, int nodal ) tells AMG to coarsen such > >> that > >> each variable has the same coarse grid - sometimes this is more > >> "physical" for > >> a particular problem. The value chosen here for nodal determines how > >> strength > >> of connection is determined between the coupled system. I suggest > >> setting > >> nodal = 1, which uses a Frobenius norm. This does NOT tell AMG to use > >> nodal > >> relaxation. > >> > >> If you want to use nodal relaxation in hypre there are two choices: > >> > >> (1) If you call HYPRE_BOOMERAMGSetNodal, then you can additionally do > >> nodal > >> relaxation via the schwarz smoother option in hypre. I did not > >> implement this > >> in the Petsc interface, but it could be done easy enough. The following > >> four > >> functions need to be called: > >> > >> HYPRE_BoomerAMGSetSmoothType(solver, 6); > >> HYPRE_BoomerAMGSetDomainType(solver, 1); > >> HYPRE_BoomerAMGSetOverlap(solver, 0); > >> HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); (Set > >> num_levels > >> to number of levels on which you want nodal smoothing, i.e. 1=just the > >> fine > >> grid, 2= fine grid and the grid below, etc. I find that doing nodal > >> relaxation on just the finest level is generally sufficient.) Note that > >> the > >> interpolation scheme used will be the same as in the unknown approach - > >> so > >> this is what we call a hybrid systems method. > >> > >> (2) You can do both nodal smoothing and a nodal interpolation scheme. > >> While > >> this is currently implemented in 2.0.0, it is not advertised (i.e., > >> mentioned > >> at all in the user's manual) because it is not yet implemented very > >> efficiently (the fine grid matrix is converted to a block matrix - and > >> both > >> are stored), and we have not found it to be as effective as advertised > >> elsewhere (this is an area of current research for us)..... If you want > >> to try > >> it anyway, let me know and I will provide more info. > >> > >> Hope this helps, > >> Allison > >> > >> > >> Barry Smith wrote: > >> > Nicolas, > >> > > >> > On Wed, 25 Apr 2007, Nicolas Bathfield wrote: > >> > > >> > > >> > > Dear Barry, > >> > > > >> > > Using MatSetBlockSize(A,5) improved my results greatly. Boomemramg > >> is now > >> > > solving the system of equations. > >> > > > >> > > >> > Good > >> > > >> > > >> > > Still, the equations I solve are coupled, and my discretization > >> scheme is > >> > > meant for a non-segregated solver. As a consequence (I believe), > >> boomeramg > >> > > still diverges. > >> > > > >> > > >> > How can "Boomeramg be now solving the system of equations" but also > >> > diverge? I am so confused. > >> > > >> > > >> > > I would therefore like to use the nodal relaxation in > >> > > boomeramg (the hypre command is HYPRE_BOOMERAMGSetNodal) in order to > >> > > couple the coarse grid choice for all my variables. > >> > > > >> > > >> > I can add this this afternoon. > >> > > >> > I have to admit I do not understand the difference between > >> > HYPRE_BOOMERAMGSetNodal() and hypre_BoomerAMGSetNumFunctions(). Do > >> you? > >> > > >> > Barry > >> > > >> > > How can I achieve this from PETSc? > >> > > > >> > > Best regards, > >> > > > >> > > Nicolas > >> > > > >> > > > >> > > > From PETSc MPIAIJ matrices you need to set the block size of the > >> > > > matrix > >> > > > with MatSetBlockSize(A,5) after you have called MatSetType() or > >> > > > MatCreateMPIAIJ(). Then HYPRE_BoomerAMGSetNumFunctions() is > >> > > > automatically > >> > > > called by PETSc. 
> >> > > > > >> > > > Barry > >> > > > > >> > > > The reason this is done this way instead of as > >> > > > -pc_hypre_boomeramg_block_size is the idea that hypre will > >> use the > >> > > > properties of the matrix it is given in building the > >> preconditioner so > >> > > > the user does not have to pass those properties in seperately > >> directly > >> > > > to hypre. > >> > > > > >> > > > > >> > > > On Fri, 13 Apr 2007, Shaman Mahmoudi wrote: > >> > > > > >> > > > > >> > > > > Hi, > >> > > > > > >> > > > > int HYPRE_BoomerAMGSetNumFunctions (.....) > >> > > > > > >> > > > > sets the size of the system of PDEs. > >> > > > > > >> > > > > With best regards, Shaman Mahmoudi > >> > > > > > >> > > > > On Apr 13, 2007, at 2:04 PM, Shaman Mahmoudi wrote: > >> > > > > > >> > > > > > >> > > > > > Hi Nicolas, > >> > > > > > > >> > > > > > You are right. hypre has changed a lot since the version I > >> used. > >> > > > > > > >> > > > > > I found this interesting information: > >> > > > > > > >> > > > > > int HYPRE_BOOMERAMGSetNodal(....) > >> > > > > > > >> > > > > > Sets whether to use the nodal systems version. Default is 0. > >> > > > > > > >> > > > > > Then information about smoothers: > >> > > > > > > >> > > > > > One interesting thing there is this, > >> > > > > > > >> > > > > > HYPRE_BoomerAMGSetDomainType(....) > >> > > > > > > >> > > > > > 0 - each point is a domain (default) > >> > > > > > 1 each node is a domain (only of interest in systems AMG) > >> > > > > > 2 .... > >> > > > > > > >> > > > > > I could not find how you define the nodal displacement > >> ordering. But > >> > > > > > > >> > > > > it > >> > > > > > >> > > > > > should be there somewhere. > >> > > > > > > >> > > > > > I read the reference manual for hypre 2.0 > >> > > > > > > >> > > > > > With best regards, Shaman Mahmoudi > >> > > > > > > >> > > > > > > >> > > > > > On Apr 13, 2007, at 1:40 PM, Nicolas Bathfield wrote: > >> > > > > > > >> > > > > > > >> > > > > > > Dear Shaman, > >> > > > > > > > >> > > > > > > As far as I could understand, there is a BoomerAMG?s systems > >> AMG > >> > > > > > > > >> > > > > version > >> > > > > > >> > > > > > > available. This seems to be exactly what I am looking for, > >> but I > >> > > > > > > > >> > > > > just > >> > > > > > >> > > > > > > don't know how to access it, either through PETSc or > >> directly. > >> > > > > > > > >> > > > > > > Best regards, > >> > > > > > > > >> > > > > > > Nicolas > >> > > > > > > > >> > > > > > > > >> > > > > > > > Hi, > >> > > > > > > > > >> > > > > > > > You want to exploit the structure of the model? > >> > > > > > > > As far as I know, boomeramg can not treat a set of rows or > >> > > > > > > > blocks > >> > > > > > > > > >> > > > > as > >> > > > > > >> > > > > > > > a molecule, a so called block-smoother? > >> > > > > > > > ML 2.0 smoothed aggregation does support it. > >> > > > > > > > > >> > > > > > > > With best regards, Shaman Mahmoudi > >> > > > > > > > > >> > > > > > > > On Apr 13, 2007, at 10:45 AM, Nicolas Bathfield wrote: > >> > > > > > > > > >> > > > > > > > > >> > > > > > > > > Hi, > >> > > > > > > > > > >> > > > > > > > > I am solving the Navier-stokes equations and try to use > >> Hypre > >> > > > > > > > > as > >> > > > > > > > > preconditioner. > >> > > > > > > > > Until now, I used PETSc as non-segregated solver and it > >> worked > >> > > > > > > > > perfectly. > >> > > > > > > > > Things got worse when I decided to use Boomeramg > >> (Hypre). 
> >> > > > > > > > > As I solve a system of PDEs, each cell is represented > >> by 5 > >> > > > > > > > > rows > >> > > > > > > > > > >> > > > > in my > >> > > > > > >> > > > > > > > > matrix (I solve for 5 variables). PETSc handles that > >> without > >> > > > > > > > > > >> > > > > problem > >> > > > > > >> > > > > > > > > apparently, but the coarsening scheme of Boomeramg needs > >> more > >> > > > > > > > > > >> > > > > input in > >> > > > > > >> > > > > > > > > order to work properly. > >> > > > > > > > > > >> > > > > > > > > Is there an option in PETSc to tell HYPRE that we are > >> dealing > >> > > > > > > > > > >> > > > > with a > >> > > > > > >> > > > > > > > > system of PDEs? (something like: > >> -pc_hypre_boomeramg_...) > >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > Thanks for your help. > >> > > > > > > > > > >> > > > > > > > > Best regards, > >> > > > > > > > > > >> > > > > > > > > Nicolas > >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > -- > >> > > > > > > > > Nicolas BATHFIELD > >> > > > > > > > > Chalmers University of Technology > >> > > > > > > > > Shipping and Marine Technology > >> > > > > > > > > phone: +46 (0)31 772 1476 > >> > > > > > > > > fax: +46 (0)31 772 3699 > >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > >> > > > > > > -- > >> > > > > > > Nicolas BATHFIELD > >> > > > > > > Chalmers University of Technology > >> > > > > > > Shipping and Marine Technology > >> > > > > > > phone: +46 (0)31 772 1476 > >> > > > > > > fax: +46 (0)31 772 3699 > >> > > > > > > > >> > > > > > > > >> > > > >> > > > >> > > >> > > >> > >> > > > > > > > From shma7099 at student.uu.se Mon May 7 11:14:03 2007 From: shma7099 at student.uu.se (Shaman Mahmoudi) Date: Mon, 7 May 2007 18:14:03 +0200 Subject: HYPRE with multiple variables In-Reply-To: <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> References: <26157.193.183.3.2.1176453900.squirrel@webmail.chalmers.se> <9329.193.183.3.2.1176464453.squirrel@webmail.chalmers.se> <8614E121-37D8-4AF3-A67A-A142D27B7B62@student.uu.se> <35010.129.16.81.46.1177515489.squirrel@webmail.chalmers.se> <462FBDCB.3010309@llnl.gov> <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> Message-ID: <0D714C19-6B32-4334-8E21-CF2F266C869F@student.uu.se> Hi, You mentioned that your preconditioner is boomerAMG. What solver are you using? What type of matrix are you trying to solve? Is it SPD and sparse? Either way, if your solver is not AMG and you are using AMG as a preconditioner, I would suggest that you use a stopping criteria for AMG such as a relative residual convergence of say 1e-03 for the preconditioner, and the solvers relative convergence 1e-05 as in your example, and disregard a stopping criteria which involves maximum number of iterations for AMG, set that to something high. Maybe you will get better results that way. In the end, I guess it is a lot of testing and fine-tuning. With best regards, Shaman Mahmoudi On May 7, 2007, at 4:50 PM, Nicolas Bathfield wrote: > Hi Allison, Barry, and PETSc users, > > The nodal relaxation leads to a nice solution! Thanks a lot for > your help > on this! > > Even though I get a satisfactory result, I can not explain why > hypre is > performing what looks like several outer loops (see below for an > abstract > of the output from -pc_hypre_boomeramg_print_statistics). 
> > Here is the list of options I used: > > -pc_hypre_boomeramg_tol 1e-5 > -pc_type hypre -pc_hypre_type boomeramg > -pc_hypre_boomeramg_print_statistics > -pc_hypre_boomeramg_grid_sweeps_all 20 > -pc_hypre_boomeramg_max_iter 6 > -pc_hypre_boomeramg_relax_weight_all 0.6 > -pc_hypre_boomeramg_outer_relax_weight_all 0.6 > -pc_hypre_boomeramg_nodal_coarsen 1 > > I would expect the preconditioner to be executed once only (in my case > this includes 6 V cycles). > As you can see from the print_statistic option, boomeramg is exectuted > several times (many more times than displyed here, it was too long > to copy > everything). Is this normal? If yes, why is it so? > > Best regards, > > Nicolas > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 > -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 > 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 4.430431e-03 1.000000e+00 > Cycle 1 4.540275e-03 1.024793 1.024793e+00 > Cycle 2 4.539148e-03 0.999752 1.024539e+00 > Cycle 3 4.620010e-03 1.017814 1.042790e+00 > Cycle 4 5.196532e-03 1.124788 1.172918e+00 > Cycle 5 6.243043e-03 1.201386 1.409128e+00 > Cycle 6 7.310008e-03 1.170905 1.649954e+00 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 1.087039 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > > > > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 > -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 > 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 7.914114e-03 1.000000e+00 > Cycle 1 8.320104e-03 1.051300 1.051300e+00 > Cycle 2 7.767803e-03 0.933618 9.815126e-01 > Cycle 3 6.570752e-03 0.845896 8.302575e-01 > Cycle 4 5.836166e-03 0.888204 7.374377e-01 > Cycle 5 6.593214e-03 1.129717 8.330956e-01 > Cycle 6 8.191117e-03 1.242356 1.035001e+00 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 1.005750 > > Complexity: grid = 1.000000 > 
operator = 1.000000 > cycle = 1.000000 > > > > > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 > -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 > 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 5.553016e-03 1.000000e+00 > Cycle 1 6.417482e-03 1.155675 1.155675e+00 > Cycle 2 7.101926e-03 1.106653 1.278931e+00 > Cycle 3 7.077471e-03 0.996556 1.274527e+00 > Cycle 4 6.382835e-03 0.901853 1.149436e+00 > Cycle 5 5.392392e-03 0.844827 9.710744e-01 > Cycle 6 4.674173e-03 0.866809 8.417359e-01 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 0.971694 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > > > > > BoomerAMG SOLVER PARAMETERS: > > Maximum number of cycles: 6 > Stopping Tolerance: 1.000000e-05 > Cycle type (1 = V, 2 = W, etc.): 1 > > Relaxation Parameters: > Visiting Grid: down up coarse > Visiting Grid: down up coarse > Number of partial sweeps: 20 20 1 > Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > Point types, partial sweeps (1=C, -1=F): > Pre-CG relaxation (down): 1 -1 1 -1 1 > -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > -1 1 -1 1 -1 > Post-CG relaxation (up): -1 1 -1 1 -1 > 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > 1 -1 1 -1 1 > Coarsest grid: 0 > > Relaxation Weight 0.600000 level 0 > Outer relaxation weight 0.600000 level 0 > Output flag (print_level): 3 > > > AMG SOLUTION INFO: > relative > residual factor residual > -------- ------ -------- > Initial 3.846663e-03 1.000000e+00 > Cycle 1 4.136662e-03 1.075390 1.075390e+00 > Cycle 2 4.463861e-03 1.079097 1.160450e+00 > Cycle 3 4.302262e-03 0.963798 1.118440e+00 > Cycle 4 4.175328e-03 0.970496 1.085441e+00 > Cycle 5 4.735871e-03 1.134251 1.231163e+00 > Cycle 6 5.809054e-03 1.226607 1.510154e+00 > > > ============================================== > NOTE: Convergence tolerance was not achieved > within the allowed 6 V-cycles > ============================================== > > Average Convergence Factor = 1.071117 > > Complexity: grid = 1.000000 > operator = 1.000000 > cycle = 1.000000 > > >> >> Nicolas, >> >> I have added support for both 1 and/or 2 to petsc-dev >> http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html >> >> The two new options are -pc_hypre_boomeramg_nodal_coarsen and >> -pc_hypre_boomeramg_nodal_relaxation > argument n indicates the levels which the SmoothNumLevels() sets. >> >> I have not tested the code so please let me know what problems you >> have. >> >> Allison, >> >> Thank you very much for the clarifications. 
>> >> Barry >> >> On Wed, 25 Apr 2007, Allison Baker wrote: >> >>> Hi Barry and Nicolas, >>> >>> To clarify, >>> >>> HYPRE_BoomerAMGSetNumFunctions(solver, int num_functions) tells >>> AMG to >>> solve a >>> system of equations with the specified number of functions/ >>> unknowns. The >>> default AMG scheme to solve a PDE system is the "unknown" >>> approach. (The >>> coarsening and interpolation are determined by looking at each >>> unknown/function independently - therefore you can imagine that the >>> coarse >>> grids are generally not the same for each variable. This approach >>> generally >>> works well unless you have strong coupling between unknowns.) >>> >>> HYPRE_BOOMERAMGSetNodal(solver, int nodal ) tells AMG to coarsen >>> such >>> that >>> each variable has the same coarse grid - sometimes this is more >>> "physical" for >>> a particular problem. The value chosen here for nodal determines how >>> strength >>> of connection is determined between the coupled system. I suggest >>> setting >>> nodal = 1, which uses a Frobenius norm. This does NOT tell AMG >>> to use >>> nodal >>> relaxation. >>> >>> If you want to use nodal relaxation in hypre there are two choices: >>> >>> (1) If you call HYPRE_BOOMERAMGSetNodal, then you can >>> additionally do >>> nodal >>> relaxation via the schwarz smoother option in hypre. I did not >>> implement this >>> in the Petsc interface, but it could be done easy enough. The >>> following >>> four >>> functions need to be called: >>> >>> HYPRE_BoomerAMGSetSmoothType(solver, 6); >>> HYPRE_BoomerAMGSetDomainType(solver, 1); >>> HYPRE_BoomerAMGSetOverlap(solver, 0); >>> HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); (Set >>> num_levels >>> to number of levels on which you want nodal smoothing, i.e. >>> 1=just the >>> fine >>> grid, 2= fine grid and the grid below, etc. I find that doing nodal >>> relaxation on just the finest level is generally sufficient.) >>> Note that >>> the >>> interpolation scheme used will be the same as in the unknown >>> approach - >>> so >>> this is what we call a hybrid systems method. >>> >>> (2) You can do both nodal smoothing and a nodal interpolation >>> scheme. >>> While >>> this is currently implemented in 2.0.0, it is not advertised (i.e., >>> mentioned >>> at all in the user's manual) because it is not yet implemented very >>> efficiently (the fine grid matrix is converted to a block matrix >>> - and >>> both >>> are stored), and we have not found it to be as effective as >>> advertised >>> elsewhere (this is an area of current research for us)..... If >>> you want >>> to try >>> it anyway, let me know and I will provide more info. >>> >>> Hope this helps, >>> Allison >>> >>> >>> Barry Smith wrote: >>>> Nicolas, >>>> >>>> On Wed, 25 Apr 2007, Nicolas Bathfield wrote: >>>> >>>> >>>>> Dear Barry, >>>>> >>>>> Using MatSetBlockSize(A,5) improved my results greatly. Boomemramg >>> is now >>>>> solving the system of equations. >>>>> >>>> >>>> Good >>>> >>>> >>>>> Still, the equations I solve are coupled, and my discretization >>> scheme is >>>>> meant for a non-segregated solver. As a consequence (I believe), >>> boomeramg >>>>> still diverges. >>>>> >>>> >>>> How can "Boomeramg be now solving the system of equations" but >>>> also >>>> diverge? I am so confused. >>>> >>>> >>>>> I would therefore like to use the nodal relaxation in >>>>> boomeramg (the hypre command is HYPRE_BOOMERAMGSetNodal) in >>>>> order to >>>>> couple the coarse grid choice for all my variables. >>>>> >>>> >>>> I can add this this afternoon. 
>>>> >>>> I have to admit I do not understand the difference between >>>> HYPRE_BOOMERAMGSetNodal() and hypre_BoomerAMGSetNumFunctions(). Do >>> you? >>>> >>>> Barry >>>> >>>>> How can I achieve this from PETSc? >>>>> >>>>> Best regards, >>>>> >>>>> Nicolas >>>>> >>>>> >>>>>> From PETSc MPIAIJ matrices you need to set the block size of >>>>>> the >>>>>> matrix >>>>>> with MatSetBlockSize(A,5) after you have called MatSetType() or >>>>>> MatCreateMPIAIJ(). Then HYPRE_BoomerAMGSetNumFunctions() is >>>>>> automatically >>>>>> called by PETSc. >>>>>> >>>>>> Barry >>>>>> >>>>>> The reason this is done this way instead of as >>>>>> -pc_hypre_boomeramg_block_size is the idea that hypre will >>> use the >>>>>> properties of the matrix it is given in building the >>> preconditioner so >>>>>> the user does not have to pass those properties in seperately >>> directly >>>>>> to hypre. >>>>>> >>>>>> >>>>>> On Fri, 13 Apr 2007, Shaman Mahmoudi wrote: >>>>>> >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> int HYPRE_BoomerAMGSetNumFunctions (.....) >>>>>>> >>>>>>> sets the size of the system of PDEs. >>>>>>> >>>>>>> With best regards, Shaman Mahmoudi >>>>>>> >>>>>>> On Apr 13, 2007, at 2:04 PM, Shaman Mahmoudi wrote: >>>>>>> >>>>>>> >>>>>>>> Hi Nicolas, >>>>>>>> >>>>>>>> You are right. hypre has changed a lot since the version I >>> used. >>>>>>>> >>>>>>>> I found this interesting information: >>>>>>>> >>>>>>>> int HYPRE_BOOMERAMGSetNodal(....) >>>>>>>> >>>>>>>> Sets whether to use the nodal systems version. Default is 0. >>>>>>>> >>>>>>>> Then information about smoothers: >>>>>>>> >>>>>>>> One interesting thing there is this, >>>>>>>> >>>>>>>> HYPRE_BoomerAMGSetDomainType(....) >>>>>>>> >>>>>>>> 0 - each point is a domain (default) >>>>>>>> 1 each node is a domain (only of interest in systems AMG) >>>>>>>> 2 .... >>>>>>>> >>>>>>>> I could not find how you define the nodal displacement >>> ordering. But >>>>>>>> >>>>>>> it >>>>>>> >>>>>>>> should be there somewhere. >>>>>>>> >>>>>>>> I read the reference manual for hypre 2.0 >>>>>>>> >>>>>>>> With best regards, Shaman Mahmoudi >>>>>>>> >>>>>>>> >>>>>>>> On Apr 13, 2007, at 1:40 PM, Nicolas Bathfield wrote: >>>>>>>> >>>>>>>> >>>>>>>>> Dear Shaman, >>>>>>>>> >>>>>>>>> As far as I could understand, there is a BoomerAMG?s systems >>> AMG >>>>>>>>> >>>>>>> version >>>>>>> >>>>>>>>> available. This seems to be exactly what I am looking for, >>> but I >>>>>>>>> >>>>>>> just >>>>>>> >>>>>>>>> don't know how to access it, either through PETSc or >>> directly. >>>>>>>>> >>>>>>>>> Best regards, >>>>>>>>> >>>>>>>>> Nicolas >>>>>>>>> >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> You want to exploit the structure of the model? >>>>>>>>>> As far as I know, boomeramg can not treat a set of rows or >>>>>>>>>> blocks >>>>>>>>>> >>>>>>> as >>>>>>> >>>>>>>>>> a molecule, a so called block-smoother? >>>>>>>>>> ML 2.0 smoothed aggregation does support it. >>>>>>>>>> >>>>>>>>>> With best regards, Shaman Mahmoudi >>>>>>>>>> >>>>>>>>>> On Apr 13, 2007, at 10:45 AM, Nicolas Bathfield wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> I am solving the Navier-stokes equations and try to use >>> Hypre >>>>>>>>>>> as >>>>>>>>>>> preconditioner. >>>>>>>>>>> Until now, I used PETSc as non-segregated solver and it >>> worked >>>>>>>>>>> perfectly. >>>>>>>>>>> Things got worse when I decided to use Boomeramg >>> (Hypre). 
>>>>>>>>>>> As I solve a system of PDEs, each cell is represented >>> by 5 >>>>>>>>>>> rows >>>>>>>>>>> >>>>>>> in my >>>>>>> >>>>>>>>>>> matrix (I solve for 5 variables). PETSc handles that >>> without >>>>>>>>>>> >>>>>>> problem >>>>>>> >>>>>>>>>>> apparently, but the coarsening scheme of Boomeramg needs >>> more >>>>>>>>>>> >>>>>>> input in >>>>>>> >>>>>>>>>>> order to work properly. >>>>>>>>>>> >>>>>>>>>>> Is there an option in PETSc to tell HYPRE that we are >>> dealing >>>>>>>>>>> >>>>>>> with a >>>>>>> >>>>>>>>>>> system of PDEs? (something like: >>> -pc_hypre_boomeramg_...) >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks for your help. >>>>>>>>>>> >>>>>>>>>>> Best regards, >>>>>>>>>>> >>>>>>>>>>> Nicolas >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Nicolas BATHFIELD >>>>>>>>>>> Chalmers University of Technology >>>>>>>>>>> Shipping and Marine Technology >>>>>>>>>>> phone: +46 (0)31 772 1476 >>>>>>>>>>> fax: +46 (0)31 772 3699 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> -- >>>>>>>>> Nicolas BATHFIELD >>>>>>>>> Chalmers University of Technology >>>>>>>>> Shipping and Marine Technology >>>>>>>>> phone: +46 (0)31 772 1476 >>>>>>>>> fax: +46 (0)31 772 3699 >>>>>>>>> >>>>>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > > -- > Nicolas BATHFIELD > Chalmers University of Technology > Shipping and Marine Technology > phone: +46 (0)31 772 1476 > fax: +46 (0)31 772 3699 > From zonexo at gmail.com Tue May 8 00:20:16 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 8 May 2007 13:20:16 +0800 Subject: lgcc_s not found error Message-ID: <804ab5d40705072220i607e122bt5427731d0036e725@mail.gmail.com> Hi, I've built static PETSc library with mpi/hypre w/o problems. The test e.g. also worked. When I tried to built my own a.out, using -static (fortran), the error is lgcc_s not found. removing it in the make file resulted in a lot of error for mpich2. btw, i do not have root access. 
the problem is that there's some servers in my sch's requires the use of static library to run the a.out, therefore I need to use the "-static" option btw, the error msg are /lsftmp/g0306332/petsc-2.3.2-p8/lib/atlas3-noshared/libpetsc.a(ghome.o)(.text+0x16): In function `PetscGetHomeDirectory': : warning: Using 'getpwuid' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking /lsftmp/g0306332/petsc-2.3.2-p8/lib/atlas3-noshared/libpetsc.a(send.o)(.text+0xafc): In function `SOCKCall_Private': : warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_allocate_exception+0xc8): In function `__cxa_allocate_exception': : undefined reference to `pthread_mutex_unlock' /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_allocate_exception+0xd6): In function `__cxa_allocate_exception': : undefined reference to `pthread_mutex_lock' /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_free_exception+0x90): In function `__cxa_free_exception': : undefined reference to `pthread_mutex_lock' /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_free_exception+0x6c): In function `__cxa_free_exception': : undefined reference to `pthread_mutex_unlock' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x41): In function `MPI_Attr_delete': : undefined reference to `pthread_getspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x67): In function `MPI_Attr_delete': : undefined reference to `pthread_mutex_lock' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x199): In function `MPI_Attr_delete': : undefined reference to `pthread_getspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x257): In function `MPI_Attr_delete': : undefined reference to `pthread_getspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x27d): In function `MPI_Attr_delete': : undefined reference to `pthread_mutex_unlock' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x2b0): In function `MPI_Attr_delete': : undefined reference to `pthread_setspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x30e): In function `MPI_Attr_delete': : undefined reference to `pthread_setspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x337): In function `MPI_Attr_delete': : undefined reference to `pthread_setspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x51): In function `MPI_Attr_get': : undefined reference to `pthread_getspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x7b): In function `MPI_Attr_get': : undefined reference to `pthread_mutex_lock' 
/nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x1d3): In function `MPI_Attr_get': : undefined reference to `pthread_getspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x2a6): In function `MPI_Attr_get': : undefined reference to `pthread_getspecific' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x2d3): In function `MPI_Attr_get': : undefined reference to `pthread_mutex_unlock' /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x31a): In function `MPI_Attr_get': i wonder what can be done. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue May 8 00:37:37 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 8 May 2007 00:37:37 -0500 (CDT) Subject: lgcc_s not found error In-Reply-To: <804ab5d40705072220i607e122bt5427731d0036e725@mail.gmail.com> References: <804ab5d40705072220i607e122bt5427731d0036e725@mail.gmail.com> Message-ID: > i wonder what can be done. Don't use -static. Lots of system libraries can't be used in -static mode - as you've discovered. So you should: - make sure the remote machine has all the basic system libraries [as .so available]. - built PETSc without sharedlibrary options. - Now compile an example - copy it over to the remote machine. - run 'ldd executable' on the remote machine to see if all the required shared libraries get resolved. If they don't - then figure out a workarround. Something that will work for non-system files is to copy the .so files on to the remote machine aswell. [say into /home/foo/lib, and then add this path to LD_LIBRARY_PATH env variable]. Keep trying a workarround until 'ldd executable' is able to resolve all .so files. Satish On Tue, 8 May 2007, Ben Tay wrote: > Hi, > > I've built static PETSc library with mpi/hypre w/o problems. The test e.g. > also worked. > > When I tried to built my own a.out, using -static (fortran), the error is > lgcc_s not found. removing it in the make file resulted in a lot of error > for mpich2. > > btw, i do not have root access. 
the problem is that there's some servers in > my sch's requires the use of static library to run the a.out, therefore I > need to use the "-static" option > > btw, the error msg are > > /lsftmp/g0306332/petsc-2.3.2-p8/lib/atlas3-noshared/libpetsc.a(ghome.o)(.text+0x16): > In function `PetscGetHomeDirectory': > : warning: Using 'getpwuid' in statically linked applications requires at > runtime the shared libraries from the glibc version used for linking > /lsftmp/g0306332/petsc-2.3.2-p8/lib/atlas3-noshared/libpetsc.a(send.o)(.text+0xafc): > In function `SOCKCall_Private': > : warning: Using 'gethostbyname' in statically linked applications requires > at runtime the shared libraries from the glibc version used for linking > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_allocate_exception+0xc8): > In function `__cxa_allocate_exception': > : undefined reference to `pthread_mutex_unlock' > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_allocate_exception+0xd6): > In function `__cxa_allocate_exception': > : undefined reference to `pthread_mutex_lock' > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_free_exception+0x90): > In function `__cxa_free_exception': > : undefined reference to `pthread_mutex_lock' > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_free_exception+0x6c): > In function `__cxa_free_exception': > : undefined reference to `pthread_mutex_unlock' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x41): > In function `MPI_Attr_delete': > : undefined reference to `pthread_getspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x67): > In function `MPI_Attr_delete': > : undefined reference to `pthread_mutex_lock' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x199): > In function `MPI_Attr_delete': > : undefined reference to `pthread_getspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x257): > In function `MPI_Attr_delete': > : undefined reference to `pthread_getspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x27d): > In function `MPI_Attr_delete': > : undefined reference to `pthread_mutex_unlock' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x2b0): > In function `MPI_Attr_delete': > : undefined reference to `pthread_setspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x30e): > In function `MPI_Attr_delete': > : undefined reference to `pthread_setspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_delete.o)(.text+0x337): > In function `MPI_Attr_delete': > : undefined reference to `pthread_setspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x51): > In function `MPI_Attr_get': > : undefined reference to `pthread_getspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x7b): > In function `MPI_Attr_get': > : 
undefined reference to `pthread_mutex_lock' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x1d3): > In function `MPI_Attr_get': > : undefined reference to `pthread_getspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x2a6): > In function `MPI_Attr_get': > : undefined reference to `pthread_getspecific' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x2d3): > In function `MPI_Attr_get': > : undefined reference to `pthread_mutex_unlock' > /nfs/lsftmp/g0306332/petsc-2.3.2-p8/externalpackages/mpich2-1.0.4p1/atlas3-noshared/lib/libmpich.a(attr_get.o)(.text+0x31a): > In function `MPI_Attr_get': > > i wonder what can be done. > > Thanks > From balay at mcs.anl.gov Tue May 8 00:40:00 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 8 May 2007 00:40:00 -0500 (CDT) Subject: lgcc_s not found error In-Reply-To: References: <804ab5d40705072220i607e122bt5427731d0036e725@mail.gmail.com> Message-ID: Also - why can't you just install PETSc on this remote machine? Satish On Tue, 8 May 2007, Satish Balay wrote: > > i wonder what can be done. > > Don't use -static. Lots of system libraries can't be used in -static > mode - as you've discovered. So you should: From balay at mcs.anl.gov Tue May 8 00:55:20 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 8 May 2007 00:55:20 -0500 (CDT) Subject: lgcc_s not found error In-Reply-To: <804ab5d40705072220i607e122bt5427731d0036e725@mail.gmail.com> References: <804ab5d40705072220i607e122bt5427731d0036e725@mail.gmail.com> Message-ID: On Tue, 8 May 2007, Ben Tay wrote: > /lsftmp/g0306332/petsc-2.3.2-p8/lib/atlas3-noshared/libpetsc.a(ghome.o)(.text+0x16): > In function `PetscGetHomeDirectory': > : warning: Using 'getpwuid' in statically linked applications requires at > runtime the shared libraries from the glibc version used for linking > /lsftmp/g0306332/petsc-2.3.2-p8/lib/atlas3-noshared/libpetsc.a(send.o)(.text+0xafc): > In function `SOCKCall_Private': > : warning: Using 'gethostbyname' in statically linked applications requires > at runtime the shared libraries from the glibc version used for linking I guess you could ignore these warnings and see if the binary does not crash at runtime. > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libstdc++.a(eh_alloc.o)(.text.__cxa_allocate_exception+0xc8): > In function `__cxa_allocate_exception': > : undefined reference to `pthread_mutex_unlock' I think you removed -lpthread from the link options. Satish From nicolas.bathfield at chalmers.se Tue May 8 04:32:34 2007 From: nicolas.bathfield at chalmers.se (Nicolas Bathfield) Date: Tue, 8 May 2007 11:32:34 +0200 (CEST) Subject: HYPRE with multiple variables In-Reply-To: <0D714C19-6B32-4334-8E21-CF2F266C869F@student.uu.se> References: <26157.193.183.3.2.1176453900.squirrel@webmail.chalmers.se> <9329.193.183.3.2.1176464453.squirrel@webmail.chalmers.se> <8614E121-37D8-4AF3-A67A-A142D27B7B62@student.uu.se> <35010.129.16.81.46.1177515489.squirrel@webmail.chalmers.se> <462FBDCB.3010309@llnl.gov> <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> <0D714C19-6B32-4334-8E21-CF2F266C869F@student.uu.se> Message-ID: <5720.193.183.3.2.1178616754.squirrel@webmail.chalmers.se> Hi, Boomeramg is used as preconditioner, and I use default gmres to solve the equations. 
I did not expect the preconditioner to be used at every iteration, but I now understand better what's going on. I solve a spase matrix with 5 diagonals. I tried to use boomeramg as a solver only. To achieve this, I had to comment out the command ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE), and I used the option -ksp_preonly. Is it the right way to do? Best regards, Nicolas > Hi, > > You mentioned that your preconditioner is boomerAMG. What solver are > you using? > What type of matrix are you trying to solve? Is it SPD and sparse? > Either way, if your solver is not AMG and you are using AMG as a > preconditioner, I would suggest that you use a stopping criteria for > AMG such as a relative residual convergence of say 1e-03 for the > preconditioner, and the solvers relative convergence 1e-05 as in your > example, and disregard a stopping criteria which involves maximum > number of iterations for AMG, set that to something high. Maybe you > will get better results that way. In the end, I guess it is a lot of > testing and fine-tuning. > > With best regards, Shaman Mahmoudi > > On May 7, 2007, at 4:50 PM, Nicolas Bathfield wrote: > >> Hi Allison, Barry, and PETSc users, >> >> The nodal relaxation leads to a nice solution! Thanks a lot for >> your help >> on this! >> >> Even though I get a satisfactory result, I can not explain why >> hypre is >> performing what looks like several outer loops (see below for an >> abstract >> of the output from -pc_hypre_boomeramg_print_statistics). >> >> Here is the list of options I used: >> >> -pc_hypre_boomeramg_tol 1e-5 >> -pc_type hypre -pc_hypre_type boomeramg >> -pc_hypre_boomeramg_print_statistics >> -pc_hypre_boomeramg_grid_sweeps_all 20 >> -pc_hypre_boomeramg_max_iter 6 >> -pc_hypre_boomeramg_relax_weight_all 0.6 >> -pc_hypre_boomeramg_outer_relax_weight_all 0.6 >> -pc_hypre_boomeramg_nodal_coarsen 1 >> >> I would expect the preconditioner to be executed once only (in my case >> this includes 6 V cycles). >> As you can see from the print_statistic option, boomeramg is exectuted >> several times (many more times than displyed here, it was too long >> to copy >> everything). Is this normal? If yes, why is it so? 
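[Editor's sketch] The run-time form of that choice is -ksp_type preonly (Nicolas's "-ksp_preonly" is presumably shorthand for this); preonly hands the right-hand side to the preconditioner exactly once, which is consistent with having to drop the nonzero-initial-guess call. A minimal sketch of the same setup in code, using the PETSc 2.3.x calling sequences and assuming A, b and x already exist (again, not code from the thread):

    KSP ksp;  PC pc;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
    KSPSetType(ksp, KSPPREONLY);          /* -ksp_type preonly */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCHYPRE);               /* -pc_type hypre    */
    KSPSetFromOptions(ksp);               /* picks up -pc_hypre_type boomeramg,
                                             -pc_hypre_boomeramg_max_iter, -pc_hypre_boomeramg_tol */
    KSPSolve(ksp, b, x);
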
>> >> Best regards, >> >> Nicolas >> >> BoomerAMG SOLVER PARAMETERS: >> >> Maximum number of cycles: 6 >> Stopping Tolerance: 1.000000e-05 >> Cycle type (1 = V, 2 = W, etc.): 1 >> >> Relaxation Parameters: >> Visiting Grid: down up coarse >> Visiting Grid: down up coarse >> Number of partial sweeps: 20 20 1 >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 >> Point types, partial sweeps (1=C, -1=F): >> Pre-CG relaxation (down): 1 -1 1 -1 1 >> -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 >> Post-CG relaxation (up): -1 1 -1 1 -1 >> 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 >> Coarsest grid: 0 >> >> Relaxation Weight 0.600000 level 0 >> Outer relaxation weight 0.600000 level 0 >> Output flag (print_level): 3 >> >> >> AMG SOLUTION INFO: >> relative >> residual factor residual >> -------- ------ -------- >> Initial 4.430431e-03 1.000000e+00 >> Cycle 1 4.540275e-03 1.024793 1.024793e+00 >> Cycle 2 4.539148e-03 0.999752 1.024539e+00 >> Cycle 3 4.620010e-03 1.017814 1.042790e+00 >> Cycle 4 5.196532e-03 1.124788 1.172918e+00 >> Cycle 5 6.243043e-03 1.201386 1.409128e+00 >> Cycle 6 7.310008e-03 1.170905 1.649954e+00 >> >> >> ============================================== >> NOTE: Convergence tolerance was not achieved >> within the allowed 6 V-cycles >> ============================================== >> >> Average Convergence Factor = 1.087039 >> >> Complexity: grid = 1.000000 >> operator = 1.000000 >> cycle = 1.000000 >> >> >> >> >> >> BoomerAMG SOLVER PARAMETERS: >> >> Maximum number of cycles: 6 >> Stopping Tolerance: 1.000000e-05 >> Cycle type (1 = V, 2 = W, etc.): 1 >> >> Relaxation Parameters: >> Visiting Grid: down up coarse >> Visiting Grid: down up coarse >> Number of partial sweeps: 20 20 1 >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 >> Point types, partial sweeps (1=C, -1=F): >> Pre-CG relaxation (down): 1 -1 1 -1 1 >> -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 >> Post-CG relaxation (up): -1 1 -1 1 -1 >> 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 >> Coarsest grid: 0 >> >> Relaxation Weight 0.600000 level 0 >> Outer relaxation weight 0.600000 level 0 >> Output flag (print_level): 3 >> >> >> AMG SOLUTION INFO: >> relative >> residual factor residual >> -------- ------ -------- >> Initial 7.914114e-03 1.000000e+00 >> Cycle 1 8.320104e-03 1.051300 1.051300e+00 >> Cycle 2 7.767803e-03 0.933618 9.815126e-01 >> Cycle 3 6.570752e-03 0.845896 8.302575e-01 >> Cycle 4 5.836166e-03 0.888204 7.374377e-01 >> Cycle 5 6.593214e-03 1.129717 8.330956e-01 >> Cycle 6 8.191117e-03 1.242356 1.035001e+00 >> >> >> ============================================== >> NOTE: Convergence tolerance was not achieved >> within the allowed 6 V-cycles >> ============================================== >> >> Average Convergence Factor = 1.005750 >> >> Complexity: grid = 1.000000 >> operator = 1.000000 >> cycle = 1.000000 >> >> >> >> >> >> BoomerAMG SOLVER PARAMETERS: >> >> Maximum number of cycles: 6 >> Stopping Tolerance: 1.000000e-05 >> Cycle type (1 = V, 2 = W, etc.): 1 >> >> Relaxation Parameters: >> Visiting Grid: down up coarse >> Visiting Grid: down up coarse >> Number of partial sweeps: 20 20 1 >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 >> Point types, partial sweeps (1=C, -1=F): >> Pre-CG relaxation (down): 1 -1 1 -1 1 >> -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 >> 
Post-CG relaxation (up): -1 1 -1 1 -1 >> 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 >> Coarsest grid: 0 >> >> Relaxation Weight 0.600000 level 0 >> Outer relaxation weight 0.600000 level 0 >> Output flag (print_level): 3 >> >> >> AMG SOLUTION INFO: >> relative >> residual factor residual >> -------- ------ -------- >> Initial 5.553016e-03 1.000000e+00 >> Cycle 1 6.417482e-03 1.155675 1.155675e+00 >> Cycle 2 7.101926e-03 1.106653 1.278931e+00 >> Cycle 3 7.077471e-03 0.996556 1.274527e+00 >> Cycle 4 6.382835e-03 0.901853 1.149436e+00 >> Cycle 5 5.392392e-03 0.844827 9.710744e-01 >> Cycle 6 4.674173e-03 0.866809 8.417359e-01 >> >> >> ============================================== >> NOTE: Convergence tolerance was not achieved >> within the allowed 6 V-cycles >> ============================================== >> >> Average Convergence Factor = 0.971694 >> >> Complexity: grid = 1.000000 >> operator = 1.000000 >> cycle = 1.000000 >> >> >> >> >> >> BoomerAMG SOLVER PARAMETERS: >> >> Maximum number of cycles: 6 >> Stopping Tolerance: 1.000000e-05 >> Cycle type (1 = V, 2 = W, etc.): 1 >> >> Relaxation Parameters: >> Visiting Grid: down up coarse >> Visiting Grid: down up coarse >> Number of partial sweeps: 20 20 1 >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 >> Point types, partial sweeps (1=C, -1=F): >> Pre-CG relaxation (down): 1 -1 1 -1 1 >> -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 >> -1 1 -1 1 -1 >> Post-CG relaxation (up): -1 1 -1 1 -1 >> 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 >> 1 -1 1 -1 1 >> Coarsest grid: 0 >> >> Relaxation Weight 0.600000 level 0 >> Outer relaxation weight 0.600000 level 0 >> Output flag (print_level): 3 >> >> >> AMG SOLUTION INFO: >> relative >> residual factor residual >> -------- ------ -------- >> Initial 3.846663e-03 1.000000e+00 >> Cycle 1 4.136662e-03 1.075390 1.075390e+00 >> Cycle 2 4.463861e-03 1.079097 1.160450e+00 >> Cycle 3 4.302262e-03 0.963798 1.118440e+00 >> Cycle 4 4.175328e-03 0.970496 1.085441e+00 >> Cycle 5 4.735871e-03 1.134251 1.231163e+00 >> Cycle 6 5.809054e-03 1.226607 1.510154e+00 >> >> >> ============================================== >> NOTE: Convergence tolerance was not achieved >> within the allowed 6 V-cycles >> ============================================== >> >> Average Convergence Factor = 1.071117 >> >> Complexity: grid = 1.000000 >> operator = 1.000000 >> cycle = 1.000000 >> >> >>> >>> Nicolas, >>> >>> I have added support for both 1 and/or 2 to petsc-dev >>> http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html >>> >>> The two new options are -pc_hypre_boomeramg_nodal_coarsen and >>> -pc_hypre_boomeramg_nodal_relaxation >> argument n indicates the levels which the SmoothNumLevels() sets. >>> >>> I have not tested the code so please let me know what problems you >>> have. >>> >>> Allison, >>> >>> Thank you very much for the clarifications. >>> >>> Barry >>> >>> On Wed, 25 Apr 2007, Allison Baker wrote: >>> >>>> Hi Barry and Nicolas, >>>> >>>> To clarify, >>>> >>>> HYPRE_BoomerAMGSetNumFunctions(solver, int num_functions) tells >>>> AMG to >>>> solve a >>>> system of equations with the specified number of functions/ >>>> unknowns. The >>>> default AMG scheme to solve a PDE system is the "unknown" >>>> approach. 
(The >>>> coarsening and interpolation are determined by looking at each >>>> unknown/function independently - therefore you can imagine that the >>>> coarse >>>> grids are generally not the same for each variable. This approach >>>> generally >>>> works well unless you have strong coupling between unknowns.) >>>> >>>> HYPRE_BOOMERAMGSetNodal(solver, int nodal ) tells AMG to coarsen >>>> such >>>> that >>>> each variable has the same coarse grid - sometimes this is more >>>> "physical" for >>>> a particular problem. The value chosen here for nodal determines how >>>> strength >>>> of connection is determined between the coupled system. I suggest >>>> setting >>>> nodal = 1, which uses a Frobenius norm. This does NOT tell AMG >>>> to use >>>> nodal >>>> relaxation. >>>> >>>> If you want to use nodal relaxation in hypre there are two choices: >>>> >>>> (1) If you call HYPRE_BOOMERAMGSetNodal, then you can >>>> additionally do >>>> nodal >>>> relaxation via the schwarz smoother option in hypre. I did not >>>> implement this >>>> in the Petsc interface, but it could be done easy enough. The >>>> following >>>> four >>>> functions need to be called: >>>> >>>> HYPRE_BoomerAMGSetSmoothType(solver, 6); >>>> HYPRE_BoomerAMGSetDomainType(solver, 1); >>>> HYPRE_BoomerAMGSetOverlap(solver, 0); >>>> HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); (Set >>>> num_levels >>>> to number of levels on which you want nodal smoothing, i.e. >>>> 1=just the >>>> fine >>>> grid, 2= fine grid and the grid below, etc. I find that doing nodal >>>> relaxation on just the finest level is generally sufficient.) >>>> Note that >>>> the >>>> interpolation scheme used will be the same as in the unknown >>>> approach - >>>> so >>>> this is what we call a hybrid systems method. >>>> >>>> (2) You can do both nodal smoothing and a nodal interpolation >>>> scheme. >>>> While >>>> this is currently implemented in 2.0.0, it is not advertised (i.e., >>>> mentioned >>>> at all in the user's manual) because it is not yet implemented very >>>> efficiently (the fine grid matrix is converted to a block matrix >>>> - and >>>> both >>>> are stored), and we have not found it to be as effective as >>>> advertised >>>> elsewhere (this is an area of current research for us)..... If >>>> you want >>>> to try >>>> it anyway, let me know and I will provide more info. >>>> >>>> Hope this helps, >>>> Allison >>>> >>>> >>>> Barry Smith wrote: >>>>> Nicolas, >>>>> >>>>> On Wed, 25 Apr 2007, Nicolas Bathfield wrote: >>>>> >>>>> >>>>>> Dear Barry, >>>>>> >>>>>> Using MatSetBlockSize(A,5) improved my results greatly. Boomemramg >>>> is now >>>>>> solving the system of equations. >>>>>> >>>>> >>>>> Good >>>>> >>>>> >>>>>> Still, the equations I solve are coupled, and my discretization >>>> scheme is >>>>>> meant for a non-segregated solver. As a consequence (I believe), >>>> boomeramg >>>>>> still diverges. >>>>>> >>>>> >>>>> How can "Boomeramg be now solving the system of equations" but >>>>> also >>>>> diverge? I am so confused. >>>>> >>>>> >>>>>> I would therefore like to use the nodal relaxation in >>>>>> boomeramg (the hypre command is HYPRE_BOOMERAMGSetNodal) in >>>>>> order to >>>>>> couple the coarse grid choice for all my variables. >>>>>> >>>>> >>>>> I can add this this afternoon. >>>>> >>>>> I have to admit I do not understand the difference between >>>>> HYPRE_BOOMERAMGSetNodal() and hypre_BoomerAMGSetNumFunctions(). Do >>>> you? >>>>> >>>>> Barry >>>>> >>>>>> How can I achieve this from PETSc? 
>>>>>> >>>>>> Best regards, >>>>>> >>>>>> Nicolas >>>>>> >>>>>> >>>>>>> From PETSc MPIAIJ matrices you need to set the block size of >>>>>>> the >>>>>>> matrix >>>>>>> with MatSetBlockSize(A,5) after you have called MatSetType() or >>>>>>> MatCreateMPIAIJ(). Then HYPRE_BoomerAMGSetNumFunctions() is >>>>>>> automatically >>>>>>> called by PETSc. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> The reason this is done this way instead of as >>>>>>> -pc_hypre_boomeramg_block_size is the idea that hypre will >>>> use the >>>>>>> properties of the matrix it is given in building the >>>> preconditioner so >>>>>>> the user does not have to pass those properties in seperately >>>> directly >>>>>>> to hypre. >>>>>>> >>>>>>> >>>>>>> On Fri, 13 Apr 2007, Shaman Mahmoudi wrote: >>>>>>> >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> int HYPRE_BoomerAMGSetNumFunctions (.....) >>>>>>>> >>>>>>>> sets the size of the system of PDEs. >>>>>>>> >>>>>>>> With best regards, Shaman Mahmoudi >>>>>>>> >>>>>>>> On Apr 13, 2007, at 2:04 PM, Shaman Mahmoudi wrote: >>>>>>>> >>>>>>>> >>>>>>>>> Hi Nicolas, >>>>>>>>> >>>>>>>>> You are right. hypre has changed a lot since the version I >>>> used. >>>>>>>>> >>>>>>>>> I found this interesting information: >>>>>>>>> >>>>>>>>> int HYPRE_BOOMERAMGSetNodal(....) >>>>>>>>> >>>>>>>>> Sets whether to use the nodal systems version. Default is 0. >>>>>>>>> >>>>>>>>> Then information about smoothers: >>>>>>>>> >>>>>>>>> One interesting thing there is this, >>>>>>>>> >>>>>>>>> HYPRE_BoomerAMGSetDomainType(....) >>>>>>>>> >>>>>>>>> 0 - each point is a domain (default) >>>>>>>>> 1 each node is a domain (only of interest in systems AMG) >>>>>>>>> 2 .... >>>>>>>>> >>>>>>>>> I could not find how you define the nodal displacement >>>> ordering. But >>>>>>>>> >>>>>>>> it >>>>>>>> >>>>>>>>> should be there somewhere. >>>>>>>>> >>>>>>>>> I read the reference manual for hypre 2.0 >>>>>>>>> >>>>>>>>> With best regards, Shaman Mahmoudi >>>>>>>>> >>>>>>>>> >>>>>>>>> On Apr 13, 2007, at 1:40 PM, Nicolas Bathfield wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>>> Dear Shaman, >>>>>>>>>> >>>>>>>>>> As far as I could understand, there is a BoomerAMG?s systems >>>> AMG >>>>>>>>>> >>>>>>>> version >>>>>>>> >>>>>>>>>> available. This seems to be exactly what I am looking for, >>>> but I >>>>>>>>>> >>>>>>>> just >>>>>>>> >>>>>>>>>> don't know how to access it, either through PETSc or >>>> directly. >>>>>>>>>> >>>>>>>>>> Best regards, >>>>>>>>>> >>>>>>>>>> Nicolas >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> You want to exploit the structure of the model? >>>>>>>>>>> As far as I know, boomeramg can not treat a set of rows or >>>>>>>>>>> blocks >>>>>>>>>>> >>>>>>>> as >>>>>>>> >>>>>>>>>>> a molecule, a so called block-smoother? >>>>>>>>>>> ML 2.0 smoothed aggregation does support it. >>>>>>>>>>> >>>>>>>>>>> With best regards, Shaman Mahmoudi >>>>>>>>>>> >>>>>>>>>>> On Apr 13, 2007, at 10:45 AM, Nicolas Bathfield wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> I am solving the Navier-stokes equations and try to use >>>> Hypre >>>>>>>>>>>> as >>>>>>>>>>>> preconditioner. >>>>>>>>>>>> Until now, I used PETSc as non-segregated solver and it >>>> worked >>>>>>>>>>>> perfectly. >>>>>>>>>>>> Things got worse when I decided to use Boomeramg >>>> (Hypre). >>>>>>>>>>>> As I solve a system of PDEs, each cell is represented >>>> by 5 >>>>>>>>>>>> rows >>>>>>>>>>>> >>>>>>>> in my >>>>>>>> >>>>>>>>>>>> matrix (I solve for 5 variables). 
PETSc handles that >>>> without >>>>>>>>>>>> >>>>>>>> problem >>>>>>>> >>>>>>>>>>>> apparently, but the coarsening scheme of Boomeramg needs >>>> more >>>>>>>>>>>> >>>>>>>> input in >>>>>>>> >>>>>>>>>>>> order to work properly. >>>>>>>>>>>> >>>>>>>>>>>> Is there an option in PETSc to tell HYPRE that we are >>>> dealing >>>>>>>>>>>> >>>>>>>> with a >>>>>>>> >>>>>>>>>>>> system of PDEs? (something like: >>>> -pc_hypre_boomeramg_...) >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks for your help. >>>>>>>>>>>> >>>>>>>>>>>> Best regards, >>>>>>>>>>>> >>>>>>>>>>>> Nicolas >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> Nicolas BATHFIELD >>>>>>>>>>>> Chalmers University of Technology >>>>>>>>>>>> Shipping and Marine Technology >>>>>>>>>>>> phone: +46 (0)31 772 1476 >>>>>>>>>>>> fax: +46 (0)31 772 3699 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Nicolas BATHFIELD >>>>>>>>>> Chalmers University of Technology >>>>>>>>>> Shipping and Marine Technology >>>>>>>>>> phone: +46 (0)31 772 1476 >>>>>>>>>> fax: +46 (0)31 772 3699 >>>>>>>>>> >>>>>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> >> -- >> Nicolas BATHFIELD >> Chalmers University of Technology >> Shipping and Marine Technology >> phone: +46 (0)31 772 1476 >> fax: +46 (0)31 772 3699 >> > > -- Nicolas BATHFIELD Chalmers University of Technology Shipping and Marine Technology phone: +46 (0)31 772 1476 fax: +46 (0)31 772 3699 From niriedith at gmail.com Tue May 8 08:12:34 2007 From: niriedith at gmail.com (Niriedith Karina ) Date: Tue, 8 May 2007 09:12:34 -0400 Subject: Help me! Message-ID: Hi Everybody! :D I used Metis and Parmetis for partitioning. And i see that metis is faster than parmetis , so my question is.. for what number of elements is more efficient use parmetis? .... my mesh is a tetrahedra... Thanks anyway! -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 8 08:15:14 2007 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 May 2007 08:15:14 -0500 Subject: Help me! In-Reply-To: References: Message-ID: I am fairly sure that Metis and ParMetis use the same serial partitioner, or at least it is possible to do so. There are a lot of options, and the particular choice will affect performance. Matt On 5/8/07, Niriedith Karina wrote: > Hi Everybody! :D > > I used Metis and Parmetis for partitioning. And i see that metis is faster > than parmetis , so my question is.. for what number of elements is more > efficient use parmetis? .... > > my mesh is a tetrahedra... > > Thanks anyway! > -- The government saving money is like me spilling beer. It happens, but never on purpose. From niriedith at gmail.com Tue May 8 08:32:20 2007 From: niriedith at gmail.com (Niriedith Karina ) Date: Tue, 8 May 2007 09:32:20 -0400 Subject: Help me! In-Reply-To: References: Message-ID: Thanks! but i read that for one number specifyc of elements parmetis is recommended..in my case i used a mesh with 8million .... But i read that the people recommended one quantity.... I try to know waht quantity is.... Thanks again :P On 5/8/07, Matthew Knepley wrote: > > I am fairly sure that Metis and ParMetis use the same serial partitioner, > or > at least it is possible to do so. There are a lot of options, and the > particular > choice will affect performance. > > Matt > > On 5/8/07, Niriedith Karina wrote: > > Hi Everybody! :D > > > > I used Metis and Parmetis for partitioning. And i see that metis is > faster > > than parmetis , so my question is.. 
for what number of elements is more > > efficient use parmetis? .... > > > > my mesh is a tetrahedra... > > > > Thanks anyway! > > > > > -- > The government saving money is like me spilling beer. It happens, but > never on purpose. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 8 08:44:02 2007 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 May 2007 08:44:02 -0500 Subject: Help me! In-Reply-To: References: Message-ID: I am not sure I understand what you are asking. However, I would mail George Karypis (author of both) with this question. Thanks, Matt On 5/8/07, Niriedith Karina wrote: > Thanks! > but i read that for one number specifyc of elements parmetis is > recommended..in my case i used a mesh with 8million .... > > But i read that the people recommended one quantity.... > I try to know waht quantity is.... > > Thanks again :P > > > On 5/8/07, Matthew Knepley wrote: > > I am fairly sure that Metis and ParMetis use the same serial partitioner, > or > > at least it is possible to do so. There are a lot of options, and the > particular > > choice will affect performance. > > > > Matt > > > > On 5/8/07, Niriedith Karina < niriedith at gmail.com> wrote: > > > Hi Everybody! :D > > > > > > I used Metis and Parmetis for partitioning. And i see that metis is > faster > > > than parmetis , so my question is.. for what number of elements is more > > > efficient use parmetis? .... > > > > > > my mesh is a tetrahedra... > > > > > > Thanks anyway! > > > > > > > > > -- > > The government saving money is like me spilling beer. It happens, but > > never on purpose. > > > > > > -- The government saving money is like me spilling beer. It happens, but never on purpose. From bsmith at mcs.anl.gov Wed May 9 10:12:36 2007 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 9 May 2007 10:12:36 -0500 (CDT) Subject: HYPRE with multiple variables In-Reply-To: <5720.193.183.3.2.1178616754.squirrel@webmail.chalmers.se> References: <26157.193.183.3.2.1176453900.squirrel@webmail.chalmers.se> <9329.193.183.3.2.1176464453.squirrel@webmail.chalmers.se> <8614E121-37D8-4AF3-A67A-A142D27B7B62@student.uu.se> <35010.129.16.81.46.1177515489.squirrel@webmail.chalmers.se> <462FBDCB.3010309@llnl.gov> <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> <0D714C19-6B32-4334-8E21-CF2F266C869F@student.uu.se> <5720.193.183.3.2.1178616754.squirrel@webmail.chalmers.se> Message-ID: Yes. Make sure you give BoomerAMG enough iterations to solve the system as accurately as you need with -pc_hypre_boomeramg_max_iter and -pc_hypre_boomeramg_rtol Barry On Tue, 8 May 2007, Nicolas Bathfield wrote: > Hi, > > > Boomeramg is used as preconditioner, and I use default gmres to solve the > equations. I did not expect the preconditioner to be used at every > iteration, but I now understand better what's going on. > I solve a spase matrix with 5 diagonals. > I tried to use boomeramg as a solver only. To achieve this, I had to > comment out the command ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE), > and I used the option -ksp_preonly. Is it the right way to do? > > Best regards, > > Nicolas > > > Hi, > > > > You mentioned that your preconditioner is boomerAMG. What solver are > > you using? > > What type of matrix are you trying to solve? Is it SPD and sparse? 
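[Editor's sketch] Taking Barry's reply together with Shaman's earlier suggestion (loose tolerance inside the preconditioner, tight tolerance on the Krylov solve), one illustrative configuration is sketched below. The numbers are placeholders, and the tolerance option is spelled -pc_hypre_boomeramg_tol in the option list earlier in the thread. The calls use the two-argument PetscOptionsSetValue() of PETSc 2.3.x; the same strings can simply be given on the command line instead.

    PetscOptionsSetValue("-ksp_type", "gmres");
    PetscOptionsSetValue("-ksp_rtol", "1.0e-5");                /* outer (Krylov) tolerance      */
    PetscOptionsSetValue("-pc_type", "hypre");
    PetscOptionsSetValue("-pc_hypre_type", "boomeramg");
    PetscOptionsSetValue("-pc_hypre_boomeramg_max_iter", "4");  /* enough V-cycles per application */
    PetscOptionsSetValue("-pc_hypre_boomeramg_tol", "1.0e-3");  /* loose inner tolerance          */
    KSPSetFromOptions(ksp);                                     /* ksp assumed created elsewhere  */
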
> > Either way, if your solver is not AMG and you are using AMG as a > > preconditioner, I would suggest that you use a stopping criteria for > > AMG such as a relative residual convergence of say 1e-03 for the > > preconditioner, and the solvers relative convergence 1e-05 as in your > > example, and disregard a stopping criteria which involves maximum > > number of iterations for AMG, set that to something high. Maybe you > > will get better results that way. In the end, I guess it is a lot of > > testing and fine-tuning. > > > > With best regards, Shaman Mahmoudi > > > > On May 7, 2007, at 4:50 PM, Nicolas Bathfield wrote: > > > >> Hi Allison, Barry, and PETSc users, > >> > >> The nodal relaxation leads to a nice solution! Thanks a lot for > >> your help > >> on this! > >> > >> Even though I get a satisfactory result, I can not explain why > >> hypre is > >> performing what looks like several outer loops (see below for an > >> abstract > >> of the output from -pc_hypre_boomeramg_print_statistics). > >> > >> Here is the list of options I used: > >> > >> -pc_hypre_boomeramg_tol 1e-5 > >> -pc_type hypre -pc_hypre_type boomeramg > >> -pc_hypre_boomeramg_print_statistics > >> -pc_hypre_boomeramg_grid_sweeps_all 20 > >> -pc_hypre_boomeramg_max_iter 6 > >> -pc_hypre_boomeramg_relax_weight_all 0.6 > >> -pc_hypre_boomeramg_outer_relax_weight_all 0.6 > >> -pc_hypre_boomeramg_nodal_coarsen 1 > >> > >> I would expect the preconditioner to be executed once only (in my case > >> this includes 6 V cycles). > >> As you can see from the print_statistic option, boomeramg is exectuted > >> several times (many more times than displyed here, it was too long > >> to copy > >> everything). Is this normal? If yes, why is it so? > >> > >> Best regards, > >> > >> Nicolas > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 4.430431e-03 1.000000e+00 > >> Cycle 1 4.540275e-03 1.024793 1.024793e+00 > >> Cycle 2 4.539148e-03 0.999752 1.024539e+00 > >> Cycle 3 4.620010e-03 1.017814 1.042790e+00 > >> Cycle 4 5.196532e-03 1.124788 1.172918e+00 > >> Cycle 5 6.243043e-03 1.201386 1.409128e+00 > >> Cycle 6 7.310008e-03 1.170905 1.649954e+00 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 1.087039 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >> > >> > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 
> >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 7.914114e-03 1.000000e+00 > >> Cycle 1 8.320104e-03 1.051300 1.051300e+00 > >> Cycle 2 7.767803e-03 0.933618 9.815126e-01 > >> Cycle 3 6.570752e-03 0.845896 8.302575e-01 > >> Cycle 4 5.836166e-03 0.888204 7.374377e-01 > >> Cycle 5 6.593214e-03 1.129717 8.330956e-01 > >> Cycle 6 8.191117e-03 1.242356 1.035001e+00 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 1.005750 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >> > >> > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 5.553016e-03 1.000000e+00 > >> Cycle 1 6.417482e-03 1.155675 1.155675e+00 > >> Cycle 2 7.101926e-03 1.106653 1.278931e+00 > >> Cycle 3 7.077471e-03 0.996556 1.274527e+00 > >> Cycle 4 6.382835e-03 0.901853 1.149436e+00 > >> Cycle 5 5.392392e-03 0.844827 9.710744e-01 > >> Cycle 6 4.674173e-03 0.866809 8.417359e-01 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 0.971694 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >> > >> > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 
-1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 3.846663e-03 1.000000e+00 > >> Cycle 1 4.136662e-03 1.075390 1.075390e+00 > >> Cycle 2 4.463861e-03 1.079097 1.160450e+00 > >> Cycle 3 4.302262e-03 0.963798 1.118440e+00 > >> Cycle 4 4.175328e-03 0.970496 1.085441e+00 > >> Cycle 5 4.735871e-03 1.134251 1.231163e+00 > >> Cycle 6 5.809054e-03 1.226607 1.510154e+00 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 1.071117 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >>> > >>> Nicolas, > >>> > >>> I have added support for both 1 and/or 2 to petsc-dev > >>> http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html > >>> > >>> The two new options are -pc_hypre_boomeramg_nodal_coarsen and > >>> -pc_hypre_boomeramg_nodal_relaxation >>> argument n indicates the levels which the SmoothNumLevels() sets. > >>> > >>> I have not tested the code so please let me know what problems you > >>> have. > >>> > >>> Allison, > >>> > >>> Thank you very much for the clarifications. > >>> > >>> Barry > >>> > >>> On Wed, 25 Apr 2007, Allison Baker wrote: > >>> > >>>> Hi Barry and Nicolas, > >>>> > >>>> To clarify, > >>>> > >>>> HYPRE_BoomerAMGSetNumFunctions(solver, int num_functions) tells > >>>> AMG to > >>>> solve a > >>>> system of equations with the specified number of functions/ > >>>> unknowns. The > >>>> default AMG scheme to solve a PDE system is the "unknown" > >>>> approach. (The > >>>> coarsening and interpolation are determined by looking at each > >>>> unknown/function independently - therefore you can imagine that the > >>>> coarse > >>>> grids are generally not the same for each variable. This approach > >>>> generally > >>>> works well unless you have strong coupling between unknowns.) > >>>> > >>>> HYPRE_BOOMERAMGSetNodal(solver, int nodal ) tells AMG to coarsen > >>>> such > >>>> that > >>>> each variable has the same coarse grid - sometimes this is more > >>>> "physical" for > >>>> a particular problem. The value chosen here for nodal determines how > >>>> strength > >>>> of connection is determined between the coupled system. I suggest > >>>> setting > >>>> nodal = 1, which uses a Frobenius norm. This does NOT tell AMG > >>>> to use > >>>> nodal > >>>> relaxation. > >>>> > >>>> If you want to use nodal relaxation in hypre there are two choices: > >>>> > >>>> (1) If you call HYPRE_BOOMERAMGSetNodal, then you can > >>>> additionally do > >>>> nodal > >>>> relaxation via the schwarz smoother option in hypre. I did not > >>>> implement this > >>>> in the Petsc interface, but it could be done easy enough. 
The > >>>> following > >>>> four > >>>> functions need to be called: > >>>> > >>>> HYPRE_BoomerAMGSetSmoothType(solver, 6); > >>>> HYPRE_BoomerAMGSetDomainType(solver, 1); > >>>> HYPRE_BoomerAMGSetOverlap(solver, 0); > >>>> HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); (Set > >>>> num_levels > >>>> to number of levels on which you want nodal smoothing, i.e. > >>>> 1=just the > >>>> fine > >>>> grid, 2= fine grid and the grid below, etc. I find that doing nodal > >>>> relaxation on just the finest level is generally sufficient.) > >>>> Note that > >>>> the > >>>> interpolation scheme used will be the same as in the unknown > >>>> approach - > >>>> so > >>>> this is what we call a hybrid systems method. > >>>> > >>>> (2) You can do both nodal smoothing and a nodal interpolation > >>>> scheme. > >>>> While > >>>> this is currently implemented in 2.0.0, it is not advertised (i.e., > >>>> mentioned > >>>> at all in the user's manual) because it is not yet implemented very > >>>> efficiently (the fine grid matrix is converted to a block matrix > >>>> - and > >>>> both > >>>> are stored), and we have not found it to be as effective as > >>>> advertised > >>>> elsewhere (this is an area of current research for us)..... If > >>>> you want > >>>> to try > >>>> it anyway, let me know and I will provide more info. > >>>> > >>>> Hope this helps, > >>>> Allison > >>>> > >>>> > >>>> Barry Smith wrote: > >>>>> Nicolas, > >>>>> > >>>>> On Wed, 25 Apr 2007, Nicolas Bathfield wrote: > >>>>> > >>>>> > >>>>>> Dear Barry, > >>>>>> > >>>>>> Using MatSetBlockSize(A,5) improved my results greatly. Boomemramg > >>>> is now > >>>>>> solving the system of equations. > >>>>>> > >>>>> > >>>>> Good > >>>>> > >>>>> > >>>>>> Still, the equations I solve are coupled, and my discretization > >>>> scheme is > >>>>>> meant for a non-segregated solver. As a consequence (I believe), > >>>> boomeramg > >>>>>> still diverges. > >>>>>> > >>>>> > >>>>> How can "Boomeramg be now solving the system of equations" but > >>>>> also > >>>>> diverge? I am so confused. > >>>>> > >>>>> > >>>>>> I would therefore like to use the nodal relaxation in > >>>>>> boomeramg (the hypre command is HYPRE_BOOMERAMGSetNodal) in > >>>>>> order to > >>>>>> couple the coarse grid choice for all my variables. > >>>>>> > >>>>> > >>>>> I can add this this afternoon. > >>>>> > >>>>> I have to admit I do not understand the difference between > >>>>> HYPRE_BOOMERAMGSetNodal() and hypre_BoomerAMGSetNumFunctions(). Do > >>>> you? > >>>>> > >>>>> Barry > >>>>> > >>>>>> How can I achieve this from PETSc? > >>>>>> > >>>>>> Best regards, > >>>>>> > >>>>>> Nicolas > >>>>>> > >>>>>> > >>>>>>> From PETSc MPIAIJ matrices you need to set the block size of > >>>>>>> the > >>>>>>> matrix > >>>>>>> with MatSetBlockSize(A,5) after you have called MatSetType() or > >>>>>>> MatCreateMPIAIJ(). Then HYPRE_BoomerAMGSetNumFunctions() is > >>>>>>> automatically > >>>>>>> called by PETSc. > >>>>>>> > >>>>>>> Barry > >>>>>>> > >>>>>>> The reason this is done this way instead of as > >>>>>>> -pc_hypre_boomeramg_block_size is the idea that hypre will > >>>> use the > >>>>>>> properties of the matrix it is given in building the > >>>> preconditioner so > >>>>>>> the user does not have to pass those properties in seperately > >>>> directly > >>>>>>> to hypre. > >>>>>>> > >>>>>>> > >>>>>>> On Fri, 13 Apr 2007, Shaman Mahmoudi wrote: > >>>>>>> > >>>>>>> > >>>>>>>> Hi, > >>>>>>>> > >>>>>>>> int HYPRE_BoomerAMGSetNumFunctions (.....) 
> >>>>>>>> > >>>>>>>> sets the size of the system of PDEs. > >>>>>>>> > >>>>>>>> With best regards, Shaman Mahmoudi > >>>>>>>> > >>>>>>>> On Apr 13, 2007, at 2:04 PM, Shaman Mahmoudi wrote: > >>>>>>>> > >>>>>>>> > >>>>>>>>> Hi Nicolas, > >>>>>>>>> > >>>>>>>>> You are right. hypre has changed a lot since the version I > >>>> used. > >>>>>>>>> > >>>>>>>>> I found this interesting information: > >>>>>>>>> > >>>>>>>>> int HYPRE_BOOMERAMGSetNodal(....) > >>>>>>>>> > >>>>>>>>> Sets whether to use the nodal systems version. Default is 0. > >>>>>>>>> > >>>>>>>>> Then information about smoothers: > >>>>>>>>> > >>>>>>>>> One interesting thing there is this, > >>>>>>>>> > >>>>>>>>> HYPRE_BoomerAMGSetDomainType(....) > >>>>>>>>> > >>>>>>>>> 0 - each point is a domain (default) > >>>>>>>>> 1 each node is a domain (only of interest in systems AMG) > >>>>>>>>> 2 .... > >>>>>>>>> > >>>>>>>>> I could not find how you define the nodal displacement > >>>> ordering. But > >>>>>>>>> > >>>>>>>> it > >>>>>>>> > >>>>>>>>> should be there somewhere. > >>>>>>>>> > >>>>>>>>> I read the reference manual for hypre 2.0 > >>>>>>>>> > >>>>>>>>> With best regards, Shaman Mahmoudi > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> On Apr 13, 2007, at 1:40 PM, Nicolas Bathfield wrote: > >>>>>>>>> > >>>>>>>>> > >>>>>>>>>> Dear Shaman, > >>>>>>>>>> > >>>>>>>>>> As far as I could understand, there is a BoomerAMG?s systems > >>>> AMG > >>>>>>>>>> > >>>>>>>> version > >>>>>>>> > >>>>>>>>>> available. This seems to be exactly what I am looking for, > >>>> but I > >>>>>>>>>> > >>>>>>>> just > >>>>>>>> > >>>>>>>>>> don't know how to access it, either through PETSc or > >>>> directly. > >>>>>>>>>> > >>>>>>>>>> Best regards, > >>>>>>>>>> > >>>>>>>>>> Nicolas > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>>> Hi, > >>>>>>>>>>> > >>>>>>>>>>> You want to exploit the structure of the model? > >>>>>>>>>>> As far as I know, boomeramg can not treat a set of rows or > >>>>>>>>>>> blocks > >>>>>>>>>>> > >>>>>>>> as > >>>>>>>> > >>>>>>>>>>> a molecule, a so called block-smoother? > >>>>>>>>>>> ML 2.0 smoothed aggregation does support it. > >>>>>>>>>>> > >>>>>>>>>>> With best regards, Shaman Mahmoudi > >>>>>>>>>>> > >>>>>>>>>>> On Apr 13, 2007, at 10:45 AM, Nicolas Bathfield wrote: > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>>> Hi, > >>>>>>>>>>>> > >>>>>>>>>>>> I am solving the Navier-stokes equations and try to use > >>>> Hypre > >>>>>>>>>>>> as > >>>>>>>>>>>> preconditioner. > >>>>>>>>>>>> Until now, I used PETSc as non-segregated solver and it > >>>> worked > >>>>>>>>>>>> perfectly. > >>>>>>>>>>>> Things got worse when I decided to use Boomeramg > >>>> (Hypre). > >>>>>>>>>>>> As I solve a system of PDEs, each cell is represented > >>>> by 5 > >>>>>>>>>>>> rows > >>>>>>>>>>>> > >>>>>>>> in my > >>>>>>>> > >>>>>>>>>>>> matrix (I solve for 5 variables). PETSc handles that > >>>> without > >>>>>>>>>>>> > >>>>>>>> problem > >>>>>>>> > >>>>>>>>>>>> apparently, but the coarsening scheme of Boomeramg needs > >>>> more > >>>>>>>>>>>> > >>>>>>>> input in > >>>>>>>> > >>>>>>>>>>>> order to work properly. > >>>>>>>>>>>> > >>>>>>>>>>>> Is there an option in PETSc to tell HYPRE that we are > >>>> dealing > >>>>>>>>>>>> > >>>>>>>> with a > >>>>>>>> > >>>>>>>>>>>> system of PDEs? (something like: > >>>> -pc_hypre_boomeramg_...) > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> Thanks for your help. 
> >>>>>>>>>>>> > >>>>>>>>>>>> Best regards, > >>>>>>>>>>>> > >>>>>>>>>>>> Nicolas > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> -- > >>>>>>>>>>>> Nicolas BATHFIELD > >>>>>>>>>>>> Chalmers University of Technology > >>>>>>>>>>>> Shipping and Marine Technology > >>>>>>>>>>>> phone: +46 (0)31 772 1476 > >>>>>>>>>>>> fax: +46 (0)31 772 3699 > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>> -- > >>>>>>>>>> Nicolas BATHFIELD > >>>>>>>>>> Chalmers University of Technology > >>>>>>>>>> Shipping and Marine Technology > >>>>>>>>>> phone: +46 (0)31 772 1476 > >>>>>>>>>> fax: +46 (0)31 772 3699 > >>>>>>>>>> > >>>>>>>>>> > >>>>>> > >>>>>> > >>>>> > >>>>> > >>>> > >>>> > >>> > >>> > >> > >> > >> -- > >> Nicolas BATHFIELD > >> Chalmers University of Technology > >> Shipping and Marine Technology > >> phone: +46 (0)31 772 1476 > >> fax: +46 (0)31 772 3699 > >> > > > > > > > From bsmith at mcs.anl.gov Wed May 9 10:11:56 2007 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 9 May 2007 10:11:56 -0500 (CDT) Subject: HYPRE with multiple variables In-Reply-To: <5720.193.183.3.2.1178616754.squirrel@webmail.chalmers.se> References: <26157.193.183.3.2.1176453900.squirrel@webmail.chalmers.se> <9329.193.183.3.2.1176464453.squirrel@webmail.chalmers.se> <8614E121-37D8-4AF3-A67A-A142D27B7B62@student.uu.se> <35010.129.16.81.46.1177515489.squirrel@webmail.chalmers.se> <462FBDCB.3010309@llnl.gov> <7308.193.183.3.2.1178549456.squirrel@webmail.chalmers.se> <0D714C19-6B32-4334-8E21-CF2F266C869F@student.uu.se> <5720.193.183.3.2.1178616754.squirrel@webmail.chalmers.se> Message-ID: Yes. Make sure you give BoomerAMG enough iterations to solve the system as accurately as you need with -pc_hypre_boomeramg_max_iter and -pc_hypre_boomeramg_rtol Barry On Tue, 8 May 2007, Nicolas Bathfield wrote: > Hi, > > > Boomeramg is used as preconditioner, and I use default gmres to solve the > equations. I did not expect the preconditioner to be used at every > iteration, but I now understand better what's going on. > I solve a spase matrix with 5 diagonals. > I tried to use boomeramg as a solver only. To achieve this, I had to > comment out the command ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE), > and I used the option -ksp_preonly. Is it the right way to do? > > Best regards, > > Nicolas > > > Hi, > > > > You mentioned that your preconditioner is boomerAMG. What solver are > > you using? > > What type of matrix are you trying to solve? Is it SPD and sparse? > > Either way, if your solver is not AMG and you are using AMG as a > > preconditioner, I would suggest that you use a stopping criteria for > > AMG such as a relative residual convergence of say 1e-03 for the > > preconditioner, and the solvers relative convergence 1e-05 as in your > > example, and disregard a stopping criteria which involves maximum > > number of iterations for AMG, set that to something high. Maybe you > > will get better results that way. In the end, I guess it is a lot of > > testing and fine-tuning. > > > > With best regards, Shaman Mahmoudi > > > > On May 7, 2007, at 4:50 PM, Nicolas Bathfield wrote: > > > >> Hi Allison, Barry, and PETSc users, > >> > >> The nodal relaxation leads to a nice solution! Thanks a lot for > >> your help > >> on this! > >> > >> Even though I get a satisfactory result, I can not explain why > >> hypre is > >> performing what looks like several outer loops (see below for an > >> abstract > >> of the output from -pc_hypre_boomeramg_print_statistics). 
> >> > >> Here is the list of options I used: > >> > >> -pc_hypre_boomeramg_tol 1e-5 > >> -pc_type hypre -pc_hypre_type boomeramg > >> -pc_hypre_boomeramg_print_statistics > >> -pc_hypre_boomeramg_grid_sweeps_all 20 > >> -pc_hypre_boomeramg_max_iter 6 > >> -pc_hypre_boomeramg_relax_weight_all 0.6 > >> -pc_hypre_boomeramg_outer_relax_weight_all 0.6 > >> -pc_hypre_boomeramg_nodal_coarsen 1 > >> > >> I would expect the preconditioner to be executed once only (in my case > >> this includes 6 V cycles). > >> As you can see from the print_statistic option, boomeramg is exectuted > >> several times (many more times than displyed here, it was too long > >> to copy > >> everything). Is this normal? If yes, why is it so? > >> > >> Best regards, > >> > >> Nicolas > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 4.430431e-03 1.000000e+00 > >> Cycle 1 4.540275e-03 1.024793 1.024793e+00 > >> Cycle 2 4.539148e-03 0.999752 1.024539e+00 > >> Cycle 3 4.620010e-03 1.017814 1.042790e+00 > >> Cycle 4 5.196532e-03 1.124788 1.172918e+00 > >> Cycle 5 6.243043e-03 1.201386 1.409128e+00 > >> Cycle 6 7.310008e-03 1.170905 1.649954e+00 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 1.087039 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >> > >> > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 7.914114e-03 1.000000e+00 > >> Cycle 1 8.320104e-03 1.051300 1.051300e+00 > >> Cycle 2 7.767803e-03 0.933618 9.815126e-01 > >> Cycle 3 6.570752e-03 0.845896 8.302575e-01 > >> Cycle 4 5.836166e-03 0.888204 
7.374377e-01 > >> Cycle 5 6.593214e-03 1.129717 8.330956e-01 > >> Cycle 6 8.191117e-03 1.242356 1.035001e+00 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 1.005750 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >> > >> > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 5.553016e-03 1.000000e+00 > >> Cycle 1 6.417482e-03 1.155675 1.155675e+00 > >> Cycle 2 7.101926e-03 1.106653 1.278931e+00 > >> Cycle 3 7.077471e-03 0.996556 1.274527e+00 > >> Cycle 4 6.382835e-03 0.901853 1.149436e+00 > >> Cycle 5 5.392392e-03 0.844827 9.710744e-01 > >> Cycle 6 4.674173e-03 0.866809 8.417359e-01 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > >> > >> Average Convergence Factor = 0.971694 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >> > >> > >> > >> BoomerAMG SOLVER PARAMETERS: > >> > >> Maximum number of cycles: 6 > >> Stopping Tolerance: 1.000000e-05 > >> Cycle type (1 = V, 2 = W, etc.): 1 > >> > >> Relaxation Parameters: > >> Visiting Grid: down up coarse > >> Visiting Grid: down up coarse > >> Number of partial sweeps: 20 20 1 > >> Type 0=Jac, 3=hGS, 6=hSGS, 9=GE: 6 6 6 > >> Point types, partial sweeps (1=C, -1=F): > >> Pre-CG relaxation (down): 1 -1 1 -1 1 > >> -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 > >> -1 1 -1 1 -1 > >> Post-CG relaxation (up): -1 1 -1 1 -1 > >> 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 > >> 1 -1 1 -1 1 > >> Coarsest grid: 0 > >> > >> Relaxation Weight 0.600000 level 0 > >> Outer relaxation weight 0.600000 level 0 > >> Output flag (print_level): 3 > >> > >> > >> AMG SOLUTION INFO: > >> relative > >> residual factor residual > >> -------- ------ -------- > >> Initial 3.846663e-03 1.000000e+00 > >> Cycle 1 4.136662e-03 1.075390 1.075390e+00 > >> Cycle 2 4.463861e-03 1.079097 1.160450e+00 > >> Cycle 3 4.302262e-03 0.963798 1.118440e+00 > >> Cycle 4 4.175328e-03 0.970496 1.085441e+00 > >> Cycle 5 4.735871e-03 1.134251 1.231163e+00 > >> Cycle 6 5.809054e-03 1.226607 1.510154e+00 > >> > >> > >> ============================================== > >> NOTE: Convergence tolerance was not achieved > >> within the allowed 6 V-cycles > >> ============================================== > 
>> > >> Average Convergence Factor = 1.071117 > >> > >> Complexity: grid = 1.000000 > >> operator = 1.000000 > >> cycle = 1.000000 > >> > >> > >>> > >>> Nicolas, > >>> > >>> I have added support for both 1 and/or 2 to petsc-dev > >>> http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html > >>> > >>> The two new options are -pc_hypre_boomeramg_nodal_coarsen and > >>> -pc_hypre_boomeramg_nodal_relaxation >>> argument n indicates the levels which the SmoothNumLevels() sets. > >>> > >>> I have not tested the code so please let me know what problems you > >>> have. > >>> > >>> Allison, > >>> > >>> Thank you very much for the clarifications. > >>> > >>> Barry > >>> > >>> On Wed, 25 Apr 2007, Allison Baker wrote: > >>> > >>>> Hi Barry and Nicolas, > >>>> > >>>> To clarify, > >>>> > >>>> HYPRE_BoomerAMGSetNumFunctions(solver, int num_functions) tells > >>>> AMG to > >>>> solve a > >>>> system of equations with the specified number of functions/ > >>>> unknowns. The > >>>> default AMG scheme to solve a PDE system is the "unknown" > >>>> approach. (The > >>>> coarsening and interpolation are determined by looking at each > >>>> unknown/function independently - therefore you can imagine that the > >>>> coarse > >>>> grids are generally not the same for each variable. This approach > >>>> generally > >>>> works well unless you have strong coupling between unknowns.) > >>>> > >>>> HYPRE_BOOMERAMGSetNodal(solver, int nodal ) tells AMG to coarsen > >>>> such > >>>> that > >>>> each variable has the same coarse grid - sometimes this is more > >>>> "physical" for > >>>> a particular problem. The value chosen here for nodal determines how > >>>> strength > >>>> of connection is determined between the coupled system. I suggest > >>>> setting > >>>> nodal = 1, which uses a Frobenius norm. This does NOT tell AMG > >>>> to use > >>>> nodal > >>>> relaxation. > >>>> > >>>> If you want to use nodal relaxation in hypre there are two choices: > >>>> > >>>> (1) If you call HYPRE_BOOMERAMGSetNodal, then you can > >>>> additionally do > >>>> nodal > >>>> relaxation via the schwarz smoother option in hypre. I did not > >>>> implement this > >>>> in the Petsc interface, but it could be done easy enough. The > >>>> following > >>>> four > >>>> functions need to be called: > >>>> > >>>> HYPRE_BoomerAMGSetSmoothType(solver, 6); > >>>> HYPRE_BoomerAMGSetDomainType(solver, 1); > >>>> HYPRE_BoomerAMGSetOverlap(solver, 0); > >>>> HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); (Set > >>>> num_levels > >>>> to number of levels on which you want nodal smoothing, i.e. > >>>> 1=just the > >>>> fine > >>>> grid, 2= fine grid and the grid below, etc. I find that doing nodal > >>>> relaxation on just the finest level is generally sufficient.) > >>>> Note that > >>>> the > >>>> interpolation scheme used will be the same as in the unknown > >>>> approach - > >>>> so > >>>> this is what we call a hybrid systems method. > >>>> > >>>> (2) You can do both nodal smoothing and a nodal interpolation > >>>> scheme. > >>>> While > >>>> this is currently implemented in 2.0.0, it is not advertised (i.e., > >>>> mentioned > >>>> at all in the user's manual) because it is not yet implemented very > >>>> efficiently (the fine grid matrix is converted to a block matrix > >>>> - and > >>>> both > >>>> are stored), and we have not found it to be as effective as > >>>> advertised > >>>> elsewhere (this is an area of current research for us)..... If > >>>> you want > >>>> to try > >>>> it anyway, let me know and I will provide more info. 
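As a concrete illustration of choice (1) above, the four hypre calls can be grouped as in the sketch below. This is only a sketch: it assumes a HYPRE_Solver created elsewhere with HYPRE_BoomerAMGCreate(), and the header name is the usual hypre ParCSR interface header rather than something taken from this thread.

#include "HYPRE_parcsr_ls.h"

/* Apply the hybrid systems-AMG smoothing settings described above to an
   existing BoomerAMG solver object. */
void boomeramg_nodal_relaxation(HYPRE_Solver solver, int num_levels)
{
  HYPRE_BoomerAMGSetSmoothType(solver, 6);       /* Schwarz smoother        */
  HYPRE_BoomerAMGSetDomainType(solver, 1);       /* each node is a domain   */
  HYPRE_BoomerAMGSetOverlap(solver, 0);          /* no domain overlap       */
  HYPRE_BoomerAMGSetSmoothNumLevels(solver, num_levels); /* 1 = fine grid only */
}

When going through PETSc rather than calling hypre directly, this is what the -pc_hypre_boomeramg_nodal_relaxation option added to petsc-dev is intended to set up, with its argument playing the role of num_levels.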
> >>>> > >>>> Hope this helps, > >>>> Allison > >>>> > >>>> > >>>> Barry Smith wrote: > >>>>> Nicolas, > >>>>> > >>>>> On Wed, 25 Apr 2007, Nicolas Bathfield wrote: > >>>>> > >>>>> > >>>>>> Dear Barry, > >>>>>> > >>>>>> Using MatSetBlockSize(A,5) improved my results greatly. Boomemramg > >>>> is now > >>>>>> solving the system of equations. > >>>>>> > >>>>> > >>>>> Good > >>>>> > >>>>> > >>>>>> Still, the equations I solve are coupled, and my discretization > >>>> scheme is > >>>>>> meant for a non-segregated solver. As a consequence (I believe), > >>>> boomeramg > >>>>>> still diverges. > >>>>>> > >>>>> > >>>>> How can "Boomeramg be now solving the system of equations" but > >>>>> also > >>>>> diverge? I am so confused. > >>>>> > >>>>> > >>>>>> I would therefore like to use the nodal relaxation in > >>>>>> boomeramg (the hypre command is HYPRE_BOOMERAMGSetNodal) in > >>>>>> order to > >>>>>> couple the coarse grid choice for all my variables. > >>>>>> > >>>>> > >>>>> I can add this this afternoon. > >>>>> > >>>>> I have to admit I do not understand the difference between > >>>>> HYPRE_BOOMERAMGSetNodal() and hypre_BoomerAMGSetNumFunctions(). Do > >>>> you? > >>>>> > >>>>> Barry > >>>>> > >>>>>> How can I achieve this from PETSc? > >>>>>> > >>>>>> Best regards, > >>>>>> > >>>>>> Nicolas > >>>>>> > >>>>>> > >>>>>>> From PETSc MPIAIJ matrices you need to set the block size of > >>>>>>> the > >>>>>>> matrix > >>>>>>> with MatSetBlockSize(A,5) after you have called MatSetType() or > >>>>>>> MatCreateMPIAIJ(). Then HYPRE_BoomerAMGSetNumFunctions() is > >>>>>>> automatically > >>>>>>> called by PETSc. > >>>>>>> > >>>>>>> Barry > >>>>>>> > >>>>>>> The reason this is done this way instead of as > >>>>>>> -pc_hypre_boomeramg_block_size is the idea that hypre will > >>>> use the > >>>>>>> properties of the matrix it is given in building the > >>>> preconditioner so > >>>>>>> the user does not have to pass those properties in seperately > >>>> directly > >>>>>>> to hypre. > >>>>>>> > >>>>>>> > >>>>>>> On Fri, 13 Apr 2007, Shaman Mahmoudi wrote: > >>>>>>> > >>>>>>> > >>>>>>>> Hi, > >>>>>>>> > >>>>>>>> int HYPRE_BoomerAMGSetNumFunctions (.....) > >>>>>>>> > >>>>>>>> sets the size of the system of PDEs. > >>>>>>>> > >>>>>>>> With best regards, Shaman Mahmoudi > >>>>>>>> > >>>>>>>> On Apr 13, 2007, at 2:04 PM, Shaman Mahmoudi wrote: > >>>>>>>> > >>>>>>>> > >>>>>>>>> Hi Nicolas, > >>>>>>>>> > >>>>>>>>> You are right. hypre has changed a lot since the version I > >>>> used. > >>>>>>>>> > >>>>>>>>> I found this interesting information: > >>>>>>>>> > >>>>>>>>> int HYPRE_BOOMERAMGSetNodal(....) > >>>>>>>>> > >>>>>>>>> Sets whether to use the nodal systems version. Default is 0. > >>>>>>>>> > >>>>>>>>> Then information about smoothers: > >>>>>>>>> > >>>>>>>>> One interesting thing there is this, > >>>>>>>>> > >>>>>>>>> HYPRE_BoomerAMGSetDomainType(....) > >>>>>>>>> > >>>>>>>>> 0 - each point is a domain (default) > >>>>>>>>> 1 each node is a domain (only of interest in systems AMG) > >>>>>>>>> 2 .... > >>>>>>>>> > >>>>>>>>> I could not find how you define the nodal displacement > >>>> ordering. But > >>>>>>>>> > >>>>>>>> it > >>>>>>>> > >>>>>>>>> should be there somewhere. 
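To make the MatSetBlockSize() advice above concrete, matrix creation on the PETSc side would look roughly like the sketch below. The sizes and names are placeholders, not taken from Nicolas's code.

#include "petscksp.h"

/* Sketch: create a parallel AIJ matrix for a system with 5 coupled unknowns
   per cell and record the block size so the hypre interface can pass it on. */
PetscErrorCode create_system_matrix(PetscInt nlocal, Mat *A)
{
  MatCreate(PETSC_COMM_WORLD, A);
  MatSetSizes(*A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(*A, MATMPIAIJ);
  MatSetBlockSize(*A, 5);   /* 5 variables per cell; with -pc_type hypre PETSc
                               then calls HYPRE_BoomerAMGSetNumFunctions()    */
  /* ... preallocate, MatSetValues(), MatAssemblyBegin/End() as usual ... */
  return 0;
}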
> >>>>>>>>> > >>>>>>>>> I read the reference manual for hypre 2.0 > >>>>>>>>> > >>>>>>>>> With best regards, Shaman Mahmoudi > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> On Apr 13, 2007, at 1:40 PM, Nicolas Bathfield wrote: > >>>>>>>>> > >>>>>>>>> > >>>>>>>>>> Dear Shaman, > >>>>>>>>>> > >>>>>>>>>> As far as I could understand, there is a BoomerAMG?s systems > >>>> AMG > >>>>>>>>>> > >>>>>>>> version > >>>>>>>> > >>>>>>>>>> available. This seems to be exactly what I am looking for, > >>>> but I > >>>>>>>>>> > >>>>>>>> just > >>>>>>>> > >>>>>>>>>> don't know how to access it, either through PETSc or > >>>> directly. > >>>>>>>>>> > >>>>>>>>>> Best regards, > >>>>>>>>>> > >>>>>>>>>> Nicolas > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>>> Hi, > >>>>>>>>>>> > >>>>>>>>>>> You want to exploit the structure of the model? > >>>>>>>>>>> As far as I know, boomeramg can not treat a set of rows or > >>>>>>>>>>> blocks > >>>>>>>>>>> > >>>>>>>> as > >>>>>>>> > >>>>>>>>>>> a molecule, a so called block-smoother? > >>>>>>>>>>> ML 2.0 smoothed aggregation does support it. > >>>>>>>>>>> > >>>>>>>>>>> With best regards, Shaman Mahmoudi > >>>>>>>>>>> > >>>>>>>>>>> On Apr 13, 2007, at 10:45 AM, Nicolas Bathfield wrote: > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>>> Hi, > >>>>>>>>>>>> > >>>>>>>>>>>> I am solving the Navier-stokes equations and try to use > >>>> Hypre > >>>>>>>>>>>> as > >>>>>>>>>>>> preconditioner. > >>>>>>>>>>>> Until now, I used PETSc as non-segregated solver and it > >>>> worked > >>>>>>>>>>>> perfectly. > >>>>>>>>>>>> Things got worse when I decided to use Boomeramg > >>>> (Hypre). > >>>>>>>>>>>> As I solve a system of PDEs, each cell is represented > >>>> by 5 > >>>>>>>>>>>> rows > >>>>>>>>>>>> > >>>>>>>> in my > >>>>>>>> > >>>>>>>>>>>> matrix (I solve for 5 variables). PETSc handles that > >>>> without > >>>>>>>>>>>> > >>>>>>>> problem > >>>>>>>> > >>>>>>>>>>>> apparently, but the coarsening scheme of Boomeramg needs > >>>> more > >>>>>>>>>>>> > >>>>>>>> input in > >>>>>>>> > >>>>>>>>>>>> order to work properly. > >>>>>>>>>>>> > >>>>>>>>>>>> Is there an option in PETSc to tell HYPRE that we are > >>>> dealing > >>>>>>>>>>>> > >>>>>>>> with a > >>>>>>>> > >>>>>>>>>>>> system of PDEs? (something like: > >>>> -pc_hypre_boomeramg_...) > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> Thanks for your help. 
> >>>>>>>>>>>> > >>>>>>>>>>>> Best regards, > >>>>>>>>>>>> > >>>>>>>>>>>> Nicolas > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> -- > >>>>>>>>>>>> Nicolas BATHFIELD > >>>>>>>>>>>> Chalmers University of Technology > >>>>>>>>>>>> Shipping and Marine Technology > >>>>>>>>>>>> phone: +46 (0)31 772 1476 > >>>>>>>>>>>> fax: +46 (0)31 772 3699 > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>> -- > >>>>>>>>>> Nicolas BATHFIELD > >>>>>>>>>> Chalmers University of Technology > >>>>>>>>>> Shipping and Marine Technology > >>>>>>>>>> phone: +46 (0)31 772 1476 > >>>>>>>>>> fax: +46 (0)31 772 3699 > >>>>>>>>>> > >>>>>>>>>> > >>>>>> > >>>>>> > >>>>> > >>>>> > >>>> > >>>> > >>> > >>> > >> > >> > >> -- > >> Nicolas BATHFIELD > >> Chalmers University of Technology > >> Shipping and Marine Technology > >> phone: +46 (0)31 772 1476 > >> fax: +46 (0)31 772 3699 > >> > > > > > > > From vijay.m at gmail.com Mon May 14 10:04:35 2007 From: vijay.m at gmail.com (Vijay M) Date: Mon, 14 May 2007 09:04:35 -0600 Subject: Makefile for compiling multiple source files Message-ID: Hi, I've been trying to use PETSc for one of my projects lately and I was wondering how to create the makefile to compile multiple source files with and without PETSc calls in fortran efficiently. I am sure it is a newbie question but I could not find much information about this anywhere. For example, you could take 4 files. A.f90, B.f90 without petsc calls and C.f90 and D.f90 with petsc calls. If I were to compile them I have to use f90 -c A.f90 -o A.o f90 -c B.f90 -o B.o cpp C.f90 -I > Ctemp.f90 f90 -c Ctemp.f90 -o C.o cpp D.f90 -I > Dtemp.f90 f90 -c Dtemp.f90 -o D.o g77 A.o B.o C.o D.o -o out.exe PETSc is configured to use g77 as the linker in my server but since I am compiling each file separately, I cannot quite use the linker directly (It treats the source as a linker script). Is there a simplified way to do this or have I missed something vital ?! I would appreciate any kind of help you can provide. If you need more information, let me know. Thanks. vijay From balay at mcs.anl.gov Mon May 14 10:31:01 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 14 May 2007 10:31:01 -0500 (CDT) Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: On Mon, 14 May 2007, Vijay M wrote: > Hi, > > I've been trying to use PETSc for one of my projects lately and I was > wondering how to create the makefile to compile multiple source files > with and without PETSc calls in fortran efficiently. I am sure it is a > newbie question but I could not find much information about this > anywhere. > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > to use If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 sufix] > > f90 -c A.f90 -o A.o > f90 -c B.f90 -o B.o > cpp C.f90 -I > Ctemp.f90 > f90 -c Ctemp.f90 -o C.o > cpp D.f90 -I > Dtemp.f90 > f90 -c Dtemp.f90 -o D.o > > g77 A.o B.o C.o D.o -o out.exe this assumes A.o has the main subroutine - if not - you should list the corresponding .o file first. > > PETSc is configured to use g77 as the linker in my server but since I > am compiling each file separately, I cannot quite use the linker > directly (It treats the source as a linker script). Is there a > simplified way to do this or have I missed something vital ?! I'm attaching a suitable PETSc makefile for your config. Satish > I would appreciate any kind of help you can provide. 
If you need more > information, let me know. Thanks. > > vijay > > -------------- next part -------------- CFLAGS = FFLAGS = CPPFLAGS = FPPFLAGS = CLEANFILES = executable include ${PETSC_DIR}/bmake/common/base OBJS = A.o B.o C.o D.o executable: ${OBJS} chkopts -${FLINKER} -o executable ${OBJS} ${PETSC_LIB} ${RM} ${OBJS} From vijay.m at gmail.com Mon May 14 10:54:48 2007 From: vijay.m at gmail.com (Vijay M) Date: Mon, 14 May 2007 10:54:48 -0500 Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Satish, Thanks for the reply. I modified your makefile (which is quite close to what i had to start with) and when i run it, I get an error message to the effect mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 g77: C.F90: linker input file unused because linking not done mpif77 -c -fPIC -Wall -g -o A.o A.f90 g77: A.f90: linker input file unused because linking not done This is repeated for all source files. Hence when the linker command is issued, there are no object files created in my local directory. I observed this message previously and hence went through the manual cpp route. Do i have to set some environment or configuration variable to get this working ? Vijay On 5/14/07, Satish Balay wrote: > > On Mon, 14 May 2007, Vijay M wrote: > > > Hi, > > > > I've been trying to use PETSc for one of my projects lately and I was > > wondering how to create the makefile to compile multiple source files > > with and without PETSc calls in fortran efficiently. I am sure it is a > > newbie question but I could not find much information about this > > anywhere. > > > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > > to use > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 > sufix] > > > > > f90 -c A.f90 -o A.o > > f90 -c B.f90 -o B.o > > cpp C.f90 -I > Ctemp.f90 > > f90 -c Ctemp.f90 -o C.o > > cpp D.f90 -I > Dtemp.f90 > > f90 -c Dtemp.f90 -o D.o > > > > g77 A.o B.o C.o D.o -o out.exe > > this assumes A.o has the main subroutine - if not - you should list > the corresponding .o file first. > > > > > PETSc is configured to use g77 as the linker in my server but since I > > am compiling each file separately, I cannot quite use the linker > > directly (It treats the source as a linker script). Is there a > > simplified way to do this or have I missed something vital ?! > > > I'm attaching a suitable PETSc makefile for your config. > > Satish > > > I would appreciate any kind of help you can provide. If you need more > > information, let me know. Thanks. > > > > vijay > > > > > From balay at mcs.anl.gov Mon May 14 11:10:10 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 14 May 2007 11:10:10 -0500 (CDT) Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Looks like this is a problem with your mpif77. What do you have for: mpif77 -show Did PETSc configure fine with it? Are you able to run PETSc examples? cd src/ksp/ksp/examples/tutorials make ex2 make ex2f Satish On Mon, 14 May 2007, Vijay M wrote: > Satish, > > Thanks for the reply. 
I modified your makefile (which is quite close > to what i had to start with) and when i run it, I get an error message > to the effect > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 > g77: C.F90: linker input file unused because linking not done > > mpif77 -c -fPIC -Wall -g -o A.o A.f90 > g77: A.f90: linker input file unused because linking not done > > This is repeated for all source files. Hence when the linker command > is issued, there are no object files created in my local directory. I > observed this message previously and hence went through the manual cpp > route. Do i have to set some environment or configuration variable to > get this working ? > > Vijay > > > On 5/14/07, Satish Balay wrote: > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > Hi, > > > > > > I've been trying to use PETSc for one of my projects lately and I was > > > wondering how to create the makefile to compile multiple source files > > > with and without PETSc calls in fortran efficiently. I am sure it is a > > > newbie question but I could not find much information about this > > > anywhere. > > > > > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > > > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > > > to use > > > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 > > sufix] > > > > > > > > f90 -c A.f90 -o A.o > > > f90 -c B.f90 -o B.o > > > cpp C.f90 -I > Ctemp.f90 > > > f90 -c Ctemp.f90 -o C.o > > > cpp D.f90 -I > Dtemp.f90 > > > f90 -c Dtemp.f90 -o D.o > > > > > > g77 A.o B.o C.o D.o -o out.exe > > > > this assumes A.o has the main subroutine - if not - you should list > > the corresponding .o file first. > > > > > > > > PETSc is configured to use g77 as the linker in my server but since I > > > am compiling each file separately, I cannot quite use the linker > > > directly (It treats the source as a linker script). Is there a > > > simplified way to do this or have I missed something vital ?! > > > > > > I'm attaching a suitable PETSc makefile for your config. > > > > Satish > > > > > I would appreciate any kind of help you can provide. If you need more > > > information, let me know. Thanks. > > > > > > vijay > > > > > > > > > > From vijay.m at gmail.com Mon May 14 11:22:24 2007 From: vijay.m at gmail.com (Vijay M) Date: Mon, 14 May 2007 11:22:24 -0500 Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Running "mpif77 -show" gives the following output g77: unrecognized option `-show' /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libfrtbegin.a(frtbegin.o)(.text+0x1e): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status mpif77: No such file or directory I dont think that is a valid option in the current version of gcc i am using. Otherwise, when i try to compile and run the tutorial examples, everything works perfectly. Hence, both the c and fortran compiler are working as expected. I have attached the detailed commands with all linker options used for the compilation. 
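The makefile Satish attached earlier in this thread reads as follows when laid out line by line (the attached build transcript for this message continues below). Note that the two command lines under the executable target must begin with a real tab, and, as Satish points out, PETSc Fortran sources need the .F90 suffix (C.F90, D.F90) so the preprocessor is run on them.

CFLAGS     =
FFLAGS     =
CPPFLAGS   =
FPPFLAGS   =
CLEANFILES = executable

include ${PETSC_DIR}/bmake/common/base

OBJS = A.o B.o C.o D.o

executable: ${OBJS} chkopts
	-${FLINKER} -o executable ${OBJS} ${PETSC_LIB}
	${RM} ${OBJS}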
================================================ [vijay at linux src]$ cd ~/work/petsc/src/ksp/ksp/examples/tutorials [vijay at linux tutorials]$ make ex2 mpicc -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 -I/home/vijay/work/petsc/src/dm/mesh/sieve -I/home/vijay/work/petsc -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug -I/home/vijay/work/petsc/include -I/usr/include -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 -o ex2 ex2.o -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm -Wl,-rpath,/usr/lib64 -L/usr/lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm -Wl,-rpath,/usr/lib64 -L/usr/lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio -lutil -lgcc_s -lpthread -ldl /bin/rm -f ex2.o [vijay at linux tutorials]$ make ex2f mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug -I/home/vijay/work/petsc/include -I/usr/include -o ex2f.o ex2f.F mpif77 -fPIC -Wall -g -o ex2f ex2f.o -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm -Wl,-rpath,/usr/lib64 -L/usr/lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm -Wl,-rpath,/usr/lib64 -L/usr/lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. 
-L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio -lutil -lgcc_s -lpthread -ldl /bin/rm -f ex2f.o ================================================ Satish, the only difference between these examples and my code is that i have multiple files to compile with and without petsc calls and each mpif77 call does not create an object file but instead tries to link and create an output executable. I am not sure how to get around this. Any ideas ? Vijay On 5/14/07, Satish Balay wrote: > > Looks like this is a problem with your mpif77. > > What do you have for: > > mpif77 -show > > Did PETSc configure fine with it? Are you able to run PETSc examples? > > cd src/ksp/ksp/examples/tutorials > make ex2 > make ex2f > > Satish > > On Mon, 14 May 2007, Vijay M wrote: > > > Satish, > > > > Thanks for the reply. I modified your makefile (which is quite close > > to what i had to start with) and when i run it, I get an error message > > to the effect > > > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 > > g77: C.F90: linker input file unused because linking not done > > > > mpif77 -c -fPIC -Wall -g -o A.o A.f90 > > g77: A.f90: linker input file unused because linking not done > > > > This is repeated for all source files. Hence when the linker command > > is issued, there are no object files created in my local directory. I > > observed this message previously and hence went through the manual cpp > > route. Do i have to set some environment or configuration variable to > > get this working ? > > > > Vijay > > > > > > On 5/14/07, Satish Balay wrote: > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > Hi, > > > > > > > > I've been trying to use PETSc for one of my projects lately and I was > > > > wondering how to create the makefile to compile multiple source files > > > > with and without PETSc calls in fortran efficiently. I am sure it is a > > > > newbie question but I could not find much information about this > > > > anywhere. > > > > > > > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > > > > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > > > > to use > > > > > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 > > > sufix] > > > > > > > > > > > f90 -c A.f90 -o A.o > > > > f90 -c B.f90 -o B.o > > > > cpp C.f90 -I > Ctemp.f90 > > > > f90 -c Ctemp.f90 -o C.o > > > > cpp D.f90 -I > Dtemp.f90 > > > > f90 -c Dtemp.f90 -o D.o > > > > > > > > g77 A.o B.o C.o D.o -o out.exe > > > > > > this assumes A.o has the main subroutine - if not - you should list > > > the corresponding .o file first. > > > > > > > > > > > PETSc is configured to use g77 as the linker in my server but since I > > > > am compiling each file separately, I cannot quite use the linker > > > > directly (It treats the source as a linker script). Is there a > > > > simplified way to do this or have I missed something vital ?! > > > > > > > > > I'm attaching a suitable PETSc makefile for your config. > > > > > > Satish > > > > > > > I would appreciate any kind of help you can provide. If you need more > > > > information, let me know. Thanks. 
> > > > > > > > vijay > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Mon May 14 11:30:26 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 14 May 2007 11:30:26 -0500 (CDT) Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Ah - I missed this part. You have PETSc+MPI [LAM] installed with gcc+g77. However you want to use these libraries from 'f90' [Is this absoft or something else?] You'll have to build both PETSc & MPI with the same set of compilers you intend to use with your code. You can do: ./config/configure.py CC=gcc FC=f90 --download-mpich=1 --download-f-blas-lapack=1 Satish On Mon, 14 May 2007, Vijay M wrote: > Running "mpif77 -show" gives the following output > > g77: unrecognized option `-show' > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libfrtbegin.a(frtbegin.o)(.text+0x1e): > In function `main': > : undefined reference to `MAIN__' > collect2: ld returned 1 exit status > mpif77: No such file or directory > > I dont think that is a valid option in the current version of gcc i am using. > > Otherwise, when i try to compile and run the tutorial examples, > everything works perfectly. Hence, both the c and fortran compiler are > working as expected. I have attached the detailed commands with all > linker options used for the compilation. > ================================================ > [vijay at linux src]$ cd ~/work/petsc/src/ksp/ksp/examples/tutorials > > [vijay at linux tutorials]$ make ex2 > mpicc -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 > -I/home/vijay/work/petsc/src/dm/mesh/sieve -I/home/vijay/work/petsc > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > -I/home/vijay/work/petsc/include -I/usr/include > -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c > > mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 -o ex2 > ex2.o -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. 
> -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > -lutil -lgcc_s -lpthread -ldl > /bin/rm -f ex2.o > > [vijay at linux tutorials]$ make ex2f > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > -I/home/vijay/work/petsc/include -I/usr/include -o ex2f.o > ex2f.F > > mpif77 -fPIC -Wall -g -o ex2f ex2f.o > -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > -lutil -lgcc_s -lpthread -ldl > /bin/rm -f ex2f.o > ================================================ > Satish, the only difference between these examples and my code is that > i have multiple files to compile with and without petsc calls and each > mpif77 call does not create an object file but instead tries to link > and create an output executable. I am not sure how to get around this. > Any ideas ? > > Vijay > > On 5/14/07, Satish Balay wrote: > > > > Looks like this is a problem with your mpif77. > > > > What do you have for: > > > > mpif77 -show > > > > Did PETSc configure fine with it? Are you able to run PETSc examples? > > > > cd src/ksp/ksp/examples/tutorials > > make ex2 > > make ex2f > > > > Satish > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > Satish, > > > > > > Thanks for the reply. I modified your makefile (which is quite close > > > to what i had to start with) and when i run it, I get an error message > > > to the effect > > > > > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > > -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 > > > g77: C.F90: linker input file unused because linking not done > > > > > > mpif77 -c -fPIC -Wall -g -o A.o A.f90 > > > g77: A.f90: linker input file unused because linking not done > > > > > > This is repeated for all source files. Hence when the linker command > > > is issued, there are no object files created in my local directory. I > > > observed this message previously and hence went through the manual cpp > > > route. 
Do i have to set some environment or configuration variable to > > > get this working ? > > > > > > Vijay > > > > > > > > > On 5/14/07, Satish Balay wrote: > > > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > > > Hi, > > > > > > > > > > I've been trying to use PETSc for one of my projects lately and I was > > > > > wondering how to create the makefile to compile multiple source files > > > > > with and without PETSc calls in fortran efficiently. I am sure it is a > > > > > newbie question but I could not find much information about this > > > > > anywhere. > > > > > > > > > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > > > > > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > > > > > to use > > > > > > > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 > > > > sufix] > > > > > > > > > > > > > > f90 -c A.f90 -o A.o > > > > > f90 -c B.f90 -o B.o > > > > > cpp C.f90 -I > Ctemp.f90 > > > > > f90 -c Ctemp.f90 -o C.o > > > > > cpp D.f90 -I > Dtemp.f90 > > > > > f90 -c Dtemp.f90 -o D.o > > > > > > > > > > g77 A.o B.o C.o D.o -o out.exe > > > > > > > > this assumes A.o has the main subroutine - if not - you should list > > > > the corresponding .o file first. > > > > > > > > > > > > > > PETSc is configured to use g77 as the linker in my server but since I > > > > > am compiling each file separately, I cannot quite use the linker > > > > > directly (It treats the source as a linker script). Is there a > > > > > simplified way to do this or have I missed something vital ?! > > > > > > > > > > > > I'm attaching a suitable PETSc makefile for your config. > > > > > > > > Satish > > > > > > > > > I would appreciate any kind of help you can provide. If you need more > > > > > information, let me know. Thanks. > > > > > > > > > > vijay > > > > > > > > > > > > > > > > > > > > > > > > > > From vijay.m at gmail.com Mon May 14 11:43:18 2007 From: vijay.m at gmail.com (Vijay M) Date: Mon, 14 May 2007 11:43:18 -0500 Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Satish, I understand what you are trying to say but when i try to configure with the options you specified, i get an error that fortran libraries cannot be used with c compiler. Here's the output. $ ./config/configure.py CC=gcc FC=f90 --download-f-blas-lapack=1 ================================================================================= Configuring PETSc to compile on your system ================================================================================= TESTING: checkFortranLibraries from config.compilers(python/BuildSystem/config/compilers.py:529) ********************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------- Fortran libraries cannot be used with C compiler ********************************************************************************* I tried compiling and running example files after reconfiguration (snes/examples/ex1f.F) and they still work. I was wondering if there is something tricky involved since this is a 64 bit machine ? Vijay On 5/14/07, Satish Balay wrote: > Ah - I missed this part. > > You have PETSc+MPI [LAM] installed with gcc+g77. However you want to > use these libraries from 'f90' [Is this absoft or something else?] 
> > You'll have to build both PETSc & MPI with the same set of compilers > you intend to use with your code. > > You can do: > > ./config/configure.py CC=gcc FC=f90 --download-mpich=1 --download-f-blas-lapack=1 > > Satish > > On Mon, 14 May 2007, Vijay M wrote: > > > Running "mpif77 -show" gives the following output > > > > g77: unrecognized option `-show' > > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libfrtbegin.a(frtbegin.o)(.text+0x1e): > > In function `main': > > : undefined reference to `MAIN__' > > collect2: ld returned 1 exit status > > mpif77: No such file or directory > > > > I dont think that is a valid option in the current version of gcc i am using. > > > > Otherwise, when i try to compile and run the tutorial examples, > > everything works perfectly. Hence, both the c and fortran compiler are > > working as expected. I have attached the detailed commands with all > > linker options used for the compilation. > > ================================================ > > [vijay at linux src]$ cd ~/work/petsc/src/ksp/ksp/examples/tutorials > > > > [vijay at linux tutorials]$ make ex2 > > mpicc -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 > > -I/home/vijay/work/petsc/src/dm/mesh/sieve -I/home/vijay/work/petsc > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > -I/home/vijay/work/petsc/include -I/usr/include > > -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c > > > > mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 -o ex2 > > ex2.o -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. 
> > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -ldl > > /bin/rm -f ex2.o > > > > [vijay at linux tutorials]$ make ex2f > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > -I/home/vijay/work/petsc/include -I/usr/include -o ex2f.o > > ex2f.F > > > > mpif77 -fPIC -Wall -g -o ex2f ex2f.o > > -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -ldl > > /bin/rm -f ex2f.o > > ================================================ > > Satish, the only difference between these examples and my code is that > > i have multiple files to compile with and without petsc calls and each > > mpif77 call does not create an object file but instead tries to link > > and create an output executable. I am not sure how to get around this. > > Any ideas ? > > > > Vijay > > > > On 5/14/07, Satish Balay wrote: > > > > > > Looks like this is a problem with your mpif77. > > > > > > What do you have for: > > > > > > mpif77 -show > > > > > > Did PETSc configure fine with it? Are you able to run PETSc examples? > > > > > > cd src/ksp/ksp/examples/tutorials > > > make ex2 > > > make ex2f > > > > > > Satish > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > Satish, > > > > > > > > Thanks for the reply. I modified your makefile (which is quite close > > > > to what i had to start with) and when i run it, I get an error message > > > > to the effect > > > > > > > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > > > -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 > > > > g77: C.F90: linker input file unused because linking not done > > > > > > > > mpif77 -c -fPIC -Wall -g -o A.o A.f90 > > > > g77: A.f90: linker input file unused because linking not done > > > > > > > > This is repeated for all source files. 
Hence when the linker command > > > > is issued, there are no object files created in my local directory. I > > > > observed this message previously and hence went through the manual cpp > > > > route. Do i have to set some environment or configuration variable to > > > > get this working ? > > > > > > > > Vijay > > > > > > > > > > > > On 5/14/07, Satish Balay wrote: > > > > > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > > > > > Hi, > > > > > > > > > > > > I've been trying to use PETSc for one of my projects lately and I was > > > > > > wondering how to create the makefile to compile multiple source files > > > > > > with and without PETSc calls in fortran efficiently. I am sure it is a > > > > > > newbie question but I could not find much information about this > > > > > > anywhere. > > > > > > > > > > > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > > > > > > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > > > > > > to use > > > > > > > > > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 > > > > > sufix] > > > > > > > > > > > > > > > > > f90 -c A.f90 -o A.o > > > > > > f90 -c B.f90 -o B.o > > > > > > cpp C.f90 -I > Ctemp.f90 > > > > > > f90 -c Ctemp.f90 -o C.o > > > > > > cpp D.f90 -I > Dtemp.f90 > > > > > > f90 -c Dtemp.f90 -o D.o > > > > > > > > > > > > g77 A.o B.o C.o D.o -o out.exe > > > > > > > > > > this assumes A.o has the main subroutine - if not - you should list > > > > > the corresponding .o file first. > > > > > > > > > > > > > > > > > PETSc is configured to use g77 as the linker in my server but since I > > > > > > am compiling each file separately, I cannot quite use the linker > > > > > > directly (It treats the source as a linker script). Is there a > > > > > > simplified way to do this or have I missed something vital ?! > > > > > > > > > > > > > > > I'm attaching a suitable PETSc makefile for your config. > > > > > > > > > > Satish > > > > > > > > > > > I would appreciate any kind of help you can provide. If you need more > > > > > > information, let me know. Thanks. > > > > > > > > > > > > vijay > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From vijay.m at gmail.com Mon May 14 11:58:24 2007 From: vijay.m at gmail.com (Vijay M) Date: Mon, 14 May 2007 11:58:24 -0500 Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Something else i noticed just now is that when i rename an example file in snes directory from .F to .F90 or .f90, the code cannot compile it and i get the previously stated warning message g77: C.F90: linker input file unused because linking not done Not sure if it helps but it is just a sanity check that at least the error is consistent for all .F90 and not something specific to my code. Vijay On 5/14/07, Satish Balay wrote: > Ah - I missed this part. > > You have PETSc+MPI [LAM] installed with gcc+g77. However you want to > use these libraries from 'f90' [Is this absoft or something else?] > > You'll have to build both PETSc & MPI with the same set of compilers > you intend to use with your code. 
> > You can do: > > ./config/configure.py CC=gcc FC=f90 --download-mpich=1 --download-f-blas-lapack=1 > > Satish > > On Mon, 14 May 2007, Vijay M wrote: > > > Running "mpif77 -show" gives the following output > > > > g77: unrecognized option `-show' > > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libfrtbegin.a(frtbegin.o)(.text+0x1e): > > In function `main': > > : undefined reference to `MAIN__' > > collect2: ld returned 1 exit status > > mpif77: No such file or directory > > > > I dont think that is a valid option in the current version of gcc i am using. > > > > Otherwise, when i try to compile and run the tutorial examples, > > everything works perfectly. Hence, both the c and fortran compiler are > > working as expected. I have attached the detailed commands with all > > linker options used for the compilation. > > ================================================ > > [vijay at linux src]$ cd ~/work/petsc/src/ksp/ksp/examples/tutorials > > > > [vijay at linux tutorials]$ make ex2 > > mpicc -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 > > -I/home/vijay/work/petsc/src/dm/mesh/sieve -I/home/vijay/work/petsc > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > -I/home/vijay/work/petsc/include -I/usr/include > > -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c > > > > mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 -o ex2 > > ex2.o -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. 
> > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -ldl > > /bin/rm -f ex2.o > > > > [vijay at linux tutorials]$ make ex2f > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > -I/home/vijay/work/petsc/include -I/usr/include -o ex2f.o > > ex2f.F > > > > mpif77 -fPIC -Wall -g -o ex2f ex2f.o > > -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > -lutil -lgcc_s -lpthread -ldl > > /bin/rm -f ex2f.o > > ================================================ > > Satish, the only difference between these examples and my code is that > > i have multiple files to compile with and without petsc calls and each > > mpif77 call does not create an object file but instead tries to link > > and create an output executable. I am not sure how to get around this. > > Any ideas ? > > > > Vijay > > > > On 5/14/07, Satish Balay wrote: > > > > > > Looks like this is a problem with your mpif77. > > > > > > What do you have for: > > > > > > mpif77 -show > > > > > > Did PETSc configure fine with it? Are you able to run PETSc examples? > > > > > > cd src/ksp/ksp/examples/tutorials > > > make ex2 > > > make ex2f > > > > > > Satish > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > Satish, > > > > > > > > Thanks for the reply. I modified your makefile (which is quite close > > > > to what i had to start with) and when i run it, I get an error message > > > > to the effect > > > > > > > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > > > -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 > > > > g77: C.F90: linker input file unused because linking not done > > > > > > > > mpif77 -c -fPIC -Wall -g -o A.o A.f90 > > > > g77: A.f90: linker input file unused because linking not done > > > > > > > > This is repeated for all source files. 
Hence when the linker command > > > > is issued, there are no object files created in my local directory. I > > > > observed this message previously and hence went through the manual cpp > > > > route. Do i have to set some environment or configuration variable to > > > > get this working ? > > > > > > > > Vijay > > > > > > > > > > > > On 5/14/07, Satish Balay wrote: > > > > > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > > > > > Hi, > > > > > > > > > > > > I've been trying to use PETSc for one of my projects lately and I was > > > > > > wondering how to create the makefile to compile multiple source files > > > > > > with and without PETSc calls in fortran efficiently. I am sure it is a > > > > > > newbie question but I could not find much information about this > > > > > > anywhere. > > > > > > > > > > > > For example, you could take 4 files. A.f90, B.f90 without petsc calls > > > > > > and C.f90 and D.f90 with petsc calls. If I were to compile them I have > > > > > > to use > > > > > > > > > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not .f90 > > > > > sufix] > > > > > > > > > > > > > > > > > f90 -c A.f90 -o A.o > > > > > > f90 -c B.f90 -o B.o > > > > > > cpp C.f90 -I > Ctemp.f90 > > > > > > f90 -c Ctemp.f90 -o C.o > > > > > > cpp D.f90 -I > Dtemp.f90 > > > > > > f90 -c Dtemp.f90 -o D.o > > > > > > > > > > > > g77 A.o B.o C.o D.o -o out.exe > > > > > > > > > > this assumes A.o has the main subroutine - if not - you should list > > > > > the corresponding .o file first. > > > > > > > > > > > > > > > > > PETSc is configured to use g77 as the linker in my server but since I > > > > > > am compiling each file separately, I cannot quite use the linker > > > > > > directly (It treats the source as a linker script). Is there a > > > > > > simplified way to do this or have I missed something vital ?! > > > > > > > > > > > > > > > I'm attaching a suitable PETSc makefile for your config. > > > > > > > > > > Satish > > > > > > > > > > > I would appreciate any kind of help you can provide. If you need more > > > > > > information, let me know. Thanks. > > > > > > > > > > > > vijay > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Mon May 14 12:03:59 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 14 May 2007 12:03:59 -0500 (CDT) Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: Send us configure.log for this failed build at petsc-maint at mcs.anl.gov Satish On Mon, 14 May 2007, Vijay M wrote: > Satish, > > I understand what you are trying to say but when i try to configure > with the options you specified, i get an error that fortran libraries > cannot be used with c compiler. Here's the output. 
> > $ ./config/configure.py CC=gcc FC=f90 --download-f-blas-lapack=1 > ================================================================================= > Configuring PETSc to compile on your system > ================================================================================= > TESTING: checkFortranLibraries from > config.compilers(python/BuildSystem/config/compilers.py:529) > > ********************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > --------------------------------------------------------------------------------------- > Fortran libraries cannot be used with C compiler > ********************************************************************************* > I tried compiling and running example files after reconfiguration > (snes/examples/ex1f.F) and they still work. I was wondering if there > is something tricky involved since this is a 64 bit machine ? > > Vijay > > On 5/14/07, Satish Balay wrote: > > Ah - I missed this part. > > > > You have PETSc+MPI [LAM] installed with gcc+g77. However you want to > > use these libraries from 'f90' [Is this absoft or something else?] > > > > You'll have to build both PETSc & MPI with the same set of compilers > > you intend to use with your code. > > > > You can do: > > > > ./config/configure.py CC=gcc FC=f90 --download-mpich=1 > > --download-f-blas-lapack=1 > > > > Satish > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > Running "mpif77 -show" gives the following output > > > > > > g77: unrecognized option `-show' > > > > > /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libfrtbegin.a(frtbegin.o)(.text+0x1e): > > > In function `main': > > > : undefined reference to `MAIN__' > > > collect2: ld returned 1 exit status > > > mpif77: No such file or directory > > > > > > I dont think that is a valid option in the current version of gcc i am > > using. > > > > > > Otherwise, when i try to compile and run the tutorial examples, > > > everything works perfectly. Hence, both the c and fortran compiler are > > > working as expected. I have attached the detailed commands with all > > > linker options used for the compilation. > > > ================================================ > > > [vijay at linux src]$ cd ~/work/petsc/src/ksp/ksp/examples/tutorials > > > > > > [vijay at linux tutorials]$ make ex2 > > > mpicc -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 > > > -I/home/vijay/work/petsc/src/dm/mesh/sieve -I/home/vijay/work/petsc > > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > > -I/home/vijay/work/petsc/include -I/usr/include > > > -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c > > > > > > mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -g3 -o ex2 > > > ex2.o -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > > > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > > > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. 
> > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > > -lutil -lgcc_s -lpthread -ldl > > > /bin/rm -f ex2.o > > > > > > [vijay at linux tutorials]$ make ex2f > > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > > -I/home/vijay/work/petsc/include -I/usr/include -o ex2f.o > > > ex2f.F > > > > > > mpif77 -fPIC -Wall -g -o ex2f ex2f.o > > > -Wl,-rpath,/home/vijay/work/petsc/lib/linux-gnu-c-debug > > > -L/home/vijay/work/petsc/lib/linux-gnu-c-debug -lpetscksp -lpetscdm > > > -lpetscmat -lpetscvec -lpetsc -llapack -lblas -lm > > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > > -lutil -lgcc_s -lpthread -lg2c -lm -Wl,-rpath,/usr/lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -Wl,-rpath,/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 -lm > > > -Wl,-rpath,/usr/lib64 -L/usr/lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64 > > > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. > > > -Wl,-rpath,/lib/../lib64 -L/lib/../lib64 -Wl,-rpath,/usr/lib/../lib64 > > > -L/usr/lib/../lib64 -ldl -llammpio -llamf77mpi -lmpi -llam -laio > > > -lutil -lgcc_s -lpthread -ldl > > > /bin/rm -f ex2f.o > > > ================================================ > > > Satish, the only difference between these examples and my code is that > > > i have multiple files to compile with and without petsc calls and each > > > mpif77 call does not create an object file but instead tries to link > > > and create an output executable. I am not sure how to get around this. > > > Any ideas ? > > > > > > Vijay > > > > > > On 5/14/07, Satish Balay wrote: > > > > > > > > Looks like this is a problem with your mpif77. 
> > > > > > > > What do you have for: > > > > > > > > mpif77 -show > > > > > > > > Did PETSc configure fine with it? Are you able to run PETSc examples? > > > > > > > > cd src/ksp/ksp/examples/tutorials > > > > make ex2 > > > > make ex2f > > > > > > > > Satish > > > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > > > Satish, > > > > > > > > > > Thanks for the reply. I modified your makefile (which is quite close > > > > > to what i had to start with) and when i run it, I get an error message > > > > > to the effect > > > > > > > > > > mpif77 -c -fPIC -Wall -g -I/home/vijay/work/petsc > > > > > -I/home/vijay/work/petsc/bmake/linux-gnu-c-debug > > > > > -I/home/vijay/work/petsc/include -I/usr/include -o C.o C.F90 > > > > > g77: C.F90: linker input file unused because linking not done > > > > > > > > > > mpif77 -c -fPIC -Wall -g -o A.o A.f90 > > > > > g77: A.f90: linker input file unused because linking not done > > > > > > > > > > This is repeated for all source files. Hence when the linker command > > > > > is issued, there are no object files created in my local directory. I > > > > > observed this message previously and hence went through the manual cpp > > > > > route. Do i have to set some environment or configuration variable to > > > > > get this working ? > > > > > > > > > > Vijay > > > > > > > > > > > > > > > On 5/14/07, Satish Balay wrote: > > > > > > > > > > > > On Mon, 14 May 2007, Vijay M wrote: > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > I've been trying to use PETSc for one of my projects lately and I > > was > > > > > > > wondering how to create the makefile to compile multiple source > > files > > > > > > > with and without PETSc calls in fortran efficiently. I am sure it > > is a > > > > > > > newbie question but I could not find much information about this > > > > > > > anywhere. > > > > > > > > > > > > > > For example, you could take 4 files. A.f90, B.f90 without petsc > > calls > > > > > > > and C.f90 and D.f90 with petsc calls. If I were to compile them I > > have > > > > > > > to use > > > > > > > > > > > > If C, D are PETSc sources, then they should be C.F90, D.F90 [not > > .f90 > > > > > > sufix] > > > > > > > > > > > > > > > > > > > > f90 -c A.f90 -o A.o > > > > > > > f90 -c B.f90 -o B.o > > > > > > > cpp C.f90 -I > Ctemp.f90 > > > > > > > f90 -c Ctemp.f90 -o C.o > > > > > > > cpp D.f90 -I > Dtemp.f90 > > > > > > > f90 -c Dtemp.f90 -o D.o > > > > > > > > > > > > > > g77 A.o B.o C.o D.o -o out.exe > > > > > > > > > > > > this assumes A.o has the main subroutine - if not - you should list > > > > > > the corresponding .o file first. > > > > > > > > > > > > > > > > > > > > PETSc is configured to use g77 as the linker in my server but > > since I > > > > > > > am compiling each file separately, I cannot quite use the linker > > > > > > > directly (It treats the source as a linker script). Is there a > > > > > > > simplified way to do this or have I missed something vital ?! > > > > > > > > > > > > > > > > > > I'm attaching a suitable PETSc makefile for your config. > > > > > > > > > > > > Satish > > > > > > > > > > > > > I would appreciate any kind of help you can provide. If you need > > more > > > > > > > information, let me know. Thanks. 
> > > > > > > > > > > > > > vijay > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Mon May 14 12:05:02 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 14 May 2007 12:05:02 -0500 (CDT) Subject: Makefile for compiling multiple source files In-Reply-To: References: Message-ID: On Mon, 14 May 2007, Vijay M wrote: > Something else i noticed just now is that when i rename an example > file in snes directory from .F to .F90 or .f90, the code cannot > compile it and i get the previously stated warning message > > g77: C.F90: linker input file unused because linking not done > > Not sure if it helps but it is just a sanity check that at least the > error is consistent for all .F90 and not something specific to my > code. Thats because g77 is not an f90 compiler - and does not understand .f90 or .F90 suffixes. Satish From aiminy at iastate.edu Fri May 18 00:29:28 2007 From: aiminy at iastate.edu (Aimin Yan) Date: Fri, 18 May 2007 00:29:28 -0500 Subject: use BLZPACK in Petsc and SLEPc Message-ID: <6.2.1.2.2.20070518001009.038386b0@aiminy.mail.iastate.edu> I want to call BLZPACK library in my C++. Someone told me I can use SLEPc. For SLEPc, I have to install PETSc. So I get to this list. Does anyone have some experience in this? Thanks, Aimin From balay at mcs.anl.gov Fri May 18 09:43:43 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 18 May 2007 09:43:43 -0500 (CDT) Subject: use BLZPACK in Petsc and SLEPc In-Reply-To: <6.2.1.2.2.20070518001009.038386b0@aiminy.mail.iastate.edu> References: <6.2.1.2.2.20070518001009.038386b0@aiminy.mail.iastate.edu> Message-ID: You can check the installation instructions of SLEPc & PETSc http://www.grycap.upv.es/slepc/install.htm http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/installation.html Wrt using BLZPACK from SLEPc, you'll have to check with SLEPc developers. [slepc-maint at grycap.upv.es] Simple summary of installing PETSc: >>>>> wget ftp://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-lite-2.3.2-p10.tar.gz tar -xzf petsc-lite-2.3.2-p10.tar.gz cd petsc-2.3.2-p10 export PETSC_DIR=$PWD ./config/configure.py CC=gcc FC=g77 --download-f-blas-lapack=1 --download-mpich=1 make all make test <<< If you wish to use these libraries from C++ - you could use the additional configure option: --with-clanguage=cxx Satish On Fri, 18 May 2007, Aimin Yan wrote: > I want to call BLZPACK library in my C++. > Someone told me I can use SLEPc. > For SLEPc, I have to install PETSc. So I get to this list. > Does anyone have some experience in this? > > Thanks, > > Aimin > > From zonexo at gmail.com Mon May 21 23:06:25 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 12:06:25 +0800 Subject: Can't read MPIRUN_HOST Message-ID: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> Hi, I tried to compile PETSc and there's no problem. I also used the *--with-batch=1 *option since my server is using a job scheduler. There's no shared library and I'm installing hypre too. However, when after sending the job for ex1f, I got the error msg: Can't read MPIRUN_HOST. So what's the problem? thanks -------------- next part -------------- An HTML attachment was scrubbed... 
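A note on the makefile thread above (Vijay's question about compiling A.f90/B.f90 without PETSc calls and C.F90/D.F90 with them): the makefile Satish attached is not preserved in this archive. The sketch below is only a guess at its general shape, following the PETSc 2.3.x conventions visible in the compile lines quoted earlier (the bmake/ directory, FLINKER, PETSC_KSP_LIB). The file names and the target out.exe come from the thread; everything else is an assumption, and PETSc's bundled suffix rules are assumed to take care of preprocessing and compiling the .f90/.F90 sources.

include ${PETSC_DIR}/bmake/common/base

OBJS = A.o B.o C.o D.o

out.exe: ${OBJS} chkopts
	-${FLINKER} -o out.exe ${OBJS} ${PETSC_KSP_LIB}
	${RM} ${OBJS}

As the thread concludes, this only helps once PETSc and MPI are built with an f90-capable compiler (for example, configuring with CC=gcc FC=f90 --download-mpich=1 --download-f-blas-lapack=1); with g77-based wrappers the free-form sources are not recognized and the "linker input file unused" message comes back.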
URL: From balay at mcs.anl.gov Mon May 21 23:09:32 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 21 May 2007 23:09:32 -0500 (CDT) Subject: Can't read MPIRUN_HOST In-Reply-To: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> Message-ID: I don't think this has anything to do with PETSc. You might want to check the docs of MPI library or the batch system you are using. Satish On Tue, 22 May 2007, Ben Tay wrote: > Hi, > > I tried to compile PETSc and there's no problem. I also used the > *--with-batch=1 > *option since my server is using a job scheduler. There's no shared library > and I'm installing hypre too. > > However, when after sending the job for ex1f, I got the error msg: > > Can't read MPIRUN_HOST. > > So what's the problem? > > thanks > From zonexo at gmail.com Mon May 21 23:34:41 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 12:34:41 +0800 Subject: Can't read MPIRUN_HOST In-Reply-To: References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> Message-ID: <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> ok. I sorted out that problem. Now I got this error: *** glibc detected *** double free or corruption (!prev): 0x0000000000509660 *** /usr/lsf62/bin/mvapich_wrapper: line 388: 28571 Aborted (core dumped) $PJL $PJL_OPTS $REMOTE_ENV_VAR $JOB_CMD Job /usr/lsf62/bin/mvapich_wrapper ./ex1f TID HOST_NAME COMMAND_LINE STATUS TERMINATION_TIME ===== ========== ================ ======================= =================== 00001 atlas3-c63 Undefined 00002 atlas3-c63 Undefined 00002 atlas3-c61 ./ex1f Signaled (SIGKILL) 05/22/2007 12:23:58 00003 atlas3-c61 Undefined Thanks On 5/22/07, Satish Balay wrote: > > I don't think this has anything to do with PETSc. You might want to > check the docs of MPI library or the batch system you are using. > > Satish > > > On Tue, 22 May 2007, Ben Tay wrote: > > > Hi, > > > > I tried to compile PETSc and there's no problem. I also used the > > *--with-batch=1 > > *option since my server is using a job scheduler. There's no shared > library > > and I'm installing hypre too. > > > > However, when after sending the job for ex1f, I got the error msg: > > > > Can't read MPIRUN_HOST. > > > > So what's the problem? > > > > thanks > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Mon May 21 23:53:59 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 12:53:59 +0800 Subject: Calling hypre Message-ID: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> Hi, presently I am using ./a.out -pc_type hypre -pc_hypre_type boomeramg to use hypre. However, I have 2 matrix to solve and now I only want one of them to use hypre, while the other use jacobi + bcgs. for the solver using hypre, initially it's just: call PCSetType(pc,PCLU,ierr) - direct solver I now changed to call PCSetType(pc,hypre,ierr) call PCHYPRESetType(pc,boomeramg,ierr) However, I got the error msg: fortcom: Error: petsc_sub.F, line 125: This name does not have a type, and must have an explicit type. [HYPRE] call PCSetType(pc,hypre,ierr) Sorry I don't really understand the eg. hyppilut.c since my knowledge of the c language is very basic. So how should I set the options to use hypre? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
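For readers skimming this thread: the replies that follow settle on passing the type names as quoted strings from Fortran, and on doing the KSP/PC setup once rather than at every time step. A minimal fixed-form Fortran sketch of the two-solver arrangement described above is given here. The names ksp_mom and ksp_pois are invented for illustration, the include paths follow the 2.3.x example programs (newer releases use different paths and Destroy signatures), and the matrix setup, KSPSetOperators and KSPSolve calls are left out.

      program twosolvers
      implicit none
#include "include/finclude/petsc.h"
#include "include/finclude/petscvec.h"
#include "include/finclude/petscmat.h"
#include "include/finclude/petscksp.h"
#include "include/finclude/petscpc.h"
      KSP            ksp_mom, ksp_pois
      PC             pc_mom, pc_pois
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

!     momentum-type system: Jacobi + BiCGStab
      call KSPCreate(PETSC_COMM_WORLD,ksp_mom,ierr)
      call KSPSetType(ksp_mom,'bcgs',ierr)
      call KSPGetPC(ksp_mom,pc_mom,ierr)
      call PCSetType(pc_mom,'jacobi',ierr)

!     Poisson-type system: hypre BoomerAMG (needs a PETSc build
!     configured with hypre)
      call KSPCreate(PETSC_COMM_WORLD,ksp_pois,ierr)
      call KSPGetPC(ksp_pois,pc_pois,ierr)
      call PCSetType(pc_pois,'hypre',ierr)
      call PCHYPRESetType(pc_pois,'boomeramg',ierr)

!     KSPSetOperators()/KSPSolve() for each system belong inside the
!     time loop; the setup above is done once, before the loop starts.

      call KSPDestroy(ksp_mom,ierr)
      call KSPDestroy(ksp_pois,ierr)
      call PetscFinalize(ierr)
      end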
URL: From balay at mcs.anl.gov Tue May 22 00:04:48 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 22 May 2007 00:04:48 -0500 (CDT) Subject: Can't read MPIRUN_HOST In-Reply-To: <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> Message-ID: Is this a PETSc example? Which one is it? Satish On Tue, 22 May 2007, Ben Tay wrote: > ok. I sorted out that problem. Now I got this error: > > > *** glibc detected *** double free or corruption (!prev): 0x0000000000509660 > *** > /usr/lsf62/bin/mvapich_wrapper: line 388: 28571 Aborted > (core dumped) $PJL $PJL_OPTS $REMOTE_ENV_VAR $JOB_CMD > Job /usr/lsf62/bin/mvapich_wrapper ./ex1f > > TID HOST_NAME COMMAND_LINE STATUS TERMINATION_TIME > ===== ========== ================ ======================= > =================== > 00001 atlas3-c63 Undefined > 00002 atlas3-c63 Undefined > 00002 atlas3-c61 ./ex1f Signaled (SIGKILL) 05/22/2007 > 12:23:58 > 00003 atlas3-c61 Undefined > > > > Thanks > > > > On 5/22/07, Satish Balay wrote: > > > > I don't think this has anything to do with PETSc. You might want to > > check the docs of MPI library or the batch system you are using. > > > > Satish > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > Hi, > > > > > > I tried to compile PETSc and there's no problem. I also used the > > > *--with-batch=1 > > > *option since my server is using a job scheduler. There's no shared > > library > > > and I'm installing hypre too. > > > > > > However, when after sending the job for ex1f, I got the error msg: > > > > > > Can't read MPIRUN_HOST. > > > > > > So what's the problem? > > > > > > thanks > > > > > > > > From zonexo at gmail.com Tue May 22 00:06:43 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 13:06:43 +0800 Subject: Can't read MPIRUN_HOST In-Reply-To: References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> Message-ID: <804ab5d40705212206m54dc12fcle945300d1d4971a0@mail.gmail.com> It's ex1f. Thanks On 5/22/07, Satish Balay wrote: > > Is this a PETSc example? Which one is it? > > Satish > > On Tue, 22 May 2007, Ben Tay wrote: > > > ok. I sorted out that problem. Now I got this error: > > > > > > *** glibc detected *** double free or corruption (!prev): > 0x0000000000509660 > > *** > > /usr/lsf62/bin/mvapich_wrapper: line 388: 28571 Aborted > > (core dumped) $PJL $PJL_OPTS $REMOTE_ENV_VAR $JOB_CMD > > Job /usr/lsf62/bin/mvapich_wrapper ./ex1f > > > > TID HOST_NAME > COMMAND_LINE STATUS TERMINATION_TIME > > ===== ========== ================ ======================= > > =================== > > 00001 atlas3-c63 Undefined > > 00002 atlas3-c63 Undefined > > 00002 atlas3-c61 ./ex1f Signaled (SIGKILL) 05/22/2007 > > 12:23:58 > > 00003 atlas3-c61 Undefined > > > > > > > > Thanks > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > I don't think this has anything to do with PETSc. You might want to > > > check the docs of MPI library or the batch system you are using. > > > > > > Satish > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > Hi, > > > > > > > > I tried to compile PETSc and there's no problem. I also used the > > > > *--with-batch=1 > > > > *option since my server is using a job scheduler. There's no shared > > > library > > > > and I'm installing hypre too. 
> > > > > > > > However, when after sending the job for ex1f, I got the error msg: > > > > > > > > Can't read MPIRUN_HOST. > > > > > > > > So what's the problem? > > > > > > > > thanks > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue May 22 00:09:50 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 22 May 2007 00:09:50 -0500 (CDT) Subject: Calling hypre In-Reply-To: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> Message-ID: use: call PCSetType(pc,'hypre',ierr) call PCHYPRESetType(pc,'boomeramg',ierr) Satish On Tue, 22 May 2007, Ben Tay wrote: > Hi, > > presently I am using ./a.out -pc_type hypre -pc_hypre_type boomeramg to use > hypre. > > However, I have 2 matrix to solve and now I only want one of them to use > hypre, while the other use jacobi + bcgs. > > for the solver using hypre, > > initially it's just: > > call PCSetType(pc,PCLU,ierr) - direct solver > > I now changed to > > call PCSetType(pc,hypre,ierr) > > call PCHYPRESetType(pc,boomeramg,ierr) > > However, I got the error msg: > > fortcom: Error: petsc_sub.F, line 125: This name does not have a type, and > must have an explicit type. [HYPRE] > call PCSetType(pc,hypre,ierr) > > Sorry I don't really understand the eg. hyppilut.c since my knowledge of the > c language is very basic. > > So how should I set the options to use hypre? > > Thank you. > From balay at mcs.anl.gov Tue May 22 00:11:11 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 22 May 2007 00:11:11 -0500 (CDT) Subject: Can't read MPIRUN_HOST In-Reply-To: <804ab5d40705212206m54dc12fcle945300d1d4971a0@mail.gmail.com> References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> <804ab5d40705212206m54dc12fcle945300d1d4971a0@mail.gmail.com> Message-ID: which ex1f is this? If its src/sys/examples/tests/ex1f.F - then it tests PETSc error codes [i.e the code generates an error] Satish On Tue, 22 May 2007, Ben Tay wrote: > It's ex1f. Thanks > > On 5/22/07, Satish Balay wrote: > > > > Is this a PETSc example? Which one is it? > > > > Satish > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > ok. I sorted out that problem. Now I got this error: > > > > > > > > > *** glibc detected *** double free or corruption (!prev): > > 0x0000000000509660 > > > *** > > > /usr/lsf62/bin/mvapich_wrapper: line 388: 28571 Aborted > > > (core dumped) $PJL $PJL_OPTS $REMOTE_ENV_VAR $JOB_CMD > > > Job /usr/lsf62/bin/mvapich_wrapper ./ex1f > > > > > > TID HOST_NAME > > COMMAND_LINE STATUS TERMINATION_TIME > > > ===== ========== ================ ======================= > > > =================== > > > 00001 atlas3-c63 Undefined > > > 00002 atlas3-c63 Undefined > > > 00002 atlas3-c61 ./ex1f Signaled (SIGKILL) 05/22/2007 > > > 12:23:58 > > > 00003 atlas3-c61 Undefined > > > > > > > > > > > > Thanks > > > > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > I don't think this has anything to do with PETSc. You might want to > > > > check the docs of MPI library or the batch system you are using. > > > > > > > > Satish > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > Hi, > > > > > > > > > > I tried to compile PETSc and there's no problem. I also used the > > > > > *--with-batch=1 > > > > > *option since my server is using a job scheduler. 
There's no shared > > > > library > > > > > and I'm installing hypre too. > > > > > > > > > > However, when after sending the job for ex1f, I got the error msg: > > > > > > > > > > Can't read MPIRUN_HOST. > > > > > > > > > > So what's the problem? > > > > > > > > > > thanks > > > > > > > > > > > > > > > > > > > > > From zonexo at gmail.com Tue May 22 00:30:42 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 13:30:42 +0800 Subject: Calling hypre In-Reply-To: References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> Message-ID: <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> Sorry I got this error during linking: petsc_sub.o(.text+0x20c): In function `petsc_sub_mp_petsc_solver_pois_': /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined reference to `pcsetype_' Did I miss out something? Thanks On 5/22/07, Satish Balay wrote: > > use: > > call PCSetType(pc,'hypre',ierr) > call PCHYPRESetType(pc,'boomeramg',ierr) > > Satish > > On Tue, 22 May 2007, Ben Tay wrote: > > > Hi, > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type boomeramg to > use > > hypre. > > > > However, I have 2 matrix to solve and now I only want one of them to use > > hypre, while the other use jacobi + bcgs. > > > > for the solver using hypre, > > > > initially it's just: > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > I now changed to > > > > call PCSetType(pc,hypre,ierr) > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > However, I got the error msg: > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have a type, > and > > must have an explicit type. [HYPRE] > > call PCSetType(pc,hypre,ierr) > > > > Sorry I don't really understand the eg. hyppilut.c since my knowledge of > the > > c language is very basic. > > > > So how should I set the options to use hypre? > > > > Thank you. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue May 22 00:31:47 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 13:31:47 +0800 Subject: Can't read MPIRUN_HOST In-Reply-To: References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> <804ab5d40705212206m54dc12fcle945300d1d4971a0@mail.gmail.com> Message-ID: <804ab5d40705212231m46cfabcfta29c28ead67281eb@mail.gmail.com> Oh sorry, forgot that there's many ex1f. It's in src/ksp/ksp/examples/tutorials/ for ksp solvers. Thanks On 5/22/07, Satish Balay wrote: > > which ex1f is this? > > If its src/sys/examples/tests/ex1f.F - then it tests PETSc error codes > [i.e the code generates an error] > > Satish > > On Tue, 22 May 2007, Ben Tay wrote: > > > It's ex1f. Thanks > > > > On 5/22/07, Satish Balay wrote: > > > > > > Is this a PETSc example? Which one is it? > > > > > > Satish > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > ok. I sorted out that problem. 
Now I got this error: > > > > > > > > > > > > *** glibc detected *** double free or corruption (!prev): > > > 0x0000000000509660 > > > > *** > > > > /usr/lsf62/bin/mvapich_wrapper: line 388: 28571 Aborted > > > > (core dumped) $PJL $PJL_OPTS $REMOTE_ENV_VAR $JOB_CMD > > > > Job /usr/lsf62/bin/mvapich_wrapper ./ex1f > > > > > > > > TID HOST_NAME > > > COMMAND_LINE STATUS TERMINATION_TIME > > > > ===== ========== ================ ======================= > > > > =================== > > > > 00001 atlas3-c63 Undefined > > > > 00002 atlas3-c63 Undefined > > > > 00002 atlas3-c61 ./ex1f Signaled (SIGKILL) > 05/22/2007 > > > > 12:23:58 > > > > 00003 atlas3-c61 Undefined > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > > > I don't think this has anything to do with PETSc. You might want > to > > > > > check the docs of MPI library or the batch system you are using. > > > > > > > > > > Satish > > > > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > > > Hi, > > > > > > > > > > > > I tried to compile PETSc and there's no problem. I also used the > > > > > > *--with-batch=1 > > > > > > *option since my server is using a job scheduler. There's no > shared > > > > > library > > > > > > and I'm installing hypre too. > > > > > > > > > > > > However, when after sending the job for ex1f, I got the error > msg: > > > > > > > > > > > > Can't read MPIRUN_HOST. > > > > > > > > > > > > So what's the problem? > > > > > > > > > > > > thanks > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 22 06:15:48 2007 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 May 2007 06:15:48 -0500 Subject: Can't read MPIRUN_HOST In-Reply-To: <804ab5d40705212231m46cfabcfta29c28ead67281eb@mail.gmail.com> References: <804ab5d40705212106r5e0e8f53gd2779579f3b56897@mail.gmail.com> <804ab5d40705212134o51ac0e98n319e66792757d201@mail.gmail.com> <804ab5d40705212206m54dc12fcle945300d1d4971a0@mail.gmail.com> <804ab5d40705212231m46cfabcfta29c28ead67281eb@mail.gmail.com> Message-ID: Does it run without the batch queue? Also, move this to petsc-maint. Matt On 5/22/07, Ben Tay wrote: > Oh sorry, forgot that there's many ex1f. > > It's in src/ksp/ksp/examples/tutorials/ for ksp solvers. > > > Thanks > > > On 5/22/07, Satish Balay wrote: > > which ex1f is this? > > > > If its src/sys/examples/tests/ex1f.F - then it tests PETSc error codes > > [i.e the code generates an error] > > > > Satish > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > It's ex1f. Thanks > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > Is this a PETSc example? Which one is it? > > > > > > > > Satish > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > ok. I sorted out that problem. 
Now I got this error: > > > > > > > > > > > > > > > *** glibc detected *** double free or corruption (!prev): > > > > 0x0000000000509660 > > > > > *** > > > > > /usr/lsf62/bin/mvapich_wrapper: line 388: 28571 Aborted > > > > > (core dumped) $PJL $PJL_OPTS $REMOTE_ENV_VAR $JOB_CMD > > > > > Job /usr/lsf62/bin/mvapich_wrapper ./ex1f > > > > > > > > > > TID HOST_NAME > > > > COMMAND_LINE STATUS > TERMINATION_TIME > > > > > ===== ========== ================ > ======================= > > > > > =================== > > > > > 00001 atlas3-c63 Undefined > > > > > 00002 atlas3-c63 Undefined > > > > > 00002 atlas3-c61 ./ex1f Signaled (SIGKILL) > 05/22/2007 > > > > > 12:23:58 > > > > > 00003 atlas3-c61 Undefined > > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > > > > > I don't think this has anything to do with PETSc. You might want > to > > > > > > check the docs of MPI library or the batch system you are using. > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > I tried to compile PETSc and there's no problem. I also used the > > > > > > > *--with-batch=1 > > > > > > > *option since my server is using a job scheduler. There's no > shared > > > > > > library > > > > > > > and I'm installing hypre too. > > > > > > > > > > > > > > However, when after sending the job for ex1f, I got the error > msg: > > > > > > > > > > > > > > Can't read MPIRUN_HOST. > > > > > > > > > > > > > > So what's the problem? > > > > > > > > > > > > > > thanks > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From knepley at gmail.com Tue May 22 06:19:34 2007 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 May 2007 06:19:34 -0500 Subject: Calling hypre In-Reply-To: <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> Message-ID: YEs, you are missing a 't'. Matt On 5/22/07, Ben Tay wrote: > Sorry I got this error during linking: > > petsc_sub.o(.text+0x20c): In function > `petsc_sub_mp_petsc_solver_pois_': > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: > undefined reference to `pcsetype_' > > Did I miss out something? > > Thanks > > > > On 5/22/07, Satish Balay wrote: > > use: > > > > call PCSetType(pc,'hypre',ierr) > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > Satish > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > Hi, > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type boomeramg to > use > > > hypre. > > > > > > However, I have 2 matrix to solve and now I only want one of them to use > > > hypre, while the other use jacobi + bcgs. > > > > > > for the solver using hypre, > > > > > > initially it's just: > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > I now changed to > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > However, I got the error msg: > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have a type, > and > > > must have an explicit type. [HYPRE] > > > call PCSetType(pc,hypre,ierr) > > > > > > Sorry I don't really understand the eg. 
hyppilut.c since my knowledge of > the > > > c language is very basic. > > > > > > So how should I set the options to use hypre? > > > > > > Thank you. > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Tue May 22 07:49:11 2007 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 May 2007 07:49:11 -0500 (CDT) Subject: Calling hypre In-Reply-To: <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> Message-ID: pcsettype On Tue, 22 May 2007, Ben Tay wrote: > Sorry I got this error during linking: > > petsc_sub.o(.text+0x20c): In function `petsc_sub_mp_petsc_solver_pois_': > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined reference to > `pcsetype_' > > Did I miss out something? > > Thanks > > > On 5/22/07, Satish Balay wrote: > > > > use: > > > > call PCSetType(pc,'hypre',ierr) > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > Satish > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > Hi, > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type boomeramg to > > use > > > hypre. > > > > > > However, I have 2 matrix to solve and now I only want one of them to use > > > hypre, while the other use jacobi + bcgs. > > > > > > for the solver using hypre, > > > > > > initially it's just: > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > I now changed to > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > However, I got the error msg: > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have a type, > > and > > > must have an explicit type. [HYPRE] > > > call PCSetType(pc,hypre,ierr) > > > > > > Sorry I don't really understand the eg. hyppilut.c since my knowledge of > > the > > > c language is very basic. > > > > > > So how should I set the options to use hypre? > > > > > > Thank you. > > > > > > > > From zonexo at gmail.com Tue May 22 08:43:07 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 21:43:07 +0800 Subject: Calling hypre In-Reply-To: References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> Message-ID: <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> Ops.... how careless. Thanks again. On 5/22/07, Barry Smith wrote: > > > pcsettype > > > On Tue, 22 May 2007, Ben Tay wrote: > > > Sorry I got this error during linking: > > > > petsc_sub.o(.text+0x20c): In function `petsc_sub_mp_petsc_solver_pois_': > > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined reference > to > > `pcsetype_' > > > > Did I miss out something? > > > > Thanks > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > use: > > > > > > call PCSetType(pc,'hypre',ierr) > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > > > Satish > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > Hi, > > > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type boomeramg > to > > > use > > > > hypre. > > > > > > > > However, I have 2 matrix to solve and now I only want one of them to > use > > > > hypre, while the other use jacobi + bcgs. 
> > > > > > > > for the solver using hypre, > > > > > > > > initially it's just: > > > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > > > I now changed to > > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > > > However, I got the error msg: > > > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have a > type, > > > and > > > > must have an explicit type. [HYPRE] > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > Sorry I don't really understand the eg. hyppilut.c since my > knowledge of > > > the > > > > c language is very basic. > > > > > > > > So how should I set the options to use hypre? > > > > > > > > Thank you. > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue May 22 09:13:18 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 22:13:18 +0800 Subject: Calling hypre In-Reply-To: <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> Message-ID: <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> Sorry to trouble you ppl again. The 1st step worked but subsequently I got the error msg: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Operation done in wrong order! [0]PETSC ERROR: Cannot set the HYPRE preconditioner type once it has been set! I have used : call PCHYPRESetType(pc,'boomeramg',ierr) call PCSetType(pc,'hypre',ierr) I changed the 2 order but it still can't work. Thanks again On 5/22/07, Ben Tay wrote: > > Ops.... how careless. Thanks again. > > On 5/22/07, Barry Smith wrote: > > > > > > pcsettype > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > Sorry I got this error during linking: > > > > > > petsc_sub.o(.text+0x20c): In function > > `petsc_sub_mp_petsc_solver_pois_': > > > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined reference > > to > > > `pcsetype_' > > > > > > Did I miss out something? > > > > > > Thanks > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > use: > > > > > > > > call PCSetType(pc,'hypre',ierr) > > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > > > > > Satish > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > Hi, > > > > > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type > > boomeramg to > > > > use > > > > > hypre. > > > > > > > > > > However, I have 2 matrix to solve and now I only want one of them > > to use > > > > > hypre, while the other use jacobi + bcgs. > > > > > > > > > > for the solver using hypre, > > > > > > > > > > initially it's just: > > > > > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > > > > > I now changed to > > > > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > > > > > However, I got the error msg: > > > > > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have a > > type, > > > > and > > > > > must have an explicit type. [HYPRE] > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > Sorry I don't really understand the eg. hyppilut.c since my > > knowledge of > > > > the > > > > > c language is very basic. > > > > > > > > > > So how should I set the options to use hypre? 
> > > > > > > > > > Thank you. > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue May 22 09:15:09 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 22 May 2007 09:15:09 -0500 (CDT) Subject: Calling hypre In-Reply-To: <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> Message-ID: you are probably doing this inside a loop. check the code. satish On Tue, 22 May 2007, Ben Tay wrote: > Sorry to trouble you ppl again. > > The 1st step worked but subsequently I got the error msg: > > --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Operation done in wrong order! > [0]PETSC ERROR: Cannot set the HYPRE preconditioner type once it has been > set! > > I have used : > > call PCHYPRESetType(pc,'boomeramg',ierr) > > call PCSetType(pc,'hypre',ierr) > > > I changed the 2 order but it still can't work. > > Thanks again > > > > > > > On 5/22/07, Ben Tay wrote: > > > > Ops.... how careless. Thanks again. > > > > On 5/22/07, Barry Smith wrote: > > > > > > > > > pcsettype > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > Sorry I got this error during linking: > > > > > > > > petsc_sub.o(.text+0x20c): In function > > > `petsc_sub_mp_petsc_solver_pois_': > > > > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined reference > > > to > > > > `pcsetype_' > > > > > > > > Did I miss out something? > > > > > > > > Thanks > > > > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > > > use: > > > > > > > > > > call PCSetType(pc,'hypre',ierr) > > > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > > > > > > > Satish > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > > > Hi, > > > > > > > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type > > > boomeramg to > > > > > use > > > > > > hypre. > > > > > > > > > > > > However, I have 2 matrix to solve and now I only want one of them > > > to use > > > > > > hypre, while the other use jacobi + bcgs. > > > > > > > > > > > > for the solver using hypre, > > > > > > > > > > > > initially it's just: > > > > > > > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > > > > > > > I now changed to > > > > > > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > > > > > > > However, I got the error msg: > > > > > > > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have a > > > type, > > > > > and > > > > > > must have an explicit type. [HYPRE] > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > > > Sorry I don't really understand the eg. hyppilut.c since my > > > knowledge of > > > > > the > > > > > > c language is very basic. > > > > > > > > > > > > So how should I set the options to use hypre? > > > > > > > > > > > > Thank you. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > From zonexo at gmail.com Tue May 22 09:32:46 2007 From: zonexo at gmail.com (Ben Tay) Date: Tue, 22 May 2007 22:32:46 +0800 Subject: Calling hypre In-Reply-To: References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> Message-ID: <804ab5d40705220732o4070ed5gcce96fb34b6e00b@mail.gmail.com> Hi, I'm advancing my solution thru each time step. So after each time step, it'll be called again call PCHYPRESetType(pc,'boomeramg',ierr) call PCSetType(pc,'hypre',ierr) I tried to call "call PCHYPRESetType(pc,'boomeramg',ierr)" once during the entire run and it's ok. Is it supposed to be so? Thanks On 5/22/07, Satish Balay wrote: > > you are probably doing this inside a loop. check the code. > > satish > > On Tue, 22 May 2007, Ben Tay wrote: > > > Sorry to trouble you ppl again. > > > > The 1st step worked but subsequently I got the error msg: > > > > --------------------- Error Message ------------------------------------ > > [0]PETSC ERROR: Operation done in wrong order! > > [0]PETSC ERROR: Cannot set the HYPRE preconditioner type once it has > been > > set! > > > > I have used : > > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > call PCSetType(pc,'hypre',ierr) > > > > > > I changed the 2 order but it still can't work. > > > > Thanks again > > > > > > > > > > > > > > On 5/22/07, Ben Tay wrote: > > > > > > Ops.... how careless. Thanks again. > > > > > > On 5/22/07, Barry Smith wrote: > > > > > > > > > > > > pcsettype > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > Sorry I got this error during linking: > > > > > > > > > > petsc_sub.o(.text+0x20c): In function > > > > `petsc_sub_mp_petsc_solver_pois_': > > > > > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined > reference > > > > to > > > > > `pcsetype_' > > > > > > > > > > Did I miss out something? > > > > > > > > > > Thanks > > > > > > > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > > > > > use: > > > > > > > > > > > > call PCSetType(pc,'hypre',ierr) > > > > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > > > > > > > > > Satish > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type > > > > boomeramg to > > > > > > use > > > > > > > hypre. > > > > > > > > > > > > > > However, I have 2 matrix to solve and now I only want one of > them > > > > to use > > > > > > > hypre, while the other use jacobi + bcgs. > > > > > > > > > > > > > > for the solver using hypre, > > > > > > > > > > > > > > initially it's just: > > > > > > > > > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > > > > > > > > > I now changed to > > > > > > > > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > > > > > > > > > However, I got the error msg: > > > > > > > > > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have > a > > > > type, > > > > > > and > > > > > > > must have an explicit type. [HYPRE] > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > > > > > Sorry I don't really understand the eg. hyppilut.c since my > > > > knowledge of > > > > > > the > > > > > > > c language is very basic. 
> > > > > > > > > > > > > > So how should I set the options to use hypre? > > > > > > > > > > > > > > Thank you. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue May 22 10:54:39 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 22 May 2007 10:54:39 -0500 (CDT) Subject: Calling hypre In-Reply-To: <804ab5d40705220732o4070ed5gcce96fb34b6e00b@mail.gmail.com> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> <804ab5d40705220732o4070ed5gcce96fb34b6e00b@mail.gmail.com> Message-ID: On Tue, 22 May 2007, Ben Tay wrote: > Hi, > > I'm advancing my solution thru each time step. So after each time step, > it'll be called again Unless you are destroying and recreating [ksp/pc] objects in this loop, you don't need to call PCSetType repeatedly.. Satish > > call PCHYPRESetType(pc,'boomeramg',ierr) > > call PCSetType(pc,'hypre',ierr) > > I tried to call "call PCHYPRESetType(pc,'boomeramg',ierr)" once during the > entire run and it's ok. Is it supposed to be so? > > Thanks > > > > > > On 5/22/07, Satish Balay wrote: > > > > you are probably doing this inside a loop. check the code. > > > > satish > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > Sorry to trouble you ppl again. > > > > > > The 1st step worked but subsequently I got the error msg: > > > > > > --------------------- Error Message ------------------------------------ > > > [0]PETSC ERROR: Operation done in wrong order! > > > [0]PETSC ERROR: Cannot set the HYPRE preconditioner type once it has > > been > > > set! > > > > > > I have used : > > > > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > > > call PCSetType(pc,'hypre',ierr) > > > > > > > > > I changed the 2 order but it still can't work. > > > > > > Thanks again > > > > > > > > > > > > > > > > > > > > > On 5/22/07, Ben Tay wrote: > > > > > > > > Ops.... how careless. Thanks again. > > > > > > > > On 5/22/07, Barry Smith wrote: > > > > > > > > > > > > > > > pcsettype > > > > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > > > Sorry I got this error during linking: > > > > > > > > > > > > petsc_sub.o(.text+0x20c): In function > > > > > `petsc_sub_mp_petsc_solver_pois_': > > > > > > /nfs/home/enduser/g0306332/ns2d_c/petsc_sub.F:125: undefined > > reference > > > > > to > > > > > > `pcsetype_' > > > > > > > > > > > > Did I miss out something? > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > On 5/22/07, Satish Balay wrote: > > > > > > > > > > > > > > use: > > > > > > > > > > > > > > call PCSetType(pc,'hypre',ierr) > > > > > > > call PCHYPRESetType(pc,'boomeramg',ierr) > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Tue, 22 May 2007, Ben Tay wrote: > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > presently I am using ./a.out -pc_type hypre -pc_hypre_type > > > > > boomeramg to > > > > > > > use > > > > > > > > hypre. > > > > > > > > > > > > > > > > However, I have 2 matrix to solve and now I only want one of > > them > > > > > to use > > > > > > > > hypre, while the other use jacobi + bcgs. 
> > > > > > > > > > > > > > > > for the solver using hypre, > > > > > > > > > > > > > > > > initially it's just: > > > > > > > > > > > > > > > > call PCSetType(pc,PCLU,ierr) - direct solver > > > > > > > > > > > > > > > > I now changed to > > > > > > > > > > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > > > > > > > call PCHYPRESetType(pc,boomeramg,ierr) > > > > > > > > > > > > > > > > However, I got the error msg: > > > > > > > > > > > > > > > > fortcom: Error: petsc_sub.F, line 125: This name does not have > > a > > > > > type, > > > > > > > and > > > > > > > > must have an explicit type. [HYPRE] > > > > > > > > call PCSetType(pc,hypre,ierr) > > > > > > > > > > > > > > > > Sorry I don't really understand the eg. hyppilut.c since my > > > > > knowledge of > > > > > > > the > > > > > > > > c language is very basic. > > > > > > > > > > > > > > > > So how should I set the options to use hypre? > > > > > > > > > > > > > > > > Thank you. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From devteam at transvalor.com Wed May 23 07:24:56 2007 From: devteam at transvalor.com (devteam) Date: Wed, 23 May 2007 14:24:56 +0200 Subject: Verry time consuming first matrix assembly References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> <804ab5d40705220732o4070ed5gcce96fb34b6e00b@mail.gmail.com> Message-ID: <02d501c79d35$5fcbd590$089621c0@dev> Hello, I have a FEM code that perform with several meshes. We handle interactions/contacts between bodies/meshes by assembling coupling terms between the ? contact ? nodes of the meshes. I have a very large bandwidth : The numbering of the whole problem is done mesh by mesh (my problem is of size 4*N where N is the total number of node and N = N1 + N2 + .. + Nq with Nq the number of nodes of mesh q. Nodes of mesh q are numbered from N1+N2 + .. + N(q-1) + 1 to N1+N2+..+Nq) Typically N # 100000 to 1000000. The matrix is a MPIBAIJ one and the d_nnz and o_nnz info are specified when created. It is filled using MatSetValuesBlockedLocal in mode ADD_VALUES. At each increment of my time step scheme, the connections between mesh nodes may change and I have to rebuild the matrix. It appears that the CPU required for the first matrix assembly is very large (three to four times the CPU for 1 system solve) and depend on the number of meshes : if I have only one mesh of an equivalent size the assembly CPU remain almost zero. So I wonder what is causing the assembly to last so much ? I was thinking that the system solve would have been longer because of my large bandwidth but I don't understand why it is the matrix assembly that last so much. I have investigated using Mat_info but all seems to be correct : the number of malloc during MatSetValue is always zero and I have a ratio non zero used / non zero allocated between 1% and 10% (same ratio than when I have only one mesh). I have tested using a simple SOR preconditionner instead of ILU, wondering if it was the precond assembly that last long because of the bandwidth, but it does not change anything ! Thanks a lot for any remarks or any tip. 
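For reference, the follow-ups later in this thread trace the slow first assembly to incomplete preallocation, found by running with -info rather than by reading MatGetInfo afterwards. A minimal fixed-form Fortran sketch of per-block-row preallocation for an MPIBAIJ matrix follows. Every name, size and count in it is a made-up placeholder (nothing is taken from the code under discussion), and the include paths follow the 2.3.x example programs.

      program prealloc
      implicit none
#include "include/finclude/petsc.h"
#include "include/finclude/petscmat.h"
      Mat            A
      PetscErrorCode ierr
      PetscInt       bs, mblk, i
      parameter      (bs = 4, mblk = 1000)
      PetscInt       d_nnz(mblk), o_nnz(mblk)

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

!     One entry per local block row: how many blocks that row touches
!     in the diagonal (same-process) and off-diagonal (other-process)
!     parts.  The counts must also cover the contact/coupling blocks,
!     otherwise MatSetValuesBlockedLocal mallocs during the first
!     assembly.
      do 10 i = 1, mblk
         d_nnz(i) = 27
         o_nnz(i) = 8
 10   continue

      call MatCreateMPIBAIJ(PETSC_COMM_WORLD,bs,bs*mblk,bs*mblk,
     &     PETSC_DETERMINE,PETSC_DETERMINE,0,d_nnz,0,o_nnz,A,ierr)

!     ... fill with MatSetValuesBlockedLocal(), then call
!     MatAssemblyBegin/MatAssemblyEnd.  Run with -info and check that
!     the reported number of mallocs during MatSetValues is zero.

      call MatDestroy(A,ierr)
      call PetscFinalize(ierr)
      end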
Best regards, Etienne Perchat From knepley at gmail.com Wed May 23 08:22:19 2007 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 23 May 2007 08:22:19 -0500 Subject: Verry time consuming first matrix assembly In-Reply-To: <02d501c79d35$5fcbd590$089621c0@dev> References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> <804ab5d40705220732o4070ed5gcce96fb34b6e00b@mail.gmail.com> <02d501c79d35$5fcbd590$089621c0@dev> Message-ID: Please send the results of -info -log_summary. Thanks, Matt On 5/23/07, devteam wrote: > Hello, > > > > I have a FEM code that perform with several meshes. We handle > interactions/contacts between bodies/meshes by assembling coupling terms > between the ? contact ? nodes of the meshes. > > > > I have a very large bandwidth : The numbering of the whole problem is done > mesh by mesh (my problem is of size 4*N where N is the total number of node > and N = N1 + N2 + .. + Nq with Nq the number of nodes of mesh q. Nodes of > mesh q are numbered from N1+N2 + .. + N(q-1) + 1 to N1+N2+..+Nq) > > > > Typically N # 100000 to 1000000. > > > > > > The matrix is a MPIBAIJ one and the d_nnz and o_nnz info are specified when > created. > > It is filled using MatSetValuesBlockedLocal in mode ADD_VALUES. > > > > At each increment of my time step scheme, the connections between mesh nodes > may change and I have to rebuild the matrix. > > It appears that the CPU required for the first matrix assembly is very large > (three to four times the CPU for 1 system solve) and depend on the number of > meshes : if I have only one mesh of an equivalent size the assembly CPU > remain almost zero. > > > > So I wonder what is causing the assembly to last so much ? I was thinking > that the system solve would have been longer because of my large bandwidth > but I don't understand why it is the matrix assembly that last so much. > > > > I have investigated using Mat_info but all seems to be correct : the number > of malloc during MatSetValue is always zero and I have a ratio non zero used > / non zero allocated between 1% and 10% (same ratio than when I have only > one mesh). > > I have tested using a simple SOR preconditionner instead of ILU, wondering > if it was the precond assembly that last long because of the bandwidth, but > it does not change anything ! > > > > Thanks a lot for any remarks or any tip. > > > > Best regards, > > Etienne Perchat > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From junwang at uwm.edu Wed May 23 10:52:51 2007 From: junwang at uwm.edu (junwang at uwm.edu) Date: Wed, 23 May 2007 10:52:51 -0500 Subject: Question about two Factorization functions Message-ID: <1179935571.465463532fd5e@panthermail.uwm.edu> Hi, I am trying to know the difference bewteen the following two functions: MatCholeskyFactorNumeric_SeqSBAIJ_2(Mat A,MatFactorInfo *info,Mat *B); MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering(Mat A,MatFactorInfo *info,Mat *B); Suppose I have a matrix "mat" stored in the SBAIJ format, the block size is 2. 
When i am running the following code, it seems it calling the second NatrualOrdering function: ierr = MatCreateSeqSBAIJ(PETSC_COMM_SELF,2,6,6,0,nnz,&mat);CHKERRQ(ierr); IS perm,colp; MatGetOrdering(mat,MATORDERING_NATURAL,&perm,&colp); ierr = ISDestroy(colp);CHKERRQ(ierr); MatFactorInfo info; info.fill=1.0; Mat result; ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result); CHKERRQ(ierr); ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); However, since the first function is not naturalordering, does that mean i don't have to call the MatGetOrdering before I call MatCholeskyFactor....(). If so, does that mean, it will reduce the time cost since it doesn't have to do the ordering part. Is anyone can give any idea and give me a example for calling the first function to do the factorization. Thank you very mcuh! Jun From devteam at transvalor.com Wed May 23 11:21:22 2007 From: devteam at transvalor.com (devteam) Date: Wed, 23 May 2007 18:21:22 +0200 Subject: Verry time consuming first matrix assembly References: <804ab5d40705212153v4394f4dasc00a73cfaa1fbc78@mail.gmail.com> <804ab5d40705212230qc9bb65fi8d78e4823e4a63d5@mail.gmail.com> <804ab5d40705220643k2d948209pd874ebd9440da8d0@mail.gmail.com> <804ab5d40705220713u6f14f08r1e9b835958a3d30f@mail.gmail.com> <804ab5d40705220732o4070ed5gcce96fb34b6e00b@mail.gmail.com> <02d501c79d35$5fcbd590$089621c0@dev> Message-ID: <036401c79d56$68a48830$089621c0@dev> Hi all, Hi Mat, I have to apologize. I was using the info displayed by Mat_GetInfo function. If instead I use -log_info or -info then I see that my matrix is not well preallocated. I think that I have to debug my stuff and stop bothering you Sorry for that, regards, Etienne Perchat ----- Original Message ----- From: "Matthew Knepley" To: Sent: Wednesday, May 23, 2007 3:22 PM Subject: Re: Verry time consuming first matrix assembly > Please send the results of -info -log_summary. > > Thanks, > > Matt > > On 5/23/07, devteam wrote: >> Hello, >> >> >> >> I have a FEM code that perform with several meshes. We handle >> interactions/contacts between bodies/meshes by assembling coupling terms >> between the ? contact ? nodes of the meshes. >> >> >> >> I have a very large bandwidth : The numbering of the whole problem is >> done >> mesh by mesh (my problem is of size 4*N where N is the total number of >> node >> and N = N1 + N2 + .. + Nq with Nq the number of nodes of mesh q. Nodes of >> mesh q are numbered from N1+N2 + .. + N(q-1) + 1 to N1+N2+..+Nq) >> >> >> >> Typically N # 100000 to 1000000. >> >> >> >> >> >> The matrix is a MPIBAIJ one and the d_nnz and o_nnz info are specified >> when >> created. >> >> It is filled using MatSetValuesBlockedLocal in mode ADD_VALUES. >> >> >> >> At each increment of my time step scheme, the connections between mesh >> nodes >> may change and I have to rebuild the matrix. >> >> It appears that the CPU required for the first matrix assembly is very >> large >> (three to four times the CPU for 1 system solve) and depend on the number >> of >> meshes : if I have only one mesh of an equivalent size the assembly CPU >> remain almost zero. >> >> >> >> So I wonder what is causing the assembly to last so much ? I was thinking >> that the system solve would have been longer because of my large >> bandwidth >> but I don't understand why it is the matrix assembly that last so much. 
>> >> >> >> I have investigated using Mat_info but all seems to be correct : the >> number >> of malloc during MatSetValue is always zero and I have a ratio non zero >> used >> / non zero allocated between 1% and 10% (same ratio than when I have only >> one mesh). >> >> I have tested using a simple SOR preconditionner instead of ILU, >> wondering >> if it was the precond assembly that last long because of the bandwidth, >> but >> it does not change anything ! >> >> >> >> Thanks a lot for any remarks or any tip. >> >> >> >> Best regards, >> >> Etienne Perchat >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > > > From hzhang at mcs.anl.gov Wed May 23 13:58:18 2007 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 23 May 2007 13:58:18 -0500 (CDT) Subject: Question about two Factorization functions In-Reply-To: <1179935571.465463532fd5e@panthermail.uwm.edu> References: <1179935571.465463532fd5e@panthermail.uwm.edu> Message-ID: Jun, > I am trying to know the difference bewteen the following two functions: > MatCholeskyFactorNumeric_SeqSBAIJ_2(Mat A,MatFactorInfo *info,Mat *B); > MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering(Mat A,MatFactorInfo > *info,Mat *B); > > Suppose I have a matrix "mat" stored in the SBAIJ format, the block size is > 2. When i am running the following code, it seems it calling the second > NatrualOrdering function: > > ierr = MatCreateSeqSBAIJ(PETSC_COMM_SELF,2,6,6,0,nnz,&mat);CHKERRQ(ierr); > > IS perm,colp; > MatGetOrdering(mat,MATORDERING_NATURAL,&perm,&colp); > ierr = ISDestroy(colp);CHKERRQ(ierr); > MatFactorInfo info; > info.fill=1.0; > Mat result; > ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result); CHKERRQ(ierr); > ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); > > However, since the first function is not naturalordering, does that mean i > don't have to call the MatGetOrdering before I call MatCholeskyFactor....(). > If so, does that mean, it will reduce the time cost since it doesn't have to do > the ordering part. You must provide 'perm' when calling MatCholeskyFactorSymbolic(). MatGetOrdering() takes ignorable time. You can run your code with option '-log_summary |grep MatGetOrdering' and see how much time spent on it. > Is anyone can give any idea and give me a example for calling the first > function to do the factorization. Thank you very mcuh! You may see src/mat/examples/tests/ex74.c as an example (messy!). Hong > > From junwang at uwm.edu Wed May 23 14:49:34 2007 From: junwang at uwm.edu (junwang at uwm.edu) Date: Wed, 23 May 2007 14:49:34 -0500 Subject: Question about two Factorization functions In-Reply-To: References: <1179935571.465463532fd5e@panthermail.uwm.edu> Message-ID: <1179949774.46549ace189b4@panthermail.uwm.edu> Hi, Hong: Thanks again for your reply! :) I have read this ex74 before. 
I am curious, when i call: ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); which function will be called: MatCholeskyFactorNumeric_SeqSBAIJ_2(Mat A,MatFactorInfo *info,Mat *B); or MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering(Mat A,MatFactorInfo *info,Mat *B); For the following codes: ierr = MatCreateSeqSBAIJ(PETSC_COMM_SELF,2,6,6,0,nnz,&mat);CHKERRQ(ierr); IS perm,colp; MatGetOrdering(mat,MATORDERING_NATURAL,&perm,&colp); ierr = ISDestroy(colp);CHKERRQ(ierr); MatFactorInfo info; info.fill=1.0; Mat result; ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result);CHKERRQ(ierr); ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); I know when i call MatCholeskyFactorNumeric , MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering() will be called. So when is the first one will be called? This is not showed in ex74. Thank you! Jun Quoting Hong Zhang : > > Jun, > > I am trying to know the difference bewteen the following two functions: > > MatCholeskyFactorNumeric_SeqSBAIJ_2(Mat A,MatFactorInfo *info,Mat *B); > > MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering(Mat A,MatFactorInfo > > *info,Mat *B); > > > > Suppose I have a matrix "mat" stored in the SBAIJ format, the block > size is > > 2. When i am running the following code, it seems it calling the second > > NatrualOrdering function: > > > > ierr = > MatCreateSeqSBAIJ(PETSC_COMM_SELF,2,6,6,0,nnz,&mat);CHKERRQ(ierr); > > > > IS perm,colp; > > MatGetOrdering(mat,MATORDERING_NATURAL,&perm,&colp); > > ierr = ISDestroy(colp);CHKERRQ(ierr); > > MatFactorInfo info; > > info.fill=1.0; > > Mat result; > > ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result); > CHKERRQ(ierr); > > ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); > > > > However, since the first function is not naturalordering, does that > mean i > > don't have to call the MatGetOrdering before I call > MatCholeskyFactor....(). > > If so, does that mean, it will reduce the time cost since it doesn't have > to do > > the ordering part. > > You must provide 'perm' when calling MatCholeskyFactorSymbolic(). > MatGetOrdering() takes ignorable time. > You can run your code with option '-log_summary |grep MatGetOrdering' and > see how much time spent on it. > > > Is anyone can give any idea and give me a example for calling the first > > function to do the factorization. Thank you very mcuh! > > You may see src/mat/examples/tests/ex74.c as an example (messy!). > > Hong > > > > > > From hzhang at mcs.anl.gov Wed May 23 21:12:02 2007 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 23 May 2007 21:12:02 -0500 (CDT) Subject: Question about two Factorization functions In-Reply-To: <1179949774.46549ace189b4@panthermail.uwm.edu> References: <1179935571.465463532fd5e@panthermail.uwm.edu> <1179949774.46549ace189b4@panthermail.uwm.edu> Message-ID: On Wed, 23 May 2007 junwang at uwm.edu wrote: > Hi, Hong: > Thanks again for your reply! :) > I have read this ex74 before. I am curious, when i call: > ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); > which function will be called: > MatCholeskyFactorNumeric_SeqSBAIJ_2(Mat A,MatFactorInfo *info,Mat *B); > or > MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering(Mat A,MatFactorInfo > *info,Mat *B); MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering() is called, because MatGetOrdering(A,MATORDERING_NATURAL,&perm,&iscol) is called before MatCholeskyFactorNumeric() in ex74. You can use a debugger to follow the steps of the calling procedure. 
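Concretely, the non-natural-ordering kernel should only come into play when the permutation passed to the symbolic factorization is not the identity, for example an RCM ordering. A minimal sketch follows (C, same 2.3.x calling sequence as the code quoted below; MATORDERING_RCM is just an illustrative choice, and whether a non-natural ordering is actually accepted for SBAIJ with block size 2 in this release is worth confirming with a quick run or in the debugger):

  /* identical to the quoted code except for the ordering type (sketch only) */
  IS            perm, colp;
  MatFactorInfo info;
  Mat           result;

  ierr = MatGetOrdering(mat,MATORDERING_RCM,&perm,&colp);CHKERRQ(ierr);
  ierr = ISDestroy(colp);CHKERRQ(ierr);
  info.fill = 1.0;
  ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result);CHKERRQ(ierr);
  ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr);
  /* breakpoints in MatCholeskyFactorNumeric_SeqSBAIJ_2 and
     MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering show which one runs */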
> For the following codes: > ierr = MatCreateSeqSBAIJ(PETSC_COMM_SELF,2,6,6,0,nnz,&mat);CHKERRQ(ierr); > IS perm,colp; > MatGetOrdering(mat,MATORDERING_NATURAL,&perm,&colp); > ierr = ISDestroy(colp);CHKERRQ(ierr); > MatFactorInfo info; > info.fill=1.0; > Mat result; > ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result);CHKERRQ(ierr); > ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); > > I know when i call MatCholeskyFactorNumeric , > MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering() will be called. Yes. > So when is the first one will be called? This is not showed in ex74. Sorry, ^^^^^^^^^^^^^? I do not understand what do you mean "the first one"? Again, a debugger can show exactly which functions are being called. Hong > > > Jun > > > Quoting Hong Zhang : > > > > > Jun, > > > I am trying to know the difference bewteen the following two functions: > > > MatCholeskyFactorNumeric_SeqSBAIJ_2(Mat A,MatFactorInfo *info,Mat *B); > > > MatCholeskyFactorNumeric_SeqSBAIJ_2_NaturalOrdering(Mat A,MatFactorInfo > > > *info,Mat *B); > > > > > > Suppose I have a matrix "mat" stored in the SBAIJ format, the block > > size is > > > 2. When i am running the following code, it seems it calling the second > > > NatrualOrdering function: > > > > > > ierr = > > MatCreateSeqSBAIJ(PETSC_COMM_SELF,2,6,6,0,nnz,&mat);CHKERRQ(ierr); > > > > > > IS perm,colp; > > > MatGetOrdering(mat,MATORDERING_NATURAL,&perm,&colp); > > > ierr = ISDestroy(colp);CHKERRQ(ierr); > > > MatFactorInfo info; > > > info.fill=1.0; > > > Mat result; > > > ierr = MatCholeskyFactorSymbolic(mat,perm,&info,&result); > > CHKERRQ(ierr); > > > ierr = MatCholeskyFactorNumeric(mat,&info,&result);CHKERRQ(ierr); > > > > > > However, since the first function is not naturalordering, does that > > mean i > > > don't have to call the MatGetOrdering before I call > > MatCholeskyFactor....(). > > > If so, does that mean, it will reduce the time cost since it doesn't have > > to do > > > the ordering part. > > > > You must provide 'perm' when calling MatCholeskyFactorSymbolic(). > > MatGetOrdering() takes ignorable time. > > You can run your code with option '-log_summary |grep MatGetOrdering' and > > see how much time spent on it. > > > > > Is anyone can give any idea and give me a example for calling the first > > > function to do the factorization. Thank you very mcuh! > > > > You may see src/mat/examples/tests/ex74.c as an example (messy!). > > > > Hong > > > > > > > > > > > > > > From bsmith at mcs.anl.gov Thu May 24 17:20:57 2007 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 24 May 2007 17:20:57 -0500 (CDT) Subject: New PETSc release 2.3.3 Message-ID: ---------------------------------------------------------------------------- Portable, Extensible Toolkit for Scientific Computation (PETSc) ---------------------------------------------------------------------------- http://www.mcs.anl.gov/petsc We are pleased to announce the release of the PETSc 2.3.3 parallel software libraries for the implicit solution of PDEs and related problems. There are a couple of MAJOR interface changes, listed at http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/changes/233.html please check these changes before compiling your codes. In particular the calling sequence for VecScatterBegin/End() now takes the scatter context as the first argument not the last. 
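In code the change is just the position of the scatter context, roughly as in this sketch (variable names are illustrative):

  VecScatter     ctx;    /* created earlier with VecScatterCreate() */
  Vec            x, y;
  PetscErrorCode ierr;

  /* up to petsc-2.3.2: context was the last argument */
  ierr = VecScatterBegin(x,y,INSERT_VALUES,SCATTER_FORWARD,ctx);CHKERRQ(ierr);
  ierr = VecScatterEnd(x,y,INSERT_VALUES,SCATTER_FORWARD,ctx);CHKERRQ(ierr);

  /* with petsc-2.3.3: context is now the first argument */
  ierr = VecScatterBegin(ctx,x,y,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx,x,y,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);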
You can download from http://www-unix.mcs.anl.gov/petsc/petsc-as/download/index.html and and the installataion instructions are at http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/installation.html As always, please send bug reports, questions, and requests for new features to petsc-maint at mcs.anl.gov Thanks for your continued support. The PETSc developers, Satish, Lisandro, Matt, Victor, Barry and Hong From yaron at oak-research.com Thu May 24 18:13:47 2007 From: yaron at oak-research.com (yaron at oak-research.com) Date: Thu, 24 May 2007 15:13:47 -0800 Subject: New PETSc release 2.3.3 Message-ID: <20070524231347.16945.qmail@s402.sureserver.com> Barry- When will the manual (http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manual.pdf) be updated? Thanks Yaron -------Original Message------- From: Barry Smith Subject: New PETSc release 2.3.3 Sent: 24 May '07 14:20 ---------------------------------------------------------------------------- Portable, Extensible Toolkit for Scientific Computation (PETSc) ---------------------------------------------------------------------------- [LINK: http://www.mcs.anl.gov/petsc] http://www.mcs.anl.gov/petsc We are pleased to announce the release of the PETSc 2.3.3 parallel software libraries for the implicit solution of PDEs and related problems. There are a couple of MAJOR interface changes, listed at [LINK: http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/changes/233.html] http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/changes/233.html please check these changes before compiling your codes. In particular the calling sequence for VecScatterBegin/End() now takes the scatter context as the first argument not the last. You can download from [LINK: http://www-unix.mcs.anl.gov/petsc/petsc-as/download/index.html] http://www-unix.mcs.anl.gov/petsc/petsc-as/download/index.html and and the installataion instructions are at [LINK: http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/installation.html] http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/installation.html As always, please send bug reports, questions, and requests for new features to [LINK: http://webmail.oak-research.com/compose.php?to=petsc-maint at mcs.anl.gov] petsc-maint at mcs.anl.gov Thanks for your continued support. The PETSc developers, Satish, Lisandro, Matt, Victor, Barry and Hong -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From balay at mcs.anl.gov Thu May 24 18:33:24 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 24 May 2007 18:33:24 -0500 (CDT) Subject: New PETSc release 2.3.3 In-Reply-To: <20070524231347.16945.qmail@s402.sureserver.com> References: <20070524231347.16945.qmail@s402.sureserver.com> Message-ID: petsc-current link is now updated. thanks, Satish On Thu, 24 May 2007, yaron at oak-research.com wrote: > Barry- > When will the manual > (http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manual.pdf) > be updated? > > Thanks > Yaron From petsc-maint at mcs.anl.gov Fri May 25 16:08:12 2007 From: petsc-maint at mcs.anl.gov (Barry Smith) Date: Fri, 25 May 2007 16:08:12 -0500 (CDT) Subject: [PETSC #16200] Petsc Performance on Dual- and Quad-core systems In-Reply-To: <91FBE7AEE91B454282AE05FE79413A4A0BE63591@trexch.prog.altair.com> References: <91FBE7AEE91B454282AE05FE79413A4A0BE63591@trexch.prog.altair.com> Message-ID: Carlos, We don't have any particular numbers for these systems. There are two main things to keep in mind. 1) Ideally the MPI you use will take advantage of the local shared memory within a node to lower the communication time. MPICH for example can be compiled with certain options to help this. 2) The memory bandwidth is often shared among several of the cores. Since sparse matrices computations are almost totally bounded by memory bandwidth the most important thing to consider when buying a system like this is how much totally memory bandwidth does it have and how much is really usable for each core. Ideally you'd like to see a 6++ gigabytes per second peak memory bandwith per core. Barry On Wed, 23 May 2007, Carlos Erik Baumann wrote: > > > Hello Everyone, > > > > Do you have any performance number on Petsc solving typical heat > transfer / laplace / poisson problems using dual and/or quad-core > workstations ? > > > > I am interested in speedup based on problem size, etc. > > > > Looking forward to your reply. > > > > Best, > > > > --Carlos > > > > Carlos Baumann Altair Engineering, Inc. > > Cell 512-657-4348, Ph. 
512-467-0618(x512) > > > > > > > > From balay at mcs.anl.gov Fri May 25 16:39:57 2007 From: balay at mcs.anl.gov (balay at mcs.anl.gov) Date: Fri, 25 May 2007 16:39:57 -0500 (CDT) Subject: [PETSC #16200] Petsc Performance on Dual- and Quad-core systems In-Reply-To: References: <91FBE7AEE91B454282AE05FE79413A4A0BE63591@trexch.prog.altair.com> Message-ID: I don't know what the current naming convention is wrt cores is. Its all messed up. I'll use the following definitions here. 1. CPU=core=processor 2. Dual-core is a short form for 'dual cores per chip. 3. chip: a minimal piece of cpus one can buy.. [the current marketing might call this CPU - but that donesn't make sense to me] Anyway back to the topic on hand, one way to look at this issue is: as long at the memory bandwidth scales up with number of processors you'll see scalable performance. However the current machines, the bandwidth doesn't scale with multiple cores[per chip]. However it might scale with number of chips. This is true with both AMD and Intel chips. For eg: - if you have a 2x2 [2 dual core chips] machine - you might see the best performance for 'mpirun -np 2'. Satish On Fri, 25 May 2007, Barry Smith wrote: > > Carlos, > > We don't have any particular numbers for these systems. There are > two main things to keep in mind. > > 1) Ideally the MPI you use will take advantage of the local shared memory > within a node to lower the communication time. MPICH for example can be > compiled with certain options to help this. > 2) The memory bandwidth is often shared among several of the cores. Since > sparse matrices computations are almost totally bounded by memory bandwidth > the most important thing to consider when buying a system like this is how > much totally memory bandwidth does it have and how much is really usable > for each core. Ideally you'd like to see a 6++ gigabytes per second peak > memory bandwith per core. > > Barry > > > On Wed, 23 May 2007, Carlos Erik Baumann wrote: > > > > > > > Hello Everyone, > > > > > > > > Do you have any performance number on Petsc solving typical heat > > transfer / laplace / poisson problems using dual and/or quad-core > > workstations ? > > > > > > > > I am interested in speedup based on problem size, etc. > > > > > > > > Looking forward to your reply. > > > > > > > > Best, > > > > > > > > --Carlos > > > > > > > > Carlos Baumann Altair Engineering, Inc. > > > > Cell 512-657-4348, Ph. 512-467-0618(x512) > > > > > > > > > > > > > > > > > > From yxliu at fudan.edu.cn Sat May 26 04:35:55 2007 From: yxliu at fudan.edu.cn (Yixun Liu) Date: Sat, 26 May 2007 17:35:55 +0800 Subject: about mkl_p4p.dll not found Message-ID: <005501c79f79$43b60510$8864a8c0@dmrc6700512> Hi, My program is based on PETSC. Its compiler and link is ok, but when execute a error was generated. The error is: "MKL FATAL ERROR: Cannot load neither mkl_p4p.dll nor mkl_def.dllPress any key to continue" I can find the mkl_p4p.dll in "D:\MyVC\petsc-2.3.2-p3\externalpackages\w_mkl_serial_p_8.1.001\mkl_serial_8.1\mkl_8.1_serial\ia32\bin". Do I need to set the environment variable? Hope your help. Best, Yixun -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Sat May 26 10:58:00 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 26 May 2007 10:58:00 -0500 (CDT) Subject: about mkl_p4p.dll not found In-Reply-To: <005501c79f79$43b60510$8864a8c0@dmrc6700512> References: <005501c79f79$43b60510$8864a8c0@dmrc6700512> Message-ID: MLK is installed in \petsc-2.3.2-p3\externalpackages? Sounds like a nonstandard install of MLK. Yes - on windows the location of all .dll files that are used should be set in PATH env variable. The MLK installer should probably do this automatically. Satish On Sat, 26 May 2007, Yixun Liu wrote: > Hi, > > My program is based on PETSC. Its compiler and link is ok, but when execute a error was generated. The error is: > > "MKL FATAL ERROR: Cannot load neither mkl_p4p.dll nor mkl_def.dllPress any key to continue" > > I can find the mkl_p4p.dll in "D:\MyVC\petsc-2.3.2-p3\externalpackages\w_mkl_serial_p_8.1.001\mkl_serial_8.1\mkl_8.1_serial\ia32\bin". Do I need to set the environment variable? > > Hope your help. > > Best, > > Yixun > From balay at mcs.anl.gov Sat May 26 10:59:43 2007 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 26 May 2007 10:59:43 -0500 (CDT) Subject: about mkl_p4p.dll not found In-Reply-To: References: <005501c79f79$43b60510$8864a8c0@dmrc6700512> Message-ID: BTW: we have a new relaese petsc-2.3.3 - which you might want to use. Satish On Sat, 26 May 2007, Satish Balay wrote: > MLK is installed in \petsc-2.3.2-p3\externalpackages? Sounds like a > nonstandard install of MLK. > > Yes - on windows the location of all .dll files that are used should > be set in PATH env variable. The MLK installer should probably do this > automatically. > > Satish > > On Sat, 26 May 2007, Yixun Liu wrote: > > > Hi, > > > > My program is based on PETSC. Its compiler and link is ok, but when execute a error was generated. The error is: > > > > "MKL FATAL ERROR: Cannot load neither mkl_p4p.dll nor mkl_def.dllPress any key to continue" > > > > I can find the mkl_p4p.dll in "D:\MyVC\petsc-2.3.2-p3\externalpackages\w_mkl_serial_p_8.1.001\mkl_serial_8.1\mkl_8.1_serial\ia32\bin". Do I need to set the environment variable? > > > > Hope your help. > > > > Best, > > > > Yixun > > > > From carlosb at altair.com Sun May 27 22:48:42 2007 From: carlosb at altair.com (Carlos Erik Baumann) Date: Sun, 27 May 2007 23:48:42 -0400 Subject: [PETSC #16200] Petsc Performance on Dual- and Quad-core systems In-Reply-To: References: <91FBE7AEE91B454282AE05FE79413A4A0BE63591@trexch.prog.altair.com> Message-ID: <91FBE7AEE91B454282AE05FE79413A4A0BF2F4B5@trexch.prog.altair.com> Barry and Satish, Thank you very much for your comments on performance. Regards, --Carlos Carlos Baumann Altair Engineering, Inc. Cell 512-657-4348, Ph. 512-467-0618(x512) -----Original Message----- From: balay at mcs.anl.gov [mailto:balay at mcs.anl.gov] Sent: Friday, May 25, 2007 4:40 PM To: petsc-users at mcs.anl.gov Cc: Carlos Erik Baumann Subject: Re: [PETSC #16200] Petsc Performance on Dual- and Quad-core systems I don't know what the current naming convention is wrt cores is. Its all messed up. I'll use the following definitions here. 1. CPU=core=processor 2. Dual-core is a short form for 'dual cores per chip. 3. chip: a minimal piece of cpus one can buy.. [the current marketing might call this CPU - but that donesn't make sense to me] Anyway back to the topic on hand, one way to look at this issue is: as long at the memory bandwidth scales up with number of processors you'll see scalable performance. 
However the current machines, the bandwidth doesn't scale with multiple cores[per chip]. However it might scale with number of chips. This is true with both AMD and Intel chips. For eg: - if you have a 2x2 [2 dual core chips] machine - you might see the best performance for 'mpirun -np 2'. Satish On Fri, 25 May 2007, Barry Smith wrote: > > Carlos, > > We don't have any particular numbers for these systems. There are > two main things to keep in mind. > > 1) Ideally the MPI you use will take advantage of the local shared memory > within a node to lower the communication time. MPICH for example can be > compiled with certain options to help this. > 2) The memory bandwidth is often shared among several of the cores. Since > sparse matrices computations are almost totally bounded by memory bandwidth > the most important thing to consider when buying a system like this is how > much totally memory bandwidth does it have and how much is really usable > for each core. Ideally you'd like to see a 6++ gigabytes per second peak > memory bandwith per core. > > Barry > > > On Wed, 23 May 2007, Carlos Erik Baumann wrote: > > > > > > > Hello Everyone, > > > > > > > > Do you have any performance number on Petsc solving typical heat > > transfer / laplace / poisson problems using dual and/or quad-core > > workstations ? > > > > > > > > I am interested in speedup based on problem size, etc. > > > > > > > > Looking forward to your reply. > > > > > > > > Best, > > > > > > > > --Carlos > > > > > > > > Carlos Baumann Altair Engineering, Inc. > > > > Cell 512-657-4348, Ph. 512-467-0618(x512) > > > > > > > > > > > > > > > > > > From ondrej at certik.cz Mon May 28 07:07:22 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Mon, 28 May 2007 14:07:22 +0200 Subject: python-petsc in Debian Message-ID: <85b5c3130705280507u45d6f439m1eef15da65def355@mail.gmail.com> Hi, I packaged the python bindings to petsc: http://code.google.com/p/petsc4py/ and it is already in Debian unstable now: http://packages.debian.org/unstable/python/python-petsc It should get into Debian testing in around 10 days and into Ubuntu Gutsy in a few days automatically. Ondrej From knepley at gmail.com Mon May 28 07:15:17 2007 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 May 2007 07:15:17 -0500 Subject: python-petsc in Debian In-Reply-To: <85b5c3130705280507u45d6f439m1eef15da65def355@mail.gmail.com> References: <85b5c3130705280507u45d6f439m1eef15da65def355@mail.gmail.com> Message-ID: Excellent! Can't wait to get them as a package. Matt On 5/28/07, Ondrej Certik wrote: > Hi, > > I packaged the python bindings to petsc: > > http://code.google.com/p/petsc4py/ > > and it is already in Debian unstable now: > > http://packages.debian.org/unstable/python/python-petsc > > It should get into Debian testing in around 10 days and into Ubuntu > Gutsy in a few days automatically. > > Ondrej > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener