From Debao.Shao at brion.com Tue Nov 1 04:31:00 2011 From: Debao.Shao at brion.com (Debao Shao) Date: Tue, 1 Nov 2011 02:31:00 -0700 Subject: [petsc-users] no profile printed by "PetscLogPrintSummary" Message-ID: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03>

DA,

I intend to use "PetscLogPrintSummary" to dump the log summary, but I don't get any output. The usage is:

PetscInitialize(0, 0, 0, 0);
PetscLogAllBegin();
...
PetscLogPrintSummary(MPI_COMM_SELF, PETSC_NULL);
PetscFinalize();

My libpetsc.a is built by:

1. ./config/configure.py --with-mpi=0 --with-debugging=1 -with-log=1 -with-info=1 --with-x=0
2. make all

Thanks,
Debao

From knepley at gmail.com Tue Nov 1 06:02:48 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 1 Nov 2011 11:02:48 +0000 Subject: [petsc-users] no profile printed by "PetscLogPrintSummary" In-Reply-To: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> Message-ID:

On Tue, Nov 1, 2011 at 9:31 AM, Debao Shao wrote:
> I intend to use "PetscLogPrintSummary" to dump the log summary, but I don't get any output. The usage is:
> PetscInitialize(0, 0, 0, 0);
> PetscLogAllBegin();
> ...
> PetscLogPrintSummary(MPI_COMM_SELF, PETSC_NULL);
> PetscFinalize();

You only need PetscLogBegin(), and you should use PETSC_COMM_WORLD.

   Matt

> My libpetsc.a is built by:
> 1. ./config/configure.py --with-mpi=0 --with-debugging=1 -with-log=1 -with-info=1 --with-x=0
> 2. make all
> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Tue Nov 1 09:10:52 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Tue, 01 Nov 2011 16:10:52 +0200 Subject: [petsc-users] Preconditioning in SNES Message-ID: <4EAFFDEC.3040304@lycos.com> Dear all, I use SNES to solve an implicit time marching formulation for the NS equations of gas dynamics using the discontinuous Galerkin discretization. I use a matrix-free formulation with a preconditioner coreesponding to the Jacobian of the system, but during the Newton solution just from the 1st step I get the following: 0 KSP preconditioned resid norm 9.356767598746e+00 true resid norm 9.356767598746e+00 ||Ae||/||Ax|| 1.000000000000e+00 1 KSP preconditioned resid norm 4.281908138930e-15 true resid norm 3.150842395998e-15 ||Ae||/||Ax|| 3.367447532223e-16 Linear solve converged due to CONVERGED_RTOL iterations 1 Timestep 0: dt = 0.002, T = 0, Res[rho] = 1.22465e-16, Res[rhou] = 0.00215905, Res[rhov] = 1.61374e-08, Res[E] = 2.78002e-07, CFL = 399.998 /*********************Stage 1 of SSPIRK (6,4)******************/ 0 SNES Function norm 1.155072464398e-02 0 KSP preconditioned resid norm 1.155072464398e-02 true resid norm 1.155072464398e-02 ||Ae||/||Ax|| 1.000000000000e+00 1 KSP preconditioned resid norm 1.155070298335e-02 true resid norm 1.155070298335e-02 ||Ae||/||Ax|| 9.999981247380e-01 2 KSP preconditioned resid norm 1.152933291306e-02 true resid norm 1.152933291544e-02 ||Ae||/||Ax|| 9.981480184831e-01 3 KSP preconditioned resid norm 9.177957248260e-03 true resid norm 9.177957556426e-03 ||Ae||/||Ax|| 7.945785082156e-01 4 KSP preconditioned resid norm 3.124174018085e-03 true resid norm 3.124174393942e-03 ||Ae||/||Ax|| 2.704743200305e-01 5 KSP preconditioned resid norm 6.978400843850e-04 true resid norm 6.978392323251e-04 ||Ae||/||Ax|| 6.041519072039e-02 6 KSP preconditioned resid norm 1.634324558019e-04 true resid norm 1.634328735266e-04 ||Ae||/||Ax|| 1.414914462632e-02 7 KSP preconditioned resid norm 9.422855713525e-05 true resid norm 9.422715588996e-05 ||Ae||/||Ax|| 8.157683504215e-03 . . . . . . . 
1603 KSP preconditioned resid norm 1.191037802130e-07 true resid norm 1.191038670563e-07 ||Ae||/||Ax|| 1.031137618870e-05 1604 KSP preconditioned resid norm 1.178729142296e-07 true resid norm 1.178730336572e-07 ||Ae||/||Ax|| 1.020481721193e-05 1605 KSP preconditioned resid norm 1.169791191707e-07 true resid norm 1.169793038395e-07 ||Ae||/||Ax|| 1.012744286138e-05 1606 KSP preconditioned resid norm 1.168865445541e-07 true resid norm 1.168870972685e-07 ||Ae||/||Ax|| 1.011946010931e-05 1607 KSP preconditioned resid norm 1.168834953500e-07 true resid norm 1.168834210021e-07 ||Ae||/||Ax|| 1.011914183782e-05 1608 KSP preconditioned resid norm 1.168824468217e-07 true resid norm 1.168821544127e-07 ||Ae||/||Ax|| 1.011903218328e-05 1609 KSP preconditioned resid norm 1.168617721831e-07 true resid norm 1.168618273848e-07 ||Ae||/||Ax|| 1.011727237786e-05 1610 KSP preconditioned resid norm 1.165091490989e-07 true resid norm 1.165091854357e-07 ||Ae||/||Ax|| 1.008674252281e-05 1611 KSP preconditioned resid norm 1.153333136618e-07 true resid norm 1.153329197440e-07 ||Ae||/||Ax|| 9.984907726463e-06 Linear solve converged due to CONVERGED_RTOL iterations 1611 1 SNES Function norm 1.156248777455e-07 0 KSP preconditioned resid norm 1.156248777455e-07 true resid norm 1.156248777455e-07 ||Ae||/||Ax|| 1.000000000000e+00 1 KSP preconditioned resid norm 1.154422371390e-07 true resid norm 1.161710852713e-07 ||Ae||/||Ax|| 1.004723961974e+00 2 KSP preconditioned resid norm 1.152905844799e-07 true resid norm 2.668696384811e-04 ||Ae||/||Ax|| 2.308064178614e+03 3 KSP preconditioned resid norm 1.151761139639e-07 true resid norm 1.126358239931e-03 ||Ae||/||Ax|| 9.741486969699e+03 4 KSP preconditioned resid norm 1.151754181824e-07 true resid norm 1.108271364124e-03 ||Ae||/||Ax|| 9.585059770296e+03 5 KSP preconditioned resid norm 1.151747447109e-07 true resid norm 1.133976467714e-03 ||Ae||/||Ax|| 9.807374414786e+03 6 KSP preconditioned resid norm 1.151536921969e-07 true resid norm 1.682604476358e-03 ||Ae||/||Ax|| 1.455227031730e+04 7 KSP preconditioned resid norm 1.151536741521e-07 true resid norm 1.675362372285e-03 ||Ae||/||Ax|| 1.448963583747e+04 8 KSP preconditioned resid norm 1.151516417557e-07 true resid norm 1.739170524328e-03 ||Ae||/||Ax|| 1.504149070891e+04 9 KSP preconditioned resid norm 1.151458368640e-07 true resid norm 1.888742801931e-03 ||Ae||/||Ax|| 1.633509015325e+04 10 KSP preconditioned resid norm 1.151432970406e-07 true resid norm 1.936535093080e-03 ||Ae||/||Ax|| 1.674842932455e+04 11 KSP preconditioned resid norm 1.151415856844e-07 true resid norm 1.943578776829e-03 ||Ae||/||Ax|| 1.680934773489e+04 12 KSP preconditioned resid norm 1.151415106457e-07 true resid norm 1.941675149163e-03 ||Ae||/||Ax|| 1.679288391064e+04 13 KSP preconditioned resid norm 1.151072722808e-07 true resid norm 2.055181937477e-03 ||Ae||/||Ax|| 1.777456527997e+04 14 KSP preconditioned resid norm 1.150661328335e-07 true resid norm 2.247389279980e-03 ||Ae||/||Ax|| 1.943690081062e+04 15 KSP preconditioned resid norm 1.150587945589e-07 true resid norm 2.349201250473e-03 ||Ae||/||Ax|| 2.031743770266e+04 16 KSP preconditioned resid norm 1.150293904255e-07 true resid norm 2.708861286410e-03 ||Ae||/||Ax|| 2.342801427537e+04 17 KSP preconditioned resid norm 1.149883485190e-07 true resid norm 3.425777483190e-03 ||Ae||/||Ax|| 2.962837712771e+04 18 KSP preconditioned resid norm 1.149835570884e-07 true resid norm 3.492000995212e-03 ||Ae||/||Ax|| 3.020112162105e+04 19 KSP preconditioned resid norm 1.149457801834e-07 true resid norm 
4.258473277795e-03 ||Ae||/||Ax|| 3.683007810108e+04 20 KSP preconditioned resid norm 1.149279955651e-07 true resid norm 4.574312834982e-03 ||Ae||/||Ax|| 3.956166634874e+04 21 KSP preconditioned resid norm 1.148902843639e-07 true resid norm 5.376279701577e-03 ||Ae||/||Ax|| 4.649760333939e+04 22 KSP preconditioned resid norm 1.148568926668e-07 true resid norm 6.095225432786e-03 ||Ae||/||Ax|| 5.271551894049e+04 23 KSP preconditioned resid norm 1.148172845230e-07 true resid norm 6.957862632489e-03 ||Ae||/||Ax|| 6.017617288039e+04 24 KSP preconditioned resid norm 1.147830369881e-07 true resid norm 7.726330820912e-03 ||Ae||/||Ax|| 6.682239126701e+04 25 KSP preconditioned resid norm 1.147438237790e-07 true resid norm 8.597270798069e-03 ||Ae||/||Ax|| 7.435485308784e+04 26 KSP preconditioned resid norm 1.147090792721e-07 true resid norm 9.388950565795e-03 ||Ae||/||Ax|| 8.120182048070e+04 27 KSP preconditioned resid norm 1.146703081864e-07 true resid norm 1.025969917305e-02 ||Ae||/||Ax|| 8.873262720872e+04 28 KSP preconditioned resid norm 1.146352282540e-07 true resid norm 1.106505484202e-02 ||Ae||/||Ax|| 9.569787279149e+04 29 KSP preconditioned resid norm 1.145968062554e-07 true resid norm 1.193384225768e-02 ||Ae||/||Ax|| 1.032117178445e+05 30 KSP preconditioned resid norm 1.274732710949e-02 true resid norm 1.274732710949e-02 ||Ae||/||Ax|| 1.102472699478e+05 Linear solve did not converge due to DIVERGED_DTOL iterations 30 As you may see on the 2nd Newton iteration the linear system solution does not converge. I use the following arguments to start the computation: mpiexec -n 8 ./hoac blasius -llf_flux -n_out 1 -end_time 10000.0 -implicit -implicit_type 6 -pc_type asm -sub_pc_type ilu -snes_mf_operator -snes_max_fail 500 -snes_monitor -snes_stol 1.0e-50 -ksp_right_pc -snes_converged_reason -ksp_gmres_restart 30 -snes_max_linear_solve_fail 500 -sub_pc_factor_levels 2 -snes_max_it 1000 -sub_pc_factor_mat_ordering_type rcm -dt 2.e-3 -snes_rtol 1.0e-8 -gl -snes_converged_reason -ksp_converged_reason -ksp_monitor_true_residual Why is this happening? What might I be doing wrong? Any suggestions or guide lines? Thank you, Kostas From agrayver at gfz-potsdam.de Tue Nov 1 10:08:00 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Tue, 01 Nov 2011 16:08:00 +0100 Subject: [petsc-users] DMDA and vector PDE Message-ID: <4EB00B50.5010001@gfz-potsdam.de> Hello! Sorry for possibly silly question. I've just recently started to use DMDA functionality. I solve 3D vector PDE using FD, so my solution vector b has size 3N, where N stands for total number of grid cells. Each part of b contains solution for corresponding dimension, i.e. fx = b(1:N), fy = b(N+1:2N), fz = b(2N+1:3*N). The question is how can I link my DMDA layout created for grid with N cells and my solution vector b which has size of 3N, but naturally represents just 3 separate vectors fx,fy,fz? Thanks in advance. Regards, Alexander From jedbrown at mcs.anl.gov Tue Nov 1 10:14:59 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Nov 2011 09:14:59 -0600 Subject: [petsc-users] DMDA and vector PDE In-Reply-To: <4EB00B50.5010001@gfz-potsdam.de> References: <4EB00B50.5010001@gfz-potsdam.de> Message-ID: On Tue, Nov 1, 2011 at 09:08, Alexander Grayver wrote: > I solve 3D vector PDE using FD, so my solution vector b has size 3N, where > N stands for total number of grid cells. > Each part of b contains solution for corresponding dimension, i.e. fx = > b(1:N), fy = b(N+1:2N), fz = b(2N+1:3*N). 
You set the number of degrees of freedom per node to 3. Then these vectors are interlaced in b.

From jedbrown at mcs.anl.gov Tue Nov 1 10:21:07 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Nov 2011 09:21:07 -0600 Subject: [petsc-users] Preconditioning in SNES In-Reply-To: <4EAFFDEC.3040304@lycos.com> References: <4EAFFDEC.3040304@lycos.com> Message-ID:

On Tue, Nov 1, 2011 at 08:10, Konstantinos Kontzialis wrote:
> I use SNES to solve an implicit time marching formulation for the NS equations of gas dynamics using the discontinuous Galerkin discretization. I use a matrix-free formulation with a preconditioner corresponding to the Jacobian of the system,

How are you applying the action of the operator and how are you building the preconditioner? Finite differencing can be noisy due to rounding error.

> but during the Newton solution just from the 1st step I get the following:
> [... first linear solve history omitted ...]
> Timestep 0: dt = 0.002, T = 0, Res[rho] = 1.22465e-16, Res[rhou] = 0.00215905, Res[rhov] = 1.61374e-08, Res[E] = 2.78002e-07, CFL = 399.998

Looking at the sizes of the residuals here, it looks like the equations are poorly scaled. You should be shooting for values for each state variable and residuals for each equation to have similar scales. Are you working in SI units, perhaps?

> As you may see on the 2nd Newton iteration the linear system solution does not converge. I use the following arguments to start the computation:
> [... run command quoted above ...]

What version of PETSc are you using? The option is spelled "-ksp_pc_side right" now. That might explain why the preconditioned and true residuals are so different in your convergence logs.

From knepley at gmail.com Tue Nov 1 10:21:10 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 1 Nov 2011 15:21:10 +0000 Subject: [petsc-users] Preconditioning in SNES In-Reply-To: <4EAFFDEC.3040304@lycos.com> References: <4EAFFDEC.3040304@lycos.com> Message-ID:

On Tue, Nov 1, 2011 at 2:10 PM, Konstantinos Kontzialis <ckontzialis at lycos.com> wrote:
> [... original message quoted in full: problem description, run options, and the complete KSP residual history shown above ...]
It could be that:

a) Your system is really ill-conditioned, and the PC is not that good
b) Your system has a null space, and the initial guess had no component in it
c) Your Jacobian is not actually the Jacobian of the system, so the PC is not good
d) The FD approximation to the Jacobian action is not good

When you have so many different possible problems, the right thing to do is simplify the system until you can narrow down the cause:

a) Run in serial
b) Shrink the problem
c) Fully form the Jacobian using -snes_fd
d) Run with -pc_type lu

   Matt

From jedbrown at mcs.anl.gov Tue Nov 1 10:24:13 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Nov 2011 09:24:13 -0600 Subject: [petsc-users] Preconditioning in SNES In-Reply-To: References: <4EAFFDEC.3040304@lycos.com> Message-ID:

On Tue, Nov 1, 2011 at 09:21, Matthew Knepley wrote:
> [... list of possible causes and simplification steps quoted from the previous message ...]

We keep a pretty comprehensive list here, I should have linked it: http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#kspdiverged
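A concrete serial debugging run along the lines of the simplification steps above can be built from the options already shown in this thread, adding -snes_fd and -pc_type lu; the application-specific flags are taken verbatim from the original command and which of them are actually needed for a reduced test case is application-dependent:

./hoac blasius -llf_flux -implicit -implicit_type 6 -gl -dt 2.e-3 -snes_fd -pc_type lu -snes_monitor -snes_converged_reason -ksp_converged_reason -ksp_monitor_true_residual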
From agrayver at gfz-potsdam.de Tue Nov 1 11:06:09 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Tue, 01 Nov 2011 17:06:09 +0100 Subject: [petsc-users] DMDA and vector PDE In-Reply-To: References: <4EB00B50.5010001@gfz-potsdam.de> Message-ID: <4EB018F1.6080909@gfz-potsdam.de>

On 01.11.2011 16:14, Jed Brown wrote:
> You set the number of degrees of freedom per node to 3. Then these vectors are interlaced in b.

Thanks Jed, that looks easy. I've read about this parameter in the documentation, but it wasn't obvious how to use it. Maybe it's worth mentioning briefly in "2.4.4 Accessing the Vector Entries for DMDA Vectors" or somewhere else.

How then can I ensure correct ordering between my system matrix and vectors in the case of dof > 1? Can I say that dof_i always corresponds to g((i-1)*N+1:i*N) in the global DM vector?

Regards,
Alexander

From jedbrown at mcs.anl.gov Tue Nov 1 11:13:39 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Nov 2011 10:13:39 -0600 Subject: [petsc-users] DMDA and vector PDE In-Reply-To: <4EB018F1.6080909@gfz-potsdam.de> References: <4EB00B50.5010001@gfz-potsdam.de> <4EB018F1.6080909@gfz-potsdam.de> Message-ID:

On Tue, Nov 1, 2011 at 10:06, Alexander Grayver wrote:
> Thanks Jed, that looks easy. I've read about this parameter in the documentation, but it wasn't obvious how to use it.
> Maybe it's worth mentioning briefly in "2.4.4 Accessing the Vector Entries for DMDA Vectors" or somewhere else.

I'll add a note. The most common approach is to declare a struct

typedef struct {
  PetscScalar u,v,omega,temp;
} Field;

and then access as x[j][i].omega, etc. An example:

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex50.c.html
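As an illustration of the dof = 3 layout discussed above, here is a minimal sketch, not taken from the thread, written with the petsc-3.2-style DMDA names; in petsc-3.1 the same calls are spelled DACreate3d, DAGetCorners and DAVecGetArray. The grid size (10x10x10) and the field names fx/fy/fz are placeholders chosen to match Alexander's description:

#include <petscdmda.h>

typedef struct {
  PetscScalar fx, fy, fz;          /* 3 degrees of freedom per grid point */
} Field;

/* assumes PetscInitialize() has already been called */
PetscErrorCode build_interlaced_vector(void)
{
  DM             da;
  Vec            b;
  Field       ***u;
  PetscInt       i, j, k, xs, ys, zs, xm, ym, zm;
  PetscErrorCode ierr;

  /* dof = 3, stencil width = 1; fx, fy, fz are interlaced automatically */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,
                      10, 10, 10,                            /* global grid size (placeholder) */
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      3, 1, PETSC_NULL, PETSC_NULL, PETSC_NULL, &da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &b);CHKERRQ(ierr);         /* length 3*N, components interlaced */

  ierr = DMDAVecGetArray(da, b, &u);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);
  for (k = zs; k < zs+zm; k++)
    for (j = ys; j < ys+ym; j++)
      for (i = xs; i < xs+xm; i++) {
        u[k][j][i].fx = 0.0;       /* or u[k][j][i][0..2] via DMDAVecGetArrayDOF */
        u[k][j][i].fy = 0.0;
        u[k][j][i].fz = 0.0;
      }
  ierr = DMDAVecRestoreArray(da, b, &u);CHKERRQ(ierr);

  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  return 0;
}

Regarding the ordering question: if the system matrix is also created through the same DMDA (via DMGetMatrix in this PETSc generation, if I have the name right), its row and column ordering matches this interlaced layout, so the matrix and the DMDA vectors stay consistent without any manual index mapping.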
From ckontzialis at lycos.com Tue Nov 1 13:01:26 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Tue, 01 Nov 2011 20:01:26 +0200 Subject: [petsc-users] Preconditioning in SNES In-Reply-To: References: Message-ID: <4EB033F6.6080200@lycos.com>

On 11/01/2011 05:24 PM, petsc-users-request at mcs.anl.gov wrote:
> Preconditioning in SNES

Dear all,

1. I use the 3.1 version of PETSc.
2. I ran the problem with -snes_fd and -pc_type lu.
3. I use a nondimensional form of the NS equations.
4. I do not specify any user-defined procedure for applying the preconditioner. I leave PETSc to deal with this.

With -snes_fd I got the following results:

Timestep 0: dt = 0.002, T = 0, Res[rho] = 1.06871e-16, Res[rhou] = 0.00215523, Res[rhov] = 1.61374e-08, Res[E] = 6.17917e-07, CFL = 399.998

/*********************Stage 1 of SSPIRK (6,4)******************/

0 SNES Function norm 1.153030915909e-02
 0 KSP preconditioned resid norm 1.153030915909e-02 true resid norm 1.153030915909e-02 ||A||/||Ax|| 1.000000000000e+00
 1 KSP preconditioned resid norm 1.998438714390e-11 true resid norm 1.998438714940e-11 ||Ae||/||Ax|| 1.733204797344e-09
Linear solve converged due to CONVERGED_RTOL iterations 1
1 SNES Function norm 1.908346652619e-08
 0 KSP preconditioned resid norm 1.908346652619e-08 true resid norm 1.908346652619e-08 ||Ae||/||Ax|| 1.000000000000e+00
 1 KSP preconditioned resid norm 1.407495233141e-08 true resid norm 2.936727417791e-08 ||Ae||/||Ax|| 1.538885722759e+00
 2 KSP preconditioned resid norm 1.983096664116e-16 true resid norm 4.111894600521e-03 ||Ae||/||Ax|| 2.154689555421e+05
Linear solve converged due to CONVERGED_RTOL iterations 2
2 SNES Function norm 2.818079289176e-03

I think I need first to clarify the following: my function for computing the residual of the system has the form

F(u) = Mu + R(u),

where M is the mass matrix of the system, R(u) is the residual, and u is the solution. If I use -snes_mf, do I really get F'(u) = M + R'(u) during the solution of the linear system?

Thanks,
Kostas
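For reference on the question above: with -snes_mf or -snes_mf_operator, PETSc applies the Jacobian matrix-free by finite-differencing the residual, so for a residual of the form F(u) = Mu + R(u) the action on a vector a is approximately

  J(u) a ~ [F(u + h a) - F(u)] / h
         = M a + [R(u + h a) - R(u)] / h
         ~ (M + R'(u)) a,

i.e. up to the differencing error (h is PETSc's differencing parameter) the operator seen by the Krylov solver is indeed M + R'(u). With -snes_mf_operator this applies only to the operator; the preconditioner is still built from whatever matrix the user supplies.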
From knepley at gmail.com Tue Nov 1 14:58:54 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 1 Nov 2011 19:58:54 +0000 Subject: [petsc-users] DMDA and vector PDE In-Reply-To: References: <4EB00B50.5010001@gfz-potsdam.de> <4EB018F1.6080909@gfz-potsdam.de> Message-ID:

On Tue, Nov 1, 2011 at 4:13 PM, Jed Brown wrote:
> I'll add a note. The most common approach is to declare a struct
> typedef struct {
>   PetscScalar u,v,omega,temp;
> } Field;
> and then access as x[j][i].omega, etc.

Or you can use

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/DM/DMDAVecGetArrayDOF.html

and access it as x[j][i][0-3].

   Matt

From Debao.Shao at brion.com Tue Nov 1 21:30:04 2011 From: Debao.Shao at brion.com (Debao Shao) Date: Tue, 1 Nov 2011 19:30:04 -0700 Subject: [petsc-users] no profile printed by "PetscLogPrintSummary" In-Reply-To: References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> Message-ID: <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03>

Hi, Matt:

1. I tried both "PetscLogAllBegin()" and "PetscLogBegin()", but neither of them prints the log summary.
2. I disabled MPI (--with-mpi=0); will MPI_COMM_SELF behave differently from PETSC_COMM_WORLD?

Thanks,
Debao

> [... earlier messages in this thread quoted in full omitted ...]

From knepley at gmail.com Tue Nov 1 21:51:29 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 2 Nov 2011 02:51:29 +0000 Subject: [petsc-users] no profile printed by "PetscLogPrintSummary" In-Reply-To: <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03> References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03> Message-ID:

On Wed, Nov 2, 2011 at 2:30 AM, Debao Shao wrote:
> 1. I tried both "PetscLogAllBegin()" and "PetscLogBegin()", but neither of them prints the log summary.
> 2. I disabled MPI (--with-mpi=0); will MPI_COMM_SELF behave differently from PETSC_COMM_WORLD?

Works for me:

#include <petscsys.h>

#undef __FUNCT__
#define __FUNCT__ "main"
int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
  ierr = PetscLogBegin();CHKERRQ(ierr);
  ierr = PetscLogView(PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

which gives

Executing: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpiexec -n 1 /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib/tester-obj/tester
sh: ************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS.
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib/tester-obj/tester on a arch-siev named MATTHEW-KNEPLEYs-MacBook-Air-2.local with 1 processor, by knepley Tue Nov 1 21:50:08 2011 Using Petsc Development HG revision: 56a54b25d97df6f01a55abded4076de34409738b HG Date: Tue Nov 01 17:11:12 2011 -0500 Max Max/Min Avg Total Time (sec): 5.988e-03 1.00000 5.988e-03 Objects: 1.000e+00 1.00000 1.000e+00 Flops: 0.000e+00 0.00000 0.000e+00 0.000e+00 Flops/sec: 0.000e+00 0.00000 0.000e+00 0.000e+00 Memory: 8.486e+04 1.00000 8.486e+04 MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 MPI Reductions: 1.000e+00 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 5.9600e-03 99.5% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. 
--- Event Stage 0: Main Stage Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 4.05312e-07 #No PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 Configure run at: Tue Nov 1 18:04:19 2011 Configure options: --PETSC_ARCH=arch-sieve-fdatatypes-debug --download-boost --download-chaco --download-fiat --download-hdf5 --download-ml --download-mpich --download-netcdf --download-parmetis --download-scientificpython --download-tetgen --download-triangle --with-clanguage=C++ --with-dynamic-loading --with-exodusii-dir=/PETSc3/petsc/exodusii-4.98 --with-fc="gfortran -cpp" --with-fortran-datatypes --with-shared-libraries --with-sieve --with-sieve-memory-logging ----------------------------------------- Libraries compiled on Tue Nov 1 18:04:19 2011 on MATTHEW-KNEPLEYs-MacBook-Air-2.local Machine characteristics: Darwin-10.8.0-i386-64bit Using PETSc directory: /PETSc3/petsc/petsc-dev-pylith Using PETSc arch: arch-sieve-fdatatypes-debug ----------------------------------------- Using C compiler: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g -PIC ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -g ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/include -I/PETSc3/petsc/petsc-dev-pylith/include -I/PETSc3/petsc/petsc-dev-pylith/include -I/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/include -I/PETSc3/petsc/petsc-dev-pylith/include/sieve -I/PETSc3/petsc/exodusii-4.98/include -I/PETSc3/petsc/exodusii-4.98/cbind/include -I/PETSc3/petsc/exodusii-4.98/forbind/include ----------------------------------------- Using C linker: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpicxx Using Fortran linker: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpif90 Using libraries: -L/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib -L/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib -lpetsc -L/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib -ltriangle -L/usr/X11R6/lib -lX11 -lchaco -lml -lparmetis -lmetis -lpthread -ltet -llapack -lblas -L/PETSc3/petsc/exodusii-4.98/. 
-lexoIIv2for -lexodus -lnetcdf_c++ -lnetcdf -lhdf5_fortran -lhdf5 -lz -L/usr/lib/gcc/i686-apple-darwin10/4.2.1/x86_64 -L/usr/lib/i686-apple-darwin10/4.2.1 -L/usr/lib/gcc/i686-apple-darwin10/4.2.1 -ldl -lpmpich -lmpich -lopa -lmpl -lpthread -lSystem -lmpichf90 -lgfortran -L/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0 -L/usr/local/lib -lgfortran -lgcc_ext.10.5 -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lpmpich -lmpich -lopa -lmpl -lpthread -lSystem -ldl
-----------------------------------------

   Matt

> [... earlier messages in this thread quoted in full omitted ...]
From Debao.Shao at brion.com Tue Nov 1 22:00:31 2011 From: Debao.Shao at brion.com (Debao Shao) Date: Tue, 1 Nov 2011 20:00:31 -0700 Subject: [petsc-users] no profile printed by "PetscLogPrintSummary" In-Reply-To: References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03> Message-ID: <384FF55F15E3E447802DC8CCA85696980E26367D26@EX03>

Hi, Matt:

My PETSc version is 3.1-p8; I can't find "PetscLogView". Is its behavior similar to "PetscLogDump"?

Thanks,
Debao

> [... Matt's example program and its -log_summary output quoted in full omitted ...]
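For completeness, a minimal sketch of the same example written against the petsc-3.1 names used earlier in this thread, i.e. PetscLogPrintSummary with the signature Debao's original code already uses, instead of PetscLogView; this is an editorial adaptation, not a tested program from the thread:

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
  ierr = PetscLogBegin();CHKERRQ(ierr);   /* start logging; PetscLogAllBegin() is not required */
  /* ... code to be profiled ... */
  ierr = PetscLogPrintSummary(PETSC_COMM_WORLD, PETSC_NULL);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}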
From bsmith at mcs.anl.gov Tue Nov 1 22:09:08 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 1 Nov 2011 22:09:08 -0500 Subject: [petsc-users] no profile printed by "PetscLogPrintSummary" In-Reply-To: <384FF55F15E3E447802DC8CCA85696980E26367D26@EX03> References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D26@EX03> Message-ID: <3D86A2C0-349B-4B77-B9B8-2FD6B92E6C22@mcs.anl.gov>

On Nov 1, 2011, at 10:00 PM, Debao Shao wrote:
> Hi, Matt:
>
> My PETSc version is 3.1-p8; I can't find "PetscLogView". Is its behavior similar to "PetscLogDump"?

1) Switch to petsc 3.2.

2) Be proactive. It may seem easier to send email with a question, but in fact that takes time for people to respond and figure out what you are asking. Looking at the code will be faster. See the instructions for searching through the source code in the users manual, http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manual.pdf, page 174. Then you can search the PETSc source for -log_summary and track what function it actually calls to print the summary information.

The answer to "how" is always in the PETSc source code. (Sometimes "why" we did something a particular way cannot be found in the source.)

> On Wed, Nov 2, 2011 at 2:30 AM, Debao Shao wrote:
> > 1. I tried both "PetscLogAllBegin()" and "PetscLogBegin()", but neither of them prints the log summary.

These don't print the summary, they tell PETSc to start logging it.

   Barry

> > 2. I disabled MPI (--with-mpi=0); will MPI_COMM_SELF behave differently from PETSC_COMM_WORLD?
>
> [... remainder of the quoted message, including Matt's example program and its output, omitted ...]
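A concrete way to do the kind of source search Barry suggests, assuming PETSC_DIR points at the source tree (any grep over include/ and src/ works; the exact directory where the logging code lives is not important for the search):

cd $PETSC_DIR
grep -rn "log_summary" include/ src/sys/
grep -rn "PetscLogPrintSummary" include/ src/sys/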
> > Works for me: > > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc, char **argv) > { > PetscErrorCode ierr; > > ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr); > ierr = PetscLogBegin();CHKERRQ(ierr); > ierr = PetscLogView(PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > ierr = PetscFinalize(); > return 0; > } > > which gives > > Executing: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpiexec -n 1 /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib/tester-obj/tester > sh: ************************************************************************************************************************ > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** > ************************************************************************************************************************ > ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- > /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib/tester-obj/tester on a arch-siev named MATTHEW-KNEPLEYs-MacBook-Air-2.local with 1 processor, by knepley Tue Nov 1 21:50:08 2011 > Using Petsc Development HG revision: 56a54b25d97df6f01a55abded4076de34409738b HG Date: Tue Nov 01 17:11:12 2011 -0500 > Max Max/Min Avg Total > Time (sec): 5.988e-03 1.00000 5.988e-03 > Objects: 1.000e+00 1.00000 1.000e+00 > Flops: 0.000e+00 0.00000 0.000e+00 0.000e+00 > Flops/sec: 0.000e+00 0.00000 0.000e+00 0.000e+00 > Memory: 8.486e+04 1.00000 8.486e+04 > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 > MPI Reductions: 1.000e+00 1.00000 > Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of length N --> 2N flops > and VecAXPY() for complex vectors of length N --> 8N flops > Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts %Total Avg %Total counts %Total > 0: Main Stage: 5.9600e-03 99.5% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). > %T - percent time in this phase %f - percent flops in this phase > %M - percent messages in this phase %L - percent message lengths in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) > ------------------------------------------------------------------------------------------------------------------------ > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. 
# > # # > ########################################################## > Event Count Time (sec) Flops --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > --- Event Stage 0: Main Stage > ------------------------------------------------------------------------------------------------------------------------ > Memory usage is given in bytes: > Object Type Creations Destructions Memory Descendants' Mem. > Reports information only for process 0. > --- Event Stage 0: Main Stage > Viewer 1 0 0 0 > ======================================================================================================================== > Average time to get PetscTime(): 4.05312e-07 > #No PETSc Option Table entries > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 > Configure run at: Tue Nov 1 18:04:19 2011 > Configure options: --PETSC_ARCH=arch-sieve-fdatatypes-debug --download-boost --download-chaco --download-fiat --download-hdf5 --download-ml --download-mpich --download-netcdf --download-parmetis --download-scientificpython --download-tetgen --download-triangle --with-clanguage=C++ --with-dynamic-loading --with-exodusii-dir=/PETSc3/petsc/exodusii-4.98 --with-fc="gfortran -cpp" --with-fortran-datatypes --with-shared-libraries --with-sieve --with-sieve-memory-logging > ----------------------------------------- > Libraries compiled on Tue Nov 1 18:04:19 2011 on MATTHEW-KNEPLEYs-MacBook-Air-2.local > Machine characteristics: Darwin-10.8.0-i386-64bit > Using PETSc directory: /PETSc3/petsc/petsc-dev-pylith > Using PETSc arch: arch-sieve-fdatatypes-debug > ----------------------------------------- > Using C compiler: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g -PIC ${COPTFLAGS} ${CFLAGS} > Using Fortran compiler: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -g ${FOPTFLAGS} ${FFLAGS} > ----------------------------------------- > Using include paths: -I/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/include -I/PETSc3/petsc/petsc-dev-pylith/include -I/PETSc3/petsc/petsc-dev-pylith/include -I/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/include -I/PETSc3/petsc/petsc-dev-pylith/include/sieve -I/PETSc3/petsc/exodusii-4.98/include -I/PETSc3/petsc/exodusii-4.98/cbind/include -I/PETSc3/petsc/exodusii-4.98/forbind/include > ----------------------------------------- > Using C linker: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpicxx > Using Fortran linker: /PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/bin/mpif90 > Using libraries: -L/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib -L/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib -lpetsc -L/PETSc3/petsc/petsc-dev-pylith/arch-sieve-fdatatypes-debug/lib -ltriangle -L/usr/X11R6/lib -lX11 -lchaco -lml -lparmetis -lmetis -lpthread -ltet -llapack -lblas -L/PETSc3/petsc/exodusii-4.98/. 
-lexoIIv2for -lexodus -lnetcdf_c++ -lnetcdf -lhdf5_fortran -lhdf5 -lz -L/usr/lib/gcc/i686-apple-darwin10/4.2.1/x86_64 -L/usr/lib/i686-apple-darwin10/4.2.1 -L/usr/lib/gcc/i686-apple-darwin10/4.2.1 -ldl -lpmpich -lmpich -lopa -lmpl -lpthread -lSystem -lmpichf90 -lgfortran -L/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0 -L/usr/local/lib -lgfortran -lgcc_ext.10.5 -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lpmpich -lmpich -lopa -lmpl -lpthread -lSystem -ldl > ----------------------------------------- > > Matt > > > Thanks, > Debao > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley > Sent: Tuesday, November 01, 2011 7:03 PM > To: PETSc users list > Subject: Re: [petsc-users] no profile printed by "PetscLogPrintSummary" > > On Tue, Nov 1, 2011 at 9:31 AM, Debao Shao wrote: > DA, > > I intend to use ?PetscLogPrintSummary? to dump log summary, but don?t get it. > The usage is: > PetscInitialize(0, 0, 0, 0); > PetscLogAllBegin(); > ? > PetscLogPrintSummary(MPI_COMM_SELF, PETSC_NULL); > PetscFinalize(); > > You only need PetscLogBegin(), and you should use PETSC_COMM_WORLD. > > Matt > > > My libpetsc.a is built by > 1, ./config/configure.py --with-mpi=0 --with-debugging=1 -with-log=1 -with-info=1 --with-x=0; > 2, make all > > Thanks, > Debao > > -- The information contained in this communication and any attachments is confidential and may be privileged, and is for the sole use of the intended recipient(s). Any unauthorized review, use, disclosure or distribution is prohibited. Unless explicitly stated otherwise in the body of this communication or the attachment thereto (if any), the information is provided on an AS-IS basis without any express or implied warranties or liabilities. To the extent you are relying on this information, you are doing so at your own risk. If you are not the intended recipient, please notify the sender immediately by replying to this message and destroy all copies of this message and any attachments. ASML is neither liable for the proper and complete transmission of the information contained in this communication, nor for any delay in its receipt. > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > -- The information contained in this communication and any attachments is confidential and may be privileged, and is for the sole use of the intended recipient(s). Any unauthorized review, use, disclosure or distribution is prohibited. Unless explicitly stated otherwise in the body of this communication or the attachment thereto (if any), the information is provided on an AS-IS basis without any express or implied warranties or liabilities. To the extent you are relying on this information, you are doing so at your own risk. If you are not the intended recipient, please notify the sender immediately by replying to this message and destroy all copies of this message and any attachments. ASML is neither liable for the proper and complete transmission of the information contained in this communication, nor for any delay in its receipt. > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener

From Debao.Shao at brion.com Tue Nov 1 22:28:49 2011
From: Debao.Shao at brion.com (Debao Shao)
Date: Tue, 1 Nov 2011 20:28:49 -0700
Subject: [petsc-users] no profile printed by "PetscLogPrintSummary"
In-Reply-To: <3D86A2C0-349B-4B77-B9B8-2FD6B92E6C22@mcs.anl.gov>
References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D26@EX03> <3D86A2C0-349B-4B77-B9B8-2FD6B92E6C22@mcs.anl.gov>
Message-ID: <384FF55F15E3E447802DC8CCA85696980E26367D46@EX03>

Thanks Barry for the suggestion.

I reviewed code before, which show "-log_summary" option will trigger
1, "PetscLogBegin" in "PetscSetHelpVersionFunctions";
2, "PetscLogPrintSummary(PETSC_COMM_WORLD,0)" in "PetscFinalize".

So, I tried the following code:
PetscInitialize(0, 0, 0, 0);
PetscLogBegin();
...
PetscLogPrintSummary(PETSC_COMM_WORLD, 0);
PetscFinalize();

But, don't get the log summary, then I asked the question to see if I missed something.

Thanks,
Debao

From Debao.Shao at brion.com Tue Nov 1 22:51:21 2011
From: Debao.Shao at brion.com (Debao Shao)
Date: Tue, 1 Nov 2011 20:51:21 -0700
Subject: [petsc-users] no profile printed by "PetscLogPrintSummary"
In-Reply-To: <384FF55F15E3E447802DC8CCA85696980E26367D46@EX03>
References: <384FF55F15E3E447802DC8CCA85696980E26367BC7@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D06@EX03> <384FF55F15E3E447802DC8CCA85696980E26367D26@EX03> <3D86A2C0-349B-4B77-B9B8-2FD6B92E6C22@mcs.anl.gov> <384FF55F15E3E447802DC8CCA85696980E26367D46@EX03>
Message-ID: <384FF55F15E3E447802DC8CCA85696980E26367D67@EX03>

Thanks Matt and Barry.

I tested a simple code just with "PetscLogBegin" & "PetscLogPrintSummary", it can print the log summary. The issue in my project that don't print log summary should be related to network fileno redirection. Sorry for my primary question disturbing you.

Thanks,
Debao
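Pulling the thread together, the minimal manual-profiling pattern that ends up working here might look like the sketch below. It uses the petsc-3.1-era names discussed above (PetscLogBegin, PetscLogPrintSummary); the header name is an assumption, since the include in Matt's example was eaten by the HTML scrubber, and in later releases PetscLogView() with a viewer plays the same role.

  #include <petscsys.h>   /* assumed header; the include was stripped from Matt's example */

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
    ierr = PetscLogBegin();CHKERRQ(ierr);   /* start collecting profiling data */

    /* ... create objects, assemble, solve ... */

    /* NULL filename: summary is printed to stdout on PETSC_COMM_WORLD */
    ierr = PetscLogPrintSummary(PETSC_COMM_WORLD, PETSC_NULL);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }

Running with -log_summary performs the same two steps automatically around PetscInitialize()/PetscFinalize(), which is the code path Debao traced above.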
From gdiso at ustc.edu Wed Nov 2 10:51:16 2011
From: gdiso at ustc.edu (Gong Ding)
Date: Wed, 2 Nov 2011 23:51:16 +0800 (CST)
Subject: [petsc-users] Superlu based ILUT report Zero pivot
Message-ID: <12935598.303211320249076143.JavaMail.coremail@mail.ustc.edu>

Hi,

I tried ASM + SuperLU ILUT for my semiconductor simulation code. For most of the problems, the ILUT preconditioner is very strong: fewer KSP iterations are needed to reach linear solver convergence than with the default ILU(0) preconditioner. However, ILUT is a bit slow with its default parameters, and sometimes it fails with

Fatal Error: Zero pivot in row 74 at line 186 in superlu.c

Any suggestion to set some parameters to make it more robust (and fast)?

Gong Ding
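Hong's reply below lists the SuperLU ILU tuning options; assembled into one command line, the configuration under discussion might look like the following sketch. The executable name, process count, and numeric values are illustrative only (the values shown are just the defaults quoted below), and option prefixes can differ between PETSc versions.

  # Sketch: ASM with SuperLU's ILUT on each subdomain
  mpiexec -n 4 ./myapp -ksp_type gmres -pc_type asm -sub_pc_type ilu \
      -sub_pc_factor_mat_solver_package superlu \
      -mat_superlu_ilu_droptol 0.0001 -mat_superlu_ilu_fillfactor 10

  # Hong's alternative below: exact LU on each subdomain through MUMPS
  mpiexec -n 4 ./myapp -ksp_type gmres -pc_type asm -sub_pc_type lu \
      -pc_factor_mat_solver_package mumps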
From hzhang at mcs.anl.gov Wed Nov 2 11:02:00 2011
From: hzhang at mcs.anl.gov (Hong Zhang)
Date: Wed, 2 Nov 2011 11:02:00 -0500
Subject: [petsc-users] Superlu based ILUT report Zero pivot
In-Reply-To: <12935598.303211320249076143.JavaMail.coremail@mail.ustc.edu>
References: <12935598.303211320249076143.JavaMail.coremail@mail.ustc.edu>
Message-ID:

Gong :

> I tried ASM + SuperLU ILUT for my semiconductor simulation code.
> For most of the problems, the ILUT preconditioner is very strong.
> Fewer KSP iterations are needed to get linear solver convergence compared with default ILU(0) preconditioner.
> However, ILUT is a bit slow with its default parameters.
> And sometimes it fails with
> Fatal Error: Zero pivot in row 74 at line 186 in superlu.c

We do not have much experience with Superlu ILUT. What you can do is try various options provided by superlu:

-mat_superlu_ilu_droptol <0.0001>: ILU_DropTol (None)
-mat_superlu_ilu_filltol <0.01>: ILU_FillTol (None)
-mat_superlu_ilu_fillfactor <10>: ILU_FillFactor (None)
-mat_superlu_ilu_droprull <9>: ILU_DropRule (None)
-mat_superlu_ilu_norm <2>: ILU_Norm (None)
-mat_superlu_ilu_milu <0>: ILU_MILU (None)

> Any suggestion to set some parameters to make it more robust (and fast)?

I would also test '-sub_pc_type lu' which might be more efficient than ilu if memory is not an issue. Mumps sequential LU often outperforms other sequential lu. You can
1) install mumps (needs scalapack, blacs and F90 compiler), then
2) run with '-sub_pc_type lu -pc_factor_mat_solver_package mumps'.

Hong

> Gong Ding

From ckontzialis at lycos.com Wed Nov 2 12:37:46 2011
From: ckontzialis at lycos.com (Konstantinos Kontzialis)
Date: Wed, 02 Nov 2011 19:37:46 +0200
Subject: [petsc-users] Jacobian finite difference approximation using coloring
Message-ID: <4EB17FEA.6030905@lycos.com>

Dear all,

I am trying to compute the boundary layer over a flat plate using the discontinuous Galerkin method.

I use the following sequence for computing the jacobian of a system using matrix coloring:

ISColoring iscoloring;
MatFDColoring fdcoloring;

ierr = jacobian_diff_numerical(sys, &sys.P); /* Here I initialize the nonzero structure of the matrix */
CHKERRQ(ierr);

ierr = MatGetColoring(sys.P, MATCOLORING_ID, &iscoloring);
CHKERRQ(ierr);

ierr = MatFDColoringCreate(sys.P, iscoloring, &fdcoloring);
CHKERRQ(ierr);

ierr = MatFDColoringSetFunction(fdcoloring, base_residual_implicit, &sys);
CHKERRQ(ierr);

ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, &fdcoloring);
CHKERRQ(ierr);

I run my code as follows:

mpiexec -n 8 ./hoac blasius -llf_flux -n_out 1 -end_time 10000.0 -implicit -implicit_type 3 -pc_type asm -snes_mf_operator -snes_max_fail 500 -snes_monitor -snes_stol 1.0e-50 -ksp_right_pc -sub_pc_type ilu -snes_converged_reason -ksp_gmres_restart 30 -snes_max_linear_solve_fail 10 -snes_max_it 1000 -sub_pc_factor_mat_ordering_type rcm -dt 5.e-4 -snes_rtol 1.0e-8 -gl -snes_converged_reason -ksp_converged_reason -ksp_monitor_true_residual -ksp_rtol 1.0e-12 -snes_atol 1.0e-6 -snes_ls_maxstep 5

and I get this:

Timestep 0: dt = 0.0005, T = 0, Res[rho] = 5.5383e-17, Res[rhou] = 0.0177116, Res[rhov] = 8.06867e-06, Res[E] = 9.04882e-06, CFL = 99.9994

/*********************Stage 1 of SSPIRK (3,4)******************/

0 SNES Function norm 3.861205145119e-01
[6]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Invalid argument!
[0]PETSC ERROR: Wrong type of object: Parameter # 1!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Wed Nov 2 19:32:57 2011
[0]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib
[0]PETSC ERROR: Configure run at Tue Sep 27 13:09:04 2011
[0]PETSC ERROR: Configure options --with-debugging=1 --with-shared=1 --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich2/bin
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: MatFDColoringGetFunction() line 205 in src/mat/matfd/fdmatrix.c
[0]PETSC ERROR: SNESDefaultComputeJacobianColor() line 44 in src/snes/interface/snesj2.c
[0]PETSC ERROR: SNESComputeJacobian() line 1198 in src/snes/interface/snes.c
[0]PETSC ERROR: SNESSolve_LS() line 189 in src/snes/impls/ls/ls.c
[0]PETSC ERROR: SNESSolve() line 2255 in src/snes/interface/snes.c
[0]PETSC ERROR: User provided function() line 65 in "unknowndirectory/"../src/implicit_solve.c
[0]PETSC ERROR: User provided function() line 215 in "unknowndirectory/"../src/implicit_time.c
[0]PETSC ERROR: User provided function() line 1260 in "unknowndirectory/"../src/hoac.c

What am I doing wrong here?

Thank you,

Kostas

From bsmith at mcs.anl.gov Wed Nov 2 14:23:08 2011
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 2 Nov 2011 14:23:08 -0500
Subject: [petsc-users] Jacobian finite difference approximation using coloring
In-Reply-To: <4EB17FEA.6030905@lycos.com>
References: <4EB17FEA.6030905@lycos.com>
Message-ID: <35A56451-D33A-4B59-BC7C-2A13D99530BC@mcs.anl.gov>

It should be

   ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, fdcoloring);

not

   ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, &fdcoloring);

   Barry

This is kind of our fault for having no type checking on contexts so the compiler cannot flag the problem.
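Applying Barry's one-argument fix to the sequence Konstantinos posted gives roughly the sketch below. It keeps the petsc-3.1-era calls exactly as they appear above; sys, sys.P, sys.J, jacobian_diff_numerical() and base_residual_implicit() belong to the poster's own code and are assumed to exist, and only the final SNESSetJacobian() argument changes.

  /* Sketch of the corrected coloring setup (poster's fragment with Barry's fix):
     pass the MatFDColoring context itself, not a pointer to it. */
  PetscErrorCode ierr;
  ISColoring     iscoloring;
  MatFDColoring  fdcoloring;

  ierr = jacobian_diff_numerical(sys, &sys.P);CHKERRQ(ierr);  /* fill the nonzero structure of the preconditioning matrix */
  ierr = MatGetColoring(sys.P, MATCOLORING_ID, &iscoloring);CHKERRQ(ierr);
  ierr = MatFDColoringCreate(sys.P, iscoloring, &fdcoloring);CHKERRQ(ierr);
  ierr = MatFDColoringSetFunction(fdcoloring, base_residual_implicit, &sys);CHKERRQ(ierr);
  /* context argument is fdcoloring, not &fdcoloring */
  ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, fdcoloring);CHKERRQ(ierr);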
From xiaohl at ices.utexas.edu Wed Nov 2 15:38:45 2011
From: xiaohl at ices.utexas.edu (xiaohl)
Date: Wed, 02 Nov 2011 15:38:45 -0500
Subject: [petsc-users] questions
Message-ID: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu>

Hi

I am going to implement cell center difference method for
u = - K grad p
div u = f
where p is the pressure, u is the velocity, f is the source term.

my goal is to assemble the matrix and test the performance of different linear solvers in parallel.

my question is how can I read the input file for K where K is n*n tensor.

second one is that do you have any similar examples?

Hailong

From knepley at gmail.com Wed Nov 2 15:42:59 2011
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 2 Nov 2011 20:42:59 +0000
Subject: [petsc-users] questions
In-Reply-To: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu>
References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu>
Message-ID:

On Wed, Nov 2, 2011 at 8:38 PM, xiaohl wrote:

> my question is how can I read the input file for K where K is n*n tensor.

MatLoad()

> second one is that do you have any similar examples?

Nothing with the mixed-discretization of the Laplacian.

   Matt
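Matt's MatLoad() pointer, spelled out as a sketch. This is the PETSc 3.2-style calling sequence (in 3.1 MatLoad takes the viewer first and creates the matrix itself); the file name "K.dat" and the AIJ format are assumptions for illustration. Barry's follow-up below suggests a DMDA with dof = n*n plus VecLoad() instead, which fits a cell-wise tensor coefficient more naturally.

  /* Sketch only: load a matrix previously written to a PETSc binary file. */
  Mat            K;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "K.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &K);CHKERRQ(ierr);
  ierr = MatSetType(K, MATAIJ);CHKERRQ(ierr);
  ierr = MatLoad(K, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);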
From bsmith at mcs.anl.gov Wed Nov 2 15:53:27 2011
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 2 Nov 2011 15:53:27 -0500
Subject: [petsc-users] questions
In-Reply-To:
References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu>
Message-ID: <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov>

On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote:

> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl wrote:
> my question is how can I read the input file for K where K is n*n tensor.
>
> MatLoad()

   Hm, I think you should use a DMDA with n*n size dof and then use VecLoad() to load the entries of K.

   Barry

From xiaohl at ices.utexas.edu Wed Nov 2 16:08:00 2011
From: xiaohl at ices.utexas.edu (xiaohl)
Date: Wed, 02 Nov 2011 16:08:00 -0500
Subject: [petsc-users] questions
In-Reply-To: <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov>
References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov>
Message-ID: <831293f7ee1e98588c51b79dcc8599ff@ices.utexas.edu>

On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote:
> Hm, I think you should use a DMDA with n*n size dof and then use VecLoad() to load the entries of K.

Thanks. I will try it.

Hailong

From jedbrown at mcs.anl.gov Thu Nov 3 00:44:03 2011
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 2 Nov 2011 23:44:03 -0600
Subject: [petsc-users] Preconditioning in SNES
In-Reply-To: <4EB033F6.6080200@lycos.com>
References: <4EB033F6.6080200@lycos.com>
Message-ID:

On Tue, Nov 1, 2011 at 12:01, Konstantinos Kontzialis wrote:

> 1. I use 3.1 version of petsc.

please upgrade to 3.2 when you can

> 2. I run the problem with snes_fd and pc_type lu
> 3. I use a nondimensional form of the NS equations
> 4. I do not specify any user defined procedure for applying the preconditioner. I leave petsc to deal with this.
> > With snes_fd I got the following results: > > Timestep 0: dt = 0.002, T = 0, Res[rho] = 1.06871e-16, Res[rhou] = > 0.00215523, Res[rhov] = 1.61374e-08, Res[E] = 6.17917e-07, CFL = 399.998 > > > /*********************Stage 1 of SSPIRK (6,4)******************/ > > 0 SNES Function norm 1.153030915909e-02 > 0 KSP preconditioned resid norm 1.153030915909e-02 true resid norm > 1.153030915909e-02 ||A||/||Ax|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 1.998438714390e-11 true resid norm > 1.998438714940e-11 ||Ae||/||Ax|| 1.733204797344e-09 > > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 SNES Function norm 1.908346652619e-08 > 0 KSP preconditioned resid norm 1.908346652619e-08 true resid norm > 1.908346652619e-08 ||Ae||/||Ax|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 1.407495233141e-08 true resid norm > 2.936727417791e-08 ||Ae||/||Ax|| 1.538885722759e+00 > 2 KSP preconditioned resid norm 1.983096664116e-16 true resid norm > 4.111894600521e-03 ||Ae||/||Ax|| 2.154689555421e+05 > This looks like a singular preconditioner. Suggest checking the implementation of boundary conditions. Also try -ksp_type fgmres. > My function for computing the residual of the system has the following > form: > > F(u) = Mu + R(u), > > where M is the mass matrix of the system and R(u) the residual and u is > the solution. > This does not look right. I would expect M udot = R_u(u,p) 0 = R_p(u) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Nov 3 09:49:46 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 3 Nov 2011 15:49:46 +0100 Subject: [petsc-users] valgrind error only with MUMPS Message-ID: When solving my linear system using: -pc_type lu -pc_factor_mat_solver_package mumps -ksp_monitor -ksp_converged_reason -ksp_view I get the below cited error. When solving with: -ksp_type bcgs -pc_type jacobi -ksp_rtol 1e-9 -ksp_converged_use_initial_residual_norm -ksp_norm_type unpreconditioned -ksp_monitor -ksp_converged_reason -ksp_view there are no errors reported. Problem with MUMPS? 
Dominik ==19601== Syscall param writev(vector[...]) points to uninitialised byte(s) ==19601== at 0x6791F59: writev (writev.c:56) ==19601== by 0x552CD8: MPIDU_Sock_writev (sock_immed.i:610) ==19601== by 0x5550DB: MPIDI_CH3_iSendv (ch3_isendv.c:84) ==19601== by 0x53B83A: MPIDI_CH3_EagerContigIsend (ch3u_eager.c:541) ==19601== by 0x53DA03: MPID_Isend (mpid_isend.c:130) ==19601== by 0x1440C71: PMPI_Isend (isend.c:124) ==19601== by 0x1453DED: MPI_ISEND (isendf.c:190) ==19601== by 0x124069F: __dmumps_comm_buffer_MOD_dmumps_62 (dmumps_comm_buffer.F:567) ==19601== by 0x12C1B4C: dmumps_242_ (dmumps_part2.F:739) ==19601== by 0x120C987: dmumps_249_ (dmumps_part8.F:6541) ==19601== by 0x120599F: dmumps_245_ (dmumps_part8.F:3885) ==19601== by 0x1229BCF: dmumps_301_ (dmumps_part8.F:2120) ==19601== by 0x128D272: dmumps_ (dmumps_part1.F:665) ==19601== by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) ==19601== by 0x1151A79: dmumps_c (mumps_c.c:422) ==19601== by 0x791075: MatSolve_MUMPS (mumps.c:547) ==19601== by 0x73ABA0: MatSolve (matrix.c:3108) ==19601== by 0x7FC5B9: PCApply_LU (lu.c:204) ==19601== by 0xD95AAB: PCApply (precon.c:383) ==19601== by 0xD98A70: PCApplyBAorAB (precon.c:609) ==19601== Address 0x86df168 is 8 bytes inside a block of size 144 alloc'd ==19601== at 0x4C28F9F: malloc (vg_replace_malloc.c:236) ==19601== by 0x124235E: __dmumps_comm_buffer_MOD_dmumps_2 (dmumps_comm_buffer.F:175) ==19601== by 0x12426E7: __dmumps_comm_buffer_MOD_dmumps_55 (dmumps_comm_buffer.F:123) ==19601== by 0x121F28B: dmumps_301_ (dmumps_part8.F:989) ==19601== by 0x128D272: dmumps_ (dmumps_part1.F:665) ==19601== by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) ==19601== by 0x1151A79: dmumps_c (mumps_c.c:422) ==19601== by 0x791075: MatSolve_MUMPS (mumps.c:547) ==19601== by 0x73ABA0: MatSolve (matrix.c:3108) ==19601== by 0x7FC5B9: PCApply_LU (lu.c:204) ==19601== by 0xD95AAB: PCApply (precon.c:383) ==19601== by 0xD98A70: PCApplyBAorAB (precon.c:609) ==19601== by 0x8D817F: KSPSolve_BCGS (bcgs.c:79) ==19601== by 0x86B189: KSPSolve (itfunc.c:423) ==19601== by 0x56013D: PetscLinearSystem::Solve() (PetscLinearSystem.cxx:221) ==19601== by 0x4E34F6: SolidSolver::Solve() (SolidSolver.cxx:1153) ==19601== by 0x4EFF56: main (SolidSolverMain.cxx:684) From bsmith at mcs.anl.gov Thu Nov 3 09:52:24 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 3 Nov 2011 09:52:24 -0500 Subject: [petsc-users] valgrind error only with MUMPS In-Reply-To: References: Message-ID: On Nov 3, 2011, at 9:49 AM, Dominik Szczerba wrote: > When solving my linear system using: > > -pc_type lu -pc_factor_mat_solver_package mumps -ksp_monitor > -ksp_converged_reason -ksp_view > > I get the below cited error. When solving with: > > -ksp_type bcgs -pc_type jacobi -ksp_rtol 1e-9 > -ksp_converged_use_initial_residual_norm -ksp_norm_type > unpreconditioned -ksp_monitor -ksp_converged_reason -ksp_view > > there are no errors reported. Problem with MUMPS? Yes. Looks like they are being sloppy. Whether this is an actual error condition or irrelevant only they would know. Report it to them. 
Barry > > Dominik > > ==19601== Syscall param writev(vector[...]) points to uninitialised byte(s) > ==19601== at 0x6791F59: writev (writev.c:56) > ==19601== by 0x552CD8: MPIDU_Sock_writev (sock_immed.i:610) > ==19601== by 0x5550DB: MPIDI_CH3_iSendv (ch3_isendv.c:84) > ==19601== by 0x53B83A: MPIDI_CH3_EagerContigIsend (ch3u_eager.c:541) > ==19601== by 0x53DA03: MPID_Isend (mpid_isend.c:130) > ==19601== by 0x1440C71: PMPI_Isend (isend.c:124) > ==19601== by 0x1453DED: MPI_ISEND (isendf.c:190) > ==19601== by 0x124069F: __dmumps_comm_buffer_MOD_dmumps_62 > (dmumps_comm_buffer.F:567) > ==19601== by 0x12C1B4C: dmumps_242_ (dmumps_part2.F:739) > ==19601== by 0x120C987: dmumps_249_ (dmumps_part8.F:6541) > ==19601== by 0x120599F: dmumps_245_ (dmumps_part8.F:3885) > ==19601== by 0x1229BCF: dmumps_301_ (dmumps_part8.F:2120) > ==19601== by 0x128D272: dmumps_ (dmumps_part1.F:665) > ==19601== by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) > ==19601== by 0x1151A79: dmumps_c (mumps_c.c:422) > ==19601== by 0x791075: MatSolve_MUMPS (mumps.c:547) > ==19601== by 0x73ABA0: MatSolve (matrix.c:3108) > ==19601== by 0x7FC5B9: PCApply_LU (lu.c:204) > ==19601== by 0xD95AAB: PCApply (precon.c:383) > ==19601== by 0xD98A70: PCApplyBAorAB (precon.c:609) > ==19601== Address 0x86df168 is 8 bytes inside a block of size 144 alloc'd > ==19601== at 0x4C28F9F: malloc (vg_replace_malloc.c:236) > ==19601== by 0x124235E: __dmumps_comm_buffer_MOD_dmumps_2 > (dmumps_comm_buffer.F:175) > ==19601== by 0x12426E7: __dmumps_comm_buffer_MOD_dmumps_55 > (dmumps_comm_buffer.F:123) > ==19601== by 0x121F28B: dmumps_301_ (dmumps_part8.F:989) > ==19601== by 0x128D272: dmumps_ (dmumps_part1.F:665) > ==19601== by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) > ==19601== by 0x1151A79: dmumps_c (mumps_c.c:422) > ==19601== by 0x791075: MatSolve_MUMPS (mumps.c:547) > ==19601== by 0x73ABA0: MatSolve (matrix.c:3108) > ==19601== by 0x7FC5B9: PCApply_LU (lu.c:204) > ==19601== by 0xD95AAB: PCApply (precon.c:383) > ==19601== by 0xD98A70: PCApplyBAorAB (precon.c:609) > ==19601== by 0x8D817F: KSPSolve_BCGS (bcgs.c:79) > ==19601== by 0x86B189: KSPSolve (itfunc.c:423) > ==19601== by 0x56013D: PetscLinearSystem::Solve() (PetscLinearSystem.cxx:221) > ==19601== by 0x4E34F6: SolidSolver::Solve() (SolidSolver.cxx:1153) > ==19601== by 0x4EFF56: main (SolidSolverMain.cxx:684) From gdiso at ustc.edu Fri Nov 4 02:59:38 2011 From: gdiso at ustc.edu (Gong Ding) Date: Fri, 4 Nov 2011 15:59:38 +0800 (CST) Subject: [petsc-users] Is it possible to add KLU to petsc as external solver package? Message-ID: <3932027.307131320393578911.JavaMail.coremail@mail.ustc.edu> KLU developed by Prof Tim Davis is suitable for solve circuit sparse matrix. And it shares some component with UMFPACK. From ckontzialis at lycos.com Fri Nov 4 04:27:55 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Fri, 04 Nov 2011 11:27:55 +0200 Subject: [petsc-users] Jacobian free in SNES In-Reply-To: References: Message-ID: <4EB3B01B.1030406@lycos.com> Dear all, I solve the NS equations with the discontinuous galerkin method. The system's residual before going to petsc is the following: M udot+R(u) = 0 (1) R(u) is the part of the residual coming from the spatially discretized terms. I need to say that the explicit version of my code runs very well. But, I want to implement an implicit method for time marching using first the backward euler scheme. 
After linearization I get the following form: M/dt u^{n+1} + dR/du (u^{n+1}-u^{n}) = M/dt u^{n} -R(u^{n}) (2) How should I define a function seen by SNES for solving the above equation in a jacobian free context? Kostas From xsli at lbl.gov Fri Nov 4 13:27:29 2011 From: xsli at lbl.gov (Xiaoye S. Li) Date: Fri, 4 Nov 2011 11:27:29 -0700 Subject: [petsc-users] Superlu based ILUT report Zero pivot In-Reply-To: References: <12935598.303211320249076143.JavaMail.coremail@mail.ustc.edu> Message-ID: The default settings in ILUTP of superlu is "conservative", probably with less dropping than people want. The 2 most relevant parameters to control this are: ILU_DropTol : (default 1e-4), you can try to set it a larger, e.g., 0.01 ILU_FillFactor : (default 10), this allows about a factor of 10 growth in the L & U factors. You can set it lower, e.g., 5 or smaller. I am a little surprised that you see Zero Pivot (e.g., at column 74). Since the code does partial pivoting, always tries to pick up the largest magnitude entry in the current column to pivot. "Zero pivot at column 74" means at this step column 74 becomes entirely empty. Perhaps from numerical cancellation, Or the matrix is numerically singular? The default setting also uses MC64 to permute large entry to the diagonal during preprocessing. (Hong, can you please confirm LargeDiag is set, right?) If you can send me the matrix in coordinate format (i.e., triplet for each nonzero), I will be happy to try it. Sherry Li On Wed, Nov 2, 2011 at 9:02 AM, Hong Zhang wrote: > Gong : > >> I tried ASM + Superlu ILUT for my semicondcutor simulation code. >> For most of the problems, the ILUT preconditioner is very strong. >> Fewer KSP iterations are needed to get linear solver convergence compared with default ILU(0) preconditioner. >> However, ILUT is a bit slow with its default parameters. >> And sometimes it fault with >> Fatal Error:Zero pivot in row 74 at line 186 in superlu.c > > We do not have much experience with Superlu ILUT. > What you can do is try various options provided by superlu > ?-mat_superlu_ilu_droptol <0.0001>: ILU_DropTol (None) > ?-mat_superlu_ilu_filltol <0.01>: ILU_FillTol (None) > ?-mat_superlu_ilu_fillfactor <10>: ILU_FillFactor (None) > ?-mat_superlu_ilu_droprull <9>: ILU_DropRule (None) > ?-mat_superlu_ilu_norm <2>: ILU_Norm (None) > ?-mat_superlu_ilu_milu <0>: ILU_MILU (None) > >> Any suggestion to set some parameters to make it more rubost (and fast)? > > I would also test '-sub_pc_type lu' which might be more efficient than ilu > if memory is not an issue. > Mumps sequential LU often outperforms other sequential lu. > You can 1) install mumps (needs scalapack, blacs and F90 compiler), then > 2) run with '-sub_pc_type lu -pc_factor_mat_solver_package mumps'. > > Hong >> >> Gong Ding >> >> >> >> >> > From jedbrown at mcs.anl.gov Fri Nov 4 23:39:34 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 4 Nov 2011 22:39:34 -0600 Subject: [petsc-users] Is it possible to add KLU to petsc as external solver package? In-Reply-To: <3932027.307131320393578911.JavaMail.coremail@mail.ustc.edu> References: <3932027.307131320393578911.JavaMail.coremail@mail.ustc.edu> Message-ID: 2011/11/4 Gong Ding > KLU developed by Prof Tim Davis is suitable for solve circuit sparse > matrix. > And it shares some component with UMFPACK. > It's certainly possible, it just takes time. Do you know why it is a separate package from Umfpack? Doesn't it do the same thing on a matrix of the same format, just using a different algorithm? 
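For reference, the external factorization packages that already have PETSc interfaces in this thread (SuperLU, UMFPACK, MUMPS) are all selected at run time through the factorization-package options, so a KLU interface would presumably be exposed the same way once it exists. A hedged example combining the settings Hong and Sherry mention above for the ASM case (the numerical values are only illustrative starting points):

    # SuperLU ILUT on each ASM subdomain, with a looser drop tolerance and smaller fill factor
    -pc_type asm -sub_pc_type ilu -sub_pc_factor_mat_solver_package superlu \
        -mat_superlu_ilu_droptol 0.01 -mat_superlu_ilu_fillfactor 5

    # Full LU per subdomain through MUMPS, if memory permits
    -pc_type asm -sub_pc_type lu -sub_pc_factor_mat_solver_package mumps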
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 5 00:04:39 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 4 Nov 2011 23:04:39 -0600 Subject: [petsc-users] Jacobian free in SNES In-Reply-To: <4EB3B01B.1030406@lycos.com> References: <4EB3B01B.1030406@lycos.com> Message-ID: On Fri, Nov 4, 2011 at 03:27, Konstantinos Kontzialis wrote: > I solve the NS equations with the discontinuous galerkin method. The > system's residual before going to petsc is the following: > > M udot+R(u) = 0 (1) > > R(u) is the part of the residual coming from the spatially discretized > terms. I need to say that the explicit version of my code runs very well. > > But, I want to implement an implicit method for time marching using first > the backward euler scheme. After linearization I get the following form: > > M/dt u^{n+1} + dR/du (u^{n+1}-u^{n}) = M/dt u^{n} -R(u^{n}) (2) > Why not just have your SNESFormFunction compute: G(u^{n+1}) = M (u^{n+1} - u^n) / dt + R(u^{n+1}) = 0 and then J = dG/du^{n+1}? An alternative which I recommend is to use TSSetIFunction() and TSSetIJacobian(). Then you can run backward Euler with -ts_type beuler, but you can also run lots of other methods, e.g. -ts_type rosw to use Rosenbrock-W methods with adaptive error control (these also tolerate approximations to the Jacobian, such as dropping the non-stiff convective terms). -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Sat Nov 5 08:23:58 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sat, 5 Nov 2011 14:23:58 +0100 Subject: [petsc-users] fixed point interations Message-ID: I am a newcomer to Petsc non-linear capabilities, so far implementing such things myself, only delegating linear solves to Petsc. I want to start small by porting a very simple code using fixed point iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), then solved by KSP for x, then x0 is updated to x, then repeat until convergence. In the documentation chapter 5 I see all sorts of sophisticated Newton type methods, requiring computation of the Jacobian. Is the above defined simple method still accessible somehow in Petsc or such triviality can only be done by hand? Which one from the existing nonlinear solvers would be a closest match both in simplicity and robustness (even if slow performance)? Regards, Dominik From dominik at itis.ethz.ch Sat Nov 5 08:52:23 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sat, 5 Nov 2011 14:52:23 +0100 Subject: [petsc-users] valgrind error only with MUMPS In-Reply-To: References: Message-ID: >> there are no errors reported. Problem with MUMPS? > > ? Yes. Looks like they are being sloppy. Whether this is an actual error condition or irrelevant only they would know. Report it to them. > > ? Barry Reported. They responded that the valgrind error is harmless and will be fixed. Dominik >> ==19601== Syscall param writev(vector[...]) points to uninitialised byte(s) >> ==19601== ? ?at 0x6791F59: writev (writev.c:56) >> ==19601== ? ?by 0x552CD8: MPIDU_Sock_writev (sock_immed.i:610) >> ==19601== ? ?by 0x5550DB: MPIDI_CH3_iSendv (ch3_isendv.c:84) >> ==19601== ? ?by 0x53B83A: MPIDI_CH3_EagerContigIsend (ch3u_eager.c:541) >> ==19601== ? ?by 0x53DA03: MPID_Isend (mpid_isend.c:130) >> ==19601== ? ?by 0x1440C71: PMPI_Isend (isend.c:124) >> ==19601== ? ?by 0x1453DED: MPI_ISEND (isendf.c:190) >> ==19601== ? 
?by 0x124069F: __dmumps_comm_buffer_MOD_dmumps_62 >> (dmumps_comm_buffer.F:567) >> ==19601== ? ?by 0x12C1B4C: dmumps_242_ (dmumps_part2.F:739) >> ==19601== ? ?by 0x120C987: dmumps_249_ (dmumps_part8.F:6541) >> ==19601== ? ?by 0x120599F: dmumps_245_ (dmumps_part8.F:3885) >> ==19601== ? ?by 0x1229BCF: dmumps_301_ (dmumps_part8.F:2120) >> ==19601== ? ?by 0x128D272: dmumps_ (dmumps_part1.F:665) >> ==19601== ? ?by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) >> ==19601== ? ?by 0x1151A79: dmumps_c (mumps_c.c:422) >> ==19601== ? ?by 0x791075: MatSolve_MUMPS (mumps.c:547) >> ==19601== ? ?by 0x73ABA0: MatSolve (matrix.c:3108) >> ==19601== ? ?by 0x7FC5B9: PCApply_LU (lu.c:204) >> ==19601== ? ?by 0xD95AAB: PCApply (precon.c:383) >> ==19601== ? ?by 0xD98A70: PCApplyBAorAB (precon.c:609) >> ==19601== ?Address 0x86df168 is 8 bytes inside a block of size 144 alloc'd >> ==19601== ? ?at 0x4C28F9F: malloc (vg_replace_malloc.c:236) >> ==19601== ? ?by 0x124235E: __dmumps_comm_buffer_MOD_dmumps_2 >> (dmumps_comm_buffer.F:175) >> ==19601== ? ?by 0x12426E7: __dmumps_comm_buffer_MOD_dmumps_55 >> (dmumps_comm_buffer.F:123) >> ==19601== ? ?by 0x121F28B: dmumps_301_ (dmumps_part8.F:989) >> ==19601== ? ?by 0x128D272: dmumps_ (dmumps_part1.F:665) >> ==19601== ? ?by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) >> ==19601== ? ?by 0x1151A79: dmumps_c (mumps_c.c:422) >> ==19601== ? ?by 0x791075: MatSolve_MUMPS (mumps.c:547) >> ==19601== ? ?by 0x73ABA0: MatSolve (matrix.c:3108) >> ==19601== ? ?by 0x7FC5B9: PCApply_LU (lu.c:204) >> ==19601== ? ?by 0xD95AAB: PCApply (precon.c:383) >> ==19601== ? ?by 0xD98A70: PCApplyBAorAB (precon.c:609) >> ==19601== ? ?by 0x8D817F: KSPSolve_BCGS (bcgs.c:79) >> ==19601== ? ?by 0x86B189: KSPSolve (itfunc.c:423) >> ==19601== ? ?by 0x56013D: PetscLinearSystem::Solve() (PetscLinearSystem.cxx:221) >> ==19601== ? ?by 0x4E34F6: SolidSolver::Solve() (SolidSolver.cxx:1153) >> ==19601== ? ?by 0x4EFF56: main (SolidSolverMain.cxx:684) From knepley at gmail.com Sat Nov 5 09:06:02 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 5 Nov 2011 14:06:02 +0000 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 1:23 PM, Dominik Szczerba wrote: > I am a newcomer to Petsc non-linear capabilities, so far implementing > such things myself, only delegating linear solves to Petsc. > > I want to start small by porting a very simple code using fixed point > iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), > then solved by KSP for x, then x0 is updated to x, then repeat until > convergence. > > In the documentation chapter 5 I see all sorts of sophisticated Newton > type methods, requiring computation of the Jacobian. Is the above > defined simple method still accessible somehow in Petsc or such > triviality can only be done by hand? Which one from the existing > nonlinear solvers would be a closest match both in simplicity and > robustness (even if slow performance)? > You want -snes_type nrichardson. All you need is to define the residual. Matt > Regards, > Dominik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
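To make "define the residual" concrete, here is a minimal sketch of the SNES setup for Dominik's problem; AppCtx, assemble_A and assemble_b are placeholder names for whatever builds A(x) and b(x), not existing routines:

    typedef struct {
      Mat A;        /* current A(x) */
      Vec b;        /* current b(x) */
    } AppCtx;

    /* F(x) = A(x) x - b(x) */
    PetscErrorCode FormFunction(SNES snes, Vec x, Vec F, void *ctx)
    {
      AppCtx         *user = (AppCtx*)ctx;
      PetscErrorCode  ierr;

      PetscFunctionBegin;
      ierr = assemble_A(user, x);CHKERRQ(ierr);        /* placeholder: rebuild user->A at x */
      ierr = assemble_b(user, x);CHKERRQ(ierr);        /* placeholder: rebuild user->b at x */
      ierr = MatMult(user->A, x, F);CHKERRQ(ierr);     /* F  = A(x) x */
      ierr = VecAXPY(F, -1.0, user->b);CHKERRQ(ierr);  /* F -= b(x)   */
      PetscFunctionReturn(0);
    }

    ierr = SNESSetFunction(snes, r, FormFunction, &user);CHKERRQ(ierr);

With -snes_type nrichardson this residual is all SNES uses (there is no linear solve), which is not quite the A(x0) x = b(x0) iteration Dominik describes; Jed's reply below shows how to recover that iteration by caching A(x) in the user context and returning it as the "Jacobian" in the usual Newton path (defect correction / Picard).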
URL: From jedbrown at mcs.anl.gov Sat Nov 5 10:45:25 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 09:45:25 -0600 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 08:06, Matthew Knepley wrote: > I want to start small by porting a very simple code using fixed point >> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), >> then solved by KSP for x, then x0 is updated to x, then repeat until >> convergence. >> > Run the usual "Newton" methods with A(x) in place of the true Jacobian. You can compute A(x) in the residual F(x) = A(x) x - b(x) and cache it in your user context, then pass it back when asked to compute the Jacobian. This runs your algorithm (often called Picard) in "defect correction mode", but once you write your equations this way, you can try Newton iteration using -snes_mf_operator. > >> In the documentation chapter 5 I see all sorts of sophisticated Newton >> type methods, requiring computation of the Jacobian. Is the above >> defined simple method still accessible somehow in Petsc or such >> triviality can only be done by hand? Which one from the existing >> nonlinear solvers would be a closest match both in simplicity and >> robustness (even if slow performance)? >> > > You want -snes_type nrichardson. All you need is to define the residual. > Matt, were the 1000 emails we exchanged over this last month not enough to prevent you from spreading misinformation under a different name? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Nov 5 10:51:47 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 5 Nov 2011 15:51:47 +0000 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 3:45 PM, Jed Brown wrote: > On Sat, Nov 5, 2011 at 08:06, Matthew Knepley wrote: > >> I want to start small by porting a very simple code using fixed point >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), >>> then solved by KSP for x, then x0 is updated to x, then repeat until >>> convergence. >>> >> > Run the usual "Newton" methods with A(x) in place of the true Jacobian. > You can compute A(x) in the residual > > F(x) = A(x) x - b(x) > > and cache it in your user context, then pass it back when asked to compute > the Jacobian. > > This runs your algorithm (often called Picard) in "defect correction > mode", but once you write your equations this way, you can try Newton > iteration using -snes_mf_operator. > > >> >>> In the documentation chapter 5 I see all sorts of sophisticated Newton >>> type methods, requiring computation of the Jacobian. Is the above >>> defined simple method still accessible somehow in Petsc or such >>> triviality can only be done by hand? Which one from the existing >>> nonlinear solvers would be a closest match both in simplicity and >>> robustness (even if slow performance)? >>> >> >> You want -snes_type nrichardson. All you need is to define the residual. >> > > Matt, were the 1000 emails we exchanged over this last month not enough to > prevent you from spreading misinformation under a different name? > Tell people whatever you want. The above is just Newton or "iterative refinement" in thousands of NA papers. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 5 10:55:57 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 09:55:57 -0600 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 09:51, Matthew Knepley wrote: > Tell people whatever you want. The above is just Newton or "iterative > refinement" in thousands of NA papers. Dominik asked for exactly the method frequently called Picard (but call it whatever you want). Specifically, he requested a method that does a linear solve. You suggested a method that has no linear solve at all (unless the user manages it entirely on their own, by putting it into the residual evaluation). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Sat Nov 5 12:07:55 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Sat, 05 Nov 2011 19:07:55 +0200 Subject: [petsc-users] Jacobian free in SNES In-Reply-To: References: Message-ID: <4EB56D6B.5000306@lycos.com> On 11/05/2011 05:45 PM, petsc-users-request at mcs.anl.gov wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. Re: Superlu based ILUT report Zero pivot (Xiaoye S. Li) > 2. Re: Is it possible to add KLU to petsc as external solver > package? (Jed Brown) > 3. Re: Jacobian free in SNES (Jed Brown) > 4. fixed point interations (Dominik Szczerba) > 5. Re: valgrind error only with MUMPS (Dominik Szczerba) > 6. Re: fixed point interations (Matthew Knepley) > 7. Re: fixed point interations (Jed Brown) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 4 Nov 2011 11:27:29 -0700 > From: "Xiaoye S. Li" > Subject: Re: [petsc-users] Superlu based ILUT report Zero pivot > To: PETSc users list, gdiso at ustc.edu > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > The default settings in ILUTP of superlu is "conservative", probably > with less dropping than people want. The 2 most relevant parameters > to control this are: > ILU_DropTol : (default 1e-4), you can try to set it a larger, e.g., 0.01 > ILU_FillFactor : (default 10), this allows about a factor of 10 > growth in the L& U factors. > You can set it lower, e.g., 5 or smaller. > > I am a little surprised that you see Zero Pivot (e.g., at column 74). > Since the code does partial pivoting, always tries to pick up the > largest magnitude entry in the current column to pivot. "Zero pivot at > column 74" means at this step column 74 becomes entirely empty. > Perhaps from numerical cancellation, Or the matrix is numerically > singular? > > The default setting also uses MC64 to permute large entry to the > diagonal during preprocessing. (Hong, can you please confirm > LargeDiag is set, right?) > > If you can send me the matrix in coordinate format (i.e., triplet for > each nonzero), I will be happy to try it. 
> > Sherry Li > > > On Wed, Nov 2, 2011 at 9:02 AM, Hong Zhang wrote: >> Gong : >> >>> I tried ASM + Superlu ILUT for my semicondcutor simulation code. >>> For most of the problems, the ILUT preconditioner is very strong. >>> Fewer KSP iterations are needed to get linear solver convergence compared with default ILU(0) preconditioner. >>> However, ILUT is a bit slow with its default parameters. >>> And sometimes it fault with >>> Fatal Error:Zero pivot in row 74 at line 186 in superlu.c >> We do not have much experience with Superlu ILUT. >> What you can do is try various options provided by superlu >> ?-mat_superlu_ilu_droptol<0.0001>: ILU_DropTol (None) >> ?-mat_superlu_ilu_filltol<0.01>: ILU_FillTol (None) >> ?-mat_superlu_ilu_fillfactor<10>: ILU_FillFactor (None) >> ?-mat_superlu_ilu_droprull<9>: ILU_DropRule (None) >> ?-mat_superlu_ilu_norm<2>: ILU_Norm (None) >> ?-mat_superlu_ilu_milu<0>: ILU_MILU (None) >> >>> Any suggestion to set some parameters to make it more rubost (and fast)? >> I would also test '-sub_pc_type lu' which might be more efficient than ilu >> if memory is not an issue. >> Mumps sequential LU often outperforms other sequential lu. >> You can 1) install mumps (needs scalapack, blacs and F90 compiler), then >> 2) run with '-sub_pc_type lu -pc_factor_mat_solver_package mumps'. >> >> Hong >>> Gong Ding >>> >>> >>> >>> >>> > > ------------------------------ > > Message: 2 > Date: Fri, 4 Nov 2011 22:39:34 -0600 > From: Jed Brown > Subject: Re: [petsc-users] Is it possible to add KLU to petsc as > external solver package? > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > 2011/11/4 Gong Ding > >> KLU developed by Prof Tim Davis is suitable for solve circuit sparse >> matrix. >> And it shares some component with UMFPACK. >> > It's certainly possible, it just takes time. Do you know why it is a > separate package from Umfpack? Doesn't it do the same thing on a matrix of > the same format, just using a different algorithm? > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 3 > Date: Fri, 4 Nov 2011 23:04:39 -0600 > From: Jed Brown > Subject: Re: [petsc-users] Jacobian free in SNES > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On Fri, Nov 4, 2011 at 03:27, Konstantinos Kontzialis> wrote: >> I solve the NS equations with the discontinuous galerkin method. The >> system's residual before going to petsc is the following: >> >> M udot+R(u) = 0 (1) >> >> R(u) is the part of the residual coming from the spatially discretized >> terms. I need to say that the explicit version of my code runs very well. >> >> But, I want to implement an implicit method for time marching using first >> the backward euler scheme. After linearization I get the following form: >> >> M/dt u^{n+1} + dR/du (u^{n+1}-u^{n}) = M/dt u^{n} -R(u^{n}) (2) >> > Why not just have your SNESFormFunction compute: > > G(u^{n+1}) = M (u^{n+1} - u^n) / dt + R(u^{n+1}) = 0 > > and then J = dG/du^{n+1}? > > An alternative which I recommend is to use TSSetIFunction() and > TSSetIJacobian(). Then you can run backward Euler with -ts_type beuler, but > you can also run lots of other methods, e.g. -ts_type rosw to use > Rosenbrock-W methods with adaptive error control (these also tolerate > approximations to the Jacobian, such as dropping the non-stiff convective > terms). > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > > ------------------------------ > > Message: 4 > Date: Sat, 5 Nov 2011 14:23:58 +0100 > From: Dominik Szczerba > Subject: [petsc-users] fixed point interations > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > I am a newcomer to Petsc non-linear capabilities, so far implementing > such things myself, only delegating linear solves to Petsc. > > I want to start small by porting a very simple code using fixed point > iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), > then solved by KSP for x, then x0 is updated to x, then repeat until > convergence. > > In the documentation chapter 5 I see all sorts of sophisticated Newton > type methods, requiring computation of the Jacobian. Is the above > defined simple method still accessible somehow in Petsc or such > triviality can only be done by hand? Which one from the existing > nonlinear solvers would be a closest match both in simplicity and > robustness (even if slow performance)? > > Regards, > Dominik > > > ------------------------------ > > Message: 5 > Date: Sat, 5 Nov 2011 14:52:23 +0100 > From: Dominik Szczerba > Subject: Re: [petsc-users] valgrind error only with MUMPS > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > >>> there are no errors reported. Problem with MUMPS? >> ? Yes. Looks like they are being sloppy. Whether this is an actual error condition or irrelevant only they would know. Report it to them. >> >> ? Barry > Reported. They responded that the valgrind error is harmless and will be fixed. > > Dominik > >>> ==19601== Syscall param writev(vector[...]) points to uninitialised byte(s) >>> ==19601== ? ?at 0x6791F59: writev (writev.c:56) >>> ==19601== ? ?by 0x552CD8: MPIDU_Sock_writev (sock_immed.i:610) >>> ==19601== ? ?by 0x5550DB: MPIDI_CH3_iSendv (ch3_isendv.c:84) >>> ==19601== ? ?by 0x53B83A: MPIDI_CH3_EagerContigIsend (ch3u_eager.c:541) >>> ==19601== ? ?by 0x53DA03: MPID_Isend (mpid_isend.c:130) >>> ==19601== ? ?by 0x1440C71: PMPI_Isend (isend.c:124) >>> ==19601== ? ?by 0x1453DED: MPI_ISEND (isendf.c:190) >>> ==19601== ? ?by 0x124069F: __dmumps_comm_buffer_MOD_dmumps_62 >>> (dmumps_comm_buffer.F:567) >>> ==19601== ? ?by 0x12C1B4C: dmumps_242_ (dmumps_part2.F:739) >>> ==19601== ? ?by 0x120C987: dmumps_249_ (dmumps_part8.F:6541) >>> ==19601== ? ?by 0x120599F: dmumps_245_ (dmumps_part8.F:3885) >>> ==19601== ? ?by 0x1229BCF: dmumps_301_ (dmumps_part8.F:2120) >>> ==19601== ? ?by 0x128D272: dmumps_ (dmumps_part1.F:665) >>> ==19601== ? ?by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) >>> ==19601== ? ?by 0x1151A79: dmumps_c (mumps_c.c:422) >>> ==19601== ? ?by 0x791075: MatSolve_MUMPS (mumps.c:547) >>> ==19601== ? ?by 0x73ABA0: MatSolve (matrix.c:3108) >>> ==19601== ? ?by 0x7FC5B9: PCApply_LU (lu.c:204) >>> ==19601== ? ?by 0xD95AAB: PCApply (precon.c:383) >>> ==19601== ? ?by 0xD98A70: PCApplyBAorAB (precon.c:609) >>> ==19601== ?Address 0x86df168 is 8 bytes inside a block of size 144 alloc'd >>> ==19601== ? ?at 0x4C28F9F: malloc (vg_replace_malloc.c:236) >>> ==19601== ? ?by 0x124235E: __dmumps_comm_buffer_MOD_dmumps_2 >>> (dmumps_comm_buffer.F:175) >>> ==19601== ? ?by 0x12426E7: __dmumps_comm_buffer_MOD_dmumps_55 >>> (dmumps_comm_buffer.F:123) >>> ==19601== ? ?by 0x121F28B: dmumps_301_ (dmumps_part8.F:989) >>> ==19601== ? ?by 0x128D272: dmumps_ (dmumps_part1.F:665) >>> ==19601== ? ?by 0x1178753: dmumps_f77_ (dmumps_part3.F:6651) >>> ==19601== ? ?by 0x1151A79: dmumps_c (mumps_c.c:422) >>> ==19601== ? 
?by 0x791075: MatSolve_MUMPS (mumps.c:547) >>> ==19601== ? ?by 0x73ABA0: MatSolve (matrix.c:3108) >>> ==19601== ? ?by 0x7FC5B9: PCApply_LU (lu.c:204) >>> ==19601== ? ?by 0xD95AAB: PCApply (precon.c:383) >>> ==19601== ? ?by 0xD98A70: PCApplyBAorAB (precon.c:609) >>> ==19601== ? ?by 0x8D817F: KSPSolve_BCGS (bcgs.c:79) >>> ==19601== ? ?by 0x86B189: KSPSolve (itfunc.c:423) >>> ==19601== ? ?by 0x56013D: PetscLinearSystem::Solve() (PetscLinearSystem.cxx:221) >>> ==19601== ? ?by 0x4E34F6: SolidSolver::Solve() (SolidSolver.cxx:1153) >>> ==19601== ? ?by 0x4EFF56: main (SolidSolverMain.cxx:684) > > ------------------------------ > > Message: 6 > Date: Sat, 5 Nov 2011 14:06:02 +0000 > From: Matthew Knepley > Subject: Re: [petsc-users] fixed point interations > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Sat, Nov 5, 2011 at 1:23 PM, Dominik Szczerbawrote: > >> I am a newcomer to Petsc non-linear capabilities, so far implementing >> such things myself, only delegating linear solves to Petsc. >> >> I want to start small by porting a very simple code using fixed point >> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), >> then solved by KSP for x, then x0 is updated to x, then repeat until >> convergence. >> >> In the documentation chapter 5 I see all sorts of sophisticated Newton >> type methods, requiring computation of the Jacobian. Is the above >> defined simple method still accessible somehow in Petsc or such >> triviality can only be done by hand? Which one from the existing >> nonlinear solvers would be a closest match both in simplicity and >> robustness (even if slow performance)? >> > You want -snes_type nrichardson. All you need is to define the residual. > > Matt > > >> Regards, >> Dominik >> > > Dear all, Following the answer I got,I coded the following: ierr = TSCreate(sys.comm,&sys.ts); CHKERRQ(ierr); ierr = TSSetProblemType(sys.ts, TS_NONLINEAR); CHKERRQ(ierr); ierr = TSSetIFunction(sys.ts, base_residual_implicit,&sys); CHKERRQ(ierr); ISColoring iscoloring; MatFDColoring fdcoloring; ierr = jacobian_diff_numerical(sys,&sys.P); CHKERRQ(ierr); ierr = MatGetColoring(sys.P, MATCOLORING_SL,&iscoloring); CHKERRQ(ierr); ierr = MatFDColoringCreate(sys.P, iscoloring,&fdcoloring); CHKERRQ(ierr); ierr = MatFDColoringSetFunction(fdcoloring, base_residual_implicit, &sys); CHKERRQ(ierr); ierr = TSSetIJacobian(sys.ts, sys.P, sys.P, TS, fdcoloring); CHKERRQ(ierr); ierr = TSSetInitialTimeStep(sys.ts, sys.con->tm, sys.con->dt); CHKERRQ(ierr); ierr = TSSetSolution(sys.ts, sys.gsv); CHKERRQ(ierr); ierr = TSSetDuration(sys.ts, 100e+6, sys.con->etime); CHKERRQ(ierr); ierr = TSSetFromOptions(sys.ts); CHKERRQ(ierr); ierr = TSSetUp(sys.ts); CHKERRQ(ierr); ierr = TSGetSNES(sys.ts,&sys.snes); CHKERRQ(ierr); ierr = MatCreateSNESMF(sys.snes,&sys.J); CHKERRQ(ierr); ierr = MatMFFDSetFromOptions(sys.J); CHKERRQ(ierr); ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, fdcoloring); CHKERRQ(ierr); ierr = TSView(sys.ts, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); i = 10000; ierr = TSStep(sys.ts,&i,&sys.con->etime); CHKERRQ(ierr); I run with : mpiexec -n 8 valgrind ./hoac cylinder -llf_flux -n_out 5 -end_time 10000.0 -ts_type beuler -pc_type asm -ksp_right_pc -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm 1.0e-8 -gl -sub_pc_factor_levels 2 -dt 2.0e-1 -snes_monitor -snes_stol 1.0e-50 -snes_ksp_ew -snes_ls_maxstep 10.0 -snes_stol 1.0e-50 -snes_max_linear_solve_fail 50 -snes_max_fail 50 -ksp_monitor_true_residual but I get 
the following: ********************************************************************** METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota Graph Information --------------------------------------------------- Name: mesh.graph, #Vertices: 400, #Edges: 780, #Parts: 8 Recursive Partitioning... ------------------------------------------- 8-way Edge-Cut: 102, Balance: 1.00 Timing Information -------------------------------------------------- I/O: 0.000 Partitioning: 0.000 (PMETIS time) Total: 0.000 ********************************************************************** Approximation order = 2 # DOF = 13680 # nodes in mesh = 400 # elements in mesh = 380 Euler solution Using LLF flux 0 KSP preconditioned resid norm 2.209663952405e+02 true resid norm 2.209663952405e+02 ||Ae||/||Ax|| 1.000000000000e+00 1 KSP preconditioned resid norm 6.104452938496e-14 true resid norm 7.083522209975e-14 ||Ae||/||Ax|| 3.205701121324e-16 TS Object: type: beuler maximum steps=100000000 maximum time=10000 total number of nonlinear solver iterations=0 total number of linear solver iterations=0 SNES Object: type: ls line search variant: SNESLineSearchCubic alpha=0.0001, maxstep=10, minlambda=1e-12 maximum iterations=50, maximum function evaluations=10000 tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 total number of linear solver iterations=0 total number of function evaluations=0 Eisenstat-Walker computation of KSP relative tolerance (version 2) rtol_0=0.3, rtol_max=0.9, threshold=0.1 gamma=1, alpha=1.61803, alpha2=1.61803 KSP Object: type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 right preconditioning using PRECONDITIONED norm type for convergence test PC Object: type: asm Additive Schwarz: total subdomain blocks not yet set, amount of overlap = 1 Additive Schwarz: restriction/interpolation type - RESTRICT linear system matrix = precond matrix: Matrix Object: type=mpibaij, rows=13680, cols=13680 total: nonzeros=602640, allocated nonzeros=246960 block size is 9 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Null argument, when expecting valid pointer! [0]PETSC ERROR: Null Object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 [0]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Tue Sep 27 13:09:04 2011 [0]PETSC ERROR: Configure options --with-debugging=1 --with-shared=1 --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich2/bin [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatMult() line 1878 in src/mat/interface/matrix.c [3]PETSC ERROR: [5]PETSC ERROR: [4]PETSC ERROR: [1]PETSC ERROR: [2]PETSC ERROR: [7]PETSC ERROR: [6]PETSC ERROR: [0]PETSC ERROR: TSComputeRHSFunction() line 309 in src/ts/interface/ts.c [0]PETSC ERROR: TSBEulerFunction() line 222 in src/ts/impls/implicit/beuler/beuler.c --------------------- Error Message ------------------------------------ --------------------- Error Message ------------------------------------ [0]PETSC ERROR: --------------------- Error Message ------------------------------------ --------------------- Error Message ------------------------------------ SNESComputeFunction() line 1109 in src/snes/interface/snes.c --------------------- Error Message ------------------------------------ --------------------- Error Message ------------------------------------ [0]PETSC ERROR: SNESSolve_LS() line 159 in src/snes/impls/ls/ls.c [0]PETSC ERROR: SNESSolve() line 2255 in src/snes/interface/snes.c [3]PETSC ERROR: Null argument, when expecting valid pointer! [1]PETSC ERROR: [4]PETSC ERROR: [5]PETSC ERROR: Null argument, when expecting valid pointer! Null argument, when expecting valid pointer! Null argument, when expecting valid pointer! [0]PETSC ERROR: [3]PETSC ERROR: Null Object: Parameter # 1! [1]PETSC ERROR: TSStep_BEuler_Nonlinear() line 176 in src/ts/impls/implicit/beuler/beuler.c Null Object: Parameter # 1! [4]PETSC ERROR: [2]PETSC ERROR: [5]PETSC ERROR: [7]PETSC ERROR: Null Object: Parameter # 1! Null Object: Parameter # 1! Null argument, when expecting valid pointer! Null argument, when expecting valid pointer! [3]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ [4]PETSC ERROR: ------------------------------------------------------------------------ [5]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: [3]PETSC ERROR: [1]PETSC ERROR: Null Object: Parameter # 1! [2]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 Null Object: Parameter # 1! 
Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 [5]PETSC ERROR: [4]PETSC ERROR: [7]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 [1]PETSC ERROR: [2]PETSC ERROR: [3]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 ------------------------------------------------------------------------ See docs/changes/index.html for recent updates. ------------------------------------------------------------------------ See docs/changes/index.html for recent updates. [5]PETSC ERROR: [4]PETSC ERROR: [3]PETSC ERROR: See docs/changes/index.html for recent updates. [7]PETSC ERROR: [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 See docs/changes/index.html for recent updates. See docs/faq.html for hints about trouble shooting. Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 [5]PETSC ERROR: [3]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [7]PETSC ERROR: [4]PETSC ERROR: See docs/index.html for manual pages. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. See docs/changes/index.html for recent updates. See docs/index.html for manual pages. [2]PETSC ERROR: [5]PETSC ERROR: See docs/changes/index.html for recent updates. See docs/index.html for manual pages. [4]PETSC ERROR: [1]PETSC ERROR: [3]PETSC ERROR: [7]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ See docs/index.html for manual pages. See docs/faq.html for hints about trouble shooting. [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [5]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: [7]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: [1]PETSC ERROR: [3]PETSC ERROR: See docs/index.html for manual pages. See docs/index.html for manual pages. 
./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 [2]PETSC ERROR: [5]PETSC ERROR: [7]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 [3]PETSC ERROR: [1]PETSC ERROR: [4]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib [0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 [5]PETSC ERROR: [1]PETSC ERROR: [7]PETSC ERROR: [2]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib [3]PETSC ERROR: Configure run at Tue Sep 27 13:09:04 2011 ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 19:02:12 2011 Configure run at Tue Sep 27 13:09:04 2011 [4]PETSC ERROR: [5]PETSC ERROR: Configure run at Tue Sep 27 13:09:04 2011 [1]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib [3]PETSC ERROR: [2]PETSC ERROR: TSStep() line 1693 in src/ts/interface/ts.c Configure options --with-debugging=1 --with-shared=1 --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich2/bin Configure options --with-debugging=1 --with-shared=1 --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich2/bin [7]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib Libraries linked from /home/kontzialis/petsc-3.1-p8/linux-gnu-c-debug/lib [5]PETSC ERROR: [4]PETSC ERROR: [1]PETSC ERROR: [3]PETSC ERROR: Configure options --with-debugging=1 --with-shared=1 --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich2/bin What should I do? 
Kostas From jedbrown at mcs.anl.gov Sat Nov 5 12:16:06 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 11:16:06 -0600 Subject: [petsc-users] Jacobian free in SNES In-Reply-To: <4EB56D6B.5000306@lycos.com> References: <4EB56D6B.5000306@lycos.com> Message-ID: On Sat, Nov 5, 2011 at 11:07, Konstantinos Kontzialis wrote: > What should I do? Please upgrade to petsc-3.2. TS has improved greatly and there is no point using the old stuff. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.bodner at unil.ch Sat Nov 5 13:28:14 2011 From: robert.bodner at unil.ch (robert) Date: Sat, 05 Nov 2011 19:28:14 +0100 Subject: [petsc-users] help for beginner :-) Message-ID: <1320517694.29400.37.camel@robert> Hello, I am using petsc indirectely by including it in some finite-element program (libmesh). When starting the calculation I can specify petsc solvers and preconditioners via the command line. Briefly some words about my calculation: I am carrying out heat-diffusion calculations around a granitic intrusion in Patagonia. These intrusions happen rapidly - so the initial conditions are an unsteady function: about 1000? for the intrusion, about 200? around it. The area is quite large and I am going to have equation systems with about 50 000 000 - 100 000 000 unknowns (but I have access to a BlueGEne/P computer). Since I am a geologist and not very familiar with linear solvers I am having some trouble. I have tried the solvers which I found in the manual and combined them with different preconditioners. However, I could not find out how to check the convergence - it says something about a 'reason' - variable. question 1: ----------- How can I get an output of the reason value to the console via a command line option - in the manual only the petsc-function/method for it is given??? question 2: ---------- I have tried the option -ksp_monitor_true_residual but I don't know how to interpret the output. 
Let me give some examples: **************************** EX1 0 KSP preconditioned resid norm 4.638186593787e+13 true resid norm 9.794488229448e+17 ||Ae||/||Ax|| 9.424443120337e-03 1 KSP preconditioned resid norm 6.549961009128e+12 true resid norm 9.794488254182e+17 ||Ae||/||Ax|| 9.424443144136e-03 2 KSP preconditioned resid norm 3.500402917464e+12 true resid norm 9.794488256825e+17 ||Ae||/||Ax|| 9.424443146680e-03 3 KSP preconditioned resid norm 1.924775486173e+12 true resid norm 9.794488258218e+17 ||Ae||/||Ax|| 9.424443148020e-03 4 KSP preconditioned resid norm 8.761952242321e+11 true resid norm 9.794488303770e+17 ||Ae||/||Ax|| 9.424443191851e-03 5 KSP preconditioned resid norm 5.566943322481e+11 true resid norm 9.794488372989e+17 ||Ae||/||Ax|| 9.424443258455e-03 6 KSP preconditioned resid norm 5.352020682919e+11 true resid norm 9.794488438566e+17 ||Ae||/||Ax|| 9.424443321554e-03 7 KSP preconditioned resid norm 5.337589084246e+11 true resid norm 9.794488438151e+17 ||Ae||/||Ax|| 9.424443321154e-03 8 KSP preconditioned resid norm 5.282047128925e+11 true resid norm 9.794488438157e+17 ||Ae||/||Ax|| 9.424443321160e-03 9 KSP preconditioned resid norm 8.645625099558e+11 true resid norm 9.794488437876e+17 ||Ae||/||Ax|| 9.424443320891e-03 **************************** EX2 0 KSP preconditioned resid norm 0.000000000000e+00 true resid norm 9.794488229448e+17 ||Ae||/||Ax|| 9.424443120337e-03 **************************** EX3 0 KSP preconditioned resid norm 9.794488229448e+17 true resid norm 9.794488229448e+17 ||Ae||/||Ax|| 9.424443120337e-03 1 KSP preconditioned resid norm 9.794488116621e+17 true resid norm 9.794488116621e+17 ||Ae||/||Ax|| 9.424443011772e-03 2 KSP preconditioned resid norm 9.776352381553e+17 true resid norm 9.776352381553e+17 ||Ae||/||Ax|| 9.406992462076e-03 3 KSP preconditioned resid norm 9.776352045742e+17 true resid norm 9.776352045742e+17 ||Ae||/||Ax|| 9.406992138953e-03 ***************************** ***************************** Did I get it right that the resid norm and true resid norm should be more or less the same value? ||Ae||/||Ax||: Ax is clear, but what does the e in Ae stand for? How do I interpret this value? Is this a comparison beteen the 'real' solution and the calculated solution. I recognized that the iterations stopped when this value got smaller than 1e-12. question 3: ----------- I am sure that there are many people out there who have a lot of experience with my kind of systems - I mean diffusional heat tranfer is a standard problem in many different fields. Which solution algorithm (solver, preconditioner) would you suggest for solving in parallel? Which ones do I have to avoid (e.g. not stable)??? I have tried to play around with the different possibilities for some days - but some ideas would help me a lot. Thank you very much, Robert From ckontzialis at lycos.com Sat Nov 5 14:40:19 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Sat, 05 Nov 2011 21:40:19 +0200 Subject: [petsc-users] Jacobian free in SNES In-Reply-To: References: Message-ID: <4EB59123.2070600@lycos.com> On 11/05/2011 08:28 PM, petsc-users-request at mcs.anl.gov wrote: > Jacobian free in SNES Dear all, I upgraded to version 3.2. 
However, I still have some problems: I code the following: ierr = TSCreate(sys.comm, &sys.ts); CHKERRQ(ierr); ierr = TSSetSolution(sys.ts, sys.gsv); CHKERRQ(ierr); ierr = TSSetIFunction(sys.ts, sys.gres0, base_residual_implicit, &sys); CHKERRQ(ierr); ISColoring iscoloring; MatFDColoring fdcoloring; ierr = jacobian_diff_numerical(sys, &sys.P); CHKERRQ(ierr); ierr = MatGetColoring(sys.P, MATCOLORINGSL, &iscoloring); CHKERRQ(ierr); ierr = MatFDColoringCreate(sys.P, iscoloring, &fdcoloring); CHKERRQ(ierr); ierr = MatFDColoringSetFunction(fdcoloring, base_residual_implicit, &sys); CHKERRQ(ierr); ierr = TSSetInitialTimeStep(sys.ts, sys.con->tm, sys.con->dt); CHKERRQ(ierr); ierr = TSSetDuration(sys.ts, 100e+6, sys.con->etime); CHKERRQ(ierr); ierr = TSSetFromOptions(sys.ts); CHKERRQ(ierr); ierr = TSSetUp(sys.ts); CHKERRQ(ierr); ierr = TSGetSNES(sys.ts, &sys.snes); CHKERRQ(ierr); ierr = MatCreateSNESMF(sys.snes, &sys.J); CHKERRQ(ierr); ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, fdcoloring); CHKERRQ(ierr); ierr = TSView(sys.ts, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); i = 10000; ierr = TSStep(sys.ts); CHKERRQ(ierr); I run with: mpiexec -n 8 valgrind ./hoac cylinder -llf_flux -n_out 5 -end_time 10000.0 -explicit -explicit_type 2 -ts_type beuler -pc_type asm -ksp_right_pc -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm 1.0e-8 -gl -sub_pc_factor_levels 2 -dt 2.0e-1 -snes_monitor -snes_stol 1.0e-50 -snes_ksp_ew -snes_ls_maxstep 10.0 -snes_stol 1.0e-50 -snes_max_linear_solve_fail 50 -snes_max_fail 50 -ksp_monitor_true_residual -snes_mf_operator and I get: ********************************************************************** METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota Graph Information --------------------------------------------------- Name: mesh.graph, #Vertices: 400, #Edges: 780, #Parts: 8 Recursive Partitioning... 
------------------------------------------- 8-way Edge-Cut: 102, Balance: 1.00 Timing Information -------------------------------------------------- I/O: 0.000 Partitioning: 0.000 (PMETIS time) Total: 0.000 ********************************************************************** Approximation order = 1 # DOF = 6080 # nodes in mesh = 400 # elements in mesh = 380 Euler solution Using LLF flux 0 KSP preconditioned resid norm 4.015618492963e+01 true resid norm 2.209663952405e+02 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 4.431081191507e-14 true resid norm 2.384009492855e-13 ||r(i)||/||b|| 1.078901382384e-15 TS Object: 8 MPI processes type: beuler maximum steps=100000000 maximum time=10000 total number of nonlinear solver iterations=0 total number of nonlinear solve failures=0 total number of linear solver iterations=0 total number of rejected steps=0 SNES Object: 8 MPI processes type: ls line search variant: SNESLineSearchCubic alpha=1.000000000000e-04, maxstep=1.000000000000e+01, minlambda=1.000000000000e-12 maximum iterations=50, maximum function evaluations=10000 tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 total number of linear solver iterations=0 total number of function evaluations=0 Eisenstat-Walker computation of KSP relative tolerance (version 2) rtol_0=0.3, rtol_max=0.9, threshold=0.1 gamma=1, alpha=1.61803, alpha2=1.61803 KSP Object: 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using DEFAULT norm type for convergence test PC Object: 8 MPI processes type: asm Additive Schwarz: total subdomain blocks not yet set, amount of overlap = 1 Additive Schwarz: restriction/interpolation type - RESTRICT linear system matrix: Matrix Object: 8 MPI processes type: mffd rows=6080, cols=6080 matrix-free approximation: err=1e-07 (relative error in function evaluation) The compute h routine has not yet been set 0 SNES Function norm 2.822902692891e-01 ==25984== Invalid read of size 4 ==25981== Invalid read of size 4 ==25981== at 0x4FC73CD: MatMult (matrix.c:2157) ==25981== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25981== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25981== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25981== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25981== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25981== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25981== by 0x57C43AA: SNESSolve (snes.c:2676) ==25981== by 0x58694E1: TSStep_Theta (theta.c:41) ==25981== by 0x58812D6: TSStep (ts.c:1776) ==25981== by 0x4208A5: explicit_time (explicit_time.c:148) ==25981== by 0x429E14: main (hoac.c:1250) ==25981== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25981== ==25984== at 0x4FC73CD: MatMult (matrix.c:2157) ==25984== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25984== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25984== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25984== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25984== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25984== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25984== by 0x57C43AA: SNESSolve (snes.c:2676) ==25984== by 0x58694E1: TSStep_Theta (theta.c:41) ==25984== by 0x58812D6: TSStep (ts.c:1776) 
==25984== by 0x4208A5: explicit_time (explicit_time.c:148) ==25984== by 0x429E14: main (hoac.c:1250) ==25984== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25984== ==25983== Invalid read of size 4 ==25983== at 0x4FC73CD: MatMult (matrix.c:2157) ==25983== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25983== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25983== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25983== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25983== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25983== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25983== by 0x57C43AA: SNESSolve (snes.c:2676) ==25983== by 0x58694E1: TSStep_Theta (theta.c:41) ==25983== by 0x58812D6: TSStep (ts.c:1776) ==25983== by 0x4208A5: explicit_time (explicit_time.c:148) ==25983== by 0x429E14: main (hoac.c:1250) ==25983== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25983== ==25979== Invalid read of size 4 ==25979== at 0x4FC73CD: MatMult (matrix.c:2157) ==25979== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25979== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25978== Invalid read of size 4 ==25978== at 0x4FC73CD: MatMult (matrix.c:2157) ==25982== Invalid read of size 4 ==25982== at 0x4FC73CD: MatMult (matrix.c:2157) ==25982== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25978== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25978== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25978== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25978== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25982== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25982== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25982== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25982== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25978== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25978== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25978== by 0x57C43AA: SNESSolve (snes.c:2676) ==25982== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25982== by 0x57C43AA: SNESSolve (snes.c:2676) ==25982== by 0x58694E1: TSStep_Theta (theta.c:41) ==25978== by 0x58694E1: TSStep_Theta (theta.c:41) ==25978== by 0x58812D6: TSStep (ts.c:1776) ==25978== by 0x4208A5: explicit_time (explicit_time.c:148) ==25982== by 0x58812D6: TSStep (ts.c:1776) ==25982== by 0x4208A5: explicit_time (explicit_time.c:148) ==25982== by 0x429E14: main (hoac.c:1250) ==25982== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25978== by 0x429E14: main (hoac.c:1250) ==25978== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25978== ==25982== ==25979== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25979== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25979== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25979== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25979== by 0x57C43AA: SNESSolve (snes.c:2676) ==25979== by 0x58694E1: TSStep_Theta (theta.c:41) ==25979== by 0x58812D6: TSStep (ts.c:1776) ==25979== by 0x4208A5: explicit_time (explicit_time.c:148) ==25979== by 0x429E14: main (hoac.c:1250) ==25979== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25979== ==25977== Invalid read of size 4 ==25977== at 0x4FC73CD: MatMult (matrix.c:2157) ==25977== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25977== by 0x50597C2: 
MatFDColoringApply_BAIJ (baij.c:2815) ==25977== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25977== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25977== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25977== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25977== by 0x57C43AA: SNESSolve (snes.c:2676) ==25977== by 0x58694E1: TSStep_Theta (theta.c:41) ==25977== by 0x58812D6: TSStep (ts.c:1776) ==25977== by 0x4208A5: explicit_time (explicit_time.c:148) ==25977== by 0x429E14: main (hoac.c:1250) ==25977== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25977== [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors ==25980== Invalid read of size 4 ==25980== at 0x4FC73CD: MatMult (matrix.c:2157) ==25980== by 0x406BD3: base_residual_implicit (base_residual_implicit.c:29) ==25980== by 0x50597C2: MatFDColoringApply_BAIJ (baij.c:2815) ==25980== by 0x5022C69: MatFDColoringApply (fdmatrix.c:517) ==25980== by 0x57D2E9B: SNESDefaultComputeJacobianColor (snesj2.c:48) ==25980== by 0x57B7852: SNESComputeJacobian (snes.c:1357) ==25980== by 0x57FAEF8: SNESSolve_LS (ls.c:188) ==25980== by 0x57C43AA: SNESSolve (snes.c:2676) ==25980== by 0x58694E1: TSStep_Theta (theta.c:41) ==25980== by 0x58812D6: TSStep (ts.c:1776) [4]PETSC ERROR: ==25980== by 0x4208A5: explicit_time (explicit_time.c:148) ==25980== by 0x429E14: main (hoac.c:1250) ==25980== Address 0xe800000000b84400 is not stack'd, malloc'd or (recently) free'd ==25980== [7]PETSC ERROR: [0]PETSC ERROR: likely location of problem given in stack below [6]PETSC ERROR: [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: [5]PETSC ERROR: [2]PETSC ERROR: [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: ------------------------------------------------------------------------ INSTEAD the line number of the start of the function ------------------------------------------------------------------------ [0]PETSC ERROR: is given. 
[4]PETSC ERROR: [7]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [4]PETSC ERROR: ------------------------------------------------------------------------ Try option -start_in_debugger or -on_error_attach_debugger [7]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [4]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[7]PETSC ERROR: [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[4]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [7]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [6]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [6]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger ------------------------------------------------------------------------ [6]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[6]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c [0]PETSC ERROR: [0] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [2]PETSC ERROR: [5]PETSC ERROR: [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: [2]PETSC ERROR: [5]PETSC ERROR: [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger Try option -start_in_debugger or -on_error_attach_debugger Try option -start_in_debugger or -on_error_attach_debugger [0] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c [1]PETSC ERROR: [5]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[2]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrindor see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: [0] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [1]PETSC ERROR: [5]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [4]PETSC ERROR: [0]PETSC ERROR: likely location of problem given in stack below [7]PETSC ERROR: likely location of problem given in stack below [4]PETSC ERROR: [0] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c --------------------- Stack Frames ------------------------------------ [7]PETSC ERROR: [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0] SNES user Jacobian function line 0 unknownunknown [0]PETSC ERROR: [0] SNESComputeJacobian line 1322 
/home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c [6]PETSC ERROR: likely location of problem given in stack below [6]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [3]PETSC ERROR: [1]PETSC ERROR: [5]PETSC ERROR: likely location of problem given in stack below [2]PETSC ERROR: likely location of problem given in stack below likely location of problem given in stack below [1]PETSC ERROR: [5]PETSC ERROR: [2]PETSC ERROR: --------------------- Stack Frames ------------------------------------ --------------------- Stack Frames ------------------------------------ --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [4]PETSC ERROR: [7]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, Note: The EXACT line numbers in the stack are not available, [4]PETSC ERROR: [7]PETSC ERROR: INSTEAD the line number of the start of the function INSTEAD the line number of the start of the function [4]PETSC ERROR: [7]PETSC ERROR: is given. is given. [6]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [6]PETSC ERROR: INSTEAD the line number of the start of the function [6]PETSC ERROR: [4]PETSC ERROR: [7]PETSC ERROR: is given. [0]PETSC ERROR: Signal received! [6]PETSC ERROR: [0]PETSC ERROR: [2]PETSC ERROR: [7] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c [1]PETSC ERROR: [5]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [4] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c Note: The EXACT line numbers in the stack are not available, ------------------------------------------------------------------------ Note: The EXACT line numbers in the stack are not available, [7]PETSC ERROR: [2]PETSC ERROR: [4]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: INSTEAD the line number of the start of the function [5]PETSC ERROR: [7] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [4] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [7]PETSC ERROR: INSTEAD the line number of the start of the function [2]PETSC ERROR: [1]PETSC ERROR: [4]PETSC ERROR: [7] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c is given. is given. [5]PETSC ERROR: [4] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c [7]PETSC ERROR: [4]PETSC ERROR: [0]PETSC ERROR: is given. 
[7] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [4] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [7]PETSC ERROR: [4]PETSC ERROR: [7] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c [6] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c [4] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c [7]PETSC ERROR: [4]PETSC ERROR: [7] SNES user Jacobian function line 0 unknownunknown [6]PETSC ERROR: [7]PETSC ERROR: [4] SNES user Jacobian function line 0 unknownunknown [0]PETSC ERROR: [7] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c [6] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [4]PETSC ERROR: [6]PETSC ERROR: See docs/changes/index.html for recent updates. [4] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c [2]PETSC ERROR: [1]PETSC ERROR: [6] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c [6]PETSC ERROR: [5]PETSC ERROR: [6] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [6]PETSC ERROR: [0]PETSC ERROR: [6] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c See docs/faq.html for hints about trouble shooting. [6]PETSC ERROR: [6] SNES user Jacobian function line 0 unknownunknown [6]PETSC ERROR: [0]PETSC ERROR: [6] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c See docs/index.html for manual pages. [0]PETSC ERROR: [1] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c [5] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c ------------------------------------------------------------------------ [1]PETSC ERROR: [5]PETSC ERROR: [1] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [1]PETSC ERROR: [5] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [5]PETSC ERROR: [1] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c [5] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c [1]PETSC ERROR: [0]PETSC ERROR: [5]PETSC ERROR: [1] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [5] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [1]PETSC ERROR: [5]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 [1] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c [5] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c [1]PETSC ERROR: [5]PETSC ERROR: [1] SNES user Jacobian function line 0 unknownunknown [1]PETSC ERROR: [5] SNES user Jacobian function line 0 unknownunknown [1] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c [5]PETSC ERROR: [0]PETSC ERROR: [5] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib ------------------------------------------------------------------------ [0]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [0]PETSC 
ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [3]PETSC ERROR: [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range ------------------------------------------------------------------------ [7]PETSC ERROR: [3]PETSC ERROR: --------------------- Error Message ------------------------------------ [4]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [2] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c [6]PETSC ERROR: --------------------- Error Message ------------------------------------ [2]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [2] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [2]PETSC ERROR: [3]PETSC ERROR: [2] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[2]PETSC ERROR: [3]PETSC ERROR: [2] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [2] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c [2]PETSC ERROR: [2] SNES user Jacobian function line 0 unknownunknown [2]PETSC ERROR: [2] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c [1]PETSC ERROR: [5]PETSC ERROR: --------------------- Error Message ------------------------------------ --------------------- Error Message ------------------------------------ [7]PETSC ERROR: Signal received! [4]PETSC ERROR: Signal received! [7]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [4]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [7]PETSC ERROR: [6]PETSC ERROR: See docs/changes/index.html for recent updates. [4]PETSC ERROR: Signal received! See docs/changes/index.html for recent updates. [7]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [4]PETSC ERROR: [6]PETSC ERROR: [3]PETSC ERROR: See docs/faq.html for hints about trouble shooting. likely location of problem given in stack below ------------------------------------------------------------------------ [7]PETSC ERROR: See docs/index.html for manual pages. [3]PETSC ERROR: [4]PETSC ERROR: See docs/index.html for manual pages. 
[6]PETSC ERROR: --------------------- Stack Frames ------------------------------------ Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [7]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: ------------------------------------------------------------------------ [6]PETSC ERROR: See docs/changes/index.html for recent updates. [6]PETSC ERROR: [7]PETSC ERROR: [4]PETSC ERROR: See docs/faq.html for hints about trouble shooting. ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 [6]PETSC ERROR: [7]PETSC ERROR: See docs/index.html for manual pages. Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [4]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [6]PETSC ERROR: ------------------------------------------------------------------------ [7]PETSC ERROR: [4]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [1]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [5]PETSC ERROR: Signal received! Signal received! [6]PETSC ERROR: [7]PETSC ERROR: [4]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [1]PETSC ERROR: [2]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [5]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ --------------------- Error Message ------------------------------------ [7]PETSC ERROR: [6]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [5]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------ Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [6]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [7]PETSC ERROR: [5]PETSC ERROR: [1]PETSC ERROR: [4]PETSC ERROR: User provided function() line 0 in unknown directory unknown file See docs/changes/index.html for recent updates. See docs/changes/index.html for recent updates. 
User provided function() line 0 in unknown directory unknown file [6]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [5]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [6]PETSC ERROR: [1]PETSC ERROR: [5]PETSC ERROR: ------------------------------------------------------------------------ See docs/index.html for manual pages. See docs/index.html for manual pages. [1]PETSC ERROR: [5]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ [6]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [1]PETSC ERROR: [5]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 [1]PETSC ERROR: [5]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [5]PETSC ERROR: [1]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 Configure run at Sat Nov 5 20:58:12 2011 [5]PETSC ERROR: [1]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [1]PETSC ERROR: [5]PETSC ERROR: ------------------------------------------------------------------------ ------------------------------------------------------------------------ [1]PETSC ERROR: [5]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [2]PETSC ERROR: User provided function() line 0 in unknown directory unknown file Signal received! 
[2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [2]PETSC ERROR: See docs/changes/index.html for recent updates. [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [3]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [2]PETSC ERROR: See docs/index.html for manual pages. [3]PETSC ERROR: INSTEAD the line number of the start of the function [2]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: is given. [2]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 [2]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [2]PETSC ERROR: [3]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [2]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [3] MatMult line 2156 /home/kontzialis/petsc-3.2-p5/src/mat/interface/matrix.c [3]PETSC ERROR: [3] base_residual_implicit line 27 "unknowndirectory/"../src/base_residual_implicit.c [3]PETSC ERROR: [3] MatFDColoringApply_BAIJ line 2768 /home/kontzialis/petsc-3.2-p5/src/mat/impls/baij/seq/baij.c [3]PETSC ERROR: [3] MatFDColoringApply line 511 /home/kontzialis/petsc-3.2-p5/src/mat/matfd/fdmatrix.c [3]PETSC ERROR: [3] SNESDefaultComputeJacobianColor line 40 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snesj2.c [3]PETSC ERROR: [3] SNES user Jacobian function line 0 unknownunknown [3]PETSC ERROR: [3] SNESComputeJacobian line 1322 /home/kontzialis/petsc-3.2-p5/src/snes/interface/snes.c [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 7 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 4 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 6 [cli_7]: [cli_4]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 7 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 5 aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 4 [cli_6]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 6 [cli_1]: [cli_5]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 5 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 2 [3]PETSC ERROR: --------------------- Error Message ------------------------------------ [3]PETSC ERROR: Signal received! 
[3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [3]PETSC ERROR: See docs/changes/index.html for recent updates. [3]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [3]PETSC ERROR: See docs/index.html for manual pages. [3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Sat Nov 5 21:36:27 2011 [3]PETSC ERROR: [cli_2]: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [3]PETSC ERROR: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 2 Configure run at Sat Nov 5 20:58:12 2011 [3]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [3]PETSC ERROR: ------------------------------------------------------------------------ [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 3 [cli_3]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 3 ==25984== ==25984== HEAP SUMMARY: ==25984== in use at exit: 2,998,242 bytes in 3,058 blocks ==25984== total heap usage: 6,616 allocs, 3,558 frees, 25,981,044 bytes allocated ==25984== ==25983== ==25983== HEAP SUMMARY: ==25983== in use at exit: 3,495,890 bytes in 3,233 blocks ==25983== total heap usage: 6,903 allocs, 3,670 frees, 25,400,971 bytes allocated ==25983== ==25977== ==25981== ==25977== HEAP SUMMARY: ==25977== in use at exit: 3,068,055 bytes in 3,075 blocks ==25977== total heap usage: 6,625 allocs, 3,550 frees, 22,077,530 bytes allocated ==25977== ==25981== HEAP SUMMARY: ==25981== in use at exit: 3,236,727 bytes in 3,119 blocks ==25981== total heap usage: 6,700 allocs, 3,581 frees, 23,513,895 bytes allocated ==25981== ==25978== ==25982== ==25982== HEAP SUMMARY: ==25982== in use at exit: 3,291,403 bytes in 3,135 blocks ==25982== total heap usage: 6,712 allocs, 3,577 frees, 24,919,538 bytes allocated ==25982== ==25978== HEAP SUMMARY: ==25978== in use at exit: 2,948,439 bytes in 3,043 blocks ==25978== total heap usage: 6,477 allocs, 3,434 frees, 18,814,766 bytes allocated ==25978== ==25979== ==25979== HEAP SUMMARY: ==25979== in use at exit: 2,921,710 bytes in 3,093 blocks ==25979== total heap usage: 6,538 allocs, 3,445 frees, 14,632,326 bytes allocated ==25979== ==25980== ==25980== HEAP SUMMARY: ==25980== in use at exit: 2,940,474 bytes in 3,056 blocks ==25980== total heap usage: 6,597 allocs, 3,541 frees, 24,476,005 bytes allocated ==25980== ==25984== LEAK SUMMARY: ==25984== definitely lost: 584 bytes in 16 blocks ==25984== indirectly lost: 128 bytes in 8 blocks ==25984== possibly lost: 0 bytes in 0 blocks ==25984== still reachable: 2,997,530 bytes in 3,034 blocks ==25984== suppressed: 0 bytes in 0 blocks ==25984== Rerun with --leak-check=full to see details of leaked memory ==25984== 
==25984== For counts of detected and suppressed errors, rerun with: -v ==25984== Use --track-origins=yes to see where uninitialised values come from ==25984== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 6 from 6) ==25977== LEAK SUMMARY: ==25977== definitely lost: 584 bytes in 16 blocks ==25977== indirectly lost: 128 bytes in 8 blocks ==25977== possibly lost: 0 bytes in 0 blocks ==25977== still reachable: 3,067,343 bytes in 3,051 blocks ==25977== suppressed: 0 bytes in 0 blocks ==25977== Rerun with --leak-check=full to see details of leaked memory ==25977== ==25977== For counts of detected and suppressed errors, rerun with: -v ==25977== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6) ==25981== LEAK SUMMARY: ==25981== definitely lost: 584 bytes in 16 blocks ==25981== indirectly lost: 128 bytes in 8 blocks ==25981== possibly lost: 0 bytes in 0 blocks ==25981== still reachable: 3,236,015 bytes in 3,095 blocks ==25981== suppressed: 0 bytes in 0 blocks ==25981== Rerun with --leak-check=full to see details of leaked memory ==25981== ==25981== For counts of detected and suppressed errors, rerun with: -v ==25981== Use --track-origins=yes to see where uninitialised values come from ==25981== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 6 from 6) ==25983== LEAK SUMMARY: ==25983== definitely lost: 584 bytes in 16 blocks ==25983== indirectly lost: 128 bytes in 8 blocks ==25983== possibly lost: 0 bytes in 0 blocks ==25983== still reachable: 3,495,178 bytes in 3,209 blocks ==25983== suppressed: 0 bytes in 0 blocks ==25983== Rerun with --leak-check=full to see details of leaked memory ==25983== ==25983== For counts of detected and suppressed errors, rerun with: -v ==25983== Use --track-origins=yes to see where uninitialised values come from ==25983== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 6 from 6) ==25978== LEAK SUMMARY: ==25978== definitely lost: 584 bytes in 16 blocks ==25978== indirectly lost: 128 bytes in 8 blocks ==25978== possibly lost: 0 bytes in 0 blocks ==25978== still reachable: 2,947,727 bytes in 3,019 blocks ==25978== suppressed: 0 bytes in 0 blocks ==25978== Rerun with --leak-check=full to see details of leaked memory ==25978== ==25978== For counts of detected and suppressed errors, rerun with: -v ==25978== Use --track-origins=yes to see where uninitialised values come from ==25978== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 6 from 6) My base_residual_implicit function is the following: /******************************************************************************/ /* Function name: base_residual_implicit.c */ /* Engineer: C.Kontzialis */ /* E-mail: ckontzialis at lycos.com */ /* Function description: */ /* Function base_residual_implicit.c computes the residual at every element for the NS*/ /* equations. 
*/ /******************************************************************************/ /* Data structure for hoac */ #include "hoac.h" PetscErrorCode base_residual_implicit(TS ts, /* Time stepper context [in] */ PetscReal t, /* Time t [in]*/ Vec sv, /* Global vector [in]*/ Vec svt, /* Global vector [in]*/ Vec gres, /* Global vector [out]*/ void *ctx /* HoAc Context [in]*/ ) { PetscErrorCode ierr; SYS sys = *((SYS *) ctx); PetscFunctionBegin; ierr = MatMult(sys.M, svt, sys.gsvk); CHKERRQ(ierr); ierr = residual(sv, gres, ctx); CHKERRQ(ierr); ierr = VecAXPY(gres, 1.0, sys.gsvk); CHKERRQ(ierr); PetscFunctionReturn(0); } /* base_residual_implicit.c */ But, even though there is a computation of the residual and I get a result: 0 SNES Function norm 2.822902692891e-01 an error occurs. Why? What should I do? Thank you very much!!! Kostas From jedbrown at mcs.anl.gov Sat Nov 5 15:06:35 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 14:06:35 -0600 Subject: [petsc-users] help for beginner :-) In-Reply-To: <1320517694.29400.37.camel@robert> References: <1320517694.29400.37.camel@robert> Message-ID: On Sat, Nov 5, 2011 at 12:28, robert wrote: > question 1: > ----------- > How can I get an output of the reason value to the console via a command > line option - in the manual only the petsc-function/method for it is > given??? > -ksp_converged_reason > > question 2: > ---------- > I have tried the option -ksp_monitor_true_residual but I don't know how > to interpret the output. > [...] > > Did I get it right that the resid norm and true resid norm should be > more or less the same value? > The preconditioned residual is after preconditioning where as the true residual does not include the preconditioner. Libmesh examples enforce Dirichlet boundary conditions using penalties which makes the true residual very poorly scaled, in which case the preconditioned residual is preferable for convergence tests (so long as the preconditioner is non-singular, but that should not be a problem for diffusion). > ||Ae||/||Ax||: Ax is clear, but what does the e in Ae stand for? How do > I interpret this value? Is this a comparison beteen the 'real' solution > and the calculated solution. I recognized that the iterations stopped > when this value got smaller than 1e-12. > You should update to petsc-3.2. The string is more clear there: ||r(i)||/||b|| where r(i)=Ax_i-b is the residual. question 3: > ----------- > I am sure that there are many people out there who have a lot of > experience with my kind of systems - I mean diffusional heat tranfer is > a standard problem in many different fields. > Which solution algorithm (solver, preconditioner) would you suggest for > solving in parallel? Which ones do I have to avoid (e.g. not stable)??? > Diffusion is as "nice" as possible in terms of stability and well-established theory, multigrid will normally work very well. In the latest release of PETSc, you could try -pc_type gamg. Otherwise/alternatively, configure using --download-ml or --download-hypre, then run with -pc_type ml or -pc_type hypre. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Sat Nov 5 15:26:30 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 14:26:30 -0600 Subject: [petsc-users] Jacobian free in SNES In-Reply-To: <4EB59123.2070600@lycos.com> References: <4EB59123.2070600@lycos.com> Message-ID: On Sat, Nov 5, 2011 at 13:40, Konstantinos Kontzialis wrote: > ierr = MatFDColoringSetFunction(**fdcoloring, base_residual_implicit, > &sys); > CHKERRQ(ierr); > This does not work since your base_residual_implicit has the wrong calling sequence for SNES. You should configure the MatFDColoring using this sequence: ierr = MatFDColoringSetFunction(matfdcoloring,(PetscErrorCode(*)(void))SNESTSFormFunction,ts);CHKERRQ(ierr); ierr = MatFDColoringSetFromOptions(matfdcoloring);CHKERRQ(ierr); ierr = SNESSetJacobian(snes,A,B,SNESDefaultComputeJacobianColor,matfdcoloring);CHKERRQ(ierr); -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Sat Nov 5 15:34:53 2011 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sat, 5 Nov 2011 15:34:53 -0500 Subject: [petsc-users] [FEM doubt] Not Related to PETSc Message-ID: Hello, I am writing a 3D FEM code for learning purpose (experimenting with object oriented concepts in Fortran 2003). Can some one tell me (pseudo code) how to implement a non homogenous Neumann boundary condition. You can also point me to a book. I am using tetrahedral elements for solving Poisson equation. I am using a 4 point Gauss quadrature, and linear basis functions for cell volume integrals. I am confused on how to do the integration on the facet. Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 5 15:38:02 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 14:38:02 -0600 Subject: [petsc-users] [FEM doubt] Not Related to PETSc In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 14:34, Dharmendar Reddy wrote: > I am writing a 3D FEM code for learning purpose (experimenting with object > oriented concepts in Fortran 2003). Can some one tell me (pseudo code) how > to implement a non homogenous Neumann boundary condition. You can also > point me to a book. I am using tetrahedral elements for solving Poisson > equation. I am using a 4 point Gauss quadrature, and linear basis functions > for cell volume integrals. I am confused on how to do the integration on > the facet. You need to compute an integral over the face. It appears in the weak form. Any book on finite element methods should cover this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Sat Nov 5 16:04:27 2011 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sat, 5 Nov 2011 16:04:27 -0500 Subject: [petsc-users] [FEM doubt] Not Related to PETSc In-Reply-To: References: Message-ID: So, i need to transform the Gauss quadrature points (qp) from the reference 2D triangle to 3D using affine transformation of the form: [x y z]^T = A x [qp_x qp_y] + [c1 c2 c3]^T; A is a 3 x 2 matrix , am i right? 
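A minimal sketch of that mapping in plain C (the helper name face_quadrature and its argument names are illustrative assumptions; the reference-triangle quadrature points (qr,qs) and weights w, summing to the reference area 1/2, are assumed given):

#include <math.h>

/* Map reference-triangle quadrature points (r,s) onto a physical triangular
   face with vertices v0, v1, v2 in 3D, x(r,s) = v0 + r*e1 + s*e2, where
   e1 = v1 - v0 and e2 = v2 - v0 are the two columns of the 3x2 matrix "A"
   above, and return the (constant) surface Jacobian |e1 x e2|.           */
static void face_quadrature(const double v0[3], const double v1[3], const double v2[3],
                            int nq, const double qr[], const double qs[],
                            double xq[][3], double *detJ)
{
  double e1[3], e2[3], n[3];
  int q, d;

  for (d = 0; d < 3; d++) { e1[d] = v1[d] - v0[d]; e2[d] = v2[d] - v0[d]; }
  for (q = 0; q < nq; q++)
    for (d = 0; d < 3; d++) xq[q][d] = v0[d] + qr[q]*e1[d] + qs[q]*e2[d];
  /* |e1 x e2| is twice the face area; it is constant because the map is affine */
  n[0] = e1[1]*e2[2] - e1[2]*e2[1];
  n[1] = e1[2]*e2[0] - e1[0]*e2[2];
  n[2] = e1[0]*e2[1] - e1[1]*e2[0];
  *detJ = sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
}

With this, the nonhomogeneous Neumann contribution of a facet to the load vector is approximately sum_q w[q] * g(xq[q]) * phi_i(xq[q]) * detJ, where g is the prescribed flux and phi_i are the basis functions restricted to the face.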
thanks Reddy On Sat, Nov 5, 2011 at 3:38 PM, Jed Brown wrote: > On Sat, Nov 5, 2011 at 14:34, Dharmendar Reddy wrote: > >> I am writing a 3D FEM code for learning purpose (experimenting with >> object oriented concepts in Fortran 2003). Can some one tell me (pseudo >> code) how to implement a non homogenous Neumann boundary condition. You can >> also point me to a book. I am using tetrahedral elements for solving >> Poisson equation. I am using a 4 point Gauss quadrature, and linear basis >> functions for cell volume integrals. I am confused on how to do the >> integration on the facet. > > > You need to compute an integral over the face. It appears in the weak > form. Any book on finite element methods should cover this. > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 5 16:50:00 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Nov 2011 15:50:00 -0600 Subject: [petsc-users] [FEM doubt] Not Related to PETSc In-Reply-To: References: Message-ID: On Sat, Nov 5, 2011 at 15:04, Dharmendar Reddy wrote: > So, i need to transform the Gauss quadrature points (qp) from the > reference 2D triangle to 3D using affine transformation of the form: > [x y z]^T = A x [qp_x qp_y] + [c1 c2 c3]^T; A is a 3 x 2 matrix , am i > right? > You need a quadrature for the face in physical space which means you need a quadrature in the reference coordinates and the Jacobian of the coordinate transformation. It looks like you're on the right track. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Sun Nov 6 04:43:17 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 11:43:17 +0100 Subject: [petsc-users] possible development on Windows? Message-ID: I am normally working and developing my codes on linux, only compiling them at the end for Windows, but need to evaluate the possibilities to also efficiently develop on Windows. Is anyone developing Petsc based applications on Windows and could share some experiences? In particular, is it possible to debug only on Cygwin's gdb port or also to use (somehow) the Visual Studio's built in debugger? Are there tools comparable to valgrind to detect MPI-aware illegal memory accesses and leaks? I heard about TotalView, but never worked with it. Thanks for any thoughts, Dominik From robert.bodner at unil.ch Sun Nov 6 07:28:38 2011 From: robert.bodner at unil.ch (robert) Date: Sun, 06 Nov 2011 14:28:38 +0100 Subject: [petsc-users] help for beginner :-) In-Reply-To: References: <1320517694.29400.37.camel@robert> Message-ID: <1320586118.29400.50.camel@robert> > > > Diffusion is as "nice" as possible in terms of stability and > well-established theory, multigrid will normally work very well. In > the latest release of PETSc, you could try -pc_type gamg. > Otherwise/alternatively, configure using --download-ml or > --download-hypre, then run with -pc_type ml or -pc_type hypre. Thanks for your reply. Do you have experience with lumping of the mass matrix? 
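To make the operation concrete, a minimal PETSc sketch of row-sum lumping (the names M, Mlump, rowsum and ones are illustrative; M is assumed to be the assembled consistent mass matrix, with the diagonal present in its nonzero pattern as is standard for FEM mass matrices):

Vec ones, rowsum;
Mat Mlump;
ierr = MatGetVecs(M, &ones, &rowsum); CHKERRQ(ierr);
ierr = VecSet(ones, 1.0); CHKERRQ(ierr);
ierr = MatMult(M, ones, rowsum); CHKERRQ(ierr);                         /* rowsum_i = sum_j M_ij */
ierr = MatDuplicate(M, MAT_DO_NOT_COPY_VALUES, &Mlump); CHKERRQ(ierr);  /* same pattern, zero entries */
ierr = MatDiagonalSet(Mlump, rowsum, INSERT_VALUES); CHKERRQ(ierr);     /* row sums onto the diagonal */
ierr = VecDestroy(&ones); CHKERRQ(ierr);
ierr = VecDestroy(&rowsum); CHKERRQ(ierr);

Since Mlump is diagonal, applying its inverse is just a pointwise division (MatGetDiagonal followed by VecPointwiseDivide).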
The following options semm to work fine if I don't use lumping: mpiexec -np 4 ./ThermoPaine3d-opt -ksp_type fgmres -pc_type ksp -ksp_ksp_type cg -ksp_pc_type jacobi -ksp_monitor_true_residual -ksp_converged_reason (any comments to the options???) However, for heat diffusion, especially for non-steady starting conditions, oscillations seem to be a common problem (AFAIK). If I apply lumping of the mass matrix - which means I put the row-sum of the mass matrix on the main diagonal - the solver doesn't converge any more. Before it converged after 2 to 5 iterations, now I was running several hundreds. Maybe some of you - definitely more experienced users - have some ideas/suggestions? P.S.: I have tried a lot of different combinations of solvers and preconditioners (including those of the last reply).. Thank you, Robert From robert.bodner at unil.ch Sun Nov 6 07:42:28 2011 From: robert.bodner at unil.ch (robert) Date: Sun, 06 Nov 2011 14:42:28 +0100 Subject: [petsc-users] help for beginner :-) In-Reply-To: <1320586118.29400.50.camel@robert> References: <1320517694.29400.37.camel@robert> <1320586118.29400.50.camel@robert> Message-ID: <1320586948.29400.52.camel@robert> Am Sonntag, den 06.11.2011, 14:28 +0100 schrieb robert: > > > > > > Diffusion is as "nice" as possible in terms of stability and > > well-established theory, multigrid will normally work very well. In > > the latest release of PETSc, you could try -pc_type gamg. > > Otherwise/alternatively, configure using --download-ml or > > --download-hypre, then run with -pc_type ml or -pc_type hypre. > > Thanks for your reply. > > Do you have experience with lumping of the mass matrix? > > The following options semm to work fine if I don't use lumping: > > mpiexec -np 4 ./ThermoPaine3d-opt -ksp_type fgmres -pc_type ksp > -ksp_ksp_type cg -ksp_pc_type jacobi -ksp_monitor_true_residual > -ksp_converged_reason > > (any comments to the options???) > > However, for heat diffusion, especially for non-steady starting > conditions, oscillations seem to be a common problem (AFAIK). > If I apply lumping of the mass matrix - which means I put the row-sum of > the mass matrix on the main diagonal - the solver doesn't converge any > more. Before it converged after 2 to 5 iterations, now I was running > several hundreds. > > > Maybe some of you - definitely more experienced users - have some > ideas/suggestions? > > P.S.: I have tried a lot of different combinations of solvers and > preconditioners (including those of the last reply).. > > Thank you, > Robert I have to correct my last entry: I get the following output: 0 KSP preconditioned resid norm 9.794488229448e+17 true resid norm 9.794488229448e+17 ||Ae||/||Ax|| 9.424443120337e-03 1 KSP preconditioned resid norm 0.000000000000e+00 true resid norm 9.794488229448e+17 ||Ae||/||Ax|| 9.424443120337e-03 Linear solve converged due to CONVERGED_ATOL iterations 1 Isn't it strange to have a preconditioned residual norm of exactly 0 (that's why I thought there might be a problem somewhere)??? Thank you, Robert From jedbrown at mcs.anl.gov Sun Nov 6 08:10:55 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 6 Nov 2011 07:10:55 -0700 Subject: [petsc-users] possible development on Windows? In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 03:43, Dominik Szczerba wrote: > I am normally working and developing my codes on linux, only compiling > them at the end for Windows, but need to evaluate the possibilities to > also efficiently develop on Windows. 
Is anyone developing Petsc based > applications on Windows and could share some experiences? In > particular, is it possible to debug only on Cygwin's gdb port or also > to use (somehow) the Visual Studio's built in debugger? Are there > tools comparable to valgrind to detect MPI-aware illegal memory > accesses and leaks? I heard about TotalView, but never worked with it. > PETSc's configure writes a CMake file. You should be able to change the target to use visual studio. I haven't tested it, so it may need some minor tweaking, but it shouldn't be too hard. None of the main PETSc developers run Windows. We are careful to write portable code and we test to make sure it works on Windows, but since we don't develop there, we don't get too frustrated with non-optimal aspects of the development environment on Windows. You are welcome to contribute improvements. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Nov 6 08:27:20 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 6 Nov 2011 07:27:20 -0700 Subject: [petsc-users] help for beginner :-) In-Reply-To: <1320586118.29400.50.camel@robert> References: <1320517694.29400.37.camel@robert> <1320586118.29400.50.camel@robert> Message-ID: On Sun, Nov 6, 2011 at 06:28, robert wrote: > Do you have experience with lumping of the mass matrix? > > The following options semm to work fine if I don't use lumping: > > mpiexec -np 4 ./ThermoPaine3d-opt -ksp_type fgmres -pc_type ksp > -ksp_ksp_type cg -ksp_pc_type jacobi -ksp_monitor_true_residual > -ksp_converged_reason > Is your matrix actually symmetric (including boundary conditions)? This is running a complete solve as a preconditioner. > > (any comments to the options???) > > However, for heat diffusion, especially for non-steady starting > conditions, oscillations seem to be a common problem (AFAIK). > If I apply lumping of the mass matrix - which means I put the row-sum of > the mass matrix on the main diagonal - the solver doesn't converge any > more. Before it converged after 2 to 5 iterations, now I was running > several hundreds. > I suspect a mistake in assembly causing the matrix to be singular or indefinite. I would try a very small problem (e.g. < 1000 dofs) and run with -pc_type svd -pc_svd_monitor. A preconditioned residual of exactly 0 is likely a mistake (and the true residual is 1e17). Is this coming out of FGMRES? From your convergence history, it looks like you have set things up in your code to use a nonzero initial guess? -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Sun Nov 6 08:55:53 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Sun, 06 Nov 2011 15:55:53 +0100 Subject: [petsc-users] possible development on Windows? In-Reply-To: References: Message-ID: <4EB69FF9.4090007@gfz-potsdam.de> Hi, I work with Petsc both under Linux and Windows. I develop in Fortran and it makes things even more tricky sometimes. My compilers are those included in VS 2008 and IFC 12 + MKL's BLAS/LAPACK for Fortran (however I started to work with petsc using IFC 10.1 and downloaded BLAS/LAPACK using petsc configure options). The most tricky part for me was to build petsc properly, although all problems are directly and indirectly related to the Fortran usage. So if you write in C/C++ I would say it's going to be easy for you. Anyway I can share my configuration line if you get some problems. I use VS IDE and debug applications in it. 
For sequential programs it is no problems, for MPI you have to install some additional tools (MPI Cluster Debugger and something else) and just switch debugger type from VS whenever you want. To make this debugger working with IFC cost me a couple of days and a lot of googling, but again for C/C++ it should be easy. Unfortunately, there are no things like valgrind for Windows. I couldn't find at least. Regards, Alexander On 06.11.2011 11:43, Dominik Szczerba wrote: > I am normally working and developing my codes on linux, only compiling > them at the end for Windows, but need to evaluate the possibilities to > also efficiently develop on Windows. Is anyone developing Petsc based > applications on Windows and could share some experiences? In > particular, is it possible to debug only on Cygwin's gdb port or also > to use (somehow) the Visual Studio's built in debugger? Are there > tools comparable to valgrind to detect MPI-aware illegal memory > accesses and leaks? I heard about TotalView, but never worked with it. > > Thanks for any thoughts, > Dominik From dominik at itis.ethz.ch Sun Nov 6 09:02:12 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 16:02:12 +0100 Subject: [petsc-users] possible development on Windows? In-Reply-To: References: Message-ID: Hi Jed and Alexander, I did not mean building on Windows, I meant developing on Windows (only C/C++). I.e. doing regular everyday boring stuff like debugging crashes or deadlocks, testing memory accesses with valgrind... If you tell me that all the main developers do not use Windows it sounds quite pessimistic... Yes, valgrind-equivalent tool is what I care for most. Anyone in the community with any experience here? Thanks, Dominik From jedbrown at mcs.anl.gov Sun Nov 6 09:12:32 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 6 Nov 2011 08:12:32 -0700 Subject: [petsc-users] possible development on Windows? In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 08:02, Dominik Szczerba wrote: > I did not mean building on Windows, I meant developing on Windows > (only C/C++). I.e. doing regular everyday boring stuff like debugging > crashes or deadlocks, testing memory accesses with valgrind... > We both understood that. Alexander says he uses the VS debugger, for example. > If you > tell me that all the main developers do not use Windows it sounds > quite pessimistic... > You are welcome to contribute. > Yes, valgrind-equivalent tool is what I care for > most. > Valgrind is a pretty unique tool. You could try the commercial software called Purify. There is nothing preventing Microsoft from contributing valgrind support for Windows. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Sun Nov 6 11:52:35 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 18:52:35 +0100 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: >>> I want to start small by porting a very simple code using fixed point >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), >>> then solved by KSP for x, then x0 is updated to x, then repeat until >>> convergence. > > Run the usual "Newton" methods with A(x) in place of the true Jacobian. When I substitute A(x) into eq. 5.2 I get: A(x) dx = -F(x) (1) A(x) dx = -A(x) x + b(x) (2) A(x) dx + A(x) x = b(x) (3) A(x) (x+dx) = b(x) (4) My questions: * Will the procedure somehow optimally group the two A(x) terms into one, as in 3-4? 
This requires knowledge, will this be efficiently handled? * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and how, correctly handled? Should I somehow disable the update myself? Thanks a lot, Dominik > can compute A(x) in the residual > F(x) = A(x) x - b(x) > and cache it in your user context, then pass it back when asked to compute > the Jacobian. > This runs your algorithm (often called Picard) in "defect correction mode", > but once you write your equations this way, you can try Newton iteration > using -snes_mf_operator. > >>> >>> In the documentation chapter 5 I see all sorts of sophisticated Newton >>> type methods, requiring computation of the Jacobian. Is the above >>> defined simple method still accessible somehow in Petsc or such >>> triviality can only be done by hand? Which one from the existing >>> nonlinear solvers would be a closest match both in simplicity and >>> robustness (even if slow performance)? >> >> You want -snes_type nrichardson. All you need is to define the residual. > > Matt, were the 1000 emails we exchanged over this last month not enough to > prevent you from spreading misinformation under a different name? From knepley at gmail.com Sun Nov 6 11:59:18 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 6 Nov 2011 17:59:18 +0000 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba wrote: > >>> I want to start small by porting a very simple code using fixed point > >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), > >>> then solved by KSP for x, then x0 is updated to x, then repeat until > >>> convergence. > > > > Run the usual "Newton" methods with A(x) in place of the true Jacobian. > > When I substitute A(x) into eq. 5.2 I get: > > A(x) dx = -F(x) (1) > A(x) dx = -A(x) x + b(x) (2) > A(x) dx + A(x) x = b(x) (3) > A(x) (x+dx) = b(x) (4) > > My questions: > > * Will the procedure somehow optimally group the two A(x) terms into > one, as in 3-4? This requires knowledge, will this be efficiently > handled? > There is no grouping. You solve for dx and do a vector addition. > * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and > how, correctly handled? Should I somehow disable the update myself? > Do not do any update yourself, just give the correct A at each iteration in your FormJacobian routine. Matt > Thanks a lot, > Dominik > > > can compute A(x) in the residual > > F(x) = A(x) x - b(x) > > and cache it in your user context, then pass it back when asked to > compute > > the Jacobian. > > This runs your algorithm (often called Picard) in "defect correction > mode", > > but once you write your equations this way, you can try Newton iteration > > using -snes_mf_operator. > > > >>> > >>> In the documentation chapter 5 I see all sorts of sophisticated Newton > >>> type methods, requiring computation of the Jacobian. Is the above > >>> defined simple method still accessible somehow in Petsc or such > >>> triviality can only be done by hand? Which one from the existing > >>> nonlinear solvers would be a closest match both in simplicity and > >>> robustness (even if slow performance)? > >> > >> You want -snes_type nrichardson. All you need is to define the residual. > > > > Matt, were the 1000 emails we exchanged over this last month not enough > to > > prevent you from spreading misinformation under a different name? 
> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Sun Nov 6 13:10:48 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 20:10:48 +0100 Subject: [petsc-users] possible development on Windows? In-Reply-To: <4EB69FF9.4090007@gfz-potsdam.de> References: <4EB69FF9.4090007@gfz-potsdam.de> Message-ID: Hi Alexander, Are you running Windows HPC edition? According to google, it is required for the referred MPI Cluster Debugger. I would also appreciate some more hints on "and something else", if you have them. Many thanks, Dominik On Sun, Nov 6, 2011 at 3:55 PM, Alexander Grayver wrote: > Hi, > > I work with Petsc both under Linux and Windows. > I develop in Fortran and it makes things even more tricky sometimes. My > compilers are those included in VS 2008 and IFC 12 + MKL's BLAS/LAPACK for > Fortran (however I started to work with petsc using IFC 10.1 and downloaded > BLAS/LAPACK using petsc configure options). > The most tricky part for me was to build petsc properly, although all > problems are directly and indirectly related to the Fortran usage. So if you > write in C/C++ I would say it's going to be easy for you. Anyway I can share > my configuration line if you get some problems. > I use VS IDE and debug applications in it. For sequential programs it is no > problems, for MPI you have to install some additional tools (MPI Cluster > Debugger and something else) and just switch debugger type from VS whenever > you want. To make this debugger working with IFC cost me a couple of days > and a lot of googling, but again for C/C++ it should be easy. > Unfortunately, there are no things like valgrind for Windows. I couldn't > find at least. > > Regards, > Alexander > > On 06.11.2011 11:43, Dominik Szczerba wrote: >> >> I am normally working and developing my codes on linux, only compiling >> them at the end for Windows, but need to evaluate the possibilities to >> also efficiently develop on Windows. Is anyone developing Petsc based >> applications on Windows and could share some experiences? In >> particular, is it possible to debug only on Cygwin's gdb port or also >> to use (somehow) the Visual Studio's built in debugger? Are there >> tools comparable to valgrind to detect MPI-aware illegal memory >> accesses and leaks? I heard about TotalView, but never worked with it. >> >> Thanks for any thoughts, >> Dominik > > From balay at mcs.anl.gov Sun Nov 6 13:47:37 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 6 Nov 2011 13:47:37 -0600 (CST) Subject: [petsc-users] possible development on Windows? In-Reply-To: References: <4EB69FF9.4090007@gfz-potsdam.de> Message-ID: Just one note wrt debugging with MS compilers. I think the following also works with MPICH mpiexec -localonly -n 2 msdev [or equivalent new name?] ex1.exe Satish On Sun, 6 Nov 2011, Dominik Szczerba wrote: > Hi Alexander, > > Are you running Windows HPC edition? According to google, it is > required for the referred MPI Cluster Debugger. > > I would also appreciate some more hints on "and something else", if > you have them. > > Many thanks, > Dominik > > On Sun, Nov 6, 2011 at 3:55 PM, Alexander Grayver > wrote: > > Hi, > > > > I work with Petsc both under Linux and Windows. > > I develop in Fortran and it makes things even more tricky sometimes. 
My > > compilers are those included in VS 2008 and IFC 12 + MKL's BLAS/LAPACK for > > Fortran (however I started to work with petsc using IFC 10.1 and downloaded > > BLAS/LAPACK using petsc configure options). > > The most tricky part for me was to build petsc properly, although all > > problems are directly and indirectly related to the Fortran usage. So if you > > write in C/C++ I would say it's going to be easy for you. Anyway I can share > > my configuration line if you get some problems. > > I use VS IDE and debug applications in it. For sequential programs it is no > > problems, for MPI you have to install some additional tools (MPI Cluster > > Debugger and something else) and just switch debugger type from VS whenever > > you want. To make this debugger working with IFC cost me a couple of days > > and a lot of googling, but again for C/C++ it should be easy. > > Unfortunately, there are no things like valgrind for Windows. I couldn't > > find at least. > > > > Regards, > > Alexander > > > > On 06.11.2011 11:43, Dominik Szczerba wrote: > >> > >> I am normally working and developing my codes on linux, only compiling > >> them at the end for Windows, but need to evaluate the possibilities to > >> also efficiently develop on Windows. Is anyone developing Petsc based > >> applications on Windows and could share some experiences? In > >> particular, is it possible to debug only on Cygwin's gdb port or also > >> to use (somehow) the Visual Studio's built in debugger? Are there > >> tools comparable to valgrind to detect MPI-aware illegal memory > >> accesses and leaks? I heard about TotalView, but never worked with it. > >> > >> Thanks for any thoughts, > >> Dominik > > > > > From dominik at itis.ethz.ch Sun Nov 6 15:34:35 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 22:34:35 +0100 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 6:59 PM, Matthew Knepley wrote: > On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba > wrote: >> >> >>> I want to start small by porting a very simple code using fixed point >> >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0), >> >>> then solved by KSP for x, then x0 is updated to x, then repeat until >> >>> convergence. >> > >> > Run the usual "Newton" methods with A(x) in place of the true Jacobian. >> >> When I substitute A(x) into eq. 5.2 I get: >> >> A(x) dx = -F(x) (1) >> A(x) dx = -A(x) x + b(x) (2) >> A(x) dx + A(x) x = b(x) (3) >> A(x) (x+dx) = b(x) (4) >> >> My questions: >> >> * Will the procedure somehow optimally group the two A(x) terms into >> one, as in 3-4? This requires knowledge, will this be efficiently >> handled? > > There is no grouping. You solve for dx and do a vector addition. > >> >> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and >> how, correctly handled? Should I somehow disable the update myself? > > Do not do any update yourself, just give the correct A at each iteration in > your FormJacobian routine. > ? ?Matt OK, no manual update, this is clear now. What is still not clear is that by substituting A for F' I arrive at an equation in x+dx (my eq. 4), and not dx (Petsc eq. 5.3)... 
Dominik From knepley at gmail.com Sun Nov 6 15:40:04 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 6 Nov 2011 21:40:04 +0000 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 9:34 PM, Dominik Szczerba wrote: > On Sun, Nov 6, 2011 at 6:59 PM, Matthew Knepley wrote: > > On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba > > wrote: > >> > >> >>> I want to start small by porting a very simple code using fixed > point > >> >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = > b(x0), > >> >>> then solved by KSP for x, then x0 is updated to x, then repeat until > >> >>> convergence. > >> > > >> > Run the usual "Newton" methods with A(x) in place of the true > Jacobian. > >> > >> When I substitute A(x) into eq. 5.2 I get: > >> > >> A(x) dx = -F(x) (1) > >> A(x) dx = -A(x) x + b(x) (2) > >> A(x) dx + A(x) x = b(x) (3) > >> A(x) (x+dx) = b(x) (4) > >> > >> My questions: > >> > >> * Will the procedure somehow optimally group the two A(x) terms into > >> one, as in 3-4? This requires knowledge, will this be efficiently > >> handled? > > > > There is no grouping. You solve for dx and do a vector addition. > > > >> > >> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and > >> how, correctly handled? Should I somehow disable the update myself? > > > > Do not do any update yourself, just give the correct A at each iteration > in > > your FormJacobian routine. > > Matt > > OK, no manual update, this is clear now. What is still not clear is > that by substituting A for F' I arrive at an equation in x+dx (my eq. > 4), and not dx (Petsc eq. 5.3)... Newton's equation is for dx. Then you add that to x to get the next guess. This is described in any book on numerical analysis, e.g. Henrici. Matt > > Dominik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Sun Nov 6 15:56:14 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 22:56:14 +0100 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 10:40 PM, Matthew Knepley wrote: > On Sun, Nov 6, 2011 at 9:34 PM, Dominik Szczerba > wrote: >> >> On Sun, Nov 6, 2011 at 6:59 PM, Matthew Knepley wrote: >> > On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba >> > wrote: >> >> >> >> >>> I want to start small by porting a very simple code using fixed >> >> >>> point >> >> >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = >> >> >>> b(x0), >> >> >>> then solved by KSP for x, then x0 is updated to x, then repeat >> >> >>> until >> >> >>> convergence. >> >> > >> >> > Run the usual "Newton" methods with A(x) in place of the true >> >> > Jacobian. >> >> >> >> When I substitute A(x) into eq. 5.2 I get: >> >> >> >> A(x) dx = -F(x) (1) >> >> A(x) dx = -A(x) x + b(x) (2) >> >> A(x) dx + A(x) x = b(x) (3) >> >> A(x) (x+dx) = b(x) (4) >> >> >> >> My questions: >> >> >> >> * Will the procedure somehow optimally group the two A(x) terms into >> >> one, as in 3-4? This requires knowledge, will this be efficiently >> >> handled? >> > >> > There is no grouping. You solve for dx and do a vector addition. >> > >> >> >> >> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and >> >> how, correctly handled? Should I somehow disable the update myself? 
>> > >> > Do not do any update yourself, just give the correct A at each iteration >> > in >> > your FormJacobian routine. >> > ? ?Matt >> >> OK, no manual update, this is clear now. What is still not clear is >> that by substituting A for F' I arrive at an equation in x+dx (my eq. >> 4), and not dx (Petsc eq. 5.3)... > > Newton's equation is for dx. Then you add that to x to get the next guess. > This > is described in any book on numerical analysis, e.g. Henrici. > ? ?Matt I understand that, but I am asking something else... when taking F' = A I arrive at an equation in x+dx, which is not Newton equation. What is wrong in this picture? Dominik From knepley at gmail.com Sun Nov 6 16:04:41 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 6 Nov 2011 22:04:41 +0000 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 9:56 PM, Dominik Szczerba wrote: > On Sun, Nov 6, 2011 at 10:40 PM, Matthew Knepley > wrote: > > On Sun, Nov 6, 2011 at 9:34 PM, Dominik Szczerba > > wrote: > >> > >> On Sun, Nov 6, 2011 at 6:59 PM, Matthew Knepley > wrote: > >> > On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba < > dominik at itis.ethz.ch> > >> > wrote: > >> >> > >> >> >>> I want to start small by porting a very simple code using fixed > >> >> >>> point > >> >> >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = > >> >> >>> b(x0), > >> >> >>> then solved by KSP for x, then x0 is updated to x, then repeat > >> >> >>> until > >> >> >>> convergence. > >> >> > > >> >> > Run the usual "Newton" methods with A(x) in place of the true > >> >> > Jacobian. > >> >> > >> >> When I substitute A(x) into eq. 5.2 I get: > >> >> > >> >> A(x) dx = -F(x) (1) > >> >> A(x) dx = -A(x) x + b(x) (2) > >> >> A(x) dx + A(x) x = b(x) (3) > >> >> A(x) (x+dx) = b(x) (4) > >> >> > >> >> My questions: > >> >> > >> >> * Will the procedure somehow optimally group the two A(x) terms into > >> >> one, as in 3-4? This requires knowledge, will this be efficiently > >> >> handled? > >> > > >> > There is no grouping. You solve for dx and do a vector addition. > >> > > >> >> > >> >> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and > >> >> how, correctly handled? Should I somehow disable the update myself? > >> > > >> > Do not do any update yourself, just give the correct A at each > iteration > >> > in > >> > your FormJacobian routine. > >> > Matt > >> > >> OK, no manual update, this is clear now. What is still not clear is > >> that by substituting A for F' I arrive at an equation in x+dx (my eq. > >> 4), and not dx (Petsc eq. 5.3)... > > > > Newton's equation is for dx. Then you add that to x to get the next > guess. > > This > > is described in any book on numerical analysis, e.g. Henrici. > > Matt > > I understand that, but I am asking something else... when taking F' = > A I arrive at an equation in x+dx, which is not Newton equation. What > is wrong in this picture? Keep the residual on the rhs, not b Matt > > Dominik -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Sun Nov 6 16:18:20 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sun, 6 Nov 2011 23:18:20 +0100 Subject: [petsc-users] fixed point interations In-Reply-To: References: Message-ID: I think I get it now. Thanks a lot. 
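A minimal sketch of the setup the thread converges on (Picard run in "defect correction mode" through SNES): the residual routine computes F(x) = A(x) x - b(x) and caches the freshly assembled A(x) and b(x) in the user context, and the Jacobian routine simply hands the cached matrix back. The names AppCtx, AssembleA and AssembleB are illustrative placeholders, not code from this thread.

#include <petscsnes.h>

typedef struct {
  Mat A;   /* cached A(x), reassembled inside FormFunction            */
  Vec b;   /* cached b(x)                                             */
} AppCtx;

extern PetscErrorCode AssembleA(Vec x, Mat A);  /* user-supplied, hypothetical */
extern PetscErrorCode AssembleB(Vec x, Vec b);  /* user-supplied, hypothetical */

/* Residual F(x) = A(x) x - b(x); A(x) is cached for the Jacobian call */
PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  PetscErrorCode ierr;

  ierr = AssembleA(x, user->A);CHKERRQ(ierr);      /* rebuild A at the current state  */
  ierr = AssembleB(x, user->b);CHKERRQ(ierr);      /* rebuild b at the current state  */
  ierr = MatMult(user->A, x, f);CHKERRQ(ierr);     /* f = A(x) x                      */
  ierr = VecAXPY(f, -1.0, user->b);CHKERRQ(ierr);  /* f = A(x) x - b(x)               */
  return 0;
}

/* Hand back the cached A(x); SNES then solves A(x) dx = -F(x) and sets x += dx */
PetscErrorCode FormJacobian(SNES snes, Vec x, Mat *J, Mat *P, MatStructure *flag, void *ctx)
{
  AppCtx *user = (AppCtx*)ctx;

  *J    = user->A;
  *P    = user->A;
  *flag = SAME_NONZERO_PATTERN;
  return 0;
}

Registered with SNESSetFunction(snes, r, FormFunction, &user) and SNESSetJacobian(snes, user.A, user.A, FormJacobian, &user), the ordinary Newton loop then reproduces the fixed-point iteration; switching on -snes_mf_operator afterwards gives true Newton with the same A(x) as the preconditioner, as suggested earlier in the thread.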
On Nov 6, 2011 11:06 PM, "Matthew Knepley" wrote: > On Sun, Nov 6, 2011 at 9:56 PM, Dominik Szczerba wrote: > >> On Sun, Nov 6, 2011 at 10:40 PM, Matthew Knepley >> wrote: >> > On Sun, Nov 6, 2011 at 9:34 PM, Dominik Szczerba >> > wrote: >> >> >> >> On Sun, Nov 6, 2011 at 6:59 PM, Matthew Knepley >> wrote: >> >> > On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba < >> dominik at itis.ethz.ch> >> >> > wrote: >> >> >> >> >> >> >>> I want to start small by porting a very simple code using fixed >> >> >> >>> point >> >> >> >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = >> >> >> >>> b(x0), >> >> >> >>> then solved by KSP for x, then x0 is updated to x, then repeat >> >> >> >>> until >> >> >> >>> convergence. >> >> >> > >> >> >> > Run the usual "Newton" methods with A(x) in place of the true >> >> >> > Jacobian. >> >> >> >> >> >> When I substitute A(x) into eq. 5.2 I get: >> >> >> >> >> >> A(x) dx = -F(x) (1) >> >> >> A(x) dx = -A(x) x + b(x) (2) >> >> >> A(x) dx + A(x) x = b(x) (3) >> >> >> A(x) (x+dx) = b(x) (4) >> >> >> >> >> >> My questions: >> >> >> >> >> >> * Will the procedure somehow optimally group the two A(x) terms into >> >> >> one, as in 3-4? This requires knowledge, will this be efficiently >> >> >> handled? >> >> > >> >> > There is no grouping. You solve for dx and do a vector addition. >> >> > >> >> >> >> >> >> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and >> >> >> how, correctly handled? Should I somehow disable the update myself? >> >> > >> >> > Do not do any update yourself, just give the correct A at each >> iteration >> >> > in >> >> > your FormJacobian routine. >> >> > Matt >> >> >> >> OK, no manual update, this is clear now. What is still not clear is >> >> that by substituting A for F' I arrive at an equation in x+dx (my eq. >> >> 4), and not dx (Petsc eq. 5.3)... >> > >> > Newton's equation is for dx. Then you add that to x to get the next >> guess. >> > This >> > is described in any book on numerical analysis, e.g. Henrici. >> > Matt >> >> I understand that, but I am asking something else... when taking F' = >> A I arrive at an equation in x+dx, which is not Newton equation. What >> is wrong in this picture? > > > Keep the residual on the rhs, not b > > Matt > > >> >> Dominik > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Mon Nov 7 03:06:20 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Mon, 7 Nov 2011 12:36:20 +0330 Subject: [petsc-users] about Singular value In-Reply-To: References: Message-ID: Is there any link with "Incremental Condition Estimation" (ICE) developed by Bischof in LAPACK with Petsc to evaluate Extremal singular values? Thanks, B.B. On Sun, Oct 30, 2011 at 6:49 PM, Jed Brown wrote: > On Sun, Oct 30, 2011 at 05:17, Matthew Knepley wrote: > >> We call LAPACK SVD on the Hermitian matrix made by the Krylov method. > > > GMRES builds a Hessenberg matrix. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Mon Nov 7 04:00:01 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Mon, 07 Nov 2011 12:00:01 +0200 Subject: [petsc-users] TS solution Message-ID: <4EB7AC21.1030902@lycos.com> Dear all, I use the TS for a backward euler integration. 
Here is what I code: ierr = TSCreate(sys.comm, &sys.ts); CHKERRQ(ierr); ierr = TSSetProblemType(sys.ts, TS_NONLINEAR); CHKERRQ(ierr); ierr = TSSetIFunction(sys.ts, sys.gres0, base_residual_implicit, &sys); CHKERRQ(ierr); ierr = TSSetInitialTimeStep(sys.ts, 0.0, sys.con->dt); CHKERRQ(ierr); ierr = TSSetSolution(sys.ts, sys.gsv); CHKERRQ(ierr); ierr = TSGetSNES(sys.ts, &sys.snes); CHKERRQ(ierr); ierr = MatCreateSNESMF(sys.snes, &sys.J); CHKERRQ(ierr); ierr = TSMonitorSet(sys.ts, TSMonitorDefault, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr); ISColoring iscoloring; MatFDColoring fdcoloring; ierr = jacobian_diff_numerical(sys, &sys.P); CHKERRQ(ierr); ierr = MatGetColoring(sys.P, MATCOLORINGSL, &iscoloring); CHKERRQ(ierr); ierr = MatFDColoringCreate(sys.P, iscoloring, &fdcoloring); CHKERRQ(ierr); ierr = MatFDColoringSetFunction(fdcoloring, (PetscErrorCode(*)(void)) SNESTSFormFunction, sys.ts); CHKERRQ(ierr); ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, fdcoloring); CHKERRQ(ierr); ierr = SNESSetFromOptions(sys.snes); CHKERRQ(ierr); ierr = SNESGetKSP(sys.snes, &sys.ksp2); CHKERRQ(ierr); ierr = KSPGetPC(sys.ksp2, &sys.pc); CHKERRQ(ierr); ierr = KSPSetFromOptions(sys.ksp2); CHKERRQ(ierr); ierr = TSSetFromOptions(sys.ts); CHKERRQ(ierr); ierr = TSSolve(sys.ts, sys.gsv, &sys.con->etime); CHKERRQ(ierr); and I run with: mpiexec -n 8 ./hoac cylinder -llf_flux -n_out 5 -end_time 1 -implicit -implicit_type 3 -pc_type asm -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm 1.0e-8 -gl -sub_ksp_type fgmres -ksp_rtol 1.0e-8 -sub_pc_factor_levels 2 -dt 1.0e-1 -snes_monitor -ksp_pc_side right -snes_converged_reason -ts_type beuler -ksp_converged_reason -ts_view and I get: ********************************************************************** METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota Graph Information --------------------------------------------------- Name: mesh.graph, #Vertices: 400, #Edges: 780, #Parts: 8 Recursive Partitioning... 
------------------------------------------- 8-way Edge-Cut: 102, Balance: 1.00 Timing Information -------------------------------------------------- I/O: 0.000 Partitioning: 0.000 (PMETIS time) Total: 0.000 ********************************************************************** Approximation order = 0 # DOF = 1520 # nodes in mesh = 400 # elements in mesh = 380 Euler solution Using LLF flux Linear solve converged due to CONVERGED_RTOL iterations 1 0 TS dt 0.1 time 0 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 9.165516887509e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 1 TS dt 0.1 time 0.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.632350040756e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 2 TS dt 0.1 time 0.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.079843243369e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 3 TS dt 0.1 time 0.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.835143044066e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 4 TS dt 0.1 time 0.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.737925333877e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 5 TS dt 0.1 time 0.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.910893540416e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 6 TS dt 0.1 time 0.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.225861927774e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 7 TS dt 0.1 time 0.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.331099416552e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 8 TS dt 0.1 time 0.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.975793361389e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 9 TS dt 0.1 time 0.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.713306283593e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 10 TS dt 0.1 time 1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.122105360050e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 11 TS dt 0.1 time 1.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.863420336962e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 12 TS dt 0.1 time 1.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.042443406304e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 13 TS dt 0.1 time 1.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.946360965428e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 14 TS dt 0.1 time 1.4 0 SNES Function norm 7.276230036948e+00 
Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.377702949796e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 15 TS dt 0.1 time 1.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.246004191247e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 16 TS dt 0.1 time 1.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.624757324355e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 17 TS dt 0.1 time 1.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.387079414803e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 18 TS dt 0.1 time 1.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.187884694862e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 19 TS dt 0.1 time 1.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.179621381834e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 20 TS dt 0.1 time 2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.521840144246e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 21 TS dt 0.1 time 2.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.448581689405e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 22 TS dt 0.1 time 2.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.748645633289e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 23 TS dt 0.1 time 2.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.560438835942e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 24 TS dt 0.1 time 2.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.511401940242e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 25 TS dt 0.1 time 2.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.378726928086e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 26 TS dt 0.1 time 2.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.334427814912e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 27 TS dt 0.1 time 2.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.830104988308e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 28 TS dt 0.1 time 2.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.219763041097e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 29 TS dt 0.1 time 2.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.645515811145e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 30 TS dt 0.1 time 3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES 
Function norm 7.395987890604e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 31 TS dt 0.1 time 3.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.400139707954e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 32 TS dt 0.1 time 3.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.069248234609e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 33 TS dt 0.1 time 3.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 7.053550120679e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 34 TS dt 0.1 time 3.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.692206461028e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 35 TS dt 0.1 time 3.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.119476099069e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 36 TS dt 0.1 time 3.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.337847526235e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 37 TS dt 0.1 time 3.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 6.156567962560e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 38 TS dt 0.1 time 3.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.121789317381e-08 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 39 TS dt 0.1 time 3.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.046902581889e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 40 TS dt 0.1 time 4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.675637990005e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 41 TS dt 0.1 time 4.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.437527938961e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 42 TS dt 0.1 time 4.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.011868310284e-08 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 43 TS dt 0.1 time 4.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.254068220833e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 44 TS dt 0.1 time 4.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.413532340720e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 45 TS dt 0.1 time 4.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.932021770220e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 46 TS dt 0.1 time 4.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.814611914146e-09 Nonlinear solve converged due to 
CONVERGED_FNORM_RELATIVE 47 TS dt 0.1 time 4.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 9.880349938684e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 48 TS dt 0.1 time 4.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 6.460287174001e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 49 TS dt 0.1 time 4.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 7.359781445921e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 50 TS dt 0.1 time 5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.000396050302e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 51 TS dt 0.1 time 5.1 TS Object: 8 MPI processes type: beuler maximum steps=5000 maximum time=5 total number of nonlinear solver iterations=51 total number of nonlinear solve failures=0 total number of linear solver iterations=51 total number of rejected steps=0 SNES Object: 8 MPI processes type: ls line search variant: SNESLineSearchCubic alpha=1.000000000000e-04, maxstep=1.000000000000e+08, minlambda=1.000000000000e-12 maximum iterations=50, maximum function evaluations=10000 tolerances: relative=1e-08, absolute=1e-50, solution=1e-08 total number of linear solver iterations=1 total number of function evaluations=4 KSP Object: 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-08, absolute=1e-50, divergence=10000 right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: asm Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1 Additive Schwarz: restriction/interpolation type - RESTRICT Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (sub_) 1 MPI processes type: fgmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (sub_) 1 MPI processes type: ilu ILU: out-of-place factorization 2 levels of fill tolerance for zero pivot 1e-12 using diagonal shift on blocks to prevent zero pivot matrix ordering: rcm factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqbaij rows=184, cols=184 package used to perform factorization: petsc total: nonzeros=184, allocated nonzeros=184 total number of mallocs used during MatSetValues calls =0 block size is 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqbaij rows=184, cols=184 total: nonzeros=184, allocated nonzeros=184 total number of mallocs used during MatSetValues calls =0 block size is 1 linear system matrix followed by preconditioner matrix: Matrix Object: 8 MPI processes type: mffd rows=1520, cols=1520 matrix-free approximation: err=1e-07 (relative error in function evaluation) Using wp compute h routine Does not compute normU Matrix Object: 8 MPI processes type: mpibaij rows=1520, cols=1520 total: nonzeros=1520, 
allocated nonzeros=7600 total number of mallocs used during MatSetValues calls =0 block size is 1 I observe that the solution is not updated. What am I doing wrong? Thank you, Kostas ********************************************************************** METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota Graph Information --------------------------------------------------- Name: mesh.graph, #Vertices: 400, #Edges: 780, #Parts: 8 Recursive Partitioning... ------------------------------------------- 8-way Edge-Cut: 102, Balance: 1.00 Timing Information -------------------------------------------------- I/O: 0.000 Partitioning: 0.000 (PMETIS time) Total: 0.000 ********************************************************************** Approximation order = 0 # DOF = 1520 # nodes in mesh = 400 # elements in mesh = 380 Euler solution Using LLF flux Linear solve converged due to CONVERGED_RTOL iterations 1 0 TS dt 0.1 time 0 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 9.165516887509e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 1 TS dt 0.1 time 0.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.632350040756e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 2 TS dt 0.1 time 0.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.079843243369e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 3 TS dt 0.1 time 0.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.835143044066e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 4 TS dt 0.1 time 0.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.737925333877e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 5 TS dt 0.1 time 0.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.910893540416e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 6 TS dt 0.1 time 0.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.225861927774e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 7 TS dt 0.1 time 0.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.331099416552e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 8 TS dt 0.1 time 0.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.975793361389e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 9 TS dt 0.1 time 0.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.713306283593e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 10 TS dt 0.1 time 1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.122105360050e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 11 TS dt 0.1 time 1.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.863420336962e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 12 TS dt 0.1 time 1.2 0 SNES 
Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.042443406304e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 13 TS dt 0.1 time 1.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.946360965428e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 14 TS dt 0.1 time 1.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.377702949796e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 15 TS dt 0.1 time 1.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.246004191247e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 16 TS dt 0.1 time 1.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.624757324355e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 17 TS dt 0.1 time 1.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.387079414803e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 18 TS dt 0.1 time 1.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.187884694862e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 19 TS dt 0.1 time 1.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.179621381834e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 20 TS dt 0.1 time 2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.521840144246e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 21 TS dt 0.1 time 2.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.448581689405e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 22 TS dt 0.1 time 2.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.748645633289e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 23 TS dt 0.1 time 2.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.560438835942e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 24 TS dt 0.1 time 2.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.511401940242e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 25 TS dt 0.1 time 2.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.378726928086e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 26 TS dt 0.1 time 2.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.334427814912e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 27 TS dt 0.1 time 2.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.830104988308e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 28 TS dt 0.1 time 2.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to 
CONVERGED_RTOL iterations 1 1 SNES Function norm 1.219763041097e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 29 TS dt 0.1 time 2.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.645515811145e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 30 TS dt 0.1 time 3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 7.395987890604e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 31 TS dt 0.1 time 3.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.400139707954e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 32 TS dt 0.1 time 3.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.069248234609e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 33 TS dt 0.1 time 3.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 7.053550120679e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 34 TS dt 0.1 time 3.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.692206461028e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 35 TS dt 0.1 time 3.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.119476099069e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 36 TS dt 0.1 time 3.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.337847526235e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 37 TS dt 0.1 time 3.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 6.156567962560e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 38 TS dt 0.1 time 3.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.121789317381e-08 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 39 TS dt 0.1 time 3.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 3.046902581889e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 40 TS dt 0.1 time 4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.675637990005e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 41 TS dt 0.1 time 4.1 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.437527938961e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 42 TS dt 0.1 time 4.2 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.011868310284e-08 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 43 TS dt 0.1 time 4.3 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 5.254068220833e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 44 TS dt 0.1 time 4.4 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.413532340720e-09 
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 45 TS dt 0.1 time 4.5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.932021770220e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 46 TS dt 0.1 time 4.6 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.814611914146e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 47 TS dt 0.1 time 4.7 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 9.880349938684e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 48 TS dt 0.1 time 4.8 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 6.460287174001e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 49 TS dt 0.1 time 4.9 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 7.359781445921e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 50 TS dt 0.1 time 5 0 SNES Function norm 7.276230036948e+00 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 2.000396050302e-09 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 51 TS dt 0.1 time 5.1 From agrayver at gfz-potsdam.de Mon Nov 7 04:30:15 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Mon, 07 Nov 2011 11:30:15 +0100 Subject: [petsc-users] possible development on Windows? In-Reply-To: References: <4EB69FF9.4090007@gfz-potsdam.de> Message-ID: <4EB7B337.70300@gfz-potsdam.de> Hi Dominik, I'm running Windows 7 Professional. I still use only local debugging (i.e. I run several MPI processes, but on the local machine), so I don't need Windows HPC. Since I work with IFC, it is a bit tricky for me and I'm not sure that you should follow same ugly way. What I do is to run several processes with mpiexec in suspended mode, then attach to them from VS Debugger and debug. With MPI Debugger installed you can easily switch among several processes. Using C/C++ you better first try to follow MS guidelines, e.g.: http://www.danielmoth.com/Blog/MPI-Cluster-Debugger-Launch-Integration-In-VS2010.aspx Regards, Alexander On 06.11.2011 20:10, Dominik Szczerba wrote: > Hi Alexander, > > Are you running Windows HPC edition? According to google, it is > required for the referred MPI Cluster Debugger. > > I would also appreciate some more hints on "and something else", if > you have them. > > Many thanks, > Dominik > > On Sun, Nov 6, 2011 at 3:55 PM, Alexander Grayver > wrote: >> Hi, >> >> I work with Petsc both under Linux and Windows. >> I develop in Fortran and it makes things even more tricky sometimes. My >> compilers are those included in VS 2008 and IFC 12 + MKL's BLAS/LAPACK for >> Fortran (however I started to work with petsc using IFC 10.1 and downloaded >> BLAS/LAPACK using petsc configure options). >> The most tricky part for me was to build petsc properly, although all >> problems are directly and indirectly related to the Fortran usage. So if you >> write in C/C++ I would say it's going to be easy for you. Anyway I can share >> my configuration line if you get some problems. >> I use VS IDE and debug applications in it. 
For sequential programs it is no >> problems, for MPI you have to install some additional tools (MPI Cluster >> Debugger and something else) and just switch debugger type from VS whenever >> you want. To make this debugger working with IFC cost me a couple of days >> and a lot of googling, but again for C/C++ it should be easy. >> Unfortunately, there are no things like valgrind for Windows. I couldn't >> find at least. >> >> Regards, >> Alexander >> >> On 06.11.2011 11:43, Dominik Szczerba wrote: >>> I am normally working and developing my codes on linux, only compiling >>> them at the end for Windows, but need to evaluate the possibilities to >>> also efficiently develop on Windows. Is anyone developing Petsc based >>> applications on Windows and could share some experiences? In >>> particular, is it possible to debug only on Cygwin's gdb port or also >>> to use (somehow) the Visual Studio's built in debugger? Are there >>> tools comparable to valgrind to detect MPI-aware illegal memory >>> accesses and leaks? I heard about TotalView, but never worked with it. >>> >>> Thanks for any thoughts, >>> Dominik >> From behzad.baghapour at gmail.com Mon Nov 7 06:35:58 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Mon, 7 Nov 2011 16:05:58 +0330 Subject: [petsc-users] Extremal Singular Values Message-ID: Is there any link with "Incremental Condition Estimation" (ICE) developed by Bischof in LAPACK with Petsc to evaluate Extremal singular values? Thanks, Behzad. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Nov 7 07:16:43 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Nov 2011 13:16:43 +0000 Subject: [petsc-users] about Singular value In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 9:06 AM, behzad baghapour wrote: > Is there any link with "Incremental Condition Estimation" (ICE) developed > by Bischof in LAPACK with Petsc to evaluate Extremal singular values? > We do not use this (it is intended for triangular matrices). It may be tenuously connected. He says he is approximating the secular equation with rational functions, in some sense we are approximating the characteristic equation with polynomials (Krylov). Matt > Thanks, > B.B. > > > > > On Sun, Oct 30, 2011 at 6:49 PM, Jed Brown wrote: > >> On Sun, Oct 30, 2011 at 05:17, Matthew Knepley wrote: >> >>> We call LAPACK SVD on the Hermitian matrix made by the Krylov method. >> >> >> GMRES builds a Hessenberg matrix. >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Mon Nov 7 08:02:20 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Mon, 07 Nov 2011 16:02:20 +0200 Subject: [petsc-users] Solution update is not working in TS Message-ID: <4EB7E4EC.2030502@lycos.com> Dear all, I have moved to petsc version 3.2 from 3.1 and I did the modifications needed for running my code. However, I see that some things are not working in v3.2. 
For instance, if I want to apply and explicit euler method to the Euler equations of gas dynamics discretized with a high order method I do the following: /* Initialize the solution vector */ ierr = VecCreateGhost(sys.comm, sys.ldof, sys.gdof, sys.ghdof, sys.dof_gloInd, &sys.gsv); CHKERRQ(ierr); /* Initialize the residual vector*/ ierr = VecCreateMPI(sys.comm, sys.ldof, sys.gdof, &sys.gres0); CHKERRQ(ierr); /*Apply the initial conditions*/ /* Set up TS*/ ierr = TSCreate(sys.comm, &sys.ts); CHKERRQ(ierr); ierr = TSSetProblemType(sys.ts, TS_NONLINEAR); CHKERRQ(ierr); ierr = TSSetSolution(sys.ts, sys.gsv); CHKERRQ(ierr); ierr = TSSetRHSFunction(sys.ts, sys.gres0, base_residual_explicit, &sys); CHKERRQ(ierr); ierr = TSSetInitialTimeStep(sys.ts, sys.con->tm, sys.con->dt); CHKERRQ(ierr); ierr = TSSetDuration(sys.ts, 100e+6, sys.con->etime); CHKERRQ(ierr); ierr = TSSetFromOptions(sys.ts); CHKERRQ(ierr); ierr = TSMonitorSet(sys.ts, monitor, &sys, PETSC_NULL); CHKERRQ(ierr); ierr = TSSolve(sys.ts, sys.gsv, &sys.con->etime); CHKERRQ(ierr); I run with: mpiexec -n 8 valgrind ./hoac cylinder -llf_flux -n_out 5 -end_time 1.0 -dt 0.01 -ts_type euler -gl and I get: Timestep 0: dt = 0.01, T = 0, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 1: dt = 0.01, T = 0.01, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 2: dt = 0.01, T = 0.02, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 3: dt = 0.01, T = 0.03, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 4: dt = 0.01, T = 0.04, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 5: dt = 0.01, T = 0.05, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 6: dt = 0.01, T = 0.06, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 7: dt = 0.01, T = 0.07, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 8: dt = 0.01, T = 0.08, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 9: dt = 0.01, T = 0.09, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 Timestep 10: dt = 0.01, T = 0.1, Res[rho] = 0.0647535, Res[rhou] = 0.0561559, Res[rhov] = 0.0327505, Res[E] = 0.162518, CFL = 0.414652 which seems that my solution vector sys.gsv is not updated at every step, and moreover, when I print the solution ,I see the values got from the initial conditions. What am I doing wrong? 
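The fix described in the reply further down amounts to reading the state from the Vec argument that TS passes to the monitor, rather than from the vector stored in the user context. A minimal sketch of such a monitor, assuming PETSc 3.2; the printed quantity is a placeholder and only the use of the passed-in Vec u is the point:

#include <petscts.h>

/* Monitor sketch: use the Vec u handed in by TS, not the vector cached in the user context */
PetscErrorCode monitor(TS ts, PetscInt step, PetscReal time, Vec u, void *mctx)
{
  PetscReal      norm;
  PetscErrorCode ierr;

  ierr = VecNorm(u, NORM_2, &norm);CHKERRQ(ierr);   /* u holds the current, updated state */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "Timestep %D: T = %g, ||u|| = %g\n",
                     step, (double)time, (double)norm);CHKERRQ(ierr);
  return 0;
}

The per-field residuals printed in the original monitor can be formed the same way, as long as they are taken from u (or from TSGetSolution() for the state at the beginning of the step) instead of the cached solution vector in the user context.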
Thanks Kostas From jedbrown at mcs.anl.gov Mon Nov 7 08:11:56 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 07:11:56 -0700 Subject: [petsc-users] TS solution In-Reply-To: <4EB7AC21.1030902@lycos.com> References: <4EB7AC21.1030902@lycos.com> Message-ID: On Mon, Nov 7, 2011 at 03:00, Konstantinos Kontzialis wrote: > > > ierr = SNESSetFromOptions(sys.snes); > CHKERRQ(ierr); > > ierr = SNESGetKSP(sys.snes, &sys.ksp2); > CHKERRQ(ierr); > > ierr = KSPGetPC(sys.ksp2, &sys.pc); > CHKERRQ(ierr); > > ierr = KSPSetFromOptions(sys.ksp2); > CHKERRQ(ierr); > > ierr = TSSetFromOptions(sys.ts); > CHKERRQ(ierr); > You can just call TSSetFromOptions, it automatically calls *SetFromOptions on all the inner solvers. > > ierr = TSSolve(sys.ts, sys.gsv, &sys.con->etime); > CHKERRQ(ierr); > > and I run with: > > mpiexec -n 8 ./hoac cylinder -llf_flux -n_out 5 -end_time 1 -implicit > -implicit_type 3 -pc_type asm -sub_pc_type ilu -sub_pc_factor_mat_ordering_ > **type rcm 1.0e-8 -gl -sub_ksp_type fgmres -ksp_rtol 1.0e-8 > -sub_pc_factor_levels 2 -dt 1.0e-1 -snes_monitor -ksp_pc_side right > -snes_converged_reason -ts_type beuler -ksp_converged_reason -ts_view > > > [...] > I observe that the solution is not updated. What am I doing wrong? > >From the convergence history, your "matrix" might be diagonal. What is the physical time scale for this problem? Perhaps your base_residual_implicit is incorrect. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 7 08:17:08 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 07:17:08 -0700 Subject: [petsc-users] Solution update is not working in TS In-Reply-To: <4EB7E4EC.2030502@lycos.com> References: <4EB7E4EC.2030502@lycos.com> Message-ID: On Mon, Nov 7, 2011 at 07:02, Konstantinos Kontzialis wrote: > which seems that my solution vector sys.gsv is not updated at every step, > and moreover, when I print the solution ,I see the values got from the > initial conditions. Your monitor should use the Vec passed to it, not the Vec in your user context. If you want access to the Vec at the beginning of the current step, call TSGetSolution(). Depending on the algorithm (finishing procedure), this may or may not be the same Vec that you passed to TSSolve(), but the Vec you passed in should be updated when TSSolve() returns. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ping.rong at tuhh.de Mon Nov 7 08:32:14 2011 From: ping.rong at tuhh.de (Ping Rong) Date: Mon, 07 Nov 2011 15:32:14 +0100 Subject: [petsc-users] Petsc-3.2 with superlu_dist building error In-Reply-To: References: <4EA9A425.9010306@tuhh.de> Message-ID: <4EB7EBEE.9040306@tuhh.de> Hello Sean, does Petsc-dev use SuperLU_DIST (v3.0) or still v2.5? I have updated both petsc-3.2-p5 and the latest dev version. Both configurations tell me ----------------------------------------- SuperLU: Includes: -I/home/xxx/libs/petsc/linux-gnu-shared-opt/include Library: -Wl,-rpath,/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -L/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -lsuperlu_4.2 SuperLU_DIST: Includes: -I/home/xxx/libs/petsc/linux-gnu-shared-opt/include Library: -Wl,-rpath,/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -L/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -lsuperlu_dist_2.5 ------------------------------------------ As long as I understood through the discussion, petsc-dev uses superlu-dist-3.0 and superlu-4.3, right? 
or this update will be only available until the next major release? best regards Ping Am 27.10.2011 21:18, schrieb Sean Farley: > > The error is related to the new SuperLU_DIST (v3.0), not serial > SuperLU. > In PETSc / SuperLU_DIST wrapper: superlu_dist.c, line 634, DOUBLE > should be replaced by SLU_DOUBLE. > > > Sherry, the problem is that you updated the SuperLU tarball to have > the new enum type but that petsc-3.2 was not updated for the new > interface, hence the breakage. If Ping were to switch to petsc-dev, > this issue would go away. An alternative, where *everybody* wins is to > update SuperLU to 4.3 and make a new tarball (reverting 4.2 back to > the older definition for the enum). -- Ping Rong, M.Sc. Hamburg University of Technology Institut of modelling and computation Denickestra?e 17 (Room 3031) 21073 Hamburg Tel.: ++49 - (0)40 42878 2749 Fax: ++49 - (0)40 42878 43533 Email: ping.rong at tuhh.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 7 08:49:41 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 07:49:41 -0700 Subject: [petsc-users] Petsc-3.2 with superlu_dist building error In-Reply-To: <4EB7EBEE.9040306@tuhh.de> References: <4EA9A425.9010306@tuhh.de> <4EB7EBEE.9040306@tuhh.de> Message-ID: On Mon, Nov 7, 2011 at 07:32, Ping Rong wrote: > does Petsc-dev use SuperLU_DIST (v3.0) or still v2.5? I have updated both > petsc-3.2-p5 and the latest dev version. Both configurations tell me > ----------------------------------------- > SuperLU: > Includes: -I/home/xxx/libs/petsc/linux-gnu-shared-opt/include > Library: -Wl,-rpath,/home/xxx/libs/petsc/linux-gnu-shared-opt/lib > -L/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -lsuperlu_4.2 > > SuperLU_DIST: > Includes: -I/home/xxx/libs/petsc/linux-gnu-shared-opt/include > Library: -Wl,-rpath,/home/xxx/libs/petsc/linux-gnu-shared-opt/lib > -L/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -lsuperlu_dist_2.5 > ------------------------------------------ > > As long as I understood through the discussion, petsc-dev uses > superlu-dist-3.0 and superlu-4.3, right? or this update will be only > available until the next major release? > It uses 3.0 and 4.3. Run rm -r externalpackages/SuperLU* $PETSC_ARCH/conf/SuperLU* and reconfigure. -------------- next part -------------- An HTML attachment was scrubbed... 
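Not part of the thread, but for context: once configure picks up the rebuilt SuperLU_DIST, it is typically selected as a parallel direct solver along these lines (a sketch assuming the PETSc 3.2-era API; the matrix A and vectors b, x are placeholders):

KSP ksp;
PC  pc;
PetscErrorCode ierr;

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr); /* A: assembled MPIAIJ matrix */
ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);                      /* direct solve, no Krylov iterations */
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST);CHKERRQ(ierr);
ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

The same can be requested at run time with -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist, and -ksp_view should report which factorization package (and hence which SuperLU_DIST build) actually got linked.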
URL: From dominik at itis.ethz.ch Mon Nov 7 08:57:35 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 7 Nov 2011 15:57:35 +0100 Subject: [petsc-users] compiling 3.2 on Windows Message-ID: I am configuring with: ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release --with-x=0 --with-debugging=0 --with-cc='win32fe cl' --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ --with-parmetis=1 --download-parmetis=1 and getting an error: ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --download-f2cblaslapack libraries cannot be used ******************************************************************************* File "./config/configure.py", line 283, in petsc_configure framework.configure(out = sys.stdout) File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", line 925, in configure child.configure() File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", line 538, in configure self.executeTest(self.configureLibrary) File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", line 115, in executeTest ret = apply(test, args,kargs) File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", line 444, in configureLibrary for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) in self.generateGuesses(): File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", line 162, in generateGuesses raise RuntimeError('--download-f2cblaslapack libraries cannot be used') I will send the full log to petsc-maint in a second. How do I go from here? Thanks for any hints, Dominik From dominik at itis.ethz.ch Mon Nov 7 09:17:57 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 7 Nov 2011 16:17:57 +0100 Subject: [petsc-users] compiling 3.2 on Windows In-Reply-To: References: Message-ID: I think I know what the reason is. blas/lapack seems to be linked on cygwin as a cygwin library while I need a native Windows library, like I am doing with hypre. Is there a way to go here or do I need to (and actually can) setup blas/lapack myself? Will it also happen with parmetis? Note, parmetis and blas/lapack used to compile fine natively some time ago. Dominik On Mon, Nov 7, 2011 at 3:57 PM, Dominik Szczerba wrote: > I am configuring with: > > ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack > --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > --with-parmetis=1 --download-parmetis=1 > > and getting an error: > > ******************************************************************************* > ? ? ? ? UNABLE to CONFIGURE with GIVEN OPTIONS ? ?(see configure.log > for details): > ------------------------------------------------------------------------------- > --download-f2cblaslapack libraries cannot be used > ******************************************************************************* > ?File "./config/configure.py", line 283, in petsc_configure > ? 
?framework.configure(out = sys.stdout) > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", > line 925, in configure > ? ?child.configure() > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > line 538, in configure > ? ?self.executeTest(self.configureLibrary) > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", > line 115, in executeTest > ? ?ret = apply(test, args,kargs) > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > line 444, in configureLibrary > ? ?for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) > in self.generateGuesses(): > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > line 162, in generateGuesses > ? ?raise RuntimeError('--download-f2cblaslapack libraries cannot be used') > > I will send the full log to petsc-maint in a second. How do I go from here? > > Thanks for any hints, > Dominik > From dominik at itis.ethz.ch Mon Nov 7 09:25:46 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 7 Nov 2011 16:25:46 +0100 Subject: [petsc-users] compiling 3.2 on Windows In-Reply-To: References: Message-ID: Sorry, seems I am wrong, dumpbin.exe says it is valid: Any more hints are appreciated. Dominik C:\pack\petsc-3.2-p5\win64-msvc-release\lib>dumpbin libf2cblas.lib Microsoft (R) COFF/PE Dumper Version 10.00.40219.01 Copyright (C) Microsoft Corporation. All rights reserved. Dump of file libf2cblas.lib File Type: LIBRARY Summary 1668 .bss 12E2 .data 5370 .debug$S 1CD3 .drectve 17F4 .pdata BD3 .rdata 5B191 .text 2A18 .xdata C:\pack\petsc-3.2-p5\win64-msvc-release\lib> On Mon, Nov 7, 2011 at 4:17 PM, Dominik Szczerba wrote: > I think I know what the reason is. blas/lapack seems to be linked on > cygwin as a cygwin library while I need a native Windows library, like > I am doing with hypre. Is there a way to go here or do I need to (and > actually can) setup blas/lapack myself? Will it also happen with > parmetis? Note, parmetis and blas/lapack used to compile fine natively > some time ago. > > Dominik > > On Mon, Nov 7, 2011 at 3:57 PM, Dominik Szczerba wrote: >> I am configuring with: >> >> ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release >> --with-x=0 --with-debugging=0 --with-cc='win32fe cl' >> --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack >> --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 >> --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ >> --with-parmetis=1 --download-parmetis=1 >> >> and getting an error: >> >> ******************************************************************************* >> ? ? ? ? UNABLE to CONFIGURE with GIVEN OPTIONS ? ?(see configure.log >> for details): >> ------------------------------------------------------------------------------- >> --download-f2cblaslapack libraries cannot be used >> ******************************************************************************* >> ?File "./config/configure.py", line 283, in petsc_configure >> ? ?framework.configure(out = sys.stdout) >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", >> line 925, in configure >> ? ?child.configure() >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", >> line 538, in configure >> ? ?self.executeTest(self.configureLibrary) >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", >> line 115, in executeTest >> ? 
?ret = apply(test, args,kargs) >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", >> line 444, in configureLibrary >> ? ?for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) >> in self.generateGuesses(): >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", >> line 162, in generateGuesses >> ? ?raise RuntimeError('--download-f2cblaslapack libraries cannot be used') >> >> I will send the full log to petsc-maint in a second. How do I go from here? >> >> Thanks for any hints, >> Dominik >> > From ping.rong at tuhh.de Mon Nov 7 10:33:22 2011 From: ping.rong at tuhh.de (Ping Rong) Date: Mon, 07 Nov 2011 17:33:22 +0100 Subject: [petsc-users] Petsc-3.2 with superlu_dist building error In-Reply-To: References: <4EA9A425.9010306@tuhh.de> <4EB7EBEE.9040306@tuhh.de> Message-ID: <4EB80852.6010203@tuhh.de> Am 07.11.2011 15:49, schrieb Jed Brown: > On Mon, Nov 7, 2011 at 07:32, Ping Rong > wrote: > > does Petsc-dev use SuperLU_DIST (v3.0) or still v2.5? I have > updated both petsc-3.2-p5 and the latest dev version. Both > configurations tell me > ----------------------------------------- > SuperLU: > Includes: -I/home/xxx/libs/petsc/linux-gnu-shared-opt/include > Library: > -Wl,-rpath,/home/xxx/libs/petsc/linux-gnu-shared-opt/lib > -L/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -lsuperlu_4.2 > > SuperLU_DIST: > Includes: -I/home/xxx/libs/petsc/linux-gnu-shared-opt/include > Library: > -Wl,-rpath,/home/xxx/libs/petsc/linux-gnu-shared-opt/lib > -L/home/xxx/libs/petsc/linux-gnu-shared-opt/lib -lsuperlu_dist_2.5 > ------------------------------------------ > > As long as I understood through the discussion, petsc-dev uses > superlu-dist-3.0 and superlu-4.3, right? or this update will be > only available until the next major release? > > > It uses 3.0 and 4.3. Run > > rm -r externalpackages/SuperLU* $PETSC_ARCH/conf/SuperLU* > > and reconfigure. I did the reconfiguration with the dev version, but still it downloads the 2.5 and 4.2. Should 3.2-p5 use 3.0 and 4.3 as well? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 7 10:35:50 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 09:35:50 -0700 Subject: [petsc-users] Petsc-3.2 with superlu_dist building error In-Reply-To: <4EB80852.6010203@tuhh.de> References: <4EA9A425.9010306@tuhh.de> <4EB7EBEE.9040306@tuhh.de> <4EB80852.6010203@tuhh.de> Message-ID: On Mon, Nov 7, 2011 at 09:33, Ping Rong wrote: > I did the reconfiguration with the dev version, but still it downloads the > 2.5 and 4.2. > Either you have an old of petsc-dev (update it) or you didn't delete the files/directories as I wrote in my last email. > Should 3.2-p5 use 3.0 and 4.3 as well? > No, petsc-3.2 was released before those upgrades. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaonanavril at gmail.com Mon Nov 7 10:39:14 2011 From: zhaonanavril at gmail.com (NAN ZHAO) Date: Mon, 7 Nov 2011 09:39:14 -0700 Subject: [petsc-users] Use two KSP solver in the same code. Message-ID: Hi all, I want to solve a coupled system and prepare to solve the two system in certain order in one code. I need to use the KSP solver twice, Does anyone know a good example in the example file. Do I need to create two Petsc object in a c++ code? Thanks, Nan -------------- next part -------------- An HTML attachment was scrubbed... 
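A minimal sketch of what the replies below describe: create one KSP per system, give each its own options prefix, and solve them in whatever order the coupling requires. The matrix and vector names (A1, b1, x1, ...) are placeholders, not from the original question.

KSP ksp1, ksp2;
PetscErrorCode ierr;

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp1);CHKERRQ(ierr);
ierr = KSPSetOptionsPrefix(ksp1, "solver1_");CHKERRQ(ierr);
ierr = KSPSetOperators(ksp1, A1, A1, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp1);CHKERRQ(ierr);

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp2);CHKERRQ(ierr);
ierr = KSPSetOptionsPrefix(ksp2, "solver2_");CHKERRQ(ierr);
ierr = KSPSetOperators(ksp2, A2, A2, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp2);CHKERRQ(ierr);

/* solve the two systems in the required order */
ierr = KSPSolve(ksp1, b1, x1);CHKERRQ(ierr);
ierr = KSPSolve(ksp2, b2, x2);CHKERRQ(ierr);

With the prefixes set, each solver is controlled independently from the command line, e.g. -solver1_ksp_type gmres -solver2_ksp_type cg (note that the "solver2_" prefix goes on ksp2).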
URL: From jedbrown at mcs.anl.gov Mon Nov 7 10:47:35 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 09:47:35 -0700 Subject: [petsc-users] Use two KSP solver in the same code. In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 09:39, NAN ZHAO wrote: > I want to solve a coupled system and prepare to solve the two system in > certain order in one code. I need to use the KSP solver twice, Does anyone > know a good example in the example file. Do I need to create two Petsc > object in a c++ code? > Just create two KSP objects, one for each system you want to solve. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Nov 7 10:45:38 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Nov 2011 16:45:38 +0000 Subject: [petsc-users] Use two KSP solver in the same code. In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 4:39 PM, NAN ZHAO wrote: > Hi all, > > I want to solve a coupled system and prepare to solve the two system in > certain order in one code. I need to use the KSP solver twice, Does anyone > know a good example in the example file. Do I need to create two Petsc > object in a c++ code? > If you want to solve the same system twice, call KSPSolve() twice. If you want to solve two different systems, make two KSPs. If you absolutely must reuse the allocated space for Krylov vectors or something like that, call KSPSetOperators() again and KSPSolve() again. Matt > Thanks, > > Nan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From petsc-maint at mcs.anl.gov Mon Nov 7 11:29:05 2011 From: petsc-maint at mcs.anl.gov (Satish Balay) Date: Mon, 7 Nov 2011 11:29:05 -0600 (CST) Subject: [petsc-users] [petsc-maint #95802] Re: compiling 3.2 on Windows In-Reply-To: References: Message-ID: I just attempted a build on windows with petsc-3.2 and MS compilers [64bit] - and couldn't reproduce this problem. $ ./configure --with-cc='win32fe cl' --with-fc=0 --download-f2cblaslapack=1 PETSC_ARCH=arch-mswin64 I'm not sure why this isn't working for you. What do you get for: $ dumpbin.exe /SYMBOLS libf2cblas.lib |grep ddot I see - for my build: balay at bucharest ~/petsc-3.2-p5/arch-mswin64/lib $ dumpbin.exe /SYMBOLS libf2cblas.lib |grep ddot 00A 00000000 SECT4 notype () External | ddot_ 00D 00000000 SECT5 notype Static | $pdata$ddot_ 010 00000000 SECT6 notype Static | $unwind$ddot_ 014 00000004 SECT7 notype Static | ?mp1@?1??ddot_@@9 at 9 (`ddot_'::`2'::mp1) 016 00000010 SECT7 notype Static | ?m@?1??ddot_@@9 at 9 (`ddot_'::`2'::m) 018 00000014 SECT7 notype Static | ?i__@?1??ddot_@@9 at 9 (`ddot_'::`2'::i__) 019 00000000 SECT7 notype Static | ?iy@?1??ddot_@@9 at 9 (`ddot_'::`2'::iy) 01A 00000018 SECT7 notype Static | ?ix@?1??ddot_@@9 at 9 (`ddot_'::`2'::ix) 01B 00000008 SECT7 notype Static | ?dtemp@?1??ddot_@@9 at 9 (`ddot_'::`2'::dtemp) balay at bucharest ~/petsc-3.2-p5/arch-mswin64/lib $ Satish On Mon, 7 Nov 2011, Dominik Szczerba wrote: > Sorry, seems I am wrong, dumpbin.exe says it is valid: > Any more hints are appreciated. > Dominik > > C:\pack\petsc-3.2-p5\win64-msvc-release\lib>dumpbin libf2cblas.lib > Microsoft (R) COFF/PE Dumper Version 10.00.40219.01 > Copyright (C) Microsoft Corporation. All rights reserved. 
> > > Dump of file libf2cblas.lib > > File Type: LIBRARY > > Summary > > 1668 .bss > 12E2 .data > 5370 .debug$S > 1CD3 .drectve > 17F4 .pdata > BD3 .rdata > 5B191 .text > 2A18 .xdata > > C:\pack\petsc-3.2-p5\win64-msvc-release\lib> > > > On Mon, Nov 7, 2011 at 4:17 PM, Dominik Szczerba wrote: > > I think I know what the reason is. blas/lapack seems to be linked on > > cygwin as a cygwin library while I need a native Windows library, like > > I am doing with hypre. Is there a way to go here or do I need to (and > > actually can) setup blas/lapack myself? Will it also happen with > > parmetis? Note, parmetis and blas/lapack used to compile fine natively > > some time ago. > > > > Dominik > > > > On Mon, Nov 7, 2011 at 3:57 PM, Dominik Szczerba wrote: > >> I am configuring with: > >> > >> ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > >> --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > >> --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack > >> --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > >> --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > >> --with-parmetis=1 --download-parmetis=1 > >> > >> and getting an error: > >> > >> ******************************************************************************* > >> ? ? ? ? UNABLE to CONFIGURE with GIVEN OPTIONS ? ?(see configure.log > >> for details): > >> ------------------------------------------------------------------------------- > >> --download-f2cblaslapack libraries cannot be used > >> ******************************************************************************* > >> ?File "./config/configure.py", line 283, in petsc_configure > >> ? ?framework.configure(out = sys.stdout) > >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", > >> line 925, in configure > >> ? ?child.configure() > >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > >> line 538, in configure > >> ? ?self.executeTest(self.configureLibrary) > >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", > >> line 115, in executeTest > >> ? ?ret = apply(test, args,kargs) > >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > >> line 444, in configureLibrary > >> ? ?for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) > >> in self.generateGuesses(): > >> ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > >> line 162, in generateGuesses > >> ? ?raise RuntimeError('--download-f2cblaslapack libraries cannot be used') > >> > >> I will send the full log to petsc-maint in a second. How do I go from here? > >> > >> Thanks for any hints, > >> Dominik > >> > > > > From agrayver at gfz-potsdam.de Mon Nov 7 11:28:28 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Mon, 07 Nov 2011 18:28:28 +0100 Subject: [petsc-users] Use two KSP solver in the same code. In-Reply-To: References: Message-ID: <4EB8153C.3080506@gfz-potsdam.de> On 07.11.2011 17:47, Jed Brown wrote: > On Mon, Nov 7, 2011 at 09:39, NAN ZHAO > wrote: > > I want to solve a coupled system and prepare to solve the two > system in certain order in one code. I need to use the KSP solver > twice, Does anyone know a good example in the example file. Do I > need to create two Petsc object in a c++ code? > > > Just create two KSP objects, one for each system you want to solve. Sorry for disturbing, but I've also got similar question. 
How can one specify individual options through command line for two different KSPs? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Nov 7 11:31:41 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Nov 2011 17:31:41 +0000 Subject: [petsc-users] Use two KSP solver in the same code. In-Reply-To: <4EB8153C.3080506@gfz-potsdam.de> References: <4EB8153C.3080506@gfz-potsdam.de> Message-ID: On Mon, Nov 7, 2011 at 5:28 PM, Alexander Grayver wrote: > ** > On 07.11.2011 17:47, Jed Brown wrote: > > On Mon, Nov 7, 2011 at 09:39, NAN ZHAO wrote: > >> I want to solve a coupled system and prepare to solve the two system in >> certain order in one code. I need to use the KSP solver twice, Does anyone >> know a good example in the example file. Do I need to create two Petsc >> object in a c++ code? >> > > Just create two KSP objects, one for each system you want to solve. > > > Sorry for disturbing, but I've also got similar question. How can one > specify individual options through command line for two different KSPs? > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/KSP/KSPSetOptionsPrefix.html Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Nov 7 11:32:14 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 7 Nov 2011 11:32:14 -0600 (CST) Subject: [petsc-users] Use two KSP solver in the same code. In-Reply-To: <4EB8153C.3080506@gfz-potsdam.de> References: <4EB8153C.3080506@gfz-potsdam.de> Message-ID: On Mon, 7 Nov 2011, Alexander Grayver wrote: > On 07.11.2011 17:47, Jed Brown wrote: > > On Mon, Nov 7, 2011 at 09:39, NAN ZHAO > > wrote: > > > > I want to solve a coupled system and prepare to solve the two > > system in certain order in one code. I need to use the KSP solver > > twice, Does anyone know a good example in the example file. Do I > > need to create two Petsc object in a c++ code? > > > > > > Just create two KSP objects, one for each system you want to solve. > > Sorry for disturbing, but I've also got similar question. How can one specify > individual options through command line for two different KSPs? Use: KSPSetOptionsPrefix(ksp1, "solver1_") KSPSetOptionsPrefix(ksp1, "solver2_") Now - you can specify stuff like: -solver1_ksp_type gmres -solver2_ksp_type cg Satish From agrayver at gfz-potsdam.de Mon Nov 7 11:44:32 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Mon, 07 Nov 2011 18:44:32 +0100 Subject: [petsc-users] Use two KSP solver in the same code. In-Reply-To: References: <4EB8153C.3080506@gfz-potsdam.de> Message-ID: <4EB81900.2050602@gfz-potsdam.de> Mathew, Satish, Thanks guys! Good decision as usual. Regards, Alexander On 07.11.2011 18:32, Satish Balay wrote: > On Mon, 7 Nov 2011, Alexander Grayver wrote: > >> On 07.11.2011 17:47, Jed Brown wrote: >>> On Mon, Nov 7, 2011 at 09:39, NAN ZHAO>> > wrote: >>> >>> I want to solve a coupled system and prepare to solve the two >>> system in certain order in one code. I need to use the KSP solver >>> twice, Does anyone know a good example in the example file. Do I >>> need to create two Petsc object in a c++ code? >>> >>> >>> Just create two KSP objects, one for each system you want to solve. >> Sorry for disturbing, but I've also got similar question. 
How can one specify >> individual options through command line for two different KSPs? > > Use: > > KSPSetOptionsPrefix(ksp1, "solver1_") > KSPSetOptionsPrefix(ksp1, "solver2_") > > > Now - you can specify stuff like: > > -solver1_ksp_type gmres -solver2_ksp_type cg > > > Satish From ckontzialis at lycos.com Mon Nov 7 12:55:55 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Mon, 07 Nov 2011 20:55:55 +0200 Subject: [petsc-users] Solution update is not working in TS In-Reply-To: References: Message-ID: <4EB829BB.8070800@lycos.com> On 11/07/2011 04:50 PM, petsc-users-request at mcs.anl.gov wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. Extremal Singular Values (behzad baghapour) > 2. Re: about Singular value (Matthew Knepley) > 3. Solution update is not working in TS (Konstantinos Kontzialis) > 4. Re: TS solution (Jed Brown) > 5. Re: Solution update is not working in TS (Jed Brown) > 6. Re: Petsc-3.2 with superlu_dist building error (Ping Rong) > 7. Re: Petsc-3.2 with superlu_dist building error (Jed Brown) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 7 Nov 2011 16:05:58 +0330 > From: behzad baghapour > Subject: [petsc-users] Extremal Singular Values > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Is there any link with "Incremental Condition Estimation" (ICE) developed > by Bischof in LAPACK with Petsc to evaluate Extremal singular values? > > Thanks, Behzad. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Mon, 7 Nov 2011 13:16:43 +0000 > From: Matthew Knepley > Subject: Re: [petsc-users] about Singular value > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Mon, Nov 7, 2011 at 9:06 AM, behzad baghapour> wrote: >> Is there any link with "Incremental Condition Estimation" (ICE) developed >> by Bischof in LAPACK with Petsc to evaluate Extremal singular values? >> > We do not use this (it is intended for triangular matrices). It may be > tenuously connected. He says he is > approximating the secular equation with rational functions, in some sense > we are approximating the > characteristic equation with polynomials (Krylov). > > Matt > > >> Thanks, >> B.B. >> >> >> >> >> On Sun, Oct 30, 2011 at 6:49 PM, Jed Brown wrote: >> >>> On Sun, Oct 30, 2011 at 05:17, Matthew Knepley wrote: >>> >>>> We call LAPACK SVD on the Hermitian matrix made by the Krylov method. >>> >>> GMRES builds a Hessenberg matrix. >>> >> >> > Dear Jed, I perform a simulation over a cylinder at M=0.14 using the Euler equations. The explicit part now works . I use the Vec vector in my monitor routine and I get the results. 
But, when I go for the implicit solution I code: ierr = TSCreate(sys.comm, &sys.ts); CHKERRQ(ierr); ierr = TSSetProblemType(sys.ts, TS_NONLINEAR); CHKERRQ(ierr); ierr = TSSetSolution(sys.ts, sys.gsv); CHKERRQ(ierr); ierr = TSSetIFunction(sys.ts, PETSC_NULL, base_residual_implicit, &sys); CHKERRQ(ierr); ierr = TSGetSNES(sys.ts, &sys.snes); CHKERRQ(ierr); ierr = MatCreateSNESMF(sys.snes, &sys.J); CHKERRQ(ierr); ISColoring iscoloring; MatFDColoring matfdcoloring; ierr = jacobian_diff_numerical(sys, &sys.P); CHKERRQ(ierr); ierr = MatGetColoring(sys.P, MATCOLORINGSL, &iscoloring); CHKERRQ(ierr); ierr = MatFDColoringCreate(sys.P, iscoloring, &matfdcoloring); CHKERRQ(ierr); ierr = MatFDColoringSetFunction(matfdcoloring, (PetscErrorCode(*)(void)) SNESTSFormFunction, sys.ts); CHKERRQ(ierr); ierr = MatFDColoringSetFromOptions(matfdcoloring); CHKERRQ(ierr); ierr = ISColoringDestroy(&iscoloring); CHKERRQ(ierr); ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, matfdcoloring); CHKERRQ(ierr); ierr = TSSetInitialTimeStep(sys.ts, sys.con->tm, sys.con->dt); CHKERRQ(ierr); ierr = TSSetDuration(sys.ts, 100e+6, sys.con->etime); CHKERRQ(ierr); ierr = TSMonitorSet(sys.ts, monitor, &sys, PETSC_NULL); CHKERRQ(ierr); ierr = TSSetFromOptions(sys.ts); CHKERRQ(ierr); ierr = TSSolve(sys.ts, sys.gsv, &sys.con->etime); CHKERRQ(ierr); and I run with: mpiexec -n 8 valgrind ./hoac cylinder -llf_flux -n_out 500 -end_time 50.0 -dt 2.0e-1 -ts_type beuler -gl -implicit and I get: ********************************************************************** METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota Graph Information --------------------------------------------------- Name: mesh.graph, #Vertices: 400, #Edges: 780, #Parts: 8 Recursive Partitioning... ------------------------------------------- 8-way Edge-Cut: 102, Balance: 1.00 Timing Information -------------------------------------------------- I/O: 0.000 Partitioning: 0.000 (PMETIS time) Total: 0.000 ********************************************************************** Approximation order = 1 # DOF = 6080 # nodes in mesh = 400 # elements in mesh = 380 Euler solution Using LLF flux [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Must call SNESSetFunction() or SNESSetDM() first From huangsc at gmail.com Mon Nov 7 15:43:32 2011 From: huangsc at gmail.com (Shao-Ching Huang) Date: Mon, 7 Nov 2011 13:43:32 -0800 Subject: [petsc-users] how to rebuild from updated repo Message-ID: Hi I downloaded petsc using the "hg clone http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after hg pull/update (some files are changed), do I always have to build from scratch (i.e. compiling everything, including the --download-xxx stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no configuration change? Thanks, Shao-Ching From knepley at gmail.com Mon Nov 7 15:49:42 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Nov 2011 21:49:42 +0000 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 9:43 PM, Shao-Ching Huang wrote: > Hi > > I downloaded petsc using the "hg clone > http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after > hg pull/update (some files are changed), do I always have to build > from scratch (i.e. compiling everything, including the --download-xxx > stuff)? 
Is there a short cut to rebuild libpetsc.{so,a} when I have no > configuration change? > The point of having a configuration stage is that it allows us to be ignorant of the precise configuration of your platform. If you want a package, there are several package managers with PETSc (like Debian), but it is hard to keep them up to date. Personally, I prefer the "built it for the machine" philosophy (like Gentoo). Matt > Thanks, > > Shao-Ching -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Nov 7 15:52:10 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 7 Nov 2011 15:52:10 -0600 (CST) Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: On Mon, 7 Nov 2011, Shao-Ching Huang wrote: > Hi > > I downloaded petsc using the "hg clone > http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after > hg pull/update (some files are changed), do I always have to build > from scratch (i.e. compiling everything, including the --download-xxx > stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no > configuration change? This is a tricky thing. Generally updates to petsc-3.2 should not require a rerun of configure or a rebuild of all library code - but this is not always true. [sometimes you might have to pull/update BuildSystem and rerun configure - or rebuild externalpackages - as the tarballs for these packages get updated]. For the generaly case - an update of the libraries can be done with: [for cmake build] make [for non-cmake build] make ACTION=lib tree And if a rerun of configure is needed - one can run ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py Satish From bsmith at mcs.anl.gov Mon Nov 7 15:56:42 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Nov 2011 15:56:42 -0600 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: If you find that ./configure or the make are taking a huge amount of time you might investigate where the files are stored. If, for example, they are stored on a separate file server it may be most of the time is due to the file server. For example configuring and compiling each take me about 5 minutes on my laptop; on a desktop using a file server might take 1/2 an hour. Barry On Nov 7, 2011, at 3:52 PM, Satish Balay wrote: > On Mon, 7 Nov 2011, Shao-Ching Huang wrote: > >> Hi >> >> I downloaded petsc using the "hg clone >> http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after >> hg pull/update (some files are changed), do I always have to build >> from scratch (i.e. compiling everything, including the --download-xxx >> stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no >> configuration change? > > This is a tricky thing. Generally updates to petsc-3.2 should not > require a rerun of configure or a rebuild of all library code - but > this is not always true. [sometimes you might have to pull/update > BuildSystem and rerun configure - or rebuild externalpackages - as the > tarballs for these packages get updated]. 
> > For the generaly case - an update of the libraries can be done with: > > [for cmake build] > make > > [for non-cmake build] > make ACTION=lib tree > > And if a rerun of configure is needed - one can run > ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py > > Satish From huangsc at gmail.com Mon Nov 7 16:07:52 2011 From: huangsc at gmail.com (Shao-Ching Huang) Date: Mon, 7 Nov 2011 14:07:52 -0800 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: Thanks guys. I was thinking rebuilding in 5 seconds, if I could reuse most of the already-built .o files. (Like in a typical makefile scenario -- only the changed source files are re-compiled). It seems that the petsc "make" command cleans up all the .o files automatically. I have no problem of rebuilding the whole thing afresh. Actually that is what I have been doing. Thanks, Shao-Ching On Mon, Nov 7, 2011 at 1:56 PM, Barry Smith wrote: > > ?If you find that ./configure or the make are taking a huge amount of time you might investigate where the files are stored. If, for example, they are stored on a separate file server it may be most of the time is due to the file server. For example configuring and compiling each take me about 5 minutes on my laptop; on a desktop using a file server might take 1/2 an hour. > > ? Barry > > On Nov 7, 2011, at 3:52 PM, Satish Balay wrote: > >> On Mon, 7 Nov 2011, Shao-Ching Huang wrote: >> >>> Hi >>> >>> I downloaded petsc using the "hg clone >>> http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after >>> hg pull/update (some files are changed), do I always have to build >>> from scratch (i.e. compiling everything, including the --download-xxx >>> stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no >>> configuration change? >> >> This is a tricky thing. Generally updates to petsc-3.2 should not >> require a rerun of configure or a rebuild of all library code - but >> this is not always true. [sometimes you might have to pull/update >> BuildSystem and rerun configure - or rebuild externalpackages - as the >> tarballs for these packages get updated]. >> >> For the generaly case - an update of the libraries can be done with: >> >> [for cmake build] >> make >> >> [for non-cmake build] >> make ACTION=lib tree >> >> And if a rerun of configure is needed - one can run >> ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py >> >> Satish > > From knepley at gmail.com Mon Nov 7 16:10:17 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 7 Nov 2011 22:10:17 +0000 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 10:07 PM, Shao-Ching Huang wrote: > Thanks guys. > > I was thinking rebuilding in 5 seconds, if I could reuse most of the > already-built .o files. (Like in a typical makefile scenario -- only > the changed source files are re-compiled). It seems that the petsc > "make" command cleans up all the .o files automatically. > Ah, this is a different discussion. We have no good dependencies that signal a reconfigure. It is complicated. However, if you only care about rebuilding, both CMake (the 'make' command) and Python (the builder2.py build command) take into account dependencies, so they will only rebuild what changes. Matt > I have no problem of rebuilding the whole thing afresh. Actually that > is what I have been doing. 
> > Thanks, > > Shao-Ching > > On Mon, Nov 7, 2011 at 1:56 PM, Barry Smith wrote: > > > > If you find that ./configure or the make are taking a huge amount of > time you might investigate where the files are stored. If, for example, > they are stored on a separate file server it may be most of the time is due > to the file server. For example configuring and compiling each take me > about 5 minutes on my laptop; on a desktop using a file server might take > 1/2 an hour. > > > > Barry > > > > On Nov 7, 2011, at 3:52 PM, Satish Balay wrote: > > > >> On Mon, 7 Nov 2011, Shao-Ching Huang wrote: > >> > >>> Hi > >>> > >>> I downloaded petsc using the "hg clone > >>> http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after > >>> hg pull/update (some files are changed), do I always have to build > >>> from scratch (i.e. compiling everything, including the --download-xxx > >>> stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no > >>> configuration change? > >> > >> This is a tricky thing. Generally updates to petsc-3.2 should not > >> require a rerun of configure or a rebuild of all library code - but > >> this is not always true. [sometimes you might have to pull/update > >> BuildSystem and rerun configure - or rebuild externalpackages - as the > >> tarballs for these packages get updated]. > >> > >> For the generaly case - an update of the libraries can be done with: > >> > >> [for cmake build] > >> make > >> > >> [for non-cmake build] > >> make ACTION=lib tree > >> > >> And if a rerun of configure is needed - one can run > >> ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py > >> > >> Satish > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 7 16:10:48 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 15:10:48 -0700 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: On Mon, Nov 7, 2011 at 15:07, Shao-Ching Huang wrote: > I was thinking rebuilding in 5 seconds, if I could reuse most of the > already-built .o files. (Like in a typical makefile scenario -- only > the changed source files are re-compiled). It seems that the petsc > "make" command cleans up all the .o files automatically. > If you have CMake on your system, "make" runs a parallel CMake-generated build that does dependency analysis. A "do-nothing" build should take about 1 second. But if you haven't updated for a while, it's likely that a header has changed that will require recompiling more. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Nov 7 16:11:30 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Nov 2011 14:11:30 -0800 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: Message-ID: <1B291E12-A94A-478D-B51D-41500FFC04E2@mcs.anl.gov> With petsc-3.2 if you have a recent version of cmake installed on your machine then it does exactly that and if you, for example, change one file the rebuilding of the library is very fast. Barry On Nov 7, 2011, at 2:07 PM, Shao-Ching Huang wrote: > Thanks guys. > > I was thinking rebuilding in 5 seconds, if I could reuse most of the > already-built .o files. (Like in a typical makefile scenario -- only > the changed source files are re-compiled). 
It seems that the petsc > "make" command cleans up all the .o files automatically. > > I have no problem of rebuilding the whole thing afresh. Actually that > is what I have been doing. > > Thanks, > > Shao-Ching > > On Mon, Nov 7, 2011 at 1:56 PM, Barry Smith wrote: >> >> If you find that ./configure or the make are taking a huge amount of time you might investigate where the files are stored. If, for example, they are stored on a separate file server it may be most of the time is due to the file server. For example configuring and compiling each take me about 5 minutes on my laptop; on a desktop using a file server might take 1/2 an hour. >> >> Barry >> >> On Nov 7, 2011, at 3:52 PM, Satish Balay wrote: >> >>> On Mon, 7 Nov 2011, Shao-Ching Huang wrote: >>> >>>> Hi >>>> >>>> I downloaded petsc using the "hg clone >>>> http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after >>>> hg pull/update (some files are changed), do I always have to build >>>> from scratch (i.e. compiling everything, including the --download-xxx >>>> stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no >>>> configuration change? >>> >>> This is a tricky thing. Generally updates to petsc-3.2 should not >>> require a rerun of configure or a rebuild of all library code - but >>> this is not always true. [sometimes you might have to pull/update >>> BuildSystem and rerun configure - or rebuild externalpackages - as the >>> tarballs for these packages get updated]. >>> >>> For the generaly case - an update of the libraries can be done with: >>> >>> [for cmake build] >>> make >>> >>> [for non-cmake build] >>> make ACTION=lib tree >>> >>> And if a rerun of configure is needed - one can run >>> ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py >>> >>> Satish >> >> From jedbrown at mcs.anl.gov Mon Nov 7 16:36:34 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Nov 2011 15:36:34 -0700 Subject: [petsc-users] Solution update is not working in TS In-Reply-To: <4EB829BB.8070800@lycos.com> References: <4EB829BB.8070800@lycos.com> Message-ID: On Mon, Nov 7, 2011 at 11:55, Konstantinos Kontzialis wrote: > [0]PETSC ERROR: --------------------- Error Message > ------------------------------**------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: Must call SNESSetFunction() or SNESSetDM() first > 1. ALWAYS include the whole error message. 2. Please trim replies, _especially_ if you get the digest. -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Mon Nov 7 18:33:05 2011 From: huangsc at gmail.com (Shao-Ching Huang) Date: Mon, 7 Nov 2011 16:33:05 -0800 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: <1B291E12-A94A-478D-B51D-41500FFC04E2@mcs.anl.gov> References: <1B291E12-A94A-478D-B51D-41500FFC04E2@mcs.anl.gov> Message-ID: It turns out that the machine I am using has CUDA (nvcc), so petsc disables cmake build (apparently even with --with-cuda=0, as long as nvcc is detected). Then I noticed this comment: "# Our CMake build does not support CUDA at this time" in config/PETSc/Configure.py (line 468) I tried the same configure options on a different machine that has no cuda -- cmake build works as expected. Shao-Ching On Mon, Nov 7, 2011 at 2:11 PM, Barry Smith wrote: > > ?With petsc-3.2 if you have a recent version of cmake installed on your machine then it does exactly that and if you, for example, change one file the rebuilding of the library is very fast. > > ? 
Barry > > On Nov 7, 2011, at 2:07 PM, Shao-Ching Huang wrote: > >> Thanks guys. >> >> I was thinking rebuilding in 5 seconds, if I could reuse most of the >> already-built .o files. (Like in a typical makefile scenario -- only >> the changed source files are re-compiled). It seems that the petsc >> "make" command cleans up all the .o files automatically. >> >> I have no problem of rebuilding the whole thing afresh. Actually that >> is what I have been doing. >> >> Thanks, >> >> Shao-Ching >> >> On Mon, Nov 7, 2011 at 1:56 PM, Barry Smith wrote: >>> >>> ?If you find that ./configure or the make are taking a huge amount of time you might investigate where the files are stored. If, for example, they are stored on a separate file server it may be most of the time is due to the file server. For example configuring and compiling each take me about 5 minutes on my laptop; on a desktop using a file server might take 1/2 an hour. >>> >>> ? Barry >>> >>> On Nov 7, 2011, at 3:52 PM, Satish Balay wrote: >>> >>>> On Mon, 7 Nov 2011, Shao-Ching Huang wrote: >>>> >>>>> Hi >>>>> >>>>> I downloaded petsc using the "hg clone >>>>> http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after >>>>> hg pull/update (some files are changed), do I always have to build >>>>> from scratch (i.e. compiling everything, including the --download-xxx >>>>> stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no >>>>> configuration change? >>>> >>>> This is a tricky thing. Generally updates to petsc-3.2 should not >>>> require a rerun of configure or a rebuild of all library code - but >>>> this is not always true. [sometimes you might have to pull/update >>>> BuildSystem and rerun configure - or rebuild externalpackages - as the >>>> tarballs for these packages get updated]. >>>> >>>> For the generaly case - an update of the libraries can be done with: >>>> >>>> [for cmake build] >>>> make >>>> >>>> [for non-cmake build] >>>> make ACTION=lib tree >>>> >>>> And if a rerun of configure is needed - one can run >>>> ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py >>>> >>>> Satish >>> >>> > > From bsmith at mcs.anl.gov Mon Nov 7 18:34:20 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Nov 2011 16:34:20 -0800 Subject: [petsc-users] how to rebuild from updated repo In-Reply-To: References: <1B291E12-A94A-478D-B51D-41500FFC04E2@mcs.anl.gov> Message-ID: Hmm, I thinkl this was fixed in petsc-dev, switch to it. On Nov 7, 2011, at 4:33 PM, Shao-Ching Huang wrote: > It turns out that the machine I am using has CUDA (nvcc), so petsc > disables cmake build (apparently even with --with-cuda=0, as long as > nvcc is detected). Then I noticed this comment: "# Our CMake build > does not support CUDA at this time" in config/PETSc/Configure.py (line > 468) > > I tried the same configure options on a different machine that has no > cuda -- cmake build works as expected. > > Shao-Ching > > On Mon, Nov 7, 2011 at 2:11 PM, Barry Smith wrote: >> >> With petsc-3.2 if you have a recent version of cmake installed on your machine then it does exactly that and if you, for example, change one file the rebuilding of the library is very fast. >> >> Barry >> >> On Nov 7, 2011, at 2:07 PM, Shao-Ching Huang wrote: >> >>> Thanks guys. >>> >>> I was thinking rebuilding in 5 seconds, if I could reuse most of the >>> already-built .o files. (Like in a typical makefile scenario -- only >>> the changed source files are re-compiled). It seems that the petsc >>> "make" command cleans up all the .o files automatically. 
>>> >>> I have no problem of rebuilding the whole thing afresh. Actually that >>> is what I have been doing. >>> >>> Thanks, >>> >>> Shao-Ching >>> >>> On Mon, Nov 7, 2011 at 1:56 PM, Barry Smith wrote: >>>> >>>> If you find that ./configure or the make are taking a huge amount of time you might investigate where the files are stored. If, for example, they are stored on a separate file server it may be most of the time is due to the file server. For example configuring and compiling each take me about 5 minutes on my laptop; on a desktop using a file server might take 1/2 an hour. >>>> >>>> Barry >>>> >>>> On Nov 7, 2011, at 3:52 PM, Satish Balay wrote: >>>> >>>>> On Mon, 7 Nov 2011, Shao-Ching Huang wrote: >>>>> >>>>>> Hi >>>>>> >>>>>> I downloaded petsc using the "hg clone >>>>>> http://petsc.cs.iit.edu/petsc/releases/petsc-3.2" method. Now, after >>>>>> hg pull/update (some files are changed), do I always have to build >>>>>> from scratch (i.e. compiling everything, including the --download-xxx >>>>>> stuff)? Is there a short cut to rebuild libpetsc.{so,a} when I have no >>>>>> configuration change? >>>>> >>>>> This is a tricky thing. Generally updates to petsc-3.2 should not >>>>> require a rerun of configure or a rebuild of all library code - but >>>>> this is not always true. [sometimes you might have to pull/update >>>>> BuildSystem and rerun configure - or rebuild externalpackages - as the >>>>> tarballs for these packages get updated]. >>>>> >>>>> For the generaly case - an update of the libraries can be done with: >>>>> >>>>> [for cmake build] >>>>> make >>>>> >>>>> [for non-cmake build] >>>>> make ACTION=lib tree >>>>> >>>>> And if a rerun of configure is needed - one can run >>>>> ./PETSC_ARCH/conf/reconfigure_PETSC_ARCH.py >>>>> >>>>> Satish >>>> >>>> >> >> From Robert.Ellis at geosoft.com Mon Nov 7 22:42:25 2011 From: Robert.Ellis at geosoft.com (Robert Ellis) Date: Tue, 8 Nov 2011 04:42:25 +0000 Subject: [petsc-users] MPI_AllReduce vs VecScatterCreateToAll Message-ID: <18205E5ECD2A1A4584F2BFC0BCBDE95526EC6877@exchange.geosoft.com> Hello Petsc Developers, I have a predominantly Petsc application but for simplicity it uses a very few MPI_AllReduce calls. I am finding that the MPI_AllReduce operations are sometimes causing problems (appears to be semaphore time outs) if the interprocess communication is slow. I never have any problem with the Petsc operations. Is it reasonable that Petsc would be more robust that MPI_AllReduce? Also, is the VecScatterCreateToAll set of operations the best way to replace the MPI_AllReduce? Thanks for any advice, Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Tue Nov 8 04:14:44 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Tue, 08 Nov 2011 12:14:44 +0200 Subject: [petsc-users] TS and SNES Message-ID: <4EB90114.8090506@lycos.com> Deall all, I perform a simulation over a cylinder at M=0.14 using the Euler equations. The explicit part works. 
But, when I go for the implicit solution I code: ierr = TSCreate(sys.comm, &sys.ts); CHKERRQ(ierr); ierr = TSSetProblemType(sys.ts, TS_NONLINEAR); CHKERRQ(ierr); ierr = TSSetSolution(sys.ts, sys.gsv); CHKERRQ(ierr); ierr = TSSetIFunction(sys.ts, PETSC_NULL, base_residual_implicit, &sys); CHKERRQ(ierr); ierr = TSGetSNES(sys.ts, &sys.snes); CHKERRQ(ierr); ierr = MatCreateSNESMF(sys.snes, &sys.J); CHKERRQ(ierr); ISColoring iscoloring; MatFDColoring matfdcoloring; ierr = jacobian_diff_numerical(sys, &sys.P); CHKERRQ(ierr); ierr = MatGetColoring(sys.P, MATCOLORINGSL, &iscoloring); CHKERRQ(ierr); ierr = MatFDColoringCreate(sys.P, iscoloring, &matfdcoloring); CHKERRQ(ierr); ierr = MatFDColoringSetFunction(matfdcoloring, (PetscErrorCode(*)(void)) SNESTSFormFunction, sys.ts); CHKERRQ(ierr); ierr = MatFDColoringSetFromOptions(matfdcoloring); CHKERRQ(ierr); ierr = ISColoringDestroy(&iscoloring); CHKERRQ(ierr); ierr = SNESSetJacobian(sys.snes, sys.J, sys.P, SNESDefaultComputeJacobianColor, matfdcoloring); CHKERRQ(ierr); ierr = TSSetInitialTimeStep(sys.ts, sys.con->tm, sys.con->dt); CHKERRQ(ierr); ierr = TSSetDuration(sys.ts, 100e+6, sys.con->etime); CHKERRQ(ierr); ierr = TSMonitorSet(sys.ts, monitor, &sys, PETSC_NULL); CHKERRQ(ierr); ierr = TSSetFromOptions(sys.ts); CHKERRQ(ierr); ierr = TSSolve(sys.ts, sys.gsv, &sys.con->etime); CHKERRQ(ierr); I run with: mpiexec -n 1 ./hoac cylinder -snes_mf_operator -llf_flux -n_out 10 -end_time 5 -implicit -pc_type asm -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm -gl -ksp_type gmres -sub_pc_factor_levels 0 -dt 1.0e-1 -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view -ts_type beuler and I get: Approximation order = 0 # DOF = 1520 # nodes in mesh = 400 # elements in mesh = 380 Euler solution Using LLF flux Linear solve converged due to CONVERGED_RTOL iterations 1 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Must call SNESSetFunction() or SNESSetDM() first! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Tue Nov 8 12:11:04 2011 [0]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [0]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatCreateSNESMF() line 142 in /home/kontzialis/petsc-3.2-p5/src/snes/mf/snesmfj.c [0]PETSC ERROR: implicit_time() line 38 in "unknowndirectory/"../src/implicit_time.c [0]PETSC ERROR: main() line 1176 in "unknowndirectory/"../src/hoac.c application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 What am I doing wrong? Thank you, Kostas From jedbrown at mcs.anl.gov Tue Nov 8 06:33:38 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Nov 2011 06:33:38 -0600 Subject: [petsc-users] TS and SNES In-Reply-To: <4EB90114.8090506@lycos.com> References: <4EB90114.8090506@lycos.com> Message-ID: On Tue, Nov 8, 2011 at 04:14, Konstantinos Kontzialis wrote: > ierr = TSSetSolution(sys.ts, sys.gsv); > CHKERRQ(ierr); > > ierr = TSSetIFunction(sys.ts, PETSC_NULL, base_residual_implicit, &sys); > CHKERRQ(ierr); > Provide a residual vector here (instead of PETSC_NULL). Since you provided a state Vec in TSSetSolution(), it is possible to create a Vec for the residual. I'll add that logic and improve the error message, but for now, just pass in the residual Vec. > > ierr = TSGetSNES(sys.ts, &sys.snes); > CHKERRQ(ierr); > > ierr = MatCreateSNESMF(sys.snes, &sys.J); > CHKERRQ(ierr); > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Nov 8 06:50:37 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Nov 2011 12:50:37 +0000 Subject: [petsc-users] MPI_AllReduce vs VecScatterCreateToAll In-Reply-To: <18205E5ECD2A1A4584F2BFC0BCBDE95526EC6877@exchange.geosoft.com> References: <18205E5ECD2A1A4584F2BFC0BCBDE95526EC6877@exchange.geosoft.com> Message-ID: On Tue, Nov 8, 2011 at 4:42 AM, Robert Ellis wrote: > Hello Petsc Developers,**** > > ** ** > > I have a predominantly Petsc application but for simplicity it uses a very > few MPI_AllReduce calls. I am finding that the MPI_AllReduce operations are > sometimes causing problems (appears to be semaphore time outs) if the > interprocess communication is slow. I never have any problem with the Petsc > operations. Is it reasonable that Petsc would be more robust that > MPI_AllReduce? > No. 
There are a lot of Allreduce() calls in the source: find src -name "*.c" | xargs grep MPI_Allreduce Matt > ** > > Also, is the VecScatterCreateToAll set of operations the best way to > replace the MPI_AllReduce?**** > > ** ** > > Thanks for any advice,**** > > Rob**** > > ** ** > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 8 06:56:18 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Nov 2011 06:56:18 -0600 Subject: [petsc-users] MPI_AllReduce vs VecScatterCreateToAll In-Reply-To: <18205E5ECD2A1A4584F2BFC0BCBDE95526EC6877@exchange.geosoft.com> References: <18205E5ECD2A1A4584F2BFC0BCBDE95526EC6877@exchange.geosoft.com> Message-ID: You shouldn't see timeouts. Sounds like a misconfiguration or a bug in the code. If you want an MPI_Allreduce, then MPI_Allreduce is the best way to get it. VecScatter is intended for less structured operations. On Nov 7, 2011 10:42 PM, "Robert Ellis" wrote: > Hello Petsc Developers,**** > > ** ** > > I have a predominantly Petsc application but for simplicity it uses a very > few MPI_AllReduce calls. I am finding that the MPI_AllReduce operations are > sometimes causing problems (appears to be semaphore time outs) if the > interprocess communication is slow. I never have any problem with the Petsc > operations. Is it reasonable that Petsc would be more robust that > MPI_AllReduce?**** > > ** ** > > Also, is the VecScatterCreateToAll set of operations the best way to > replace the MPI_AllReduce?**** > > ** ** > > Thanks for any advice,**** > > Rob**** > > ** ** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 8 07:17:47 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Nov 2011 07:17:47 -0600 Subject: [petsc-users] TS and SNES In-Reply-To: References: <4EB90114.8090506@lycos.com> Message-ID: On Tue, Nov 8, 2011 at 06:33, Jed Brown wrote: > Provide a residual vector here (instead of PETSC_NULL). Since you provided > a state Vec in TSSetSolution(), it is possible to create a Vec for the > residual. I'll add that logic and improve the error message, but for now, > just pass in the residual Vec. Also note that you can skip MatCreateSNESMF() and just pass -snes_mf_operator. -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.perezcerquera at polito.it Tue Nov 8 07:49:37 2011 From: manuel.perezcerquera at polito.it (PEREZ CERQUERA MANUEL RICARDO) Date: Tue, 08 Nov 2011 14:49:37 +0100 Subject: [petsc-users] About efficiency in MatAssembling Message-ID: Hi all, I'm having problems with the efficiency of my code, I mean, it's taken so much time when I'm running in 2 processors than in 1 Processor. My guess is that the communications at low level inside PETSC Directives, because I'm generating the elements on wrong processes and this is killing me, moreover I'm over estimated my memory occupancy, so my questions are: 1. 
For The Memory Occupancy , I Know the NonZero Pattern of my Matrix, so I'm calling CALL MatCreateMPIAIJ(PETSC_COMM_WORLD,LocalNOfBEMFunctions,LocalTotalRank,NOfTotalBEMFunctions,GlobalTotalRank,& PETSC_NULL_INTEGER,NNZ_diagonal(:),& PETSC_NULL_INTEGER,NNZ_outofdiagonal(:),UFar,ierr) So I have a doubt if when I'm doing this Should I specify just one NNZ_diagonal(:) or NNZ_outofdiagonal(:) ?, not both of them as I'm doing. 2. For the time efficiency, When I'm calling, MatSetValues() The Global Index is out of the range set in MatCreateMPIAIJ, I mean for example if the range is between 1 and 10 in a Matrix of 100 The Global index are let's say 1 20 25 2 5 60 85 90 100 3 so there are some elements out of the range, moreover I ran with -info option and I got this: [0] MatAssemblyBegin_MPIAIJ(): Stash has 84088 entries, uses 3 mallocs. And in the manual is written this actually means I'm generating entries in the wrong process,not because of the mollocs but because of the entries, Do I'm right ? So is this indeed not efficient?. If not, Can you suggest me a possible way to improve it? , Or there are some special way for indexing the elements in order to put them in order to solve the system? Thanks! Manuel. I realized with -info option in run time Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student Antenna and EMC Lab (LACE) Istituto Superiore Mario Boella (ISMB) Politecnico di Torino Via Pier Carlo Boggio 61, Torino 10138, Italy Email: manuel.perezcerquera at polito.it Phone: +39 0112276704 Fax: +39 011 2276 299 From ping.rong at tuhh.de Tue Nov 8 07:57:19 2011 From: ping.rong at tuhh.de (Ping Rong) Date: Tue, 08 Nov 2011 14:57:19 +0100 Subject: [petsc-users] a little bug Message-ID: <4EB9353F.7090009@tuhh.de> Hello guys, I would like to report a possible bug. In file "\src\ksp\pc\impls\supportgraph\lowstretch.cpp" at line 233, 439, 812 and 958 the marco SETERRQ still uses the old interface, so if petsc is compiled with boost and complex, this will throw an error. -- Ping Rong, M.Sc. Hamburg University of Technology Institut of modelling and computation Denickestra?e 17 (Room 3031) 21073 Hamburg Tel.: ++49 - (0)40 42878 2749 Fax: ++49 - (0)40 42878 43533 Email: ping.rong at tuhh.de From petsc-maint at mcs.anl.gov Tue Nov 8 07:58:32 2011 From: petsc-maint at mcs.anl.gov (Matthew Knepley) Date: Tue, 8 Nov 2011 13:58:32 +0000 Subject: [petsc-users] About efficiency in MatAssembling In-Reply-To: References: Message-ID: On Tue, Nov 8, 2011 at 1:49 PM, PEREZ CERQUERA MANUEL RICARDO < manuel.perezcerquera at polito.it> wrote: > Hi all, > > I'm having problems with the efficiency of my code, I mean, it's taken so > much time when I'm running in 2 processors than in 1 Processor. My guess > is that the communications at low level inside PETSC Directives, because > I'm generating the elements on wrong processes and this is killing me, > moreover I'm over estimated my memory occupancy, so my questions are: > Please send the output of -log_summary with all performance questions. > 1. For The Memory Occupancy , I Know the NonZero Pattern of my Matrix, so > I'm calling > CALL MatCreateMPIAIJ(PETSC_COMM_**WORLD,LocalNOfBEMFunctions,** > LocalTotalRank,**NOfTotalBEMFunctions,**GlobalTotalRank,& > PETSC_NULL_INTEGER,NNZ_** > diagonal(:),& > PETSC_NULL_INTEGER,NNZ_** > outofdiagonal(:),UFar,ierr) > So I have a doubt if when I'm doing this Should I specify just one > NNZ_diagonal(:) or NNZ_outofdiagonal(:) ?, not both of them as I'm doing. 
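(For reference, both count arrays are meant to be supplied together: the "diagonal" counts cover the columns owned by the calling process and the "off-diagonal" counts cover the rest, and the scalar d_nz/o_nz arguments are ignored whenever the corresponding array is given. A rough C sketch of the same call, with placeholder names that the application fills beforehand:)

#include <petscmat.h>

/* Sketch only: local_rows/local_cols and the two count arrays are assumed to
   be computed by the application before this call. */
PetscErrorCode create_preallocated_matrix(PetscInt local_rows, PetscInt local_cols,
                                          PetscInt d_nnz[], PetscInt o_nnz[], Mat *A)
{
  PetscErrorCode ierr;
  /* d_nnz[i]: nonzeros of local row i in columns owned by this process,
     o_nnz[i]: nonzeros of local row i in columns owned by other processes. */
  ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD, local_rows, local_cols,
                         PETSC_DETERMINE, PETSC_DETERMINE,
                         0, d_nnz, 0, o_nnz, A);CHKERRQ(ierr);
  return 0;
}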
> Use MatSetOption(MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE) to make sure you are not causing mallocs. > 2. For the time efficiency, When I'm calling, MatSetValues() The Global > Index is out of the range set in MatCreateMPIAIJ, I mean for example if the > range is between 1 and 10 in a Matrix of 100 The Global index are let's say > 1 20 25 2 5 60 85 90 100 3 so there are some elements out of the range, > moreover I ran with -info option and I got this: > [0] MatAssemblyBegin_MPIAIJ(): Stash has 84088 entries, uses 3 mallocs. > And in the manual is written this actually means I'm generating entries in > the wrong process,not because of the mollocs but because of the entries, Do > I'm right ? > You are sending a lot of values. I would reorganize my code to compute the values on the owner. Matt > So is this indeed not efficient?. If not, Can you suggest me a possible > way to improve it? , Or there are some special way for indexing the > elements in order to put them in order to solve the system? > > Thanks! > > Manuel. > > > > I realized with -info option in run time > > > > > > Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student > Antenna and EMC Lab (LACE) > Istituto Superiore Mario Boella (ISMB) > Politecnico di Torino > Via Pier Carlo Boggio 61, Torino 10138, Italy > Email: manuel.perezcerquera at polito.it > Phone: +39 0112276704 > Fax: +39 011 2276 299 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Nov 8 09:18:02 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Nov 2011 15:18:02 +0000 Subject: [petsc-users] a little bug In-Reply-To: <4EB9353F.7090009@tuhh.de> References: <4EB9353F.7090009@tuhh.de> Message-ID: On Tue, Nov 8, 2011 at 1:57 PM, Ping Rong wrote: > Hello guys, > > I would like to report a possible bug. In file "\src\ksp\pc\impls\**supportgraph\lowstretch.cpp" > at line 233, 439, 812 and 958 the marco SETERRQ still uses the old > interface, so if petsc is compiled with boost and complex, this will throw > an error. Fixed and pushed to 3.2. Will go out with the next patch. Matt > > -- > Ping Rong, M.Sc. > Hamburg University of Technology > Institut of modelling and computation > Denickestra?e 17 (Room 3031) > 21073 Hamburg > > Tel.: ++49 - (0)40 42878 2749 > Fax: ++49 - (0)40 42878 43533 > Email: ping.rong at tuhh.de > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-frederic at thebault-net.com Tue Nov 8 10:09:00 2011 From: jean-frederic at thebault-net.com (jean-frederic thebault) Date: Tue, 8 Nov 2011 17:09:00 +0100 Subject: [petsc-users] Need help... Message-ID: Hi, First of all, thank you for this library. View years ago, we had mixed our (f77) finite volume code with petsc, and obtained some very interested results (faster calculations, multi-processor issues, ..), with a 4 linux-PC cluster and a myrinet switch, and petsc-2.1.3 Regarding the new PC architecture (multi-threading), the same mixed code apparently is calculating slower each time we increase the number of processors used (processor or core, I'm not sure to use the right word). 
We thought that time that we should upgrade our petsc library (with petsc-3.1-p8) to have benefit of the multi-threading architecture. So do we, changing a little bit some stuff (merrely "include" names). We compiled it with mpif77. The fortran-samples of petsc are working just fine. But our code doesn't work. We have tried a lot of different options and tried for view weeks to figure out what is happening, nothing. I really would like some help in that matter because I don't see where is the problem. I'm wondering if you could, reading the out.log, tell me where I should investigate the problem ? (kind of ? out of array definition or other ? ) I really don't see and if you could please help me, I would really appreciate it !! I put the out.log in this email Many Thanks. Best Regards. John -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: out.log Type: application/octet-stream Size: 8583 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Tue Nov 8 10:15:31 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Nov 2011 10:15:31 -0600 Subject: [petsc-users] Need help... In-Reply-To: References: Message-ID: On Tue, Nov 8, 2011 at 10:09, jean-frederic thebault < jean-frederic at thebault-net.com> wrote: > First of all, thank you for this library. > View years ago, we had mixed our (f77) finite volume code with petsc, and > obtained some very interested results (faster calculations, multi-processor > issues, ..), with a 4 linux-PC cluster and a myrinet switch, and petsc-2.1.3 > Regarding the new PC architecture (multi-threading), the same mixed code > apparently is calculating slower each time we increase the number of > processors used (processor or core, I'm not sure to use the right word). We > thought that time that we should upgrade our petsc library (with > petsc-3.1-p8) > Please use petsc-3.2 > to have benefit of the multi-threading architecture. So do we, changing a > little bit some stuff (merrely "include" names). We compiled it with > mpif77. The fortran-samples of petsc are working just fine. But our code > doesn't work. We have tried a lot of different options and tried for view > weeks to figure out what is happening, nothing. > The calling sequence for MatSetOption() has changed. You are likely calling it incorrectly. The compiler tells you about these things in C. Fortran 77 type checking is nonexistant so the compiler doesn't check these things. If you use F90 or later, you can turn on interface definitions to get some rudimentary type checking. Type checking is much better in C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Nov 8 10:45:22 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Nov 2011 16:45:22 +0000 Subject: [petsc-users] Need help... In-Reply-To: References: Message-ID: On Tue, Nov 8, 2011 at 4:15 PM, Jed Brown wrote: > On Tue, Nov 8, 2011 at 10:09, jean-frederic thebault < > jean-frederic at thebault-net.com> wrote: > >> First of all, thank you for this library. 
>> View years ago, we had mixed our (f77) finite volume code with petsc, and >> obtained some very interested results (faster calculations, multi-processor >> issues, ..), with a 4 linux-PC cluster and a myrinet switch, and petsc-2.1.3 >> Regarding the new PC architecture (multi-threading), the same mixed code >> apparently is calculating slower each time we increase the number of >> processors used (processor or core, I'm not sure to use the right word). We >> thought that time that we should upgrade our petsc library (with >> petsc-3.1-p8) >> > > Please use petsc-3.2 > > >> to have benefit of the multi-threading architecture. So do we, changing a >> little bit some stuff (merrely "include" names). We compiled it with >> mpif77. The fortran-samples of petsc are working just fine. But our code >> doesn't work. We have tried a lot of different options and tried for view >> weeks to figure out what is happening, nothing. >> > There have been 11 releases of the 9 years since 2.1.3. There have been more interface changes than MatSetOption(). All of them are catalogued here: http://www.mcs.anl.gov/petsc/petsc-as/documentation/changes/index.html. "Doesn't work" is an inadequate description of your problem. What happen precisely? Matt > The calling sequence for MatSetOption() has changed. You are likely > calling it incorrectly. > > The compiler tells you about these things in C. Fortran 77 type checking > is nonexistant so the compiler doesn't check these things. If you use F90 > or later, you can turn on interface definitions to get some rudimentary > type checking. Type checking is much better in C. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-frederic at thebault-net.com Tue Nov 8 11:31:59 2011 From: jean-frederic at thebault-net.com (jean-frederic thebault) Date: Tue, 8 Nov 2011 18:31:59 +0100 Subject: [petsc-users] Need help... In-Reply-To: References: Message-ID: Hi Matt, Hi Jed, It seems that I didn't paid attention of some changes in realeases. I just changed the option of my calling of MatSetOption, and now the error is later on. then, for now, that means I will have to check all of the calling of petsc-subroutines and check-out the change between all releases. Sounds ok to me (means it will "work"). Many thanks and Best Regards John PS: when I was using the 2.1.3 version, I had a user-member-name, but I can't remember it... I guess I will subscribe again. 2011/11/8 Matthew Knepley > On Tue, Nov 8, 2011 at 4:15 PM, Jed Brown wrote: > >> On Tue, Nov 8, 2011 at 10:09, jean-frederic thebault < >> jean-frederic at thebault-net.com> wrote: >> >>> First of all, thank you for this library. >>> View years ago, we had mixed our (f77) finite volume code with petsc, >>> and obtained some very interested results (faster calculations, >>> multi-processor issues, ..), with a 4 linux-PC cluster and a myrinet >>> switch, and petsc-2.1.3 >>> Regarding the new PC architecture (multi-threading), the same mixed code >>> apparently is calculating slower each time we increase the number of >>> processors used (processor or core, I'm not sure to use the right word). We >>> thought that time that we should upgrade our petsc library (with >>> petsc-3.1-p8) >>> >> >> Please use petsc-3.2 >> >> >>> to have benefit of the multi-threading architecture. 
So do we, changing >>> a little bit some stuff (merrely "include" names). We compiled it with >>> mpif77. The fortran-samples of petsc are working just fine. But our code >>> doesn't work. We have tried a lot of different options and tried for view >>> weeks to figure out what is happening, nothing. >>> >> > There have been 11 releases of the 9 years since 2.1.3. There have been > more interface changes than MatSetOption(). All of them > are catalogued here: > http://www.mcs.anl.gov/petsc/petsc-as/documentation/changes/index.html. > "Doesn't work" is an inadequate > description of your problem. What happen precisely? > > Matt > > >> The calling sequence for MatSetOption() has changed. You are likely >> calling it incorrectly. >> >> The compiler tells you about these things in C. Fortran 77 type checking >> is nonexistant so the compiler doesn't check these things. If you use F90 >> or later, you can turn on interface definitions to get some rudimentary >> type checking. Type checking is much better in C. >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 8 16:10:14 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 8 Nov 2011 23:10:14 +0100 Subject: [petsc-users] compiling 3.2 on Windows In-Reply-To: References: Message-ID: I tried to force using the - seemingly properly compiled - blas/lapack libs with: ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release --with-x=0 --with-debugging=0 --with-cc='win32fe cl' --with-cxx='win32fe cl' --with-fc=0 --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ --with-parmetis=1 --download-parmetis=1 --with-blas-lib=/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib --with-lapack-lib=/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib I get the error: ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for detail s): ------------------------------------------------------------------------------- You set a value for --with-blas-lib= and --with-lapack-lib=, but ['/cy gdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib'] and ['/cygdri ve/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib'] cannot be used ******************************************************************************* Excerpt from configure.log: error LNK2019: unresolved external symbol dgetrs referenced in function main I will send complete log to petsc-maint again. Any more hints? Thanks, Dominik On Mon, Nov 7, 2011 at 3:57 PM, Dominik Szczerba wrote: > I am configuring with: > > ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack > --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > --with-parmetis=1 --download-parmetis=1 > > and getting an error: > > ******************************************************************************* > ? ? ? ? UNABLE to CONFIGURE with GIVEN OPTIONS ? 
?(see configure.log > for details): > ------------------------------------------------------------------------------- > --download-f2cblaslapack libraries cannot be used > ******************************************************************************* > ?File "./config/configure.py", line 283, in petsc_configure > ? ?framework.configure(out = sys.stdout) > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", > line 925, in configure > ? ?child.configure() > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > line 538, in configure > ? ?self.executeTest(self.configureLibrary) > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", > line 115, in executeTest > ? ?ret = apply(test, args,kargs) > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > line 444, in configureLibrary > ? ?for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) > in self.generateGuesses(): > ?File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > line 162, in generateGuesses > ? ?raise RuntimeError('--download-f2cblaslapack libraries cannot be used') > > I will send the full log to petsc-maint in a second. How do I go from here? > > Thanks for any hints, > Dominik > From knepley at gmail.com Tue Nov 8 16:16:49 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Nov 2011 22:16:49 +0000 Subject: [petsc-users] compiling 3.2 on Windows In-Reply-To: References: Message-ID: On Tue, Nov 8, 2011 at 10:10 PM, Dominik Szczerba wrote: > I tried to force using the - seemingly properly compiled - blas/lapack > libs with: > > ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > --with-cxx='win32fe cl' --with-fc=0 > --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > --with-parmetis=1 --download-parmetis=1 > > --with-blas-lib=/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib > > --with-lapack-lib=/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib > > I get the error: > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > detail > s): > > ------------------------------------------------------------------------------- > You set a value for --with-blas-lib= and --with-lapack-lib=, but > ['/cy > gdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib'] and > ['/cygdri > ve/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib'] cannot be > used > > ******************************************************************************* > > Excerpt from configure.log: > > error LNK2019: unresolved external symbol dgetrs referenced in function > main > > I will send complete log to petsc-maint again. > Send that and 'nm /cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib | grep dgetrs' If you look in configure.log, it shows you the stub source we use to check the link and the compile line. You can easily test yourself. Matt Any more hints? 
> > Thanks, > Dominik > > On Mon, Nov 7, 2011 at 3:57 PM, Dominik Szczerba > wrote: > > I am configuring with: > > > > ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > > --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > > --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack > > --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > > --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > > --with-parmetis=1 --download-parmetis=1 > > > > and getting an error: > > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > > for details): > > > ------------------------------------------------------------------------------- > > --download-f2cblaslapack libraries cannot be used > > > ******************************************************************************* > > File "./config/configure.py", line 283, in petsc_configure > > framework.configure(out = sys.stdout) > > File > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", > > line 925, in configure > > child.configure() > > File > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > > line 538, in configure > > self.executeTest(self.configureLibrary) > > File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", > > line 115, in executeTest > > ret = apply(test, args,kargs) > > File > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > > line 444, in configureLibrary > > for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) > > in self.generateGuesses(): > > File > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > > line 162, in generateGuesses > > raise RuntimeError('--download-f2cblaslapack libraries cannot be > used') > > > > I will send the full log to petsc-maint in a second. How do I go from > here? > > > > Thanks for any hints, > > Dominik > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Nov 8 20:55:55 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 8 Nov 2011 20:55:55 -0600 (CST) Subject: [petsc-users] compiling 3.2 on Windows In-Reply-To: References: Message-ID: What do you have for: cd /cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/ dumpbin /SYMBOLS libf2clapack.lib |grep dgetrs For some reason configure is able to find ddot in libf2cblas.lib - but not dgetrs in libf2clapack.lib [when invoked this way with --with-blas-lib etc..]. And I don't understand why.. Satish ------------ balay at bucharest ~/petsc-3.2-p5/arch-mswin64/lib $ dumpbin /symbols libf2clapack.lib |grep dgetrs 021 00000000 SECT4 notype () External | dgetrs_ 024 00000000 SECT5 notype Static | $pdata$dgetrs_ 027 00000000 SECT6 notype Static | $unwind$dgetrs_ 02D 00000000 SECT7 notype Static | ?notran@?1??dgetrs_@@9 at 9 (`dgetrs_'::`2'::notran) 02F 00000000 UNDEF notype () External | dgetrs_ 013 00000000 UNDEF notype () External | dgetrs_ 021 00000000 UNDEF notype () External | dgetrs_ 021 00000000 UNDEF notype () External | dgetrs_ balay at bucharest ~/petsc-3.2-p5/arch-mswin64/lib $ <<<<<<<<>>>>>>>>>. 
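The check that fails in the log excerpt below amounts to compiling and linking a tiny stub against the library; reproducing it by hand looks roughly like this (an approximation of the generated conftest, not the exact source configure writes):

/* Declare the symbol and reference it, then try to link against
   libf2clapack.lib. If the link stops with "unresolved external symbol
   dgetrs_", configure rejects the library, which is what the log shows. */
char dgetrs_();

int main(void)
{
  dgetrs_();
  return 0;
}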
Checking for function ddot_ in library ['/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib'] [] Defined "HAVE_LIBF2CBLAS" to "1" Popping language C Checking for function dgetrs_ in library ['/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib'] ['/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib'] Pushing language C sh: conftest.obj : error LNK2019: unresolved external symbol dgetrs_ referenced in function main C:\cygwin\tmp\PETSC-~3\CONFIG~1.LIB\conftest.exe : fatal error LNK1120: 1 unresolved externals <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< On Tue, 8 Nov 2011, Matthew Knepley wrote: > On Tue, Nov 8, 2011 at 10:10 PM, Dominik Szczerba wrote: > > > I tried to force using the - seemingly properly compiled - blas/lapack > > libs with: > > > > ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > > --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > > --with-cxx='win32fe cl' --with-fc=0 > > --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > > --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > > --with-parmetis=1 --download-parmetis=1 > > > > --with-blas-lib=/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib > > > > --with-lapack-lib=/cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib > > > > I get the error: > > > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > detail > > s): > > > > ------------------------------------------------------------------------------- > > You set a value for --with-blas-lib= and --with-lapack-lib=, but > > ['/cy > > gdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2cblas.lib'] and > > ['/cygdri > > ve/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib'] cannot be > > used > > > > ******************************************************************************* > > > > Excerpt from configure.log: > > > > error LNK2019: unresolved external symbol dgetrs referenced in function > > main > > > > I will send complete log to petsc-maint again. > > > > Send that and > 'nm /cygdrive/c/pack/petsc-3.2-p5/win64-msvc-release/lib/libf2clapack.lib | > grep dgetrs' > > If you look in configure.log, it shows you the stub source we use to check > the link and the > compile line. You can easily test yourself. > > Matt > > Any more hints? 
> > > > Thanks, > > Dominik > > > > On Mon, Nov 7, 2011 at 3:57 PM, Dominik Szczerba > > wrote: > > > I am configuring with: > > > > > > ./config/configure.py PETSC_DIR=$PWD PETSC_ARCH=win64-msvc-release > > > --with-x=0 --with-debugging=0 --with-cc='win32fe cl' > > > --with-cxx='win32fe cl' --with-fc=0 --download-f2cblaslapack > > > --with-mpi-dir=/cygdrive/c/mpich2-1.3.2p1-win-x86-64 > > > --with-hypre-dir=/cygdrive/c/pack/hypre-2.7.0b/src/hypre/ > > > --with-parmetis=1 --download-parmetis=1 > > > > > > and getting an error: > > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > > > for details): > > > > > ------------------------------------------------------------------------------- > > > --download-f2cblaslapack libraries cannot be used > > > > > ******************************************************************************* > > > File "./config/configure.py", line 283, in petsc_configure > > > framework.configure(out = sys.stdout) > > > File > > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/framework.py", > > > line 925, in configure > > > child.configure() > > > File > > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > > > line 538, in configure > > > self.executeTest(self.configureLibrary) > > > File "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/base.py", > > > line 115, in executeTest > > > ret = apply(test, args,kargs) > > > File > > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > > > line 444, in configureLibrary > > > for (name, blasLibrary, lapackLibrary, self.useCompatibilityLibs) > > > in self.generateGuesses(): > > > File > > "/cygdrive/c/pack/petsc-3.2-p5/config/BuildSystem/config/packages/BlasLapack.py", > > > line 162, in generateGuesses > > > raise RuntimeError('--download-f2cblaslapack libraries cannot be > > used') > > > > > > I will send the full log to petsc-maint in a second. How do I go from > > here? > > > > > > Thanks for any hints, > > > Dominik > > > > > > > > > From behzad.baghapour at gmail.com Wed Nov 9 01:43:11 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 9 Nov 2011 11:13:11 +0330 Subject: [petsc-users] How to compile with DMDA Message-ID: Dear all, It may be a repeated question. When I want to run an example with DMDA, I received the error: "could not find pestcdmda.h" I configured Petsc with --mpi-download=1, make a Petsc example with DMDA, and run with mpiexec -n .... What should I do more? Thanks, B.B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Nov 9 06:08:03 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 9 Nov 2011 13:08:03 +0100 Subject: [petsc-users] (no subject) Message-ID: I am still trying to push the Windows build. I am using the separately compiled blas/lapack to see now a configure error: ------------------------------------------------------------------------------- External package hypre does not work on Microsoft Windows ******************************************************************************* I was able to compile and use hypre before with petsc 3.1 on Windows natively (without cygwin). Was not easy but certainly possible. Is it possible to somehow deactivate the check and use my hypre build? 
Thanks for any hints, Dominik From balay at mcs.anl.gov Wed Nov 9 06:19:27 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 9 Nov 2011 06:19:27 -0600 (CST) Subject: [petsc-users] How to compile with DMDA In-Reply-To: References: Message-ID: On Wed, 9 Nov 2011, behzad baghapour wrote: > Dear all, > > It may be a repeated question. When I want to run an example with DMDA, I > received the error: > > "could not find pestcdmda.h" ^^ you have a typo here.. > > I configured Petsc with --mpi-download=1, make a Petsc example with DMDA, ^^^^^^^^^^^ you mean --download-mpich? Satish > and run with mpiexec -n .... > > What should I do more? > > Thanks, > B.B. > From balay at mcs.anl.gov Wed Nov 9 06:24:25 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 9 Nov 2011 06:24:25 -0600 (CST) Subject: [petsc-users] (no subject) In-Reply-To: References: Message-ID: On Wed, 9 Nov 2011, Dominik Szczerba wrote: > I am still trying to push the Windows build. I am using the separately > compiled blas/lapack to see now a configure error: > > ------------------------------------------------------------------------------- > External package hypre does not work on Microsoft Windows > ******************************************************************************* > > I was able to compile and use hypre before with petsc 3.1 on Windows > natively (without cygwin). Was not easy but certainly possible. Is it > possible to somehow deactivate the check and use my hypre build? edit config/PETSc/packages/hypre.py and add [in __init__()]: self.worksonWindows = 1 Satish > > Thanks for any hints, > Dominik > From paeanball at gmail.com Wed Nov 9 06:51:18 2011 From: paeanball at gmail.com (Bao Kai) Date: Wed, 9 Nov 2011 15:51:18 +0300 Subject: [petsc-users] How to use other matrix directly in petsc? Message-ID: Dear all, I have been writing a serial FEM code with a direct linear solver umfpack, which is pretty easy to use. The problem is the memory required turned to be very big and I can not afford it. So I want to turn to PETSC to use some iterative solver. I have written the the whole matrix assembly process with coordinates list format. The matrix can be converted to CSC format to use umfpack. When I tried to use MatSetValues to generate the Matrix for petsc, it turned out to be really slow. I am wondering if there is some efficient way to convert the generated matrix ( with COO, or CSC, or even CSR ) properly to make it usable for PETSC. Since the code at the moment is serial, the methods working for serial petsc will be OK. And I do many matrix manipulation( some may not easy with petsc) during the matrix assembly, so I do not want to rewrite the whole assembling process with petsc at the moment. I just want to convert the matrix generated to the format the petsc can use. It can save much time. Thank you very much. Best Regards, Kai -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 9 07:50:50 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Nov 2011 13:50:50 +0000 Subject: [petsc-users] How to use other matrix directly in petsc? In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 12:51 PM, Bao Kai wrote: > Dear all, > > I have been writing a serial FEM code with a direct linear solver umfpack, > which is pretty easy to use. The problem is the memory required turned to > be very big and I can not afford it. > > So I want to turn to PETSC to use some iterative solver. 
> > I have written the the whole matrix assembly process with coordinates list > format. The matrix can be converted to CSC format to use umfpack. > > When I tried to use MatSetValues to generate the Matrix for petsc, it > turned out to be really slow. > > I am wondering if there is some efficient way to convert the generated > matrix ( with COO, or CSC, or even CSR ) properly to make it usable for > PETSC. > > Since the code at the moment is serial, the methods working for serial > petsc will be OK. > > And I do many matrix manipulation( some may not easy with petsc) during > the matrix assembly, so I do not want to rewrite the whole assembling > process with petsc at the moment. > > I just want to convert the matrix generated to the format the petsc can > use. It can save much time. > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#efficient-assembly Matt > Thank you very much. > > > Best Regards, > Kai > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Wed Nov 9 07:51:35 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Wed, 09 Nov 2011 15:51:35 +0200 Subject: [petsc-users] problem with -ts_type gl Message-ID: <4EBA8567.3080609@lycos.com> Dear all, I run implicitly my code for the boundary layer over a flat plate, with: mpiexec -n 1 valgrind ./hoac blasius -snes_mf_operator -llf_flux -n_out 10 -end_time 50 -implicit -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm -gl -ksp_type fgmres -sub_pc_factor_levels 2 -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view -ksp_pc_side right -sub_pc_factor_levels 4 -ksp_gmres_restart 500 -dt 1.0e-3 -snes_ksp_ew -ts_type gl and I got: Approximation order = 0 # DOF = 9600 # nodes in mesh = 1281 # elements in mesh = 1200 Navier-Stokes solution Using LLF flux Linear solve converged due to CONVERGED_RTOL iterations 1 Timestep 0: dt = 0.001, T = 0, Res[rho] = 2.06015e-10, Res[rhou] = 23.9721, Res[rhov] = 0.00322747, Res[E] = 0.00680121, CFL = 199.999 0 SNES Function norm 1.660837592895e+03 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 4.452765833604e+01 Linear solve converged due to CONVERGED_RTOL iterations 2 2 SNES Function norm 2.226427753100e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 3 SNES Function norm 1.378423579772e-02 Linear solve converged due to CONVERGED_RTOL iterations 5 4 SNES Function norm 2.392525805814e-06 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE 0 SNES Function norm 2.697720824971e+03 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 6.544960638431e+01 Linear solve converged due to CONVERGED_RTOL iterations 2 2 SNES Function norm 3.215472486217e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 3 SNES Function norm 1.962042780514e-02 Linear solve converged due to CONVERGED_RTOL iterations 5 4 SNES Function norm 3.666057800237e-06 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE ==2767== Jump to the invalid address stated on the next line ==2767== at 0x0: ??? 
==2767== by 0x585DB5E: TSGLChooseNextScheme (gl.c:795) ==2767== by 0x585F12A: TSSolve_GL (gl.c:948) ==2767== by 0x5881DD3: TSSolve (ts.c:1848) ==2767== by 0x4272CC: implicit_time (implicit_time.c:77) ==2767== by 0x4267B3: main (hoac.c:1175) ==2767== Address 0x0 is not stack'd, malloc'd or (recently) free'd ==2767== [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] TSGLAdaptChoose line 232 /home/kontzialis/petsc-3.2-p5/src/ts/impls/implicit/gl/gladapt.c [0]PETSC ERROR: [0] TSGLChooseNextScheme line 774 /home/kontzialis/petsc-3.2-p5/src/ts/impls/implicit/gl/gl.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Wed Nov 9 13:11:45 2011 [0]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [0]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 ==2767== ==2767== HEAP SUMMARY: ==2767== in use at exit: 10,732,979 bytes in 19,630 blocks ==2767== total heap usage: 53,585 allocs, 33,955 frees, 2,757,452,468 bytes allocated ==2767== ==2767== LEAK SUMMARY: ==2767== definitely lost: 1,114 bytes in 24 blocks ==2767== indirectly lost: 24 bytes in 3 blocks ==2767== possibly lost: 0 bytes in 0 blocks ==2767== still reachable: 10,731,841 bytes in 19,603 blocks ==2767== suppressed: 0 bytes in 0 blocks ==2767== Rerun with --leak-check=full to see details of leaked memory What am I doing wrong? Thank you, Kostas -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Nov 9 07:57:05 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 9 Nov 2011 14:57:05 +0100 Subject: [petsc-users] problem with -ts_type gl In-Reply-To: <4EBA8567.3080609@lycos.com> References: <4EBA8567.3080609@lycos.com> Message-ID: please run in a debugger... On Wed, Nov 9, 2011 at 2:51 PM, Konstantinos Kontzialis wrote: > Dear all, > > I run implicitly my code for the boundary layer over a flat plate, with: > > mpiexec -n 1 valgrind ./hoac blasius -snes_mf_operator -llf_flux -n_out 10 > -end_time 50 -implicit -pc_type bjacobi -sub_pc_type ilu > -sub_pc_factor_mat_ordering_type rcm -gl -ksp_type fgmres > -sub_pc_factor_levels 2 -snes_monitor -snes_converged_reason > -ksp_converged_reason -ts_view -ksp_pc_side right -sub_pc_factor_levels 4 > -ksp_gmres_restart 500 -dt 1.0e-3 -snes_ksp_ew -ts_type gl > > and I got: > > > > Approximation order = 0 > # DOF = 9600 > # nodes in mesh = 1281 > # elements in mesh = 1200 > Navier-Stokes solution > Using LLF flux > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > Timestep?? 0: dt = 0.001, T = 0, Res[rho] = 2.06015e-10, Res[rhou] = > 23.9721, Res[rhov] = 0.00322747, Res[E] = 0.00680121, CFL = 199.999 > ??? 0 SNES Function norm 1.660837592895e+03 > ??? Linear solve converged due to CONVERGED_RTOL iterations 1 > ??? 1 SNES Function norm 4.452765833604e+01 > ??? Linear solve converged due to CONVERGED_RTOL iterations 2 > ??? 2 SNES Function norm 2.226427753100e+00 > ??? Linear solve converged due to CONVERGED_RTOL iterations 3 > ??? 3 SNES Function norm 1.378423579772e-02 > ??? 
Linear solve converged due to CONVERGED_RTOL iterations 5 > ??? 4 SNES Function norm 2.392525805814e-06 > ? Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE > ??? 0 SNES Function norm 2.697720824971e+03 > ??? Linear solve converged due to CONVERGED_RTOL iterations 1 > ??? 1 SNES Function norm 6.544960638431e+01 > ??? Linear solve converged due to CONVERGED_RTOL iterations 2 > ??? 2 SNES Function norm 3.215472486217e+00 > ??? Linear solve converged due to CONVERGED_RTOL iterations 3 > ??? 3 SNES Function norm 1.962042780514e-02 > ??? Linear solve converged due to CONVERGED_RTOL iterations 5 > ??? 4 SNES Function norm 3.666057800237e-06 > ? Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE > ==2767== Jump to the invalid address stated on the next line > ==2767==??? at 0x0: ??? > ==2767==??? by 0x585DB5E: TSGLChooseNextScheme (gl.c:795) > ==2767==??? by 0x585F12A: TSSolve_GL (gl.c:948) > ==2767==??? by 0x5881DD3: TSSolve (ts.c:1848) > ==2767==??? by 0x4272CC: implicit_time (implicit_time.c:77) > ==2767==??? by 0x4267B3: main (hoac.c:1175) > ==2767==? Address 0x0 is not stack'd, malloc'd or (recently) free'd > ==2767== > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find > memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: ---------------------? Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR:?????? INSTEAD the line number of the start of the function > [0]PETSC ERROR:?????? is given. > [0]PETSC ERROR: [0] TSGLAdaptChoose line 232 > /home/kontzialis/petsc-3.2-p5/src/ts/impls/implicit/gl/gladapt.c > [0]PETSC ERROR: [0] TSGLChooseNextScheme line 774 > /home/kontzialis/petsc-3.2-p5/src/ts/impls/implicit/gl/gl.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 > CDT 2011 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Wed Nov > 9 13:11:45 2011 > [0]PETSC ERROR: Libraries linked from > /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Sat Nov? 
5 20:58:12 2011 > [0]PETSC ERROR: Configure options --with-debugging=1 > ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries > --with-shared-libraries --with-large-file-io=1 --with-precision=double > --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes > --with-plapack=1 --download-plapack=yes --with-scalapack=1 > --download-scalapack=yes --with-superlu=1 --download-superlu=yes > --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 > --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 > --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 > --download-parmetis=1 --with-hypre=1 --download-hypre=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown > file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > ==2767== > ==2767== HEAP SUMMARY: > ==2767==???? in use at exit: 10,732,979 bytes in 19,630 blocks > ==2767==?? total heap usage: 53,585 allocs, 33,955 frees, 2,757,452,468 > bytes allocated > ==2767== > ==2767== LEAK SUMMARY: > ==2767==??? definitely lost: 1,114 bytes in 24 blocks > ==2767==??? indirectly lost: 24 bytes in 3 blocks > ==2767==????? possibly lost: 0 bytes in 0 blocks > ==2767==??? still reachable: 10,731,841 bytes in 19,603 blocks > ==2767==???????? suppressed: 0 bytes in 0 blocks > ==2767== Rerun with --leak-check=full to see details of leaked memory > > What am I doing wrong? > > Thank you, > > Kostas > From behzad.baghapour at gmail.com Wed Nov 9 08:00:33 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 9 Nov 2011 17:30:33 +0330 Subject: [petsc-users] How to compile with DMDA In-Reply-To: References: Message-ID: Yes, I did mean it. But I received that error. On Wed, Nov 9, 2011 at 3:49 PM, Satish Balay wrote: > On Wed, 9 Nov 2011, behzad baghapour wrote: > > > Dear all, > > > > It may be a repeated question. When I want to run an example with DMDA, I > > received the error: > > > > "could not find pestcdmda.h" > > ^^ you have a typo here.. > > > > I configured Petsc with --mpi-download=1, make a Petsc example with DMDA, > > ^^^^^^^^^^^ you mean --download-mpich? > > Satish > > > and run with mpiexec -n .... > > > > What should I do more? > > > > Thanks, > > B.B. > > > > -- ================================== Behzad Baghapour Ph.D. Candidate, Mechecanical Engineering University of Tehran, Tehran, Iran https://sites.google.com/site/behzadbaghapour Fax: 0098-21-88020741 ================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 9 08:09:10 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Nov 2011 14:09:10 +0000 Subject: [petsc-users] How to compile with DMDA In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 2:00 PM, behzad baghapour wrote: > Yes, I did mean it. But I received that error. > 1) Do not type in the error. Cut & paste EXACTLY the error from the run 2) It sounds like you are trying to build without the PETSc makefiles. Since it is breaking, I recommend you use the makefiles instead. There is a section in the manual. Matt > On Wed, Nov 9, 2011 at 3:49 PM, Satish Balay wrote: > >> On Wed, 9 Nov 2011, behzad baghapour wrote: >> >> > Dear all, >> > >> > It may be a repeated question. 
When I want to run an example with DMDA, >> I >> > received the error: >> > >> > "could not find pestcdmda.h" >> >> ^^ you have a typo here.. >> > >> > I configured Petsc with --mpi-download=1, make a Petsc example with >> DMDA, >> >> ^^^^^^^^^^^ you mean --download-mpich? >> >> Satish >> >> > and run with mpiexec -n .... >> > >> > What should I do more? >> > >> > Thanks, >> > B.B. >> > >> >> > > > -- > ================================== > Behzad Baghapour > Ph.D. Candidate, Mechecanical Engineering > University of Tehran, Tehran, Iran > https://sites.google.com/site/behzadbaghapour > Fax: 0098-21-88020741 > ================================== > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Nov 9 08:32:17 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Nov 2011 08:32:17 -0600 Subject: [petsc-users] problem with -ts_type gl In-Reply-To: <4EBA8567.3080609@lycos.com> References: <4EBA8567.3080609@lycos.com> Message-ID: On Wed, Nov 9, 2011 at 07:51, Konstantinos Kontzialis wrote: > ==2767== Jump to the invalid address stated on the next line > ==2767== at 0x0: ??? > ==2767== by 0x585DB5E: TSGLChooseNextScheme (gl.c:795) > ==2767== by 0x585F12A: TSSolve_GL (gl.c:948) > ==2767== by 0x5881DD3: TSSolve (ts.c:1848) > ==2767== by 0x4272CC: implicit_time (implicit_time.c:77) > ==2767== by 0x4267B3: main (hoac.c:1175) > ==2767== Address 0x0 is not stack'd, malloc'd or (recently) free'd > I replied to your other message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed Nov 9 08:44:24 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 9 Nov 2011 08:44:24 -0600 Subject: [petsc-users] How to use other matrix directly in petsc? In-Reply-To: References: Message-ID: Bao: You may look at petsc-umfpack interface ~petsc/src/mat/impls/aij/seq/umfpack/umfpack.c It seems umfpack use csr format, same as petsc aij format. Hong > Dear all, > I have been writing a serial FEM code with a direct linear solver umfpack, > which is pretty easy to use. The problem is the memory required turned to be > very big and I can not afford it. > So I want to turn to PETSC to use ?some iterative solver. > I have written the the whole matrix assembly process with?coordinates list > format. The matrix can be converted to CSC format to use umfpack. > When I tried to use MatSetValues to generate the Matrix for petsc, it turned > out to be really slow. > I am wondering if there is some?efficient?way to convert the generated > matrix ( with COO, or CSC, or even CSR ) properly to make it usable for > PETSC. > Since the code at the moment is serial, the methods working for serial petsc > will be OK. > And I do many matrix manipulation( some may not easy with petsc) during the > matrix assembly, so I do not want to rewrite the whole assembling process > with petsc at the moment. > I just want to convert the matrix generated to the format the petsc can use. > It can save much time. > Thank you very much. > > Best Regards, > Kai > > > From behzad.baghapour at gmail.com Wed Nov 9 08:49:23 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 9 Nov 2011 18:19:23 +0330 Subject: [petsc-users] How to compile with DMDA In-Reply-To: References: Message-ID: OK. 
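For reference, the makefile skeleton described in the manual section mentioned earlier in this thread is roughly the following (a sketch against the petsc-3.2 layout, so the include lines may need adjusting; 'myprog' and 'myprog.o' are placeholders, use ${FLINKER} instead of ${CLINKER} for Fortran sources, and the command lines must start with a tab):

include ${PETSC_DIR}/conf/variables
include ${PETSC_DIR}/conf/rules

myprog: myprog.o chkopts
	-${CLINKER} -o myprog myprog.o ${PETSC_LIB}
	${RM} myprog.o

With PETSC_DIR and PETSC_ARCH set in the environment, this pulls in the compiler flags, include paths (so petscdmda.h is found) and libraries without anything hand-written.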
I saw that my make file has a mistake :-) But I have a question here. How I can find out the code runs in parallel as the code hasn't get "rank" and "size" or not defined in main program maybe ? Thanks, B.B. On Wed, Nov 9, 2011 at 5:39 PM, Matthew Knepley wrote: > On Wed, Nov 9, 2011 at 2:00 PM, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> Yes, I did mean it. But I received that error. >> > > 1) Do not type in the error. Cut & paste EXACTLY the error from the run > > 2) It sounds like you are trying to build without the PETSc makefiles. > Since it is breaking, > I recommend you use the makefiles instead. There is a section in the > manual. > > Matt > > >> On Wed, Nov 9, 2011 at 3:49 PM, Satish Balay wrote: >> >>> On Wed, 9 Nov 2011, behzad baghapour wrote: >>> >>> > Dear all, >>> > >>> > It may be a repeated question. When I want to run an example with >>> DMDA, I >>> > received the error: >>> > >>> > "could not find pestcdmda.h" >>> >>> ^^ you have a typo here.. >>> > >>> > I configured Petsc with --mpi-download=1, make a Petsc example with >>> DMDA, >>> >>> ^^^^^^^^^^^ you mean --download-mpich? >>> >>> Satish >>> >>> > and run with mpiexec -n .... >>> > >>> > What should I do more? >>> > >>> > Thanks, >>> > B.B. >>> > >>> >>> >> >> >> -- >> ================================== >> Behzad Baghapour >> Ph.D. Candidate, Mechecanical Engineering >> University of Tehran, Tehran, Iran >> https://sites.google.com/site/behzadbaghapour >> Fax: 0098-21-88020741 >> ================================== >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 9 08:50:17 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Nov 2011 14:50:17 +0000 Subject: [petsc-users] How to compile with DMDA In-Reply-To: References: Message-ID: On Wed, Nov 9, 2011 at 2:49 PM, behzad baghapour wrote: > OK. I saw that my make file has a mistake :-) > But I have a question here. How I can find out the code runs in parallel > as the code hasn't get "rank" and "size" or not defined in main program > maybe ? > -log_summary Matt > Thanks, B.B. > > > > > > On Wed, Nov 9, 2011 at 5:39 PM, Matthew Knepley wrote: > >> On Wed, Nov 9, 2011 at 2:00 PM, behzad baghapour < >> behzad.baghapour at gmail.com> wrote: >> >>> Yes, I did mean it. But I received that error. >>> >> >> 1) Do not type in the error. Cut & paste EXACTLY the error from the run >> >> 2) It sounds like you are trying to build without the PETSc makefiles. >> Since it is breaking, >> I recommend you use the makefiles instead. There is a section in the >> manual. >> >> Matt >> >> >>> On Wed, Nov 9, 2011 at 3:49 PM, Satish Balay wrote: >>> >>>> On Wed, 9 Nov 2011, behzad baghapour wrote: >>>> >>>> > Dear all, >>>> > >>>> > It may be a repeated question. When I want to run an example with >>>> DMDA, I >>>> > received the error: >>>> > >>>> > "could not find pestcdmda.h" >>>> >>>> ^^ you have a typo here.. >>>> > >>>> > I configured Petsc with --mpi-download=1, make a Petsc example with >>>> DMDA, >>>> >>>> ^^^^^^^^^^^ you mean --download-mpich? >>>> >>>> Satish >>>> >>>> > and run with mpiexec -n .... >>>> > >>>> > What should I do more? >>>> > >>>> > Thanks, >>>> > B.B. >>>> > >>>> >>>> >>> >>> >>> -- >>> ================================== >>> Behzad Baghapour >>> Ph.D. 
Candidate, Mechecanical Engineering >>> University of Tehran, Tehran, Iran >>> https://sites.google.com/site/behzadbaghapour >>> Fax: 0098-21-88020741 >>> ================================== >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xdliang at gmail.com Wed Nov 9 14:51:35 2011 From: xdliang at gmail.com (Xiangdong Liang) Date: Wed, 9 Nov 2011 15:51:35 -0500 Subject: [petsc-users] PETSc default values Message-ID: Hello everyone, I have a question about the default values set by PETSc. For example, KSPSetTolerances(ksp,1.e-5,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT); What's the atol and maxits set by PETSC_DEAULT? Is there a way to print them out? Thanks. Xiangdong From mccomic at mcs.anl.gov Wed Nov 9 14:56:19 2011 From: mccomic at mcs.anl.gov (Mike McCourt) Date: Wed, 9 Nov 2011 14:56:19 -0600 (CST) Subject: [petsc-users] PETSc default values In-Reply-To: Message-ID: <2089955940.14173.1320872179838.JavaMail.root@zimbra.anl.gov> Pass -ksp_view (or -snes_view for SNES problems) as a run time option. -Mike ----- Original Message ----- From: "Xiangdong Liang" To: "PETSc users list" Sent: Wednesday, November 9, 2011 2:51:35 PM Subject: [petsc-users] PETSc default values Hello everyone, I have a question about the default values set by PETSc. For example, KSPSetTolerances(ksp,1.e-5,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT); What's the atol and maxits set by PETSC_DEAULT? Is there a way to print them out? Thanks. Xiangdong From paeanball at gmail.com Thu Nov 10 02:00:08 2011 From: paeanball at gmail.com (Bao Kai) Date: Thu, 10 Nov 2011 11:00:08 +0300 Subject: [petsc-users] How to use other matrix directly in petsc? In-Reply-To: References: Message-ID: Dear Hong and Matthew, I looked at the interface file for umfpack, while I did not understand all the things inside very clearly. I just began to read petsc manual three days ago. I guess the interface file is used to call the functions in umfpack to solve the linear system. But the default matrix format in umfack should be csc, and I did get how the interface handle this problem. My matrix conversion problem is generally solved. The crucial point is that you need to count the number of the non-zero entries in each row and use that information to do the preallocation. I go forward to try some linear solvers now. The matrix generated with my problem is a little poor, resulting from a mixed finite element method, asymmetric and indefinite. And the inclusion of irregular boundary makes it worse. This is actually one reason I began with a direct solver. If you have any suggestions on solve this kinds of linear systems, please tell me. Thank you very much. Best Regard, Kai > Message: 3 > Date: Wed, 9 Nov 2011 08:44:24 -0600 > From: Hong Zhang > Subject: Re: [petsc-users] How to use other matrix directly in petsc? > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset=ISO-8859-1 > > Bao: > You may look at petsc-umfpack interface > ~petsc/src/mat/impls/aij/seq/umfpack/umfpack.c > > It seems umfpack use csr format, same as petsc aij format. 
> > Hong > > > Dear all, > > I have been writing a serial FEM code with a direct linear solver > umfpack, > > which is pretty easy to use. The problem is the memory required turned > to be > > very big and I can not afford it. > > So I want to turn to PETSC to use ?some iterative solver. > > I have written the the whole matrix assembly process with?coordinates > list > > format. The matrix can be converted to CSC format to use umfpack. > > When I tried to use MatSetValues to generate the Matrix for petsc, it > turned > > out to be really slow. > > I am wondering if there is some?efficient?way to convert the generated > > matrix ( with COO, or CSC, or even CSR ) properly to make it usable for > > PETSC. > > Since the code at the moment is serial, the methods working for serial > petsc > > will be OK. > > And I do many matrix manipulation( some may not easy with petsc) during > the > > matrix assembly, so I do not want to rewrite the whole assembling process > > with petsc at the moment. > > I just want to convert the matrix generated to the format the petsc can > use. > > It can save much time. > > Thank you very much. > > > > Best Regards, > > Kai > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manfred.gratt at uibk.ac.at Thu Nov 10 06:14:03 2011 From: manfred.gratt at uibk.ac.at (Manfred Gratt) Date: Thu, 10 Nov 2011 13:14:03 +0100 Subject: [petsc-users] Changed output of VecView in 3.2 Message-ID: <4EBBC00B.1020304@uibk.ac.at> Hello, I am using petsc since 3.1 and upgraded my program to 3.2. I have now the problem that the Vector output in ASCII has changed. From only a list of numbers to the name of process that had the data and then the numbers. Like this: Vector Object: 2 MPI processes type: mpi Process [0] 0 0 0 Is there a option to change it back to only display numbers again? I use this output only for a rough estimation if the solution is correct and I don't want to rewrite the tool to check the correctness. I use the viewer like this: PetscViewer viewer; PetscViewerASCIIOpen( PETSC_COMM_WORLD, solfname.c_str(), &viewer); VecView( uT, viewer); PetscViewerDestroy( &viewer); Thanks Manfred From bogdan at lmn.pub.ro Thu Nov 10 06:30:28 2011 From: bogdan at lmn.pub.ro (Bogdan Dita) Date: Thu, 10 Nov 2011 14:30:28 +0200 Subject: [petsc-users] Problem when switching from debug to optimized Message-ID: <4324655496228f9063a68529fdefc0e2.squirrel@wm.lmn.pub.ro> Hello, Until a few days ago I've only be using PETSc in debug mode and when I switch to the optimised version(--with-debugging=0) I got a strange result regarding the solve time, what I mean is that it was 10-15 % higher then in debug mode. I'm trying to solve a linear system in parallel with superlu_dist, and I've tested my program on a Beowulf cluster, so far only using a single node with 2 quad-core Intel processors. From what I know the "no debug" version should be faster and I know it should be faster because on my laptop(dual-core Intel) for the same program and even the same matrices the solve time for the optimised version is 2 times faster, but when I use the cluster the optimised version time is slower then the debug version. My quess is that this has something to do with MPI. Any thoughts? Best regards, Bogdan Dita From paeanball at gmail.com Thu Nov 10 07:48:35 2011 From: paeanball at gmail.com (Bao Kai) Date: Thu, 10 Nov 2011 16:48:35 +0300 Subject: [petsc-users] Any suggestion for this kinds of matrix? 
Message-ID: Dear all, I have been trying with PETSC to solve the linear system from mixed finite element method. The pattern of the matrix is as the following, but due to the irregular boundary involved, the matrix A is not strictly symmetric. A dt* C C^T 0 As a result of the matrix pattern, the diagonal entries of the bottom-right portion are all zero. I am just wondering if there are any suggestion of the type of the solver and preconditioner for this kinds of linear system? Thank you very much. When I tried to solve the system with PETSC, I got the following information. ( PCType PCASM, KSPType, KSPFGMRES ) [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Matrix is missing diagonal entry 288398! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 4, Sun Oct 23 12:23:18 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Libraries linked from /home/baok/software/petsc-3.2-p4/arch-linux2-c-debug-withhypre/lib [0]PETSC ERROR: Configure run at Thu Nov 10 11:49:03 2011 [0]PETSC ERROR: Configure options --download-hypre=yes [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ_ilu0() line 1636 in /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ() line 1740 in /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: MatILUFactorSymbolic() line 6092 in /home/baok/software/petsc-3.2-p4/src/mat/interface/matrix.c [0]PETSC ERROR: PCSetUp_ILU() line 216 in /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/factor/ilu/ilu.c [0]PETSC ERROR: PCSetUp() line 819 in /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 260 in /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCSetUpOnBlocks_ASM() line 339 in /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/asm/asm.c [0]PETSC ERROR: PCSetUpOnBlocks() line 852 in /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUpOnBlocks() line 154 in /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: KSPSolve() line 380 in /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: main() line 261 in src/ksp/ksp/examples/tutorials/ex78.c Best Regards, Kai -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Nov 10 07:59:42 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Nov 2011 07:59:42 -0600 Subject: [petsc-users] How to use other matrix directly in petsc? In-Reply-To: References: Message-ID: On Thu, Nov 10, 2011 at 02:00, Bao Kai wrote: > I guess the interface file is used to call the functions in umfpack to > solve the linear system. But the default matrix format in umfack should be > csc, and I did get how the interface handle this problem. > So we give the CSR data structure to Umfpack which means that Umfpack sees the transpose. Then we use the transpose-solve functionality. > I go forward to try some linear solvers now. 
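As a concrete starting point for the direct solve, the factorization package can be chosen at run time without changing the assembly code. A sketch, assuming PETSc was configured with --download-umfpack and the executable is called myprog (both names are placeholders):

./myprog -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package umfpack

or, hard-wired in the source, with ksp being the existing KSP object:

PC pc;
KSPSetType(ksp, KSPPREONLY);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);
PCFactorSetMatSolverPackage(pc, MATSOLVERUMFPACK);

Switching to an iterative solver later is then just a matter of changing these options.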
> > The matrix generated with my problem is a little poor, resulting from a > mixed finite element method, asymmetric and indefinite. And the inclusion > of irregular boundary makes it worse. This is actually one reason I began > with a direct solver. > > If you have any suggestions on solve this kinds of linear systems, please > tell me. > Start with a direct solver. Once you have things working, you can change the discretization of boundary conditions to be better conditioned and look at iterative solvers (which are somewhat technical for indefinite problems). -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Nov 10 08:10:14 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Nov 2011 14:10:14 +0000 Subject: [petsc-users] Any suggestion for this kinds of matrix? In-Reply-To: References: Message-ID: On Thu, Nov 10, 2011 at 1:48 PM, Bao Kai wrote: > Dear all, > > I have been trying with PETSC to solve the linear system from mixed finite > element method. > > The pattern of the matrix is as the following, but due to the irregular > boundary involved, the matrix A is not strictly symmetric. > > A dt* C > > C^T 0 > > As a result of the matrix pattern, the diagonal entries of the > bottom-right portion are all zero. > > I am just wondering if there are any suggestion of the type of the solver > and preconditioner for this kinds of linear system? Thank you very much. > > When I tried to solve the system with PETSC, I got the following > information. ( PCType PCASM, KSPType, KSPFGMRES ) > ILU is jsut not going to work for this type of matrix (a saddle point). I suggest reading about PCFIELDSPLIT. Matt > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: Matrix is missing diagonal entry 288398! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 4, Sun Oct 23 12:23:18 > CDT 2011 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Libraries linked from > /home/baok/software/petsc-3.2-p4/arch-linux2-c-debug-withhypre/lib > [0]PETSC ERROR: Configure run at Thu Nov 10 11:49:03 2011 > [0]PETSC ERROR: Configure options --download-hypre=yes > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ_ilu0() line 1636 in > /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ() line 1740 in > /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: MatILUFactorSymbolic() line 6092 in > /home/baok/software/petsc-3.2-p4/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_ILU() line 216 in > /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/factor/ilu/ilu.c > [0]PETSC ERROR: PCSetUp() line 819 in > /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 260 in > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUpOnBlocks_ASM() line 339 in > /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/asm/asm.c > [0]PETSC ERROR: PCSetUpOnBlocks() line 852 in > /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUpOnBlocks() line 154 in > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: KSPSolve() line 380 in > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: main() line 261 in src/ksp/ksp/examples/tutorials/ex78.c > > > Best Regards, > Kai > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Nov 10 08:13:28 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Nov 2011 14:13:28 +0000 Subject: [petsc-users] Problem when switching from debug to optimized In-Reply-To: <4324655496228f9063a68529fdefc0e2.squirrel@wm.lmn.pub.ro> References: <4324655496228f9063a68529fdefc0e2.squirrel@wm.lmn.pub.ro> Message-ID: On Thu, Nov 10, 2011 at 12:30 PM, Bogdan Dita wrote: > > Hello, > > Until a few days ago I've only be using PETSc in debug mode and when I > switch to the optimised version(--with-debugging=0) I got a strange > result regarding the solve time, what I mean is that it was 10-15 % > higher then in debug mode. > I'm trying to solve a linear system in parallel with superlu_dist, and > I've tested my program on a Beowulf cluster, so far only using a single > node with 2 quad-core Intel processors. > From what I know the "no debug" version should be faster and I know it > should be faster because on my laptop(dual-core Intel) for the same > program and even the same matrices the solve time for the optimised > version is 2 times faster, but when I use the cluster the optimised > version time is slower then the debug version. > My quess is that this has something to do with MPI. Any thoughts? > Performance questions are meaningless without the output of -log_summary for all cases. Matt > Best regards, > Bogdan Dita > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Nov 10 08:17:53 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Nov 2011 08:17:53 -0600 Subject: [petsc-users] Changed output of VecView in 3.2 In-Reply-To: <4EBBC00B.1020304@uibk.ac.at> References: <4EBBC00B.1020304@uibk.ac.at> Message-ID: On Thu, Nov 10, 2011 at 06:14, Manfred Gratt wrote: > I am using petsc since 3.1 and upgraded my program to 3.2. I have now the > problem that the Vector output in ASCII has changed. > From only a list of numbers to the name of process that had the data and > then the numbers. Like this: > > Vector Object: 2 MPI processes > type: mpi > Process [0] > 0 > 0 > 0 > > Is there a option to change it back to only display numbers again? > There isn't an option to do this. Of course it's easy to implement, but it's feature-creep. > I use this output only for a rough estimation if the solution is correct > and I don't want to rewrite the tool to check the correctness. > Is it that hard to update the tests, either by updating the "gold" output, or by passing the results through a filter that strips out those lines, e.g. perl -pe 's,^(Vector Object:\w* \d+ MPI processes| type: \w+|Process \[\d+\])\n,,' -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Nov 10 08:21:42 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Nov 2011 14:21:42 +0000 Subject: [petsc-users] Changed output of VecView in 3.2 In-Reply-To: <4EBBC00B.1020304@uibk.ac.at> References: <4EBBC00B.1020304@uibk.ac.at> Message-ID: On Thu, Nov 10, 2011 at 12:14 PM, Manfred Gratt wrote: > Hello, > > I am using petsc since 3.1 and upgraded my program to 3.2. I have now the > problem that the Vector output in ASCII has changed. > From only a list of numbers to the name of process that had the data and > then the numbers. Like this: > > Vector Object: 2 MPI processes > type: mpi > Process [0] > 0 > 0 > 0 > > Is there a option to change it back to only display numbers again? > You can use a different viewer format, like SYMMODU or PCICE. Matt > I use this output only for a rough estimation if the solution is correct > and I don't want to rewrite the tool to check the correctness. > I use the viewer like this: > > PetscViewer viewer; > PetscViewerASCIIOpen( PETSC_COMM_WORLD, solfname.c_str(), &viewer); > VecView( uT, viewer); > PetscViewerDestroy( &viewer); > > Thanks > Manfred > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From manfred.gratt at uibk.ac.at Thu Nov 10 08:29:26 2011 From: manfred.gratt at uibk.ac.at (Manfred Gratt) Date: Thu, 10 Nov 2011 15:29:26 +0100 Subject: [petsc-users] Changed output of VecView in 3.2 In-Reply-To: References: <4EBBC00B.1020304@uibk.ac.at> Message-ID: <4EBBDFC6.9070301@uibk.ac.at> Jed Brown wrote: > On Thu, Nov 10, 2011 at 06:14, Manfred Gratt > wrote: > > I am using petsc since 3.1 and upgraded my program to 3.2. I have > now the problem that the Vector output in ASCII has changed. > >From only a list of numbers to the name of process that had the > data and then the numbers. Like this: > > Vector Object: 2 MPI processes > type: mpi > Process [0] > 0 > 0 > 0 > > Is there a option to change it back to only display numbers again? 
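A minimal sketch of switching the ASCII viewer format in the code quoted in this thread (untested; the exact layout each format produces is best checked on a small vector first):

PetscViewer viewer;
PetscViewerASCIIOpen(PETSC_COMM_WORLD, solfname.c_str(), &viewer);
PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_SYMMODU); /* or PETSC_VIEWER_ASCII_PCICE */
VecView(uT, viewer);
PetscViewerDestroy(&viewer);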
> > > There isn't an option to do this. Of course it's easy to implement, > but it's feature-creep. > > > I use this output only for a rough estimation if the solution is > correct and I don't want to rewrite the tool to check the correctness. > > > Is it that hard to update the tests, either by updating the "gold" > output, or by passing the results through a filter that strips out > those lines, e.g. > > perl -pe 's,^(Vector Object:\w* \d+ MPI processes| type: \w+|Process > \[\d+\])\n,,' Thanks for quick reply. I think I will write a new output function. Manfred From ckontzialis at lycos.com Thu Nov 10 10:25:17 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Thu, 10 Nov 2011 18:25:17 +0200 Subject: [petsc-users] how to speed up convergence? Message-ID: <4EBBFAED.3040808@lycos.com> Dear all, I use the DG method for simulationg the flow over a cylinder at M = 0.2 and Re=250. I use and implicit scheme. I run my code as follows: mpiexec -n 8 ./hoac cylinder -snes_mf_operator -llf_flux -n_out 2 -end_time 0.4 -implicit -pc_type asm -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm -sub_pc_factor_reuse_ordering -sub_pc_factor_reuse_fill -gll -ksp_type fgmres -sub_pc_factor_levels 0 -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view -ksp_pc_side right -sub_pc_factor_nonzeros_along_diagonal -dt 1.0e-3 -ts_type arkimex -ksp_gmres_restart 100 -ksp_max_it 500 -snes_max_fail 100 -snes_max_linear_solve_fail 100 and I get: ********************************************************************** METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota Graph Information --------------------------------------------------- Name: mesh.graph, #Vertices: 1680, #Edges: 3280, #Parts: 8 Recursive Partitioning... ------------------------------------------- 8-way Edge-Cut: 177, Balance: 1.01 Timing Information -------------------------------------------------- I/O: 0.000 Partitioning: 0.000 (PMETIS time) Total: 0.000 ********************************************************************** Approximation order = 2 # DOF = 115200 # nodes in mesh = 1680 # elements in mesh = 1600 Navier-Stokes solution Using LLF flux Linear solve converged due to CONVERGED_RTOL iterations 1 Timestep 0: dt = 0.001, T = 0, Res[rho] = 0.966549, Res[rhou] = 6.11366, Res[rhov] = 0.507325, Res[E] = 2.44463, CFL = 0.942045 0 SNES Function norm 3.203604511352e+03 Linear solve did not converge due to DIVERGED_ITS iterations 500 1 SNES Function norm 3.440800722147e+02 Linear solve did not converge due to DIVERGED_ITS iterations 500 2 SNES Function norm 2.008355246473e+02 Linear solve did not converge due to DIVERGED_ITS iterations 500 3 SNES Function norm 1.177925999321e+02 as you may see the step size is quite small for this problem and I use and inexact solution for the linear part of the newton iterations. Furthermore, I compute numerically the jacobian of the matrix using coloring. Is there a tuning parameter I should set differently or use something else? Thank you, Kostas From knepley at gmail.com Thu Nov 10 10:27:46 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Nov 2011 16:27:46 +0000 Subject: [petsc-users] how to speed up convergence? In-Reply-To: <4EBBFAED.3040808@lycos.com> References: <4EBBFAED.3040808@lycos.com> Message-ID: On Thu, Nov 10, 2011 at 4:25 PM, Konstantinos Kontzialis < ckontzialis at lycos.com> wrote: > Dear all, > > I use the DG method for simulationg the flow over a cylinder at M = 0.2 > and Re=250. I use and implicit scheme. 
> > I run my code as follows: > > mpiexec -n 8 ./hoac cylinder -snes_mf_operator -llf_flux -n_out 2 > -end_time 0.4 -implicit -pc_type asm -sub_pc_type ilu > -sub_pc_factor_mat_ordering_**type rcm -sub_pc_factor_reuse_ordering > -sub_pc_factor_reuse_fill -gll -ksp_type fgmres -sub_pc_factor_levels 0 > -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view > -ksp_pc_side right -sub_pc_factor_nonzeros_along_**diagonal -dt 1.0e-3 > -ts_type arkimex -ksp_gmres_restart 100 -ksp_max_it 500 -snes_max_fail 100 > -snes_max_linear_solve_fail 100 > > and I get: > > ************************************************************************** > METIS 4.0.3 Copyright 1998, Regents of the University of Minnesota > > Graph Information ------------------------------**--------------------- > Name: mesh.graph, #Vertices: 1680, #Edges: 3280, #Parts: 8 > > Recursive Partitioning... ------------------------------**------------- > 8-way Edge-Cut: 177, Balance: 1.01 > > Timing Information ------------------------------**-------------------- > I/O: 0.000 > Partitioning: 0.000 (PMETIS time) > Total: 0.000 > ************************************************************************** > > > Approximation order = 2 > # DOF = 115200 > # nodes in mesh = 1680 > # elements in mesh = 1600 > Navier-Stokes solution > Using LLF flux > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > Timestep 0: dt = 0.001, T = 0, Res[rho] = 0.966549, Res[rhou] = 6.11366, > Res[rhov] = 0.507325, Res[E] = 2.44463, CFL = 0.942045 > 0 SNES Function norm 3.203604511352e+03 > Linear solve did not converge due to DIVERGED_ITS iterations 500 > 1 SNES Function norm 3.440800722147e+02 > Linear solve did not converge due to DIVERGED_ITS iterations 500 > 2 SNES Function norm 2.008355246473e+02 > Linear solve did not converge due to DIVERGED_ITS iterations 500 > 3 SNES Function norm 1.177925999321e+02 > > as you may see the step size is quite small for this problem and I use and > inexact solution for the linear part of the newton iterations. Furthermore, > I compute numerically the jacobian of the matrix using coloring. > > Is there a tuning parameter I should set differently or use something else? > ASM really stinks for incompressible flow. Consider using PCFIELDSPLIT. Matt > Thank you, > > Kostas > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Nov 10 10:43:14 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Nov 2011 10:43:14 -0600 Subject: [petsc-users] how to speed up convergence? 
In-Reply-To: <4EBBFAED.3040808@lycos.com> References: <4EBBFAED.3040808@lycos.com> Message-ID: On Thu, Nov 10, 2011 at 10:25, Konstantinos Kontzialis < ckontzialis at lycos.com> wrote: > mpiexec -n 8 ./hoac cylinder -snes_mf_operator -llf_flux -n_out 2 > -end_time 0.4 -implicit -pc_type asm -sub_pc_type ilu > -sub_pc_factor_mat_ordering_**type rcm -sub_pc_factor_reuse_ordering > -sub_pc_factor_reuse_fill -gll -ksp_type fgmres -sub_pc_factor_levels 0 > -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view > -ksp_pc_side right -sub_pc_factor_nonzeros_along_**diagonal -dt 1.0e-3 > -ts_type arkimex -ksp_gmres_restart 100 -ksp_max_it 500 -snes_max_fail 100 > -snes_max_linear_solve_fail 100 > > > > Approximation order = 2 > # DOF = 115200 > # nodes in mesh = 1680 > # elements in mesh = 1600 > Navier-Stokes solution > Using LLF flux > Are you using a limiter? > > > Linear solve converged due to CONVERGED_RTOL iterations 1 > > > Timestep 0: dt = 0.001, T = 0, Res[rho] = 0.966549, Res[rhou] = 6.11366, > Res[rhov] = 0.507325, Res[E] = 2.44463, CFL = 0.942045 > 0 SNES Function norm 3.203604511352e+03 > Linear solve did not converge due to DIVERGED_ITS iterations 500 > 1 SNES Function norm 3.440800722147e+02 > Linear solve did not converge due to DIVERGED_ITS iterations 500 > 2 SNES Function norm 2.008355246473e+02 > Linear solve did not converge due to DIVERGED_ITS iterations 500 > 3 SNES Function norm 1.177925999321e+02 > > as you may see the step size is quite small for this problem and I use and > inexact solution for the linear part of the newton iterations. Furthermore, > I compute numerically the jacobian of the matrix using coloring. > No need for -snes_mf_operator if you use coloring. What functions did you use for coloring? See if "-mat_fd_type ds" affects the results (good or bad). How are you ordering degrees of freedom? I would not rely on RCM with AIJ to coalesce all blocks so they can be solved together. I would try -sub_pc_type lu to see if it's the ILU or something else that is responsible for the lack of convergence. What units are you using for state variables and residuals? It is important to choose units so that the system is well-scaled when using implicit methods. How did you implement boundary conditions? Is your system written in conservative or primitive variables? You can do a "low-Mach" preconditioner using PCFieldSplit if the system is in primitve variables. For conservative variables, the preconditioner needs a change of variables and we don't have an interface to do it automatically, so you have to use PCShell. While this might eventually be more efficient than ASM, you should be able to make ASM work adequately. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Thu Nov 10 11:07:47 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Thu, 10 Nov 2011 19:07:47 +0200 Subject: [petsc-users] how to speed up convergence? In-Reply-To: References: Message-ID: <4EBC04E3.2040201@lycos.com> On 11/10/2011 06:43 PM, petsc-users-request at mcs.anl.gov wrote: > how to speed up convergence? Dear Matt and Jed, 1. I use the conservative variables. 2. No limiter. 3. Boundary conditions are weakly imposed (to solve the riemann problem). 4. mat_fd_type ds is worse for the current case 5. sub_pc_type lu did a better work but not to a satisfactory level 6. MATCOLORINGSL is used. For the preconditioner I need to do some changes. 
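For reference, a minimal sketch of the coloring-based finite-difference Jacobian setup with 3.2-style calls; J, snes, FormFunction and ctx stand for the application's own objects, error checking is omitted, and only the coloring type string changes between MATCOLORINGSL, MATCOLORINGLF, MATCOLORINGID and MATCOLORINGNATURAL:

ISColoring    iscoloring;
MatFDColoring fdcoloring;

MatGetColoring(J, MATCOLORINGSL, &iscoloring);
MatFDColoringCreate(J, iscoloring, &fdcoloring);
MatFDColoringSetFunction(fdcoloring, (PetscErrorCode (*)(void))FormFunction, ctx);
MatFDColoringSetFromOptions(fdcoloring);   /* picks up options such as -mat_fd_type ds */
SNESSetJacobian(snes, J, J, SNESDefaultComputeJacobianColor, fdcoloring);
ISColoringDestroy(&iscoloring);
/* MatFDColoringDestroy(&fdcoloring) once the solves are finished */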
Please help me first to understand the coloring types in order to choose the best one (any web pages, links etc.?). Thank you, Kostas From jedbrown at mcs.anl.gov Thu Nov 10 11:20:53 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Nov 2011 11:20:53 -0600 Subject: [petsc-users] how to speed up convergence? In-Reply-To: <4EBC04E3.2040201@lycos.com> References: <4EBC04E3.2040201@lycos.com> Message-ID: On Thu, Nov 10, 2011 at 11:07, Konstantinos Kontzialis < ckontzialis at lycos.com> wrote: > 1. I use the conservative variables. > > 2. No limiter. > > 3. Boundary conditions are weakly imposed (to solve the riemann problem). > > 4. mat_fd_type ds is worse for the current case > It sounds like your system is poorly scaled. If you were consistent about using PetscScalar in your code, then you can configure --with-precision=__float128 and try that. It should remove the differencing errors. It is very important to scale the system carefully when using coloring or matrix-free finite differencing. It's a waste of time to dwell on the other issues until you know that you have an accurate Jacobian. > 5. sub_pc_type lu did a better work but not to a satisfactory level > In terms of time or iteration count? This test is for iteration count. > > 6. MATCOLORINGSL is used. > > For the preconditioner I need to do some changes. > > Please help me first to understand the coloring types in order to choose > the best one (any web pages, links etc.?). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paeanball at gmail.com Thu Nov 10 14:45:13 2011 From: paeanball at gmail.com (Bao Kai) Date: Thu, 10 Nov 2011 23:45:13 +0300 Subject: [petsc-users] Any suggestion for this kinds of matrix? Message-ID: Dear Matthew, PCFIELDSPLIT seems a little more complex, I will try that. I tried some different preconditioners, only lu can get right results. With some pc, some wrong results can be obtained, such as the following one. tutorials]$ time ./ex78 -Ain A_in -rhs rhs -solu solu -noshift -pc_type hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-5 -ksp_typ gmres Read matrix in ascii format ... m: 288399, n: 288399, nz: 4023176 read A completed rowNumber[0] = 13 rowNumber[1] = 13 rowNumber[2] = 19 read A is complete ! Read rhs in ascii format ... Read exact solution in ascii format ... Accuracy of the reading data: | b - A*u |_2 : 3321.15 Iteration number is : 38 real 0m11.977s user 0m11.752s sys 0m0.216s The iteration should have converged, while converge to some wrong results. Regards, Kai Message: 8 > Date: Thu, 10 Nov 2011 14:10:14 +0000 > From: Matthew Knepley > Subject: Re: [petsc-users] Any suggestion for this kinds of matrix? > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="iso-8859-1" > > On Thu, Nov 10, 2011 at 1:48 PM, Bao Kai wrote: > > > Dear all, > > > > I have been trying with PETSC to solve the linear system from mixed > finite > > element method. > > > > The pattern of the matrix is as the following, but due to the irregular > > boundary involved, the matrix A is not strictly symmetric. > > > > A dt* C > > > > C^T 0 > > > > As a result of the matrix pattern, the diagonal entries of the > > bottom-right portion are all zero. > > > > I am just wondering if there are any suggestion of the type of the solver > > and preconditioner for this kinds of linear system? Thank you very much. > > > > When I tried to solve the system with PETSC, I got the following > > information. 
( PCType PCASM, KSPType, KSPFGMRES ) > > > > ILU is jsut not going to work for this type of matrix (a saddle point). I > suggest reading about PCFIELDSPLIT. > > Matt > > > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: Object is in wrong state! > > [0]PETSC ERROR: Matrix is missing diagonal entry 288398! > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 4, Sun Oct 23 12:23:18 > > CDT 2011 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Libraries linked from > > /home/baok/software/petsc-3.2-p4/arch-linux2-c-debug-withhypre/lib > > [0]PETSC ERROR: Configure run at Thu Nov 10 11:49:03 2011 > > [0]PETSC ERROR: Configure options --download-hypre=yes > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ_ilu0() line 1636 in > > /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c > > [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ() line 1740 in > > /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c > > [0]PETSC ERROR: MatILUFactorSymbolic() line 6092 in > > /home/baok/software/petsc-3.2-p4/src/mat/interface/matrix.c > > [0]PETSC ERROR: PCSetUp_ILU() line 216 in > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/factor/ilu/ilu.c > > [0]PETSC ERROR: PCSetUp() line 819 in > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: KSPSetUp() line 260 in > > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: PCSetUpOnBlocks_ASM() line 339 in > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/asm/asm.c > > [0]PETSC ERROR: PCSetUpOnBlocks() line 852 in > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: KSPSetUpOnBlocks() line 154 in > > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: KSPSolve() line 380 in > > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: main() line 261 in src/ksp/ksp/examples/tutorials/ex78.c > > > > > > Best Regards, > > Kai > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/2ea2b385/attachment.htm > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 35, Issue 29 > ******************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Nov 10 14:49:55 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Nov 2011 14:49:55 -0600 Subject: [petsc-users] Any suggestion for this kinds of matrix? 
In-Reply-To: References: Message-ID: On Thu, Nov 10, 2011 at 14:45, Bao Kai wrote: > PCFIELDSPLIT seems a little more complex, I will try that. > > I tried some different preconditioners, only lu can get right results. > > With some pc, some wrong results can be obtained, such as the following > one. > > tutorials]$ time ./ex78 -Ain A_in -rhs rhs -solu solu -noshift -pc_type > hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-5 > -ksp_typ gmres > Always run with -ksp_monitor_true_residual -ksp_converged_reason when checking whether a preconditioner is working. -------------- next part -------------- An HTML attachment was scrubbed... URL: From haoxiang at yahoo.cn Thu Nov 10 23:11:01 2011 From: haoxiang at yahoo.cn (Xiang Hao) Date: Thu, 10 Nov 2011 22:11:01 -0700 Subject: [petsc-users] Problem with including petscdmda.h Message-ID: Hi, I have a question about petscdmda.h. I have a program using PETSc, which is running well. Now I just add a new line #include in my code and get the following error. I don't understand what's going on here. Any help? ------------------------------------------------------------------------------------------------------------- In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:4:0, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: /home/sci/hao/software/PETSc/include/petscdm.h:27:8: error: ?PetscClassId? does not name a type /home/sci/hao/software/PETSc/include/petscdm.h:48:8: error: ?PetscBool? does not name a type /home/sci/hao/software/PETSc/include/petscdm.h:120:55: error: ?PetscBool? has not been declared /home/sci/hao/software/PETSc/include/petscdm.h:144:46: error: ?PetscBool? has not been declared /home/sci/hao/software/PETSc/include/petscdm.h:145:42: error: ?PetscBool? has not been declared /home/sci/hao/software/PETSc/include/petscdm.h:146:42: error: ?PetscBool? has not been declared In file included from /home/sci/hao/software/PETSc/include/petscdm.h:157:0, from /home/sci/hao/software/PETSc/include/petscdmda.h:4, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: /home/sci/hao/software/PETSc/include/petscbag.h:44:60: error: ?PetscBool? has not been declared In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:5:0, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: /home/sci/hao/software/PETSc/include/petscpf.h:41:8: error: ?PetscClassId? does not name a type In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:5:0, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, from /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: /home/sci/hao/software/PETSc/include/petscpf.h:52:8: error: ?PetscBool? does not name a type ------------------------------------------------------------------------------------------------------------------- The head file of my program begin with the following. The read line is the one I just added. #include #include #include #include #include #include #include #include #include #include -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Thu Nov 10 23:17:27 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Nov 2011 23:17:27 -0600 Subject: [petsc-users] Problem with including petscdmda.h In-Reply-To: References: Message-ID: On Thu, Nov 10, 2011 at 23:11, Xiang Hao wrote: > I have a program using PETSc, which is running well. Now I just add a new > line #include in my code and get the following error. I don't > understand what's going on here. Any help? > > > ------------------------------------------------------------------------------------------------------------- > In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:4:0, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: > /home/sci/hao/software/PETSc/include/petscdm.h:27:8: error: ?PetscClassId? > does not name a type > /home/sci/hao/software/PETSc/include/petscdm.h:48:8: error: ?PetscBool? > does not name a type > The most likely explanation is that a path to an old version of PETSc appears before this one in the header search paths, therefore it finds the file with a new name (petscdm.h) in the new directory, but finds all the supporting headers (which did not change names) in the old directory. Check the command line and environment. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Fri Nov 11 04:39:02 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Fri, 11 Nov 2011 12:39:02 +0200 Subject: [petsc-users] configuration problem Message-ID: <4EBCFB46.8050509@lycos.com> Dear all, I'm trying to configure petsc v3.2 with the following options: ./configure --with-debugging=1 --with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-large-file-io=1 --with-precision=__float128 --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 and I get: ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Could not use downloaded f-blas-lapack? ******************************************************************************* What should I do? Kostas From jroman at dsic.upv.es Fri Nov 11 05:05:34 2011 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Fri, 11 Nov 2011 12:05:34 +0100 Subject: [petsc-users] configuration problem In-Reply-To: <4EBCFB46.8050509@lycos.com> References: <4EBCFB46.8050509@lycos.com> Message-ID: <1D6E1BAA-7010-42EA-8F4B-AA99D957F94C@dsic.upv.es> El 11/11/2011, a las 11:39, Konstantinos Kontzialis escribi?: > Dear all, > > I'm trying to configure petsc v3.2 with the following options: > > ./configure --with-debugging=1 --with-mpi-dir=/usr/lib64/mpich2/bin > --with-shared-libraries > --with-large-file-io=1 > --with-precision=__float128 > --with-blacs=1 > --download-blacs=yes > --download-f-blas-lapack=yes > --with-plapack=1 > --download-plapack=yes > --with-scalapack=1 > --download-scalapack=yes > --with-superlu=1 > --download-superlu=yes > --with-superlu_dist=1 > --download-superlu_dist=yes > --with-ml=1 --download-ml=yes > --with-umfpack=1 > --download-umfpack=yes > --with-sundials=1 > --download-sundials=1 > --with-parmetis=1 > --download-parmetis=1 > --with-hypre=1 > --download-hypre=1 > > > and I get: > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > Could not use downloaded f-blas-lapack? > ******************************************************************************* > > What should I do? > > Kostas --with-precision=__float128 only works with --download-f2cblaslapack=yes Jose From paeanball at gmail.com Fri Nov 11 05:12:07 2011 From: paeanball at gmail.com (Bao Kai) Date: Fri, 11 Nov 2011 14:12:07 +0300 Subject: [petsc-users] Any suggestion for this kinds of matrix? Message-ID: Dear Jed, The following is the result with the options you told me. The iteration has converged, while converged at a wrong solution, compared to the result from LU. tutorials]$ time ./ex78 -Ain A_phi -rhs rhs_phi -solu solu_phi -noshift -pc_type hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-7 -ksp_typ gmres -ksp_monitor_true_residual -ksp_converged_reason Read matrix in ascii format ... m: 288399, n: 288399, nz: 4023176 read A completed rowNumber[0] = 13 rowNumber[1] = 13 rowNumber[2] = 19 read A is complete ! Read rhs in ascii format ... Read exact solution in ascii format ... 
0 KSP preconditioned resid norm 1.311815748108e+00 true resid norm 3.838432566849e-03 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 5.507600629359e-01 true resid norm 1.878066463331e-03 ||r(i)||/||b|| 4.892795250727e-01 2 KSP preconditioned resid norm 4.013507277599e-01 true resid norm 1.785692106749e-03 ||r(i)||/||b|| 4.652138797934e-01 3 KSP preconditioned resid norm 3.186589484171e-01 true resid norm 1.939928539000e-03 ||r(i)||/||b|| 5.053960191341e-01 4 KSP preconditioned resid norm 2.533816267053e-01 true resid norm 2.182716525094e-03 ||r(i)||/||b|| 5.686478756839e-01 5 KSP preconditioned resid norm 1.956749847727e-01 true resid norm 2.436511231157e-03 ||r(i)||/||b|| 6.347672360328e-01 6 KSP preconditioned resid norm 1.489551919079e-01 true resid norm 2.625495747280e-03 ||r(i)||/||b|| 6.840020507214e-01 7 KSP preconditioned resid norm 1.129706022530e-01 true resid norm 2.726727688051e-03 ||r(i)||/||b|| 7.103753004809e-01 8 KSP preconditioned resid norm 8.528153693722e-02 true resid norm 2.764872691683e-03 ||r(i)||/||b|| 7.203129515839e-01 9 KSP preconditioned resid norm 6.419522418091e-02 true resid norm 2.765551151077e-03 ||r(i)||/||b|| 7.204897058665e-01 10 KSP preconditioned resid norm 4.793073337207e-02 true resid norm 2.754803244091e-03 ||r(i)||/||b|| 7.176896288040e-01 11 KSP preconditioned resid norm 3.590594904610e-02 true resid norm 2.738481920714e-03 ||r(i)||/||b|| 7.134375485361e-01 12 KSP preconditioned resid norm 2.683482240096e-02 true resid norm 2.722031760807e-03 ||r(i)||/||b|| 7.091519033880e-01 13 KSP preconditioned resid norm 2.001207136261e-02 true resid norm 2.709429246945e-03 ||r(i)||/||b|| 7.058686585627e-01 14 KSP preconditioned resid norm 1.493908729876e-02 true resid norm 2.699791991674e-03 ||r(i)||/||b|| 7.033579318265e-01 15 KSP preconditioned resid norm 1.111558666088e-02 true resid norm 2.692890954089e-03 ||r(i)||/||b|| 7.015600527536e-01 16 KSP preconditioned resid norm 8.272119255509e-03 true resid norm 2.688069519102e-03 ||r(i)||/||b|| 7.003039580057e-01 17 KSP preconditioned resid norm 6.143976425601e-03 true resid norm 2.684895702336e-03 ||r(i)||/||b|| 6.994771057136e-01 18 KSP preconditioned resid norm 4.563685459707e-03 true resid norm 2.682859292253e-03 ||r(i)||/||b|| 6.989465740324e-01 19 KSP preconditioned resid norm 3.394656398417e-03 true resid norm 2.681459888330e-03 ||r(i)||/||b|| 6.985819971124e-01 20 KSP preconditioned resid norm 2.518916365228e-03 true resid norm 2.680609739005e-03 ||r(i)||/||b|| 6.983605136526e-01 21 KSP preconditioned resid norm 1.872307188353e-03 true resid norm 2.680081243394e-03 ||r(i)||/||b|| 6.982228283859e-01 22 KSP preconditioned resid norm 1.390334828536e-03 true resid norm 2.679742748684e-03 ||r(i)||/||b|| 6.981346427259e-01 23 KSP preconditioned resid norm 1.034606694934e-03 true resid norm 2.679535108562e-03 ||r(i)||/||b|| 6.980805476966e-01 24 KSP preconditioned resid norm 7.710134967260e-04 true resid norm 2.679397134577e-03 ||r(i)||/||b|| 6.980446022989e-01 25 KSP preconditioned resid norm 5.725407938260e-04 true resid norm 2.679329449922e-03 ||r(i)||/||b|| 6.980269688889e-01 26 KSP preconditioned resid norm 4.272990427118e-04 true resid norm 2.679284171589e-03 ||r(i)||/||b|| 6.980151728414e-01 27 KSP preconditioned resid norm 3.181341598383e-04 true resid norm 2.679247206576e-03 ||r(i)||/||b|| 6.980055426050e-01 28 KSP preconditioned resid norm 2.368729933003e-04 true resid norm 2.679233163958e-03 ||r(i)||/||b|| 6.980018841799e-01 29 KSP preconditioned resid norm 1.766017339700e-04 
true resid norm 2.679224053276e-03 ||r(i)||/||b|| 6.979995106376e-01 30 KSP preconditioned resid norm 1.313377419946e-04 true resid norm 2.679217016981e-03 ||r(i)||/||b|| 6.979976775210e-01 31 KSP preconditioned resid norm 9.789603459870e-05 true resid norm 2.679213290696e-03 ||r(i)||/||b|| 6.979967067380e-01 32 KSP preconditioned resid norm 7.275708495896e-05 true resid norm 2.679210423371e-03 ||r(i)||/||b|| 6.979959597340e-01 33 KSP preconditioned resid norm 5.412802491776e-05 true resid norm 2.679209810847e-03 ||r(i)||/||b|| 6.979958001573e-01 34 KSP preconditioned resid norm 4.026672785271e-05 true resid norm 2.679209362635e-03 ||r(i)||/||b|| 6.979956833876e-01 35 KSP preconditioned resid norm 2.990907253308e-05 true resid norm 2.679208426592e-03 ||r(i)||/||b|| 6.979954395269e-01 36 KSP preconditioned resid norm 2.226822676398e-05 true resid norm 2.679208136434e-03 ||r(i)||/||b|| 6.979953639342e-01 37 KSP preconditioned resid norm 1.654703590780e-05 true resid norm 2.679208182746e-03 ||r(i)||/||b|| 6.979953759996e-01 38 KSP preconditioned resid norm 1.229268254949e-05 true resid norm 2.679208209944e-03 ||r(i)||/||b|| 6.979953830852e-01 39 KSP preconditioned resid norm 9.149145951039e-06 true resid norm 2.679208050822e-03 ||r(i)||/||b|| 6.979953416303e-01 40 KSP preconditioned resid norm 6.813825018110e-06 true resid norm 2.679207932900e-03 ||r(i)||/||b|| 6.979953109089e-01 41 KSP preconditioned resid norm 5.075333494970e-06 true resid norm 2.679208029175e-03 ||r(i)||/||b|| 6.979953359907e-01 42 KSP preconditioned resid norm 3.770609781438e-06 true resid norm 2.679208069198e-03 ||r(i)||/||b|| 6.979953464175e-01 43 KSP preconditioned resid norm 2.808924777973e-06 true resid norm 2.679208000517e-03 ||r(i)||/||b|| 6.979953285246e-01 44 KSP preconditioned resid norm 2.094599249993e-06 true resid norm 2.679207985642e-03 ||r(i)||/||b|| 6.979953246492e-01 45 KSP preconditioned resid norm 1.559223301396e-06 true resid norm 2.679208018840e-03 ||r(i)||/||b|| 6.979953332981e-01 46 KSP preconditioned resid norm 1.160309778657e-06 true resid norm 2.679208029228e-03 ||r(i)||/||b|| 6.979953360044e-01 47 KSP preconditioned resid norm 8.638154916854e-07 true resid norm 2.679208013013e-03 ||r(i)||/||b|| 6.979953317800e-01 48 KSP preconditioned resid norm 6.436084879799e-07 true resid norm 2.679208008459e-03 ||r(i)||/||b|| 6.979953305937e-01 49 KSP preconditioned resid norm 4.797395939888e-07 true resid norm 2.679208018385e-03 ||r(i)||/||b|| 6.979953331796e-01 50 KSP preconditioned resid norm 3.573839482305e-07 true resid norm 2.679208020910e-03 ||r(i)||/||b|| 6.979953338374e-01 51 KSP preconditioned resid norm 2.662426448119e-07 true resid norm 2.679208017655e-03 ||r(i)||/||b|| 6.979953329896e-01 52 KSP preconditioned resid norm 1.984893339085e-07 true resid norm 2.679208016597e-03 ||r(i)||/||b|| 6.979953327137e-01 53 KSP preconditioned resid norm 1.484050273141e-07 true resid norm 2.679208018006e-03 ||r(i)||/||b|| 6.979953330809e-01 54 KSP preconditioned resid norm 1.106994152625e-07 true resid norm 2.679208019541e-03 ||r(i)||/||b|| 6.979953334807e-01 Linear solve converged due to CONVERGED_RTOL iterations 54 Accuracy of the soltuion on the solution from LU: | u -U_lu |_2 : 3321.15 Iteration number is : 54 Accuracy of the soltuion: | b - A*u |_2 : 0.004681 real 0m15.393s user 0m14.895s sys 0m0.251s > Message: 8 > > Date: Thu, 10 Nov 2011 14:10:14 +0000 > > From: Matthew Knepley > > Subject: Re: [petsc-users] Any suggestion for this kinds of matrix? 
> > To: PETSc users list > > Message-ID: > > < > CAMYG4GnxR2A+TcVzmsiOgD4wp4P+a_GRnPBg1YQKz7bJT5XiDw at mail.gmail.com > > > > > Content-Type: text/plain; charset="iso-8859-1" > > > > On Thu, Nov 10, 2011 at 1:48 PM, Bao Kai wrote: > > > > > Dear all, > > > > > > I have been trying with PETSC to solve the linear system from mixed > > finite > > > element method. > > > > > > The pattern of the matrix is as the following, but due to the irregular > > > boundary involved, the matrix A is not strictly symmetric. > > > > > > A dt* C > > > > > > C^T 0 > > > > > > As a result of the matrix pattern, the diagonal entries of the > > > bottom-right portion are all zero. > > > > > > I am just wondering if there are any suggestion of the type of the > solver > > > and preconditioner for this kinds of linear system? Thank you very > much. > > > > > > When I tried to solve the system with PETSC, I got the following > > > information. ( PCType PCASM, KSPType, KSPFGMRES ) > > > > > > > ILU is jsut not going to work for this type of matrix (a saddle point). I > > suggest reading about PCFIELDSPLIT. > > > > Matt > > > > > > > [0]PETSC ERROR: --------------------- Error Message > > > ------------------------------------ > > > [0]PETSC ERROR: Object is in wrong state! > > > [0]PETSC ERROR: Matrix is missing diagonal entry 288398! > > > [0]PETSC ERROR: > > > > ------------------------------------------------------------------------ > > > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 4, Sun Oct 23 > 12:23:18 > > > CDT 2011 > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > > [0]PETSC ERROR: See docs/index.html for manual pages. > > > [0]PETSC ERROR: > > > > ------------------------------------------------------------------------ > > > [0]PETSC ERROR: Libraries linked from > > > /home/baok/software/petsc-3.2-p4/arch-linux2-c-debug-withhypre/lib > > > [0]PETSC ERROR: Configure run at Thu Nov 10 11:49:03 2011 > > > [0]PETSC ERROR: Configure options --download-hypre=yes > > > [0]PETSC ERROR: > > > > ------------------------------------------------------------------------ > > > [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ_ilu0() line 1636 in > > > /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c > > > [0]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ() line 1740 in > > > /home/baok/software/petsc-3.2-p4/src/mat/impls/aij/seq/aijfact.c > > > [0]PETSC ERROR: MatILUFactorSymbolic() line 6092 in > > > /home/baok/software/petsc-3.2-p4/src/mat/interface/matrix.c > > > [0]PETSC ERROR: PCSetUp_ILU() line 216 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/factor/ilu/ilu.c > > > [0]PETSC ERROR: PCSetUp() line 819 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c > > > [0]PETSC ERROR: KSPSetUp() line 260 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > > > [0]PETSC ERROR: PCSetUpOnBlocks_ASM() line 339 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/impls/asm/asm.c > > > [0]PETSC ERROR: PCSetUpOnBlocks() line 852 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/pc/interface/precon.c > > > [0]PETSC ERROR: KSPSetUpOnBlocks() line 154 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > > > [0]PETSC ERROR: KSPSolve() line 380 in > > > /home/baok/software/petsc-3.2-p4/src/ksp/ksp/interface/itfunc.c > > > [0]PETSC ERROR: main() line 261 in > src/ksp/ksp/examples/tutorials/ex78.c > > > > > > > > > Best Regards, > > > Kai > > > > > > > > > 
> > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > their > > experiments lead. > > -- Norbert Wiener > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: < > > > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/2ea2b385/attachment.htm > > > > > > > ------------------------------ > > > > _______________________________________________ > > petsc-users mailing list > > petsc-users at mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > > > > End of petsc-users Digest, Vol 35, Issue 29 > > ******************************************* > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/da069f92/attachment-0001.htm > > > > ------------------------------ > > Message: 2 > Date: Thu, 10 Nov 2011 14:49:55 -0600 > From: Jed Brown > Subject: Re: [petsc-users] Any suggestion for this kinds of matrix? > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > On Thu, Nov 10, 2011 at 14:45, Bao Kai wrote: > > > PCFIELDSPLIT seems a little more complex, I will try that. > > > > I tried some different preconditioners, only lu can get right results. > > > > With some pc, some wrong results can be obtained, such as the following > > one. > > > > tutorials]$ time ./ex78 -Ain A_in -rhs rhs -solu solu -noshift -pc_type > > hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-5 > > -ksp_typ gmres > > > > Always run with -ksp_monitor_true_residual -ksp_converged_reason when > checking whether a preconditioner is working. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/3d5e1de6/attachment-0001.htm > > > > ------------------------------ > > Message: 3 > Date: Thu, 10 Nov 2011 22:11:01 -0700 > From: Xiang Hao > Subject: [petsc-users] Problem with including petscdmda.h > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="windows-1252" > > Hi, > > I have a question about petscdmda.h. > > I have a program using PETSc, which is running well. Now I just add a new > line #include in my code and get the following error. I don't > understand what's going on here. Any help? > > > ------------------------------------------------------------------------------------------------------------- > In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:4:0, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: > /home/sci/hao/software/PETSc/include/petscdm.h:27:8: error: ?PetscClassId? > does not name a type > /home/sci/hao/software/PETSc/include/petscdm.h:48:8: error: ?PetscBool? > does not name a type > /home/sci/hao/software/PETSc/include/petscdm.h:120:55: error: ?PetscBool? > has not been declared > /home/sci/hao/software/PETSc/include/petscdm.h:144:46: error: ?PetscBool? > has not been declared > /home/sci/hao/software/PETSc/include/petscdm.h:145:42: error: ?PetscBool? > has not been declared > /home/sci/hao/software/PETSc/include/petscdm.h:146:42: error: ?PetscBool? 
> has not been declared > In file included from /home/sci/hao/software/PETSc/include/petscdm.h:157:0, > from /home/sci/hao/software/PETSc/include/petscdmda.h:4, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: > /home/sci/hao/software/PETSc/include/petscbag.h:44:60: error: ?PetscBool? > has not been declared > In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:5:0, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: > /home/sci/hao/software/PETSc/include/petscpf.h:41:8: error: ?PetscClassId? > does not name a type > In file included from /home/sci/hao/software/PETSc/include/petscdmda.h:5:0, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, > from > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: > /home/sci/hao/software/PETSc/include/petscpf.h:52:8: error: ?PetscBool? > does not name a type > > ------------------------------------------------------------------------------------------------------------------- > > > The head file of my program begin with the following. The read line is the > one I just added. > > #include > #include > #include > #include > #include > #include > #include > #include > #include > #include > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/a9209e2e/attachment-0001.htm > > > > ------------------------------ > > Message: 4 > Date: Thu, 10 Nov 2011 23:17:27 -0600 > From: Jed Brown > Subject: Re: [petsc-users] Problem with including petscdmda.h > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > On Thu, Nov 10, 2011 at 23:11, Xiang Hao wrote: > > > I have a program using PETSc, which is running well. Now I just add a new > > line #include in my code and get the following error. I > don't > > understand what's going on here. Any help? > > > > > > > ------------------------------------------------------------------------------------------------------------- > > In file included from > /home/sci/hao/software/PETSc/include/petscdmda.h:4:0, > > from > > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.h:18, > > from > > /home/sci/hao/programming/C++/ITK/SolveAlpha/SolveAlpha.cxx:9: > > /home/sci/hao/software/PETSc/include/petscdm.h:27:8: error: > ?PetscClassId? > > does not name a type > > /home/sci/hao/software/PETSc/include/petscdm.h:48:8: error: ?PetscBool? > > does not name a type > > > > The most likely explanation is that a path to an old version of PETSc > appears before this one in the header search paths, therefore it finds the > file with a new name (petscdm.h) in the new directory, but finds all the > supporting headers (which did not change names) in the old directory. Check > the command line and environment. > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/7c64d647/attachment-0001.htm > > > > ------------------------------ > > Message: 5 > Date: Fri, 11 Nov 2011 12:39:02 +0200 > From: Konstantinos Kontzialis > Subject: [petsc-users] configuration problem > To: petsc-users at mcs.anl.gov > Message-ID: <4EBCFB46.8050509 at lycos.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Dear all, > > I'm trying to configure petsc v3.2 with the following options: > > ./configure --with-debugging=1 --with-mpi-dir=/usr/lib64/mpich2/bin > --with-shared-libraries > --with-large-file-io=1 > --with-precision=__float128 > --with-blacs=1 > --download-blacs=yes > --download-f-blas-lapack=yes > --with-plapack=1 > --download-plapack=yes > --with-scalapack=1 > --download-scalapack=yes > --with-superlu=1 > --download-superlu=yes > --with-superlu_dist=1 > --download-superlu_dist=yes > --with-ml=1 --download-ml=yes > --with-umfpack=1 > --download-umfpack=yes > --with-sundials=1 > --download-sundials=1 > --with-parmetis=1 > --download-parmetis=1 > --with-hypre=1 > --download-hypre=1 > > > and I get: > > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > > ------------------------------------------------------------------------------- > Could not use downloaded f-blas-lapack? > > ******************************************************************************* > > What should I do? > > Kostas > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 35, Issue 32 > ******************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Fri Nov 11 05:21:32 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Fri, 11 Nov 2011 13:21:32 +0200 Subject: [petsc-users] configuration problem In-Reply-To: References: Message-ID: <4EBD053C.6080902@lycos.com> On 11/11/2011 01:12 PM, petsc-users-request at mcs.anl.gov wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. Re: configuration problem (Jose E. Roman) > 2. Re: Any suggestion for this kinds of matrix? (Bao Kai) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 11 Nov 2011 12:05:34 +0100 > From: "Jose E. 
Roman" > Subject: Re: [petsc-users] configuration problem > To: PETSc users list > Message-ID:<1D6E1BAA-7010-42EA-8F4B-AA99D957F94C at dsic.upv.es> > Content-Type: text/plain; charset=iso-8859-1 > > > El 11/11/2011, a las 11:39, Konstantinos Kontzialis escribi?: > >> Dear all, >> >> I'm trying to configure petsc v3.2 with the following options: >> >> ./configure --with-debugging=1 --with-mpi-dir=/usr/lib64/mpich2/bin >> --with-shared-libraries >> --with-large-file-io=1 >> --with-precision=__float128 >> --with-blacs=1 >> --download-blacs=yes >> --download-f-blas-lapack=yes >> --with-plapack=1 >> --download-plapack=yes >> --with-scalapack=1 >> --download-scalapack=yes >> --with-superlu=1 >> --download-superlu=yes >> --with-superlu_dist=1 >> --download-superlu_dist=yes >> --with-ml=1 --download-ml=yes >> --with-umfpack=1 >> --download-umfpack=yes >> --with-sundials=1 >> --download-sundials=1 >> --with-parmetis=1 >> --download-parmetis=1 >> --with-hypre=1 >> --download-hypre=1 >> >> >> and I get: >> >> >> ******************************************************************************* >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): >> ------------------------------------------------------------------------------- >> Could not use downloaded f-blas-lapack? >> ******************************************************************************* >> >> What should I do? >> >> Kostas > > > --with-precision=__float128 only works with --download-f2cblaslapack=yes > > Jose > > > > ------------------------------ > > Message: 2 > Date: Fri, 11 Nov 2011 14:12:07 +0300 > From: Bao Kai > Subject: Re: [petsc-users] Any suggestion for this kinds of matrix? > To: petsc-users at mcs.anl.gov > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Dear Jed, > > The following is the result with the options you told me. The iteration has > converged, while converged at a wrong solution, compared to the result from > LU. > > tutorials]$ time ./ex78 -Ain A_phi -rhs rhs_phi -solu solu_phi -noshift > -pc_type hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol > 1e-7 -ksp_typ gmres -ksp_monitor_true_residual -ksp_converged_reason > > Read matrix in ascii format ... > m: 288399, n: 288399, nz: 4023176 > read A completed > rowNumber[0] = 13 > rowNumber[1] = 13 > rowNumber[2] = 19 > read A is complete ! > > Read rhs in ascii format ... > > Read exact solution in ascii format ... 
> [...]
>
> End of petsc-users Digest, Vol 35, Issue 33
> *******************************************

Dear Jose,

I do this

./configure --with-debugging=1 --with-mpi-dir=/usr/lib64/mpich2/bin \
> --with-shared-libraries \
> --with-large-file-io=1 \
> --with-precision=__float128 \
> --with-blacs=1 --download-f2cblaslapack=yes \
> --with-plapack=1 --download-plapack=yes \
> --with-scalapack=1 --download-scalapack=yes \
> --with-superlu=1 --download-superlu=yes \
> --with-superlu_dist=1 --download-superlu_dist=yes \
> --with-ml=1 --download-ml=yes \
> --with-umfpack=1 --download-umfpack=yes \
> --with-sundials=1 --download-sundials=1 \
> --with-parmetis=1 --download-parmetis=1 \
> --with-hypre=1 --download-hypre=1

and I get :

===============================================================================
             Configuring PETSc to compile on your system
===============================================================================
TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/Bla*******************************************************************************
         UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
-------------------------------------------------------------------------------
--download-f2cblaslapack libraries cannot be used
*******************************************************************************

What should I do?

From jroman at dsic.upv.es  Fri Nov 11 05:24:25 2011
From: jroman at dsic.upv.es (Jose E.
Roman) Date: Fri, 11 Nov 2011 12:24:25 +0100 Subject: [petsc-users] configuration problem In-Reply-To: <4EBD053C.6080902@lycos.com> References: <4EBD053C.6080902@lycos.com> Message-ID: <82FF226D-AAD2-4E0B-BC6D-AB644D42A0F3@dsic.upv.es> El 11/11/2011, a las 12:21, Konstantinos Kontzialis escribi?: > Dear Jose, > > I do this > > ./configure --with-debugging=1 --with-mpi-dir=/usr/lib64/mpich2/bin \ > > --with-shared-libraries \ > > --with-large-file-io=1 \ > > --with-precision=__float128 \ > > --with-blacs=1 --download-f2cblaslapack=yes \ > > --with-plapack=1 --download-plapack=yes \ > > --with-scalapack=1 --download-scalapack=yes \ > > --with-superlu=1 --download-superlu=yes \ > > --with-superlu_dist=1 --download-superlu_dist=yes \ > > --with-ml=1 --download-ml=yes \ > > --with-umfpack=1 --download-umfpack=yes \ > > --with-sundials=1 --download-sundials=1 \ > > --with-parmetis=1 --download-parmetis=1 \ > > --with-hypre=1 --download-hypre=1 > > and I get : > > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/Bla******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > --download-f2cblaslapack libraries cannot be used > ******************************************************************************* > > What should I do? I forgot to mention that external packages do not support __float128. If you really want to use __float128 it is only possible with plain PETSc. For configuration problems it is better to send configure.log to petsc-maint. Jose From paeanball at gmail.com Fri Nov 11 05:31:06 2011 From: paeanball at gmail.com (Bao Kai) Date: Fri, 11 Nov 2011 14:31:06 +0300 Subject: [petsc-users] petsc-users Digest, Vol 35, Issue 32 In-Reply-To: References: Message-ID: Dear Jed, The following is the result with the options you told me. The iteration has converged, while converged at a wrong solution, compared to the result from LU. tutorials]$ time ./ex78 -Ain A_phi -rhs rhs_phi -solu solu_phi -noshift -pc_type hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-7 -ksp_typ gmres -ksp_monitor_true_residual -ksp_converged_reason Read matrix in ascii format ... m: 288399, n: 288399, nz: 4023176 read A completed rowNumber[0] = 13 rowNumber[1] = 13 rowNumber[2] = 19 read A is complete ! Read rhs in ascii format ... Read exact solution in ascii format ... 
0 KSP preconditioned resid norm 1.311815748108e+00 true resid norm 3.838432566849e-03 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 5.507600629359e-01 true resid norm 1.878066463331e-03 ||r(i)||/||b|| 4.892795250727e-01 2 KSP preconditioned resid norm 4.013507277599e-01 true resid norm 1.785692106749e-03 ||r(i)||/||b|| 4.652138797934e-01 3 KSP preconditioned resid norm 3.186589484171e-01 true resid norm 1.939928539000e-03 ||r(i)||/||b|| 5.053960191341e-01 4 KSP preconditioned resid norm 2.533816267053e-01 true resid norm 2.182716525094e-03 ||r(i)||/||b|| 5.686478756839e-01 5 KSP preconditioned resid norm 1.956749847727e-01 true resid norm 2.436511231157e-03 ||r(i)||/||b|| 6.347672360328e-01 6 KSP preconditioned resid norm 1.489551919079e-01 true resid norm 2.625495747280e-03 ||r(i)||/||b|| 6.840020507214e-01 7 KSP preconditioned resid norm 1.129706022530e-01 true resid norm 2.726727688051e-03 ||r(i)||/||b|| 7.103753004809e-01 8 KSP preconditioned resid norm 8.528153693722e-02 true resid norm 2.764872691683e-03 ||r(i)||/||b|| 7.203129515839e-01 9 KSP preconditioned resid norm 6.419522418091e-02 true resid norm 2.765551151077e-03 ||r(i)||/||b|| 7.204897058665e-01 10 KSP preconditioned resid norm 4.793073337207e-02 true resid norm 2.754803244091e-03 ||r(i)||/||b|| 7.176896288040e-01 11 KSP preconditioned resid norm 3.590594904610e-02 true resid norm 2.738481920714e-03 ||r(i)||/||b|| 7.134375485361e-01 12 KSP preconditioned resid norm 2.683482240096e-02 true resid norm 2.722031760807e-03 ||r(i)||/||b|| 7.091519033880e-01 13 KSP preconditioned resid norm 2.001207136261e-02 true resid norm 2.709429246945e-03 ||r(i)||/||b|| 7.058686585627e-01 14 KSP preconditioned resid norm 1.493908729876e-02 true resid norm 2.699791991674e-03 ||r(i)||/||b|| 7.033579318265e-01 15 KSP preconditioned resid norm 1.111558666088e-02 true resid norm 2.692890954089e-03 ||r(i)||/||b|| 7.015600527536e-01 16 KSP preconditioned resid norm 8.272119255509e-03 true resid norm 2.688069519102e-03 ||r(i)||/||b|| 7.003039580057e-01 17 KSP preconditioned resid norm 6.143976425601e-03 true resid norm 2.684895702336e-03 ||r(i)||/||b|| 6.994771057136e-01 18 KSP preconditioned resid norm 4.563685459707e-03 true resid norm 2.682859292253e-03 ||r(i)||/||b|| 6.989465740324e-01 19 KSP preconditioned resid norm 3.394656398417e-03 true resid norm 2.681459888330e-03 ||r(i)||/||b|| 6.985819971124e-01 20 KSP preconditioned resid norm 2.518916365228e-03 true resid norm 2.680609739005e-03 ||r(i)||/||b|| 6.983605136526e-01 21 KSP preconditioned resid norm 1.872307188353e-03 true resid norm 2.680081243394e-03 ||r(i)||/||b|| 6.982228283859e-01 22 KSP preconditioned resid norm 1.390334828536e-03 true resid norm 2.679742748684e-03 ||r(i)||/||b|| 6.981346427259e-01 23 KSP preconditioned resid norm 1.034606694934e-03 true resid norm 2.679535108562e-03 ||r(i)||/||b|| 6.980805476966e-01 24 KSP preconditioned resid norm 7.710134967260e-04 true resid norm 2.679397134577e-03 ||r(i)||/||b|| 6.980446022989e-01 25 KSP preconditioned resid norm 5.725407938260e-04 true resid norm 2.679329449922e-03 ||r(i)||/||b|| 6.980269688889e-01 26 KSP preconditioned resid norm 4.272990427118e-04 true resid norm 2.679284171589e-03 ||r(i)||/||b|| 6.980151728414e-01 27 KSP preconditioned resid norm 3.181341598383e-04 true resid norm 2.679247206576e-03 ||r(i)||/||b|| 6.980055426050e-01 28 KSP preconditioned resid norm 2.368729933003e-04 true resid norm 2.679233163958e-03 ||r(i)||/||b|| 6.980018841799e-01 29 KSP preconditioned resid norm 1.766017339700e-04 
true resid norm 2.679224053276e-03 ||r(i)||/||b|| 6.979995106376e-01 30 KSP preconditioned resid norm 1.313377419946e-04 true resid norm 2.679217016981e-03 ||r(i)||/||b|| 6.979976775210e-01 31 KSP preconditioned resid norm 9.789603459870e-05 true resid norm 2.679213290696e-03 ||r(i)||/||b|| 6.979967067380e-01 32 KSP preconditioned resid norm 7.275708495896e-05 true resid norm 2.679210423371e-03 ||r(i)||/||b|| 6.979959597340e-01 33 KSP preconditioned resid norm 5.412802491776e-05 true resid norm 2.679209810847e-03 ||r(i)||/||b|| 6.979958001573e-01 34 KSP preconditioned resid norm 4.026672785271e-05 true resid norm 2.679209362635e-03 ||r(i)||/||b|| 6.979956833876e-01 35 KSP preconditioned resid norm 2.990907253308e-05 true resid norm 2.679208426592e-03 ||r(i)||/||b|| 6.979954395269e-01 36 KSP preconditioned resid norm 2.226822676398e-05 true resid norm 2.679208136434e-03 ||r(i)||/||b|| 6.979953639342e-01 37 KSP preconditioned resid norm 1.654703590780e-05 true resid norm 2.679208182746e-03 ||r(i)||/||b|| 6.979953759996e-01 38 KSP preconditioned resid norm 1.229268254949e-05 true resid norm 2.679208209944e-03 ||r(i)||/||b|| 6.979953830852e-01 39 KSP preconditioned resid norm 9.149145951039e-06 true resid norm 2.679208050822e-03 ||r(i)||/||b|| 6.979953416303e-01 40 KSP preconditioned resid norm 6.813825018110e-06 true resid norm 2.679207932900e-03 ||r(i)||/||b|| 6.979953109089e-01 41 KSP preconditioned resid norm 5.075333494970e-06 true resid norm 2.679208029175e-03 ||r(i)||/||b|| 6.979953359907e-01 42 KSP preconditioned resid norm 3.770609781438e-06 true resid norm 2.679208069198e-03 ||r(i)||/||b|| 6.979953464175e-01 43 KSP preconditioned resid norm 2.808924777973e-06 true resid norm 2.679208000517e-03 ||r(i)||/||b|| 6.979953285246e-01 44 KSP preconditioned resid norm 2.094599249993e-06 true resid norm 2.679207985642e-03 ||r(i)||/||b|| 6.979953246492e-01 45 KSP preconditioned resid norm 1.559223301396e-06 true resid norm 2.679208018840e-03 ||r(i)||/||b|| 6.979953332981e-01 46 KSP preconditioned resid norm 1.160309778657e-06 true resid norm 2.679208029228e-03 ||r(i)||/||b|| 6.979953360044e-01 47 KSP preconditioned resid norm 8.638154916854e-07 true resid norm 2.679208013013e-03 ||r(i)||/||b|| 6.979953317800e-01 48 KSP preconditioned resid norm 6.436084879799e-07 true resid norm 2.679208008459e-03 ||r(i)||/||b|| 6.979953305937e-01 49 KSP preconditioned resid norm 4.797395939888e-07 true resid norm 2.679208018385e-03 ||r(i)||/||b|| 6.979953331796e-01 50 KSP preconditioned resid norm 3.573839482305e-07 true resid norm 2.679208020910e-03 ||r(i)||/||b|| 6.979953338374e-01 51 KSP preconditioned resid norm 2.662426448119e-07 true resid norm 2.679208017655e-03 ||r(i)||/||b|| 6.979953329896e-01 52 KSP preconditioned resid norm 1.984893339085e-07 true resid norm 2.679208016597e-03 ||r(i)||/||b|| 6.979953327137e-01 53 KSP preconditioned resid norm 1.484050273141e-07 true resid norm 2.679208018006e-03 ||r(i)||/||b|| 6.979953330809e-01 54 KSP preconditioned resid norm 1.106994152625e-07 true resid norm 2.679208019541e-03 ||r(i)||/||b|| 6.979953334807e-01 Linear solve converged due to CONVERGED_RTOL iterations 54 Accuracy of the soltuion on the solution from LU: | u -U_lu |_2 : 3321.15 Iteration number is : 54 Accuracy of the soltuion: | b - A*u |_2 : 0.004681 real 0m15.393s user 0m14.895s sys 0m0.251s > Message: 2 > Date: Thu, 10 Nov 2011 14:49:55 -0600 > From: Jed Brown > Subject: Re: [petsc-users] Any suggestion for this kinds of matrix? 
> To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > On Thu, Nov 10, 2011 at 14:45, Bao Kai wrote: > > > PCFIELDSPLIT seems a little more complex, I will try that. > > > > I tried some different preconditioners, only lu can get right results. > > > > With some pc, some wrong results can be obtained, such as the following > > one. > > > > tutorials]$ time ./ex78 -Ain A_in -rhs rhs -solu solu -noshift -pc_type > > hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-5 > > -ksp_typ gmres > > > > Always run with -ksp_monitor_true_residual -ksp_converged_reason when > checking whether a preconditioner is working. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/3d5e1de6/attachment-0001.htm > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paeanball at gmail.com Fri Nov 11 05:33:57 2011 From: paeanball at gmail.com (Bao Kai) Date: Fri, 11 Nov 2011 14:33:57 +0300 Subject: [petsc-users] Any suggestion for this kinds of matrix? Message-ID: Dear Jed, The following is the result with the options you told me. The iteration has converged, while converged at a wrong solution, compared to the result from LU. tutorials]$ time ./ex78 -Ain A_phi -rhs rhs_phi -solu solu_phi -noshift -pc_type hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-7 -ksp_typ gmres -ksp_monitor_true_residual -ksp_converged_reason Read matrix in ascii format ... m: 288399, n: 288399, nz: 4023176 read A completed rowNumber[0] = 13 rowNumber[1] = 13 rowNumber[2] = 19 read A is complete ! Read rhs in ascii format ... Read exact solution in ascii format ... 0 KSP preconditioned resid norm 1.311815748108e+00 true resid norm 3.838432566849e-03 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 5.507600629359e-01 true resid norm 1.878066463331e-03 ||r(i)||/||b|| 4.892795250727e-01 2 KSP preconditioned resid norm 4.013507277599e-01 true resid norm 1.785692106749e-03 ||r(i)||/||b|| 4.652138797934e-01 3 KSP preconditioned resid norm 3.186589484171e-01 true resid norm 1.939928539000e-03 ||r(i)||/||b|| 5.053960191341e-01 4 KSP preconditioned resid norm 2.533816267053e-01 true resid norm 2.182716525094e-03 ||r(i)||/||b|| 5.686478756839e-01 5 KSP preconditioned resid norm 1.956749847727e-01 true resid norm 2.436511231157e-03 ||r(i)||/||b|| 6.347672360328e-01 6 KSP preconditioned resid norm 1.489551919079e-01 true resid norm 2.625495747280e-03 ||r(i)||/||b|| 6.840020507214e-01 7 KSP preconditioned resid norm 1.129706022530e-01 true resid norm 2.726727688051e-03 ||r(i)||/||b|| 7.103753004809e-01 8 KSP preconditioned resid norm 8.528153693722e-02 true resid norm 2.764872691683e-03 ||r(i)||/||b|| 7.203129515839e-01 9 KSP preconditioned resid norm 6.419522418091e-02 true resid norm 2.765551151077e-03 ||r(i)||/||b|| 7.204897058665e-01 10 KSP preconditioned resid norm 4.793073337207e-02 true resid norm 2.754803244091e-03 ||r(i)||/||b|| 7.176896288040e-01 11 KSP preconditioned resid norm 3.590594904610e-02 true resid norm 2.738481920714e-03 ||r(i)||/||b|| 7.134375485361e-01 12 KSP preconditioned resid norm 2.683482240096e-02 true resid norm 2.722031760807e-03 ||r(i)||/||b|| 7.091519033880e-01 13 KSP preconditioned resid norm 2.001207136261e-02 true resid norm 2.709429246945e-03 ||r(i)||/||b|| 7.058686585627e-01 14 KSP preconditioned resid norm 1.493908729876e-02 true resid norm 2.699791991674e-03 ||r(i)||/||b|| 
7.033579318265e-01 15 KSP preconditioned resid norm 1.111558666088e-02 true resid norm 2.692890954089e-03 ||r(i)||/||b|| 7.015600527536e-01 16 KSP preconditioned resid norm 8.272119255509e-03 true resid norm 2.688069519102e-03 ||r(i)||/||b|| 7.003039580057e-01 17 KSP preconditioned resid norm 6.143976425601e-03 true resid norm 2.684895702336e-03 ||r(i)||/||b|| 6.994771057136e-01 18 KSP preconditioned resid norm 4.563685459707e-03 true resid norm 2.682859292253e-03 ||r(i)||/||b|| 6.989465740324e-01 19 KSP preconditioned resid norm 3.394656398417e-03 true resid norm 2.681459888330e-03 ||r(i)||/||b|| 6.985819971124e-01 20 KSP preconditioned resid norm 2.518916365228e-03 true resid norm 2.680609739005e-03 ||r(i)||/||b|| 6.983605136526e-01 21 KSP preconditioned resid norm 1.872307188353e-03 true resid norm 2.680081243394e-03 ||r(i)||/||b|| 6.982228283859e-01 22 KSP preconditioned resid norm 1.390334828536e-03 true resid norm 2.679742748684e-03 ||r(i)||/||b|| 6.981346427259e-01 23 KSP preconditioned resid norm 1.034606694934e-03 true resid norm 2.679535108562e-03 ||r(i)||/||b|| 6.980805476966e-01 24 KSP preconditioned resid norm 7.710134967260e-04 true resid norm 2.679397134577e-03 ||r(i)||/||b|| 6.980446022989e-01 25 KSP preconditioned resid norm 5.725407938260e-04 true resid norm 2.679329449922e-03 ||r(i)||/||b|| 6.980269688889e-01 26 KSP preconditioned resid norm 4.272990427118e-04 true resid norm 2.679284171589e-03 ||r(i)||/||b|| 6.980151728414e-01 27 KSP preconditioned resid norm 3.181341598383e-04 true resid norm 2.679247206576e-03 ||r(i)||/||b|| 6.980055426050e-01 28 KSP preconditioned resid norm 2.368729933003e-04 true resid norm 2.679233163958e-03 ||r(i)||/||b|| 6.980018841799e-01 29 KSP preconditioned resid norm 1.766017339700e-04 true resid norm 2.679224053276e-03 ||r(i)||/||b|| 6.979995106376e-01 30 KSP preconditioned resid norm 1.313377419946e-04 true resid norm 2.679217016981e-03 ||r(i)||/||b|| 6.979976775210e-01 31 KSP preconditioned resid norm 9.789603459870e-05 true resid norm 2.679213290696e-03 ||r(i)||/||b|| 6.979967067380e-01 32 KSP preconditioned resid norm 7.275708495896e-05 true resid norm 2.679210423371e-03 ||r(i)||/||b|| 6.979959597340e-01 33 KSP preconditioned resid norm 5.412802491776e-05 true resid norm 2.679209810847e-03 ||r(i)||/||b|| 6.979958001573e-01 34 KSP preconditioned resid norm 4.026672785271e-05 true resid norm 2.679209362635e-03 ||r(i)||/||b|| 6.979956833876e-01 35 KSP preconditioned resid norm 2.990907253308e-05 true resid norm 2.679208426592e-03 ||r(i)||/||b|| 6.979954395269e-01 36 KSP preconditioned resid norm 2.226822676398e-05 true resid norm 2.679208136434e-03 ||r(i)||/||b|| 6.979953639342e-01 37 KSP preconditioned resid norm 1.654703590780e-05 true resid norm 2.679208182746e-03 ||r(i)||/||b|| 6.979953759996e-01 38 KSP preconditioned resid norm 1.229268254949e-05 true resid norm 2.679208209944e-03 ||r(i)||/||b|| 6.979953830852e-01 39 KSP preconditioned resid norm 9.149145951039e-06 true resid norm 2.679208050822e-03 ||r(i)||/||b|| 6.979953416303e-01 40 KSP preconditioned resid norm 6.813825018110e-06 true resid norm 2.679207932900e-03 ||r(i)||/||b|| 6.979953109089e-01 41 KSP preconditioned resid norm 5.075333494970e-06 true resid norm 2.679208029175e-03 ||r(i)||/||b|| 6.979953359907e-01 42 KSP preconditioned resid norm 3.770609781438e-06 true resid norm 2.679208069198e-03 ||r(i)||/||b|| 6.979953464175e-01 43 KSP preconditioned resid norm 2.808924777973e-06 true resid norm 2.679208000517e-03 ||r(i)||/||b|| 6.979953285246e-01 44 KSP preconditioned 
resid norm 2.094599249993e-06 true resid norm 2.679207985642e-03 ||r(i)||/||b|| 6.979953246492e-01 45 KSP preconditioned resid norm 1.559223301396e-06 true resid norm 2.679208018840e-03 ||r(i)||/||b|| 6.979953332981e-01 46 KSP preconditioned resid norm 1.160309778657e-06 true resid norm 2.679208029228e-03 ||r(i)||/||b|| 6.979953360044e-01 47 KSP preconditioned resid norm 8.638154916854e-07 true resid norm 2.679208013013e-03 ||r(i)||/||b|| 6.979953317800e-01 48 KSP preconditioned resid norm 6.436084879799e-07 true resid norm 2.679208008459e-03 ||r(i)||/||b|| 6.979953305937e-01 49 KSP preconditioned resid norm 4.797395939888e-07 true resid norm 2.679208018385e-03 ||r(i)||/||b|| 6.979953331796e-01 50 KSP preconditioned resid norm 3.573839482305e-07 true resid norm 2.679208020910e-03 ||r(i)||/||b|| 6.979953338374e-01 51 KSP preconditioned resid norm 2.662426448119e-07 true resid norm 2.679208017655e-03 ||r(i)||/||b|| 6.979953329896e-01 52 KSP preconditioned resid norm 1.984893339085e-07 true resid norm 2.679208016597e-03 ||r(i)||/||b|| 6.979953327137e-01 53 KSP preconditioned resid norm 1.484050273141e-07 true resid norm 2.679208018006e-03 ||r(i)||/||b|| 6.979953330809e-01 54 KSP preconditioned resid norm 1.106994152625e-07 true resid norm 2.679208019541e-03 ||r(i)||/||b|| 6.979953334807e-01 Linear solve converged due to CONVERGED_RTOL iterations 54 Accuracy of the soltuion on the solution from LU: | u -U_lu |_2 : 3321.15 Iteration number is : 54 Accuracy of the soltuion: | b - A*u |_2 : 0.004681 real 0m15.393s user 0m14.895s sys 0m0.251s Regards, Kai > ------------------------------ > > Message: 2 > Date: Thu, 10 Nov 2011 14:49:55 -0600 > From: Jed Brown > Subject: Re: [petsc-users] Any suggestion for this kinds of matrix? > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > On Thu, Nov 10, 2011 at 14:45, Bao Kai wrote: > > > PCFIELDSPLIT seems a little more complex, I will try that. > > > > I tried some different preconditioners, only lu can get right results. > > > > With some pc, some wrong results can be obtained, such as the following > > one. > > > > tutorials]$ time ./ex78 -Ain A_in -rhs rhs -solu solu -noshift -pc_type > > hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol 1e-5 > > -ksp_typ gmres > > > > Always run with -ksp_monitor_true_residual -ksp_converged_reason when > checking whether a preconditioner is working. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111110/3d5e1de6/attachment-0001.htm > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Fri Nov 11 05:52:09 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Fri, 11 Nov 2011 13:52:09 +0200 Subject: [petsc-users] how to speed up convergence In-Reply-To: References: Message-ID: <4EBD0C69.3070709@lycos.com> On 11/10/2011 08:00 PM, petsc-users-request at mcs.anl.gov wrote: > how to speed up convergence Dear Jed, 1. I'm consistent now with the use of PetscsScalar in my code. I was not!!!!! 2. mat_fd_type ds works better now. 
Running with: mpiexec -n 8 ./hoac cylinder -llf_flux -n_out 2 -end_time 0.4 -implicit -pc_type asm -sub_pc_type ilu -sub_pc_factor_reuse_ordering -sub_pc_factor_reuse_fill -gl -ksp_type fgmres -sub_pc_factor_levels 0 -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view -ksp_pc_side right -sub_pc_factor_nonzeros_along_diagonal -dt 1.0e-2 -ts_type arkimex -ksp_gmres_restart 50 -snes_max_fail 100 -snes_max_linear_solve_fail 100 -ksp_max_it 100 -mat_fd_type ds I get: Timestep 0: dt = 0.01, T = 0, Res[rho] = 1.44982, Res[rhou] = 6.94003, Res[rhov] = 0.524307, Res[E] = 3.70306, CFL = 10.1859 0 SNES Function norm 3.847597576099e+03 Linear solve did not converge due to DIVERGED_ITS iterations 100 1 SNES Function norm 2.993575158449e+03 Linear solve did not converge due to DIVERGED_ITS iterations 100 2 SNES Function norm 2.992570405848e+03 Linear solve did not converge due to DIVERGED_ITS iterations 100 3 SNES Function norm 2.991073138769e+03 But, running with : mpiexec -n 8 ./hoac cylinder -llf_flux -n_out 2 -end_time 0.4 -implicit -pc_type asm -sub_pc_type ilu -sub_pc_factor_reuse_ordering -sub_pc_factor_reuse_fill -gl -ksp_type fgmres -sub_pc_factor_levels 0 -snes_monitor -snes_converged_reason -ksp_converged_reason -ts_view -ksp_pc_side right -sub_pc_factor_nonzeros_along_diagonal -dt 1.0e-2 -ts_type arkimex -ksp_gmres_restart 50 -snes_max_fail 100 -snes_max_linear_solve_fail 100 -ksp_max_it 100 I get: Timestep 0: dt = 0.01, T = 0, Res[rho] = 1.44982, Res[rhou] = 6.94003, Res[rhov] = 0.524307, Res[E] = 3.70306, CFL = 10.1859 0 SNES Function norm 3.847597576099e+03 Linear solve did not converge due to DIVERGED_ITS iterations 100 1 SNES Function norm 3.633205436192e+03 Linear solve did not converge due to DIVERGED_ITS iterations 100 2 SNES Function norm 3.623266783173e+03 Linear solve did not converge due to DIVERGED_ITS iterations 100 3 SNES Function norm 3.621467573805e+03 However as you may see the convergence is very very slow? Any suggestions? Kostas -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus.sons at gmail.com Fri Nov 11 03:35:14 2011 From: markus.sons at gmail.com (markus.sons at gmail.com) Date: Fri, 11 Nov 2011 10:35:14 +0100 Subject: [petsc-users] Small vectors and matrices Message-ID: Hello petsc-users, I'm currently trying the basic functionality of PETSc and reading some of the tutorial codes. Now I'd like to ask you, whether you think that PETSc is also suitable for small, local vector computations. Something like calculating point-to-point distance in 3D etc? I guess it would be hard to read as there are probably a lot of steps (e.g. creating an index array, AssemblyBegin and AssemblyEnd) and due to the lack of overloaded operators. Of course I could now write a C++ class which will serve as a nice wrapper for PETSc, but it seems to me that the PETSc objects have a lot of overhead and are really developed for solving and assembling huge equation systems. So, what do you recommend? Using PETSc for the large-scale computations and some simple Vec and Mat class for small, local stuff? Thanks in advance! Markus -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 11 07:44:46 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 07:44:46 -0600 Subject: [petsc-users] petsc-users Digest, Vol 35, Issue 32 In-Reply-To: References: Message-ID: On Fri, Nov 11, 2011 at 05:31, Bao Kai wrote: > The following is the result with the options you told me. 
The iteration
> has converged, while converged at a wrong solution, compared to the result
> from LU.
>
> tutorials]$ time ./ex78 -Ain A_phi -rhs rhs_phi -solu solu_phi -noshift
> -pc_type hypre -pc_hypre_type parasails -ksp_gmres_restart 600 -ksp_rtol
> 1e-7 -ksp_typ gmres -ksp_monitor_true_residual -ksp_converged_reason
>
>
> Read matrix in ascii format ...
> m: 288399, n: 288399, nz: 4023176
> read A completed
> rowNumber[0] = 13
> rowNumber[1] = 13
> rowNumber[2] = 19
> read A is complete !
>
> Read rhs in ascii format ...
>
> Read exact solution in ascii format ...
> 0 KSP preconditioned resid norm 1.311815748108e+00 true resid norm
> 3.838432566849e-03 ||r(i)||/||b|| 1.000000000000e+00
> 1 KSP preconditioned resid norm 5.507600629359e-01 true resid norm
> 1.878066463331e-03 ||r(i)||/||b|| 4.892795250727e-01
> [...]
> 54 KSP preconditioned resid norm 1.106994152625e-07 true resid norm
> 2.679208019541e-03 ||r(i)||/||b|| 6.979953334807e-01
> Linear solve converged due to CONVERGED_RTOL iterations 54
>

Look, the true residual increased. This usually means that the
preconditioner is singular or nearly so. This is not a surprise; a sparse
approximate inverse rarely works in my experience.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jedbrown at mcs.anl.gov  Fri Nov 11 07:58:51 2011
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Fri, 11 Nov 2011 07:58:51 -0600
Subject: Re: [petsc-users] how to speed up convergence
In-Reply-To: <4EBD0C69.3070709@lycos.com>
References: <4EBD0C69.3070709@lycos.com>
Message-ID:

On Fri, Nov 11, 2011 at 05:52, Konstantinos Kontzialis <
ckontzialis at lycos.com> wrote:

> However as you may see the convergence is very very slow? Any suggestions?

The linear solve is still diverging, so the SNES residual is meaningless.
You need to monitor the true residual and find out why the linear solve is
failing.

As I told you many times, you need to check the scaling of the equations.
It appears that you are seeing noise from rounding error in finite
differencing. This usually means that you need to rescale your equations
(choose different units).

I also asked several times about ordering of unknowns, but you never
replied to that part either.

Please go through the items in this FAQ:
http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#kspdiverged
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jedbrown at mcs.anl.gov  Fri Nov 11 08:05:27 2011
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Fri, 11 Nov 2011 08:05:27 -0600
Subject: Re: [petsc-users] Small vectors and matrices
In-Reply-To:
References:
Message-ID:

On Fri, Nov 11, 2011 at 03:35, markus.sons at gmail.com wrote:

> So, what do you recommend? Using PETSc for the large-scale computations
> and some simple Vec and Mat class for small, local stuff?

Don't use PETSc Mat/Vec for very small local problems like the 3x3 or 4x4
matrices. If you are already addicted to templates and overloading, you
might check out a library like Eigen which is competitive with BLAS/Lapack
for some problems on some architectures.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From juhaj at iki.fi  Fri Nov 11 08:07:13 2011
From: juhaj at iki.fi (Juha Jäykkä)
Date: Fri, 11 Nov 2011 14:07:13 +0000
Subject: [petsc-users] RK substeps
Message-ID: <201111111407.13173.juhaj@iki.fi>

Hi list!

I was wondering if it is possible to access the internal TS/RK substep dt
somehow?
Apart from gleaning them from the source or Dormand&Prince, that is. In case I change the table, for example... Cheers, Juha From markus.sons at gmail.com Fri Nov 11 08:11:21 2011 From: markus.sons at gmail.com (markus.sons at gmail.com) Date: Fri, 11 Nov 2011 15:11:21 +0100 Subject: [petsc-users] Small vectors and matrices In-Reply-To: References: Message-ID: Thanks for the quick answer. We already have a wrapper class for BLAS/Lapack and it's currently being used for small problems as well as solving the - possibly huge - equation system. We want to use PETSc to solve this problem in parallel and would have hoped to be able to simultaneously drop the wrapper. I guess a mixed approach would be perfect then? On Fri, Nov 11, 2011 at 3:05 PM, Jed Brown wrote: > On Fri, Nov 11, 2011 at 03:35, markus.sons at gmail.com < > markus.sons at gmail.com> wrote: > >> So, what do you recommend? Using PETSc for the large-scale computations >> and some simple Vec and Mat class for small, local stuff? > > > Don't use PETSc Mat/Vec for very small local problems like the 3x3 or 4x4 > matrices. If you are already addicted to templates and overloading, you > might check out a library like Eigen which is competitive with BLAS/Lapack > for some problems on some architectures. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 11 08:11:49 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 08:11:49 -0600 Subject: [petsc-users] RK substeps In-Reply-To: <201111111407.13173.juhaj@iki.fi> References: <201111111407.13173.juhaj@iki.fi> Message-ID: On Fri, Nov 11, 2011 at 08:07, Juha J?ykk? wrote: > I was wondering if it is possible to access the internal TS/RK substep dt > somehow? Apart from gleaning them from the source or Dormand&Prince, that > is. > In case I change the table, for example... > There is not an API. The TSRK implementation will be updated to accept a user-provided table of coefficients like TSARKIMEX and TSROSW, but you shouldn't need to access the coefficients except to view them, right? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Nov 11 08:14:47 2011 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 11 Nov 2011 14:14:47 +0000 Subject: [petsc-users] Small vectors and matrices In-Reply-To: References: Message-ID: On Fri, Nov 11, 2011 at 2:11 PM, markus.sons at gmail.com < markus.sons at gmail.com> wrote: > Thanks for the quick answer. We already have a wrapper class for > BLAS/Lapack and it's currently being used for small problems as well as > solving the - possibly huge - equation system. > We want to use PETSc to solve this problem in parallel and would have > hoped to be able to simultaneously drop the wrapper. I guess a mixed > approach would be perfect then? > You can certainly use Vec for small vectors. The overhead is minimal. However, it will not optimize these operations (neither does BLAS) in the same way as Eigen. BLAS is optimized for vectors of length 10000+. Matt > On Fri, Nov 11, 2011 at 3:05 PM, Jed Brown wrote: > >> On Fri, Nov 11, 2011 at 03:35, markus.sons at gmail.com < >> markus.sons at gmail.com> wrote: >> >>> So, what do you recommend? Using PETSc for the large-scale computations >>> and some simple Vec and Mat class for small, local stuff? >> >> >> Don't use PETSc Mat/Vec for very small local problems like the 3x3 or 4x4 >> matrices. 
If you are already addicted to templates and overloading, you >> might check out a library like Eigen which is competitive with BLAS/Lapack >> for some problems on some architectures. >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 11 08:21:27 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 08:21:27 -0600 Subject: [petsc-users] Small vectors and matrices In-Reply-To: References: Message-ID: On Fri, Nov 11, 2011 at 08:11, markus.sons at gmail.com wrote: > We want to use PETSc to solve this problem in parallel and would have > hoped to be able to simultaneously drop the wrapper. I guess a mixed > approach would be perfect then? If it's performance-sensitive and smaller than dimension 10 or 20, you want to avoid BLAS (and PETSc Mat/Vec). You also don't want to parallelize super small problems; solve them redundantly if necessary. For larger problems, use whatever abstraction you like. MatDense has minimal overhead forwarding into BLAS/Lapack, but not every function is wrapped. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Fri Nov 11 08:26:34 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Fri, 11 Nov 2011 15:26:34 +0100 Subject: [petsc-users] Automatic partition of Cartesian grids In-Reply-To: References: Message-ID: <4EBD309A.8030706@gmail.com> Hi, I am currently using PETSc for my Fortran CFD code and I am manually partitioning my Cartesian grids. So in 2D, it will be something like u(1:size_x,jstart:jend), where the y component is partitioned into 2,4,8 etc parts depending on the no. of processors. However, it seems that there are better ways to do it to get a more balanced load. Do I use DMDA in PETSc to do it? I'm now using staggered Cartesian grids for my u,v,p. Is there an example to construct a Laplace/Poisson equation using DMDA? It is mentioned in the manual that DMMG infrastructure will be replaced in the next release and we should not use it. Is this related to DMDA? Yours sincerely, TAY wee-beng From juhaj at iki.fi Fri Nov 11 08:29:10 2011 From: juhaj at iki.fi (Juha =?utf-8?q?J=C3=A4ykk=C3=A4?=) Date: Fri, 11 Nov 2011 14:29:10 +0000 Subject: [petsc-users] RK substeps In-Reply-To: References: <201111111407.13173.juhaj@iki.fi> Message-ID: <201111111429.10528.juhaj@iki.fi> > There is not an API. The TSRK implementation will be updated to accept a > user-provided table of coefficients like TSARKIMEX and TSROSW, but you > shouldn't need to access the coefficients except to view them, right? Right. I just may need them for my boundary conditions: they depend on time derivatives, i.e. change in value of something between time steps. It may be necessary to compute those values separately for each substep, but I am not sure yet. I am just making sure I have easy access to them - from your answer it looks like I need to hard code them. 
-Juha From knepley at gmail.com Fri Nov 11 08:29:49 2011 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 11 Nov 2011 14:29:49 +0000 Subject: [petsc-users] Automatic partition of Cartesian grids In-Reply-To: <4EBD309A.8030706@gmail.com> References: <4EBD309A.8030706@gmail.com> Message-ID: On Fri, Nov 11, 2011 at 2:26 PM, TAY wee-beng wrote: > Hi, > > I am currently using PETSc for my Fortran CFD code and I am manually > partitioning my Cartesian grids. So in 2D, it will be something like > u(1:size_x,jstart:jend), where the y component is partitioned into 2,4,8 > etc parts depending on the no. of processors. > > However, it seems that there are better ways to do it to get a more > balanced load. Do I use DMDA in PETSc to do it? I'm now using staggered > Cartesian grids for my u,v,p. Is there an example to construct a > Laplace/Poisson equation using DMDA? > Yes use DMDA. Look at SNES ex5 and ex50. > It is mentioned in the manual that DMMG infrastructure will be replaced in > the next release and we should not use it. Is this related to DMDA? > Only if you use multigrid. Matt > Yours sincerely, > > TAY wee-beng > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 11 08:30:48 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 08:30:48 -0600 Subject: [petsc-users] Automatic partition of Cartesian grids In-Reply-To: <4EBD309A.8030706@gmail.com> References: <4EBD309A.8030706@gmail.com> Message-ID: On Fri, Nov 11, 2011 at 08:26, TAY wee-beng wrote: > However, it seems that there are better ways to do it to get a more > balanced load. Do I use DMDA in PETSc to do it? I'm now using staggered > Cartesian grids for my u,v,p. Is there an example to construct a > Laplace/Poisson equation using DMDA? > src/snes/examples/tutorials/ex5.c (Bratu problem, simple nonlinearity added to Poisson) See also src/snes/examples/tutorials/ex50.c (thermal/lid-driven cavity) > > It is mentioned in the manual that DMMG infrastructure will be replaced in > the next release and we should not use it. Is this related to DMDA? > DMMG used DM (the algebraic interface implemented by DMDA). DMDA is recommended, but don't write new code with DMMG. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 11 08:33:44 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 08:33:44 -0600 Subject: [petsc-users] RK substeps In-Reply-To: <201111111429.10528.juhaj@iki.fi> References: <201111111407.13173.juhaj@iki.fi> <201111111429.10528.juhaj@iki.fi> Message-ID: On Fri, Nov 11, 2011 at 08:29, Juha J?ykk? wrote: > Right. I just may need them for my boundary conditions: they depend on time > derivatives, i.e. change in value of something between time steps. It may > be > necessary to compute those values separately for each substep, but I am not > sure yet. I am just making sure I have easy access to them - from your > answer > it looks like I need to hard code them. > We pass in the time of the stage when your RHSFunction is called. If you haven't eliminated boundary conditions, your system is likely differential-algebraic, in which case you can use the IFunction interface with an implicit or IMEX method (e.g. TSARKIMEX, TSROSW). 
This has been improved in petsc-dev, so consider using it if you want the latest adaptivity and IMEX features. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhaj at iki.fi Fri Nov 11 08:58:57 2011 From: juhaj at iki.fi (Juha =?utf-8?q?J=C3=A4ykk=C3=A4?=) Date: Fri, 11 Nov 2011 14:58:57 +0000 Subject: [petsc-users] RK substeps In-Reply-To: References: <201111111407.13173.juhaj@iki.fi> <201111111429.10528.juhaj@iki.fi> Message-ID: <201111111458.57476.juhaj@iki.fi> > We pass in the time of the stage when your RHSFunction is called. If you Right, how can I have missed that?!? Thanks. > haven't eliminated boundary conditions, your system is likely > differential-algebraic, in which case you can use the IFunction interface Eliminated? I am not sure what you mean. My boundaries are "periodic", but thanks to the nature of my problem, they are periodic only up to a gauge transformation. I need to perform that gauge transform at each time step. I would be more than happy not to have to do that, but I cannot really see a way out of it, so, if you have any ideas, I am very interested in hearing them. > improved in petsc-dev, so consider using it if you want the latest Thanks for the tip, but I am stuck at 3.1 because I need TAO, too and it still does not support 3.2. Cheers, Juha From knepley at gmail.com Fri Nov 11 09:01:26 2011 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 11 Nov 2011 15:01:26 +0000 Subject: [petsc-users] RK substeps In-Reply-To: <201111111458.57476.juhaj@iki.fi> References: <201111111407.13173.juhaj@iki.fi> <201111111429.10528.juhaj@iki.fi> <201111111458.57476.juhaj@iki.fi> Message-ID: On Fri, Nov 11, 2011 at 2:58 PM, Juha J?ykk? wrote: > > We pass in the time of the stage when your RHSFunction is called. If you > > Right, how can I have missed that?!? Thanks. > > > haven't eliminated boundary conditions, your system is likely > > differential-algebraic, in which case you can use the IFunction interface > > Eliminated? I am not sure what you mean. My boundaries are "periodic", but > thanks to the nature of my problem, they are periodic only up to a gauge > transformation. I need to perform that gauge transform at each time step. > > I would be more than happy not to have to do that, but I cannot really see > a > way out of it, so, if you have any ideas, I am very interested in hearing > them. > > > improved in petsc-dev, so consider using it if you want the latest > > Thanks for the tip, but I am stuck at 3.1 because I need TAO, too and it > still > does not support 3.2. > What exactly are you using TAO to do? Thanks, Matt > Cheers, > Juha > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Fri Nov 11 09:25:17 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 11 Nov 2011 16:25:17 +0100 Subject: [petsc-users] Question about win32fe Message-ID: Which compiler options does win32fe use? In particular, /MD(d) or /MT(d)? How to see or eventually influence the choice? 
Many thanks, Dominik From juhaj at iki.fi Fri Nov 11 09:52:00 2011 From: juhaj at iki.fi (Juha =?utf-8?q?J=C3=A4ykk=C3=A4?=) Date: Fri, 11 Nov 2011 15:52:00 +0000 Subject: [petsc-users] RK substeps In-Reply-To: References: <201111111407.13173.juhaj@iki.fi> <201111111458.57476.juhaj@iki.fi> Message-ID: <201111111552.00920.juhaj@iki.fi> > What exactly are you using TAO to do? I have a large scale minimisation problem related to the problem I am using PETSc to solve. The PETSc code is independent, so I could have two PETSc's around and use the older one for TAO, but that seems like a lot of hassle without significant benefit. I know the SNES module could be used as a substitute for TAO. At the moment, I solve min(\int f(x)), where TAO needs the gradients of f(x); I could also solve discrete grad(f(x))=0 with SNES, but as I then need compute the Jacobian (=Hessian of f(x)), too, this has seemed to be too memory intensive to be useful. I am not aware of a way around this. -Juha From knepley at gmail.com Fri Nov 11 09:55:32 2011 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 11 Nov 2011 15:55:32 +0000 Subject: [petsc-users] RK substeps In-Reply-To: <201111111552.00920.juhaj@iki.fi> References: <201111111407.13173.juhaj@iki.fi> <201111111458.57476.juhaj@iki.fi> <201111111552.00920.juhaj@iki.fi> Message-ID: On Fri, Nov 11, 2011 at 3:52 PM, Juha J?ykk? wrote: > > What exactly are you using TAO to do? > > I have a large scale minimisation problem related to the problem I am using > PETSc to solve. The PETSc code is independent, so I could have two PETSc's > around and use the older one for TAO, but that seems like a lot of hassle > without significant benefit. > > I know the SNES module could be used as a substitute for TAO. At the > moment, I > solve min(\int f(x)), where TAO needs the gradients of f(x); I could also > solve discrete grad(f(x))=0 with SNES, but as I then need compute the > Jacobian > (=Hessian of f(x)), too, this has seemed to be too memory intensive to be > useful. I am not aware of a way around this. There are plenty of ways to use SNES without a Mat. You can use -snes_mf, which uses a FD approximation to the action of the matrix. You can use -snes_type qn which uses a quasi-Newton approximation, or even -snes_type nrichardson, which just uses successive substitutions with your residual function. Thanks, Matt > > -Juha -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 11 10:16:04 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 10:16:04 -0600 Subject: [petsc-users] RK substeps In-Reply-To: <201111111552.00920.juhaj@iki.fi> References: <201111111407.13173.juhaj@iki.fi> <201111111458.57476.juhaj@iki.fi> <201111111552.00920.juhaj@iki.fi> Message-ID: On Fri, Nov 11, 2011 at 09:52, Juha J?ykk? wrote: > I have a large scale minimisation problem related to the problem I am using > PETSc to solve. The PETSc code is independent, so I could have two PETSc's > around and use the older one for TAO, but that seems like a lot of hassle > without significant benefit. 
> You can use a pre-release version: http://www.mcs.anl.gov/research/projects/tao/download/tao-2.0-beta7.tar.gz As Matt says, some of the new PETSc nonlinear solvers can be used for optimization, but if you have already written code for TAO, I would try this version. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Nov 11 10:16:59 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 11 Nov 2011 10:16:59 -0600 (CST) Subject: [petsc-users] Question about win32fe In-Reply-To: References: Message-ID: On Fri, 11 Nov 2011, Dominik Szczerba wrote: > Which compiler options does win32fe use? > In particular, /MD(d) or /MT(d)? > How to see or eventually influence the choice? Well its more of default CFLAGS to configure. We default to using /MT to match MPICH default. This is because MS compiler enforces this on us. If we mix object files compiled with multiple variants of these options [i.e some code gets compiled with /MD - and some with /MT] - then the linker gives errors. If you need to change the defualts - you can tell configure to use the appropriate variant via CFLAGS option [and similarly FFLAGS] - and that should be picked up. For cl defaults: CFLAGS = -MT -wd4996 [debug] COPTFLAGS = -Z7 [optimized] COPTFLAGS = -O2 -QxW [similarly for CXXFLAGS/CXXOPTFLAGS and FFLAGS/FOPTFLAGS etc for icl,ifort, cvf90 etc.] Satish From xiaohl at ices.utexas.edu Fri Nov 11 12:06:59 2011 From: xiaohl at ices.utexas.edu (xiaohl) Date: Fri, 11 Nov 2011 12:06:59 -0600 Subject: [petsc-users] questions In-Reply-To: <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> Message-ID: <162078b24cf158b70749499b53c9ff08@ices.utexas.edu> Hi I am going to use user defined context for this function call DMDASetLocalFunction(da,(DMDALocalFunction1) FormFunctionLocal); But How can I pass the "ctx" to the function FormFunctionLocal; PetscErrorCode FormFunctionLocal(DMDALocalInfo *info, PetscScalar ***u, PetscScalar ***f, void * ctx){ } I look at the example /petsc-3.2-p2/src/snes/examples/tutorials/ex19.c.html I think you intialize the "user" context by DMMGCreate(comm,nlevels,&user,&dmmg); for DMMG Do you have the similar routine for DMDA? Hailong On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl >> wrote: >> Hi >> >> I am going to implement cell center difference method for >> u = - K grad p >> div u = f >> where p is the pressure , u is the velocity, f is the source term. >> >> my goal is to assemble the matrix and test the performance of >> different linear solvers in parallel. >> >> my question is how can I read the input file for K where K is n*n >> tensor. >> >> MatLoad() > > Hm, I think you should use a DMDA with n*n size dof and then use > VecLoad() to load the entries of K. > > Barry > >> >> second one is that do you have any similar examples? >> >> Nothing with the mixed-discretization of the Laplacian. >> >> Matt >> >> Hailong >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. 
>> -- Norbert Wiener From dominik at itis.ethz.ch Fri Nov 11 12:07:16 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 11 Nov 2011 19:07:16 +0100 Subject: [petsc-users] configure error windows debug mode Message-ID: I have managed to configure and build petsc natively on Windows with MSVC 2010 in release mode. With debug=1 I get this error. Any ideas what migh have gone wrong? Regards, Dominik =============================================================================== CMake process failed with status 256. Proceeding.. =============================================================================== From knepley at gmail.com Fri Nov 11 12:13:15 2011 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 11 Nov 2011 18:13:15 +0000 Subject: [petsc-users] questions In-Reply-To: <162078b24cf158b70749499b53c9ff08@ices.utexas.edu> References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> <162078b24cf158b70749499b53c9ff08@ices.utexas.edu> Message-ID: On Fri, Nov 11, 2011 at 6:06 PM, xiaohl wrote: > > Hi > > I am going to use user defined context for this function call > DMDASetLocalFunction(da,(**DMDALocalFunction1) FormFunctionLocal); > > But How can I pass the "ctx" to the function FormFunctionLocal; > > PetscErrorCode FormFunctionLocal(**DMDALocalInfo *info, PetscScalar ***u, > PetscScalar ***f, void * ctx){ > } > > I look at the example > /petsc-3.2-p2/src/snes/**examples/tutorials/ex19.c.html > > I think you intialize the "user" context by > DMMGCreate(comm,nlevels,&user,**&dmmg); > for DMMG > > Do you have the similar routine for DMDA? > If you use SNES, then you can pass it as the last argument to http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/SNES/SNESSetFunction.html If you use SNESetDM() which sets this automatically, then it will use the one given to http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/DM/DMSetApplicationContext.html Barry: Where should this be documented? Matt Hailong > > On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > >> On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: >> >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl wrote: >>> Hi >>> >>> I am going to implement cell center difference method for >>> u = - K grad p >>> div u = f >>> where p is the pressure , u is the velocity, f is the source term. >>> >>> my goal is to assemble the matrix and test the performance of different >>> linear solvers in parallel. >>> >>> my question is how can I read the input file for K where K is n*n tensor. >>> >>> MatLoad() >>> >> >> Hm, I think you should use a DMDA with n*n size dof and then use >> VecLoad() to load the entries of K. >> >> Barry >> >> >>> second one is that do you have any similar examples? >>> >>> Nothing with the mixed-discretization of the Laplacian. >>> >>> Matt >>> >>> Hailong >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
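For readers landing on this thread later: a minimal sketch of the second route Matt describes (attaching a user struct to the DM so that SNES forwards it to the local function) might look like the fragment below. The AppCtx type, its K field, and the body of FormFunctionLocal are placeholders invented for illustration; the calls themselves (DMDASetLocalFunction, DMSetApplicationContext, SNESSetDM) are the ones named above, and the wiring follows what Matt describes.

    #include <petscsnes.h>
    #include <petscdmda.h>

    typedef struct {
      PetscReal K;    /* hypothetical coefficient carried by the application */
    } AppCtx;

    PetscErrorCode FormFunctionLocal(DMDALocalInfo *info, PetscScalar ***u,
                                     PetscScalar ***f, void *ptr)
    {
      AppCtx *user = (AppCtx*)ptr;   /* context forwarded per Matt's description */
      PetscFunctionBegin;
      /* fill f[k][j][i] from u[k][j][i] and user->K over the local patch */
      PetscFunctionReturn(0);
    }

    /* in the setup code, with snes and da already created: */
    AppCtx user;
    user.K = 1.0;
    ierr = DMDASetLocalFunction(da, (DMDALocalFunction1)FormFunctionLocal);CHKERRQ(ierr);
    ierr = DMSetApplicationContext(da, &user);CHKERRQ(ierr);
    ierr = SNESSetDM(snes, da);CHKERRQ(ierr);

With this arrangement the pointer given to DMSetApplicationContext() is what arrives as the void* argument of the local function, so no separate SNESSetFunction() context is needed.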
URL: From jedbrown at mcs.anl.gov Fri Nov 11 13:32:39 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 13:32:39 -0600 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Fri, Nov 11, 2011 at 12:07, Dominik Szczerba wrote: > I have managed to configure and build petsc natively on Windows with > MSVC 2010 in release mode. With debug=1 I get this error. Any ideas > what migh have gone wrong? > As usual, send configure.log to petsc-maint. (There is probably a path that was not discovered.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Nov 11 13:42:08 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 11 Nov 2011 13:42:08 -0600 (CST) Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Fri, 11 Nov 2011, Dominik Szczerba wrote: > I have managed to configure and build petsc natively on Windows with > MSVC 2010 in release mode. With debug=1 I get this error. Any ideas > what migh have gone wrong? > > Regards, > Dominik > > =============================================================================== > CMake process failed with status 256. Proceeding.. > =============================================================================== > Configure didn't abort here. It continued and printed a nice 'completed' summary [which you neglected to copy/paste] And you must have seen this message for the optimized build aswell. And as the summary indicates - you can do 'make PETSC_DIR=.. PETSC_ARCH=..' to build the libraries. Jed, Perhaps we should print this message to configure.log - and not to screen - and have a cmake section in summary? cmake:enabled [or disabled] Satish From jedbrown at mcs.anl.gov Fri Nov 11 14:23:27 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 11 Nov 2011 14:23:27 -0600 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Fri, Nov 11, 2011 at 13:42, Satish Balay wrote: > Jed, Perhaps we should print this message to configure.log - and not > to screen - and have a cmake section in summary? > > cmake:enabled [or disabled] > That would be okay. I put a lot of diagnostics into configure.log now. The problem comes from implicit paths not being put into the appropriate places. For example, /usr/lib/openmpi is placed in flibs, but not in cxxlibs. So if I do --with-fortran=0, then libmpi_cxx.so is not found. I don't know how the MPI path shows up in flibs because MPI.py does not seem to do it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mohamed.Adel at bibalex.org Fri Nov 11 15:09:26 2011 From: Mohamed.Adel at bibalex.org (Mohamed Adel) Date: Fri, 11 Nov 2011 21:09:26 +0000 Subject: [petsc-users] petsc 3.2-p5 test error Message-ID: Dear all, I'm trying to compile petsc version 3.2-p5 with IntelMPI-3.2. The configuration and compilation goes fine, while the test crashes with the following error. 
------------------------------------------------------------------------------------------------- $ make PETSC_DIR=/opt/petsc-3.2-p5/intel test Running test examples to verify correct installation Using PETSC_DIR=/opt/petsc-3.2-p5/intel and PETSC_ARCH=arch-linux2-c-debug C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes See http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html lid velocity = 0.0016, prandtl # = 1, grashof # = 1 [0]PETSC ERROR: Petsc_DelComm() line 430 in src/sys/objects/pinit.c [1]PETSC ERROR: Petsc_DelComm() line 430 in src/sys/objects/pinit.c [1]PETSC ERROR: PetscSubcommCreate_interlaced() line 288 in src/sys/objects/subcomm.c [1]PETSC ERROR: PetscSubcommSetType() line 71 in src/sys/objects/subcomm.c [1]PETSC ERROR: PCSetUp_Redundant() line 77 in src/ksp/pc/impls/redundant/redundant.c [1]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c [1]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: PCSetUp_MG() line 678 in src/ksp/pc/impls/mg/mg.c [1]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c [1]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: KSPSolve() line 379 in src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: SNES_KSPSolve() line 3396 in src/snes/interface/snes.c [1]PETSC ERROR: SNESSolve_LS() line 190 in src/snes/impls/ls/ls.c [1]PETSC ERROR: SNESSolve() line 2676 in src/snes/interface/snes.c [1]PETSC ERROR: DMMGSolveSNES() line 540 in src/snes/utils/damgsnes.c [1]PETSC ERROR: DMMGSolve() line 331 in src/snes/utils/damg.c [1]PETSC ERROR: main() line 160 in src/snes/examples/tutorials/ex19.c [cli_1]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 805896965) - process 1 [0]PETSC ERROR: PetscSubcommCreate_interlaced() line 288 in src/sys/objects/subcomm.c [0]PETSC ERROR: PetscSubcommSetType() line 71 in src/sys/objects/subcomm.c [0]PETSC ERROR: PCSetUp_Redundant() line 77 in src/ksp/pc/impls/redundant/redundant.c [0]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCSetUp_MG() line 678 in src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: KSPSolve() line 379 in src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: SNES_KSPSolve() line 3396 in src/snes/interface/snes.c [0]PETSC ERROR: SNESSolve_LS() line 190 in src/snes/impls/ls/ls.c [0]PETSC ERROR: SNESSolve() line 2676 in src/snes/interface/snes.c [0]PETSC ERROR: DMMGSolveSNES() line 540 in src/snes/utils/damgsnes.c [0]PETSC ERROR: DMMGSolve() line 331 in src/snes/utils/damg.c [0]PETSC ERROR: main() line 160 in src/snes/examples/tutorials/ex19.c [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 590597) - process 0 rank 1 in job 50 login02.local_33989 caused collective abort of all ranks exit status of rank 1: return code 5 Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process Completed test example ------------------------------------------------------------------------------------------------- Any idea about what might be wrong with the test? I configured petsc with the following configurations. 
./configure --prefix=/opt/petsc-3.2-p5/intel --with-shared-libraries=1 --with-blas-lapack-dir=/opt/intel/Compiler/11.0/074/mkl/lib/em64t thanks in advance, --ma From bsmith at mcs.anl.gov Fri Nov 11 15:46:26 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 11 Nov 2011 15:46:26 -0600 Subject: [petsc-users] petsc 3.2-p5 test error In-Reply-To: References: Message-ID: The code where it is crashing is very simple and works on all the other MPI implementations we've run on. It is ierr = MPI_Comm_split(comm,0,duprank,&dupcomm);CHKERRQ(ierr); ierr = PetscCommDuplicate(dupcomm,&psubcomm->dupparent,PETSC_NULL);CHKERRQ(ierr); ierr = PetscCommDuplicate(subcomm,&psubcomm->comm,PETSC_NULL);CHKERRQ(ierr); ierr = MPI_Comm_free(&dupcomm);CHKERRQ(ierr); ierr = MPI_Comm_free(&subcomm);CHKERRQ(ierr); So my first guess is that this is a bug in the IntelMPI. What kind of system are you running on? I would recommend trying ./configure with --download-mpich and see if that runs correctly in parallel. If so that hints at an IntelMPI problem. Is the IntelMPI version you are using the most recent with all patches applied? Please send future emails on this issue to petsc-maint at mcs.anl.gov Barry On Nov 11, 2011, at 3:09 PM, Mohamed Adel wrote: > Dear all, > > I'm trying to compile petsc version 3.2-p5 with IntelMPI-3.2. > The configuration and compilation goes fine, while the test crashes with the following error. > ------------------------------------------------------------------------------------------------- > $ make PETSC_DIR=/opt/petsc-3.2-p5/intel test > Running test examples to verify correct installation > Using PETSC_DIR=/opt/petsc-3.2-p5/intel and PETSC_ARCH=arch-linux2-c-debug > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes > See http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html > lid velocity = 0.0016, prandtl # = 1, grashof # = 1 > [0]PETSC ERROR: Petsc_DelComm() line 430 in src/sys/objects/pinit.c > [1]PETSC ERROR: Petsc_DelComm() line 430 in src/sys/objects/pinit.c > [1]PETSC ERROR: PetscSubcommCreate_interlaced() line 288 in src/sys/objects/subcomm.c > [1]PETSC ERROR: PetscSubcommSetType() line 71 in src/sys/objects/subcomm.c > [1]PETSC ERROR: PCSetUp_Redundant() line 77 in src/ksp/pc/impls/redundant/redundant.c > [1]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: PCSetUp_MG() line 678 in src/ksp/pc/impls/mg/mg.c > [1]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: KSPSolve() line 379 in src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: SNES_KSPSolve() line 3396 in src/snes/interface/snes.c > [1]PETSC ERROR: SNESSolve_LS() line 190 in src/snes/impls/ls/ls.c > [1]PETSC ERROR: SNESSolve() line 2676 in src/snes/interface/snes.c > [1]PETSC ERROR: DMMGSolveSNES() line 540 in src/snes/utils/damgsnes.c > [1]PETSC ERROR: DMMGSolve() line 331 in src/snes/utils/damg.c > [1]PETSC ERROR: main() line 160 in src/snes/examples/tutorials/ex19.c > [cli_1]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 805896965) - process 1 > [0]PETSC ERROR: PetscSubcommCreate_interlaced() line 288 in src/sys/objects/subcomm.c > [0]PETSC ERROR: PetscSubcommSetType() line 71 in src/sys/objects/subcomm.c > [0]PETSC ERROR: PCSetUp_Redundant() line 77 in 
src/ksp/pc/impls/redundant/redundant.c > [0]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUp_MG() line 678 in src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: KSPSolve() line 379 in src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: SNES_KSPSolve() line 3396 in src/snes/interface/snes.c > [0]PETSC ERROR: SNESSolve_LS() line 190 in src/snes/impls/ls/ls.c > [0]PETSC ERROR: SNESSolve() line 2676 in src/snes/interface/snes.c > [0]PETSC ERROR: DMMGSolveSNES() line 540 in src/snes/utils/damgsnes.c > [0]PETSC ERROR: DMMGSolve() line 331 in src/snes/utils/damg.c > [0]PETSC ERROR: main() line 160 in src/snes/examples/tutorials/ex19.c > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 590597) - process 0 > rank 1 in job 50 login02.local_33989 caused collective abort of all ranks > exit status of rank 1: return code 5 > Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process > Completed test example > ------------------------------------------------------------------------------------------------- > > Any idea about what might be wrong with the test? > I configured petsc with the following configurations. > ./configure --prefix=/opt/petsc-3.2-p5/intel --with-shared-libraries=1 --with-blas-lapack-dir=/opt/intel/Compiler/11.0/074/mkl/lib/em64t > > > thanks in advance, > --ma From behzad.baghapour at gmail.com Sat Nov 12 02:08:57 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 11:38:57 +0330 Subject: [petsc-users] Solving with SNES Message-ID: Dear developers, I should pass a user-defined context, containing element and face local data obtained from PDE solution, to RHS and Jacobian evaluation of SNES process. Here I need to update my context each Newton Iteration. I may pass my context into a monitor function set by SNESMonitorSet. In the monitor function, I cast into my context and update my field (elements and faces) for next evaluations in RHS and Jacobian routines in Newton process. Please let me know, am I in right way? Is there any better procedure in this matter? Thanks, BB. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 12 08:34:01 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 12 Nov 2011 08:34:01 -0600 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 02:08, behzad baghapour wrote: > I should pass a user-defined context, containing element and face local > data obtained from PDE solution, to RHS and Jacobian evaluation of SNES > process. Here I need to update my context each Newton Iteration. I may pass > my context into a monitor function set by SNESMonitorSet. In the monitor > function, I cast into my context and update my field (elements and faces) > for next evaluations in RHS and Jacobian routines in Newton process. Are you updating external forcing information? The state vector is passed in to the residual and Jacobian routines, you should just use it. -------------- next part -------------- An HTML attachment was scrubbed... 
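As a concrete reminder of what "the state vector is passed in" means: both SNES callbacks receive the current iterate X as an argument on every call, so the current solution never needs to be smuggled in through the user context. The function names and the trivial bodies below are placeholders only.

    #include <petscsnes.h>

    /* X is the current iterate handed over by SNES each time it needs a residual */
    PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ctx)
    {
      PetscErrorCode ierr;
      PetscFunctionBegin;
      /* placeholder: a real residual F(X) would be assembled from X here */
      ierr = VecCopy(X, F);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    /* the Jacobian callback sees the same current iterate X */
    PetscErrorCode FormJacobian(SNES snes, Vec X, Mat *J, Mat *P, MatStructure *flag, void *ctx)
    {
      PetscFunctionBegin;
      /* placeholder: dF/dX evaluated at the incoming X would be assembled into *P */
      *flag = SAME_NONZERO_PATTERN;
      PetscFunctionReturn(0);
    }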
URL: From behzad.baghapour at gmail.com Sat Nov 12 10:20:00 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 19:50:00 +0330 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: The state of my problem is set and saved in elements and faces. So I should pass and update them for residual and Jacobian calculations in each Newton step. It is better for me to use them since the element and face connectivities required in residual and jacobian matrix are saved there. How would be my best choice then??? On Sat, Nov 12, 2011 at 6:04 PM, Jed Brown wrote: > On Sat, Nov 12, 2011 at 02:08, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> I should pass a user-defined context, containing element and face local >> data obtained from PDE solution, to RHS and Jacobian evaluation of SNES >> process. Here I need to update my context each Newton Iteration. I may pass >> my context into a monitor function set by SNESMonitorSet. In the monitor >> function, I cast into my context and update my field (elements and faces) >> for next evaluations in RHS and Jacobian routines in Newton process. > > > Are you updating external forcing information? The state vector is passed > in to the residual and Jacobian routines, you should just use it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Nov 12 10:24:49 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 12 Nov 2011 16:24:49 +0000 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 4:20 PM, behzad baghapour < behzad.baghapour at gmail.com> wrote: > The state of my problem is set and saved in elements and faces. So I > should pass and update them for residual and Jacobian calculations in each > Newton step. It is better for me to use them since the element and face > connectivities required in residual and jacobian matrix are saved there. > How would be my best choice then??? > I cannot understand what information you are talking about, so I can't make any recommendation. If you can clearly state, preferably with equations, what you are storing, we might be able to help. Matt > On Sat, Nov 12, 2011 at 6:04 PM, Jed Brown wrote: > >> On Sat, Nov 12, 2011 at 02:08, behzad baghapour < >> behzad.baghapour at gmail.com> wrote: >> >>> I should pass a user-defined context, containing element and face local >>> data obtained from PDE solution, to RHS and Jacobian evaluation of SNES >>> process. Here I need to update my context each Newton Iteration. I may pass >>> my context into a monitor function set by SNESMonitorSet. In the monitor >>> function, I cast into my context and update my field (elements and faces) >>> for next evaluations in RHS and Jacobian routines in Newton process. >> >> >> Are you updating external forcing information? The state vector is passed >> in to the residual and Jacobian routines, you should just use it. >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From behzad.baghapour at gmail.com Sat Nov 12 11:50:41 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 21:20:41 +0330 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: So, I am solving the Euler equations (inviscid compressible flow) using discontinuous Galerkin method: M (dQ/dt) = R, R = nonlinear residual, M = mass matrix, Implicit procedure: F = M*(Q^(n+1)-Q^n)/DT - R, dF/dQ=J=M/DT-dR/dQ, Newton => Q^(n+1) = Q^n - (dF/dQ)^(-1) F ( with proper preconditioning ) The nonlinear residual and Jacobian matrix are evaluated with a face-based method. Two integrals are involved (the effect of element internal connections and face flux connections). All of my calculations of flow states are solved and stored in arrays of objects called elements.So my residual and Derivative of residual routines are developed based on the element (and face) objects. Then it is better for me to keep them when using petsc as a nonlinear solver. Hope these make any help. Thanks, Behzad -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 12 11:59:36 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 12 Nov 2011 11:59:36 -0600 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 11:50, behzad baghapour wrote: > The nonlinear residual and Jacobian matrix are evaluated with a face-based > method. Two integrals are involved (the effect of element internal > connections and face flux connections). All of my calculations of flow > states are solved and stored in arrays of objects called elements. Are you talking about intermediate quantities like Roe averages on face quadrature points? The state X at which the residual is evaluated is different each time, so you would be using old values if you stashed them. The residual might be evaluated multiple times before a Jacobian is requested. Normally the residual is much less expensive than the Jacobian, so it's not worth stashing the intermediate quantities. See also this FAQ: http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#functionjacobian -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Nov 12 11:57:27 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 12 Nov 2011 11:57:27 -0600 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Nov 12, 2011, at 2:08 AM, behzad baghapour wrote: > Dear developers, > > I should pass a user-defined context, containing element and face local data obtained from PDE solution, to RHS and Jacobian evaluation of SNES process. Here I need to update my context each Newton Iteration. I may pass my context into a monitor function set by SNESMonitorSet. In the monitor function, I cast into my context and update my field (elements and faces) for next evaluations in RHS and Jacobian routines in Newton process. Why don't you simply pass them in the SNESSetFunction() context and compute them inside your compute function, then use them. Also pass the same context into SNESSetJacobian() and you can reuse that information in computing in the Jacobian. I don't see any benefit in computing in the monitor function, that is not really the right place either practically or philosophically. Barry > > Please let me know, am I in right way? > > Is there any better procedure in this matter? > > Thanks, BB. 
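A minimal sketch of what Barry is suggesting, handing the same application context to both SNESSetFunction() and SNESSetJacobian(), could look like the fragment below. The UserCtx type and its elements/faces members are stand-ins for the element and face objects described earlier in the thread.

    typedef struct {
      void *elements;   /* application-owned element data (placeholder) */
      void *faces;      /* application-owned face data (placeholder)    */
    } UserCtx;

    UserCtx user;        /* filled by the application's own setup code  */
    /* snes, the residual vector r and the Jacobian matrix J are assumed
       to have been created in the usual way */
    ierr = SNESSetFunction(snes, r, FormFunction, &user);CHKERRQ(ierr);
    ierr = SNESSetJacobian(snes, J, J, FormJacobian, &user);CHKERRQ(ierr);

Both callbacks then receive &user as their final void* argument, so whatever FormFunction computes and stores there is visible to FormJacobian.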
From knepley at gmail.com Sat Nov 12 12:01:49 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 12 Nov 2011 18:01:49 +0000 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 5:50 PM, behzad baghapour < behzad.baghapour at gmail.com> wrote: > So, I am solving the Euler equations (inviscid compressible flow) using > discontinuous Galerkin method: > > M (dQ/dt) = R, > > R = nonlinear residual, > M = mass matrix, > > Implicit procedure: > F = M*(Q^(n+1)-Q^n)/DT - R, > dF/dQ=J=M/DT-dR/dQ, > Newton => Q^(n+1) = Q^n - (dF/dQ)^(-1) F ( with proper preconditioning ) > > The nonlinear residual and Jacobian matrix are evaluated with a face-based > method. Two integrals are involved (the effect of element internal > connections and face flux connections). All of my calculations of flow > states are solved and stored in arrays of objects called elements.So my > residual and Derivative of residual routines are developed based on the > element (and face) objects. Then it is better for me to keep them when > using petsc as a nonlinear solver. > Pass your mesh data structure in using the ctx argument for your residual and Jacobian evaluation routines. Matt > Hope these make any help. > Thanks, > Behzad > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Sat Nov 12 12:10:34 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 21:40:34 +0330 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: Yes.The Roe average is a common method in this area. I couldn't understand well what you said about stashing the quantities. Each residual evaluation is proceeded by a Jacobian one if I choose an Inexact Newton method and freeze Jacobian matrix for some Newton iterations. Please make it for me more clear. Thanks a lot, On Sat, Nov 12, 2011 at 9:29 PM, Jed Brown wrote: > On Sat, Nov 12, 2011 at 11:50, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> The nonlinear residual and Jacobian matrix are evaluated with a >> face-based method. Two integrals are involved (the effect of element >> internal connections and face flux connections). All of my calculations of >> flow states are solved and stored in arrays of objects called elements. > > > Are you talking about intermediate quantities like Roe averages on face > quadrature points? The state X at which the residual is evaluated is > different each time, so you would be using old values if you stashed them. > The residual might be evaluated multiple times before a Jacobian is > requested. Normally the residual is much less expensive than the Jacobian, > so it's not worth stashing the intermediate quantities. See also this FAQ: > > > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#functionjacobian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 12 12:14:52 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 12 Nov 2011 12:14:52 -0600 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 12:10, behzad baghapour wrote: > Yes.The Roe average is a common method in this area. > I couldn't understand well what you said about stashing the quantities. 
> Each residual evaluation is proceeded by a Jacobian one if I choose an > Inexact Newton method and freeze Jacobian matrix for some Newton > iterations. Please make it for me more clear. > As the FAQ discusses, the residual may be called multiple times before a Jacobian is needed, for example, in a line search. Also, some time integration methods (e.g. Rosenbrocks) will evaluate the residual several times before getting a new Jacobian. If it's free for you to stash these intermediate quantities during residual evaluation, then go ahead and do it (but don't forget that even if it's all computed, just putting it in memory costs something too). But make residual evaluation, not the monitor, the place where these intermediate quantities are computed and stored. The monitor is not called in the semantically correct place for this purpose. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Sat Nov 12 12:15:02 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 21:45:02 +0330 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: I am going to do this, however, for the next newton evaluation I should update my context. I can,t find any command in petsc to do it for me. Upon the previous discussion in this forum about SNES, I understood that a monitor function is called after each newton iteration and when it is called it may be a good chance to update my context within it. I will appreciate any idea if there is a better way to do my job in this matter. Thanks, On Sat, Nov 12, 2011 at 9:27 PM, Barry Smith wrote: > > On Nov 12, 2011, at 2:08 AM, behzad baghapour wrote: > > > Dear developers, > > > > I should pass a user-defined context, containing element and face local > data obtained from PDE solution, to RHS and Jacobian evaluation of SNES > process. Here I need to update my context each Newton Iteration. I may pass > my context into a monitor function set by SNESMonitorSet. In the monitor > function, I cast into my context and update my field (elements and faces) > for next evaluations in RHS and Jacobian routines in Newton process. > > Why don't you simply pass them in the SNESSetFunction() context and > compute them inside your compute function, then use them. Also pass the > same context into SNESSetJacobian() and you can reuse that information in > computing in the Jacobian. I don't see any benefit in computing in the > monitor function, that is not really the right place either practically or > philosophically. > > > Barry > > > > > Please let me know, am I in right way? > > > > Is there any better procedure in this matter? > > > > Thanks, BB. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 12 12:20:16 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 12 Nov 2011 12:20:16 -0600 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 12:15, behzad baghapour wrote: > I am going to do this, however, for the next newton evaluation I should > update my context. > I don't understand. You have some work space in your context. Use that work space during residual evaluation and leave the partial evaluations there. Use the same context for Jacobian evaluation, and use the partially evaluation quantities (instead of recomputing them). > I can,t find any command in petsc to do it for me. 
Upon the previous > discussion in this forum about SNES, I understood that a monitor function > is called after each newton iteration and when it is called it may be a > good chance to update my context within it. > The first time SNESSolve() is called, the residual is evaluated before the monitor is called. If a line search is done, the residual may be called multiple times before the monitor is called. Don't use the monitor for this, just put the code in your residual evaluation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Sat Nov 12 12:22:56 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 21:52:56 +0330 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: OK. My vision is made more clear. So, after a line-search update in newton iteration, I should in some way pass the updated values into my objects (elements) so that the FormJacobian and FormFunctions use these objects for the next newton iteration. Then, where is the proper place to do this? On Sat, Nov 12, 2011 at 9:44 PM, Jed Brown wrote: > On Sat, Nov 12, 2011 at 12:10, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> Yes.The Roe average is a common method in this area. >> I couldn't understand well what you said about stashing the quantities. >> Each residual evaluation is proceeded by a Jacobian one if I choose an >> Inexact Newton method and freeze Jacobian matrix for some Newton >> iterations. Please make it for me more clear. >> > > As the FAQ discusses, the residual may be called multiple times before a > Jacobian is needed, for example, in a line search. Also, some time > integration methods (e.g. Rosenbrocks) will evaluate the residual several > times before getting a new Jacobian. If it's free for you to stash these > intermediate quantities during residual evaluation, then go ahead and do it > (but don't forget that even if it's all computed, just putting it in memory > costs something too). But make residual evaluation, not the monitor, the > place where these intermediate quantities are computed and stored. The > monitor is not called in the semantically correct place for this purpose. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 12 12:27:37 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 12 Nov 2011 12:27:37 -0600 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: On Sat, Nov 12, 2011 at 12:22, behzad baghapour wrote: > So, after a line-search update in newton iteration, I should in some way > pass the updated values into my objects (elements) so that the FormJacobian > and FormFunctions use these objects for the next newton iteration. No. It's very simple. Put work arrays in a context and set that context for both FormFunction() and FormJacobian(). In FormFunction(): do NOT assume that anything useful is in the work array. You can't possibly put something semantically meaningful, so treat them as uninitialized memory. THIS function puts the useful stuff into the work arrays. You can do it at the start of the function or you can do it "while" computing the residual. In FormJacobian(): use the values that were placed in the work arrays by FormFunction(). -------------- next part -------------- An HTML attachment was scrubbed... 
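A rough sketch of the work-array pattern Jed describes, with a made-up face_work array standing in for quantities such as Roe averages at face quadrature points, and with both callbacks registered against the same context as in Barry's suggestion above:

    typedef struct {
      PetscScalar *face_work;   /* scratch storage filled during residual evaluation
                                   (hypothetical; e.g. face quadrature data)          */
      PetscInt     nface;
    } UserCtx;

    PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ptr)
    {
      UserCtx *user = (UserCtx*)ptr;
      PetscFunctionBegin;
      /* recompute everything from the incoming X; do not trust whatever is
         already sitting in user->face_work from an earlier call              */
      /* ... fill user->face_work while assembling the residual into F ...    */
      PetscFunctionReturn(0);
    }

    PetscErrorCode FormJacobian(SNES snes, Vec X, Mat *J, Mat *P, MatStructure *flag, void *ptr)
    {
      UserCtx *user = (UserCtx*)ptr;
      PetscFunctionBegin;
      /* reuse user->face_work left behind by the most recent FormFunction()
         call instead of recomputing it                                        */
      *flag = SAME_NONZERO_PATTERN;
      PetscFunctionReturn(0);
    }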
URL: From behzad.baghapour at gmail.com Sat Nov 12 12:29:48 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Sat, 12 Nov 2011 21:59:48 +0330 Subject: [petsc-users] Solving with SNES In-Reply-To: References: Message-ID: OK. I try to following this procedure. Thanks a lot. On Sat, Nov 12, 2011 at 9:57 PM, Jed Brown wrote: > On Sat, Nov 12, 2011 at 12:22, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> So, after a line-search update in newton iteration, I should in some way >> pass the updated values into my objects (elements) so that the FormJacobian >> and FormFunctions use these objects for the next newton iteration. > > > No. It's very simple. Put work arrays in a context and set that context > for both FormFunction() and FormJacobian(). > > In FormFunction(): do NOT assume that anything useful is in the work > array. You can't possibly put something semantically meaningful, so treat > them as uninitialized memory. THIS function puts the useful stuff into the > work arrays. You can do it at the start of the function or you can do it > "while" computing the residual. > > In FormJacobian(): use the values that were placed in the work arrays by > FormFunction(). > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matze999 at gmail.com Sun Nov 13 00:34:22 2011 From: matze999 at gmail.com (Matt Funk) Date: Sat, 12 Nov 2011 23:34:22 -0700 Subject: [petsc-users] advice on different performance using hypre:euclid through petsc vs directly through hypre Message-ID: Hi, i am solving a system for which is interfaced both with petsc and hypre. I.e. i have some data and build the matrix up either via petsc or hypre (for hypre i use the sstruct interface). The output of the system after several hundred timestep is only different on the order of 1e-04 for a non-linear system. So in terms of accuracy things agree pretty well such that i think i can rule out that the issue is related to the matrix itself. Anyway, for both interfaces i am using the Euclid/BiCGSTAB combination (rel.tol. 1e-08). I would expect similar results in terms of performance which i do not get. For PETSC:HYPRE_EUCLID:BICGSTAB for the first 10 timesteps i get 12 iterations per timestep. Using HYPRE directly i get convergence after 3 iterations. At first the it seems like the tolerance is the issue. I get the residuals and iterations as follows: 1) HYPRE: HYPRE_SStructBiCGSTABGetNumIterations(m_SStruct->ssSolver, &a_iterations); HYPRE_SStructBiCGSTABGetFinalRelativeResidualNorm(m_SStruct->ssSolver, &a_relres); 2) PETSC: m_ierr = KSPGetIterationNumber(m_ksp, &a_iterations_solver); m_ierr = KSPGetResidualNorm(m_ksp, &a_relres_solver); Other than that i do change the type of norm or anything. The residual using HYPRE is on the order of 1e-09 The residual using PETSC is on the order of 1e-04 So not only dies HYPRE use less iterations, it also gives the smaller residual. I think this is a user error as i cannot really explain why there would be such a vast difference. I was just wondering if anyone has any insight as to what else i could try or an attempt at some explanation as to what i am seeing. thanks matt -------------- next part -------------- An HTML attachment was scrubbed... 
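One thing worth checking before comparing iteration counts: KSPGetResidualNorm() reports whatever norm the KSP used for its convergence test, and with the default left preconditioning that is the preconditioned residual norm, not the true residual that hypre's BiCGSTAB reports. A direct, solver-independent check is to recompute the true relative residual after KSPSolve(); in the sketch below A, x and b are assumed to be the assembled operator, solution and right-hand side on the PETSc side:

    Vec       r_true;
    PetscReal norm_r, norm_b;
    ierr = VecDuplicate(b, &r_true);CHKERRQ(ierr);
    ierr = MatMult(A, x, r_true);CHKERRQ(ierr);      /* r_true = A*x      */
    ierr = VecAYPX(r_true, -1.0, b);CHKERRQ(ierr);   /* r_true = b - A*x  */
    ierr = VecNorm(r_true, NORM_2, &norm_r);CHKERRQ(ierr);
    ierr = VecNorm(b, NORM_2, &norm_b);CHKERRQ(ierr);
    /* norm_r/norm_b is directly comparable to hypre's relative residual */
    ierr = VecDestroy(&r_true);CHKERRQ(ierr);

Running with -ksp_monitor_true_residual (as in the parasails thread above) prints both norms each iteration, and the convergence test itself can often be switched with -ksp_norm_type unpreconditioned, subject to what the chosen KSP supports.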
URL: From matze999 at gmail.com Sun Nov 13 00:42:17 2011 From: matze999 at gmail.com (Matt Funk) Date: Sat, 12 Nov 2011 23:42:17 -0700 Subject: [petsc-users] advice on different performance using hypre:euclid through petsc vs directly through hypre In-Reply-To: References: Message-ID: Just reread my email and i need to apologize for all the grammatical errors in it. Hopefully it still makes sense. matt On Sat, Nov 12, 2011 at 11:34 PM, Matt Funk wrote: > Hi, > i am solving a system for which is interfaced both with petsc and hypre. > I.e. i have some data and build the matrix up either via petsc or hypre > (for hypre i use the sstruct interface). The output of the system after > several hundred timestep is only different on the order of 1e-04 for a > non-linear system. So in terms of accuracy things agree pretty well such > that i think i can rule out that the issue is related to the matrix itself. > > Anyway, for both interfaces i am using the Euclid/BiCGSTAB combination > (rel.tol. 1e-08). I would expect similar results in terms of performance > which i do not get. > For PETSC:HYPRE_EUCLID:BICGSTAB for the first 10 timesteps i get 12 > iterations per timestep. Using HYPRE directly i get convergence after 3 > iterations. At first the it seems like the tolerance is the issue. I get > the residuals and iterations as follows: > 1) HYPRE: > HYPRE_SStructBiCGSTABGetNumIterations(m_SStruct->ssSolver, &a_iterations); > HYPRE_SStructBiCGSTABGetFinalRelativeResidualNorm(m_SStruct->ssSolver, > &a_relres); > 2) PETSC: > m_ierr = KSPGetIterationNumber(m_ksp, &a_iterations_solver); > m_ierr = KSPGetResidualNorm(m_ksp, &a_relres_solver); > Other than that i do change the type of norm or anything. > > The residual using HYPRE is on the order of 1e-09 > The residual using PETSC is on the order of 1e-04 > > So not only dies HYPRE use less iterations, it also gives the smaller > residual. > > I think this is a user error as i cannot really explain why there would be > such a vast difference. I was just wondering if anyone has any insight as > to what else i could try or an attempt at some explanation as to what i am > seeing. > > thanks > matt > -- Matt Funk Research Associate Plant and Environmental Scienc. Dept. New Mexico State University -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Nov 13 08:11:10 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 13 Nov 2011 08:11:10 -0600 Subject: [petsc-users] advice on different performance using hypre:euclid through petsc vs directly through hypre In-Reply-To: References: Message-ID: <5E323054-D053-4D16-AB16-9277E53A371E@mcs.anl.gov> Are you using for example the preconditioned norm with PETSc and unpreconditioned with hypre? Barry On Nov 13, 2011, at 12:34 AM, Matt Funk wrote: > Hi, > i am solving a system for which is interfaced both with petsc and hypre. > I.e. i have some data and build the matrix up either via petsc or hypre > (for hypre i use the sstruct interface). The output of the system after > several hundred timestep is only different on the order of 1e-04 for a > non-linear system. So in terms of accuracy things agree pretty well such > that i think i can rule out that the issue is related to the matrix itself. > > Anyway, for both interfaces i am using the Euclid/BiCGSTAB combination > (rel.tol. 1e-08). I would expect similar results in terms of performance > which i do not get. > For PETSC:HYPRE_EUCLID:BICGSTAB for the first 10 timesteps i get 12 > iterations per timestep. 
Using HYPRE directly i get convergence after 3 > iterations. At first the it seems like the tolerance is the issue. I get > the residuals and iterations as follows: > 1) HYPRE: > HYPRE_SStructBiCGSTABGetNumIterations(m_SStruct->ssSolver, &a_iterations); > HYPRE_SStructBiCGSTABGetFinalRelativeResidualNorm(m_SStruct->ssSolver, > &a_relres); > 2) PETSC: > m_ierr = KSPGetIterationNumber(m_ksp, &a_iterations_solver); > m_ierr = KSPGetResidualNorm(m_ksp, &a_relres_solver); > Other than that i do change the type of norm or anything. > > The residual using HYPRE is on the order of 1e-09 > The residual using PETSC is on the order of 1e-04 > > So not only dies HYPRE use less iterations, it also gives the smaller > residual. > > I think this is a user error as i cannot really explain why there would be > such a vast difference. I was just wondering if anyone has any insight as > to what else i could try or an attempt at some explanation as to what i am > seeing. > > thanks > matt From bsmith at mcs.anl.gov Sun Nov 13 17:28:29 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 13 Nov 2011 17:28:29 -0600 Subject: [petsc-users] questions In-Reply-To: References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> <162078b24cf158b70749499b53c9ff08@ices.utexas.edu> Message-ID: I added info to the manual page for DMDASetLocalFunction(). On Nov 11, 2011, at 12:13 PM, Matthew Knepley wrote: > On Fri, Nov 11, 2011 at 6:06 PM, xiaohl wrote: > > Hi > > I am going to use user defined context for this function call > DMDASetLocalFunction(da,(DMDALocalFunction1) FormFunctionLocal); > > But How can I pass the "ctx" to the function FormFunctionLocal; > > PetscErrorCode FormFunctionLocal(DMDALocalInfo *info, PetscScalar ***u, > PetscScalar ***f, void * ctx){ > } > > I look at the example > /petsc-3.2-p2/src/snes/examples/tutorials/ex19.c.html > > I think you intialize the "user" context by > DMMGCreate(comm,nlevels,&user,&dmmg); > for DMMG > > Do you have the similar routine for DMDA? > > If you use SNES, then you can pass it as the last argument to > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/SNES/SNESSetFunction.html > > If you use SNESetDM() which sets this automatically, then it will use the one given to > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/DM/DMSetApplicationContext.html > > Barry: Where should this be documented? > > Matt > > Hailong > > On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > > On Wed, Nov 2, 2011 at 8:38 PM, xiaohl wrote: > Hi > > I am going to implement cell center difference method for > u = - K grad p > div u = f > where p is the pressure , u is the velocity, f is the source term. > > my goal is to assemble the matrix and test the performance of different linear solvers in parallel. > > my question is how can I read the input file for K where K is n*n tensor. > > MatLoad() > > Hm, I think you should use a DMDA with n*n size dof and then use > VecLoad() to load the entries of K. > > Barry > > > second one is that do you have any similar examples? > > Nothing with the mixed-discretization of the Laplacian. > > Matt > > Hailong > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From aeronova.mailing at gmail.com Sun Nov 13 23:39:26 2011 From: aeronova.mailing at gmail.com (Kyunghoon Lee) Date: Mon, 14 Nov 2011 13:39:26 +0800 Subject: [petsc-users] petsc runtime error in connection with libmesh Message-ID: Hi all, I got a petsc runtime error as follows: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Not for unassembled matrix! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./ex30-dbg on a arch-darw named ken-pc.sutd.edu.sg by aeronova Mon Nov 14 13:33:08 2011 [0]PETSC ERROR: Libraries linked from /Users/aeronova/Development/local/lib64/petsc/petsc-3.2-p5/lib [0]PETSC ERROR: Configure run at Mon Nov 14 12:55:15 2011 [0]PETSC ERROR: Configure options --prefix=/Users/aeronova/Development/local/lib64/petsc/petsc-3.2-p5 --download-mpich=1 --download-blacs=1 --download-parmetis=1 --download-scalapack=1 --download-mumps=1 --download-umfpack=1 --with-clanguage=C++ [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatGetRow() line 350 in /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/interface/matrix.c [0]PETSC ERROR: MatAXPY_BasicWithPreallocation() line 98 in /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/utils/axpy.c [0]PETSC ERROR: MatAXPY_SeqAIJ() line 2718 in /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/impls/aij/seq/aij.c [0]PETSC ERROR: MatAXPY() line 39 in /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/utils/axpy.c [0]PETSC ERROR: add() line 566 in "unknowndirectory/"/Users/aeronova/Development/local/lib64/libmesh/include/numerics/petsc_matrix.h application called MPI_Abort(comm=0x84000000, 73) - process 0 [unset]: aborting job: application called MPI_Abort(comm=0x84000000, 73) - process 0 make[2]: *** [run] Error 73 make[1]: *** [run] Error 1 make: *** [run_examples] Error 2 I configured petsc with ./configure --prefix=/Users/aeronova/Development/local/lib64/petsc/petsc-3.2-p5 --download-mpich=1 --download-blacs=1 --download-parmetis=1 --download-scalapack=1 --download-mumps=1 --download-umfpack=1 --with-clanguage=C++ In the error message, I'm not sure why I got "unknowndirectory/" even though I specified the correct path. I'd appreciate it if someone could help me with this problem. K. Lee. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Nov 13 23:48:42 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 13 Nov 2011 23:48:42 -0600 Subject: [petsc-users] petsc runtime error in connection with libmesh In-Reply-To: References: Message-ID: On Sun, Nov 13, 2011 at 23:39, Kyunghoon Lee wrote: > I got a petsc runtime error as follows: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! 
> [0]PETSC ERROR: Not for unassembled matrix! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 > CDT 2011 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex30-dbg on a arch-darw named ken-pc.sutd.edu.sg by > aeronova Mon Nov 14 13:33:08 2011 > [0]PETSC ERROR: Libraries linked from > /Users/aeronova/Development/local/lib64/petsc/petsc-3.2-p5/lib > [0]PETSC ERROR: Configure run at Mon Nov 14 12:55:15 2011 > [0]PETSC ERROR: Configure options > --prefix=/Users/aeronova/Development/local/lib64/petsc/petsc-3.2-p5 > --download-mpich=1 --download-blacs=1 --download-parmetis=1 > --download-scalapack=1 --download-mumps=1 --download-umfpack=1 > --with-clanguage=C++ > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatGetRow() line 350 in > /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/interface/matrix.c > [0]PETSC ERROR: MatAXPY_BasicWithPreallocation() line 98 in > /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/utils/axpy.c > [0]PETSC ERROR: MatAXPY_SeqAIJ() line 2718 in > /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/impls/aij/seq/aij.c > [0]PETSC ERROR: MatAXPY() line 39 in > /Users/aeronova/Development/local/src/petsc-3.2-p5/src/mat/utils/axpy.c > [0]PETSC ERROR: add() line 566 in > "unknowndirectory/"/Users/aeronova/Development/local/lib64/libmesh/include/numerics/petsc_matrix.h > You can define __INSDIR__="" to make the error handling macro treat that part of the path as actually empty instead of using "unknowndirectory". > application called MPI_Abort(comm=0x84000000, 73) - process 0 > [unset]: aborting job: > application called MPI_Abort(comm=0x84000000, 73) - process 0 > make[2]: *** [run] Error 73 > make[1]: *** [run] Error 1 > make: *** [run_examples] Error 2 > > > I configured petsc with > > ./configure > --prefix=/Users/aeronova/Development/local/lib64/petsc/petsc-3.2-p5 > --download-mpich=1 --download-blacs=1 --download-parmetis=1 > --download-scalapack=1 --download-mumps=1 --download-umfpack=1 > --with-clanguage=C++ > > In the error message, I'm not sure why I got "unknowndirectory/" even > though I specified the correct path. I'd appreciate it if someone could > help me with this problem. > I'm rebuilding libmesh to reproduce (with the brand-new example), but the problem is that the matrix has not been assembled before this function is called. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Mon Nov 14 05:35:59 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Mon, 14 Nov 2011 15:05:59 +0330 Subject: [petsc-users] How to use class function in SNESSetFunction Message-ID: Dear developers, In my code, the function "residual" used in SNESSetFunction() for calculating Nonlinear function is a member of a class called "solver". When compiling with Petsc, I received the error: error: argument of type ?PetscErrorCode (solver::)(_p_SNES*, _p_Vec*, _p_Vec*, void*)? does not match ?PetscErrorCode (*)(_p_SNES*, _p_Vec*, _p_Vec*, void*)? How can I access that class member function is SNESSetFunction()? I need to keep my code structure as before. 
Thanks, BB. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bogdan at lmn.pub.ro Mon Nov 14 06:02:06 2011 From: bogdan at lmn.pub.ro (Bogdan Dita) Date: Mon, 14 Nov 2011 14:02:06 +0200 Subject: [petsc-users] Problem when switching from debug to optimized Message-ID: Hello, Below is my post from a few days ago and this time I've attached the output from log_summary. " Until a few days ago I've only be using PETSc in debug mode and when I switch to the optimised version(--with-debugging=0) I got a strange result regarding the solve time, what I mean is that it was 10-15 % higher then in debug mode. I'm trying to solve a linear system in parallel with superlu_dist, and I've tested my program on a Beowulf cluster, so far only using a single node with 2 quad-core Intel processors. From what I know the "no debug" version should be faster and I know it should be faster because on my laptop(dual-core Intel) for the same program and even the same matrices the solve time for the optimised version is 2 times faster, but when I use the cluster the optimised version time is slower then the debug version. Any thoughts? " Best regards, Bogdan Dita -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc_log_debug.pdf Type: application/pdf Size: 23620 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc_log_NOdebug.pdf Type: application/pdf Size: 23101 bytes Desc: not available URL: From knepley at gmail.com Mon Nov 14 07:34:37 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 14 Nov 2011 13:34:37 +0000 Subject: [petsc-users] Problem when switching from debug to optimized In-Reply-To: References: Message-ID: On Mon, Nov 14, 2011 at 12:02 PM, Bogdan Dita wrote: > > Hello, > > Below is my post from a few days ago and this time I've attached the > output from log_summary. > The time increase comes completely from SuperLU_dist during the factorization phase. You should use -ksp_view so we can see what solver options are used. Matt > " > Until a few days ago I've only be using PETSc in debug mode and when I > switch to the optimised version(--with-debugging=0) I got a strange > result regarding the solve time, what I mean is that it was 10-15 % > higher then in debug mode. > I'm trying to solve a linear system in parallel with superlu_dist, and > I've tested my program on a Beowulf cluster, so far only using a single > node with 2 quad-core Intel processors. > From what I know the "no debug" version should be faster and I know it > should be faster because on my laptop(dual-core Intel) for the same > program and even the same matrices the solve time for the optimised > version is 2 times faster, but when I use the cluster the optimised > version time is slower then the debug version. > Any thoughts? > > " > Best regards, > Bogdan Dita > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
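On the "How to use class function in SNESSetFunction" question above: a C++ member function pointer does not have the plain-function signature SNESSetFunction() expects, which is what the compiler error is reporting. The usual pattern is a static member (or a free function) with the right signature that casts the void* context back to the object. A minimal sketch, with illustrative names that are not from the original code:

  class solver {
  public:
    PetscErrorCode residual(SNES snes, Vec x, Vec f);   /* existing member that does the work */

    /* static wrapper with the expected signature PetscErrorCode (*)(SNES,Vec,Vec,void*) */
    static PetscErrorCode residual_wrapper(SNES snes, Vec x, Vec f, void *ctx) {
      return static_cast<solver*>(ctx)->residual(snes, x, f);
    }
  };

  /* registration: pass the object itself as the context
       solver s;
       SNESSetFunction(snes, r, solver::residual_wrapper, &s);
  */

This keeps the existing class structure; only the thin wrapper is new.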
URL: From knepley at gmail.com Mon Nov 14 07:35:09 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 14 Nov 2011 13:35:09 +0000 Subject: [petsc-users] How to use class function in SNESSetFunction In-Reply-To: References: Message-ID: On Mon, Nov 14, 2011 at 11:35 AM, behzad baghapour < behzad.baghapour at gmail.com> wrote: > Dear developers, > > In my code, the function "residual" used in SNESSetFunction() for > calculating Nonlinear function is a member of a class called "solver". > When compiling with Petsc, I received the error: > > error: argument of type ?PetscErrorCode (solver::)(_p_SNES*, _p_Vec*, > _p_Vec*, void*)? does not match ?PetscErrorCode (*)(_p_SNES*, _p_Vec*, > _p_Vec*, void*)? > > How can I access that class member function is SNESSetFunction()? > I need to keep my code structure as before. > Make the function static. Matt > Thanks, BB -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Mon Nov 14 08:17:03 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 14 Nov 2011 08:17:03 -0600 Subject: [petsc-users] Problem when switching from debug to optimized In-Reply-To: References: Message-ID: Suggest experiment with mumps parallel direct LU solver as well. Hong On Mon, Nov 14, 2011 at 7:34 AM, Matthew Knepley wrote: > On Mon, Nov 14, 2011 at 12:02 PM, Bogdan Dita wrote: >> >> ?Hello, >> >> ?Below is my post from a few days ago and this time I've attached the >> output from log_summary. > > The time increase comes completely from SuperLU_dist during the > factorization > phase. You should use -ksp_view so we can see what solver options are used. > ? ?Matt > >> >> " >> ?Until a few days ago I've only be using PETSc in debug mode and when I >> switch to the optimised version(--with-debugging=0) I got a strange >> result regarding the solve time, what I mean is that it was 10-15 % >> higher then in debug mode. >> ?I'm trying to solve a linear system in parallel with superlu_dist, and >> I've tested my program on a Beowulf cluster, so far only using a single >> node with 2 quad-core Intel processors. >> ?From what I know the "no debug" version should be faster and I know it >> should be faster because on my laptop(dual-core Intel) for the same >> program and even the same matrices the solve time for the optimised >> version is 2 times faster, but when I use the cluster the optimised >> version time is slower then the debug version. >> ?Any thoughts? >> >> " >> ?Best regards, >> ?Bogdan Dita >> >> >> > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > From codypermann at gmail.com Mon Nov 14 13:39:05 2011 From: codypermann at gmail.com (Cody Permann) Date: Mon, 14 Nov 2011 12:39:05 -0700 Subject: [petsc-users] Petsc Options Message-ID: There doesn't appear to be an API in PETSc for getting back the command line options "used" or "unused" for a simulation. Yes I am aware that the options unused can be printed but there doesn't appear to be a mechanism for returning them back through a function call. 
I'd like to add an option to MOOSE that would work like PETSc's "-options_left" CLI argument, but in order to do so I need to combine the options recognized for both libraries to report the global unused list. Right now both MOOSE and PETSc have full access to the raw ARGV vector and each library recognizes it's own options and ignores the rest. I could strip out the options from ARGV before passing it to PETSc in conjunction with "-options_left" but that doesn't give me quite as much flexibility as I'd like. It looks like there are about two dozen or so PETSc related options functions in the API but none of them return unused options, or otherwise allow me to query whether any particular option was recognized or not. Is this assumption correct? Thanks, Cody From bsmith at mcs.anl.gov Mon Nov 14 13:41:53 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 14 Nov 2011 13:41:53 -0600 Subject: [petsc-users] Petsc Options In-Reply-To: References: Message-ID: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> Cody, What would you like the API to look like? Barry On Nov 14, 2011, at 1:39 PM, Cody Permann wrote: > There doesn't appear to be an API in PETSc for getting back the command line options "used" or "unused" for a simulation. Yes I am aware that the options unused can be printed but there doesn't appear to be a mechanism for returning them back through a function call. I'd like to add an option to MOOSE that would work like PETSc's "-options_left" CLI argument, but in order to do so I need to combine the options recognized for both libraries to report the global unused list. Right now both MOOSE and PETSc have full access to the raw ARGV vector and each library recognizes it's own options and ignores the rest. > > I could strip out the options from ARGV before passing it to PETSc in conjunction with "-options_left" but that doesn't give me quite as much flexibility as I'd like. It looks like there are about two dozen or so PETSc related options functions in the API but none of them return unused options, or otherwise allow me to query whether any particular option was recognized or not. Is this assumption correct? > > Thanks, > Cody From codypermann at gmail.com Mon Nov 14 15:16:01 2011 From: codypermann at gmail.com (Cody Permann) Date: Mon, 14 Nov 2011 14:16:01 -0700 Subject: [petsc-users] Petsc Options In-Reply-To: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> References: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> Message-ID: How about a function that would fill in a char *[] with the options used or a function that would return a boolean for a single option indicating whether it was used or not? Basically we just need a public way to get at the data in PetscOptionTable::used. Thanks, Cody On Nov 14, 2011, at 12:41 PM, Barry Smith wrote: > > Cody, > > What would you like the API to look like? > > Barry > > > On Nov 14, 2011, at 1:39 PM, Cody Permann wrote: > >> There doesn't appear to be an API in PETSc for getting back the command line options "used" or "unused" for a simulation. Yes I am aware that the options unused can be printed but there doesn't appear to be a mechanism for returning them back through a function call. I'd like to add an option to MOOSE that would work like PETSc's "-options_left" CLI argument, but in order to do so I need to combine the options recognized for both libraries to report the global unused list. Right now both MOOSE and PETSc have full access to the raw ARGV vector and each library recognizes it's own options and ignores the rest. 
>> >> I could strip out the options from ARGV before passing it to PETSc in conjunction with "-options_left" but that doesn't give me quite as much flexibility as I'd like. It looks like there are about two dozen or so PETSc related options functions in the API but none of them return unused options, or otherwise allow me to query whether any particular option was recognized or not. Is this assumption correct? >> >> Thanks, >> Cody > From knepley at gmail.com Mon Nov 14 15:20:26 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 14 Nov 2011 21:20:26 +0000 Subject: [petsc-users] Petsc Options In-Reply-To: References: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> Message-ID: On Mon, Nov 14, 2011 at 9:16 PM, Cody Permann wrote: > How about a function that would fill in a char *[] with the options used > or a function that would return a boolean for a single option indicating > whether it was used or not? Basically we just need a public way to get at > the data in PetscOptionTable::used. > I don't like the whole table of options used, and we definitely need PetscOptionsOptionUsed(). What about providing the number of options, and an array of all option names? Matt > Thanks, > Cody > > On Nov 14, 2011, at 12:41 PM, Barry Smith wrote: > > > > > Cody, > > > > What would you like the API to look like? > > > > Barry > > > > > > On Nov 14, 2011, at 1:39 PM, Cody Permann wrote: > > > >> There doesn't appear to be an API in PETSc for getting back the command > line options "used" or "unused" for a simulation. Yes I am aware that the > options unused can be printed but there doesn't appear to be a mechanism > for returning them back through a function call. I'd like to add an option > to MOOSE that would work like PETSc's "-options_left" CLI argument, but in > order to do so I need to combine the options recognized for both libraries > to report the global unused list. Right now both MOOSE and PETSc have full > access to the raw ARGV vector and each library recognizes it's own options > and ignores the rest. > >> > >> I could strip out the options from ARGV before passing it to PETSc in > conjunction with "-options_left" but that doesn't give me quite as much > flexibility as I'd like. It looks like there are about two dozen or so > PETSc related options functions in the API but none of them return unused > options, or otherwise allow me to query whether any particular option was > recognized or not. Is this assumption correct? > >> > >> Thanks, > >> Cody > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Mon Nov 14 16:34:56 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 14 Nov 2011 23:34:56 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: It's very difficult to draw systematic conclusions because eachconfigure+make takes about an hour on my system... Trying again, I was not successful either, but this time with adifferent message: Error during compile, check win64-test7/conf/make.logSend it and win64-test7/conf/configure.log to petsc-maint at mcs.anl.gov So I send the logs to petsc-maint, appreciating any insight. 
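For reference on what the petsc-3.2 options database already exposes versus what is being asked for in this thread: the presence of an option can be queried, but not whether PETSc actually consumed it, and -options_left (or a call to PetscOptionsLeft()) only prints the unused ones rather than returning them. A small sketch, with a made-up option name, placed inside any routine after PetscInitialize():

  PetscBool      set;
  PetscErrorCode ierr;

  /* true if "-my_option" appears in the options database at all ... */
  ierr = PetscOptionsHasName(PETSC_NULL, "-my_option", &set);CHKERRQ(ierr);
  /* ... but this says nothing about whether any PETSc object ever read it,
     which is the "used" information being requested here. */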
Regards,Dominik On Fri, Nov 11, 2011 at 7:07 PM, Dominik Szczerba wrote: > I have managed to configure and build petsc natively on Windows with > MSVC 2010 in release mode. With debug=1 I get this error. Any ideas > what migh have gone wrong? > > Regards, > Dominik > > =============================================================================== > ? ? ?CMake process failed with status 256. Proceeding.. > =============================================================================== > From petsc-maint at mcs.anl.gov Mon Nov 14 16:42:11 2011 From: petsc-maint at mcs.anl.gov (Satish Balay) Date: Mon, 14 Nov 2011 16:42:11 -0600 (CST) Subject: [petsc-users] [petsc-maint #96600] Re: configure error windows debug mode In-Reply-To: References: Message-ID: >>>>> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/sys/plog/utils make[8]: vfork: Resource temporarily unavailable make[7]: [libfast] Error 2 (ignored) <<<<<<< Some error with windows filesystem access from cygwin. To recompile these missing buildfiles - try: cd /cygdrive/c/pack/petsc-3.2-p5/src/sys/plog/utils make PETSC_DIR=/cygdrive/c/pack/petsc-3.2-p5 PETSC_ARCH=win64-test7 lib cd /cygdrive/c/pack/petsc-3.2-p5 make PETSC_DIR=/cygdrive/c/pack/petsc-3.2-p5 PETSC_ARCH=win64-test7 test Satish On Mon, 14 Nov 2011, Dominik Szczerba wrote: > It's very difficult to draw systematic conclusions because each > configure+make takes about an hour on my system... > > Trying again, I was not successful either, but this time with a > different message: > > Error during compile, check win64-test7/conf/make.log > Send it and win64-test7/conf/configure.log to petsc-maint at mcs.anl.gov > > So I send cc this mail along with the logs to petsc-maint, > appreciating any insight. > > Regards, > Dominik > > On Fri, Nov 11, 2011 at 8:42 PM, Satish Balay wrote: > > On Fri, 11 Nov 2011, Dominik Szczerba wrote: > > > >> I have managed to configure and build petsc natively on Windows with > >> MSVC 2010 in release mode. With debug=1 I get this error. Any ideas > >> what migh have gone wrong? > >> > >> Regards, > >> Dominik > >> > >> =============================================================================== > >> ? ? ? CMake process failed with status 256. Proceeding.. > >> =============================================================================== > >> > > > > > > Configure didn't abort here. It continued and printed a nice > > 'completed' summary [which you neglected to copy/paste] > > > > And you must have seen this message for the optimized build aswell. > > > > And as the summary indicates - you can do 'make > > PETSC_DIR=.. PETSC_ARCH=..' to build the libraries. > > > > Jed, Perhaps we should print this message to configure.log - and not > > to screen - and have a cmake section in summary? > > > > cmake:enabled [or disabled] > > > > Satish > > > > > > From codypermann at gmail.com Mon Nov 14 17:06:36 2011 From: codypermann at gmail.com (Cody Permann) Date: Mon, 14 Nov 2011 16:06:36 -0700 Subject: [petsc-users] Petsc Options In-Reply-To: References: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> Message-ID: <7AB3E5E2-D1F8-4BF3-8A52-2DEB44BA5AE8@gmail.com> On Nov 14, 2011, at 2:20 PM, Matthew Knepley wrote: > On Mon, Nov 14, 2011 at 9:16 PM, Cody Permann wrote: > How about a function that would fill in a char *[] with the options used or a function that would return a boolean for a single option indicating whether it was used or not? Basically we just need a public way to get at the data in PetscOptionTable::used. 
> > I don't like the whole table of options used, and we definitely need PetscOptionsOptionUsed(). What about providing > the number of options, and an array of all option names? That will work fine. Anything is better than what we have now ;) > > Matt > > Thanks, > Cody > > On Nov 14, 2011, at 12:41 PM, Barry Smith wrote: > > > > > Cody, > > > > What would you like the API to look like? > > > > Barry > > > > > > On Nov 14, 2011, at 1:39 PM, Cody Permann wrote: > > > >> There doesn't appear to be an API in PETSc for getting back the command line options "used" or "unused" for a simulation. Yes I am aware that the options unused can be printed but there doesn't appear to be a mechanism for returning them back through a function call. I'd like to add an option to MOOSE that would work like PETSc's "-options_left" CLI argument, but in order to do so I need to combine the options recognized for both libraries to report the global unused list. Right now both MOOSE and PETSc have full access to the raw ARGV vector and each library recognizes it's own options and ignores the rest. > >> > >> I could strip out the options from ARGV before passing it to PETSc in conjunction with "-options_left" but that doesn't give me quite as much flexibility as I'd like. It looks like there are about two dozen or so PETSc related options functions in the API but none of them return unused options, or otherwise allow me to query whether any particular option was recognized or not. Is this assumption correct? > >> > >> Thanks, > >> Cody > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Nov 14 20:11:03 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 14 Nov 2011 20:11:03 -0600 Subject: [petsc-users] Petsc Options In-Reply-To: <7AB3E5E2-D1F8-4BF3-8A52-2DEB44BA5AE8@gmail.com> References: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> <7AB3E5E2-D1F8-4BF3-8A52-2DEB44BA5AE8@gmail.com> Message-ID: <99DD86A3-80DA-432A-92B7-86AFF8480272@mcs.anl.gov> I have added to petsc-dev the following #undef __FUNCT__ #define __FUNCT__ "PetscOptionsUsed" /*@C PetscOptionsUsed - Indicates if PETSc has used a particular option set in the database Not Collective Input Parameter: . option - string name of option Output Parameter: . used - PETSC_TRUE if the option was used, otherwise false, including if option was not found in options database Level: advanced .seealso: PetscOptionsView(), PetscOptionsLeft(), PetscOptionsAllUsed() @*/ PetscErrorCode PetscOptionsUsed(const char *option,PetscBool *used) { PetscInt i; PetscErrorCode ierr; PetscFunctionBegin; *used = PETSC_FALSE; for (i=0; iN; i++) { ierr = PetscStrcmp(options->names[i],option,used);CHKERRQ(ierr); if (*used) { *used = options->used[i]; break; } } PetscFunctionReturn(0); } On Nov 14, 2011, at 5:06 PM, Cody Permann wrote: > > On Nov 14, 2011, at 2:20 PM, Matthew Knepley wrote: > >> On Mon, Nov 14, 2011 at 9:16 PM, Cody Permann wrote: >> How about a function that would fill in a char *[] with the options used or a function that would return a boolean for a single option indicating whether it was used or not? Basically we just need a public way to get at the data in PetscOptionTable::used. >> >> I don't like the whole table of options used, and we definitely need PetscOptionsOptionUsed(). 
What about providing >> the number of options, and an array of all option names? > > That will work fine. Anything is better than what we have now ;) > >> >> Matt >> >> Thanks, >> Cody >> >> On Nov 14, 2011, at 12:41 PM, Barry Smith wrote: >> >> > >> > Cody, >> > >> > What would you like the API to look like? >> > >> > Barry >> > >> > >> > On Nov 14, 2011, at 1:39 PM, Cody Permann wrote: >> > >> >> There doesn't appear to be an API in PETSc for getting back the command line options "used" or "unused" for a simulation. Yes I am aware that the options unused can be printed but there doesn't appear to be a mechanism for returning them back through a function call. I'd like to add an option to MOOSE that would work like PETSc's "-options_left" CLI argument, but in order to do so I need to combine the options recognized for both libraries to report the global unused list. Right now both MOOSE and PETSc have full access to the raw ARGV vector and each library recognizes it's own options and ignores the rest. >> >> >> >> I could strip out the options from ARGV before passing it to PETSc in conjunction with "-options_left" but that doesn't give me quite as much flexibility as I'd like. It looks like there are about two dozen or so PETSc related options functions in the API but none of them return unused options, or otherwise allow me to query whether any particular option was recognized or not. Is this assumption correct? >> >> >> >> Thanks, >> >> Cody >> > >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > From codypermann at gmail.com Mon Nov 14 20:55:51 2011 From: codypermann at gmail.com (Cody Permann) Date: Mon, 14 Nov 2011 19:55:51 -0700 Subject: [petsc-users] Petsc Options In-Reply-To: <99DD86A3-80DA-432A-92B7-86AFF8480272@mcs.anl.gov> References: <92838009-1647-4905-92C0-0E4AC2C83E1C@mcs.anl.gov> <7AB3E5E2-D1F8-4BF3-8A52-2DEB44BA5AE8@gmail.com> <99DD86A3-80DA-432A-92B7-86AFF8480272@mcs.anl.gov> Message-ID: Perfect that will work for me. Thanks, Cody On Mon, Nov 14, 2011 at 7:11 PM, Barry Smith wrote: > > I have added to petsc-dev the following > > #undef __FUNCT__ > #define __FUNCT__ "PetscOptionsUsed" > /*@C > PetscOptionsUsed - Indicates if PETSc has used a particular option set > in the database > > Not Collective > > Input Parameter: > . option - string name of option > > Output Parameter: > . used - PETSC_TRUE if the option was used, otherwise false, including > if option was not found in options database > > Level: advanced > > .seealso: PetscOptionsView(), PetscOptionsLeft(), PetscOptionsAllUsed() > @*/ > PetscErrorCode PetscOptionsUsed(const char *option,PetscBool *used) > { > PetscInt i; > PetscErrorCode ierr; > > PetscFunctionBegin; > *used = PETSC_FALSE; > for (i=0; iN; i++) { > ierr = PetscStrcmp(options->names[i],option,used);CHKERRQ(ierr); > if (*used) { > *used = options->used[i]; > break; > } > } > PetscFunctionReturn(0); > } > > On Nov 14, 2011, at 5:06 PM, Cody Permann wrote: > > > > > On Nov 14, 2011, at 2:20 PM, Matthew Knepley wrote: > > > >> On Mon, Nov 14, 2011 at 9:16 PM, Cody Permann > wrote: > >> How about a function that would fill in a char *[] with the options > used or a function that would return a boolean for a single option > indicating whether it was used or not? Basically we just need a public way > to get at the data in PetscOptionTable::used. 
> >> > >> I don't like the whole table of options used, and we definitely need > PetscOptionsOptionUsed(). What about providing > >> the number of options, and an array of all option names? > > > > That will work fine. Anything is better than what we have now ;) > > > >> > >> Matt > >> > >> Thanks, > >> Cody > >> > >> On Nov 14, 2011, at 12:41 PM, Barry Smith wrote: > >> > >> > > >> > Cody, > >> > > >> > What would you like the API to look like? > >> > > >> > Barry > >> > > >> > > >> > On Nov 14, 2011, at 1:39 PM, Cody Permann wrote: > >> > > >> >> There doesn't appear to be an API in PETSc for getting back the > command line options "used" or "unused" for a simulation. Yes I am aware > that the options unused can be printed but there doesn't appear to be a > mechanism for returning them back through a function call. I'd like to add > an option to MOOSE that would work like PETSc's "-options_left" CLI > argument, but in order to do so I need to combine the options recognized > for both libraries to report the global unused list. Right now both MOOSE > and PETSc have full access to the raw ARGV vector and each library > recognizes it's own options and ignores the rest. > >> >> > >> >> I could strip out the options from ARGV before passing it to PETSc > in conjunction with "-options_left" but that doesn't give me quite as much > flexibility as I'd like. It looks like there are about two dozen or so > PETSc related options functions in the API but none of them return unused > options, or otherwise allow me to query whether any particular option was > recognized or not. Is this assumption correct? > >> >> > >> >> Thanks, > >> >> Cody > >> > > >> > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >> -- Norbert Wiener > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 15 02:10:01 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 09:10:01 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: >> I have managed to configure and build petsc natively on Windows with >> MSVC 2010 in release mode. With debug=1 I get this error. Any ideas >> what migh have gone wrong? >> >> Regards, >> Dominik >> >> =============================================================================== >> ? ? ? CMake process failed with status 256. Proceeding.. >> =============================================================================== > Configure didn't abort here. It continued and printed a nice > 'completed' summary [which you neglected to copy/paste] > > And you must have seen this message for the optimized build aswell. I tried again, and no, it is there only for debug. I send the log to the other list. Why I am worried is that I can not link debug version to my application on Windows, I can only link the optimized one. I want to eliminate this error message as a potential culprit. Many thanks for any hints, Dominik From dominik at itis.ethz.ch Tue Nov 15 05:17:13 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 12:17:13 +0100 Subject: [petsc-users] zero diagonal values and Jacobi preconditioner Message-ID: What will happen if my matrix has (close to, or exactly) zeros on the diagonal? 
Does Petsc handle such cases smartly somehow or am I right to expect poor convergence (values close to zero) or failure (values exactly zero)? Many thanks, Dominik From knepley at gmail.com Tue Nov 15 06:22:52 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Nov 2011 06:22:52 -0600 Subject: [petsc-users] zero diagonal values and Jacobi preconditioner In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 5:17 AM, Dominik Szczerba wrote: > What will happen if my matrix has (close to, or exactly) zeros on the > diagonal? Does Petsc handle such cases smartly somehow or am I right > to expect poor convergence (values close to zero) or failure (values > exactly zero)? > It would depend on what preconditioner you are using. Matt > Many thanks, > Dominik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 15 06:33:24 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 13:33:24 +0100 Subject: [petsc-users] zero diagonal values and Jacobi preconditioner In-Reply-To: References: Message-ID: Jacobi... On Tue, Nov 15, 2011 at 1:22 PM, Matthew Knepley wrote: > On Tue, Nov 15, 2011 at 5:17 AM, Dominik Szczerba > wrote: >> >> What will happen if my matrix has (close to, or exactly) zeros on the >> diagonal? Does Petsc handle such cases smartly somehow or am I right >> to expect poor convergence (values close to zero) or failure (values >> exactly zero)? > > It would depend on what preconditioner you are using. > ? ?Matt > >> >> Many thanks, >> Dominik > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > From knepley at gmail.com Tue Nov 15 06:36:25 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Nov 2011 06:36:25 -0600 Subject: [petsc-users] zero diagonal values and Jacobi preconditioner In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 6:33 AM, Dominik Szczerba wrote: > Jacobi... > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/PC/PCJACOBI.html Matt > On Tue, Nov 15, 2011 at 1:22 PM, Matthew Knepley > wrote: > > On Tue, Nov 15, 2011 at 5:17 AM, Dominik Szczerba > > wrote: > >> > >> What will happen if my matrix has (close to, or exactly) zeros on the > >> diagonal? Does Petsc handle such cases smartly somehow or am I right > >> to expect poor convergence (values close to zero) or failure (values > >> exactly zero)? > > > > It would depend on what preconditioner you are using. > > Matt > > > >> > >> Many thanks, > >> Dominik > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments > > is infinitely more interesting than any results to which their > experiments > > lead. > > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
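On the Jacobi question just above: point Jacobi preconditions with the inverse of the diagonal, applying

  y_i = r_i / A_ii   for each row i,

so an exactly zero diagonal entry makes that division ill-defined, and near-zero entries give a badly scaled preconditioner and typically poor convergence, much as suspected. What PETSc does in the degenerate case is version-dependent, so the PCJACOBI manual page for the installed release is the authoritative reference; variants such as -pc_jacobi_rowmax (use the row maximum) or -pc_jacobi_abs (use absolute values), if the build provides them, sometimes help, and when the zero diagonal comes from a saddle-point structure a different preconditioner (e.g. PCFIELDSPLIT) is usually the better answer.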
URL: From balay at mcs.anl.gov Tue Nov 15 06:39:17 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Nov 2011 06:39:17 -0600 (CST) Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, 15 Nov 2011, Dominik Szczerba wrote: > >> I have managed to configure and build petsc natively on Windows with > >> MSVC 2010 in release mode. With debug=1 I get this error. Any ideas > >> what migh have gone wrong? > >> > >> Regards, > >> Dominik > >> > >> =============================================================================== > >> ? ? ? CMake process failed with status 256. Proceeding.. > >> =============================================================================== > > > Configure didn't abort here. It continued and printed a nice > > 'completed' summary [which you neglected to copy/paste] > > > > And you must have seen this message for the optimized build aswell. > > I tried again, and no, it is there only for debug. I send the log to > the other list. > Why I am worried is that I can not link debug version to my > application on Windows, I can only link the optimized one. I want to > eliminate this error message as a potential culprit. What errors? Does 'make test' work? If so - you are looking at the wrong place. Satish > > Many thanks for any hints, > Dominik > From dominik at itis.ethz.ch Tue Nov 15 07:06:42 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 14:06:42 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: >> >> =============================================================================== >> >> ? ? ? CMake process failed with status 256. Proceeding.. >> >> =============================================================================== >> Why I am worried is that I can not link debug version to my >> application on Windows, I can only link the optimized one. I want to >> eliminate this error message as a potential culprit. > Does 'make test' work? If so - you are looking at the wrong place. No, during compilation I get a very similar error as previously reported: libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj/mpi make[9]: vfork: Resource temporarily unavailable If I type now "make test" I get: libpetsc.lib(matregis.o) : error LNK2019: unresolved external symbol MatCreate_M PIAdj referenced in function MatRegisterAll libpetsc.lib(gasm.o) : error LNK2019: unresolved external symbol MatCreateMPIAdj referenced in function PCGASMCreateSubdomains libpetsc.lib(asm.o) : error LNK2001: unresolved external symbol MatCreateMPIAdj libpetsc.lib(pmetis.o) : error LNK2001: unresolved external symbol MatCreateMPIA dj libpetsc.lib(mpibaij.o) : error LNK2001: unresolved external symbol MatCreateMPI Adj C:\pack\PETSC-~1.2-P\src\snes\examples\TUTORI~1\ex19.exe : fatal error LNK1120: 2 unresolved externals Note, before it was when building the debug mode, now it happens when building release mode. So it is not reproducible and likely explains the linking problems I have with my own applications. Portions of Petsc are simply not compiled, so some symbols are naturally missing. They just happened by bad luck to previously affect the debug and now the release mode. So hopefully approaching a conclusion of my 2 weeks long struggle: 1) what is this error, and how to prevent it? 2) make.log reporting 'Completed building libraries" at the end is confusing. It made me think all is fine. 
I believe the error should be intercepted and building stopped. Many thanks and regards, Dominik From knepley at gmail.com Tue Nov 15 07:15:08 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Nov 2011 07:15:08 -0600 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 7:06 AM, Dominik Szczerba wrote: > >> >> > =============================================================================== > >> >> CMake process failed with status 256. Proceeding.. > >> >> > =============================================================================== > > >> Why I am worried is that I can not link debug version to my > >> application on Windows, I can only link the optimized one. I want to > >> eliminate this error message as a potential culprit. > > > Does 'make test' work? If so - you are looking at the wrong place. > > No, during compilation I get a very similar error as previously reported: > > libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj > libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj/mpi > make[9]: vfork: Resource temporarily unavailable > > If I type now "make test" I get: > > libpetsc.lib(matregis.o) : error LNK2019: unresolved external symbol > MatCreate_M > PIAdj referenced in function MatRegisterAll > libpetsc.lib(gasm.o) : error LNK2019: unresolved external symbol > MatCreateMPIAdj > referenced in function PCGASMCreateSubdomains > libpetsc.lib(asm.o) : error LNK2001: unresolved external symbol > MatCreateMPIAdj > libpetsc.lib(pmetis.o) : error LNK2001: unresolved external symbol > MatCreateMPIA > dj > libpetsc.lib(mpibaij.o) : error LNK2001: unresolved external symbol > MatCreateMPI > Adj > C:\pack\PETSC-~1.2-P\src\snes\examples\TUTORI~1\ex19.exe : fatal error > LNK1120: > 2 unresolved externals > > Note, before it was when building the debug mode, now it happens when > building release mode. > So it is not reproducible and likely explains the linking problems I > have with my own applications. Portions of Petsc are simply not > compiled, so some symbols are naturally missing. They just happened by > bad luck to previously affect the debug and now the release mode. > > So hopefully approaching a conclusion of my 2 weeks long struggle: > > 1) what is this error, and how to prevent it? > This is a Windows filesystem problem. It has nothing to do with PETSc. make[9]: vfork: Resource temporarily unavailable It fails to create a process to compile the file. You can keep running make until everything gets built. From looking at Google, there is no fix for this problem, other than abandoning Windows. Matt 2) make.log reporting 'Completed building libraries" at the end is > confusing. It made me think all is fine. I believe the error should be > intercepted and building stopped. > > Many thanks and regards, > Dominik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Tue Nov 15 07:22:33 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Tue, 15 Nov 2011 16:52:33 +0330 Subject: [petsc-users] How to use class function in SNESSetFunction In-Reply-To: References: Message-ID: Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dominik at itis.ethz.ch Tue Nov 15 07:50:35 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 14:50:35 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: >> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj >> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj/mpi >> make[9]: vfork: Resource temporarily unavailable > This is a Windows filesystem problem. It has nothing to do with PETSc. > ? make[9]: vfork: Resource temporarily unavailable > > It fails to create a process to compile the file. You can keep running make > until everything gets built. From looking at Google, there is no fix for > this > problem, other than abandoning Windows. > ? ? Matt I wish I could... but it's not up to me. I work on linux, just trying to get a task off my desk. I am sitting on a virtual machine. Any idea if this might be somehow related to the problem? Thanks Dominik From balay at mcs.anl.gov Tue Nov 15 08:10:01 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Nov 2011 08:10:01 -0600 (CST) Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, 15 Nov 2011, Dominik Szczerba wrote: > >> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj > >> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj/mpi > >> make[9]: vfork: Resource temporarily unavailable > > > This is a Windows filesystem problem. It has nothing to do with PETSc. > > ? make[9]: vfork: Resource temporarily unavailable > > > > It fails to create a process to compile the file. You can keep running make > > until everything gets built. From looking at Google, there is no fix for > > this > > problem, other than abandoning Windows. > > ? ? Matt > > I wish I could... but it's not up to me. I work on linux, just trying > to get a task off my desk. I am sitting on a virtual machine. Any idea > if this might be somehow related to the problem? Did you see my previous reply to this issue? Looks like instead of trying this suggestion - you've assumed the problem was elsewere - and attempted a complete rebuild. Please invoke 'make' [with the correct PETSC_ARCH and PETSC_DIR] in the appropriate 'source' dir - where-ever you see 'vfork' error - to complete the build of those sources. Satish --------------------------------------------------------------- >>>>> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/sys/plog/utils make[8]: vfork: Resource temporarily unavailable make[7]: [libfast] Error 2 (ignored) <<<<<<< Some error with windows filesystem access from cygwin. To recompile these missing buildfiles - try: cd /cygdrive/c/pack/petsc-3.2-p5/src/sys/plog/utils make PETSC_DIR=/cygdrive/c/pack/petsc-3.2-p5 PETSC_ARCH=win64-test7 lib cd /cygdrive/c/pack/petsc-3.2-p5 make PETSC_DIR=/cygdrive/c/pack/petsc-3.2-p5 PETSC_ARCH=win64-test7 test Satish From dominik at itis.ethz.ch Tue Nov 15 08:22:11 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 15:22:11 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: > Did you see my previous reply to this issue? Looks like instead of > trying this suggestion - you've assumed the problem was elsewere - and > attempted a complete rebuild. > > Please invoke 'make' [with the correct PETSC_ARCH and PETSC_DIR] in > the appropriate 'source' dir - where-ever you see 'vfork' error - to > complete the build of those sources. 
I saw it, I apologize I did not answer, but seeing the other error (cmake error) made me think I better start completely fresh. Meanwhile I tried the "rebase" hint that I found in the old archives. I will do as you say the next time the error occurs and post an update. Many thanks Dominik From balay at mcs.anl.gov Tue Nov 15 08:29:44 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Nov 2011 08:29:44 -0600 (CST) Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, 15 Nov 2011, Dominik Szczerba wrote: > > Did you see my previous reply to this issue? Looks like instead of > > trying this suggestion - you've assumed the problem was elsewere - and > > attempted a complete rebuild. > > > > Please invoke 'make' [with the correct PETSC_ARCH and PETSC_DIR] in > > the appropriate 'source' dir - where-ever you see 'vfork' error - to > > complete the build of those sources. > > > I saw it, I apologize I did not answer, but seeing the other error > (cmake error) made me think I better start completely fresh. Meanwhile > I tried the "rebase" hint that I found in the old archives. I will do > as you say the next time the error occurs and post an update. As mentioned in the 'prior-proir' message - the cmake message is misleading. The configure did complete - and print a nice summary of what its doing. Configure however decided to use the legacy build - instead of cmake buld - due to the cmake error. Note - you were using legacy build for both debug and optimized builds. If you see stuff like: >>>>>> libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj <<<<<< Then you are using the non-cmake legacy build. [which is the fallback when cmake part of configure fails - for whatever reason] Satish From dominik at itis.ethz.ch Tue Nov 15 08:49:55 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 15:49:55 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 3:29 PM, Satish Balay wrote: > On Tue, 15 Nov 2011, Dominik Szczerba wrote: > >> > Did you see my previous reply to this issue? Looks like instead of >> > trying this suggestion - you've assumed the problem was elsewere - and >> > attempted a complete rebuild. >> > >> > Please invoke 'make' [with the correct PETSC_ARCH and PETSC_DIR] in >> > the appropriate 'source' dir - where-ever you see 'vfork' error - to >> > complete the build of those sources. >> >> >> I saw it, I apologize I did not answer, but seeing the other error >> (cmake error) made me think I better start completely fresh. Meanwhile >> I tried the "rebase" hint that I found in the old archives. I will do >> as you say the next time the error occurs and post an update. > > As mentioned in the 'prior-proir' message - the cmake message is > misleading. ?The configure did complete - and print a nice summary of > what its doing. Configure however decided to use the legacy build - > instead of cmake buld - due to the cmake error. > > Note - you were using legacy build for both debug and optimized > builds. If you see stuff like: > >>>>>>> > libfast in: /cygdrive/c/pack/petsc-3.2-p5/src/mat/impls/adj > <<<<<< > > Then you are using the non-cmake legacy build. [which is the fallback > when cmake part of configure fails - for whatever reason] > > Satish Just to be reproducible: how can I explicitly force the legacy build? Or is it dying soon and I better switch now? Is using cmake to build but not configure really simplifying things? 
Thanks and regards, Dominik From jedbrown at mcs.anl.gov Tue Nov 15 09:02:01 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Nov 2011 09:02:01 -0600 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 08:49, Dominik Szczerba wrote: > Just to be reproducible: how can I explicitly force the legacy build? > make all-legacy > Or is it dying soon and I better switch now? > Is using cmake to build but not configure really simplifying things? > The cmake build runs in parallel with dependencies that the legacy (recursive) makefile system does not. Otherwise, it should produce the same result. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 15 09:04:42 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 16:04:42 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 4:02 PM, Jed Brown wrote: > On Tue, Nov 15, 2011 at 08:49, Dominik Szczerba > wrote: >> >> Just to be reproducible: how can I explicitly force the legacy build? > > make all-legacy > >> >> Or is it dying soon and I better switch now? >> Is using cmake to build but not configure really simplifying things? > > The cmake build runs in parallel with dependencies that the legacy > (recursive) makefile system does not. Otherwise, it should produce the same > result. That probably explains at least to some extent why it takes 1h or my Windows box and only minutes on my linux installation... Thanks, Dominik From jedbrown at mcs.anl.gov Tue Nov 15 09:09:11 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Nov 2011 09:09:11 -0600 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 09:04, Dominik Szczerba wrote: > That probably explains at least to some extent why it takes 1h or my > Windows box and only minutes on my linux installation... > Configure is independent of cmake/legacy builds. Most of the difference is likely coming from disk performance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 15 09:11:52 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 16:11:52 +0100 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 4:09 PM, Jed Brown wrote: > On Tue, Nov 15, 2011 at 09:04, Dominik Szczerba > wrote: >> >> That probably explains at least to some extent why it takes 1h or my >> Windows box and only minutes on my linux installation... > > Configure is independent of cmake/legacy builds. Most of the difference is > likely coming from disk performance. Quite so, it is configure that takes a lot of time. Dominik From knepley at gmail.com Tue Nov 15 09:30:44 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Nov 2011 09:30:44 -0600 Subject: [petsc-users] configure error windows debug mode In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 9:11 AM, Dominik Szczerba wrote: > On Tue, Nov 15, 2011 at 4:09 PM, Jed Brown wrote: > > On Tue, Nov 15, 2011 at 09:04, Dominik Szczerba > > wrote: > >> > >> That probably explains at least to some extent why it takes 1h or my > >> Windows box and only minutes on my linux installation... > > > > Configure is independent of cmake/legacy builds. Most of the difference > is > > likely coming from disk performance. 
> > Quite so, it is configure that takes a lot of time. The claim is that this is a mismatch between Cygwin and Windows. I see no reason to dispute this. Matt > > Dominik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 15 09:33:49 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 16:33:49 +0100 Subject: [petsc-users] question about package dependencies Message-ID: Is there somewhere a dependency matrix what packages are required for what packages? In particular, do I need f2cblaslapack when I need mpich, hypre and parmetis only? Would I need it for MUMPS? Thanks Dominik From knepley at gmail.com Tue Nov 15 09:36:15 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Nov 2011 09:36:15 -0600 Subject: [petsc-users] question about package dependencies In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 9:33 AM, Dominik Szczerba wrote: > Is there somewhere a dependency matrix what packages are required for > what packages? > This is built dynamically by the configure process. > In particular, do I need f2cblaslapack when I need mpich, hypre and > parmetis only? > I have no idea what this question means. f2cblas is for people with no Fortran compiler or built in BLAS. BLAS is required by PETSc, so you always need some version. Matt Would I need it for MUMPS? > > Thanks > Dominik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbelson at princeton.edu Tue Nov 15 09:43:48 2011 From: bbelson at princeton.edu (Brandt Belson) Date: Tue, 15 Nov 2011 10:43:48 -0500 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences Message-ID: Hi all, I'm writing a 3D incompressible fluids solver for transitional and turbulent boundary layers, and would like to make use of petsc if possible. At each time-step I'll need to solve matrix equations arising from finite differences in two dimensions (x and y) on a structured grid. The matrix is block tri/penta-diagonal, depending on the stencil, and the blocks are also tri/penta-diagonal. Correct me if I'm wrong, but I believe these types of matrix equations can be solved directly and cheaply on one node. The two options I'm comparing are: 1. Distribute the data in z and solve x-y plane matrices with LAPACK or other serial or shared-memory libraries. Then do an MPI all-to-all to distribute the data in x and/or y, and do all computations in z. This method allows all calculations to be done with all necessary data available to one node, so serial or shared-memory algorithms can be used. The disadvantages are that the MPI all-to-all can be expensive and the number of nodes is limited by the number of points in the z direction. 2. Distribute the data in x-y only and use petsc to do matrix solves in the x-y plane across nodes. The data would always be contiguous in z. The possible disadvantage is that the x-y plane matrix solves could be slower. However, there is no need for an all-to-all and the number of nodes is roughly limited by nx*ny instead of nz.
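(For concreteness, a bare sketch of what the redistribution step in option 1 could look like; the buffer names and the chunk size below are placeholders, not taken from any actual code. With p ranks, each rank starts from an x-y slab of nx*ny*(nz/p) values and ends up with nx*ny/p full lines in z, so the exchange is one collective plus two local packing passes:

    /* pack the local slab so the block destined for rank r sits contiguously at sendbuf + r*chunk */
    int chunk = (nx*ny/p) * (nz/p);   /* number of values exchanged between each pair of ranks */
    MPI_Alltoall(sendbuf, chunk, MPI_DOUBLE, recvbuf, chunk, MPI_DOUBLE, MPI_COMM_WORLD);
    /* unpack recvbuf into nx*ny/p contiguous z-lines of length nz */

This assumes nx*ny and nz are both divisible by p; otherwise MPI_Alltoallv with per-rank counts does the same job.)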
The size of the grid will be about 1000 x 100 x 500 in x, y, and z, so matrices would be about 100,000 x 100,000, but the grid size could vary. For anyone interested, the derivatives in x and y are done with compact finite differences, and in z with discrete Fourier transforms. I also hope to make use petsc4py and python. Thanks for your help, Brandt -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Nov 15 09:57:46 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Nov 2011 09:57:46 -0600 (CST) Subject: [petsc-users] question about package dependencies In-Reply-To: References: Message-ID: On Tue, 15 Nov 2011, Matthew Knepley wrote: > On Tue, Nov 15, 2011 at 9:33 AM, Dominik Szczerba wrote: > > > Is there somewhere a dependency matrix what packages are required for > > what packages? > > This is built dynamically by the configure process. You can look at setupDependencies() in the correspnding package.py [for eg: config/PETSc/packages/MUMPS.py for mumps dependencies] Satish > > In particular, do I need f2cblaslapack when I need mpich, hypre and > > parmetis only? > > > > I have no idea what this question means. f2cblas is for people with no > Fortran compiler > or built in BLAS. BLAS is required by PETSc, so you always need some > version. > > Matt > > Would I need it for MUMPS? > > > > Thanks > > Dominik > > > > > > From jedbrown at mcs.anl.gov Tue Nov 15 09:57:57 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Nov 2011 09:57:57 -0600 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 09:43, Brandt Belson wrote: > I'm writing a 3D incompressible fluids solver for transitional and > turbulent boundary layers, and would like to make use of petsc if possible. > At each time-step I'll need to solve matrix equations arising from finite > differences in two dimensions (x and y) on a structured grid. The matrix is > block tri/penta-diagonal, depending on the stencil, and the blocks are also > tri/penta-diagonal. Correct me if I'm wrong, but I believe these types of > matrix equations can be solved directly and cheaply on one node. > > The two options I'm comparing are: > 1. Distribute the data in z and solve x-y plane matrices with LAPACK or > other serial or shared-memory libraries. Then do an MPI all-to-all to > distribute the data in x and/or y, and do all computations in z. This > method allows all calculations to be done with all necessary data available > to one node, so serial or shared-memory algorithms can be used. The > disadvantages are that the MPI all-to-all can be expensive and the number > of nodes is limited by the number of points in the z direction. > > 2. Distribute the data in x-y only and use petsc to do matrix solves in > the x-y plane across nodes. The data would always be contiguous in z. The > possible disadvantage is that the x-y plane matrix solves could be slower. > However, there is no need for an all-to-all and the number of nodes is > roughly limited by nx*ny instead of nz. > > The size of the grid will be about 1000 x 100 x 500 in x, y, and z, so > matrices would be about 100,000 x 100,000, but the grid size could vary. > > For anyone interested, the derivatives in x and y are done with compact > finite differences, and in z with discrete Fourier transforms. I also hope > to make use petsc4py and python. > Are the problems in the x-y planes linear or nonlinear? 
Are the coefficients the same in each level? What equations are being solved in this direction? (You can do much better than "banded" solvers, ala LAPACK, for these plane problems, but the best methods will depend on the problem. Fortunately, you don't have to change the code to change the method.) About how many processes would you like to run this problem size on? Since the Fourier direction is densely coupled, it would be convenient to keep it local, but it's probably worth partitioning if your desired subdomain sizes get very small. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbelson at princeton.edu Tue Nov 15 10:57:57 2011 From: bbelson at princeton.edu (Brandt Belson) Date: Tue, 15 Nov 2011 11:57:57 -0500 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: Thanks for getting back to me. The matrix solves in each x-y plane are linear. The matrices depend on the z wavenumber and so are different at each x-y slice. The equations are basically Helmholtz and Poisson type. They are 3D, but when done in Fourier space, they decouple so each x-y plane can be solved independently. I'd like to run on a few hundred processors, but if possible I'd like it to scale to more processors for higher Re. I agree that keeping the z-dimension data local is beneficial for FFTs. Thanks, Brandt On Tue, Nov 15, 2011 at 10:57 AM, Jed Brown wrote: > On Tue, Nov 15, 2011 at 09:43, Brandt Belson wrote: > >> I'm writing a 3D incompressible fluids solver for transitional and >> turbulent boundary layers, and would like to make use of petsc if possible. >> At each time-step I'll need to solve matrix equations arising from finite >> differences in two dimensions (x and y) on a structured grid. The matrix is >> block tri/penta-diagonal, depending on the stencil, and the blocks are also >> tri/penta-diagonal. Correct me if I'm wrong, but I believe these types of >> matrix equations can be solved directly and cheaply on one node. >> >> The two options I'm comparing are: >> 1. Distribute the data in z and solve x-y plane matrices with LAPACK or >> other serial or shared-memory libraries. Then do an MPI all-to-all to >> distribute the data in x and/or y, and do all computations in z. This >> method allows all calculations to be done with all necessary data available >> to one node, so serial or shared-memory algorithms can be used. The >> disadvantages are that the MPI all-to-all can be expensive and the number >> of nodes is limited by the number of points in the z direction. >> >> 2. Distribute the data in x-y only and use petsc to do matrix solves in >> the x-y plane across nodes. The data would always be contiguous in z. The >> possible disadvantage is that the x-y plane matrix solves could be slower. >> However, there is no need for an all-to-all and the number of nodes is >> roughly limited by nx*ny instead of nz. >> >> The size of the grid will be about 1000 x 100 x 500 in x, y, and z, so >> matrices would be about 100,000 x 100,000, but the grid size could vary. >> >> For anyone interested, the derivatives in x and y are done with compact >> finite differences, and in z with discrete Fourier transforms. I also hope >> to make use petsc4py and python. >> > > Are the problems in the x-y planes linear or nonlinear? Are the > coefficients the same in each level? What equations are being solved in > this direction?
(You can do much better than "banded" solvers, ala LAPACK, > for these plane problems, but the best methods will depend on the problem. > Fortunately, you can don't have to change the code to change the method.) > > About how many processes would you like to run this problem size on? Since > the Fourier direction is densely coupled, it would be convenient to keep it > local, but it's probably worth partitioning if your desired subdomain sizes > get very small. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 15 11:12:57 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Nov 2011 11:12:57 -0600 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 10:57, Brandt Belson wrote: > The matrix solves in each x-y plane are linear. The matrices depend on the > z wavenumber and so are different at each x-y slice. The equations are > basically Helmholtz and Poisson type. > What is the sign of the shift ("good" or "bad" Helmholtz)? If bad, is the wave number high? > They are 3D, but when done in Fourier space, they decouple so each x-y > plane can be solved independently. > > I'd like to run on a few hundred processors, but if possible I'd like it > to scale to more processors for higher Re. I agree that keeping the > z-dimension data local is beneficial for FFTs. > That process count still means about 1M dofs per process, so having 500 in one direction is still fine. It would be nice to avoid a direct solve on each slice, in which case the partition you describe should be fine. If you can't avoid it, then you may want to do a parallel "transpose" where you can solve planar problems on sub-communicators. Jack Poulson (Cc'd) may have some advice because he has been doing this for high frequency Helmholtz. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbelson at princeton.edu Tue Nov 15 12:03:42 2011 From: bbelson at princeton.edu (Brandt Belson) Date: Tue, 15 Nov 2011 13:03:42 -0500 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: I'm not sure what you mean by the sign of the shift, but the equations are roughly of the form: (I dt/Re - L) u = f where dt~0.1, Re~1000, and L is the Laplacian in 3D, so once it is Fourier transformed each x-y plane has equations like this: (I dt/Re + I k_z^2 - L_{k_z}) \hat{u}_{k_z} = \hat{f}_{k_z} I'm not sure which wavenumber you mean, but k_z goes as nz. Thanks, Brandt On Tue, Nov 15, 2011 at 12:12 PM, Jed Brown wrote: > On Tue, Nov 15, 2011 at 10:57, Brandt Belson wrote: > >> The matrix solves in each x-y plane are linear. The matrices depend on >> the z wavenumber and so are different at each x-y slice. The equations are >> basically Helmholtz and Poisson type. >> > > What is the sign of the shift ("good" or "bad" Helmholtz)? If bad, is the > wave number high? > > >> They are 3D, but when done in Fourier space, they decouple so each x-y >> plane can be solved independently. >> > >> I'd like to run on a few hundred processors, but if possible I'd like it >> to scale to more processors for higher Re. I agree that keeping the >> z-dimension data local is beneficial for FFTs. >> > > That process count still means about 1M dofs per process, so having 500 in > one direction is still fine. It would be nice to avoid a direct solve on > each slice, in which case the partition you describe should be fine. 
If you > can't avoid it, then you may want to do a parallel "transpose" where you > can solve planar problems on sub-communicators. Jack Poulson (Cc'd) may > have some advice because he has been doing this for high frequency > Helmholtz. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 15 12:11:53 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Nov 2011 12:11:53 -0600 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 12:03, Brandt Belson wrote: > I'm not sure what you mean by the sign of the shift, but the equations are > roughly of the form: > > (I dt/Re - L) u = f > > where dt~0.1, Re~1000, and L is the Laplacian in 3D, > The time step size is in the numerator? (It's more commonly the other way.) In any case, this is a positive shift, so you always have a positive definite operator. Multigrid should work very well, so you shouldn't need to bother with direct solvers. > so once it is Fourier transformed each x-y plane has equations like this: > > (I dt/Re + I k_z^2 - L_{k_z}) \hat{u}_{k_z} = \hat{f}_{k_z} > > I'm not sure which wavenumber you mean, but k_z goes as nz. > It arises for frequency-domain problems where the shift is in the other direction (producing an indefinite operator). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack.poulson at gmail.com Tue Nov 15 12:12:06 2011 From: jack.poulson at gmail.com (Jack Poulson) Date: Tue, 15 Nov 2011 12:12:06 -0600 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: Brandt, On Tue, Nov 15, 2011 at 11:12 AM, Jed Brown wrote: > Jack Poulson (Cc'd) may have some advice because he has been doing this > for high frequency Helmholtz. > > The matrix is block tri/penta-diagonal, depending on the stencil, and the blocks are also > tri/penta-diagonal. Correct me if I'm wrong, but I believe these types of matrix equations can be > solved directly and cheaply on one node. These solves should be cheap, but since the bandwidth is not small, it makes more sense to use a sparse-direct solver (e.g., MUMPS or SuperLU). The complexity of banded factorizations for matrices of size N x N with bandwidth of size b is O(b^2 N), and the memory and solve complexity is O(bN). For 2d sparse-direct the factorization complexity is O(N^{3/2}) and the memory and solve complexities are O(N log(N)). Thus, when the bandwidth is larger than O(N^{1/4}), it makes sense to consider sparse direct in order to make the factorization cheaper. I suspect that in your case, b=sqrt(N). > The two options I'm comparing are: > 1. Distribute the data in z and solve x-y plane matrices with LAPACK or other serial or shared- > memory libraries. Then do an MPI all-to-all to distribute the data in x and/or y, and do all > computations in z. This method allows all calculations to be done with all necessary data > available to one node, so serial or shared-memory algorithms can be used. The disadvantages > are that the MPI all-to-all can be expensive and the number of nodes is limited by the number of > points in the z direction. > > 2. Distribute the data in x-y only and use petsc to do matrix solves in the x-y plane across > nodes. The data would always be contiguous in z. The possible disadvantage is that the x-y > plane matrix solves could be slower. 
However, there is no need for an all-to-all and the number > of nodes is roughly limited by nx*ny instead of nz. > > The size of the grid will be about 1000 x 100 x 500 in x, y, and z, so matrices would be about > 100,000 x 100,000, but the grid size could vary. I would look into a modification of approach 1, where, if you have p processes and an nx x ny x nz grid, you use p/nz processes for each xy plane solve simultaneously, and then to perform the AllToAll communication to rearrange the solutions. I think that you are overestimating the cost of the AllToAll communication. Jack -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Tue Nov 15 13:32:17 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Tue, 15 Nov 2011 20:32:17 +0100 Subject: [petsc-users] question about package dependencies In-Reply-To: References: Message-ID: >> > Is there somewhere a dependency matrix what packages are required for >> > what packages? > You can look at setupDependencies() in the correspnding package.py > [for eg: config/PETSc/packages/MUMPS.py for mumps dependencies] Thanks this was very useful. Dominik > > > Satish > >> > In particular, do I need f2cblaslapack when I need mpich, hypre and >> > parmetis only? >> > >> >> I have no idea what this question means. f2cblas is for people with no >> Fortran compiler >> or built in BLAS. BLAS is required by PETSc, so you always need some >> version. >> >> ? ? Matt >> >> Would I need it for MUMPS? >> > >> > Thanks >> > Dominik >> > >> >> >> >> > > From bbelson at princeton.edu Tue Nov 15 14:49:21 2011 From: bbelson at princeton.edu (Brandt Belson) Date: Tue, 15 Nov 2011 15:49:21 -0500 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: Thanks for the replies. Sorry, I was mistaken about the dt/Re coefficient. I use an implicit time step on the linear part of incompressible Navier-Stokes, so roughly discretizing du/dt = 1/Re * Lu with implicit Euler for simplicity gives: (u^{n+1} - u^n) / dt = 1/Re L u^{n+1} rearranging: (I - dt/Re L) u^{n+1} = u^n I incorrectly had dt/Re on I instead of L before. For tri-diagonal matrices, I believe the direct solve is O(N), and similar methods exist for block tri-diagonal matrices. I know multigrid is also O(N), but I've heard it tends to be slower. I think the bandwidth (b) of the matrices is small. For example for a 5-point stencil (center, left, right, below, above) the bandwidth is 5 and for a 9-point stencil it is 9. The other limiting factor of doing option 1, with the all-to-all, is the number of nodes is limited by nz. That might be ok, but I would like the code to scale to larger grids in the future. Thanks, Brandt On Tue, Nov 15, 2011 at 1:12 PM, Jack Poulson wrote: > Brandt, > > On Tue, Nov 15, 2011 at 11:12 AM, Jed Brown wrote: > >> Jack Poulson (Cc'd) may have some advice because he has been doing this >> for high frequency Helmholtz. >> > > > The matrix is block tri/penta-diagonal, depending on the stencil, and > the blocks are also > > tri/penta-diagonal. Correct me if I'm wrong, but I believe these types > of matrix equations can be > > solved directly and cheaply on one node. > > These solves should be cheap, but since the bandwidth is not small, it > makes more sense to use a sparse-direct solver (e.g., MUMPS or SuperLU). > The complexity of banded factorizations for matrices of size N x N with > bandwidth of size b is O(b^2 N), and the memory and solve complexity is > O(bN). 
For 2d sparse-direct the factorization complexity is O(N^{3/2}) and > the memory and solve complexities are O(N log(N)). Thus, when the bandwidth > is larger than O(N^{1/4}), it makes sense to consider sparse direct in > order to make the factorization cheaper. I suspect that in your case, > b=sqrt(N). > > > The two options I'm comparing are: > > 1. Distribute the data in z and solve x-y plane matrices with LAPACK or > other serial or shared- > > memory libraries. Then do an MPI all-to-all to distribute the data in x > and/or y, and do all > > computations in z. This method allows all calculations to be done with > all necessary data > > available to one node, so serial or shared-memory algorithms can be > used. The disadvantages > > are that the MPI all-to-all can be expensive and the number of nodes is > limited by the number of > > points in the z direction. > > > > 2. Distribute the data in x-y only and use petsc to do matrix solves in > the x-y plane across > > nodes. The data would always be contiguous in z. The possible > disadvantage is that the x-y > > plane matrix solves could be slower. However, there is no need for an > all-to-all and the number > > of nodes is roughly limited by nx*ny instead of nz. > > > > The size of the grid will be about 1000 x 100 x 500 in x, y, and z, so > matrices would be about > > 100,000 x 100,000, but the grid size could vary. > > I would look into a modification of approach 1, where, if you have p > processes and an nx x ny x nz grid, you use p/nz processes for each xy > plane solve simultaneously, and then to perform the AllToAll communication > to rearrange the solutions. I think that you are overestimating the cost of > the AllToAll communication. > > Jack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 15 14:59:54 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Nov 2011 14:59:54 -0600 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: On Tue, Nov 15, 2011 at 14:49, Brandt Belson wrote: > Sorry, I was mistaken about the dt/Re coefficient. I use an implicit time > step on the linear part of incompressible Navier-Stokes, so roughly > discretizing du/dt = 1/Re * Lu with implicit Euler for simplicity gives: > > (u^{n+1} - u^n) / dt = 1/Re L u^{n+1} > rearranging: > (I - dt/Re L) u^{n+1} = u^n > > I incorrectly had dt/Re on I instead of L before. > Right. And you are treating the (nonlinear) convective term explicitly? > > For tri-diagonal matrices, I believe the direct solve is O(N), and similar > methods exist for block tri-diagonal matrices. I know multigrid is also > O(N), but I've heard it tends to be slower. > The "block tri-diagonal" matrices have to deal with fill. They are not O(N) because the bandwidth b=sqrt(N), so you need b*N storage and b^2*N time (as Jack said). At high Reynolds number and short time steps (high resolution), your system is pretty well-conditioned because it is mostly identity. You should be able to solve it very fast using iterative methods (perhaps with a lightweight multigrid or other preconditioner). > > I think the bandwidth (b) of the matrices is small. For example for a > 5-point stencil (center, left, right, below, above) the bandwidth is 5 and > for a 9-point stencil it is 9. > No, if the neighbors in the x-direction are "close", then the neighbors in the y-direction will be "far" (order n=sqrt(N) for an n*n grid). 
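(To put rough numbers on the estimates above, using only the figures already quoted in this thread -- N ~ 100,000 unknowns per plane, hence b ~ sqrt(N) ~ 300 for a banded ordering; order-of-magnitude arithmetic only:

    banded factorization:  b^2 * N ~ 10^5 * 10^5 = 10^10 operations, with b*N ~ 3*10^7 stored entries per plane
    2d sparse-direct:      N^(3/2) ~ 3*10^7 operations, with N*log(N) ~ 2*10^6 stored entries
    iterative:             a few times the ~5*10^5 nonzeros of a 5-point operator per iteration, times a small iteration count for this well-conditioned shifted system

so for these sizes an iterative solve, e.g. CG with a lightweight multigrid or algebraic multigrid preconditioner (run-time options along the lines of -ksp_type cg -pc_type mg; these are standard PETSc options, not something prescribed by this problem), is the natural first thing to try, with sparse-direct as the fallback.)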
> > The other limiting factor of doing option 1, with the all-to-all, is the > number of nodes is limited by nz. That might be ok, but I would like the > code to scale to larger grids in the future. > Hence Jack's hybrid suggestion of using process blocks to handle chunks. As usual with parallel programming, choose a distribution and write your code, but as you write it, keep in mind that you will likely change the distribution later (e.g. don't hard-code the bounds for "local" computations). -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbelson at princeton.edu Tue Nov 15 15:42:36 2011 From: bbelson at princeton.edu (Brandt Belson) Date: Tue, 15 Nov 2011 16:42:36 -0500 Subject: [petsc-users] Using petsc for banded matrices and 2D finite differences In-Reply-To: References: Message-ID: Yes, I'm treating the nonlinear term explicitly. Ok, I now see what you mean about b=sqrt(N). I misunderstood the definition of bandwidth. I guess with petsc it is easy to experiment with multigrid vs sparse direct solvers like Jack mentioned, so maybe I'll try both. I see what you and Jack mean about using blocks of processes for xy planes, transposing the data, and using multiple processes on each z. I'll code things flexibly so I can change how the data is distributed. I'll probably have more questions as I get deeper, but I think I'm sold on using petsc. Thanks for your quick replies and guidance! Brandt On Tue, Nov 15, 2011 at 3:59 PM, Jed Brown wrote: > On Tue, Nov 15, 2011 at 14:49, Brandt Belson wrote: > >> Sorry, I was mistaken about the dt/Re coefficient. I use an implicit time >> step on the linear part of incompressible Navier-Stokes, so roughly >> discretizing du/dt = 1/Re * Lu with implicit Euler for simplicity gives: >> >> (u^{n+1} - u^n) / dt = 1/Re L u^{n+1} >> rearranging: >> (I - dt/Re L) u^{n+1} = u^n >> >> I incorrectly had dt/Re on I instead of L before. >> > > Right. And you are treating the (nonlinear) convective term explicitly? > > >> >> For tri-diagonal matrices, I believe the direct solve is O(N), and >> similar methods exist for block tri-diagonal matrices. I know multigrid is >> also O(N), but I've heard it tends to be slower. >> > > The "block tri-diagonal" matrices have to deal with fill. They are not > O(N) because the bandwidth b=sqrt(N), so you need b*N storage and b^2*N > time (as Jack said). At high Reynolds number and short time steps (high > resolution), your system is pretty well-conditioned because it is mostly > identity. You should be able to solve it very fast using iterative methods > (perhaps with a lightweight multigrid or other preconditioner). > > >> >> I think the bandwidth (b) of the matrices is small. For example for a >> 5-point stencil (center, left, right, below, above) the bandwidth is 5 and >> for a 9-point stencil it is 9. >> > > No, if the neighbors in the x-direction are "close", then the neighbors in > the y-direction will be "far" (order n=sqrt(N) for an n*n grid). > > >> >> The other limiting factor of doing option 1, with the all-to-all, is the >> number of nodes is limited by nz. That might be ok, but I would like the >> code to scale to larger grids in the future. >> > > Hence Jack's hybrid suggestion of using process blocks to handle chunks. > As usual with parallel programming, choose a distribution and write your > code, but as you write it, keep in mind that you will likely change the > distribution later (e.g. don't hard-code the bounds for "local" > computations). 
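(A minimal sketch of what keeping the bounds out of the application code can look like when a PETSc DMDA describes the structured grid; the DM object da is assumed to exist already, and the variable names are placeholders rather than code from this thread:

    PetscInt i, j, k, xs, ys, zs, xm, ym, zm;
    DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm);  /* locally owned start and extent in each direction */
    for (k = zs; k < zs + zm; k++)
      for (j = ys; j < ys + ym; j++)
        for (i = xs; i < xs + xm; i++) {
          /* work only on the locally owned grid point (i, j, k) */
        }

If the distribution is changed later, loops written this way keep working because the ranges come from the DM rather than from hard-wired sizes.)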
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Wed Nov 16 08:25:05 2011 From: gdiso at ustc.edu (Gong Ding) Date: Wed, 16 Nov 2011 22:25:05 +0800 (CST) Subject: [petsc-users] petsc code crash on AIX with POE Message-ID: <32830097.327341321453505738.JavaMail.coremail@mail.ustc.edu> Hi, I tried to compile my petsc application (c++) on AIX6.1 with POE by IBM xlc (mpCC_r in fact) on PPC6. The serial code (with mpi uni) runs ok. However, parallel code always crash with error message ERROR: 0031-250 task 0: IOT/Abort trap And a core dumped. dbx gives little message about the core bash-3.00$ dbx ~/packages/genius/bin/genius.AIX core Type 'help' for help. Core file "core" is older than current program (ignored) reading symbolic information ... (dbx) where ustart() at 0x9fffffff0000240 The petsc is configured by CONFIGURE_OPTIONS = --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=2 --known-memcmp-ok=1 --known-endian=big --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --download-f-blas-lapack=1 --download-mumps=1 --download-blacs=1 --download-parmetis=1 --download-scalapack=1 --download-superlu=1 --with-debugging=0 --with-cc=\"mpcc_r -q64\" --with-fc=\"mpxlf_r -q64\" --with-batch=1 --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-x=0 --with-pic=1 It seems petsc example works well. NOTE, petsc is compiled as c library, my application is c++, which links petsc library. My code should stable enough, it works well on Linux/windows, and do not have memory problem (checked by valgrind). I guess there are some compile/link issue caused the problem. Does any one have suggestions? Gong Ding From balay at mcs.anl.gov Wed Nov 16 08:58:32 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 16 Nov 2011 08:58:32 -0600 (CST) Subject: [petsc-users] petsc code crash on AIX with POE In-Reply-To: <32830097.327341321453505738.JavaMail.coremail@mail.ustc.edu> References: <32830097.327341321453505738.JavaMail.coremail@mail.ustc.edu> Message-ID: Perhaps you can build with --with-debugging=1 - and get a proper stack trace with the debugger? [use a different PETSC_ARCH for this build - so that the current optimzed build is untouched.] Also - does sequential run with mpcc_r work? Satish On Wed, 16 Nov 2011, Gong Ding wrote: > Hi, > I tried to compile my petsc application (c++) on AIX6.1 with POE by IBM xlc (mpCC_r in fact) on PPC6. > The serial code (with mpi uni) runs ok. > However, parallel code always crash with error message > ERROR: 0031-250 task 0: IOT/Abort trap > And a core dumped. > > dbx gives little message about the core > bash-3.00$ dbx ~/packages/genius/bin/genius.AIX core > Type 'help' for help. > Core file "core" is older than current program (ignored) > reading symbolic information ... 
> (dbx) where > ustart() at 0x9fffffff0000240 > > > The petsc is configured by > CONFIGURE_OPTIONS = --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=2 --known-memcmp-ok=1 --known-endian=big --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --download-f-blas-lapack=1 --download-mumps=1 --download-blacs=1 --download-parmetis=1 --download-scalapack=1 --download-superlu=1 --with-debugging=0 --with-cc=\"mpcc_r -q64\" --with-fc=\"mpxlf_r -q64\" --with-batch=1 --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-x=0 --with-pic=1 > > It seems petsc example works well. NOTE, petsc is compiled as c library, my application is c++, which links petsc library. > > My code should stable enough, it works well on Linux/windows, and do not have memory problem (checked by valgrind). > I guess there are some compile/link issue caused the problem. > Does any one have suggestions? > > Gong Ding > > > > > From manuel.perezcerquera at polito.it Wed Nov 16 10:09:37 2011 From: manuel.perezcerquera at polito.it (PEREZ CERQUERA MANUEL RICARDO) Date: Wed, 16 Nov 2011 17:09:37 +0100 Subject: [petsc-users] ERROR Arguments are incompatible in MatSetValues() Message-ID: Hi all, I have a petsc code which works for a certain number of unknowns, I mean it works with a matrix of 12000X 12000 elements, then when I increase the Number of unknowns to 25000 X 25000 I got this error at certain point in one of the MatSetValues() function, I don't know which are the possible causes, I have enough memory to handle it. Could you give me some ideas of whats going on? . Thanks [0]PETSC ERROR: Fortran Pause - Enter command or to continue. --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments are incompatible! [0]PETSC ERROR: Memory regions overlap: either use PetscMemmov() or make sure your copy regions and lengths are correct. Length (bytes) 1217067760 first address -844745856 second address -1712377200! [0]PETSC ERROR: ---------------------------------------------------------------- -------- [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 20 11 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ---------------------------------------------------------------- -------- [0]PETSC ERROR: C:\Documents and Settings\d022117\Desktop\MPIRunsPatrju\patreju1 .1.exe on a arch-mswi named GVSRV by d022117 Tue Nov 15 19:00:13 2011 [0]PETSC ERROR: Libraries linked from /home/d022117/petsc-3.2-p5/arch-mswin-cxx- debug/lib [0]PETSC ERROR: Configure run at Mon Nov 7 13:06:56 2011 [0]PETSC ERROR: Configure options --with-cc="win32fe cl" --with-fc="win32fe ifor t" --with-cxx="win32fe cl" --download-f-blas-lapack=1 --with-scalar-type=complex --with-clanguage=cxx --useThreads=0 [0]PETSC ERROR: ---------------------------------------------------------------- -------- [0]PETSC ERROR: PetscMemcpy() line 1779 in src/mat/impls/aij/seq/e:\users\manuel \phd\cygwin\home\d022117\petsc-3.2-p5\include\petscsys.h [0]PETSC ERROR: MatSetValues_SeqAIJ() line 331 in src/mat/impls/aij/seq/E:\Users \Manuel\Phd\Cygwin\home\d022117\PETSC-~2.2-P\src\mat\impls\aij\seq\aij.c [0]PETSC ERROR: MatSetValues() line 1115 in src/mat/interface/E:\Users\Manuel\Ph d\Cygwin\home\d022117\PETSC-~2.2-P\src\mat\INTERF~1\matrix.c job aborted: rank: node: exit code[: error message] 0: gvsrv.delen.polito.it: 1: process 0 exited without calling finalize Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student Antenna and EMC Lab (LACE) Istituto Superiore Mario Boella (ISMB) Politecnico di Torino Via Pier Carlo Boggio 61, Torino 10138, Italy Email: manuel.perezcerquera at polito.it Phone: +39 0112276704 Fax: +39 011 2276 299 From yxliuwm at gmail.com Wed Nov 16 10:16:47 2011 From: yxliuwm at gmail.com (Yixun Liu) Date: Wed, 16 Nov 2011 11:16:47 -0500 Subject: [petsc-users] configuration error on Windows with VS2005 Message-ID: Hi, I have a configuration error as I install PETSc. According to the installation instructions, I first *Setup cygwin bash shell with Working Compilers:* C:\Program Files (x86)\Microsoft Visual Studio 8\VC>c:\cygwin\bin\bash.exe --login I got the following message: Your group is currently "mkpasswd". This indicates that your gid is not in /etc/group and your uid is not in /etc/passwd. The /etc/passwd (and possibly /etc/group) files should be rebuilt. See the man pages for mkpasswd and mkgroup then, for example, run mkpasswd -l [-d] > /etc/passwd mkgroup -l [-d] > /etc/group Note that the -d switch is necessary for domain users. Then I run the commands: mkpasswd -l > /etc/passwd and mkpasswd -l > /etc/group at last, as I run command cl, I got message: Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86 Copyright (C) Microsoft Corporation. All rights reserved. usage: cl [ option... ] filename... [ /link linkoption... ] So, I think it works now. Then I configure PETSc with the following command, but get the error "C compiler you provided with -with-cc=win32fe cl does not work". liuy14 at CC1DR1C515W05 /cygdrive/c/Yixun/VC/petsc-3.2-p5# ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' --with-cxx='win 32fe cl' --download-f-blas-lapack=1 =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: configureExternalPackagesDir from config.framework(config/BuildSystem/c TESTING: configureDebuggers from PETSc.utilities.debuggers(config/PETSc/utilitie TESTING: configureCMake from PETSc.utilities.CMake(config/PETSc/utilities/CMake. 
TESTING: configureCLanguage from PETSc.utilities.languages(config/PETSc/utilitie TESTING: configureLanguageSupport from PETSc.utilities.languages(config/PETSc/ut TESTING: configureExternC from PETSc.utilities.languages(config/PETSc/utilities/ TESTING: configureFortranLanguage from PETSc.utilities.languages(config/PETSc/ut TESTING: configureMake from config.programs(config/BuildSystem/config/programs.p TESTING: configureMkdir from config.programs(config/BuildSystem/config/programs. TESTING: configurePrograms from config.programs(config/BuildSystem/config/progra TESTING: configureMercurial from config.sourceControl(config/BuildSystem/config/ TESTING: configureCVS from config.sourceControl(config/BuildSystem/config/source TESTING: configureSubversion from config.sourceControl(config/BuildSystem/config TESTING: configureDirectories from PETSc.utilities.petscdir(config/PETSc/utiliti TESTING: configureExternalPackagesDir from PETSc.utilities.petscdir(config/PETSc TESTING: configureInstallationMethod from PETSc.utilities.petscdir(config/PETSc/ TESTING: configureETags from PETSc.utilities.Etags(config/PETSc/utilities/Etags. TESTING: getDatafilespath from PETSc.utilities.dataFilesPath(config/PETSc/utilit TESTING: resetEnvCompilers from config.setCompilers(config/BuildSystem/config/se TESTING: checkMPICompilerOverride from config.setCompilers(config/BuildSystem/co TESTING: checkVendor from config.setCompilers(config/BuildSystem/config/setCompi TESTING: checkInitialFlags from config.setCompilers(config/BuildSystem/config/se TESTING: checkCCompiler from config.setCompilers(config/BuildSystem/config/setCo File "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers .py", line 508, in checkCCompiler self.checkCompiler('C') File "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers .py", line 410, in checkCompiler raise RuntimeError(msg) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for detail s): ------------------------------------------------------------------------------- C compiler you provided with -with-cc=win32fe cl does not work ******************************************************************************* Thank you for your help. Best, Yixun -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Nov 16 10:21:29 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 16 Nov 2011 10:21:29 -0600 (CST) Subject: [petsc-users] configuration error on Windows with VS2005 In-Reply-To: References: Message-ID: Please send the corresponding configure.log to petsc-maint at mcs.anl.gov Satish On Wed, 16 Nov 2011, Yixun Liu wrote: > Hi, > I have a configuration error as I install PETSc. > > According to the installation instructions, I first *Setup cygwin bash > shell with Working Compilers:* > > C:\Program Files (x86)\Microsoft Visual Studio 8\VC>c:\cygwin\bin\bash.exe > --login > > I got the following message: > > Your group is currently "mkpasswd". This indicates that your > gid is not in /etc/group and your uid is not in /etc/passwd. > The /etc/passwd (and possibly /etc/group) files should be rebuilt. > See the man pages for mkpasswd and mkgroup then, for example, run > mkpasswd -l [-d] > /etc/passwd > mkgroup -l [-d] > /etc/group > Note that the -d switch is necessary for domain users. 
> > Then I run the commands: > mkpasswd -l > /etc/passwd and > mkpasswd -l > /etc/group > > at last, as I run command cl, I got message: > > Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for > 80x86 > Copyright (C) Microsoft Corporation. All rights reserved. > usage: cl [ option... ] filename... [ /link linkoption... ] > > So, I think it works now. > > Then I configure PETSc with the following command, but get the error "C > compiler you provided with -with-cc=win32fe cl does not work". > > > liuy14 at CC1DR1C515W05 /cygdrive/c/Yixun/VC/petsc-3.2-p5# ./configure > --with-cc='win32fe cl' --with-fc='win32fe ifort' --with-cxx='win 32fe cl' > --download-f-blas-lapack=1 > > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: configureExternalPackagesDir from > config.framework(config/BuildSystem/c > TESTING: configureDebuggers from > PETSc.utilities.debuggers(config/PETSc/utilitie > TESTING: configureCMake from > PETSc.utilities.CMake(config/PETSc/utilities/CMake. > TESTING: configureCLanguage from > PETSc.utilities.languages(config/PETSc/utilitie > TESTING: configureLanguageSupport from > PETSc.utilities.languages(config/PETSc/ut > TESTING: configureExternC from > PETSc.utilities.languages(config/PETSc/utilities/ > TESTING: configureFortranLanguage from > PETSc.utilities.languages(config/PETSc/ut > TESTING: configureMake from > config.programs(config/BuildSystem/config/programs.p > TESTING: configureMkdir from > config.programs(config/BuildSystem/config/programs. > TESTING: configurePrograms from > config.programs(config/BuildSystem/config/progra > TESTING: configureMercurial from > config.sourceControl(config/BuildSystem/config/ > TESTING: configureCVS from > config.sourceControl(config/BuildSystem/config/source > TESTING: configureSubversion from > config.sourceControl(config/BuildSystem/config > TESTING: configureDirectories from > PETSc.utilities.petscdir(config/PETSc/utiliti > TESTING: configureExternalPackagesDir from > PETSc.utilities.petscdir(config/PETSc > TESTING: configureInstallationMethod from > PETSc.utilities.petscdir(config/PETSc/ > TESTING: configureETags from > PETSc.utilities.Etags(config/PETSc/utilities/Etags. 
> TESTING: getDatafilespath from > PETSc.utilities.dataFilesPath(config/PETSc/utilit > TESTING: resetEnvCompilers from > config.setCompilers(config/BuildSystem/config/se > TESTING: checkMPICompilerOverride from > config.setCompilers(config/BuildSystem/co > TESTING: checkVendor from > config.setCompilers(config/BuildSystem/config/setCompi > TESTING: checkInitialFlags from > config.setCompilers(config/BuildSystem/config/se > TESTING: checkCCompiler from > config.setCompilers(config/BuildSystem/config/setCo > File > "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers > .py", line 508, in checkCCompiler > self.checkCompiler('C') > File > "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers > .py", line 410, in checkCompiler > raise RuntimeError(msg) > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > detail > s): > ------------------------------------------------------------------------------- > C compiler you provided with -with-cc=win32fe cl does not work > ******************************************************************************* > > > Thank you for your help. > > Best, > > Yixun > From petsc-maint at mcs.anl.gov Wed Nov 16 11:33:13 2011 From: petsc-maint at mcs.anl.gov (Matthew Knepley) Date: Wed, 16 Nov 2011 11:33:13 -0600 Subject: [petsc-users] ERROR Arguments are incompatible in MatSetValues() In-Reply-To: References: Message-ID: On Wed, Nov 16, 2011 at 10:09 AM, PEREZ CERQUERA MANUEL RICARDO < manuel.perezcerquera at polito.it> wrote: > Hi all, > > I have a petsc code which works for a certain number of unknowns, I mean > it works with a matrix of 12000X 12000 elements, then when I increase the > Number of unknowns to 25000 X 25000 I got this error at certain point in > one of the MatSetValues() function, I don't know which are the possible > causes, I have enough memory to handle it. Could you give me some ideas of > whats going on? . Thanks > It appears that you are overflowing the integer offsets. I recommend trying this either: a) With a 64-bit OS or if you cannot upgrade the machine b) Configuring with --with-64-bit-indices Thanks, Matt > [0]PETSC ERROR: Fortran Pause - Enter command or to continue. > --------------------- Error Message ------------------------------**------ > [0]PETSC ERROR: Arguments are incompatible! > [0]PETSC ERROR: Memory regions overlap: either use PetscMemmov() > or make sure your copy regions and lengths are correct. > Length (bytes) 1217067760 first address -844745856 second > address > -1712377200! > [0]PETSC ERROR: ------------------------------** > ------------------------------**---- > -------- > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 > CDT 20 > 11 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: ------------------------------** > ------------------------------**---- > -------- > [0]PETSC ERROR: C:\Documents and Settings\d022117\Desktop\** > MPIRunsPatrju\patreju1 > .1.exe on a arch-mswi named GVSRV by d022117 Tue Nov 15 19:00:13 2011 > [0]PETSC ERROR: Libraries linked from /home/d022117/petsc-3.2-p5/** > arch-mswin-cxx- > debug/lib > [0]PETSC ERROR: Configure run at Mon Nov 7 13:06:56 2011 > [0]PETSC ERROR: Configure options --with-cc="win32fe cl" > --with-fc="win32fe ifor > t" --with-cxx="win32fe cl" --download-f-blas-lapack=1 > --with-scalar-type=complex > --with-clanguage=cxx --useThreads=0 > [0]PETSC ERROR: ------------------------------** > ------------------------------**---- > -------- > [0]PETSC ERROR: PetscMemcpy() line 1779 in src/mat/impls/aij/seq/e:\** > users\manuel > \phd\cygwin\home\d022117\**petsc-3.2-p5\include\petscsys.**h > [0]PETSC ERROR: MatSetValues_SeqAIJ() line 331 in > src/mat/impls/aij/seq/E:\Users > \Manuel\Phd\Cygwin\home\**d022117\PETSC-~2.2-P\src\mat\** > impls\aij\seq\aij.c > [0]PETSC ERROR: MatSetValues() line 1115 in src/mat/interface/E:\Users\** > Manuel\Ph > d\Cygwin\home\d022117\PETSC-~**2.2-P\src\mat\INTERF~1\matrix.**c > > job aborted: > rank: node: exit code[: error message] > 0: gvsrv.delen.polito.it: 1: process 0 exited without calling finalize > > Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student > Antenna and EMC Lab (LACE) > Istituto Superiore Mario Boella (ISMB) > Politecnico di Torino > Via Pier Carlo Boggio 61, Torino 10138, Italy > Email: manuel.perezcerquera at polito.it > Phone: +39 0112276704 > Fax: +39 011 2276 299 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From yxliuwm at gmail.com Wed Nov 16 13:31:32 2011 From: yxliuwm at gmail.com (Yixun Liu) Date: Wed, 16 Nov 2011 14:31:32 -0500 Subject: [petsc-users] petsc-users Digest, Vol 35, Issue 52 In-Reply-To: References: Message-ID: Hi, Please see the attachment. Yixun On Wed, Nov 16, 2011 at 1:00 PM, wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. Re: configuration error on Windows with VS2005 (Satish Balay) > 2. Re: ERROR Arguments are incompatible in MatSetValues() > (Matthew Knepley) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 16 Nov 2011 10:21:29 -0600 (CST) > From: Satish Balay > Subject: Re: [petsc-users] configuration error on Windows with VS2005 > To: PETSc users list > Message-ID: > Content-Type: TEXT/PLAIN; charset=US-ASCII > > Please send the corresponding configure.log to petsc-maint at mcs.anl.gov > > Satish > > On Wed, 16 Nov 2011, Yixun Liu wrote: > > > Hi, > > I have a configuration error as I install PETSc. 
> > > > According to the installation instructions, I first *Setup cygwin bash > > shell with Working Compilers:* > > > > C:\Program Files (x86)\Microsoft Visual Studio > 8\VC>c:\cygwin\bin\bash.exe > > --login > > > > I got the following message: > > > > Your group is currently "mkpasswd". This indicates that your > > gid is not in /etc/group and your uid is not in /etc/passwd. > > The /etc/passwd (and possibly /etc/group) files should be rebuilt. > > See the man pages for mkpasswd and mkgroup then, for example, run > > mkpasswd -l [-d] > /etc/passwd > > mkgroup -l [-d] > /etc/group > > Note that the -d switch is necessary for domain users. > > > > Then I run the commands: > > mkpasswd -l > /etc/passwd and > > mkpasswd -l > /etc/group > > > > at last, as I run command cl, I got message: > > > > Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 > for > > 80x86 > > Copyright (C) Microsoft Corporation. All rights reserved. > > usage: cl [ option... ] filename... [ /link linkoption... ] > > > > So, I think it works now. > > > > Then I configure PETSc with the following command, but get the error "C > > compiler you provided with -with-cc=win32fe cl does not work". > > > > > > liuy14 at CC1DR1C515W05 /cygdrive/c/Yixun/VC/petsc-3.2-p5# ./configure > > --with-cc='win32fe cl' --with-fc='win32fe ifort' --with-cxx='win 32fe > cl' > > --download-f-blas-lapack=1 > > > > > =============================================================================== > > Configuring PETSc to compile on your system > > > =============================================================================== > > TESTING: configureExternalPackagesDir from > > config.framework(config/BuildSystem/c > > TESTING: configureDebuggers from > > PETSc.utilities.debuggers(config/PETSc/utilitie > > TESTING: configureCMake from > > PETSc.utilities.CMake(config/PETSc/utilities/CMake. > > TESTING: configureCLanguage from > > PETSc.utilities.languages(config/PETSc/utilitie > > TESTING: configureLanguageSupport from > > PETSc.utilities.languages(config/PETSc/ut > > TESTING: configureExternC from > > PETSc.utilities.languages(config/PETSc/utilities/ > > TESTING: configureFortranLanguage from > > PETSc.utilities.languages(config/PETSc/ut > > TESTING: configureMake from > > config.programs(config/BuildSystem/config/programs.p > > TESTING: configureMkdir from > > config.programs(config/BuildSystem/config/programs. > > TESTING: configurePrograms from > > config.programs(config/BuildSystem/config/progra > > TESTING: configureMercurial from > > config.sourceControl(config/BuildSystem/config/ > > TESTING: configureCVS from > > config.sourceControl(config/BuildSystem/config/source > > TESTING: configureSubversion from > > config.sourceControl(config/BuildSystem/config > > TESTING: configureDirectories from > > PETSc.utilities.petscdir(config/PETSc/utiliti > > TESTING: configureExternalPackagesDir from > > PETSc.utilities.petscdir(config/PETSc > > TESTING: configureInstallationMethod from > > PETSc.utilities.petscdir(config/PETSc/ > > TESTING: configureETags from > > PETSc.utilities.Etags(config/PETSc/utilities/Etags. 
> > TESTING: getDatafilespath from > > PETSc.utilities.dataFilesPath(config/PETSc/utilit > > TESTING: resetEnvCompilers from > > config.setCompilers(config/BuildSystem/config/se > > TESTING: checkMPICompilerOverride from > > config.setCompilers(config/BuildSystem/co > > TESTING: checkVendor from > > config.setCompilers(config/BuildSystem/config/setCompi > > TESTING: checkInitialFlags from > > config.setCompilers(config/BuildSystem/config/se > > TESTING: checkCCompiler from > > config.setCompilers(config/BuildSystem/config/setCo > > File > > "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers > > .py", line 508, in checkCCompiler > > self.checkCompiler('C') > > File > > "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers > > .py", line 410, in checkCompiler > > raise RuntimeError(msg) > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > detail > > s): > > > ------------------------------------------------------------------------------- > > C compiler you provided with -with-cc=win32fe cl does not work > > > ******************************************************************************* > > > > > > Thank you for your help. > > > > Best, > > > > Yixun > > > > > > ------------------------------ > > Message: 2 > Date: Wed, 16 Nov 2011 11:33:13 -0600 > From: Matthew Knepley > Subject: Re: [petsc-users] ERROR Arguments are incompatible in > MatSetValues() > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="iso-8859-1" > > On Wed, Nov 16, 2011 at 10:09 AM, PEREZ CERQUERA MANUEL RICARDO < > manuel.perezcerquera at polito.it> wrote: > > > Hi all, > > > > I have a petsc code which works for a certain number of unknowns, I mean > > it works with a matrix of 12000X 12000 elements, then when I increase the > > Number of unknowns to 25000 X 25000 I got this error at certain point in > > one of the MatSetValues() function, I don't know which are the possible > > causes, I have enough memory to handle it. Could you give me some ideas > of > > whats going on? . Thanks > > > > It appears that you are overflowing the integer offsets. I recommend trying > this either: > > a) With a 64-bit OS > > or if you cannot upgrade the machine > > b) Configuring with --with-64-bit-indices > > Thanks, > > Matt > > > > [0]PETSC ERROR: Fortran Pause - Enter command or to continue. > > --------------------- Error Message > ------------------------------**------ > > [0]PETSC ERROR: Arguments are incompatible! > > [0]PETSC ERROR: Memory regions overlap: either use PetscMemmov() > > or make sure your copy regions and lengths are correct. > > Length (bytes) 1217067760 first address -844745856 second > > address > > -1712377200! > > [0]PETSC ERROR: ------------------------------** > > ------------------------------**---- > > -------- > > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 > > CDT 20 > > 11 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. 
> > [0]PETSC ERROR: ------------------------------** > > ------------------------------**---- > > -------- > > [0]PETSC ERROR: C:\Documents and Settings\d022117\Desktop\** > > MPIRunsPatrju\patreju1 > > .1.exe on a arch-mswi named GVSRV by d022117 Tue Nov 15 19:00:13 2011 > > [0]PETSC ERROR: Libraries linked from /home/d022117/petsc-3.2-p5/** > > arch-mswin-cxx- > > debug/lib > > [0]PETSC ERROR: Configure run at Mon Nov 7 13:06:56 2011 > > [0]PETSC ERROR: Configure options --with-cc="win32fe cl" > > --with-fc="win32fe ifor > > t" --with-cxx="win32fe cl" --download-f-blas-lapack=1 > > --with-scalar-type=complex > > --with-clanguage=cxx --useThreads=0 > > [0]PETSC ERROR: ------------------------------** > > ------------------------------**---- > > -------- > > [0]PETSC ERROR: PetscMemcpy() line 1779 in src/mat/impls/aij/seq/e:\** > > users\manuel > > \phd\cygwin\home\d022117\**petsc-3.2-p5\include\petscsys.**h > > [0]PETSC ERROR: MatSetValues_SeqAIJ() line 331 in > > src/mat/impls/aij/seq/E:\Users > > \Manuel\Phd\Cygwin\home\**d022117\PETSC-~2.2-P\src\mat\** > > impls\aij\seq\aij.c > > [0]PETSC ERROR: MatSetValues() line 1115 in src/mat/interface/E:\Users\** > > Manuel\Ph > > d\Cygwin\home\d022117\PETSC-~**2.2-P\src\mat\INTERF~1\matrix.**c > > > > job aborted: > > rank: node: exit code[: error message] > > 0: gvsrv.delen.polito.it: 1: process 0 exited without calling finalize > > > > Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student > > Antenna and EMC Lab (LACE) > > Istituto Superiore Mario Boella (ISMB) > > Politecnico di Torino > > Via Pier Carlo Boggio 61, Torino 10138, Italy > > Email: manuel.perezcerquera at polito.it > > Phone: +39 0112276704 > > Fax: +39 011 2276 299 > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111116/f98593ae/attachment-0001.htm > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 35, Issue 52 > ******************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 27861 bytes Desc: not available URL: From sean at mcs.anl.gov Wed Nov 16 13:36:17 2011 From: sean at mcs.anl.gov (Sean Farley) Date: Wed, 16 Nov 2011 13:36:17 -0600 Subject: [petsc-users] petsc-users Digest, Vol 35, Issue 52 In-Reply-To: References: Message-ID: > > Please see the attachment. > Two things: 1) Satish asked you to send the configure.log to petsc-maint at mcs.anl.gov, not petsc-users 2) In the future, please to not reply to a "Digest" email, it makes it terribly difficult for us to understand what you're replying to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 16 13:36:52 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 16 Nov 2011 13:36:52 -0600 Subject: [petsc-users] petsc-users Digest, Vol 35, Issue 52 In-Reply-To: References: Message-ID: On Wed, Nov 16, 2011 at 1:31 PM, Yixun Liu wrote: > Hi, > Please see the attachment. 
> Your compiler does not work (it returns error code from the compile). Perhaps you have not correctly set environment variables? Matt > > Yixun > > On Wed, Nov 16, 2011 at 1:00 PM, wrote: > >> Send petsc-users mailing list submissions to >> petsc-users at mcs.anl.gov >> >> To subscribe or unsubscribe via the World Wide Web, visit >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users >> or, via email, send a message with subject or body 'help' to >> petsc-users-request at mcs.anl.gov >> >> You can reach the person managing the list at >> petsc-users-owner at mcs.anl.gov >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of petsc-users digest..." >> >> >> Today's Topics: >> >> 1. Re: configuration error on Windows with VS2005 (Satish Balay) >> 2. Re: ERROR Arguments are incompatible in MatSetValues() >> (Matthew Knepley) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Wed, 16 Nov 2011 10:21:29 -0600 (CST) >> From: Satish Balay >> Subject: Re: [petsc-users] configuration error on Windows with VS2005 >> To: PETSc users list >> Message-ID: >> Content-Type: TEXT/PLAIN; charset=US-ASCII >> >> Please send the corresponding configure.log to petsc-maint at mcs.anl.gov >> >> Satish >> >> On Wed, 16 Nov 2011, Yixun Liu wrote: >> >> > Hi, >> > I have a configuration error as I install PETSc. >> > >> > According to the installation instructions, I first *Setup cygwin bash >> > shell with Working Compilers:* >> > >> > C:\Program Files (x86)\Microsoft Visual Studio >> 8\VC>c:\cygwin\bin\bash.exe >> > --login >> > >> > I got the following message: >> > >> > Your group is currently "mkpasswd". This indicates that your >> > gid is not in /etc/group and your uid is not in /etc/passwd. >> > The /etc/passwd (and possibly /etc/group) files should be rebuilt. >> > See the man pages for mkpasswd and mkgroup then, for example, run >> > mkpasswd -l [-d] > /etc/passwd >> > mkgroup -l [-d] > /etc/group >> > Note that the -d switch is necessary for domain users. >> > >> > Then I run the commands: >> > mkpasswd -l > /etc/passwd and >> > mkpasswd -l > /etc/group >> > >> > at last, as I run command cl, I got message: >> > >> > Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 >> for >> > 80x86 >> > Copyright (C) Microsoft Corporation. All rights reserved. >> > usage: cl [ option... ] filename... [ /link linkoption... ] >> > >> > So, I think it works now. >> > >> > Then I configure PETSc with the following command, but get the error "C >> > compiler you provided with -with-cc=win32fe cl does not work". >> > >> > >> > liuy14 at CC1DR1C515W05 /cygdrive/c/Yixun/VC/petsc-3.2-p5# ./configure >> > --with-cc='win32fe cl' --with-fc='win32fe ifort' --with-cxx='win 32fe >> cl' >> > --download-f-blas-lapack=1 >> > >> > >> =============================================================================== >> > Configuring PETSc to compile on your system >> > >> =============================================================================== >> > TESTING: configureExternalPackagesDir from >> > config.framework(config/BuildSystem/c >> > TESTING: configureDebuggers from >> > PETSc.utilities.debuggers(config/PETSc/utilitie >> > TESTING: configureCMake from >> > PETSc.utilities.CMake(config/PETSc/utilities/CMake. 
>> > TESTING: configureCLanguage from >> > PETSc.utilities.languages(config/PETSc/utilitie >> > TESTING: configureLanguageSupport from >> > PETSc.utilities.languages(config/PETSc/ut >> > TESTING: configureExternC from >> > PETSc.utilities.languages(config/PETSc/utilities/ >> > TESTING: configureFortranLanguage from >> > PETSc.utilities.languages(config/PETSc/ut >> > TESTING: configureMake from >> > config.programs(config/BuildSystem/config/programs.p >> > TESTING: configureMkdir from >> > config.programs(config/BuildSystem/config/programs. >> > TESTING: configurePrograms from >> > config.programs(config/BuildSystem/config/progra >> > TESTING: configureMercurial from >> > config.sourceControl(config/BuildSystem/config/ >> > TESTING: configureCVS from >> > config.sourceControl(config/BuildSystem/config/source >> > TESTING: configureSubversion from >> > config.sourceControl(config/BuildSystem/config >> > TESTING: configureDirectories from >> > PETSc.utilities.petscdir(config/PETSc/utiliti >> > TESTING: configureExternalPackagesDir from >> > PETSc.utilities.petscdir(config/PETSc >> > TESTING: configureInstallationMethod from >> > PETSc.utilities.petscdir(config/PETSc/ >> > TESTING: configureETags from >> > PETSc.utilities.Etags(config/PETSc/utilities/Etags. >> > TESTING: getDatafilespath from >> > PETSc.utilities.dataFilesPath(config/PETSc/utilit >> > TESTING: resetEnvCompilers from >> > config.setCompilers(config/BuildSystem/config/se >> > TESTING: checkMPICompilerOverride from >> > config.setCompilers(config/BuildSystem/co >> > TESTING: checkVendor from >> > config.setCompilers(config/BuildSystem/config/setCompi >> > TESTING: checkInitialFlags from >> > config.setCompilers(config/BuildSystem/config/se >> > TESTING: checkCCompiler from >> > config.setCompilers(config/BuildSystem/config/setCo >> > File >> > >> "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers >> > .py", line 508, in checkCCompiler >> > self.checkCompiler('C') >> > File >> > >> "/cygdrive/c/Yixun/VC/petsc-3.2-p5/config/BuildSystem/config/setCompilers >> > .py", line 410, in checkCompiler >> > raise RuntimeError(msg) >> > >> ******************************************************************************* >> > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >> for >> > detail >> > s): >> > >> ------------------------------------------------------------------------------- >> > C compiler you provided with -with-cc=win32fe cl does not work >> > >> ******************************************************************************* >> > >> > >> > Thank you for your help. >> > >> > Best, >> > >> > Yixun >> > >> >> >> >> ------------------------------ >> >> Message: 2 >> Date: Wed, 16 Nov 2011 11:33:13 -0600 >> From: Matthew Knepley >> Subject: Re: [petsc-users] ERROR Arguments are incompatible in >> MatSetValues() >> To: PETSc users list >> Message-ID: >> > Zk4Cf-Q at mail.gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> On Wed, Nov 16, 2011 at 10:09 AM, PEREZ CERQUERA MANUEL RICARDO < >> manuel.perezcerquera at polito.it> wrote: >> >> > Hi all, >> > >> > I have a petsc code which works for a certain number of unknowns, I mean >> > it works with a matrix of 12000X 12000 elements, then when I increase >> the >> > Number of unknowns to 25000 X 25000 I got this error at certain point in >> > one of the MatSetValues() function, I don't know which are the possible >> > causes, I have enough memory to handle it. Could you give me some ideas >> of >> > whats going on? . 
Thanks >> > >> >> It appears that you are overflowing the integer offsets. I recommend >> trying >> this either: >> >> a) With a 64-bit OS >> >> or if you cannot upgrade the machine >> >> b) Configuring with --with-64-bit-indices >> >> Thanks, >> >> Matt >> >> >> > [0]PETSC ERROR: Fortran Pause - Enter command or to continue. >> > --------------------- Error Message >> ------------------------------**------ >> > [0]PETSC ERROR: Arguments are incompatible! >> > [0]PETSC ERROR: Memory regions overlap: either use PetscMemmov() >> > or make sure your copy regions and lengths are correct. >> > Length (bytes) 1217067760 first address -844745856 second >> > address >> > -1712377200! >> > [0]PETSC ERROR: ------------------------------** >> > ------------------------------**---- >> > -------- >> > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 >> 13:45:54 >> > CDT 20 >> > 11 >> > [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> > [0]PETSC ERROR: See docs/index.html for manual pages. >> > [0]PETSC ERROR: ------------------------------** >> > ------------------------------**---- >> > -------- >> > [0]PETSC ERROR: C:\Documents and Settings\d022117\Desktop\** >> > MPIRunsPatrju\patreju1 >> > .1.exe on a arch-mswi named GVSRV by d022117 Tue Nov 15 19:00:13 2011 >> > [0]PETSC ERROR: Libraries linked from /home/d022117/petsc-3.2-p5/** >> > arch-mswin-cxx- >> > debug/lib >> > [0]PETSC ERROR: Configure run at Mon Nov 7 13:06:56 2011 >> > [0]PETSC ERROR: Configure options --with-cc="win32fe cl" >> > --with-fc="win32fe ifor >> > t" --with-cxx="win32fe cl" --download-f-blas-lapack=1 >> > --with-scalar-type=complex >> > --with-clanguage=cxx --useThreads=0 >> > [0]PETSC ERROR: ------------------------------** >> > ------------------------------**---- >> > -------- >> > [0]PETSC ERROR: PetscMemcpy() line 1779 in src/mat/impls/aij/seq/e:\** >> > users\manuel >> > \phd\cygwin\home\d022117\**petsc-3.2-p5\include\petscsys.**h >> > [0]PETSC ERROR: MatSetValues_SeqAIJ() line 331 in >> > src/mat/impls/aij/seq/E:\Users >> > \Manuel\Phd\Cygwin\home\**d022117\PETSC-~2.2-P\src\mat\** >> > impls\aij\seq\aij.c >> > [0]PETSC ERROR: MatSetValues() line 1115 in >> src/mat/interface/E:\Users\** >> > Manuel\Ph >> > d\Cygwin\home\d022117\PETSC-~**2.2-P\src\mat\INTERF~1\matrix.**c >> > >> > job aborted: >> > rank: node: exit code[: error message] >> > 0: gvsrv.delen.polito.it: 1: process 0 exited without calling finalize >> > >> > Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student >> > Antenna and EMC Lab (LACE) >> > Istituto Superiore Mario Boella (ISMB) >> > Politecnico di Torino >> > Via Pier Carlo Boggio 61, Torino 10138, Italy >> > Email: manuel.perezcerquera at polito.it >> > Phone: +39 0112276704 >> > Fax: +39 011 2276 299 >> > >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> URL: < >> http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111116/f98593ae/attachment-0001.htm >> > >> >> ------------------------------ >> >> _______________________________________________ >> petsc-users mailing list >> petsc-users at mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users >> >> >> End of petsc-users Digest, Vol 35, Issue 52 >> ******************************************* >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckontzialis at lycos.com Thu Nov 17 05:33:55 2011 From: ckontzialis at lycos.com (Konstantinos Kontzialis) Date: Thu, 17 Nov 2011 13:33:55 +0200 Subject: [petsc-users] sundials and ts Message-ID: <4EC4F123.4060603@lycos.com> Dear all, I want to use sundials with -ts_sundials_type adams, but I get the following error: Timestep 0: dt = 0.01, T = 0, Res[rho] = 7.80568e-18, Res[rhou] = 0.0258457, Res[rhov] = 1.25e-12, Res[E] = 4.12368e-09, CFL = 1999.99 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Vector is not ghosted! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./hoac on a linux-gnu named PlusSodaL by kontzialis Thu Nov 17 13:32:48 2011 [0]PETSC ERROR: Libraries linked from /home/kontzialis/petsc-3.2-p5/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Sat Nov 5 20:58:12 2011 [0]PETSC ERROR: Configure options --with-debugging=1 ---with-mpi-dir=/usr/lib64/mpich2/bin --with-shared-libraries --with-shared-libraries --with-large-file-io=1 --with-precision=double --with-blacs=1 --download-blacs=yes --download-f-blas-lapack=yes --with-plapack=1 --download-plapack=yes --with-scalapack=1 --download-scalapack=yes --with-superlu=1 --download-superlu=yes --with-superlu_dist=1 --download-superlu_dist=yes --with-ml=1 --download-ml=yes --with-umfpack=1 --download-umfpack=yes --with-mpi=1 --download-mpich=1 --with-sundials=1 --download-sundials=1 --with-parmetis=1 --download-parmetis=1 --with-hypre=1 --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecGhostUpdateBegin() line 170 in /home/kontzialis/petsc-3.2-p5/src/vec/vec/impls/mpi/commonmpvec.c [0]PETSC ERROR: residual() line 43 in "unknowndirectory/"../src/residual.c [0]PETSC ERROR: base_residual_implicit() line 28 in "unknowndirectory/"../src/base_residual_implicit.c [0]PETSC ERROR: TSComputeIFunction() line 339 in /home/kontzialis/petsc-3.2-p5/src/ts/interface/ts.c [0]PETSC ERROR: TSFunction_Sundials() line 99 in /home/kontzialis/petsc-3.2-p5/src/ts/impls/implicit/sundials/sundials.c application called MPI_Abort(comm=0x84000000, 62) - process 0 [cli_0]: aborting job: application called MPI_Abort(comm=0x84000000, 62) - process 0 ==15195== ==15195== HEAP SUMMARY: ==15195== in use at exit: 3,110,164 bytes in 18,559 blocks ==15195== total heap usage: 36,568 allocs, 18,009 frees, 300,373,551 
bytes allocated ==15195== ==15195== LEAK SUMMARY: ==15195== definitely lost: 39,298 bytes in 24 blocks ==15195== indirectly lost: 24 bytes in 3 blocks ==15195== possibly lost: 0 bytes in 0 blocks ==15195== still reachable: 3,070,842 bytes in 18,532 blocks ==15195== suppressed: 0 bytes in 0 blocks ==15195== Rerun with --leak-check=full to see details of leaked memory ==15195== ==15195== For counts of detected and suppressed errors, rerun with: -v ==15195== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 6 from 6) Any suggestions? Thank you, Kostas From jedbrown at mcs.anl.gov Thu Nov 17 05:56:45 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 17 Nov 2011 05:56:45 -0600 Subject: [petsc-users] sundials and ts In-Reply-To: <4EC4F123.4060603@lycos.com> References: <4EC4F123.4060603@lycos.com> Message-ID: On Thu, Nov 17, 2011 at 05:33, Konstantinos Kontzialis < ckontzialis at lycos.com> wrote: > [0]PETSC ERROR: VecGhostUpdateBegin() line 170 in > /home/kontzialis/petsc-3.2-p5/**src/vec/vec/impls/mpi/**commonmpvec.c > [0]PETSC ERROR: residual() line 43 in "unknowndirectory/"../src/** > residual.c > [0]PETSC ERROR: base_residual_implicit() line 28 in > "unknowndirectory/"../src/**base_residual_implicit.c > [0]PETSC ERROR: TSComputeIFunction() line 339 in > /home/kontzialis/petsc-3.2-p5/**src/ts/interface/ts.c > [0]PETSC ERROR: TSFunction_Sundials() line 99 in > /home/kontzialis/petsc-3.2-p5/**src/ts/impls/implicit/** > sundials/sundials.c > We can't pass ghosted Vecs through the Sundials interface. You'll have to copy into a ghosted work vector if you use VecGhost in your function. Also, if your IFunction does not have the form G(x,xdot) = xdot + F(x) (that is, if dG/dxdot is not the identitiy), then this method will not produce correct answers. Unfortunately, this is a limitation of the method/implementation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.schmidt at utah.edu Thu Nov 17 10:52:29 2011 From: john.schmidt at utah.edu (John Schmidt) Date: Thu, 17 Nov 2011 09:52:29 -0700 Subject: [petsc-users] trouble installing petsc 3.1-p8 on centos 4.8 Message-ID: <201111170952.29958.john.schmidt@utah.edu> Hi, I have a collegue that is trying to install petsc 3.1-p8 on centos 4.8. Centos 4.8 is using the gcc-3.4.5 compilers. He installed openmpi-1.4.4 and that seems to work with our Uintah software, however, he is having issues with the petsc build. Attached is his configure.log that hopefully will help in diagnosing the problem. Any help is greatly appreciated. Thanks, John Schmidt john.schmidt at utah.edu -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log.gz Type: application/x-gzip Size: 169546 bytes Desc: not available URL: From balay at mcs.anl.gov Thu Nov 17 10:59:41 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 17 Nov 2011 10:59:41 -0600 (CST) Subject: [petsc-users] trouble installing petsc 3.1-p8 on centos 4.8 In-Reply-To: <201111170952.29958.john.schmidt@utah.edu> References: <201111170952.29958.john.schmidt@utah.edu> Message-ID: Configure Options: --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-shared --with-debugging=0 --with-batch --with-mpi-dir=/home/edward/MPM/openmpi-1.4.4/build2 > Must give a default value for known-mpi-shared since executables cannot be run Well - as the message says - configure needs the user to tell it if mpi libraries are shared or not. i.e specify -known-mpi-shared=1 [or 0]. 
When --with-batch option is used - configure can't do 'run tests' to determin this itself. But I think the primary issue is - the openmpi install is broken. Hence you might gotten a message about - "can't run binaries - suggest using --with-batch" This is because oenmpi install is broken until you set the following [as per openmpi installation instructions] export LD_LIBRARY_PATH=/home/edward/MPM/openmpi-1.4.4/build2/lib petsc-3.2 works arround this user-unfriendly feature. Satish On Thu, 17 Nov 2011, John Schmidt wrote: > Hi, > > I have a collegue that is trying to install petsc 3.1-p8 on centos 4.8. > Centos 4.8 is using the gcc-3.4.5 compilers. He installed openmpi-1.4.4 and > that seems to work with our Uintah software, however, he is having issues with > the petsc build. Attached is his configure.log that hopefully will help in > diagnosing the problem. Any help is greatly appreciated. > > Thanks, > > John Schmidt > john.schmidt at utah.edu > From john.schmidt at utah.edu Thu Nov 17 11:10:53 2011 From: john.schmidt at utah.edu (John Schmidt) Date: Thu, 17 Nov 2011 10:10:53 -0700 Subject: [petsc-users] trouble installing petsc 3.1-p8 on centos 4.8 In-Reply-To: References: <201111170952.29958.john.schmidt@utah.edu> Message-ID: <201111171010.53091.john.schmidt@utah.edu> Hi, Thanks so much for this explanation. Right now our Uintah software doesn't support the petsc 3.2 due to some slight api changes that have been made, so we are using an older version of petsc. Hopefully, Edward will be able to get petsc built with your suggestions. Thanks once again. John On Thursday 17 November 2011 9:59:41 AM Satish Balay wrote: > Configure Options: --configModules=PETSc.Configure > --optionsModule=PETSc.compilerOptions --with-shared --with-debugging=0 > --with-batch --with-mpi-dir=/home/edward/MPM/openmpi-1.4.4/build2 > > > Must give a default value for known-mpi-shared since executables cannot > > be run > > Well - as the message says - configure needs the user to tell it if > mpi libraries are shared or not. i.e specify -known-mpi-shared=1 [or > 0]. When --with-batch option is used - configure can't do 'run tests' > to determin this itself. > > But I think the primary issue is - the openmpi install is > broken. Hence you might gotten a message about - "can't run binaries - > suggest using --with-batch" > > This is because oenmpi install is broken until you set the following > [as per openmpi installation instructions] > > export LD_LIBRARY_PATH=/home/edward/MPM/openmpi-1.4.4/build2/lib > > petsc-3.2 works arround this user-unfriendly feature. > > Satish > > On Thu, 17 Nov 2011, John Schmidt wrote: > > Hi, > > > > I have a collegue that is trying to install petsc 3.1-p8 on centos 4.8. > > Centos 4.8 is using the gcc-3.4.5 compilers. He installed openmpi-1.4.4 > > and that seems to work with our Uintah software, however, he is having > > issues with the petsc build. Attached is his configure.log that > > hopefully will help in diagnosing the problem. Any help is greatly > > appreciated. > > > > Thanks, > > > > John Schmidt > > john.schmidt at utah.edu From rongliang.chan at gmail.com Thu Nov 17 11:17:11 2011 From: rongliang.chan at gmail.com (Rongliang Chen) Date: Thu, 17 Nov 2011 10:17:11 -0700 Subject: [petsc-users] KSPGMRESOrthog costs too much time Message-ID: Hi All, In my log_summary output, I found that nearly 80% of the total time is spent on KSPGMRESOrthog. I think this does not make sense ( the log_summary output followed). Who has any idea about this? 
Another question, I am using the two-level asm precondtioner. On the coarse level I use one-level asm preconditioned GMRES to solve a coarse problem. So both the fine level solver and coarse level solver call the function KSPGMRESOrthog. In the log_summary output, I just know the total time spent on KSPGMRESOrthog and how can I know how much time is spent on the coarse level KSPGMRESOrthog and how much is spent on fine level KSPGMRESOrthog? Thanks. Best, Rongliang ------------------------------------------------------------------------------------------------------------------------ ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./joab on a Janus-nod named node1777 with 1024 processors, by ronglian Thu Nov 17 00:32:04 2011 Using Petsc Release Version 3.2.0, Patch 4, Sun Oct 23 12:23:18 CDT 2011 Max Max/Min Avg Total Time (sec): 1.162e+03 1.00001 1.162e+03 Objects: 6.094e+03 1.00099 6.090e+03 Flops: 6.284e+11 81.61246 4.097e+10 4.195e+13 Flops/sec: 5.410e+08 81.61201 3.527e+07 3.612e+10 MPI Messages: 4.782e+06 305.55857 3.053e+05 3.126e+08 MPI Message Lengths: 1.018e+10 254.67349 2.106e+03 6.583e+11 MPI Reductions: 2.079e+05 1.00003 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.1615e+03 100.0% 4.1953e+13 100.0% 3.126e+08 100.0% 2.106e+03 100.0% 2.079e+05 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage MatMult 102148 1.0 6.4223e+0277.1 3.35e+1014.0 1.3e+08 9.8e+02 0.0e+00 4 12 43 20 0 4 12 43 20 0 7698 MatMultTranspose 2286 1.0 1.7585e+00 4.8 4.07e+08 1.5 7.4e+06 1.1e+03 0.0e+00 0 1 2 1 0 0 1 2 1 0 197783 MatSolve 9754141.2 8.8720e+02283.1 5.69e+11199.6 0.0e+00 0.0e+00 0.0e+00 5 76 0 0 0 5 76 0 0 0 35949 MatLUFactorSym 7 1.0 8.4092e-0119.6 0.00e+00 0.0 0.0e+00 0.0e+00 2.1e+01 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 28 1.0 1.1228e+0131.9 7.81e+0919.7 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 95551 MatAssemblyBegin 168 1.0 2.3209e+0130.3 0.00e+00 0.0 4.0e+05 3.3e+04 2.8e+02 2 0 0 2 0 2 0 0 2 0 0 MatAssemblyEnd 168 1.0 3.5127e+01 1.0 0.00e+00 0.0 7.0e+04 2.7e+02 2.2e+02 3 0 0 0 0 3 0 0 0 0 0 MatGetRowIJ 7 2.3 2.0276e-0215.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetSubMatrice 28 1.0 1.8989e+00 4.4 0.00e+00 0.0 3.4e+05 3.5e+04 1.1e+02 0 0 0 2 0 0 0 0 2 0 0 MatGetOrdering 7 2.3 4.9773e-0119.7 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01 0 0 0 0 0 0 0 0 0 0 0 MatIncreaseOvrlp 1 1.0 1.4734e-01 1.4 0.00e+00 0.0 6.9e+04 4.9e+02 8.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatPartitioning 1 1.0 9.0198e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 70 1.0 1.2433e-01 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecDot 25 1.0 1.7762e-02 7.7 1.67e+05 2.7 0.0e+00 0.0e+00 2.5e+01 0 0 0 0 0 0 0 0 0 0 4775 VecMDot 95252 1.0 1.0035e+0343.9 1.35e+1017.9 0.0e+00 0.0e+00 9.5e+04 78 4 0 0 46 78 4 0 0 46 1751 VecNorm 97622 1.0 3.8131e+01 2.4 5.13e+0831.3 0.0e+00 0.0e+00 9.8e+04 3 0 0 0 47 3 0 0 0 47 1353 VecScale 97567 1.0 3.4131e-0113.9 2.56e+0831.5 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 75343 VecCopy 9260 1.0 4.8895e-0221.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 211118 1.0 3.0709e+0072.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 6963 1.0 9.5037e-02 3.5 5.61e+07 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 437314 VecWAXPY 2319 1.0 6.7898e-02 2.9 1.20e+07 1.5 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 150007 VecMAXPY 97567 1.0 1.1990e+0128.2 1.40e+1018.1 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 150771 VecAssemblyBegin 4648 1.0 2.5907e+01 5.0 0.00e+00 0.0 2.8e+06 8.9e+02 1.4e+04 1 0 1 0 7 1 0 1 0 7 0 VecAssemblyEnd 4648 1.0 9.1485e-0228.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 299601 1.0 1.6790e+0198.2 0.00e+00 0.0 3.1e+08 2.0e+03 0.0e+00 0 0 99 96 0 0 0 99 96 0 0 VecScatterEnd 299601 1.0 6.6906e+02175.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 4 0 0 0 0 4 0 0 0 0 0 VecReduceArith 8 1.0 2.5487e-04 3.3 5.83e+04 2.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 144027 VecReduceComm 4 1.0 9.3198e-04 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 95269 1.0 3.7148e+01 2.4 7.37e+08745.1 0.0e+00 0.0e+00 9.5e+04 3 0 0 0 46 3 0 0 0 46 1254 SNESSolve 4 1.0 1.1395e+03 
1.0 6.28e+1183.1 3.1e+08 2.1e+03 2.1e+05 98100100 99100 98100100 99100 36673 SNESLineSearch 25 1.0 3.6445e+00 1.0 1.48e+07 2.7 3.9e+05 9.8e+03 5.4e+02 0 0 0 1 0 0 0 0 1 0 2125 SNESFunctionEval 34 1.0 2.0350e+01 1.0 7.41e+06 2.8 4.1e+05 1.1e+04 5.2e+02 2 0 0 1 0 2 0 0 1 0 182 SNESJacobianEval 25 1.0 4.3534e+01 1.0 1.61e+06 1.5 2.7e+05 3.3e+04 2.4e+02 4 0 0 1 0 4 0 0 1 0 31 KSPGMRESOrthog 95252 1.0 1.0043e+0330.8 2.71e+1017.9 0.0e+00 0.0e+00 9.5e+04 78 8 0 0 46 78 8 0 0 46 3499 KSPSetup 56 1.0 3.2959e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 26 1.0 1.0764e+03 1.0 6.28e+1181.7 3.1e+08 2.1e+03 2.1e+05 93100100 98 99 93100100 98 99 38965 PCSetUp 74 1.0 1.4828e+0113.4 7.81e+0919.7 4.5e+05 2.6e+04 2.3e+02 0 3 0 2 0 0 3 0 2 0 72355 PCSetUpOnBlocks 2289 1.0 1.1525e+01720.0 7.19e+09781.3 0.0e+00 0.0e+00 2.7e+01 0 1 0 0 0 0 1 0 0 0 29053 PCApply 3956 1.0 1.0618e+03 1.0 6.18e+11119.7 3.0e+08 2.1e+03 2.0e+05 89 91 95 93 96 89 91 95 93 96 36057 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Matrix 48 48 326989216 0 Matrix Partitioning 1 1 640 0 Index Set 192 192 867864 0 IS L to G Mapping 2 2 59480 0 Vector 5781 5781 176334584 0 Vector Scatter 31 31 32612 0 Application Order 2 2 36714016 0 SNES 4 4 5088 0 Krylov Solver 14 14 39362888 0 Preconditioner 18 18 16120 0 Viewer 1 0 0 0 ======================================================================================================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Nov 17 11:43:34 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 17 Nov 2011 11:43:34 -0600 Subject: [petsc-users] KSPGMRESOrthog costs too much time In-Reply-To: References: Message-ID: On Thu, Nov 17, 2011 at 11:17, Rongliang Chen wrote: > In my log_summary output, I found that nearly 80% of the total time is > spent on KSPGMRESOrthog. I think this does not make sense ( the log_summary > output followed). Who has any idea about this? > Reductions are very expensive relative to everything else on the coarse level. You can try more levels or a different coarse level solver. You can also likely get away with solving the coarse problem inexactly. Alternatively, you can try getting Chebychev to help you out. Use -ksp_chebychev_estimate_eigenvalues to tune Chebychev (possibly to target a specific part of the spectrum). http://www.mcs.anl.gov/petsc/snapshots/petsc-dev/docs/manualpages/KSP/KSPChebychevSetEstimateEigenvalues.html > > Another question, I am using the two-level asm precondtioner. On the > coarse level I use one-level asm preconditioned GMRES to solve a coarse > problem. So both the fine level solver and coarse level solver call the > function KSPGMRESOrthog. In the log_summary output, I just know the total > time spent on KSPGMRESOrthog and how can I know how much time is spent on > the coarse level KSPGMRESOrthog and how much is spent on fine level > KSPGMRESOrthog? Thanks. > I assume you are using PCMG for this, so you can add -pc_mg_log to profile the time on each level independently. You seem to have many KSPGMRESOrthog steps per fine-level PCApply, so I think most of the time is in the coarse level. -------------- next part -------------- An HTML attachment was scrubbed... 
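If the two levels do not actually go through PCMG (so -pc_mg_log is not available), a
separate user-registered logging event around the coarse solve gives the same kind of
per-level breakdown. A rough, untested sketch (the event name, coarse_ksp and the
right-hand-side/solution vectors are placeholders for whatever the code already has):

    static PetscLogEvent COARSE_EVENT;

    /* once, during setup */
    ierr = PetscLogEventRegister("CoarseSolve", KSP_CLASSID, &COARSE_EVENT);CHKERRQ(ierr);

    /* wherever the coarse-level system is solved */
    ierr = PetscLogEventBegin(COARSE_EVENT, 0, 0, 0, 0);CHKERRQ(ierr);
    ierr = KSPSolve(coarse_ksp, b_coarse, x_coarse);CHKERRQ(ierr);   /* existing coarse solve */
    ierr = PetscLogEventEnd(COARSE_EVENT, 0, 0, 0, 0);CHKERRQ(ierr);

The time spent between the Begin/End pair, including its KSPGMRESOrthog calls, is then
accumulated on a separate CoarseSolve line in -log_summary.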
URL: From rongliang.chan at gmail.com Thu Nov 17 12:18:56 2011 From: rongliang.chan at gmail.com (Rongliang Chen) Date: Thu, 17 Nov 2011 11:18:56 -0700 Subject: [petsc-users] KSPGMRESOrthog costs too much time Message-ID: Hi Jed, Thank you for your reply. I am using the composite pc now not the PCMG: ierr = PCCompositeAddPC(finepc,PCSHELL);CHKERRQ(ierr); ierr = PCCompositeAddPC(finepc,PCASM);CHKERRQ(ierr); ierr = PCCompositeGetPC(finepc,0,&coarsesolve);CHKERRQ(ierr); ierr = PCShellSetContext(coarsesolve,ctx);CHKERRQ(ierr); ierr = PCShellSetApply(coarsesolve, CoarseSolvePCApply);CHKERRQ(ierr); ierr = PCCompositeGetPC(finepc,1,&asmpc);CHKERRQ(ierr); ierr = PCSetOptionsPrefix(asmpc,"fine_");CHKERRQ(ierr); ierr = PCSetFromOptions(asmpc);CHKERRQ(ierr); ierr = PCSetType(asmpc,PCASM);CHKERRQ(ierr); ierr = PCASMSetOverlap(asmpc,0);CHKERRQ(ierr); ierr = PCASMSetLocalSubdomains(asmpc,1,&grid->df_global_asm, PETSC_NULL);CHKERRQ(ierr); I just use two level method now and it is not very easy to try more levels since I am using unstructure meshes. I tried to solve the coarse level exactly by LU and it works well if the coarse problem is small. When the coarse problem is large, LU is also very slow (when the fine level problem is large, I can not use very small coarse level problem). I found that nearly 90% of the time is spent on the coarse level when the number of processor is large (np >512), so I want to know which step of the coarse level solver costs the most of the time. Thanks. Best, Rongliang > Message: 4 > Date: Thu, 17 Nov 2011 11:43:34 -0600 > From: Jed Brown > Subject: Re: [petsc-users] KSPGMRESOrthog costs too much time > To: PETSc users list > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > On Thu, Nov 17, 2011 at 11:17, Rongliang Chen >wrote: > > > In my log_summary output, I found that nearly 80% of the total time is > > spent on KSPGMRESOrthog. I think this does not make sense ( the > log_summary > > output followed). Who has any idea about this? > > > > Reductions are very expensive relative to everything else on the coarse > level. You can try more levels or a different coarse level solver. You can > also likely get away with solving the coarse problem inexactly. > > Alternatively, you can try getting Chebychev to help you out. Use > -ksp_chebychev_estimate_eigenvalues to tune Chebychev (possibly to target a > specific part of the spectrum). > > > http://www.mcs.anl.gov/petsc/snapshots/petsc-dev/docs/manualpages/KSP/KSPChebychevSetEstimateEigenvalues.html > > > > > > Another question, I am using the two-level asm precondtioner. On the > > coarse level I use one-level asm preconditioned GMRES to solve a coarse > > problem. So both the fine level solver and coarse level solver call the > > function KSPGMRESOrthog. In the log_summary output, I just know the total > > time spent on KSPGMRESOrthog and how can I know how much time is spent on > > the coarse level KSPGMRESOrthog and how much is spent on fine level > > KSPGMRESOrthog? Thanks. > > > > I assume you are using PCMG for this, so you can add -pc_mg_log to profile > the time on each level independently. You seem to have many KSPGMRESOrthog > steps per fine-level PCApply, so I think most of the time is in the coarse > level. > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111117/f6c1ca41/attachment.htm > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 35, Issue 56 > ******************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Nov 17 12:50:23 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 17 Nov 2011 12:50:23 -0600 Subject: [petsc-users] KSPGMRESOrthog costs too much time In-Reply-To: References: Message-ID: On Thu, Nov 17, 2011 at 12:18, Rongliang Chen wrote: > I am using the composite pc now not the PCMG: > > ierr = PCCompositeAddPC(finepc,PCSHELL);CHKERRQ(ierr); > ierr = PCCompositeAddPC(finepc,PCASM);CHKERRQ(ierr); > > ierr = PCCompositeGetPC(finepc,0,&coarsesolve);CHKERRQ(ierr); > ierr = PCShellSetContext(coarsesolve,ctx);CHKERRQ(ierr); > ierr = PCShellSetApply(coarsesolve, > CoarseSolvePCApply);CHKERRQ(ierr); > > ierr = PCCompositeGetPC(finepc,1,&asmpc);CHKERRQ(ierr); > ierr = PCSetOptionsPrefix(asmpc,"fine_");CHKERRQ(ierr); > ierr = PCSetFromOptions(asmpc);CHKERRQ(ierr); > > ierr = PCSetType(asmpc,PCASM);CHKERRQ(ierr); > ierr = PCASMSetOverlap(asmpc,0);CHKERRQ(ierr); > ierr = PCASMSetLocalSubdomains(asmpc,1,&grid->df_global_asm, > PETSC_NULL);CHKERRQ(ierr); > You can make your own event for the coarse level solve. Using PCMG instead of PCComposite would make your code more flexible, so you may want to consider doing it at some point. > > I just use two level method now and it is not very easy to try more levels > since I am using unstructure meshes. > I tried to solve the coarse level exactly by LU and it works well if the > coarse problem is small. When the coarse problem is large, LU is also very > slow (when the fine level problem is large, I can not use very small coarse > level problem). I found that nearly 90% of the time is spent on the coarse > level when the number of processor is large (np >512), so I want to know > which step of the coarse level solver costs the most of the time. Thanks. > This is a common problem. Depending on your equations, you might be able to solve the coarse-level problem using algebraic multigrid. Try -coarse_pc_type gamg (or however you set up the options prefixes; maybe also "hypre" or "ml" if you have those installed). -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Nov 17 13:02:25 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 17 Nov 2011 13:02:25 -0600 (CST) Subject: [petsc-users] trouble installing petsc 3.1-p8 on centos 4.8 In-Reply-To: <0C320B5E5936454AAFD72EA8457EE53E1A721563B6@pappcsmx-01.fpinnovations.lan> References: <201111170952.29958.john.schmidt@utah.edu> <0C320B5E5936454AAFD72EA8457EE53E1A721563B6@pappcsmx-01.fpinnovations.lan> Message-ID: On Thu, 17 Nov 2011, Edward Le wrote: > export LD_LIBRARY_PATH=/home/edward/MPM/openmpi-1.4.4/build2/lib > > ./config/configure.py --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-shared --known-mpi-shared=1 --with-debugging=0 --with-batch --with-mpi-dir=/home/edward/MPM/openmpi-1.4.4/build2 Sorry - should have said: with the correct LD_LIBRARY_PATH - you shouldn't need --with-batch --known-mpi-shared=1 options. 
>>>>>> Executing: /home/edward/MPM/openmpi-1.4.4/build2/bin/mpicc -show sh: icc -I/opt/openmpi-1.2.6/include -pthread -L/opt/openmpi-1.2.6/lib -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic -lnsl -lutil <<<<<< Looks like openmpi is installed in /opt/openmpi-1.2.6/ - so LD_LIBRARY_PATH=/opt/openmpi-1.2.6/lib and --with-mpi-dir=/opt/openmpi-1.2.6 is more appropriate. But I see some wierd version differences between the above 2. Perhaps you'll have not done 'make install' - after building openmpi? Or used the wrong prefix for it? > /home/edward/MPM/uintah/petsc-3.1-p8/linux-gnu-c-opt/lib/libpetsc.so: undefined reference to `mpi_conversion_fn_null_' Might be releated to the above. Or - just let petsc install mpi to avoid these issues.. ./configure --download-openmpi=1 --with-cc=icc --with-fc=ifort --with-shared=1 Satish From xiaohl at ices.utexas.edu Thu Nov 17 13:08:39 2011 From: xiaohl at ices.utexas.edu (xiaohl) Date: Thu, 17 Nov 2011 13:08:39 -0600 Subject: [petsc-users] questions In-Reply-To: <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> Message-ID: Hi I got the error [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Not a vector next in file! from the line "ierr = VecLoad(u,viewer); CHKERRQ(ierr);" Here is my code Vec u; ierr = DMDACreate3d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,-4,-4,-4, PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1, PETSC_NULL,PETSC_NULL,PETSC_NULL,&da); CHKERRQ(ierr); ierr = DMGetGlobalVector(da,&u); CHKERRQ(ierr); ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"permX.bin",FILE_MODE_READ,&viewer);CHKERRQ(ierr); ierr = VecLoad(u,viewer); CHKERRQ(ierr); I just followed the example "http://www.mcs.anl.gov/petsc/snapshots/petsc-dev/src/dm/examples/tutorials/ex9.c.html" Do you know what happened? Hailong On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl >> wrote: >> Hi >> >> I am going to implement cell center difference method for >> u = - K grad p >> div u = f >> where p is the pressure , u is the velocity, f is the source term. >> >> my goal is to assemble the matrix and test the performance of >> different linear solvers in parallel. >> >> my question is how can I read the input file for K where K is n*n >> tensor. >> >> MatLoad() > > Hm, I think you should use a DMDA with n*n size dof and then use > VecLoad() to load the entries of K. > > Barry > >> >> second one is that do you have any similar examples? >> >> Nothing with the mixed-discretization of the Laplacian. >> >> Matt >> >> Hailong >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener From mccomic at mcs.anl.gov Thu Nov 17 13:13:53 2011 From: mccomic at mcs.anl.gov (Mike McCourt) Date: Thu, 17 Nov 2011 13:13:53 -0600 (CST) Subject: [petsc-users] questions In-Reply-To: Message-ID: <562524929.38823.1321557233856.JavaMail.root@zimbra.anl.gov> This is complaining that what is stored in that file is not a vector. How was that binary file created? 
-Mike ----- Original Message ----- From: "xiaohl" To: "PETSc users list" Sent: Thursday, November 17, 2011 1:08:39 PM Subject: Re: [petsc-users] questions Hi I got the error [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Not a vector next in file! from the line "ierr = VecLoad(u,viewer); CHKERRQ(ierr);" Here is my code Vec u; ierr = DMDACreate3d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,-4,-4,-4, PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1, PETSC_NULL,PETSC_NULL,PETSC_NULL,&da); CHKERRQ(ierr); ierr = DMGetGlobalVector(da,&u); CHKERRQ(ierr); ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"permX.bin",FILE_MODE_READ,&viewer);CHKERRQ(ierr); ierr = VecLoad(u,viewer); CHKERRQ(ierr); I just followed the example "http://www.mcs.anl.gov/petsc/snapshots/petsc-dev/src/dm/examples/tutorials/ex9.c.html" Do you know what happened? Hailong On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl >> wrote: >> Hi >> >> I am going to implement cell center difference method for >> u = - K grad p >> div u = f >> where p is the pressure , u is the velocity, f is the source term. >> >> my goal is to assemble the matrix and test the performance of >> different linear solvers in parallel. >> >> my question is how can I read the input file for K where K is n*n >> tensor. >> >> MatLoad() > > Hm, I think you should use a DMDA with n*n size dof and then use > VecLoad() to load the entries of K. > > Barry > >> >> second one is that do you have any similar examples? >> >> Nothing with the mixed-discretization of the Laplacian. >> >> Matt >> >> Hailong >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener From xiaohl at ices.utexas.edu Thu Nov 17 13:35:39 2011 From: xiaohl at ices.utexas.edu (xiaohl) Date: Thu, 17 Nov 2011 13:35:39 -0600 Subject: [petsc-users] questions In-Reply-To: <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> Message-ID: Hi Mike I created the binary file from a ASCII file by C. How could I create a file which I could use for petsc with DM routine? Hailong On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl >> wrote: >> Hi >> >> I am going to implement cell center difference method for >> u = - K grad p >> div u = f >> where p is the pressure , u is the velocity, f is the source term. >> >> my goal is to assemble the matrix and test the performance of >> different linear solvers in parallel. >> >> my question is how can I read the input file for K where K is n*n >> tensor. >> >> MatLoad() > > Hm, I think you should use a DMDA with n*n size dof and then use > VecLoad() to load the entries of K. > > Barry > >> >> second one is that do you have any similar examples? >> >> Nothing with the mixed-discretization of the Laplacian. >> >> Matt >> >> Hailong >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. 
>> -- Norbert Wiener From xiaohl at ices.utexas.edu Thu Nov 17 13:55:14 2011 From: xiaohl at ices.utexas.edu (xiaohl) Date: Thu, 17 Nov 2011 13:55:14 -0600 Subject: [petsc-users] questions In-Reply-To: <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> Message-ID: <5f7189465d847f6f18f5ddbac5cc5f8c@ices.utexas.edu> Hi Mike I created the binary file from a ASCII file by C. How could I create a file which I could use for petsc with DM routine? Hailong This is complaining that what is stored in that file is not a vector. How was that binary file created? -Mike ----- Original Message ----- From: "xiaohl" To: "PETSc users list" Sent: Thursday, November 17, 2011 1:08:39 PM Subject: Re: [petsc-users] questions Hi I got the error [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Not a vector next in file! from the line "ierr = VecLoad(u,viewer); CHKERRQ(ierr);" Here is my code Vec u; ierr = DMDACreate3d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_STAR,-4,-4,-4, PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1, PETSC_NULL,PETSC_NULL,PETSC_NULL,&da); CHKERRQ(ierr); ierr = DMGetGlobalVector(da,&u); CHKERRQ(ierr); ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"permX.bin",FILE_MODE_READ,&viewer);CHKERRQ(ierr); ierr = VecLoad(u,viewer); CHKERRQ(ierr); I just followed the example "http://www.mcs.anl.gov/petsc/snapshots/petsc-dev/src/dm/examples/tutorials/ex9.c.html" Do you know what happened? Hailong On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith wrote: > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl >> wrote: >> Hi >> >> I am going to implement cell center difference method for >> u = - K grad p >> div u = f >> where p is the pressure , u is the velocity, f is the source term. >> >> my goal is to assemble the matrix and test the performance of >> different linear solvers in parallel. >> >> my question is how can I read the input file for K where K is n*n >> tensor. >> >> MatLoad() > > Hm, I think you should use a DMDA with n*n size dof and then use > VecLoad() to load the entries of K. > > Barry > >> >> second one is that do you have any similar examples? >> >> Nothing with the mixed-discretization of the Laplacian. >> >> Matt >> >> Hailong >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener From mccomic at mcs.anl.gov Thu Nov 17 13:59:15 2011 From: mccomic at mcs.anl.gov (Mike McCourt) Date: Thu, 17 Nov 2011 13:59:15 -0600 (CST) Subject: [petsc-users] questions In-Reply-To: <5f7189465d847f6f18f5ddbac5cc5f8c@ices.utexas.edu> Message-ID: <581765057.39058.1321559955320.JavaMail.root@zimbra.anl.gov> In order to use VecLoad, the binary file should have been generated by VecView: http://www.mcs.anl.gov/petsc/snapshots/petsc-current/docs/manualpages/Vec/VecView.html#VecView because there is information needed by PETSc to create a vector that you cannot provide independently. There are several formats for inputting a vector into PETSc, but unfortunately, I don't think you can roll your own binary file. 
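For example, a small untested converter along these lines, which reads the ASCII values
into a Vec and then writes it with VecView() through a binary viewer, produces a file
that VecLoad() accepts. The file names and the 4*4*4 size are placeholders, it is meant
to be run on a single process, and it assumes the ASCII values are in PETSc's natural
ordering (x index fastest), which is the ordering a DMDA vector is stored in when
written to or read from a binary file:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec            u;
      PetscViewer    viewer;
      PetscInt       i, N = 4*4*4;          /* nx*ny*nz of the DMDA */
      double         v;
      FILE           *fp;
      PetscErrorCode ierr;

      PetscInitialize(&argc, &argv, (char*)0, 0);
      ierr = VecCreate(PETSC_COMM_WORLD, &u);CHKERRQ(ierr);
      ierr = VecSetSizes(u, PETSC_DECIDE, N);CHKERRQ(ierr);
      ierr = VecSetFromOptions(u);CHKERRQ(ierr);

      fp = fopen("permX.txt", "r");          /* the original ASCII data */
      for (i = 0; i < N; i++) {
        if (fscanf(fp, "%lf", &v) != 1) break;
        ierr = VecSetValue(u, i, (PetscScalar)v, INSERT_VALUES);CHKERRQ(ierr);
      }
      fclose(fp);
      ierr = VecAssemblyBegin(u);CHKERRQ(ierr);
      ierr = VecAssemblyEnd(u);CHKERRQ(ierr);

      /* VecView() through a binary viewer writes the header VecLoad() expects */
      ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "permX.bin", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
      ierr = VecView(u, viewer);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
      ierr = VecDestroy(&u);CHKERRQ(ierr);
      PetscFinalize();
      return 0;
    }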
-Mike

From rongliang.chan at gmail.com Thu Nov 17 14:19:12 2011
From: rongliang.chan at gmail.com (Rongliang Chen)
Date: Thu, 17 Nov 2011 13:19:12 -0700
Subject: [petsc-users] KSPGMRESOrthog costs too much time
Message-ID: 

Hi Jed,

Thank you for your suggestions.
Best,

Rongliang
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ecoon at lanl.gov Thu Nov 17 14:21:51 2011
From: ecoon at lanl.gov (Ethan Coon)
Date: Thu, 17 Nov 2011 13:21:51 -0700
Subject: [petsc-users] questions
In-Reply-To: 
References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu>
	<7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov>
Message-ID: <1321561311.5867.5.camel@echo.lanl.gov>

On Thu, 2011-11-17 at 13:35 -0600, xiaohl wrote:
> Hi Mike
>
> I created the binary file from a ASCII file by C.
> How could I create a file which I could use for petsc with DM routine?
>

There are a few ways to do this, but the easiest options are --

1. write a C program that creates a DM and a Vec, sets the values of the
Vec, and calls VecView (like the example you quoted does).

2.
use matlab or python/numpy to generate arrays of the correct size/type/order, and then save those arrays to Petsc Binary format using the scripts in $PETSC_DIR/bin/matlab and $PETSC_DIR/bin/pythonscripts Ethan > Hailong > > On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith > wrote: > > On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: > > > >> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl > >> wrote: > >> Hi > >> > >> I am going to implement cell center difference method for > >> u = - K grad p > >> div u = f > >> where p is the pressure , u is the velocity, f is the source term. > >> > >> my goal is to assemble the matrix and test the performance of > >> different linear solvers in parallel. > >> > >> my question is how can I read the input file for K where K is n*n > >> tensor. > >> > >> MatLoad() > > > > Hm, I think you should use a DMDA with n*n size dof and then use > > VecLoad() to load the entries of K. > > > > Barry > > > >> > >> second one is that do you have any similar examples? > >> > >> Nothing with the mixed-discretization of the Laplacian. > >> > >> Matt > >> > >> Hailong > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which > >> their experiments lead. > >> -- Norbert Wiener > -- ------------------------------------ Ethan Coon Post-Doctoral Researcher Applied Mathematics - T-5 Los Alamos National Laboratory 505-665-8289 http://www.ldeo.columbia.edu/~ecoon/ ------------------------------------ From bsmith at mcs.anl.gov Thu Nov 17 14:31:52 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 17 Nov 2011 14:31:52 -0600 Subject: [petsc-users] questions In-Reply-To: <1321561311.5867.5.camel@echo.lanl.gov> References: <968ef386e6ec8346a0938d4183f739f2@ices.utexas.edu> <7A3039C2-5A22-4C09-B292-C4C1E7959888@mcs.anl.gov> <1321561311.5867.5.camel@echo.lanl.gov> Message-ID: <3FB3FB82-D453-4E79-8F9A-C3C52C147C2C@mcs.anl.gov> On Nov 17, 2011, at 2:21 PM, Ethan Coon wrote: > On Thu, 2011-11-17 at 13:35 -0600, xiaohl wrote: >> Hi Mike >> >> I created the binary file from a ASCII file by C. >> How could I create a file which I could use for petsc with DM routine? >> > > There are a few ways to do this, but the easiest options are -- > > 1. write a C program that creates a DM and a Vec, sets the values of the > Vec, and calls VecView (like the example you quoted does). > > 2. use matlab or python/numpy to generate arrays of the correct > size/type/order, and then save those arrays to Petsc Binary format using > the scripts in $PETSC_DIR/bin/matlab and $PETSC_DIR/bin/pythonscripts You can also write a binary file directly from C (or maybe Fortran) using the EXACT format as indicated in the manual page http://www.mcs.anl.gov/petsc/snapshots/petsc-current/docs/manualpages/Vec/VecLoad.html Barry > > > Ethan > >> Hailong >> >> On Wed, 2 Nov 2011 15:53:27 -0500, Barry Smith >> wrote: >>> On Nov 2, 2011, at 3:42 PM, Matthew Knepley wrote: >>> >>>> On Wed, Nov 2, 2011 at 8:38 PM, xiaohl >>>> wrote: >>>> Hi >>>> >>>> I am going to implement cell center difference method for >>>> u = - K grad p >>>> div u = f >>>> where p is the pressure , u is the velocity, f is the source term. >>>> >>>> my goal is to assemble the matrix and test the performance of >>>> different linear solvers in parallel. >>>> >>>> my question is how can I read the input file for K where K is n*n >>>> tensor. 
>>>> >>>> MatLoad() >>> >>> Hm, I think you should use a DMDA with n*n size dof and then use >>> VecLoad() to load the entries of K. >>> >>> Barry >>> >>>> >>>> second one is that do you have any similar examples? >>>> >>>> Nothing with the mixed-discretization of the Laplacian. >>>> >>>> Matt >>>> >>>> Hailong >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which >>>> their experiments lead. >>>> -- Norbert Wiener >> > > -- > ------------------------------------ > Ethan Coon > Post-Doctoral Researcher > Applied Mathematics - T-5 > Los Alamos National Laboratory > 505-665-8289 > > http://www.ldeo.columbia.edu/~ecoon/ > ------------------------------------ > From Robert.Ellis at geosoft.com Thu Nov 17 15:54:55 2011 From: Robert.Ellis at geosoft.com (Robert Ellis) Date: Thu, 17 Nov 2011 21:54:55 +0000 Subject: [petsc-users] VecSetValues Message-ID: <18205E5ECD2A1A4584F2BFC0BCBDE95526ED2A6A@exchange.geosoft.com> Hello All, I have a troubling intermittent problem with the simple VecSetValues/VecAssemblyBegin functions after porting a robust long working application to a cloud platform. * I have 30M doubles on rank0 * I intend to assign them non sequentially among 32 processors, ranks 1-31. * On rank0 only I use VecSetValues(x,...) to make the assignment. So far everything is fine. * I call VecAssemblyBegin expecting this to distribute the values appropriately. Sometimes this works, but about 50% of the time I see errors, immediately on calling VecAssemblyBegin, of the following form: [23]PETSC ERROR: Fatal error in MPI_Allreduce: Other MPI error, error stack: MPI_Allreduce(919).........................: MPI_Allreduce(sbuf=0000000012DE29B0, rbuf=00000000069F6ED0, count=32, dtype=USER, op=0x98000000, comm=0x84000002) failed MPIR_Allreduce_impl(776)...................: MPIR_Allreduce_intra(220)..................: MPIR_Bcast_impl(1273)......................: MPIR_Bcast_intra(1107).....................: MPIR_Bcast_binomial(143)...................: MPIC_Recv(110).............................: MPIC_Wait(540).............................: MPIDI_CH3I_Progress(353)...................: MPID_nem_mpich2_blocking_recv(905).........: MPID_nem_newtcp_module_poll(37)............: MPID_nem_newtcp_module_connpoll(2655)......: recv_id_or_tmpvc_info_success_handler(1278): read from socket failed - No error --------------------- Error Message ------------------------------------ [23]PETSC ERROR: Out of memory. This could be due to allocating [23]PETSC ERROR: too large an object or bleeding by not properly [23]PETSC ERROR: destroying unneeded objects. [23]PETSC ERROR: Memory allocated 0 Memory used by process 0 [23]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [23]PETSC ERROR: Memory requested 18446744066053327000! [23]PETSC ERROR: ------------------------------------------------------------------------ [23]PETSC ERROR: Petsc Release Version 3.1.0, Patch 7, Mon Dec 20 14:26:37 CST 2010 [23]PETSC ERROR: See docs/changes/index.html for recent updates. [23]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [23]PETSC ERROR: See docs/in ... My questions are (1) has anybody seen anything like this type of VecAssemblyBegin error? or (2) is it likely that splitting the VecSetValue in smaller blocks will help? or (4) is it likely that moving to mpich2 1.4p1 would help? (3) any other thoughts? 
Thanks in advance, Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Nov 17 15:57:57 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 17 Nov 2011 15:57:57 -0600 Subject: [petsc-users] VecSetValues In-Reply-To: <18205E5ECD2A1A4584F2BFC0BCBDE95526ED2A6A@exchange.geosoft.com> References: <18205E5ECD2A1A4584F2BFC0BCBDE95526ED2A6A@exchange.geosoft.com> Message-ID: On Thu, Nov 17, 2011 at 3:54 PM, Robert Ellis wrote: > Hello All,**** > > ** ** > > I have a troubling intermittent problem with the simple > VecSetValues/VecAssemblyBegin functions after porting a robust long working > application to a cloud platform. **** > > ** ** > > **? **I have 30M doubles on rank0**** > > **? **I intend to assign them non sequentially among 32 > processors, ranks 1-31.**** > > **? **On rank0 only I use VecSetValues(x,...) to make the > assignment. So far everything is fine.**** > > **? **I call VecAssemblyBegin expecting this to distribute the > values appropriately.**** > > ** ** > > Sometimes this works, but about 50% of the time I see errors, immediately > on calling VecAssemblyBegin, of the following form:**** > > ** ** > > [23]PETSC ERROR: Fatal error in MPI_Allreduce: Other MPI > error, error stack:**** > > MPI_Allreduce(919).........................: > MPI_Allreduce(sbuf=0000000012DE29B0, rbuf=00000000069F6ED0, count=32, > dtype=USER, op=0x98000000, comm=0x84000002) failed**** > > MPIR_Allreduce_impl(776)...................:**** > > MPIR_Allreduce_intra(220)..................:**** > > MPIR_Bcast_impl(1273)......................:**** > > MPIR_Bcast_intra(1107).....................:**** > > MPIR_Bcast_binomial(143)...................:**** > > MPIC_Recv(110).............................:**** > > MPIC_Wait(540).............................:**** > > MPIDI_CH3I_Progress(353)...................:**** > > MPID_nem_mpich2_blocking_recv(905).........:**** > > MPID_nem_newtcp_module_poll(37)............:**** > > MPID_nem_newtcp_module_connpoll(2655)......:**** > > recv_id_or_tmpvc_info_success_handler(1278): read from > socket failed - No error**** > > --------------------- Error Message > ------------------------------------**** > > [23]PETSC ERROR: Out of memory. This could be due to > allocating**** > > [23]PETSC ERROR: too large an object or bleeding by not > properly**** > > [23]PETSC ERROR: destroying unneeded objects.**** > > [23]PETSC ERROR: Memory allocated 0 Memory used by process > 0**** > > [23]PETSC ERROR: Try running with -malloc_dump or > -malloc_log for info.**** > > [23]PETSC ERROR: Memory requested 18446744066053327000!*** > * > > [23]PETSC ERROR: > ------------------------------------------------------------------------** > ** > > [23]PETSC ERROR: Petsc Release Version 3.1.0, Patch 7, Mon > Dec 20 14:26:37 CST 2010**** > > [23]PETSC ERROR: See docs/changes/index.html for recent > updates.**** > > [23]PETSC ERROR: See docs/faq.html for hints about trouble > shooting.**** > > [23]PETSC ERROR: See docs/in ...**** > > ** ** > > My questions are (1) has anybody seen anything like this type of > VecAssemblyBegin error? or (2) is it likely that splitting the VecSetValue > in smaller blocks will help? or (4) is it likely that moving to mpich2 > 1.4p1 would help? (3) any other thoughts? > I would recommend interleaving your VecSetValues() with VecAssemblyBegin/End() calls. It certainly sounds like you are overflowing buffers in the MPI implementation. 
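
A rough sketch of the interleaving suggested here, assuming the 30M entries
live in arrays idx[] and vals[] on rank 0, x is the already created parallel
Vec, and ntotal (the total number of entries) and the chunk size are known on
every rank:

  PetscInt       chunk = 1000000, start, ncur;
  PetscErrorCode ierr;

  for (start = 0; start < ntotal; start += chunk) {   /* same trip count on every rank */
    if (!rank) {                                      /* only rank 0 holds the raw data */
      ncur = PetscMin(chunk, ntotal - start);
      ierr = VecSetValues(x, ncur, idx + start, vals + start, INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = VecAssemblyBegin(x);CHKERRQ(ierr);         /* collective, so every rank calls it */
    ierr = VecAssemblyEnd(x);CHKERRQ(ierr);
  }

Each flush bounds the amount of data sitting in the stash, so the messages the
assembly has to send stay small.
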
Matt > Thanks in advance,**** > > Rob **** > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Fri Nov 18 07:02:54 2011 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Fri, 18 Nov 2011 14:02:54 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method Message-ID: <4EC6577E.1020309@tu-dresden.de> In my current FETI-DP implementation, the solution of the Schur complement on the primal variables is done by an iterative solver. This works quite good, but for small and mid size 2D problems I would like to test it with direct assembling and inverting the Schur complement matrix. In my notation, the matrix is defined by S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi "Pi" are the primal and "B" the non-primal variables. K_BB is factorized with a (local) direct solver (umpfack or mumps). But how can I create a matrix from the last expression? Is there a way to do a matrix-matrix multiplication in PETSc, where the first matrix is the (implicit defined) dense inverse of a sparse matrix, and the second matrix is a sparse matrix? Or is it required to extract the rows of K_BPi in some way and to perform than a matrix-vector multiplication with inv(K_BB)? Thomas From jedbrown at mcs.anl.gov Fri Nov 18 07:31:12 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 18 Nov 2011 07:31:12 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <4EC6577E.1020309@tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> Message-ID: On Fri, Nov 18, 2011 at 07:02, Thomas Witkowski < thomas.witkowski at tu-dresden.de> wrote: > In my current FETI-DP implementation, the solution of the Schur complement > on the primal variables is done by an iterative solver. This works quite > good, but for small and mid size 2D problems I would like to test it with > direct assembling and inverting the Schur complement matrix. In my > notation, the matrix is defined by > > S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi > > "Pi" are the primal and "B" the non-primal variables. K_BB is factorized > with a (local) direct solver (umpfack or mumps). But how can I create a > matrix from the last expression? Is there a way to do a matrix-matrix > multiplication in PETSc, where the first matrix is the (implicit defined) > dense inverse of a sparse matrix, and the second matrix is a sparse matrix? > Or is it required to extract the rows of K_BPi in some way and to perform > than a matrix-vector multiplication with inv(K_BB)? > You should be able to construct the sparsity pattern of the resulting matrix, therefore you can color it and get the explicit operator by MatFDColoringApply() where the "base" vector is zero and the "function" is just MatMult(). There should probably be a Mat function that does this for you given a known sparsity pattern. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Thomas.Witkowski at tu-dresden.de Fri Nov 18 07:53:08 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Fri, 18 Nov 2011 14:53:08 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> Message-ID: <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> Zitat von Jed Brown : > On Fri, Nov 18, 2011 at 07:02, Thomas Witkowski < > thomas.witkowski at tu-dresden.de> wrote: > >> In my current FETI-DP implementation, the solution of the Schur complement >> on the primal variables is done by an iterative solver. This works quite >> good, but for small and mid size 2D problems I would like to test it with >> direct assembling and inverting the Schur complement matrix. In my >> notation, the matrix is defined by >> >> S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi >> >> "Pi" are the primal and "B" the non-primal variables. K_BB is factorized >> with a (local) direct solver (umpfack or mumps). But how can I create a >> matrix from the last expression? Is there a way to do a matrix-matrix >> multiplication in PETSc, where the first matrix is the (implicit defined) >> dense inverse of a sparse matrix, and the second matrix is a sparse matrix? >> Or is it required to extract the rows of K_BPi in some way and to perform >> than a matrix-vector multiplication with inv(K_BB)? >> > > You should be able to construct the sparsity pattern of the resulting > matrix, therefore you can color it and get the explicit operator by > MatFDColoringApply() where the "base" vector is zero and the "function" is > just MatMult(). > > There should probably be a Mat function that does this for you given a > known sparsity pattern. > Defining the sparsity pattern is no problem. But is there any documentation on MatFDColoringApply? The online function reference is quite short and is not mentioned in the manuel. So, I have no real idea how this is related to my question. Thomas From bsmith at mcs.anl.gov Fri Nov 18 12:24:57 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 18 Nov 2011 12:24:57 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <4EC6577E.1020309@tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> Message-ID: <5E0EE636-5C70-42A6-87C7-9BA0F842388D@mcs.anl.gov> http://www.mcs.anl.gov/petsc/documentation/faq.html#schurcomplement On Nov 18, 2011, at 7:02 AM, Thomas Witkowski wrote: > In my current FETI-DP implementation, the solution of the Schur complement on the primal variables is done by an iterative solver. This works quite good, but for small and mid size 2D problems I would like to test it with direct assembling and inverting the Schur complement matrix. In my notation, the matrix is defined by > > S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi > > "Pi" are the primal and "B" the non-primal variables. K_BB is factorized with a (local) direct solver (umpfack or mumps). But how can I create a matrix from the last expression? Is there a way to do a matrix-matrix multiplication in PETSc, where the first matrix is the (implicit defined) dense inverse of a sparse matrix, and the second matrix is a sparse matrix? Or is it required to extract the rows of K_BPi in some way and to perform than a matrix-vector multiplication with inv(K_BB)? 
> > Thomas From jedbrown at mcs.anl.gov Fri Nov 18 12:36:04 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 18 Nov 2011 12:36:04 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <5E0EE636-5C70-42A6-87C7-9BA0F842388D@mcs.anl.gov> References: <4EC6577E.1020309@tu-dresden.de> <5E0EE636-5C70-42A6-87C7-9BA0F842388D@mcs.anl.gov> Message-ID: On Fri, Nov 18, 2011 at 12:24, Barry Smith wrote: > http://www.mcs.anl.gov/petsc/documentation/faq.html#schurcomplement This FAQ is too old and needs to be updated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 18 12:42:15 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 18 Nov 2011 12:42:15 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> Message-ID: On Fri, Nov 18, 2011 at 07:53, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > Defining the sparsity pattern is no problem. But is there any > documentation on MatFDColoringApply? The online function reference is quite > short and is not mentioned in the manuel. So, I have no real idea how this > is related to my question. You can find an example in snes/examples/tutorials/ex15.c The point is that it can compute the Schur complement by coloring. In your case, you can do subdomain solves independently, so you can compute S = K_PiPi - K_PiB inv(K_BB) K_BPi on each subdomain using Tmp = inv(K_BB) K_BPi (via MatMatSolve) Tmp2 = K_PiB * Tmp (via MatMatMult) S = K_PiPi - Tmp2 (via MatAXPY) You would then assemble the resulting independent matrices into the global matrix. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Nov 18 12:50:55 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 18 Nov 2011 12:50:55 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <5E0EE636-5C70-42A6-87C7-9BA0F842388D@mcs.anl.gov> Message-ID: On Nov 18, 2011, at 12:36 PM, Jed Brown wrote: > On Fri, Nov 18, 2011 at 12:24, Barry Smith wrote: > http://www.mcs.anl.gov/petsc/documentation/faq.html#schurcomplement > > This FAQ is too old and needs to be updated. Someone broke all your fingers :-) From av.nova at gmail.com Fri Nov 18 15:49:43 2011 From: av.nova at gmail.com (NovA) Date: Sat, 19 Nov 2011 00:49:43 +0300 Subject: [petsc-users] building PETSc-3.2p5 with openMPI-1.5.4 under Windows Message-ID: Hello everybody! Recently I've tried to build PETSc-3.2p5 under WindowsXP-x64 with openMPI-1.5.4 binary package using Intel C++ 11.1 compiler. I've got couple of strange problems, but succeeded to resolve them. I just want to share the solution. Hope, this could help to improve documentation or configuration procedure. After a long stage of trial-and-error I managed to provide the ./configure.py with options that worked. Under cygwin the configuration lasts forever... Anyway, the configure stage finished successfully, but the building stage brought the following problems. 
(1) Building stops at src/sys/viewer/impls/ascii/filev.c with the syntax errors in mpi.h in the lines: OMPI_DECLSPEC MPI_Fint MPI_Comm_c2f(MPI_Comm comm); and OMPI_DECLSPEC MPI_Comm MPI_Comm_f2c(MPI_Fint comm); Tedious investigation showed that it was resulted from substitutions in petscfix.h: #define MPI_Comm_f2c(a) (a) #define MPI_Comm_c2f(a) (a) And these definitions came in turn from "TEST configureConversion" of configure.py. The test was indeed failed with unresolved external symbol "ompi_mpi_comm_world" (NOT the one it verifies). That just because not all openMPI libraries were specified in the command line (only -lmpi)... Therefore the very cause of the problem is the incorrect options for ./configure concerning MPI. I used "--with-mpi-dir=..." and "--with-cc="win32fe icl"" (win32fe wrapper can't handle mpicc.exe). I thought they should correctly configure openMPI includes and libraries, but they did not for the latter. Silently. So my solution is to remove --with-mpi-dir configure option and replace it with --with-mpi-include="$MPI_DIR/include", --with-mpi-lib="[libmpi.lib,libopen-pal.lib,libopen-rte.lib]" (no spaces), --CC_LINKER_FLAGS="-L$MPI_DIR/lib", --CFLAGS="-DOMPI_IMPORTS -DOPAL_IMPORTS -DORTE_IMPORTS". The values are taken from "mpicc.exe --showme". (2) Building stops at src/sys/error/fp.c with the errors that integer/pointer expression expected in the line if (feclearexcept(FE_ALL_EXCEPT)) ... It seems the reason it that "Intel C++ 11.1" provide "fenv.h" header which is detected by the ./configure and PETSC_HAVE_FENV_H is defined. But the function declared in intel's fenv.h as void: extern void _FENV_PUBAPI feclearexcept (int excepts) ; For now I just commented out PETSC_HAVE_FENV_H in petscconf.h. Is there a better way to workaround this? (3) Building stops at src\sys\objects\pinit.c with the errors "expression must have a constant value" concerning MPI_COMM_NULL. I already reported this in the mail-list a year and half ago for openMPI-1.4.1 and PETSc-3.1. Then Satish Balay filled the bug to openMPI developers ( https://svn.open-mpi.org/trac/ompi/ticket/2368 ). Unfortunately, the ticket is still open and workaround for PETSc is the same: "replacing all occurrences of 'MPI_COMM_NULL' in pinit.c with '0'". Probably it's worth to refresh that ticket somehow. Best regards, Andrey From bsmith at mcs.anl.gov Fri Nov 18 16:02:19 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 18 Nov 2011 16:02:19 -0600 Subject: [petsc-users] building PETSc-3.2p5 with openMPI-1.5.4 under Windows In-Reply-To: References: Message-ID: <194F183D-CFBA-456F-85CE-05177FE5AF01@mcs.anl.gov> Thanks for the complete detailed error report. On Nov 18, 2011, at 3:49 PM, NovA wrote: > Hello everybody! > > Recently I've tried to build PETSc-3.2p5 under WindowsXP-x64 with > openMPI-1.5.4 binary package using Intel C++ 11.1 compiler. I've got > couple of strange problems, but succeeded to resolve them. I just want > to share the solution. Hope, this could help to improve documentation > or configuration procedure. > > After a long stage of trial-and-error I managed to provide the > ./configure.py with options that worked. Under cygwin the > configuration lasts forever... Anyway, the configure stage finished > successfully, but the building stage brought the following problems. 
> > (1) Building stops at src/sys/viewer/impls/ascii/filev.c with the > syntax errors in mpi.h in the lines: > OMPI_DECLSPEC MPI_Fint MPI_Comm_c2f(MPI_Comm comm); > and > OMPI_DECLSPEC MPI_Comm MPI_Comm_f2c(MPI_Fint comm); > > Tedious investigation showed that it was resulted from substitutions > in petscfix.h: > #define MPI_Comm_f2c(a) (a) > #define MPI_Comm_c2f(a) (a) > > And these definitions came in turn from "TEST configureConversion" of > configure.py. The test was indeed failed with unresolved external > symbol "ompi_mpi_comm_world" (NOT the one it verifies). That just > because not all openMPI libraries were specified in the command line > (only -lmpi)... > > Therefore the very cause of the problem is the incorrect options for > ./configure concerning MPI. I used "--with-mpi-dir=..." and > "--with-cc="win32fe icl"" (win32fe wrapper can't handle mpicc.exe). I > thought they should correctly configure openMPI includes and > libraries, but they did not for the latter. Silently. > > So my solution is to remove --with-mpi-dir configure option and > replace it with --with-mpi-include="$MPI_DIR/include", > --with-mpi-lib="[libmpi.lib,libopen-pal.lib,libopen-rte.lib]" (no > spaces), --CC_LINKER_FLAGS="-L$MPI_DIR/lib", --CFLAGS="-DOMPI_IMPORTS > -DOPAL_IMPORTS -DORTE_IMPORTS". The values are taken from "mpicc.exe > --showme". > Satish, Can you add testing of this complicated configuration to come before the simplier one of -lmpi to prevent this problem? > > (2) Building stops at src/sys/error/fp.c with the errors that > integer/pointer expression expected in the line > if (feclearexcept(FE_ALL_EXCEPT)) ... > > It seems the reason it that "Intel C++ 11.1" provide "fenv.h" header > which is detected by the ./configure and PETSC_HAVE_FENV_H is defined. > But the function declared in intel's fenv.h as void: > extern void _FENV_PUBAPI feclearexcept (int excepts) ; > > For now I just commented out PETSC_HAVE_FENV_H in petscconf.h. Is > there a better way to workaround this? Satish, Can you add a test HAVE_FECLEAREXCEPT_RETURN_INT to configure then in the code if this is not defined call feclearexcept without the error checking? > > (3) Building stops at src\sys\objects\pinit.c with the errors > "expression must have a constant value" concerning MPI_COMM_NULL. > I already reported this in the mail-list a year and half ago for > openMPI-1.4.1 and PETSc-3.1. Then Satish Balay filled the bug to > openMPI developers ( https://svn.open-mpi.org/trac/ompi/ticket/2368 ). > Unfortunately, the ticket is still open and workaround for PETSc is > the same: "replacing all occurrences of 'MPI_COMM_NULL' in pinit.c > with '0'". Probably it's worth to refresh that ticket somehow. Satish, Can this be fixed by having configure check for MPI_COMM_NULL and generating a MPI_COMM_NULL in petscfix.h if it doesn't exist? Let me know the results of each of these fixes (in 3.2) on petsc-maint. Thanks Barry > > > Best regards, > Andrey From jedbrown at mcs.anl.gov Fri Nov 18 18:19:40 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 18 Nov 2011 18:19:40 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <5E0EE636-5C70-42A6-87C7-9BA0F842388D@mcs.anl.gov> Message-ID: No, I pushed this. 
On Nov 18, 2011 12:50 PM, "Barry Smith" wrote: > > On Nov 18, 2011, at 12:36 PM, Jed Brown wrote: > > > On Fri, Nov 18, 2011 at 12:24, Barry Smith wrote: > > http://www.mcs.anl.gov/petsc/documentation/faq.html#schurcomplement > > > > This FAQ is too old and needs to be updated. > > Someone broke all your fingers :-) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Sat Nov 19 22:02:18 2011 From: gdiso at ustc.edu (Gong Ding) Date: Sun, 20 Nov 2011 12:02:18 +0800 (CST) Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> Message-ID: <30143498.332801321761738267.JavaMail.coremail@mail.ustc.edu> I have the same request as explictly building schur complement matrix when K_PiB inv(K_BB) K_BPi is small. The dynamic insert of matrix entry can be really useful here. Several months ago I had committed a patch for this purpos. Will anyone try to merge the patch? On Fri, Nov 18, 2011 at 07:02, Thomas Witkowski wrote: In my current FETI-DP implementation, the solution of the Schur complement on the primal variables is done by an iterative solver. This works quite good, but for small and mid size 2D problems I would like to test it with direct assembling and inverting the Schur complement matrix. In my notation, the matrix is defined by S_PiPi = K_PiPi - K_PiB inv(K_BB) K_BPi "Pi" are the primal and "B" the non-primal variables. K_BB is factorized with a (local) direct solver (umpfack or mumps). But how can I create a matrix from the last expression? Is there a way to do a matrix-matrix multiplication in PETSc, where the first matrix is the (implicit defined) dense inverse of a sparse matrix, and the second matrix is a sparse matrix? Or is it required to extract the rows of K_BPi in some way and to perform than a matrix-vector multiplication with inv(K_BB)? You should be able to construct the sparsity pattern of the resulting matrix, therefore you can color it and get the explicit operator by MatFDColoringApply() where the "base" vector is zero and the "function" is just MatMult(). There should probably be a Mat function that does this for you given a known sparsity pattern. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Nov 19 22:12:22 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 19 Nov 2011 22:12:22 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <30143498.332801321761738267.JavaMail.coremail@mail.ustc.edu> References: <4EC6577E.1020309@tu-dresden.de> <30143498.332801321761738267.JavaMail.coremail@mail.ustc.edu> Message-ID: On Sat, Nov 19, 2011 at 22:02, Gong Ding wrote: > I have the same request as explictly building schur complement matrix when > K_PiB inv(K_BB) K_BPi is small. > The dynamic insert of matrix entry can be really useful here. > I don't believe it is necessary or desirable to use dynamic preallocation in this case. Additionally, the FETI-DP or BDDC Schur complements have quite special structure that can be exploited to assemble them. I don't know whether it makes sense to have a public API for exploiting that structure. For one thing, building the MatSchurComplement produces a more synchronous algorithm than necessary. My preference would be to exploit the structure as part of the PCFETIDP implementation instead of by adding special-purpose code to MatSchurComplement. > Several months ago I had committed a patch for this purpos. 
> Will anyone try to merge the patch? > We didn't want a whole new matrix type for this. I think we gave a suggestion for making the patch work in a less intrusive way with existing formats. I still have it on my list to modify your patch for this purpose, but sadly, haven't had time to complete it. So it's not forgotten and I'm pretty confident that it will be done before the next release (probably in the spring). -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Sun Nov 20 07:00:52 2011 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Sun, 20 Nov 2011 14:00:52 +0100 Subject: [petsc-users] fem mesh generation Message-ID: Doest petsc has built-in *cartesian* mesh generator for finite elements? What I want is: inputs the size of a domain, outputs the geometric coordinates like (0.1, 0.2, 0.3) for each nodes, volume elements like (1,2,...,8), face elements like (1,2,...,4). I saw there is a DMMeshGenerate, but no examples. From jedbrown at mcs.anl.gov Sun Nov 20 07:48:14 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 20 Nov 2011 07:48:14 -0600 Subject: [petsc-users] fem mesh generation In-Reply-To: References: Message-ID: On Sun, Nov 20, 2011 at 07:00, Hui Zhang wrote: > Doest petsc has built-in *cartesian* mesh generator for finite elements? > What I want is: inputs the size of a domain, outputs the geometric > coordinates > like (0.1, 0.2, 0.3) for each nodes, volume elements like (1,2,...,8), > face elements like (1,2,...,4). > You can use DMDA and then loop over the local part of the domain however you like. See DMDASetUniformCoordinates() and also finite element examples like src/snes/examples/tutorials/ex48.c or src/ksp/ksp/examples/tutorials/ex43.c or ex42.c. The function DMDAGetElements() exists, but I think it is of limited utility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Sun Nov 20 22:01:50 2011 From: gdiso at ustc.edu (Gong Ding) Date: Mon, 21 Nov 2011 12:01:50 +0800 (CST) Subject: [petsc-users] Memory allocation limit in IBM AIX POE Message-ID: <29726085.334361321848110746.JavaMail.coremail@mail.ustc.edu> Hi, This problem has nothing to do with petsc. However, I hope some one knows POE of AIX in this mailing list. We now have a small cluster of PPC6. The system is AIX6.1, with xlc and POE. I found that when compiled with POE(compiled by mpCC_r, call MPI_Init at the beginning of main function, see below), any memory allocation exceeds about 4M will crash. The testing code is really simple: #define HAVE_MPI #ifdef HAVE_MPI #include "mpi.h" #endif #include #include struct ST { int i; int j; int k; double v; }; int main(int argc, char **argv) { #ifdef HAVE_MPI MPI_Init(&argc, &argv); #endif std::vector array; for(size_t i=0; i<300000; i++) { ST st; array.push_back(st); } #ifdef HAVE_MPI MPI_Finalize(); #endif return 0; } And the makefile: ALL: crash_mpi crash_mpi: crash_mpi.o mpCC_r -q64 -brtl -o crash_mpi crash_mpi.o crash_mpi.o : crash_mpi.cc mpCC_r -q64 -qansialias -qrtti=all -c crash_mpi.cc The code tries to push 300K structure ST into a vector. Total memory request is about 6M. This code will always crash. However, push 200K ST into vector works well. It seems POE has some memory limitation here. Does anyone know how to cancle this limitation? 
Thanks Gong Ding From balay at mcs.anl.gov Sun Nov 20 22:55:48 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 20 Nov 2011 22:55:48 -0600 (CST) Subject: [petsc-users] Memory allocation limit in IBM AIX POE In-Reply-To: <29726085.334361321848110746.JavaMail.coremail@mail.ustc.edu> References: <29726085.334361321848110746.JavaMail.coremail@mail.ustc.edu> Message-ID: perhaps not related - but on older IBM machines we used to defualt to using a compiler option -bmaxdata:0x70000000 Without this - the memory allocatable was much lower than the system ram. I guess you can give this a try.. Satish On Mon, 21 Nov 2011, Gong Ding wrote: > Hi, > This problem has nothing to do with petsc. > However, I hope some one knows POE of AIX in this mailing list. > > We now have a small cluster of PPC6. > The system is AIX6.1, with xlc and POE. > I found that when compiled with POE(compiled by mpCC_r, call MPI_Init at the beginning of main function, see below), > any memory allocation exceeds about 4M will crash. > > The testing code is really simple: > > #define HAVE_MPI > > #ifdef HAVE_MPI > #include "mpi.h" > #endif > > #include > #include > > struct ST > { > int i; > int j; > int k; > double v; > }; > > > int main(int argc, char **argv) > { > #ifdef HAVE_MPI > MPI_Init(&argc, &argv); > #endif > > std::vector array; > for(size_t i=0; i<300000; i++) > { > ST st; > array.push_back(st); > } > > #ifdef HAVE_MPI > MPI_Finalize(); > #endif > > return 0; > } > > And the makefile: > ALL: crash_mpi > > crash_mpi: crash_mpi.o > mpCC_r -q64 -brtl -o crash_mpi crash_mpi.o > > crash_mpi.o : crash_mpi.cc > mpCC_r -q64 -qansialias -qrtti=all -c crash_mpi.cc > > The code tries to push 300K structure ST into a vector. > Total memory request is about 6M. > This code will always crash. > However, push 200K ST into vector works well. > > It seems POE has some memory limitation here. > Does anyone know how to cancle this limitation? > > Thanks > > Gong Ding > > > > From gdiso at ustc.edu Sun Nov 20 23:07:16 2011 From: gdiso at ustc.edu (Gong Ding) Date: Mon, 21 Nov 2011 13:07:16 +0800 (CST) Subject: [petsc-users] petsc-3.2-p5 compile error on AIX/xlc++ with clanguage=cxx Message-ID: <5627984.334481321852036090.JavaMail.coremail@mail.ustc.edu> mpCC_r -q64 -o fp.o -c -qrtti=dyna -O -+ -qpic -I/gpfs1/cogenda/cogenda/packages/petsc/include -I/gpfs1/cogenda/cogenda/packages/petsc/IBM-XLCPP/include -I/usr/lpp/ppe.poe/include -D__INSDIR__=src/sys/error/fp.c "fp.c", line 283.112: 1540-0256 (S) A parameter of type "PetscErrorType" cannot be initialized with an expression of type "int". "fp.c", line 283.112: 1540-1205 (I) The error occurred while converting to parameter 7 of "PetscError(MPI_Comm, int, const char *, const char *, const char *, PetscErrorCode, PetscErrorType, const char *, ...)". 
It seems line 283 ierr = PetscError(PETSC_COMM_SELF,0,"User provided function","Unknown file","Unknown directory",PETSC_ERR_FP,1,"floating point error"); should be changed to ierr = PetscError(PETSC_COMM_SELF,0,"User provided function","Unknown file","Unknown directory",PETSC_ERR_FP,PETSC_ERROR_REPEAT,"floating point error"); From jedbrown at mcs.anl.gov Sun Nov 20 23:26:00 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 20 Nov 2011 23:26:00 -0600 Subject: [petsc-users] petsc-3.2-p5 compile error on AIX/xlc++ with clanguage=cxx In-Reply-To: <5627984.334481321852036090.JavaMail.coremail@mail.ustc.edu> References: <5627984.334481321852036090.JavaMail.coremail@mail.ustc.edu> Message-ID: 2011/11/20 Gong Ding > It seems line 283 > ierr = PetscError(PETSC_COMM_SELF,0,"User provided function","Unknown > file","Unknown directory",PETSC_ERR_FP,1,"floating point error"); > should be changed to > ierr = PetscError(PETSC_COMM_SELF,0,"User provided function","Unknown > file","Unknown directory",PETSC_ERR_FP,PETSC_ERROR_REPEAT,"floating point > error"); > Yes, this is the problem. Fix pushed: http://petsc.cs.iit.edu/petsc/releases/petsc-3.2/rev/40749cf4cd9f -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Mon Nov 21 02:33:00 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Mon, 21 Nov 2011 12:03:00 +0330 Subject: [petsc-users] Get Stuck in SNES Message-ID: Dear Developers, I've finished programming with SNES. But when I run, SNES just do one Nonlinear iteration and comes out !!! I check that the residual evaluation is correct BUT the linear solver may not work while the number of linear iteration is "0". I also check that the jacobian routine is called during process but I don't know how to check the jacobian and preconditioner in SNES to ensure the correct values. Here I want to know what procedure I should do and where is the beginning point for debugging with SNES? Thanks a lot, BehZad -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Mon Nov 21 04:00:48 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 11:00:48 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> Message-ID: <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> Zitat von Jed Brown : > On Fri, Nov 18, 2011 at 07:53, Thomas Witkowski < > Thomas.Witkowski at tu-dresden.de> wrote: > >> Defining the sparsity pattern is no problem. But is there any >> documentation on MatFDColoringApply? The online function reference is quite >> short and is not mentioned in the manuel. So, I have no real idea how this >> is related to my question. > > > You can find an example in snes/examples/tutorials/ex15.c > > The point is that it can compute the Schur complement by coloring. In your > case, you can do subdomain solves independently, so you can compute > > S = K_PiPi - K_PiB inv(K_BB) K_BPi > > on each subdomain using > > Tmp = inv(K_BB) K_BPi (via MatMatSolve) Some technical question on this point: How can I explicitly factorize a sequential matrix? Is MatLUFactor the correct function to do it? If so, how can I provide the package (i.e. umfpack) that should be used for factorization? 
Thomas > > Tmp2 = K_PiB * Tmp (via MatMatMult) > > S = K_PiPi - Tmp2 (via MatAXPY) > > > You would then assemble the resulting independent matrices into the global > matrix. > From dominik at itis.ethz.ch Mon Nov 21 07:07:51 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 21 Nov 2011 14:07:51 +0100 Subject: [petsc-users] MatCreateScatter documentation Message-ID: MatCreateScatter documentation seems incomplete: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateScatter.html It seems to finish half sentence, after "For example," And an example is the exact reason I went to this page :) Best regards, Dominik From Thomas.Witkowski at tu-dresden.de Mon Nov 21 07:31:48 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 14:31:48 +0100 Subject: [petsc-users] Efficient interaction of local and distributed Mat/Vec Message-ID: <20111121143148.6tc9ntpvfowk444k@mail.zih.tu-dresden.de> In my FETI-DP code, there are many interactions between local and distributed data structures (matrices, vectors). Mainly, there is on each rank a matrix mat_bb (MATSEQAIJ) representing the local subdomain problem (without the primal nodes). In my implementation, the corresponding vector f_b is a distributed VECMPI. Thus, on each rank the local part of f_b corresponds to the size of the local matrix mat_bb. For each solution with mat_bb and the right-hand-side f_b, my code creates a temporary vector f_b_seq (VECSEQ), creates two IS (for the global indices in f_b and the local in f_b_seq), and copy the values from f_b to f_b_seq with a VecScatter. After the solution with m_b_b, the same is done the other way round. This works fine. My question: Is this the best/most efficient way to do it with PETSc? I'm not really sure. It's a lot of code and I do not like the idea of coping the same value from one date structure to another one just to make them "compatible" in some way. Thanks for any suggestions, Thomas From jedbrown at mcs.anl.gov Mon Nov 21 07:47:35 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 07:47:35 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Mon, Nov 21, 2011 at 02:33, behzad baghapour wrote: > Dear Developers, > > I've finished programming with SNES. But when I run, SNES just do one > Nonlinear iteration and comes out !!! > I check that the residual evaluation is correct BUT the linear solver may > not work while the number of linear iteration is "0". I also check that the > jacobian routine is called during process but I don't know how to check the > jacobian and preconditioner in SNES to ensure the correct values. > -snes_converged_reason -ksp_converged_reason > > Here I want to know what procedure I should do and where is the beginning > point for debugging with SNES? > http://www.mcs.anl.gov/petsc/documentation/faq.html#newton -------------- next part -------------- An HTML attachment was scrubbed... 
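
For the Newton diagnosis suggested above, a typical invocation (all standard
PETSc run-time options; the executable name is only a placeholder) might be

  ./mysolver -snes_monitor -snes_converged_reason \
             -ksp_monitor_true_residual -ksp_converged_reason

and a separate run with -snes_type test is a common way to compare a
hand-coded Jacobian against a finite-difference one, which is usually the
first thing to rule out when Newton stalls after a single iteration.
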
URL: From jedbrown at mcs.anl.gov Mon Nov 21 07:53:30 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 07:53:30 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> Message-ID: On Mon, Nov 21, 2011 at 04:00, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > Some technical question on this point: How can I explicitly factorize a > sequential matrix? Is MatLUFactor the correct function to do it? In general, I recommend using the KSP. You can KSPSetType(ksp,KSPPREONLY), PCSetType(pc,PCLU). Call KSPSolve() multiple times, the factorization will be reused. Or you can use MatLUFactor() directly if you really want. > If so, how can I provide the package (i.e. umfpack) that should be used > for factorization? See the MatSolverPackage argument in one of these. http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/PC/PCFactorSetMatSolverPackage.html http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatGetFactor.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From buyong.huier at gmail.com Mon Nov 21 04:52:43 2011 From: buyong.huier at gmail.com (Hui Zhang) Date: Mon, 21 Nov 2011 11:52:43 +0100 Subject: [petsc-users] restricted DMDAGlobaltoLocal Message-ID: When using DMDAGlobaltoLocal, how can I communicate only the boundary of the ghosted domain (with stecil width larger than one)? From jedbrown at mcs.anl.gov Mon Nov 21 08:01:10 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 08:01:10 -0600 Subject: [petsc-users] restricted DMDAGlobaltoLocal In-Reply-To: References: Message-ID: On Mon, Nov 21, 2011 at 04:52, Hui Zhang wrote: > When using DMDAGlobaltoLocal, how can I communicate only the boundary of > the > ghosted domain (with stecil width larger than one)? > Only the ghost values are communicated. The interior is all local. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Mon Nov 21 08:03:54 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 15:03:54 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> Message-ID: <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> Zitat von Jed Brown : > On Mon, Nov 21, 2011 at 04:00, Thomas Witkowski < > Thomas.Witkowski at tu-dresden.de> wrote: > >> Some technical question on this point: How can I explicitly factorize a >> sequential matrix? Is MatLUFactor the correct function to do it? > > > In general, I recommend using the KSP. You can KSPSetType(ksp,KSPPREONLY), > PCSetType(pc,PCLU). Call KSPSolve() multiple times, the factorization will > be reused. Or you can use MatLUFactor() directly if you really want. > > Okay, but how to make use of the KSP in MatMatSolve? 
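
A minimal sketch of the KSP-as-direct-solver setup described above, in the
notation of this thread (K_BB and K_BPi are the already assembled sequential
matrices, nPi is the number of primal columns, and error checking is omitted):

  KSP      ksp;
  PC       pc;
  Vec      rhs, sol;
  PetscInt j;

  KSPCreate(PETSC_COMM_SELF, &ksp);
  KSPSetOperators(ksp, K_BB, K_BB, SAME_NONZERO_PATTERN);
  KSPSetType(ksp, KSPPREONLY);                 /* no Krylov iteration, just apply the PC */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);                         /* the "preconditioner" is a full LU factorization */
  PCFactorSetMatSolverPackage(pc, "umfpack");  /* or "mumps", or the built-in "petsc" */
  KSPSetFromOptions(ksp);

  MatGetVecs(K_BB, &rhs, &sol);
  for (j = 0; j < nPi; j++) {
    MatGetColumnVector(K_BPi, rhs, j);         /* j-th column of K_BPi */
    KSPSolve(ksp, rhs, sol);                   /* sol = inv(K_BB) times that column */
    /* store sol as column j of inv(K_BB)*K_BPi, e.g. in a MATSEQDENSE matrix or an array of Vecs */
  }

The factorization is built on the first KSPSolve() and reused afterwards, so
calling the solve once per column plays the role of MatMatSolve() here.
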
From jedbrown at mcs.anl.gov Mon Nov 21 08:12:05 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 08:12:05 -0600 Subject: [petsc-users] Efficient interaction of local and distributed Mat/Vec In-Reply-To: <20111121143148.6tc9ntpvfowk444k@mail.zih.tu-dresden.de> References: <20111121143148.6tc9ntpvfowk444k@mail.zih.tu-dresden.de> Message-ID: On Mon, Nov 21, 2011 at 07:31, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > In my FETI-DP code, there are many interactions between local and > distributed data structures (matrices, vectors). Mainly, there is on each > rank a matrix mat_bb (MATSEQAIJ) representing the local subdomain problem > (without the primal nodes). In my implementation, the corresponding vector > f_b is a distributed VECMPI. Thus, on each rank the local part of f_b > corresponds to the size of the local matrix mat_bb. For each solution with > mat_bb and the right-hand-side f_b, my code creates a temporary vector > f_b_seq (VECSEQ), creates two IS (for the global indices in f_b and the > local in f_b_seq), and copy the values from f_b to f_b_seq with a > VecScatter. After the solution with m_b_b, the same is done the other way > round. > > This works fine. My question: Is this the best/most efficient way to do it > with PETSc? I'm not really sure. It's a lot of code and I do not like the > idea of coping the same value from one date structure to another one just > to make them "compatible" in some way. > It is very unlikely that these copies (which end up calling memcpy()) have a measurable effect on performance. There is a new alternative that would be slightly less code and will avoid the copy in some cases. Call VecGetSubVector(f_b,is_f_b,&f_b_seq); // is_f_b should have been created on MPI_COMM_SELF // use f_b_seq VecRestoreSubVector(f_b,is_f_b,&f_b_seq); The index set is_f_b should have been created on MPI_COMM_SELF (the communicator that you want f_b_seq to reside on) and contain the global indices from f_b that you want. http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Vec/VecGetSubVector.html It is also possible to use VecPlaceArray(), but I find that much more ugly than VecGetSubVector(). -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Mon Nov 21 08:12:40 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 15:12:40 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> Message-ID: <20111121151240.0loyzylkowo4c8kc@mail.zih.tu-dresden.de> Zitat von Thomas Witkowski : > Zitat von Jed Brown : > >> On Mon, Nov 21, 2011 at 04:00, Thomas Witkowski < >> Thomas.Witkowski at tu-dresden.de> wrote: >> >>> Some technical question on this point: How can I explicitly factorize a >>> sequential matrix? Is MatLUFactor the correct function to do it? >> >> >> In general, I recommend using the KSP. You can KSPSetType(ksp,KSPPREONLY), >> PCSetType(pc,PCLU). Call KSPSolve() multiple times, the factorization will >> be reused. Or you can use MatLUFactor() directly if you really want. >> >> > > Okay, but how to make use of the KSP in MatMatSolve? KSPGetOperators? 
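
A compact sketch of the VecGetSubVector() approach described above (the
function is in petsc-dev at the time of this thread; n_b and idx_b, the number
and the global indices of the B unknowns owned by this rank, are assumed to be
available):

  IS  is_b;
  Vec f_b_seq;

  ISCreateGeneral(PETSC_COMM_SELF, n_b, idx_b, PETSC_COPY_VALUES, &is_b);
  VecGetSubVector(f_b, is_b, &f_b_seq);      /* sequential view of the local B part of f_b */
  /* ... use f_b_seq as the right-hand side for the local solve with mat_bb ... */
  VecRestoreSubVector(f_b, is_b, &f_b_seq);  /* hand the subvector back before touching f_b again */
  ISDestroy(&is_b);
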
From jedbrown at mcs.anl.gov Mon Nov 21 08:13:03 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 08:13:03 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> Message-ID: On Mon, Nov 21, 2011 at 08:03, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > Okay, but how to make use of the KSP in MatMatSolve? With this approach, you just call KSPSolve() multiple times. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Nov 21 08:25:27 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 21 Nov 2011 08:25:27 -0600 Subject: [petsc-users] restricted DMDAGlobaltoLocal In-Reply-To: References: Message-ID: <380C9AC4-799D-4A07-A1FF-1F92C8065EC3@mcs.anl.gov> On Nov 21, 2011, at 8:01 AM, Jed Brown wrote: > On Mon, Nov 21, 2011 at 04:52, Hui Zhang wrote: > When using DMDAGlobaltoLocal, how can I communicate only the boundary of the > ghosted domain (with stecil width larger than one)? > > Only the ghost values are communicated. The interior is all local. I think Hui means DMDALocalToLocalBegin() and DMDALocalToLocalEnd() but normally one doesn't need this with the PETSc solvers, one only needs the DMGlobalToLocalBegin/End() Barry From jedbrown at mcs.anl.gov Mon Nov 21 08:52:21 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 08:52:21 -0600 Subject: [petsc-users] MatCreateScatter documentation In-Reply-To: References: Message-ID: On Mon, Nov 21, 2011 at 07:07, Dominik Szczerba wrote: > MatCreateScatter documentation seems incomplete: > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateScatter.html > > It seems to finish half sentence, after "For example," > Looks like the rest of that sentence was earlier in the man page. What are you trying to do? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Mon Nov 21 09:00:27 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 16:00:27 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> Message-ID: <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> Zitat von Jed Brown : > On Mon, Nov 21, 2011 at 08:03, Thomas Witkowski < > Thomas.Witkowski at tu-dresden.de> wrote: > >> Okay, but how to make use of the KSP in MatMatSolve? > > > With this approach, you just call KSPSolve() multiple times. > I have to multiple two matrices, one is implicitly defined by a KSP, the other one is explicitly assembled. So when using KSPSolve(), it must be called for each column of the second matrix? This seems not to be very efficient as I thought the matrices are stored row wise? Is there a way to get the columns of a SEQAIJ matrix? 
Thomas From jedbrown at mcs.anl.gov Mon Nov 21 09:11:35 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 09:11:35 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> Message-ID: On Mon, Nov 21, 2011 at 09:00, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > I have to multiple two matrices, one is implicitly defined by a KSP, the > other one is explicitly assembled. So when using KSPSolve(), it must be > called for each column of the second matrix? This seems not to be very > efficient as I thought the matrices are stored row wise? Is there a way to > get the columns of a SEQAIJ matrix? In this context, I think it is easier to store the matrix as a collection of column vectors, but you can also use MATDENSE. This thing isn't sparse (and it wouldn't do you any good anyway; there are few algorithms that can solve sparse right hand sides and there isn't much point because the solutions would still be dense). -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Nov 21 09:19:53 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 21 Nov 2011 09:19:53 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> Message-ID: <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> On Nov 21, 2011, at 9:11 AM, Jed Brown wrote: > On Mon, Nov 21, 2011 at 09:00, Thomas Witkowski wrote: > I have to multiple two matrices, one is implicitly defined by a KSP, the other one is explicitly assembled. So when using KSPSolve(), it must be called for each column of the second matrix? This seems not to be very efficient as I thought the matrices are stored row wise? Is there a way to get the columns of a SEQAIJ matrix? > > In this context, I think it is easier to store the matrix as a collection of column vectors, but you can also use MATDENSE. This thing isn't sparse (and it wouldn't do you any good anyway; there are few algorithms that can solve sparse right hand sides and there isn't much point because the solutions would still be dense). From the FAQ I pointed you to earlier How can I compute the Schur complement, Kbb - Kab * inverse(Kbb) * Kba in PETSc? It is very expensive to compute the Schur complement of a matrix and very rarely needed in practice. We highly recommend avoiding algorithms that need it. The Schur complement of a matrix (dense or sparse) is essentially always dense, so begin by ? forming a dense matrix Kba, ? also create another dense matrix T of the same size. ? Then factor the matrix Kaa with MatLUFactor() or MatCholeskyFactor(), call the result A. ? Then call MatMatSolve(A,Kba,T). ? Then call MatMatMult(Kab,T,MAT_INITIAL_MATRIX,1.0,&S). ? Now call MatAXPY(S,-1.0,Kbb,MAT_SUBSET_NONZERO). ? 
Followed by MatScale(S,-1.0); Note there is never a reason to use KSP to do the solve because for the many solves needed a direct solver will always win over using an iterative solver and since the result is dense it doesn't make sense to do this computation with huge matrices. Barry If you want to use an external LU solver (it will not be faster than PETSc's so why bother). You would use MatGetFactor() then MatLUFactorSymbolic() followed by MatLUFactorNumeric(). From jedbrown at mcs.anl.gov Mon Nov 21 09:26:58 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 09:26:58 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> Message-ID: On Mon, Nov 21, 2011 at 09:19, Barry Smith wrote: > Note there is never a reason to use KSP to do the solve because for the > many solves needed a direct solver will always win over using an iterative > solver > Depends. If multigrid works well on the subdomain problems, then it will definitely win for large subdomains. (Or you might not even be able to afford that amount of memory.) Yes, you could use smaller subdomains, but most people only write FETI-DP as a 2-level method and the size of the coarse space is a serious problem. > and since the result is dense it doesn't make sense to do this computation > with huge matrices. > But you are only solving against a few right hand sides. Between 4 and 200, for example, while the subdomain size might be more than 100k. If you want to use an external LU solver (it will not be faster than > PETSc's so why bother). > This must be a joke. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Mon Nov 21 09:36:50 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 21 Nov 2011 16:36:50 +0100 Subject: [petsc-users] MatCreateScatter documentation In-Reply-To: References: Message-ID: I am thinking of the simplest but not necessarily efficient way to pull a distributed matrix onto one CPU... I would appreciate a pointer or two... I know I can dump it to disk via MatView, but without disk IO would be better. Thanks and regards, Dominik On Mon, Nov 21, 2011 at 3:52 PM, Jed Brown wrote: > On Mon, Nov 21, 2011 at 07:07, Dominik Szczerba > wrote: >> >> MatCreateScatter documentation seems incomplete: >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateScatter.html >> >> It seems to finish half sentence, after "For example," > > Looks like the rest of that sentence was earlier in the man page. What are > you trying to do? From mike.hui.zhang at hotmail.com Mon Nov 21 09:39:51 2011 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 21 Nov 2011 16:39:51 +0100 Subject: [petsc-users] restricted DMDAGlobaltoLocal In-Reply-To: <380C9AC4-799D-4A07-A1FF-1F92C8065EC3@mcs.anl.gov> References: <380C9AC4-799D-4A07-A1FF-1F92C8065EC3@mcs.anl.gov> Message-ID: > On Nov 21, 2011, at 8:01 AM, Jed Brown wrote: > >> On Mon, Nov 21, 2011 at 04:52, Hui Zhang wrote: >> When using DMDAGlobaltoLocal, how can I communicate only the boundary of the >> ghosted domain (with stecil width larger than one)? 
>> >> Only the ghost values are communicated. The interior is all local. > > I think Hui means DMDALocalToLocalBegin() and DMDALocalToLocalEnd() but normally one doesn't need this with the PETSc solvers, one only needs the DMGlobalToLocalBegin/End() > > Barry > Thank you all! I'm actually implementing domain decomposition methods with restricted communication. From my understanding, DMDALocalToLocalXX is also communicating all ghost values of the second Vec. However, in a restricted way, only the ghost points most far away from the interior are communicated. From Thomas.Witkowski at tu-dresden.de Mon Nov 21 09:41:38 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 16:41:38 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> Message-ID: <20111121164138.393ecjz3ks08wsg8@mail.zih.tu-dresden.de> Zitat von Barry Smith : > > On Nov 21, 2011, at 9:11 AM, Jed Brown wrote: > > > From the FAQ I pointed you to earlier > > How can I compute the Schur complement, Kbb - Kab * inverse(Kbb) * > Kba in PETSc? > > It is very expensive to compute the Schur complement of a matrix and > very rarely needed in practice. We highly recommend avoiding > algorithms that need it. The Schur complement of a matrix (dense or > sparse) is essentially always dense, so begin by > ? forming a dense matrix Kba, > ? also create another dense matrix T of the same size. > ? Then factor the matrix Kaa with MatLUFactor() or > MatCholeskyFactor(), call the result A. > ? Then call MatMatSolve(A,Kba,T). > ? Then call MatMatMult(Kab,T,MAT_INITIAL_MATRIX,1.0,&S). > ? Now call MatAXPY(S,-1.0,Kbb,MAT_SUBSET_NONZERO). > ? Followed by MatScale(S,-1.0); > > Note there is never a reason to use KSP to do the solve because for > the many solves needed a direct solver will always win over using an > iterative solver and since the result is dense it doesn't make > sense to do this computation with huge matrices. > > Barry > > If you want to use an external LU solver (it will not be faster than > PETSc's so why bother). You would use MatGetFactor() then > MatLUFactorSymbolic() followed by MatLUFactorNumeric(). > > > In my case the Schur complemt should be quite sparse, so I want to build it explicitly. My main problem is still how to compute inverse(Kbb) * Kba Sorry for asking again, but no of the solutions seems to be sastisfying. When I understood you (and Jed) right, there are two general ways: either I define inverse(Kbb) either as a Mat object and use MatMatSolve or via KSP and using KSPSolve. The first option seems fine, but one of you noted that it is not possible to reuse the LU factorization. The would be a huge drawback as I have to use inverse(Kbb) in different context. When defining inverse(Kbb) via KSP, as I do it at the moment (and yes, I want to use here direct solvers only), I must store Kba either column wise or in a dense way. Both is not really feasible. Have I missed something to solve this problem? 
Thomas From jedbrown at mcs.anl.gov Mon Nov 21 09:41:50 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 09:41:50 -0600 Subject: [petsc-users] MatCreateScatter documentation In-Reply-To: References: Message-ID: On Mon, Nov 21, 2011 at 09:36, Dominik Szczerba wrote: > I am thinking of the simplest but not necessarily efficient way to > pull a distributed matrix onto one CPU... I would appreciate a pointer > or two... I know I can dump it to disk via MatView, but without disk > IO would be better. > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatGetSubMatrices.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 21 09:48:47 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 09:48:47 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111121164138.393ecjz3ks08wsg8@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> <20111121164138.393ecjz3ks08wsg8@mail.zih.tu-dresden.de> Message-ID: On Mon, Nov 21, 2011 at 09:41, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > In my case the Schur complemt should be quite sparse, > So semantically, your Kbb is a parallel block-diagonal matrix. In my opinion, you don't actually want to store it that way because then you are only allowed to solve with the whole thing, which would make the algorithm more synchronous than necessary. So I would store each block in its own matrix with its own local communicator (MPI_COMM_SELF, of if you are being more general, some suitable subcommunicator). > so I want to build it explicitly. My main problem is still how to compute > > inverse(Kbb) * Kba > > Sorry for asking again, but no of the solutions seems to be sastisfying. > When I understood you (and Jed) right, there are two general ways: either I > define inverse(Kbb) either as a Mat object and use MatMatSolve or via KSP > and using KSPSolve. The first option seems fine, but one of you noted that > it is not possible to reuse the LU factorization. > No, both ways reuse the LU factorization. > The would be a huge drawback as I have to use inverse(Kbb) in different > context. When defining inverse(Kbb) via KSP, as I do it at the moment (and > yes, I want to use here direct solvers only), I must store Kba either > column wise or in a dense way. Both is not really feasible. > You extract the piece of Kba that is relevant to each piece of Kbb. This will have only a few columns and is naturally stored columnwise (either as an array of column vectors or as MATDENSE). After solving these blocks, you will have another tall skinny matrix (either as Vecs or MATDENSE) corresponding to each block of Kbb. Now you multiply the appropriate blocks Kab and put the (sparse, low dimension per block) result back into a global sparse matrix (for the coarse problem). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Nov 21 09:55:17 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 21 Nov 2011 09:55:17 -0600 Subject: [petsc-users] restricted DMDAGlobaltoLocal In-Reply-To: References: <380C9AC4-799D-4A07-A1FF-1F92C8065EC3@mcs.anl.gov> Message-ID: On Mon, Nov 21, 2011 at 9:39 AM, Hui Zhang wrote: > > > On Nov 21, 2011, at 8:01 AM, Jed Brown wrote: > > > >> On Mon, Nov 21, 2011 at 04:52, Hui Zhang > wrote: > >> When using DMDAGlobaltoLocal, how can I communicate only the boundary > of the > >> ghosted domain (with stecil width larger than one)? > >> > >> Only the ghost values are communicated. The interior is all local. > > > > I think Hui means DMDALocalToLocalBegin() and DMDALocalToLocalEnd() > but normally one doesn't need this with the PETSc solvers, one only needs > the DMGlobalToLocalBegin/End() > > > > Barry > > > > Thank you all! > > I'm actually implementing domain decomposition methods with restricted > communication. > From my understanding, DMDALocalToLocalXX is also communicating all ghost > values of > the second Vec. However, in a restricted way, only the ghost points most > far away from > the interior are communicated. No, the communication pattern in the same. You exchange values which are shared by multiple processes. The difference is that LocalToLocal puts values directly into a ghosted local vector, whereas LoaclToGlobal puts them into a global vector, which is not ghosted. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Mon Nov 21 10:12:41 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 21 Nov 2011 17:12:41 +0100 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> Message-ID: <20111121171241.3allcfhmskk0swog@mail.zih.tu-dresden.de> Zitat von Jed Brown : > On Mon, Nov 21, 2011 at 09:19, Barry Smith wrote: > > Yes, you could use smaller subdomains, but most people only write FETI-DP > as a 2-level method and the size of the coarse space is a serious problem. > > Do you know about someone who work on a FETI-DP implementation with more than two levels? Thomas From jedbrown at mcs.anl.gov Mon Nov 21 10:24:01 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 10:24:01 -0600 Subject: [petsc-users] Assembling primal Schur matrix in FETI-DP method In-Reply-To: <20111121171241.3allcfhmskk0swog@mail.zih.tu-dresden.de> References: <4EC6577E.1020309@tu-dresden.de> <20111118145308.q7gzzdshwkw0sk80@mail.zih.tu-dresden.de> <20111121110048.juek8qdfsw0k4g00@mail.zih.tu-dresden.de> <20111121150354.b7ow44mqg0ckc0og@mail.zih.tu-dresden.de> <20111121160027.p74x2j48lcsoggk8@mail.zih.tu-dresden.de> <8CDEA93F-ED8B-40E7-A20C-7C45E4F261F9@mcs.anl.gov> <20111121171241.3allcfhmskk0swog@mail.zih.tu-dresden.de> Message-ID: On Mon, Nov 21, 2011 at 10:12, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > Do you know about someone who work on a FETI-DP implementation with more > than two levels? 
> There are a few implementations based on BDDC (algorithmically equivalent) including a prototype by Xuemin Tu (based on the terminology from Olof Widlund and Axel Klawonn) and another from Jakub Sistek and colleagues (based on Jan Mandel's terminology), see this thread: http://lists.mcs.anl.gov/pipermail/petsc-users/2011-October/010354.html http://lists.mcs.anl.gov/pipermail/petsc-dev/2011-October/005991.html(continuation moved to petsc-dev) Last time I talked to Clark Dohrmann, he was also doing some multilevel variants of BDDC and overlapping Schwarz (with closely related coarse spaces). He may have software for this, but I haven't seen it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Mon Nov 21 13:10:50 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 21 Nov 2011 20:10:50 +0100 Subject: [petsc-users] sqrt: DOMAIN error during run Message-ID: <4ECAA23A.7050704@gmail.com> Hi, I have extended my 2D CFD code to 3D. When I tried to do a step by step debug using Compaq visual Fortran (CVF) in windows, there is no error. However, if I execute the code directly, the following error happens: - sqrt: DOMAIN error Image PC Routine Line Source ibm2d_high_Re_wo_ 00B95C69 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00B95AC7 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00B95C21 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00B9CEA8 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00BBF61D Unknown Unknown Unknown ibm2d_high_Re_wo_ 00BB7E30 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00BB1B98 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00B79319 Unknown Unknown Unknown ibm2d_high_Re_wo_ 0085C29A Unknown Unknown Unknown ibm2d_high_Re_wo_ 0063C656 Unknown Unknown Unknown ibm2d_high_Re_wo_ 008EC67D Unknown Unknown Unknown ibm2d_high_Re_wo_ 005C95DC Unknown Unknown Unknown ibm2d_high_Re_wo_ 00549D2C Unknown Unknown Unknown ibm2d_high_Re_wo_ 00500BEE Unknown Unknown Unknown ibm2d_high_Re_wo_ 00428FE3 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00463073 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00BC5309 Unknown Unknown Unknown ibm2d_high_Re_wo_ 00BB1035 Unknown Unknown Unknown kernel32.dll 7C817067 Unknown Unknown Unknown Incrementally linked image--PC correlation disabled. I repeated running the code in Linux and calling "KSPGetConvergedReason" gives: P Diverged Does anyone know why this error occurs? It's strange that the step by step debug works and gives a reasonable solution but executing the code gives error. Also, when changing from 2D to 3D, is there any additional stuff I need to add in besides adding another dimension? -- Yours sincerely, TAY wee-beng From jedbrown at mcs.anl.gov Mon Nov 21 13:14:58 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 13:14:58 -0600 Subject: [petsc-users] sqrt: DOMAIN error during run In-Reply-To: <4ECAA23A.7050704@gmail.com> References: <4ECAA23A.7050704@gmail.com> Message-ID: On Mon, Nov 21, 2011 at 13:10, TAY wee-beng wrote: > - sqrt: DOMAIN error You are trying to take the (real-valued) square root of a negative number. Find out why that is happening. -------------- next part -------------- An HTML attachment was scrubbed... 
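Two generic ways to narrow down where such a value first shows up, sketched here rather than taken from this particular code: run with the PETSc option -fp_trap so execution stops at the first invalid floating-point operation (support depends on the platform), and check the outcome of each linear solve explicitly before using the solution:

  KSPConvergedReason reason;
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
  if (reason < 0) SETERRQ1(PETSC_COMM_WORLD,PETSC_ERR_CONV_FAILED,
                           "Linear solve diverged (reason %d), not using this solution",(int)reason);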
URL: From bsmith at mcs.anl.gov Mon Nov 21 13:19:30 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 21 Nov 2011 13:19:30 -0600 Subject: [petsc-users] restricted DMDAGlobaltoLocal In-Reply-To: References: <380C9AC4-799D-4A07-A1FF-1F92C8065EC3@mcs.anl.gov> Message-ID: <4162CF83-1299-4EA2-8F7A-0F92D8837944@mcs.anl.gov> On Nov 21, 2011, at 9:39 AM, Hui Zhang wrote: > >> On Nov 21, 2011, at 8:01 AM, Jed Brown wrote: >> >>> On Mon, Nov 21, 2011 at 04:52, Hui Zhang wrote: >>> When using DMDAGlobaltoLocal, how can I communicate only the boundary of the >>> ghosted domain (with stecil width larger than one)? >>> >>> Only the ghost values are communicated. The interior is all local. >> >> I think Hui means DMDALocalToLocalBegin() and DMDALocalToLocalEnd() but normally one doesn't need this with the PETSc solvers, one only needs the DMGlobalToLocalBegin/End() >> >> Barry >> > > Thank you all! > > I'm actually implementing domain decomposition methods with restricted communication. > From my understanding, DMDALocalToLocalXX is also communicating all ghost values of > the second Vec. However, in a restricted way, only the ghost points most far away from > the interior are communicated. Yes, we always communicate all ghost points, not just the boundary of the ghosted domain. Sorry I miss understood your earlier question. It may be possible to modify the DMDA to communicate only the boundary but would require mucking around with the details of the code a bit. Barry From bsmith at mcs.anl.gov Mon Nov 21 13:33:29 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 21 Nov 2011 13:33:29 -0600 Subject: [petsc-users] sqrt: DOMAIN error during run In-Reply-To: <4ECAA23A.7050704@gmail.com> References: <4ECAA23A.7050704@gmail.com> Message-ID: <82AC4B33-6FB1-4573-9D87-599A9C86CA49@mcs.anl.gov> If it runs in one mode (like in a debugger) but crashes in another (like not in the debugger) this is often a symptom of memory corruption: http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind Barry On Nov 21, 2011, at 1:10 PM, TAY wee-beng wrote: > Hi, > > I have extended my 2D CFD code to 3D. > > When I tried to do a step by step debug using Compaq visual Fortran (CVF) in windows, there is no error. > > However, if I execute the code directly, the following error happens: > > - sqrt: DOMAIN error > Image PC Routine Line Source > ibm2d_high_Re_wo_ 00B95C69 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00B95AC7 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00B95C21 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00B9CEA8 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00BBF61D Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00BB7E30 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00BB1B98 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00B79319 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 0085C29A Unknown Unknown Unknown > ibm2d_high_Re_wo_ 0063C656 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 008EC67D Unknown Unknown Unknown > ibm2d_high_Re_wo_ 005C95DC Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00549D2C Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00500BEE Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00428FE3 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00463073 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00BC5309 Unknown Unknown Unknown > ibm2d_high_Re_wo_ 00BB1035 Unknown Unknown Unknown > kernel32.dll 7C817067 Unknown Unknown Unknown > > Incrementally linked image--PC correlation disabled. 
> > I repeated running the code in Linux and calling "KSPGetConvergedReason" gives: > > P Diverged > > Does anyone know why this error occurs? It's strange that the step by step debug works and gives a reasonable solution but executing the code gives error. > > Also, when changing from 2D to 3D, is there any additional stuff I need to add in besides adding another dimension? > > -- > Yours sincerely, > > TAY wee-beng > From andrej.mesaros at bc.edu Mon Nov 21 22:47:53 2011 From: andrej.mesaros at bc.edu (Andrej Mesaros) Date: Mon, 21 Nov 2011 23:47:53 -0500 Subject: [petsc-users] Memory for matrix assembly Message-ID: <4ECB2979.2080502@bc.edu> Dear all, I need guidance in finding the memory needed for matrix assembly. The job that fails when I reserve 3.5GB memory per node gives me the error output below. The job was run on 96 nodes, each storing its own part of a matrix (around 60k rows each, ~100M non-zero complex entries). The error occurs during assembly (similar numbers for every node): [25]PETSC ERROR: Out of memory. This could be due to allocating [25]PETSC ERROR: too large an object or bleeding by not properly [25]PETSC ERROR: destroying unneeded objects. [25]PETSC ERROR: Memory allocated 4565256864 Memory used by process 3658739712 [25]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [25]PETSC ERROR: Memory requested 980025524! [25]PETSC ERROR: ------------------------------------------------------------------------ [25]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011 [25]PETSC ERROR: See docs/changes/index.html for recent updates. [25]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [25]PETSC ERROR: See docs/index.html for manual pages. [25]PETSC ERROR: ------------------------------------------------------------------------ [25]PETSC ERROR: Unknown Name on a linux-gnu named compute-5-54.local by mesaros Wed Oct 12 22:27:13 2011 [25]PETSC ERROR: Libraries linked from /home/mesaros/code/petsc-3.1-p8/linux-gnu-mpi-debug-complex/lib [25]PETSC ERROR: Configure run at Thu Jun 30 12:30:13 2011 [25]PETSC ERROR: Configure options --with-scalar-type=complex --with-64-bit-indices=1 --download-f-blas-lapack=yes --download-mpich=1 --with-mpi-exec=/usr/publ$ [25]PETSC ERROR: ------------------------------------------------------------------------ [25]PETSC ERROR: PetscMallocAlign() line 49 in src/sys/memory/mal.c [25]PETSC ERROR: PetscTrMallocDefault() line 192 in src/sys/memory/mtr.c [25]PETSC ERROR: PetscPostIrecvInt() line 250 in src/sys/utils/mpimesg.c [25]PETSC ERROR: MatStashScatterBegin_Private() line 498 in src/mat/utils/matstash.c [25]PETSC ERROR: MatAssemblyBegin_MPIAIJ() line 474 in src/mat/impls/aij/mpi/mpiaij.c [25]PETSC ERROR: MatAssemblyBegin() line 4564 in src/mat/interface/matrix.c Now, how much memory would I need per node for this assembly to work? Is it "Memory allocated" + "Memory requested", which is around 5.5GB? And did it fail when "Memory used by process" reached ~3.5GB, which was the limit for the job? Usually, breaking the limit on memory per node kills the job, and PETSc then doesn't give the above "Out of memory" output. Additionally, can I simply estimate the additional memory needed for SLEPc to find ~100 lowest eigenvalues? Any insight is highly appreciated, and thanks to developers for the great software! Andrej PS: 64bit mode should be working fine (Linux 2.6, PETSc compiled with 64bit, giving x86_64 executable on AMD cluster). 
PETSc+SLEPc definitely works fine when parallelized matrices use ~2.5GB memory per node. From jedbrown at mcs.anl.gov Mon Nov 21 22:57:08 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 22:57:08 -0600 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: <4ECB2979.2080502@bc.edu> References: <4ECB2979.2080502@bc.edu> Message-ID: On Mon, Nov 21, 2011 at 22:47, Andrej Mesaros wrote: > Dear all, > > I need guidance in finding the memory needed for matrix assembly. > > The job that fails when I reserve 3.5GB memory per node gives me the error > output below. The job was run on 96 nodes, each storing its own part of a > matrix (around 60k rows each, ~100M non-zero complex entries). > > The error occurs during assembly (similar numbers for every node): > > [25]PETSC ERROR: Out of memory. This could be due to allocating > [25]PETSC ERROR: too large an object or bleeding by not properly > [25]PETSC ERROR: destroying unneeded objects. > [25]PETSC ERROR: Memory allocated 4565256864 Memory used by process > 3658739712 > [25]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. > [25]PETSC ERROR: Memory requested 980025524! > [25]PETSC ERROR: > ------------------------------**------------------------------** > ------------ > [25]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 > 13:37:48 CDT 2011 > [25]PETSC ERROR: See docs/changes/index.html for recent updates. > [25]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [25]PETSC ERROR: See docs/index.html for manual pages. > [25]PETSC ERROR: > ------------------------------**------------------------------** > ------------ > [25]PETSC ERROR: Unknown Name on a linux-gnu named compute-5-54.local by > mesaros Wed Oct 12 22:27:13 2011 > [25]PETSC ERROR: Libraries linked from > /home/mesaros/code/petsc-3.1-**p8/linux-gnu-mpi-debug-**complex/lib > [25]PETSC ERROR: Configure run at Thu Jun 30 12:30:13 2011 > [25]PETSC ERROR: Configure options --with-scalar-type=complex > --with-64-bit-indices=1 --download-f-blas-lapack=yes --download-mpich=1 > --with-mpi-exec=/usr/publ$ > [25]PETSC ERROR: > ------------------------------**------------------------------** > ------------ > [25]PETSC ERROR: PetscMallocAlign() line 49 in src/sys/memory/mal.c > [25]PETSC ERROR: PetscTrMallocDefault() line 192 in src/sys/memory/mtr.c > [25]PETSC ERROR: PetscPostIrecvInt() line 250 in src/sys/utils/mpimesg.c > Looks like you are trying to half a billion (--with-64-bit-indices) or a billion entries. How are you computing the nonzeros? Is it possible that many processes are computing entries that need to go to one process? > [25]PETSC ERROR: MatStashScatterBegin_Private() line 498 in > src/mat/utils/matstash.c > [25]PETSC ERROR: MatAssemblyBegin_MPIAIJ() line 474 in > src/mat/impls/aij/mpi/mpiaij.c > [25]PETSC ERROR: MatAssemblyBegin() line 4564 in src/mat/interface/matrix.c > > > Now, how much memory would I need per node for this assembly to work? Is > it "Memory allocated" + "Memory requested", which is around 5.5GB? And did > it fail when "Memory used by process" reached ~3.5GB, which was the limit > for the job? Usually, breaking the limit on memory per node kills the job, > and PETSc then doesn't give the above "Out of memory" output. > > Additionally, can I simply estimate the additional memory needed for SLEPc > to find ~100 lowest eigenvalues? 
> Start with what is typically needed by PETSc (the matrix, the setup cost is for your preconditioner, the vectors for the Krylov method) and add 100*n*sizeof(PetscScalar). -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrej.mesaros at bc.edu Mon Nov 21 23:46:58 2011 From: andrej.mesaros at bc.edu (Andrej Mesaros) Date: Tue, 22 Nov 2011 00:46:58 -0500 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: References: <4ECB2979.2080502@bc.edu> Message-ID: <4ECB3752.3080305@bc.edu> Jed Brown wrote: > On Mon, Nov 21, 2011 at 22:47, Andrej Mesaros > wrote: > > Dear all, > > I need guidance in finding the memory needed for matrix assembly. > > The job that fails when I reserve 3.5GB memory per node gives me the > error output below. The job was run on 96 nodes, each storing its > own part of a matrix (around 60k rows each, ~100M non-zero complex > entries). > > The error occurs during assembly (similar numbers for every node): > > [25]PETSC ERROR: Out of memory. This could be due to allocating > [25]PETSC ERROR: too large an object or bleeding by not properly > [25]PETSC ERROR: destroying unneeded objects. > [25]PETSC ERROR: Memory allocated 4565256864 Memory > used by process > 3658739712 > [25]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. > [25]PETSC ERROR: Memory requested 980025524! > [25]PETSC ERROR: > ------------------------------__------------------------------__------------ > [25]PETSC ERROR: Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 > 13:37:48 CDT 2011 > [25]PETSC ERROR: See docs/changes/index.html for recent updates. > [25]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [25]PETSC ERROR: See docs/index.html for manual pages. > [25]PETSC ERROR: > ------------------------------__------------------------------__------------ > [25]PETSC ERROR: Unknown Name on a linux-gnu named compute-5-54.local by > mesaros Wed Oct 12 22:27:13 2011 > [25]PETSC ERROR: Libraries linked from > /home/mesaros/code/petsc-3.1-__p8/linux-gnu-mpi-debug-__complex/lib > [25]PETSC ERROR: Configure run at Thu Jun 30 12:30:13 2011 > [25]PETSC ERROR: Configure options --with-scalar-type=complex > --with-64-bit-indices=1 --download-f-blas-lapack=yes --download-mpich=1 > --with-mpi-exec=/usr/publ$ > [25]PETSC ERROR: > ------------------------------__------------------------------__------------ > [25]PETSC ERROR: PetscMallocAlign() line 49 in src/sys/memory/mal.c > [25]PETSC ERROR: PetscTrMallocDefault() line 192 in src/sys/memory/mtr.c > [25]PETSC ERROR: PetscPostIrecvInt() line 250 in src/sys/utils/mpimesg.c > > > Looks like you are trying to half a billion (--with-64-bit-indices) or a > billion entries. How are you computing the nonzeros? Is it possible that > many processes are computing entries that need to go to one process? My code has a function which, when given a fixed matrix row index, calculates one by one values of all non-zero matrix elements in this row, while also returning the column index of each of these elements. So, all I need to do is put that the 1st process has a loop for row index going from 1 to 60k, the 2nd process has the loop going from 60k+1 to 120k, etc. Inside the loops, the row index is given, so it finds the non-zero elements and their column indices. 
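A minimal sketch of that owner-computes loop with explicit preallocation, which is usually the first thing to pin down when assembly runs out of memory; get_row(), d_nnz, o_nnz and MAXCOLS stand in for the user's own row function, per-row nonzero counts (diagonal and off-diagonal block) and row-width bound, and are not PETSc API:

  PetscInt    rstart,rend,i,ncols,cols[MAXCOLS];
  PetscScalar vals[MAXCOLS];
  ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);        /* exact counts avoid mallocs during MatSetValues */
  ierr = MatSetOption(A,MAT_NO_OFF_PROC_ENTRIES,PETSC_TRUE);CHKERRQ(ierr);  /* safe when only owned rows are set, as here */
  ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    get_row(i,&ncols,cols,vals);                                            /* user code: nonzeros of global row i */
    ierr = MatSetValues(A,1,&i,ncols,cols,vals,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);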
> > [25]PETSC ERROR: MatStashScatterBegin_Private() line 498 in > src/mat/utils/matstash.c > [25]PETSC ERROR: MatAssemblyBegin_MPIAIJ() line 474 in > src/mat/impls/aij/mpi/mpiaij.c > [25]PETSC ERROR: MatAssemblyBegin() line 4564 in > src/mat/interface/matrix.c > > > Now, how much memory would I need per node for this assembly to > work? Is it "Memory allocated" + "Memory requested", which is around > 5.5GB? And did it fail when "Memory used by process" reached ~3.5GB, > which was the limit for the job? Usually, breaking the limit on > memory per node kills the job, and PETSc then doesn't give the above > "Out of memory" output. > > Additionally, can I simply estimate the additional memory needed for > SLEPc to find ~100 lowest eigenvalues? > > > Start with what is typically needed by PETSc (the matrix, the setup cost > is for your preconditioner, the vectors for the Krylov method) and add > 100*n*sizeof(PetscScalar). To clarify, is "n" the matrix dimension? So that's memory for 100 vectors (the Krylov space) plus the memory already taken by PETSc when assembly is done? Thanks a lot! From behzad.baghapour at gmail.com Mon Nov 21 23:58:49 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Tue, 22 Nov 2011 09:28:49 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: I run the code with given options and the output is: Linear solve converged due to CONVERGED_RTOL iterations 2 Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH So, Is this may related to updating residual during Line Search?, which I know the solution is well conditioned especially in first iterations. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 21 23:59:07 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Nov 2011 23:59:07 -0600 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: <4ECB3752.3080305@bc.edu> References: <4ECB2979.2080502@bc.edu> <4ECB3752.3080305@bc.edu> Message-ID: On Mon, Nov 21, 2011 at 23:46, Andrej Mesaros wrote: > My code has a function which, when given a fixed matrix row index, > calculates one by one values of all non-zero matrix elements in this row, > while also returning the column index of each of these elements. So, all I > need to do is put that the 1st process has a loop for row index going from > 1 to 60k, the 2nd process has the loop going from 60k+1 to 120k, etc. > Inside the loops, the row index is given, so it finds the non-zero elements > and their column indices. > That is fine, but the error circumnstance indicated that a huge number of entries were being computed by a different process. It is possible that memory was corrupted earlier. You can try smaller problems with valgrind or -malloc_debug -malloc_dump, but if these don't work, it could be difficult to track down. To clarify, is "n" the matrix dimension? So that's memory for 100 vectors >> (the Krylov space) plus the memory already taken by PETSc when assembly is >> done? > > Yeah, roughly that used by PETSc plus those additional vectors needed by SLEPc. -------------- next part -------------- An HTML attachment was scrubbed... 
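As a rough check on those numbers: with about 60k locally owned rows and complex double precision (16 bytes per scalar), 100 extra basis vectors cost on the order of 100 * 60000 * 16 bytes, i.e. roughly 96 MB per process, on top of the matrix itself. The quantities behind the error message can also be queried at run time; a small hedged sketch:

  PetscLogDouble mal,rss;
  ierr = PetscMallocGetCurrentUsage(&mal);CHKERRQ(ierr);   /* bytes obtained through PetscMalloc(): the "Memory allocated" number */
  ierr = PetscMemoryGetCurrentUsage(&rss);CHKERRQ(ierr);   /* resident set size: the "Memory used by process" number */
  ierr = PetscPrintf(PETSC_COMM_WORLD,"PetscMalloc %g bytes, resident %g bytes\n",mal,rss);CHKERRQ(ierr);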
URL: From jedbrown at mcs.anl.gov Tue Nov 22 00:00:37 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Nov 2011 00:00:37 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Mon, Nov 21, 2011 at 23:58, behzad baghapour wrote: > Linear solve converged due to CONVERGED_RTOL iterations 2 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH > > So, Is this may related to updating residual during Line Search?, which I > know the solution is well conditioned especially in first iterations. > Suspect an incorrect Jacobian. Add -snes_mf_operator and see what happens. Follow the list of instructions in the FAQ I sent in the last message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Tue Nov 22 00:04:19 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Tue, 22 Nov 2011 09:34:19 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: Thanks. I will follow the instructions in FAQ. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaohl1986 at gmail.com Tue Nov 22 00:10:29 2011 From: xiaohl1986 at gmail.com (Hailong Xiao) Date: Tue, 22 Nov 2011 14:10:29 +0800 Subject: [petsc-users] How can I zero the vec got from DMGetGlobalVector? Message-ID: Hi How can I zero the vec got from DMGetGlobalVector? for example after I called DMGetGlobalVector(dm, &g); How can I zero g globally? -- Hailong -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 22 00:13:26 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Nov 2011 00:13:26 -0600 Subject: [petsc-users] How can I zero the vec got from DMGetGlobalVector? In-Reply-To: References: Message-ID: On Tue, Nov 22, 2011 at 00:10, Hailong Xiao wrote: > How can I zero the vec got from DMGetGlobalVector? > > for example after I called DMGetGlobalVector(dm, &g); > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Vec/VecZeroEntries.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Tue Nov 22 00:29:38 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 22 Nov 2011 07:29:38 +0100 Subject: [petsc-users] Copy data from MATMPIAIJ to MATSEQDENSE Message-ID: <20111122072938.hlg720zoys08sscg@mail.zih.tu-dresden.de> Whats the best way to copy data from a MATMPIAIJ to a local MATSEQDENSE? The problem with MatGetSubMatrix seems to be that both matrices must have the same communicator. But the MATMPIAIJ has, say, PETSC_COMM_WORLD, but the MATSEQDENSE has PETSC_COMM_SELF. If it is simpler to extract subvectors from MATMPIAIJ to VECSEQ, this would to the same for me. Thanks, Thomas From jroman at dsic.upv.es Tue Nov 22 01:40:44 2011 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 22 Nov 2011 08:40:44 +0100 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: <4ECB3752.3080305@bc.edu> References: <4ECB2979.2080502@bc.edu> <4ECB3752.3080305@bc.edu> Message-ID: <6787718F-A318-4A37-A7AD-D97EF1BB2444@dsic.upv.es> El 22/11/2011, a las 06:46, Andrej Mesaros escribi?: >> Additionally, can I simply estimate the additional memory needed for >> SLEPc to find ~100 lowest eigenvalues? >> Start with what is typically needed by PETSc (the matrix, the setup cost is for your preconditioner, the vectors for the Krylov method) and add 100*n*sizeof(PetscScalar). > > To clarify, is "n" the matrix dimension? 
So that's memory for 100 vectors (the Krylov space) plus the memory already taken by PETSc when assembly is done? By default, SLEPc uses a basis of 2*nev vectors, so 200 in your case. (See the value of ncv with -eps_view). If you want to reduce this size, give a value to parameter mpd. For instance, mpd=30 and nev=100 will give ncv=130. See EPSSetDimensions(). Jose From zonexo at gmail.com Tue Nov 22 02:37:25 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 22 Nov 2011 09:37:25 +0100 Subject: [petsc-users] sqrt: DOMAIN error during run In-Reply-To: <82AC4B33-6FB1-4573-9D87-599A9C86CA49@mcs.anl.gov> References: <4ECAA23A.7050704@gmail.com> <82AC4B33-6FB1-4573-9D87-599A9C86CA49@mcs.anl.gov> Message-ID: <4ECB5F45.2090209@gmail.com> Thanks Barry and Jed, I found that the error is due to a divide by zero. Some variables had zero values and were not calculated. Hence division by these values gave NaN. Strangely, running thru the debugger gave zero instead. Yours sincerely, TAY wee-beng On 21/11/2011 8:33 PM, Barry Smith wrote: > If it runs in one mode (like in a debugger) but crashes in another (like not in the debugger) this is often a symptom of memory corruption: http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > Barry > > On Nov 21, 2011, at 1:10 PM, TAY wee-beng wrote: > >> Hi, >> >> I have extended my 2D CFD code to 3D. >> >> When I tried to do a step by step debug using Compaq visual Fortran (CVF) in windows, there is no error. >> >> However, if I execute the code directly, the following error happens: >> >> - sqrt: DOMAIN error >> Image PC Routine Line Source >> ibm2d_high_Re_wo_ 00B95C69 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00B95AC7 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00B95C21 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00B9CEA8 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00BBF61D Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00BB7E30 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00BB1B98 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00B79319 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 0085C29A Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 0063C656 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 008EC67D Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 005C95DC Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00549D2C Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00500BEE Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00428FE3 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00463073 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00BC5309 Unknown Unknown Unknown >> ibm2d_high_Re_wo_ 00BB1035 Unknown Unknown Unknown >> kernel32.dll 7C817067 Unknown Unknown Unknown >> >> Incrementally linked image--PC correlation disabled. >> >> I repeated running the code in Linux and calling "KSPGetConvergedReason" gives: >> >> P Diverged >> >> Does anyone know why this error occurs? It's strange that the step by step debug works and gives a reasonable solution but executing the code gives error. >> >> Also, when changing from 2D to 3D, is there any additional stuff I need to add in besides adding another dimension? >> >> -- >> Yours sincerely, >> >> TAY wee-beng >> From behzad.baghapour at gmail.com Tue Nov 22 03:19:15 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Tue, 22 Nov 2011 12:49:15 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: OK. Here I think my jacobian or updating rule may not set properly. So, Is there any way to update the solution in Newton iteration "without" Linesearch or TrustZone. 
I mean the normal full update? x^(n+1) = x^n + dx ?? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Witkowski at tu-dresden.de Tue Nov 22 03:51:19 2011 From: Thomas.Witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 22 Nov 2011 10:51:19 +0100 Subject: [petsc-users] Copy data from MATMPIAIJ to MATSEQDENSE In-Reply-To: <20111122072938.hlg720zoys08sscg@mail.zih.tu-dresden.de> References: <20111122072938.hlg720zoys08sscg@mail.zih.tu-dresden.de> Message-ID: <20111122105119.8h7kmw0em80ggg8k@mail.zih.tu-dresden.de> Zitat von Thomas Witkowski : > Whats the best way to copy data from a MATMPIAIJ to a local > MATSEQDENSE? The problem with MatGetSubMatrix seems to be that both > matrices must have the same communicator. But the MATMPIAIJ has, say, > PETSC_COMM_WORLD, but the MATSEQDENSE has PETSC_COMM_SELF. If it is > simpler to extract subvectors from MATMPIAIJ to VECSEQ, this would to > the same for me. I found a solution myself: I call MatGetRow on each rank and then use the values to put them into the dense local matrix. Thomas From B.Sanderse at cwi.nl Tue Nov 22 04:40:50 2011 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Tue, 22 Nov 2011 11:40:50 +0100 Subject: [petsc-users] binary writing to tecplot Message-ID: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> Hello all, I am trying to output parallel data in binary format that can be read by Tecplot. For this I use the TecIO library from Tecplot, which provides a set of Fortran/C subroutines. With these subroutines it is easy to write binary files that can be read by Tecplot, but, as far as I can see, they cannot be directly used with parallel Petsc vectors. On a single processor everything works fine, but on more processors it fails. I am thinking now of different workarounds: 1. Create a sequential vector from the parallel vector, and call the TecIO subroutines with this sequential vector. For large problems this will probably be too slow, and actually I don't know how to copy the content of a parallel vector into a sequential one. 2. Write a tecplot file from each processor, with the data from that processor. The problem is that this requires combining the files afterwards, and this is probably not easy (certainly not in binary format?). 3. Change the tecplot subroutines or write my own binary output with VecView(). It might not be easy to get the output right so that Tecplot understands it. Do you have suggestions? Are there other possibilities? Thanks, Benjamin From jedbrown at mcs.anl.gov Tue Nov 22 06:57:46 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Nov 2011 06:57:46 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Tue, Nov 22, 2011 at 03:19, behzad baghapour wrote: > OK. Here I think my jacobian or updating rule may not set properly. > > So, Is there any way to update the solution in Newton iteration "without" > Linesearch or TrustZone. I mean the normal full update? x^(n+1) = x^n + dx > ?? > -snes_ls_type basic (or -snes_ls_type basicnonorms) Alternatively, you can use -snes_ls_monitor to see what is happening in the line search. Perhaps your function is using data from the wrong place (e.g. using the Vec stored in a user context/global instead of the one that is passed in)? -------------- next part -------------- An HTML attachment was scrubbed... 
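If the same behaviour is wanted without relying on the command line, the options above can also be hard-wired before SNESSetFromOptions() is called; a small sketch, with the option names as given above for this generation of PETSc:

  ierr = PetscOptionsSetValue("-snes_ls_type","basic");CHKERRQ(ierr);  /* plain full Newton step, no line search */
  /* or "basicnonorms"; add -snes_ls_monitor and -snes_converged_reason on the command line to watch what happens */
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);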
URL: From jedbrown at mcs.anl.gov Tue Nov 22 07:04:03 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Nov 2011 07:04:03 -0600 Subject: [petsc-users] binary writing to tecplot In-Reply-To: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> Message-ID: On Tue, Nov 22, 2011 at 04:40, Benjamin Sanderse wrote: > Hello all, > > I am trying to output parallel data in binary format that can be read by > Tecplot. For this I use the TecIO library from Tecplot, which provide a set > of Fortran/C subroutines. With these subroutines it is easy to write binary > files that can be read by Tecplot, but, as far as I can see, they can not > be directly used with parallel Petsc vectors. On a single processor > everything works fine, but on more processors it fails. > I am thinking now of different workarounds: > > 1. Create a sequential vector from the parallel vector, and call the TecIO > subroutines with this sequential vector. For large problems this will > probably be too slow, and actually I don't know how to copy the content of > a parallel vector into a sequential one. > 2. Write a tecplot file from each processor, with the data from that > processor. The problem is that this requires combining the files > afterwards, and this is probably not easy (certainly not in binary format?). > 3. Change the tecplot subroutines or write own binary output with > VecView(). It might not be easy to get the output right so that Tecplot > understands it. > Are you using DMDA or do you have your own unstructured mesh? Supposedly Tecplot can read HDF5, so you might be able to use PETSc's parallel HDF5 viewer and still read it with Tecplot. What other formats does Tecplot support and how do they recommend handling parallelism? The open source visualization packages have put a great deal of effort into supporting many different data formats (including Tecplot's native format). It would be rather unfair if Tecplot didn't support some of the open source formats. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Tue Nov 22 07:19:39 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Tue, 22 Nov 2011 16:49:39 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: Thanks a lot. I'm watching to find my mistake. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Nov 22 07:44:51 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Nov 2011 07:44:51 -0600 Subject: [petsc-users] Copy data from MATMPIAIJ to MATSEQDENSE In-Reply-To: <20111122105119.8h7kmw0em80ggg8k@mail.zih.tu-dresden.de> References: <20111122072938.hlg720zoys08sscg@mail.zih.tu-dresden.de> <20111122105119.8h7kmw0em80ggg8k@mail.zih.tu-dresden.de> Message-ID: <02295E69-0076-42C8-86C4-8148D2E35D89@mcs.anl.gov> On Nov 22, 2011, at 3:51 AM, Thomas Witkowski wrote: > Zitat von Thomas Witkowski : > >> Whats the best way to copy data from a MATMPIAIJ to a local >> MATSEQDENSE? The problem with MatGetSubMatrix seems to be that both >> matrices must have the same communicator. But the MATMPIAIJ has, say, >> PETSC_COMM_WORLD, but the MATSEQDENSE has PETSC_COMM_SELF. If it is >> simpler to extract subvectors from MATMPIAIJ to VECSEQ, this would to >> the same for me. > > I would a solution by myself: I call MatGetRow on each rank and than use the values to > put them into the dense local matrix. 
> Why not MatConvert() or MatGetSubMatrices()? Why convert a parallel matrix to sequential? Barry > Thomas From B.Sanderse at cwi.nl Tue Nov 22 07:53:51 2011 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Tue, 22 Nov 2011 14:53:51 +0100 Subject: [petsc-users] binary writing to tecplot In-Reply-To: References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> Message-ID: <93830961-EBAF-4573-B547-367022339205@cwi.nl> Thanks for the quick reply. Tecplot is able to read HDF5 files according to the possible types listed in its dataloader, so that seems like an interesting option. But where do I find Petsc documentation on writing HDF5 files? Is this in the development version? By the way, I am using structured Cartesian grids, and all operations such as interpolation and differentiation are carried out as matrix-vector products, which also include boundary conditions. I am not using DAs at the moment, would it be better to use those? Op 22 nov 2011, om 14:04 heeft Jed Brown het volgende geschreven: > On Tue, Nov 22, 2011 at 04:40, Benjamin Sanderse wrote: > Hello all, > > I am trying to output parallel data in binary format that can be read by Tecplot. For this I use the TecIO library from Tecplot, which provide a set of Fortran/C subroutines. With these subroutines it is easy to write binary files that can be read by Tecplot, but, as far as I can see, they can not be directly used with parallel Petsc vectors. On a single processor everything works fine, but on more processors it fails. > I am thinking now of different workarounds: > > 1. Create a sequential vector from the parallel vector, and call the TecIO subroutines with this sequential vector. For large problems this will probably be too slow, and actually I don't know how to copy the content of a parallel vector into a sequential one. > 2. Write a tecplot file from each processor, with the data from that processor. The problem is that this requires combining the files afterwards, and this is probably not easy (certainly not in binary format?). > 3. Change the tecplot subroutines or write own binary output with VecView(). It might not be easy to get the output right so that Tecplot understands it. > > Are you using DMDA or do you have your own unstructured mesh? Supposedly Tecplot can read HDF5, so you might be able to use PETSc's parallel HDF5 viewer and still read it with Tecplot. > > What other formats does Tecplot support and how do they recommend handling parallelism? The open source visualization packages have put a great deal of effort into supporting many different data formats (including Tecplot's native format). It would be rather unfair if Tecplot didn't support some of the open source formats. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 22 08:15:02 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Nov 2011 08:15:02 -0600 Subject: [petsc-users] binary writing to tecplot In-Reply-To: <93830961-EBAF-4573-B547-367022339205@cwi.nl> References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> <93830961-EBAF-4573-B547-367022339205@cwi.nl> Message-ID: On Tue, Nov 22, 2011 at 07:53, Benjamin Sanderse wrote: > Tecplot is able to read HDF5 files according to the possible types listed > in its dataloader, so that seems like an interesting option. > You will have to find their documentation of how to write the HDF5 file or how to specify what is in the file (XDMF is one way). > But where do I find Petsc documentation on writing HDF5 files? 
Is this in > the development version? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Viewer/PetscViewerHDF5Open.html More generally, and for anything, it's often useful to search this page: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html ) > By the way, I am using structured Cartesian grids, and all operations such > as interpolation and differentiation are carried out as matrix-vector > products, which also include boundary conditions. I am not using DAs at the > moment, would it be better to use those? > Defining derivatives this way is often complicated at boundary conditions, but whatever works for you. Using DMDA allows PETSc viewers to put more semantic information into the file. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrej.mesaros at bc.edu Tue Nov 22 08:30:25 2011 From: andrej.mesaros at bc.edu (Andrej Mesaros) Date: Tue, 22 Nov 2011 09:30:25 -0500 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: References: <4ECB2979.2080502@bc.edu> <4ECB3752.3080305@bc.edu> Message-ID: <4ECBB201.9020205@bc.edu> Jed Brown wrote: > On Mon, Nov 21, 2011 at 23:46, Andrej Mesaros > wrote: > > My code has a function which, when given a fixed matrix row index, > calculates one by one values of all non-zero matrix elements in this > row, while also returning the column index of each of these > elements. So, all I need to do is put that the 1st process has a > loop for row index going from 1 to 60k, the 2nd process has the loop > going from 60k+1 to 120k, etc. Inside the loops, the row index is > given, so it finds the non-zero elements and their column indices. > > > That is fine, but the error circumnstance indicated that a huge number > of entries were being computed by a different process. It is possible > that memory was corrupted earlier. You can try smaller problems with > valgrind or -malloc_debug -malloc_dump, but if these don't work, it > could be difficult to track down. Given that I indeed call MatSetValues exclusively with row indices within the range determined by MatGetOwnershipRange, it should be impossible to generate entries on the wrong process, right? In such a case, could the corruption you mention be somehow due to the way I call other PETSc functions? Or is it at all possible that too small preallocation is making a problem? Also, what is the meaning of the memories in the report: "allocated", "used by process" and "requested"? Still don't understand, and couldn't find in the manual. > To clarify, is "n" the matrix dimension? So that's memory for > 100 vectors (the Krylov space) plus the memory already taken by > PETSc when assembly is done? > > > Yeah, roughly that used by PETSc plus those additional vectors needed by > SLEPc. From gdiso at ustc.edu Tue Nov 22 08:47:11 2011 From: gdiso at ustc.edu (Gong Ding) Date: Tue, 22 Nov 2011 22:47:11 +0800 (CST) Subject: [petsc-users] Anyone meet mumps crash on AIX? Message-ID: <7128779.337141321973231141.JavaMail.coremail@mail.ustc.edu> Hi, I am testing my code on AIX. petsc 3.2 with MUMPS 4.10 ALWAYS crash, both serial and parallel, with ERROR: 0031-250 task 0: Segmentation fault. However, other direct solver seems ok, i.e. superlu. The core file was checked by gdb, but only littel information: Program terminated with signal 11, Segmentation fault. 
#0 0x000000010150d8ec in dmumps_462 () I had checked the code with valgrind on Linux/AMD64, which also reported some memory problem (but never crash) ==10354== Invalid read of size 8 ==10354== at 0x57BE749: __intel_new_memcpy (in /opt/intel/Compiler/11.1/038/lib/intel64/libirc.so) ==10354== by 0x57A0AF5: _intel_fast_memcpy.J (in /opt/intel/Compiler/11.1/038/lib/intel64/libirc.so) ==10354== by 0x185F043: dmumps_363_ (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x186B114: dmumps_26_ (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x193D382: dmumps_.P (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x183EA6E: dmumps_f77_ (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x181EBE7: dmumps_c (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x1406EFE: MatLUFactorSymbolic_AIJMUMPS (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x1321009: MatLUFactorSymbolic (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x15FF044: PCSetUp_LU (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x15AB193: PCSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x16364DD: KSPSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== Address 0x9318d78 is 248 bytes inside a block of size 252 alloc'd ==10354== at 0x4A0776F: malloc (vg_replace_malloc.c:263) ==10354== by 0x5CB3C43: for_allocate (in /opt/intel/Compiler/11.1/038/lib/intel64/libifcore.so.5) ==10354== by 0x19CF545: mumps_754_ (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x186A392: dmumps_26_ (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x193D382: dmumps_.P (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x183EA6E: dmumps_f77_ (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x181EBE7: dmumps_c (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x1406EFE: MatLUFactorSymbolic_AIJMUMPS (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x1321009: MatLUFactorSymbolic (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x15FF044: PCSetUp_LU (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x15AB193: PCSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) ==10354== by 0x16364DD: KSPSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) Any suggestion? Gong Ding From jedbrown at mcs.anl.gov Tue Nov 22 08:50:16 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Nov 2011 08:50:16 -0600 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: <4ECBB201.9020205@bc.edu> References: <4ECB2979.2080502@bc.edu> <4ECB3752.3080305@bc.edu> <4ECBB201.9020205@bc.edu> Message-ID: On Tue, Nov 22, 2011 at 08:30, Andrej Mesaros wrote: > Given that I indeed call MatSetValues exclusively with row indices within > the range determined by MatGetOwnershipRange, it should be impossible to > generate entries on the wrong process, right? In such a case, could the > corruption you mention be somehow due to the way I call other PETSc > functions? Or is it at all possible that too small preallocation is making > a problem? > Try setting these options and running in debug mode. 
MatSetOption(A,MAT_NO_OFF_PROC_ENTRIES,PETSC_TRUE); MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > Also, what is the meaning of the memories in the report: "allocated", > obtained with PetscMalloc() http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscMallocGetCurrentUsage.html > "used by process" > resident set size returned by getrusage(), procfs, or similar http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscMemoryGetCurrentUsage.html > and "requested"? > Amount you are trying to allocate now. -------------- next part -------------- An HTML attachment was scrubbed... URL: From huyaoyu1986 at gmail.com Tue Nov 22 09:23:19 2011 From: huyaoyu1986 at gmail.com (Yaoyu Hu) Date: Tue, 22 Nov 2011 23:23:19 +0800 Subject: [petsc-users] Only one column is in the matrix after solving the inverse matrix Message-ID: Hi, everyone, I am new to PETSc, and I have just begun to use it together with slepc. It is really fantastic and I like it! It was not from me but one of my colleges who wanted to solve an inverse of a matrix, which had 400 rows and columns. I know that it is not good to design a algorithm that has a process for solving the inverse of a matrix. I just wanted to give it a try. However, things turned out wired. I followed the instructions on the FAQ web page of PETSc, the one using MatLUFractor() and MatMatSolve(). After I finished the coding, I tried the program. The result I got was a matrix which only has its first column but nothing else. I did not know what's happened. The following is the codes I used. It is kind of badly organized and the gmail web page ignores my 'Tab's, forgive me for that. Thanks ahead! ============Code Begins============= /* * MI.cpp * * Created on: Nov 22, 2011 * Author: huyaoyu */ static char help[] = "Give the inverse of a matrix.\n\n"; #include #include #include #undef __FUNCT__ #define __FUNCT__ "main" #define DIMENSION 2 int main(int argc,char **args) { PetscErrorCode ierr; PetscMPIInt size; Mat A,CA; Mat BB,XX; IS is_row; IS is_col; MatFactorInfo mfinfo; PetscMPIInt n; PetscScalar* array_scalar = new PetscScalar[DIMENSION*DIMENSION]; for(int i=0;i>temp_scalar; array_scalar[i*DIMENSION + j] = temp_scalar; } } in_file.close(); ierr = MatCreateSeqDense(PETSC_COMM_WORLD,DIMENSION,DIMENSION,PETSC_NULL,&A); CHKERRQ(ierr); ierr = MatCreateSeqDense(PETSC_COMM_WORLD,DIMENSION,DIMENSION,PETSC_NULL,&CA); CHKERRQ(ierr); ierr = MatSetValues(A,DIMENSION,idxm,DIMENSION,idxm,array_scalar,INSERT_VALUES); CHKERRQ(ierr); ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); ierr = MatDuplicate(A,MAT_COPY_VALUES,&CA); CHKERRQ(ierr); ierr = MatLUFactor(A,is_row,is_col,&mfinfo); CHKERRQ(ierr); ierr = MatMatSolve(A,BB,XX); CHKERRQ(ierr); ierr = PetscPrintf(PETSC_COMM_SELF,"The inverse of A matrix is:\n"); CHKERRQ(ierr); ierr = MatView(XX,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); ierr = MatMatMult(CA,XX,MAT_REUSE_MATRIX,PETSC_DEFAULT,&BB); ierr = MatView(BB,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); // destroy ierr = MatDestroy(&A); CHKERRQ(ierr); ierr = MatDestroy(&BB); CHKERRQ(ierr); ierr = MatDestroy(&XX); CHKERRQ(ierr); ierr = MatDestroy(&CA); CHKERRQ(ierr); ierr = PetscFinalize(); delete[] array_scalar; return 0; } ============Code Ends============== -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hzhang at mcs.anl.gov Tue Nov 22 10:08:25 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 22 Nov 2011 10:08:25 -0600 Subject: [petsc-users] Only one column is in the matrix after solving the inverse matrix In-Reply-To: References: Message-ID: You can take a look at examples petsc-3.2/src/mat/examples/tests/ex1.c, ex125.c and ex125.c. Hong On Tue, Nov 22, 2011 at 9:23 AM, Yaoyu Hu wrote: > Hi, everyone, > > I am new to PETSc, and I have just begun to use it together with slepc. It > is really fantastic and I like it! > > > > It?was not from me but one of my colleges who wanted to solve an inverse of > a matrix, which had 400 rows and columns. > > > > I know that it is not good to design a algorithm that has a process for > solving the inverse of a matrix. I just wanted to?give it a try. However, > things turned out wired. I followed the instructions on the FAQ web page of > PETSc, the one using MatLUFractor() and MatMatSolve(). After I finished the > coding, I tried the program. The result I got?was a matrix which only has > its first column but nothing else. I did not know what's happened. The > following is the codes I used. It is kind of badly organized and the gmail > web page ignores my 'Tab's, forgive me for that. > > > > Thanks ahead! > > > > ============Code Begins============= > > /* > ?* MI.cpp > ?* > ?*? Created on: Nov 22, 2011 > ?*????? Author: huyaoyu > ?*/ > > static char help[] = "Give the inverse of a matrix.\n\n"; > > #include > > #include > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > > #define DIMENSION 2 > > int main(int argc,char **args) > { > ?PetscErrorCode ierr; > ?PetscMPIInt??? size; > ?Mat A,CA; > ?Mat BB,XX; > ?IS is_row; > ?IS is_col; > ?MatFactorInfo mfinfo; > ?PetscMPIInt n; > ?PetscScalar* array_scalar = new PetscScalar[DIMENSION*DIMENSION]; > > ?for(int i=0;i ?{ > ??array_scalar[i] = 0.0; > ?} > ?for(int i=0;i ?{ > ??array_scalar[i*DIMENSION + i] = 1.0; > ?} > > ?int idxm[DIMENSION]; > ?for(int i=0;i ?{ > ??idxm[i] = i; > ?} > > ?PetscInitialize(&argc,&args,(char *)0,help); > ?ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); > ?if (size != 1) SETERRQ(PETSC_COMM_WORLD,1,"This is a uniprocessor example > only!"); > ?ierr = PetscOptionsGetInt(PETSC_NULL,"-n",&n,PETSC_NULL);CHKERRQ(ierr); > > ?// B & X > ?ierr = > MatCreateSeqDense(PETSC_COMM_WORLD,DIMENSION,DIMENSION,PETSC_NULL,&BB); > CHKERRQ(ierr); > ?ierr = > MatCreateSeqDense(PETSC_COMM_WORLD,DIMENSION,DIMENSION,PETSC_NULL,&XX); > CHKERRQ(ierr); > > ?ierr = > MatSetValues(BB,DIMENSION,idxm,DIMENSION,idxm,array_scalar,INSERT_VALUES); > CHKERRQ(ierr); > ?ierr = MatAssemblyBegin(BB,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > ?ierr = MatAssemblyEnd(BB,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > > ?ierr = ISCreateStride(MPI_COMM_WORLD,DIMENSION,0,1,&is_row); CHKERRQ(ierr); > ?ierr = ISCreateStride(MPI_COMM_WORLD,DIMENSION,0,1,&is_col); CHKERRQ(ierr); > ?ierr = MatFactorInfoInitialize(&mfinfo); CHKERRQ(ierr); > > ?// matrix A > ?std::fstream in_file; > ?double temp_scalar; > ?in_file.open("./A.txt",std::ifstream::in); > ?if(!in_file.good()) > ?{ > ??ierr = PetscPrintf(PETSC_COMM_SELF,"File open failed!\n"); CHKERRQ(ierr); > ??return 1; > ?} > > ?for(int i=0;i ?{ > ??for(int j=0;j ??{ > ???in_file>>temp_scalar; > > ???array_scalar[i*DIMENSION + j] = temp_scalar; > ??} > ?} > ?in_file.close(); > > ?ierr = > MatCreateSeqDense(PETSC_COMM_WORLD,DIMENSION,DIMENSION,PETSC_NULL,&A); > CHKERRQ(ierr); > ?ierr = > MatCreateSeqDense(PETSC_COMM_WORLD,DIMENSION,DIMENSION,PETSC_NULL,&CA); > 
CHKERRQ(ierr); > ?ierr = > MatSetValues(A,DIMENSION,idxm,DIMENSION,idxm,array_scalar,INSERT_VALUES); > CHKERRQ(ierr); > > ?ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > ?ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > > ?ierr = MatDuplicate(A,MAT_COPY_VALUES,&CA); CHKERRQ(ierr); > > ?ierr = MatLUFactor(A,is_row,is_col,&mfinfo); CHKERRQ(ierr); > ?ierr = MatMatSolve(A,BB,XX); CHKERRQ(ierr); > ?ierr = PetscPrintf(PETSC_COMM_SELF,"The inverse of A matrix is:\n"); > CHKERRQ(ierr); > ?ierr = MatView(XX,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); > > ?ierr = MatMatMult(CA,XX,MAT_REUSE_MATRIX,PETSC_DEFAULT,&BB); > ?ierr = MatView(BB,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); > > ?// destroy > ?ierr = MatDestroy(&A); CHKERRQ(ierr); > ?ierr = MatDestroy(&BB); CHKERRQ(ierr); > ?ierr = MatDestroy(&XX); CHKERRQ(ierr); > ?ierr = MatDestroy(&CA); CHKERRQ(ierr); > > ?ierr = PetscFinalize(); > > ?delete[] array_scalar; > > ?return 0; > } > > ============Code Ends============== From marc.medale at polytech.univ-mrs.fr Tue Nov 22 11:36:44 2011 From: marc.medale at polytech.univ-mrs.fr (Marc MEDALE) Date: Tue, 22 Nov 2011 18:36:44 +0100 Subject: [petsc-users] Is LU decomposition performed in MUMPS accessible in PETSc? In-Reply-To: References: Message-ID: Hi, Let A be a square sparse matrix of size n x n (n of order up to 1 M). Is it possible to access in PETSc to the lower and upper decomposition coefficient triangles (L and U) of A factored in the MUMPS external package? Thank you very much for your help. Best regards. Marc MEDALE ========================================================= Polytech'Marseille, D?partement de M?canique Energ?tique Laboratoire IUSTI, UMR 6595 CNRS-Universit? Aix-Marseille Technopole de Chateau-Gombert, 5 rue Enrico Fermi 13453 MARSEILLE, Cedex 13, FRANCE --------------------------------------------------------------------------------------------------- Tel : +33 (0)4.91.10.69.14 ou 38 Fax : +33 (0)4.91.10.69.69 e-mail : Marc.Medale at polytech.univ-mrs.fr ========================================================= From hzhang at mcs.anl.gov Tue Nov 22 11:59:29 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 22 Nov 2011 11:59:29 -0600 Subject: [petsc-users] Is LU decomposition performed in MUMPS accessible in PETSc? In-Reply-To: References: Message-ID: Marc : > Let A be a square sparse matrix of size n x n (n of order up to 1 M). Is it > possible to access in PETSc to the lower and upper decomposition coefficient > triangles (L and U) of A factored in the MUMPS external package? You may run the code with option -mat_mumps_icntl_4 <0>: ICNTL(4): level of printing (0 to 4) (None) with '-mat_mumps_icntl_4 4', you'll see the data structure of analysis phase, ... If you need more, contact mumps developer. Hong > ========================================================= > Polytech'Marseille, D?partement de M?canique Energ?tique > Laboratoire IUSTI, UMR 6595 CNRS-Universit? 
Aix-Marseille > Technopole de Chateau-Gombert, 5 rue Enrico Fermi > 13453 MARSEILLE, Cedex 13, FRANCE > --------------------------------------------------------------------------------------------------- > Tel ?: +33 (0)4.91.10.69.14 ou 38 > Fax : +33 (0)4.91.10.69.69 > e-mail : Marc.Medale at polytech.univ-mrs.fr > ========================================================= > > > From medale at polytech.univ-mrs.fr Tue Nov 22 13:18:01 2011 From: medale at polytech.univ-mrs.fr (medale at polytech.univ-mrs.fr) Date: Tue, 22 Nov 2011 20:18:01 +0100 (CET) Subject: [petsc-users] Is LU decomposition performed in MUMPS accessible in PETSc? In-Reply-To: References: Message-ID: <49323.109.9.172.24.1321989481.squirrel@webmail.polytech.univ-mrs.fr> Thanks Hong. Marc MEDALE > Marc : > >> Let A be a square sparse matrix of size n x n (n of order up to 1 M). Is >> it >> possible to access in PETSc to the lower and upper decomposition >> coefficient >> triangles (L and U) of A factored in the MUMPS external package? > > You may run the code with option > -mat_mumps_icntl_4 <0>: ICNTL(4): level of printing (0 to 4) (None) > > with '-mat_mumps_icntl_4 4', you'll see the data structure of analysis > phase, ... > If you need more, contact mumps developer. > > Hong > >> ========================================================= >> Polytech'Marseille, D?partement de M?canique Energ?tique >> Laboratoire IUSTI, UMR 6595 CNRS-Universit? Aix-Marseille >> Technopole de Chateau-Gombert, 5 rue Enrico Fermi >> 13453 MARSEILLE, Cedex 13, FRANCE >> --------------------------------------------------------------------------------------------------- >> Tel ?: +33 (0)4.91.10.69.14 ou 38 >> Fax : +33 (0)4.91.10.69.69 >> e-mail : Marc.Medale at polytech.univ-mrs.fr >> ========================================================= >> >> >> > From bsmith at mcs.anl.gov Tue Nov 22 14:24:13 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Nov 2011 14:24:13 -0600 Subject: [petsc-users] Anyone meet mumps crash on AIX? In-Reply-To: <7128779.337141321973231141.JavaMail.coremail@mail.ustc.edu> References: <7128779.337141321973231141.JavaMail.coremail@mail.ustc.edu> Message-ID: I would report all the memory issues found with valgrind to the MUMPS developers and demand action on their part. Minor memory corruption on some systems can do no harm while on other systems it leads to crashes. Barry On Nov 22, 2011, at 8:47 AM, Gong Ding wrote: > Hi, > I am testing my code on AIX. > petsc 3.2 with MUMPS 4.10 ALWAYS crash, both serial and parallel, with ERROR: 0031-250 task 0: Segmentation fault. > However, other direct solver seems ok, i.e. superlu. > > The core file was checked by gdb, but only littel information: > Program terminated with signal 11, Segmentation fault. 
> #0 0x000000010150d8ec in dmumps_462 () > > I had checked the code with valgrind on Linux/AMD64, which also reported some memory problem (but never crash) > ==10354== Invalid read of size 8 > ==10354== at 0x57BE749: __intel_new_memcpy (in /opt/intel/Compiler/11.1/038/lib/intel64/libirc.so) > ==10354== by 0x57A0AF5: _intel_fast_memcpy.J (in /opt/intel/Compiler/11.1/038/lib/intel64/libirc.so) > ==10354== by 0x185F043: dmumps_363_ (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x186B114: dmumps_26_ (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x193D382: dmumps_.P (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x183EA6E: dmumps_f77_ (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x181EBE7: dmumps_c (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x1406EFE: MatLUFactorSymbolic_AIJMUMPS (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x1321009: MatLUFactorSymbolic (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x15FF044: PCSetUp_LU (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x15AB193: PCSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x16364DD: KSPSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== Address 0x9318d78 is 248 bytes inside a block of size 252 alloc'd > ==10354== at 0x4A0776F: malloc (vg_replace_malloc.c:263) > ==10354== by 0x5CB3C43: for_allocate (in /opt/intel/Compiler/11.1/038/lib/intel64/libifcore.so.5) > ==10354== by 0x19CF545: mumps_754_ (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x186A392: dmumps_26_ (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x193D382: dmumps_.P (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x183EA6E: dmumps_f77_ (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x181EBE7: dmumps_c (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x1406EFE: MatLUFactorSymbolic_AIJMUMPS (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x1321009: MatLUFactorSymbolic (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x15FF044: PCSetUp_LU (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x15AB193: PCSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) > ==10354== by 0x16364DD: KSPSetUp (in /home/gdiso/genius_master/bin/genius.LINUX) > > Any suggestion? > > Gong Ding > > From knepley at gmail.com Tue Nov 22 16:55:19 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Nov 2011 16:55:19 -0600 Subject: [petsc-users] binary writing to tecplot In-Reply-To: <93830961-EBAF-4573-B547-367022339205@cwi.nl> References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> <93830961-EBAF-4573-B547-367022339205@cwi.nl> Message-ID: On Tue, Nov 22, 2011 at 7:53 AM, Benjamin Sanderse wrote: > Thanks for the quick reply. > Tecplot is able to read HDF5 files according to the possible types listed > in its dataloader, so that seems like an interesting option. But where do I > find Petsc documentation on writing HDF5 files? Is this in the development > version? > By the way, I am using structured Cartesian grids, and all operations such > as interpolation and differentiation are carried out as matrix-vector > products, which also include boundary conditions. I am not using DAs at the > moment, would it be better to use those? > DMDAs only manage a very special data layout on a Cartesian grid. It can store a given number of dofs at each vertex, If you fit that scenario, definitely use them. 
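As a rough, petsc-3.2-style sketch of that route (not taken from this thread): keep the per-vertex unknowns in a DMDA-managed Vec and write it with the parallel HDF5 viewer. The grid size, the 3 dof per vertex, and the output file name are made-up placeholders, and it assumes a PETSc build configured with HDF5 support.

#include <petscdmda.h>
#include <petscviewer.h>

int main(int argc,char **argv)
{
  PetscErrorCode ierr;
  DM             da;
  Vec            u;
  PetscViewer    viewer;

  ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
  /* 64 x 32 Cartesian grid, 3 unknowns per vertex, stencil width 1 */
  ierr = DMDACreate2d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,64,32,PETSC_DECIDE,PETSC_DECIDE,
                      3,1,PETSC_NULL,PETSC_NULL,&da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da,&u);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)u,"solution");CHKERRQ(ierr);
  /* ... fill u with the field values ... */
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solution.h5",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = VecView(u,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}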
Matt > Op 22 nov 2011, om 14:04 heeft Jed Brown het volgende geschreven: > > On Tue, Nov 22, 2011 at 04:40, Benjamin Sanderse wrote: > >> Hello all, >> >> I am trying to output parallel data in binary format that can be read by >> Tecplot. For this I use the TecIO library from Tecplot, which provide a set >> of Fortran/C subroutines. With these subroutines it is easy to write binary >> files that can be read by Tecplot, but, as far as I can see, they can not >> be directly used with parallel Petsc vectors. On a single processor >> everything works fine, but on more processors it fails. >> I am thinking now of different workarounds: >> >> 1. Create a sequential vector from the parallel vector, and call the >> TecIO subroutines with this sequential vector. For large problems this will >> probably be too slow, and actually I don't know how to copy the content of >> a parallel vector into a sequential one. >> 2. Write a tecplot file from each processor, with the data from that >> processor. The problem is that this requires combining the files >> afterwards, and this is probably not easy (certainly not in binary format?). >> 3. Change the tecplot subroutines or write own binary output with >> VecView(). It might not be easy to get the output right so that Tecplot >> understands it. >> > > Are you using DMDA or do you have your own unstructured mesh? Supposedly > Tecplot can read HDF5, so you might be able to use PETSc's parallel HDF5 > viewer and still read it with Tecplot. > > What other formats does Tecplot support and how do they recommend handling > parallelism? The open source visualization packages have put a great deal > of effort into supporting many different data formats (including Tecplot's > native format). It would be rather unfair if Tecplot didn't support some of > the open source formats. > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Nov 22 16:59:12 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Nov 2011 16:59:12 -0600 Subject: [petsc-users] Memory for matrix assembly In-Reply-To: References: <4ECB2979.2080502@bc.edu> <4ECB3752.3080305@bc.edu> <4ECBB201.9020205@bc.edu> Message-ID: On Tue, Nov 22, 2011 at 8:50 AM, Jed Brown wrote: > On Tue, Nov 22, 2011 at 08:30, Andrej Mesaros wrote: > >> Given that I indeed call MatSetValues exclusively with row indices within >> the range determined by MatGetOwnershipRange, it should be impossible to >> generate entries on the wrong process, right? In such a case, could the >> corruption you mention be somehow due to the way I call other PETSc >> functions? Or is it at all possible that too small preallocation is making >> a problem? >> > > Try setting these options and running in debug mode. > > MatSetOption(A,MAT_NO_OFF_PROC_ENTRIES,PETSC_TRUE); > MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > I believe -info gives info on communicated entries after MatAssemble. 
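For illustration, a minimal sketch of where those two calls might sit in an assembly sequence (the matrix A, its preallocation, and the surrounding error handling are placeholders, not code from this thread):

  ierr = MatSetOption(A,MAT_NO_OFF_PROC_ENTRIES,PETSC_TRUE);CHKERRQ(ierr);        /* promise that each process only sets entries in rows it owns */
  ierr = MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); /* error out instead of silently mallocing when the preallocation is exceeded */
  /* ... MatSetValues() calls restricted to the MatGetOwnershipRange() rows ... */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

Running with -info then reports how many entries had to be stashed and communicated during assembly.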
Matt > > >> >> Also, what is the meaning of the memories in the report: "allocated", > > obtained with PetscMalloc() > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscMallocGetCurrentUsage.html > >> "used by process" > > resident set size returned by getrusage(), procfs, or similar > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/PetscMemoryGetCurrentUsage.html > >> and "requested"? > > Amount you are trying to allocate now. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From huyaoyu1986 at gmail.com Tue Nov 22 23:13:15 2011 From: huyaoyu1986 at gmail.com (huyaoyu) Date: Wed, 23 Nov
2011 13:13:15 +0800 Subject: [petsc-users] Only one column is in the matrix after solving the inverse matrix In-Reply-To: References: Message-ID: <1322025195.3509.16.camel@hyyv460> Hong, Thank you for your help! I tried the example code of petsc-3.2/src/mat/examples/tests/ex1.c. But the result is the same. I attach the modified code and the results here. I don't know what is the problem exactly. By the way I am using PETSc on a 64bit ubuntu 11.04. I complied MPICH2 myself, made symbolic links in the /usr/bin for mpicc, mpiexec etc, and configured PETSc use the command line: ./configure --with-cc=mpicc --with-fc=mpif90 --download-f-blas-lapack=1 I break some lines of the code to avoid line wrapping in Evolution mail program. > You can take a look at examples > petsc-3.2/src/mat/examples/tests/ex1.c, ex125.c and ex125.c. > > Hong > > On Tue, Nov 22, 2011 at 9:23 AM, Yaoyu Hu wrote: > > Hi, everyone, > > > > I am new to PETSc, and I have just begun to use it together with slepc. It > > is really fantastic and I like it! > > > > > > > > It?was not from me but one of my colleges who wanted to solve an inverse of > > a matrix, which had 400 rows and columns. > > > > > > > > I know that it is not good to design a algorithm that has a process for > > solving the inverse of a matrix. I just wanted to?give it a try. However, > > things turned out wired. I followed the instructions on the FAQ web page of > > PETSc, the one using MatLUFractor() and MatMatSolve(). After I finished the > > coding, I tried the program. The result I got?was a matrix which only has > > its first column but nothing else. I did not know what's happened. The > > following is the codes I used. > > =============Modified code begins================= static char help[] = "Give the inverse of a matrix.\n\n"; #include #include #include #undef __FUNCT__ #define __FUNCT__ "main" #define DIMENSION 2 int main(int argc,char **args) { PetscErrorCode ierr; PetscMPIInt size; Mat A,CA; // CA is the copy of A Mat RHS,XX; // XX is the inverse result MatFactorInfo mfinfo; PetscScalar* array_scalar = new PetscScalar[DIMENSION*DIMENSION]; PetscScalar* array_for_RHS; // clean the values of array_scalar for(int i=0;i>temp_scalar; array_scalar[i*DIMENSION + j] = temp_scalar; } } in_file.close(); // matrices creation and initialization ierr = MatCreateSeqDense(PETSC_COMM_WORLD, DIMENSION,DIMENSION,PETSC_NULL,&A); CHKERRQ(ierr); ierr = MatCreateSeqDense(PETSC_COMM_WORLD, DIMENSION,DIMENSION,PETSC_NULL,&CA); CHKERRQ(ierr); ierr = MatSetValues(A,DIMENSION,idxm,DIMENSION,idxm, array_scalar,INSERT_VALUES); CHKERRQ(ierr); ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); // create CA ierr = MatDuplicate(A,MAT_COPY_VALUES,&CA); CHKERRQ(ierr); ierr = PetscPrintf(PETSC_COMM_SELF, "The A matrix is:\n"); CHKERRQ(ierr); ierr = MatView(A,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); // in-place LUFactor ierr = MatLUFactor(A,0,0,0); CHKERRQ(ierr); // solve for the inverse matrix XX ierr = MatMatSolve(A,RHS,XX); CHKERRQ(ierr); ierr = PetscPrintf(PETSC_COMM_SELF, "The inverse of A matrix is:\n"); CHKERRQ(ierr); ierr = MatView(XX,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); ierr = PetscPrintf(PETSC_COMM_SELF, "The multiplied result is:\n"); CHKERRQ(ierr); ierr = MatMatMult(CA,XX,MAT_REUSE_MATRIX,PETSC_DEFAULT,&RHS); ierr = MatView(RHS,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); // destroy ierr = MatDestroy(&A); CHKERRQ(ierr); ierr = MatDestroy(&RHS); CHKERRQ(ierr); ierr = MatDestroy(&XX); CHKERRQ(ierr); 
ierr = MatDestroy(&CA); CHKERRQ(ierr); ierr = PetscFinalize(); delete[] array_scalar; return 0; } ===========Modified code ends============== The input file(DIMENSION = 2) and the results are A.txt: 2.0 3.0 -20.0 55.0 Results: The A matrix is: Matrix Object: 1 MPI processes type: seqdense 2.0000000000000000e+00 3.0000000000000000e+00 -2.0000000000000000e+01 5.5000000000000000e+01 The inverse of A matrix is: Matrix Object: 1 MPI processes type: seqdense 3.2352941176470590e-01 -0.0000000000000000e+00 1.1764705882352941e-01 0.0000000000000000e+00 The multiplied result is: Matrix Object: 1 MPI processes type: seqdense 1.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 From behzad.baghapour at gmail.com Wed Nov 23 00:47:23 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 23 Nov 2011 10:17:23 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: As I want to keep my code structure as before applying SNES, I defined a FieldContext and pass my data like element and face values into this context by pointing there addresses of elements and faces of the field like this: typedef struct { element* e; face* f; flow* flw; }FieldCtx; FC.e = e; FC.f = f; FC.flw = flw; -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Nov 23 00:49:42 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 00:49:42 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 00:47, behzad baghapour wrote: > As I want to keep my code structure as before applying SNES, I defined a > FieldContext and pass my data like element and face values into this > context by pointing there addresses of elements and faces of the field like > this: > > typedef struct { > element* e; face* f; flow* flw; }FieldCtx; > > FC.e = e; > FC.f = f; > FC.flw = flw; > This is mesh topology and geometry or are these state variables? If the latter, then you can't do it this way. You have to evaluate the residual at the state passed into your SNES residual function, not based on whatever is in this other struct. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Wed Nov 23 00:50:15 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 23 Nov 2011 10:20:15 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: As I want to keep my code structure as before applying SNES, I defined a FieldContext and pass my data like element and face values into this context by pointing there addresses of elements and faces of the field like this: typedef struct { element* e; face* f; flow* flw; }FieldCtx; and then: FC.e = e; FC.f = f; FC.flw = flw; and pass FieldContext into SNESSetFunction and SNESSetJacobian. Here I checked that all my routine work as before except one (related to convective flux) which I really do not know why the values just for this routine is not match as before. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Wed Nov 23 00:55:18 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 23 Nov 2011 10:25:18 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: These are state variables which I solve them by KSP and iterate with a full Newton method. 
I though if I point the values of defined FieldCtx to my field state then the routines of residual and jacobian my progressively call them in SNES solution. Is that wrong ? How should I do then ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Nov 23 00:57:20 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 00:57:20 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 00:55, behzad baghapour wrote: > These are state variables which I solve them by KSP and iterate with a > full Newton method. I though if I point the values of defined FieldCtx to > my field state then the routines of residual and jacobian my progressively > call them in SNES solution. Is that wrong ? How should I do then ? The field variables go in the Vec. That is the state at which you need to evaluate the residual (and, optionally, the Jacobian). SNES does not ever look at what you put inside the context. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Wed Nov 23 01:06:47 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 23 Nov 2011 10:36:47 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: Thanks for your help. Here is my procedure: 1- Define context ( as in previous email ) 2- Related the addresses ( as in previous email ) 4- create the context FieldCtx FC; 3- call residual routine: ierr = SNESSetFunction( snes, r, _petsc_residualVector, (void*)&FC ); CHKERRQ( ierr ); 4- define residual routine: ( x is solution and r is residual vector ) PetscErrorCode solver::_petsc_residualVector( SNES snes, Vec x, Vec r, void* ctx ) { ierr = VecGetArray( x, &xx ); CHKERRQ( ierr ); for( c=0; ce[c].Q[p] = xx[c*(noe*num)+p]; ierr = VecRestoreArray( x, &xx ); CHKERRQ( ierr ); interiorFlux( FC->flw, FC->e ); faceFlux ( FC->flw, FC->f, FC->e ); ierr = VecGetArray( r, &rr ); CHKERRQ( ierr ); for( c=0; ce[c].R[p]; } ierr = VecRestoreArray( r, &rr ); CHKERRQ( ierr ); } 5- same procedure for Jacobian 6- set opitions 7- solve with SNESSolve() Is it look right with SNES ??? Thanks again for your attention. -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Wed Nov 23 01:13:39 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Wed, 23 Nov 2011 10:43:39 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: In addition I set FieldCtx* FC = (FieldCtx*) ctx; in the _petsc_resiadualVector before any calculations Thanks a lot -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Nov 23 05:43:21 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 23 Nov 2011 12:43:21 +0100 Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? Message-ID: In my procedure considerable time is spent to partition the domain. When using MUMPS as a solver for my matrix I see the message: "Ordering based on METIS" and this seems to take a lot of time. Is it not possible to take over the existing ordering, or is this ordering something else than matrix partitioning that I had already performed? 
Thanks for any clarifications, Dominik From jedbrown at mcs.anl.gov Wed Nov 23 07:03:18 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 07:03:18 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 01:06, behzad baghapour wrote: > for( c=0; c FC->e[c].Q[p] = xx[c*(noe*num)+p]; > You haven't told me about "noe" or "num". Do you mean for this to read xx[c*tot+p]? > > ierr = VecRestoreArray( x, &xx ); CHKERRQ( ierr ); > > interiorFlux( FC->flw, FC->e ); > faceFlux ( FC->flw, FC->f, FC->e ); > The first of these should set FC->e (if you are adding into it, then you need to zero it first) and the second should add into it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Nov 23 07:18:45 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 07:18:45 -0600 Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 05:43, Dominik Szczerba wrote: > In my procedure considerable time is spent to partition the domain. > When using MUMPS as a solver for my matrix I see the message: > > "Ordering based on METIS" > This is an ordering to reduce fill in factorization, not to to partition the domain. Last I heard, symbolic factorization was done in serial, which explains why you find it taking a lot of time. The right hand side and solution vectors are also passed on rank 0, which presents another inefficiency/imbalance and memory bottleneck. Talk to the MUMPS developers or use a different package if you don't like these properties. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Nov 23 07:22:51 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 23 Nov 2011 14:22:51 +0100 Subject: [petsc-users] MatCreateScatter documentation In-Reply-To: References: Message-ID: Would not MatConvert do the same, but easier? Thanks Dominik On Mon, Nov 21, 2011 at 4:41 PM, Jed Brown wrote: > On Mon, Nov 21, 2011 at 09:36, Dominik Szczerba > wrote: >> >> I am thinking of the simplest but not necessarily efficient way to >> pull a distributed matrix onto one CPU... I would appreciate a pointer >> or two... I know I can dump it to disk via MatView, but without disk >> IO would be better. > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatGetSubMatrices.html From jedbrown at mcs.anl.gov Wed Nov 23 07:25:54 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 07:25:54 -0600 Subject: [petsc-users] MatCreateScatter documentation In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 07:22, Dominik Szczerba wrote: > Would not MatConvert do the same, but easier? No, how would MatConvert() know on which communicator to put the result? What would all the other processes get? MatGetSubMatrices() is the right amount of explicit about these things. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Nov 23 07:54:17 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 23 Nov 2011 14:54:17 +0100 Subject: [petsc-users] inspect contents of a big matrix Message-ID: I am a bit stuck trying to inspect some elements in a big matrix. I need to dump a matrix and a vector to a file, that is given, but I need to make sure some of their values are exactly as I expect them. ASCII is reportedly not an option, the matrix be too big. 
HDF5 Viewer complains it does not support either MPIAIJ or SEQAIJ matrices (why not save si, sj and sa arrays?) - it has no problem with VECMPI though. Binary seems to work but I do not know how to peek the values stored there. Any other suggestions? Many thanks! Dominik From jedbrown at mcs.anl.gov Wed Nov 23 07:57:53 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 07:57:53 -0600 Subject: [petsc-users] inspect contents of a big matrix In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 07:54, Dominik Szczerba wrote: > Binary seems to work but I do not know how to peek the values stored there. > bin/matlab/PetscBinaryRead.m or the python equivalent in bin/python/ or MatGetSubMatrix() the part you find interesting and view that or MatGetValues() -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Nov 23 08:06:36 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 23 Nov 2011 15:06:36 +0100 Subject: [petsc-users] get matlab representation si,sj,sa from MATSEQAIJ Message-ID: I have converted my MATMPIAIJ to MATSEQAIJ using MatGetSubMatrices. I need to dump it to a hdf5 file in form of three vectors: si, sj (int) and sa (double), with their sizes equal to the number of nonzeros. I looked through the help index page for AIJ, matlab and similar keywords, but can not find a suitable function. How can I get the 3 needed arrays out or my (sequential) matrix? Many thanks for any hints, Dominik From jedbrown at mcs.anl.gov Wed Nov 23 08:09:31 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 08:09:31 -0600 Subject: [petsc-users] get matlab representation si, sj, sa from MATSEQAIJ In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 08:06, Dominik Szczerba wrote: > I looked through the help index page for AIJ, matlab and similar > keywords, but can not find a suitable function. > As I said in the other mail, just view to a PETSc binary file and read it with bin/matlab/PetscBinaryRead.m -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-frederic at thebault-net.com Wed Nov 23 08:24:06 2011 From: jean-frederic at thebault-net.com (jean-frederic thebault) Date: Wed, 23 Nov 2011 15:24:06 +0100 Subject: [petsc-users] is something wrong with nnz ? Message-ID: Hi, I'm wondering what's wrong in my code. I'm using PETSc to solve a linear system, and willing to use a multi-processor computer. 9 years ago, I used petsc-2.1.3 with success. Few weeks ago, I've update petsc with the 3.1-p8 version and made the necessary changes to work with. No problem. And recently, I've migrate to petsc-3.2-p5. Compilation is OK. But when I do simulation, now, I have some PETSC-ERROR in the log file, even using only one processor (see the out.log file in this email). However, I think I defined MatMPI and VecMPI correctly, according to the doc. The log file tell that something wrong with the nnz which should not be greater than row length (??). I can't see what's wrong. And also, with the previous version of PETSc I've used, the were no problem using -pc_type bjacobi and -sub_pc_type sor, juste to solve linear system doing parallel computations and because SOR is not parallelized. But now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I experiment some convergence problem during my simulation. 
I'd really like to calculate in parallel because of, in further time, with the 2.1.3 petsc version, with a cluster and a myrinet switch, I obtained some interesting results in terme of performance (that time, the more number of processors I used, the faster the calculations were). But now, with the new architecture-PC (2 quad core processors, and no switch of course, all is in the same computer), each time I do the simulation with one more processor, I lost time). Any idea of what's wrong ? I would appreciate some help in that purpose Best Regards. John -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: out.log Type: application/octet-stream Size: 8583 bytes Desc: not available URL: From dominik at itis.ethz.ch Wed Nov 23 08:40:20 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 23 Nov 2011 15:40:20 +0100 Subject: [petsc-users] MatCreateScatter documentation In-Reply-To: References: Message-ID: You are right, thanks! On Wed, Nov 23, 2011 at 2:25 PM, Jed Brown wrote: > On Wed, Nov 23, 2011 at 07:22, Dominik Szczerba > wrote: >> >> Would not MatConvert do the same, but easier? > > No, how would MatConvert() know on which communicator to put the result? > What would all the other processes get? > MatGetSubMatrices() is the right amount of explicit about these things. From dominik at itis.ethz.ch Wed Nov 23 08:54:06 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 23 Nov 2011 15:54:06 +0100 Subject: [petsc-users] get matlab representation si, sj, sa from MATSEQAIJ In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 3:09 PM, Jed Brown wrote:> On Wed, Nov 23, 2011 at 08:06, Dominik Szczerba > wrote:>>>> I looked through the help index page for AIJ, matlab and similar>> keywords, but can not find a suitable function.>> As I said in the other mail, just view to a PETSc binary file and read it> with bin/matlab/PetscBinaryRead.m Thanks for a useful hint. I can not use Matlab, trying with python. I do: In [1]: import PetscBinaryIO In [2]: io = PetscBinaryIO.PetscBinaryIO() In [3]: objects = io.readBinaryFile("Ab.dat") --------------------------------------------------------------------------- NameError Traceback (most recent call last) /home/dsz/data/test-solve/NS/cylinder/steady/ in () /home/dsz/pack/petsc-3.2-p5/bin/pythonscripts/PetscBinaryIO.pyc in decorated_f(self, *args, **kwargs) 88 self._update_dtypes() 89 ---> 90 result = f(self, *args, **kwargs) 91 92 if changed: /home/dsz/pack/petsc-3.2-p5/bin/pythonscripts/PetscBinaryIO.pyc in readBinaryFile(self, fid, mattype) 404 objects.append(self.readIS(fid)) 405 elif objecttype == 'Mat': --> 406 objects.append(self.readMat(fid,mattype)) 407 elif objecttype == 'Bag': 408 raise NotImplementedError('Bag Reader not yet implemented') /home/dsz/pack/petsc-3.2-p5/bin/pythonscripts/PetscBinaryIO.pyc in decorated_f(self, *args, **kwargs) 88 self._update_dtypes() 89 ---> 90 result = f(self, *args, **kwargs) 91 92 if changed: /home/dsz/pack/petsc-3.2-p5/bin/pythonscripts/PetscBinaryIO.pyc in readMat(self, fh, mattype) 332 333 if mattype == 'sparse': --> 334 return readMatSparse(fh) 335 elif mattype == 'dense': 336 return readMatDense(fh) NameError: global name 'readMatSparse' is not defined I see that readMatSparse is a part of the script, so I am not missing any externals. 
My variables: $ echo $PETSC_DIR /home/dsz/pack/petsc-3.2-p5 $ echo $PETSC_ARCH linux-gnu-c-debug I have petsc-3.2-p5/bin/pythonscripts/ in PYTHONPATH. What did I miss? Many thanks Dominik From jedbrown at mcs.anl.gov Wed Nov 23 08:55:27 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 08:55:27 -0600 Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < jean-frederic at thebault-net.com> wrote: > I'm wondering what's wrong in my code. I'm using PETSc to solve a linear > system, and willing to use a multi-processor computer. 9 years ago, I used > petsc-2.1.3 with success. Few weeks ago, I've update petsc with the 3.1-p8 > version and made the necessary changes to work with. No problem. And > recently, I've migrate to petsc-3.2-p5. Compilation is OK. But when I do > simulation, now, I have some PETSC-ERROR in the log file, even using only > one processor (see the out.log file in this email). > You are calling MatSetOption() with the wrong number of arguments. C compilers tell you about this, but Fortran compilers do not. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html > However, I think I defined MatMPI and VecMPI correctly, according to the > doc. The log file tell that something wrong with the nnz which should not > be greater than row length (??). > The log you sent does not say anything about nnz. Fix the call to MatSetOption(). And also, with the previous version of PETSc I've used, the were no problem > using -pc_type bjacobi and -sub_pc_type sor, juste to solve linear system > doing parallel computations and because SOR is not parallelized. But now, > when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I experiment > some convergence problem during my simulation. > These options should do the same thing they used to do. Make sure you are assembling correctly. If it's still confusing, run the old and new code with -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type bjacobi -sub_pc_type sor and send the output of both for us to look at. Also note that you can use -pc_type sor even in parallel. There are options for local iterations and full iterations. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Nov 23 09:07:43 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Nov 2011 09:07:43 -0600 Subject: [petsc-users] get matlab representation si, sj, sa from MATSEQAIJ In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 08:54, Dominik Szczerba wrote: > NameError: global name 'readMatSparse' is not defined Looks like the person who wrote this didn't test the final version. I pushed a fix to the petsc-3.2 repository, it will be in the next patch level. You can pull from http://petsc.cs.iit.edu/petsc/releases/petsc-3.2or apply the attached patch. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: need-self.patch Type: text/x-patch Size: 1184 bytes Desc: not available URL: From bsmith at mcs.anl.gov Wed Nov 23 09:25:33 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Nov 2011 09:25:33 -0600 Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? 
In-Reply-To: References: Message-ID: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> On Nov 23, 2011, at 7:18 AM, Jed Brown wrote: > On Wed, Nov 23, 2011 at 05:43, Dominik Szczerba wrote: > In my procedure considerable time is spent to partition the domain. > When using MUMPS as a solver for my matrix I see the message: > > "Ordering based on METIS" > > This is an ordering to reduce fill in factorization, not to to partition the domain. Last I heard, symbolic factorization was done in serial, which explains why you find it taking a lot of time. The right hand side and solution vectors are also passed on rank 0, which presents another inefficiency/imbalance and memory bottleneck. Talk to the MUMPS developers or use a different package if you don't like these properties. Mumps does now have an option of parallel ordering. Run with -help and look at the options like -mat_mumps_icntl_28","ICNTL(28): use 1 for sequential analysis and ictnl(7) ordering, or 2 for parallel analysis and ictnl(29) ordering -mat_mumps_icntl_29","ICNTL(29): parallel ordering 1 = ptscotch 2 = parmetis I apologize that the options are organized in such a silly way but that is how MUMPS is organized. Barry From gshy2014 at gmail.com Wed Nov 23 09:50:54 2011 From: gshy2014 at gmail.com (Shiyuan) Date: Wed, 23 Nov 2011 09:50:54 -0600 Subject: [petsc-users] KSP with MatNullSpace Message-ID: Hi, I want to solve a singular system with a known nullspace. However, I the KSP solve diverges with KSP_INDEFINTE_PC even if I disable the preconditioning by PCNONE. this is how I setup the system. What did I do wrong? Any possible causes? Thanks. ierr=MatNullSpaceCreate(PETSC_COMM_SELF,PETSC_FALSE,1,&phi,&nsp);CHKERRV(ierr); ierr=KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRV(ierr); ierr=KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRV(ierr); ierr=KSPSetNullSpace(ksp,nsp);CHKERRV(ierr); ierr=KSPSetType(ksp,KSPCG);CHKERRV(ierr); ierr=KSPGetPC(ksp,&prec);CHKERRV(ierr); ierr=PCSetType(prec,PCNONE);CHKERRV(ierr); ierr=KSPSetTolerances(ksp,1e-5,1e-20,1e5,10000);CHKERRV(ierr); ierr=KSPSetFromOptions(ksp);CHKERRV(ierr); ierr=KSPSetUp(ksp);CHKERRV(ierr); Shiyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Nov 23 09:51:46 2011 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 23 Nov 2011 16:51:46 +0100 Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? In-Reply-To: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> References: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> Message-ID: <4EA5026E-863D-40B8-A0DE-835B45E45483@dsic.upv.es> El 23/11/2011, a las 16:25, Barry Smith escribi?: > > On Nov 23, 2011, at 7:18 AM, Jed Brown wrote: > >> On Wed, Nov 23, 2011 at 05:43, Dominik Szczerba wrote: >> In my procedure considerable time is spent to partition the domain. >> When using MUMPS as a solver for my matrix I see the message: >> >> "Ordering based on METIS" >> >> This is an ordering to reduce fill in factorization, not to to partition the domain. Last I heard, symbolic factorization was done in serial, which explains why you find it taking a lot of time. The right hand side and solution vectors are also passed on rank 0, which presents another inefficiency/imbalance and memory bottleneck. Talk to the MUMPS developers or use a different package if you don't like these properties. > > Mumps does now have an option of parallel ordering. 
> > Run with -help and look at the options like > > -mat_mumps_icntl_28","ICNTL(28): use 1 for sequential analysis and ictnl(7) ordering, or 2 for parallel analysis and ictnl(29) ordering > > -mat_mumps_icntl_29","ICNTL(29): parallel ordering 1 = ptscotch 2 = parmetis > > > I apologize that the options are organized in such a silly way but that is how MUMPS is organized. > > Barry > I have a question related to this. We tried today (with petsc-dev) -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 and it works, but -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 1 gives an error of MUMPS complaining that PTScotch was not enabled. Should the combination MUMPS+PTScotch work in petsc-dev? We did --download-mumps --download-ptscotch --download-parmetis Jose From jean-frederic at thebault-net.com Wed Nov 23 09:55:20 2011 From: jean-frederic at thebault-net.com (jean-frederic thebault) Date: Wed, 23 Nov 2011 16:55:20 +0100 Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: Thanks for your response. Sorry about that, to reduce the size of the log file, unfortunetly, I did took out the bad lines... In the out.log I've put in this email, I've make sure there are... Actually, I don't use MatSetOption, but MatSetFromOption instead. However, when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But now, it's getting worse !! (as you could see in the out.log included in this email). Le 23 novembre 2011 15:55, Jed Brown a ?crit : > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < > jean-frederic at thebault-net.com> wrote: > >> I'm wondering what's wrong in my code. I'm using PETSc to solve a linear >> system, and willing to use a multi-processor computer. 9 years ago, I used >> petsc-2.1.3 with success. Few weeks ago, I've update petsc with the 3.1-p8 >> version and made the necessary changes to work with. No problem. And >> recently, I've migrate to petsc-3.2-p5. Compilation is OK. But when I do >> simulation, now, I have some PETSC-ERROR in the log file, even using only >> one processor (see the out.log file in this email). >> > > You are calling MatSetOption() with the wrong number of arguments. C > compilers tell you about this, but Fortran compilers do not. > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html > > >> However, I think I defined MatMPI and VecMPI correctly, according to the >> doc. The log file tell that something wrong with the nnz which should not >> be greater than row length (??). >> > > The log you sent does not say anything about nnz. Fix the call to > MatSetOption(). > > And also, with the previous version of PETSc I've used, the were no >> problem using -pc_type bjacobi and -sub_pc_type sor, juste to solve linear >> system doing parallel computations and because SOR is not parallelized. But >> now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I >> experiment some convergence problem during my simulation. >> > > These options should do the same thing they used to do. Make sure you are > assembling correctly. If it's still confusing, run the old and new code with > > -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type > bjacobi -sub_pc_type sor > > and send the output of both for us to look at. > > Also note that you can use -pc_type sor even in parallel. There are > options for local iterations and full iterations. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: out.log Type: application/octet-stream Size: 5513 bytes Desc: not available URL: From balay at mcs.anl.gov Wed Nov 23 10:12:49 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 23 Nov 2011 10:12:49 -0600 (CST) Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? In-Reply-To: <4EA5026E-863D-40B8-A0DE-835B45E45483@dsic.upv.es> References: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> <4EA5026E-863D-40B8-A0DE-835B45E45483@dsic.upv.es> Message-ID: On Wed, 23 Nov 2011, Jose E. Roman wrote: > I have a question related to this. We tried today (with petsc-dev) > -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 > and it works, but > -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 1 > gives an error of MUMPS complaining that PTScotch was not enabled. > > Should the combination MUMPS+PTScotch work in petsc-dev? We did --download-mumps --download-ptscotch --download-parmetis Looks like mumps.py was not updated correctly with 'scotch -> ptscotch' change. I've pushed a fix - can you retry? [rerun configure ..] Satish From balay at mcs.anl.gov Wed Nov 23 10:17:07 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 23 Nov 2011 10:17:07 -0600 (CST) Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: Since you get a SEGV - I would suggest running the code in the debugger - to check where its crashing. Also run with valgrind to see where problems start.. Mostlikely the issues would be change in prototypes for PETSc functions - between releases. Satish On Wed, 23 Nov 2011, jean-frederic thebault wrote: > Thanks for your response. > > Sorry about that, to reduce the size of the log file, unfortunetly, I did > took out the bad lines... In the out.log I've put in this email, I've make > sure there are... > > Actually, I don't use MatSetOption, but MatSetFromOption instead. However, > when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But now, > it's getting worse !! (as you could see in the out.log included in this > email). > > Le 23 novembre 2011 15:55, Jed Brown a ?crit : > > > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < > > jean-frederic at thebault-net.com> wrote: > > > >> I'm wondering what's wrong in my code. I'm using PETSc to solve a linear > >> system, and willing to use a multi-processor computer. 9 years ago, I used > >> petsc-2.1.3 with success. Few weeks ago, I've update petsc with the 3.1-p8 > >> version and made the necessary changes to work with. No problem. And > >> recently, I've migrate to petsc-3.2-p5. Compilation is OK. But when I do > >> simulation, now, I have some PETSC-ERROR in the log file, even using only > >> one processor (see the out.log file in this email). > >> > > > > You are calling MatSetOption() with the wrong number of arguments. C > > compilers tell you about this, but Fortran compilers do not. > > > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html > > > > > >> However, I think I defined MatMPI and VecMPI correctly, according to the > >> doc. The log file tell that something wrong with the nnz which should not > >> be greater than row length (??). > >> > > > > The log you sent does not say anything about nnz. Fix the call to > > MatSetOption(). 
> > > > And also, with the previous version of PETSc I've used, the were no > >> problem using -pc_type bjacobi and -sub_pc_type sor, juste to solve linear > >> system doing parallel computations and because SOR is not parallelized. But > >> now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I > >> experiment some convergence problem during my simulation. > >> > > > > These options should do the same thing they used to do. Make sure you are > > assembling correctly. If it's still confusing, run the old and new code with > > > > -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type > > bjacobi -sub_pc_type sor > > > > and send the output of both for us to look at. > > > > Also note that you can use -pc_type sor even in parallel. There are > > options for local iterations and full iterations. > > > From jean-frederic at thebault-net.com Wed Nov 23 10:37:06 2011 From: jean-frederic at thebault-net.com (jean-frederic thebault) Date: Wed, 23 Nov 2011 17:37:06 +0100 Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: Well, actually, when I call MatSetFromOption, with the right arguments, the whole simulation is running, but there are some PETSC-ERROR about nnz, and when I comment the calling of MatSetFromOption, the simulation (program) stop at the first calculation. This time, I'm putting in this email the log-file with calling of MatSetFromOption (then with PETSC-ERROR on nnz)... Le 23 novembre 2011 17:17, Satish Balay a ?crit : > Since you get a SEGV - I would suggest running the code in the > debugger - to check where its crashing. > > Also run with valgrind to see where problems start.. Mostlikely the > issues would be change in prototypes for PETSc functions - between > releases. > > Satish > > On Wed, 23 Nov 2011, jean-frederic thebault wrote: > > > Thanks for your response. > > > > Sorry about that, to reduce the size of the log file, unfortunetly, I did > > took out the bad lines... In the out.log I've put in this email, I've > make > > sure there are... > > > > Actually, I don't use MatSetOption, but MatSetFromOption instead. > However, > > when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But > now, > > it's getting worse !! (as you could see in the out.log included in this > > email). > > > > Le 23 novembre 2011 15:55, Jed Brown a ?crit : > > > > > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < > > > jean-frederic at thebault-net.com> wrote: > > > > > >> I'm wondering what's wrong in my code. I'm using PETSc to solve a > linear > > >> system, and willing to use a multi-processor computer. 9 years ago, I > used > > >> petsc-2.1.3 with success. Few weeks ago, I've update petsc with the > 3.1-p8 > > >> version and made the necessary changes to work with. No problem. And > > >> recently, I've migrate to petsc-3.2-p5. Compilation is OK. But when I > do > > >> simulation, now, I have some PETSC-ERROR in the log file, even using > only > > >> one processor (see the out.log file in this email). > > >> > > > > > > You are calling MatSetOption() with the wrong number of arguments. C > > > compilers tell you about this, but Fortran compilers do not. > > > > > > > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html > > > > > > > > >> However, I think I defined MatMPI and VecMPI correctly, according to > the > > >> doc. The log file tell that something wrong with the nnz which should > not > > >> be greater than row length (??). 
> > >> > > > > > > The log you sent does not say anything about nnz. Fix the call to > > > MatSetOption(). > > > > > > And also, with the previous version of PETSc I've used, the were no > > >> problem using -pc_type bjacobi and -sub_pc_type sor, juste to solve > linear > > >> system doing parallel computations and because SOR is not > parallelized. But > > >> now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I > > >> experiment some convergence problem during my simulation. > > >> > > > > > > These options should do the same thing they used to do. Make sure you > are > > > assembling correctly. If it's still confusing, run the old and new > code with > > > > > > -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type > > > bjacobi -sub_pc_type sor > > > > > > and send the output of both for us to look at. > > > > > > Also note that you can use -pc_type sor even in parallel. There are > > > options for local iterations and full iterations. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: out.log Type: application/octet-stream Size: 86530 bytes Desc: not available URL: From jroman at dsic.upv.es Wed Nov 23 10:42:40 2011 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 23 Nov 2011 17:42:40 +0100 Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? In-Reply-To: References: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> <4EA5026E-863D-40B8-A0DE-835B45E45483@dsic.upv.es> Message-ID: El 23/11/2011, a las 17:12, Satish Balay escribi?: > On Wed, 23 Nov 2011, Jose E. Roman wrote: > >> I have a question related to this. We tried today (with petsc-dev) >> -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 >> and it works, but >> -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 1 >> gives an error of MUMPS complaining that PTScotch was not enabled. >> >> Should the combination MUMPS+PTScotch work in petsc-dev? We did --download-mumps --download-ptscotch --download-parmetis > > > Looks like mumps.py was not updated correctly with 'scotch -> ptscotch' change. > > I've pushed a fix - can you retry? [rerun configure ..] > > Satish Yes, now it works. Thanks. Jose From balay at mcs.anl.gov Wed Nov 23 10:44:01 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 23 Nov 2011 10:44:01 -0600 (CST) Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: > [1]PETSC ERROR: d_nnz cannot be less than 0: local row 78 value -750763693! There is something wrong in your code. You'll have to verify the prototypes of all functions - with the petsc-3.2 documentation. And valgrind is an easy way to pinpoint to the problem source. [also a debugger should show you whats going wrong] Satish On Wed, 23 Nov 2011, jean-frederic thebault wrote: > Well, actually, when I call MatSetFromOption, with the right arguments, the > whole simulation is running, but there are some PETSC-ERROR about nnz, and > when I comment the calling of MatSetFromOption, the simulation (program) > stop at the first calculation. This time, I'm putting in this email the > log-file with calling of MatSetFromOption (then with PETSC-ERROR on nnz)... > > Le 23 novembre 2011 17:17, Satish Balay a ?crit : > > > Since you get a SEGV - I would suggest running the code in the > > debugger - to check where its crashing. > > > > Also run with valgrind to see where problems start.. 
Mostlikely the > > issues would be change in prototypes for PETSc functions - between > > releases. > > > > Satish > > > > On Wed, 23 Nov 2011, jean-frederic thebault wrote: > > > > > Thanks for your response. > > > > > > Sorry about that, to reduce the size of the log file, unfortunetly, I did > > > took out the bad lines... In the out.log I've put in this email, I've > > make > > > sure there are... > > > > > > Actually, I don't use MatSetOption, but MatSetFromOption instead. > > However, > > > when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But > > now, > > > it's getting worse !! (as you could see in the out.log included in this > > > email). > > > > > > Le 23 novembre 2011 15:55, Jed Brown a ?crit : > > > > > > > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < > > > > jean-frederic at thebault-net.com> wrote: > > > > > > > >> I'm wondering what's wrong in my code. I'm using PETSc to solve a > > linear > > > >> system, and willing to use a multi-processor computer. 9 years ago, I > > used > > > >> petsc-2.1.3 with success. Few weeks ago, I've update petsc with the > > 3.1-p8 > > > >> version and made the necessary changes to work with. No problem. And > > > >> recently, I've migrate to petsc-3.2-p5. Compilation is OK. But when I > > do > > > >> simulation, now, I have some PETSC-ERROR in the log file, even using > > only > > > >> one processor (see the out.log file in this email). > > > >> > > > > > > > > You are calling MatSetOption() with the wrong number of arguments. C > > > > compilers tell you about this, but Fortran compilers do not. > > > > > > > > > > > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html > > > > > > > > > > > >> However, I think I defined MatMPI and VecMPI correctly, according to > > the > > > >> doc. The log file tell that something wrong with the nnz which should > > not > > > >> be greater than row length (??). > > > >> > > > > > > > > The log you sent does not say anything about nnz. Fix the call to > > > > MatSetOption(). > > > > > > > > And also, with the previous version of PETSc I've used, the were no > > > >> problem using -pc_type bjacobi and -sub_pc_type sor, juste to solve > > linear > > > >> system doing parallel computations and because SOR is not > > parallelized. But > > > >> now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I > > > >> experiment some convergence problem during my simulation. > > > >> > > > > > > > > These options should do the same thing they used to do. Make sure you > > are > > > > assembling correctly. If it's still confusing, run the old and new > > code with > > > > > > > > -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type > > > > bjacobi -sub_pc_type sor > > > > > > > > and send the output of both for us to look at. > > > > > > > > Also note that you can use -pc_type sor even in parallel. There are > > > > options for local iterations and full iterations. > > > > > > > > > > From jean-frederic at thebault-net.com Wed Nov 23 10:55:24 2011 From: jean-frederic at thebault-net.com (jean-frederic thebault) Date: Wed, 23 Nov 2011 17:55:24 +0100 Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: OK, Im not familiar with debugger (!!) but I will try to see something with valgrind... I thougth I have already checked the prototypes of all functions when upgrading with petsc-3.1.p8 and petsc-3.2.p5, but will do that again. Thanks anyway. Best Regards. 
John Le 23 novembre 2011 17:44, Satish Balay a ?crit : > > [1]PETSC ERROR: d_nnz cannot be less than 0: local row 78 value > -750763693! > > There is something wrong in your code. You'll have to verify the > prototypes of all functions - with the petsc-3.2 documentation. > > And valgrind is an easy way to pinpoint to the problem source. [also a > debugger should show you whats going wrong] > > Satish > > > On Wed, 23 Nov 2011, jean-frederic thebault wrote: > > > Well, actually, when I call MatSetFromOption, with the right arguments, > the > > whole simulation is running, but there are some PETSC-ERROR about nnz, > and > > when I comment the calling of MatSetFromOption, the simulation (program) > > stop at the first calculation. This time, I'm putting in this email the > > log-file with calling of MatSetFromOption (then with PETSC-ERROR on > nnz)... > > > > Le 23 novembre 2011 17:17, Satish Balay a ?crit : > > > > > Since you get a SEGV - I would suggest running the code in the > > > debugger - to check where its crashing. > > > > > > Also run with valgrind to see where problems start.. Mostlikely the > > > issues would be change in prototypes for PETSc functions - between > > > releases. > > > > > > Satish > > > > > > On Wed, 23 Nov 2011, jean-frederic thebault wrote: > > > > > > > Thanks for your response. > > > > > > > > Sorry about that, to reduce the size of the log file, unfortunetly, > I did > > > > took out the bad lines... In the out.log I've put in this email, I've > > > make > > > > sure there are... > > > > > > > > Actually, I don't use MatSetOption, but MatSetFromOption instead. > > > However, > > > > when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But > > > now, > > > > it's getting worse !! (as you could see in the out.log included in > this > > > > email). > > > > > > > > Le 23 novembre 2011 15:55, Jed Brown a ?crit > : > > > > > > > > > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < > > > > > jean-frederic at thebault-net.com> wrote: > > > > > > > > > >> I'm wondering what's wrong in my code. I'm using PETSc to solve a > > > linear > > > > >> system, and willing to use a multi-processor computer. 9 years > ago, I > > > used > > > > >> petsc-2.1.3 with success. Few weeks ago, I've update petsc with > the > > > 3.1-p8 > > > > >> version and made the necessary changes to work with. No problem. > And > > > > >> recently, I've migrate to petsc-3.2-p5. Compilation is OK. But > when I > > > do > > > > >> simulation, now, I have some PETSC-ERROR in the log file, even > using > > > only > > > > >> one processor (see the out.log file in this email). > > > > >> > > > > > > > > > > You are calling MatSetOption() with the wrong number of arguments. > C > > > > > compilers tell you about this, but Fortran compilers do not. > > > > > > > > > > > > > > > > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html > > > > > > > > > > > > > > >> However, I think I defined MatMPI and VecMPI correctly, according > to > > > the > > > > >> doc. The log file tell that something wrong with the nnz which > should > > > not > > > > >> be greater than row length (??). > > > > >> > > > > > > > > > > The log you sent does not say anything about nnz. Fix the call to > > > > > MatSetOption(). 
> > > > > > > > > > And also, with the previous version of PETSc I've used, the were > no > > > > >> problem using -pc_type bjacobi and -sub_pc_type sor, juste to > solve > > > linear > > > > >> system doing parallel computations and because SOR is not > > > parallelized. But > > > > >> now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 > rank, I > > > > >> experiment some convergence problem during my simulation. > > > > >> > > > > > > > > > > These options should do the same thing they used to do. Make sure > you > > > are > > > > > assembling correctly. If it's still confusing, run the old and new > > > code with > > > > > > > > > > -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type > > > > > bjacobi -sub_pc_type sor > > > > > > > > > > and send the output of both for us to look at. > > > > > > > > > > Also note that you can use -pc_type sor even in parallel. There are > > > > > options for local iterations and full iterations. > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mperezcerquera at gmail.com Wed Nov 23 11:08:44 2011 From: mperezcerquera at gmail.com (Manuel Ricardo Perez Cerquera) Date: Wed, 23 Nov 2011 18:08:44 +0100 Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: Hi , I had the same problem some days ago, be careful about the KIND size of the variables, when variables KINDS you pass to the functions are different from the KINDS of the dummy variables of the functions, yields in these kind of problems. Cheers ! Manuel. 2011/11/23 jean-frederic thebault > Well, actually, when I call MatSetFromOption, with the right arguments, > the whole simulation is running, but there are some PETSC-ERROR about nnz, > and when I comment the calling of MatSetFromOption, the simulation > (program) stop at the first calculation. This time, I'm putting in this > email the log-file with calling of MatSetFromOption (then with PETSC-ERROR > on nnz)... > > Le 23 novembre 2011 17:17, Satish Balay a ?crit : > > Since you get a SEGV - I would suggest running the code in the >> debugger - to check where its crashing. >> >> Also run with valgrind to see where problems start.. Mostlikely the >> issues would be change in prototypes for PETSc functions - between >> releases. >> >> Satish >> >> On Wed, 23 Nov 2011, jean-frederic thebault wrote: >> >> > Thanks for your response. >> > >> > Sorry about that, to reduce the size of the log file, unfortunetly, I >> did >> > took out the bad lines... In the out.log I've put in this email, I've >> make >> > sure there are... >> > >> > Actually, I don't use MatSetOption, but MatSetFromOption instead. >> However, >> > when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But >> now, >> > it's getting worse !! (as you could see in the out.log included in this >> > email). >> > >> > Le 23 novembre 2011 15:55, Jed Brown a ?crit : >> > >> > > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault < >> > > jean-frederic at thebault-net.com> wrote: >> > > >> > >> I'm wondering what's wrong in my code. I'm using PETSc to solve a >> linear >> > >> system, and willing to use a multi-processor computer. 9 years ago, >> I used >> > >> petsc-2.1.3 with success. Few weeks ago, I've update petsc with the >> 3.1-p8 >> > >> version and made the necessary changes to work with. No problem. And >> > >> recently, I've migrate to petsc-3.2-p5. Compilation is OK. 
But when >> I do >> > >> simulation, now, I have some PETSC-ERROR in the log file, even using >> only >> > >> one processor (see the out.log file in this email). >> > >> >> > > >> > > You are calling MatSetOption() with the wrong number of arguments. C >> > > compilers tell you about this, but Fortran compilers do not. >> > > >> > > >> > > >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html >> > > >> > > >> > >> However, I think I defined MatMPI and VecMPI correctly, according to >> the >> > >> doc. The log file tell that something wrong with the nnz which >> should not >> > >> be greater than row length (??). >> > >> >> > > >> > > The log you sent does not say anything about nnz. Fix the call to >> > > MatSetOption(). >> > > >> > > And also, with the previous version of PETSc I've used, the were no >> > >> problem using -pc_type bjacobi and -sub_pc_type sor, juste to solve >> linear >> > >> system doing parallel computations and because SOR is not >> parallelized. But >> > >> now, when I use -pc_type bjacobi and -sub_pc_type sor, with 3 rank, I >> > >> experiment some convergence problem during my simulation. >> > >> >> > > >> > > These options should do the same thing they used to do. Make sure you >> are >> > > assembling correctly. If it's still confusing, run the old and new >> code with >> > > >> > > -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type >> > > bjacobi -sub_pc_type sor >> > > >> > > and send the output of both for us to look at. >> > > >> > > Also note that you can use -pc_type sor even in parallel. There are >> > > options for local iterations and full iterations. >> > > >> > >> > > -- Eng. Manuel Ricardo Perez Cerquera. MSc. Ph.D student Antenna and EMC Lab (LACE) Istituto Superiore Mario Boella (ISMB) Politecnico di Torino Via Pier Carlo Boggio 61, Torino 10138, Italy Email: manuel.perezcerquera at polito.it Phone: +39 0112276704 Fax: +39 011 2276 299 -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Wed Nov 23 11:13:12 2011 From: u.tabak at tudelft.nl (Umut Tabak) Date: Wed, 23 Nov 2011 18:13:12 +0100 Subject: [petsc-users] is something wrong with nnz ? In-Reply-To: References: Message-ID: <4ECD29A8.40002@tudelft.nl> On 11/23/2011 05:55 PM, jean-frederic thebault wrote: > OK, Im not familiar with debugger (!!) but I will try to see something > with valgrind... To be a more effective programmer, debugging is essential so investing some time in it pays off in the long run quite a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Nov 23 13:26:28 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Nov 2011 13:26:28 -0600 Subject: [petsc-users] KSP with MatNullSpace In-Reply-To: References: Message-ID: Print out the matrix for a small grid then check in Matlab that it is symmetric and has no negative eigenvalues. Barry On Nov 23, 2011, at 9:50 AM, Shiyuan wrote: > Hi, > I want to solve a singular system with a known nullspace. However, I the KSP solve diverges with KSP_INDEFINTE_PC even if I disable the preconditioning by PCNONE. > this is how I setup the system. What did I do wrong? Any possible causes? Thanks. 
> > ierr=MatNullSpaceCreate(PETSC_COMM_SELF,PETSC_FALSE,1,&phi,&nsp);CHKERRV(ierr); > ierr=KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRV(ierr); > ierr=KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRV(ierr); > ierr=KSPSetNullSpace(ksp,nsp);CHKERRV(ierr); > ierr=KSPSetType(ksp,KSPCG);CHKERRV(ierr); > ierr=KSPGetPC(ksp,&prec);CHKERRV(ierr); > ierr=PCSetType(prec,PCNONE);CHKERRV(ierr); > ierr=KSPSetTolerances(ksp,1e-5,1e-20,1e5,10000);CHKERRV(ierr); > ierr=KSPSetFromOptions(ksp);CHKERRV(ierr); > ierr=KSPSetUp(ksp);CHKERRV(ierr); > > Shiyuan From gshy2014 at gmail.com Wed Nov 23 13:31:06 2011 From: gshy2014 at gmail.com (Shiyuan) Date: Wed, 23 Nov 2011 13:31:06 -0600 Subject: [petsc-users] KSP with MatNullSpace In-Reply-To: References: Message-ID: I've check it in Matlab. the matrix A is symmetric and has no negative eigenvalues( only a zero eigen value). I've also checked that the nullspace is correct( norm(A*phi)<1e-11); Thanks. On Wed, Nov 23, 2011 at 1:26 PM, Barry Smith wrote: > > Print out the matrix for a small grid then check in Matlab that it is > symmetric and has no negative eigenvalues. > > Barry > > > On Nov 23, 2011, at 9:50 AM, Shiyuan wrote: > > > Hi, > > I want to solve a singular system with a known nullspace. However, I > the KSP solve diverges with KSP_INDEFINTE_PC even if I disable the > preconditioning by PCNONE. > > this is how I setup the system. What did I do wrong? Any possible > causes? Thanks. > > > > > ierr=MatNullSpaceCreate(PETSC_COMM_SELF,PETSC_FALSE,1,&phi,&nsp);CHKERRV(ierr); > > ierr=KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRV(ierr); > > ierr=KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRV(ierr); > > ierr=KSPSetNullSpace(ksp,nsp);CHKERRV(ierr); > > ierr=KSPSetType(ksp,KSPCG);CHKERRV(ierr); > > ierr=KSPGetPC(ksp,&prec);CHKERRV(ierr); > > ierr=PCSetType(prec,PCNONE);CHKERRV(ierr); > > ierr=KSPSetTolerances(ksp,1e-5,1e-20,1e5,10000);CHKERRV(ierr); > > ierr=KSPSetFromOptions(ksp);CHKERRV(ierr); > > ierr=KSPSetUp(ksp);CHKERRV(ierr); > > > > Shiyuan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ataicher at ices.utexas.edu Wed Nov 23 13:58:18 2011 From: ataicher at ices.utexas.edu (Abraham Taicher) Date: Wed, 23 Nov 2011 13:58:18 -0600 Subject: [petsc-users] What's the point of DMDA_BOUNDARY_GHOSTED? Message-ID: <4ECD505A.6080101@ices.utexas.edu> Hi, I'm trying to add a Dirichlet boundary condition to my finite element code. I want the vector with the boundary data to split across processors. I have a rectangular grid so I could just make 4 different 1D DA's and split them up according to the way the 2D DA is split. Is that what DMDA_BOUNDARY_GHOSTED is for? If not, what is it for? thanks, Abraham Taicher From bsmith at mcs.anl.gov Wed Nov 23 17:00:17 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Nov 2011 17:00:17 -0600 Subject: [petsc-users] KSP with MatNullSpace In-Reply-To: References: Message-ID: <9F0F599F-F0FD-4E60-B601-0C422839CF18@mcs.anl.gov> On Nov 23, 2011, at 1:31 PM, Shiyuan wrote: > I've check it in Matlab. the matrix A is symmetric and has no negative eigenvalues( only a zero eigen value). I've also checked that the nullspace is correct( norm(A*phi)<1e-11); Hmm. You can run with -ksp_view_binary and email to petsc-maint at mcs.anl.gov the resulting file called binaryoutput. Barry > Thanks. > > On Wed, Nov 23, 2011 at 1:26 PM, Barry Smith wrote: > > Print out the matrix for a small grid then check in Matlab that it is symmetric and has no negative eigenvalues. 
> > Barry > > > On Nov 23, 2011, at 9:50 AM, Shiyuan wrote: > > > Hi, > > I want to solve a singular system with a known nullspace. However, I the KSP solve diverges with KSP_INDEFINTE_PC even if I disable the preconditioning by PCNONE. > > this is how I setup the system. What did I do wrong? Any possible causes? Thanks. > > > > ierr=MatNullSpaceCreate(PETSC_COMM_SELF,PETSC_FALSE,1,&phi,&nsp);CHKERRV(ierr); > > ierr=KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRV(ierr); > > ierr=KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRV(ierr); > > ierr=KSPSetNullSpace(ksp,nsp);CHKERRV(ierr); > > ierr=KSPSetType(ksp,KSPCG);CHKERRV(ierr); > > ierr=KSPGetPC(ksp,&prec);CHKERRV(ierr); > > ierr=PCSetType(prec,PCNONE);CHKERRV(ierr); > > ierr=KSPSetTolerances(ksp,1e-5,1e-20,1e5,10000);CHKERRV(ierr); > > ierr=KSPSetFromOptions(ksp);CHKERRV(ierr); > > ierr=KSPSetUp(ksp);CHKERRV(ierr); > > > > Shiyuan > > From bsmith at mcs.anl.gov Wed Nov 23 17:14:13 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Nov 2011 17:14:13 -0600 Subject: [petsc-users] What's the point of DMDA_BOUNDARY_GHOSTED? In-Reply-To: <4ECD505A.6080101@ices.utexas.edu> References: <4ECD505A.6080101@ices.utexas.edu> Message-ID: <2F8BE4E0-A6F2-47A3-97DB-BAC8296781F3@mcs.anl.gov> On Nov 23, 2011, at 1:58 PM, Abraham Taicher wrote: > Hi, > > I'm trying to add a Dirichlet boundary condition to my finite element > code. I want the vector with the boundary data to split across > processors. I have a rectangular grid so I could just make 4 different > 1D DA's and split them up according to the way the 2D DA is split. Is > that what DMDA_BOUNDARY_GHOSTED is for? Yes, it can be used for that purpose. You fill up the "extra" ghost locations with your Dirichlet values and then the stencil computations just use them at the edge of the array. There is really no way to use 1D DAs in this case to store Dirichlet boundary conditions. Just stick the values into the right locations in the ghost locations of the 2D DA. Barry > If not, what is it for? > > thanks, > > Abraham Taicher From xdliang at gmail.com Wed Nov 23 22:26:55 2011 From: xdliang at gmail.com (Xiangdong Liang) Date: Wed, 23 Nov 2011 23:26:55 -0500 Subject: [petsc-users] a simple question about options left Message-ID: Hello everyone, When I compile my program with debug mode and ran it, I got WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! Option left: name:-1 value: -1 I even got the warning if I do not provide any options. What's the option left? However, such warning about the option left is gone if I compile my program in arch-opt mode. Thanks. Xiangdong From bsmith at mcs.anl.gov Wed Nov 23 22:30:24 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Nov 2011 22:30:24 -0600 Subject: [petsc-users] a simple question about options left In-Reply-To: References: Message-ID: <5F9D8904-8394-48A9-AB07-D4717C1AE751@mcs.anl.gov> On Nov 23, 2011, at 10:26 PM, Xiangdong Liang wrote: > Hello everyone, > > When I compile my program with debug mode and ran it, I got > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-1 value: -1 You must have some of your own command line arguments like for example -1 -1 and our argument processing is not smart enough to ignore it. I can reproduce it and will see if there is any reasonable way for us to eliminate the message in this case. It is safe to ignore. > > I even got the warning if I do not provide any options. 
What's the option left? > > However, such warning about the option left is gone if I compile my > program in arch-opt mode. Thanks. We don't print the warning with optimized code. > > Xiangdong From xdliang at gmail.com Wed Nov 23 22:51:36 2011 From: xdliang at gmail.com (Xiangdong Liang) Date: Wed, 23 Nov 2011 23:51:36 -0500 Subject: [petsc-users] a simple question about options left In-Reply-To: <5F9D8904-8394-48A9-AB07-D4717C1AE751@mcs.anl.gov> References: <5F9D8904-8394-48A9-AB07-D4717C1AE751@mcs.anl.gov> Message-ID: Thanks, Barry! Yes, I do have my own command line arguments -1 and 1. Good to know that it's safe to ignore the warning. Xiangdong On Wed, Nov 23, 2011 at 11:30 PM, Barry Smith wrote: > > On Nov 23, 2011, at 10:26 PM, Xiangdong Liang wrote: > >> Hello everyone, >> >> When I compile my program with debug mode and ran it, I got >> >> WARNING! There are options you set that were not used! >> WARNING! could be spelling mistake, etc! >> Option left: name:-1 value: -1 > > ? You must have some of your own command line arguments like for example -1 -1 and our argument processing is not smart enough to ignore it. I can reproduce it and will see if there is any reasonable way for us to eliminate the message in this case. > > ? ?It is safe to ignore. > > >> >> I even got the warning if I do not provide any options. What's the option left? >> >> However, such warning about the option left is gone if I compile my >> program in arch-opt mode. ?Thanks. > > ? We don't print the warning with optimized code. > >> >> Xiangdong > > From dominik at itis.ethz.ch Thu Nov 24 04:49:15 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 24 Nov 2011 11:49:15 +0100 Subject: [petsc-users] Avoid MUMPS ordering for distributed matrices? In-Reply-To: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> References: <5D3580A3-E295-47A8-A058-395B39630F8F@mcs.anl.gov> Message-ID: > ? Mumps does now have an option of parallel ordering. > > ? Run with -help and look at the options like > > -mat_mumps_icntl_28","ICNTL(28): use 1 for sequential analysis and ictnl(7) ordering, or 2 for parallel analysis and ictnl(29) ordering > > -mat_mumps_icntl_29","ICNTL(29): parallel ordering 1 = ptscotch 2 = parmetis Thanks a lot Barry, this was very useful to know. I now see: "Using ParMETIS for parallel ordering." Not that it speeds the ordering up very much... ;) > I apologize that the options are organized in such a silly way but that is how MUMPS is organized. I suggest to add the above two options to http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSOLVERMUMPS.html or just obsolete the list of options, only providing a cross link to MatMumpsSetIcntl() and a reference to the MUMPS User Guide. Regards Dominik > > ? Barry > > > From behzad.baghapour at gmail.com Thu Nov 24 05:00:38 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Thu, 24 Nov 2011 14:30:38 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Wed, Nov 23, 2011 at 4:33 PM, Jed Brown wrote: > On Wed, Nov 23, 2011 at 01:06, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> for( c=0; c> FC->e[c].Q[p] = xx[c*(noe*num)+p]; >> > > You haven't told me about "noe" or "num". Do you mean for this to read > xx[c*tot+p]? 
> noe is the number of equations (equal to number of flow states) and num is the number of shape functions in element (according to the order of accuracy) > > >> >> ierr = VecRestoreArray( x, &xx ); CHKERRQ( ierr ); >> >> interiorFlux( FC->flw, FC->e ); >> faceFlux ( FC->flw, FC->f, FC->e ); >> > > The first of these should set FC->e (if you are adding into it, then you > need to zero it first) and the second should add into it. > I found that my mistake is that I forgot to set upwind effect in my defined field context. The problem was not stable, it is not related to SNES setup. However, I can't find out the difference between "basic" and "basicnonorms" in line search method. In addition, Is it possible to set the number of line-search corrections or is just decided by the solver? Thanks a lot, BehZad -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Nov 24 05:09:43 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Nov 2011 05:09:43 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 5:00 AM, behzad baghapour < behzad.baghapour at gmail.com> wrote: > > > On Wed, Nov 23, 2011 at 4:33 PM, Jed Brown wrote: > >> On Wed, Nov 23, 2011 at 01:06, behzad baghapour < >> behzad.baghapour at gmail.com> wrote: >> >>> for( c=0; c>> FC->e[c].Q[p] = xx[c*(noe*num)+p]; >>> >> >> You haven't told me about "noe" or "num". Do you mean for this to read >> xx[c*tot+p]? >> > > noe is the number of equations (equal to number of flow states) and num is > the number of shape functions in element (according to the order of > accuracy) > >> >> >>> >>> ierr = VecRestoreArray( x, &xx ); CHKERRQ( ierr ); >>> >>> interiorFlux( FC->flw, FC->e ); >>> faceFlux ( FC->flw, FC->f, FC->e ); >>> >> >> The first of these should set FC->e (if you are adding into it, then you >> need to zero it first) and the second should add into it. >> > > I found that my mistake is that I forgot to set upwind effect in my > defined field context. The problem was not stable, it is not related to > SNES setup. > > However, I can't find out the difference between "basic" and > "basicnonorms" in line search method. > Basic takes the full Newton step, but checks for decrease. nonorms just takes it and moves on. > In addition, Is it possible to set the number of line-search corrections > or is just decided by the solver? > You can set a custom line search. I am not sure what you mean by this "number of line search correctons" Matt > Thanks a lot, > BehZad > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From behzad.baghapour at gmail.com Thu Nov 24 05:37:37 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Thu, 24 Nov 2011 15:07:37 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: I meant that to do just one or two step in line search correction and let the solution goes on even if the error reduction may not reach the desired tolerance. It may help to reduce the computational cost especially in first iteration or in almost ill-conditioned situation where the Newton iteration may not holds the stability or does not lead to correct solution and a change in preconditioning is needed to overcome the situation. 
On Thu, Nov 24, 2011 at 2:39 PM, Matthew Knepley wrote: > On Thu, Nov 24, 2011 at 5:00 AM, behzad baghapour < > behzad.baghapour at gmail.com> wrote: > >> >> >> On Wed, Nov 23, 2011 at 4:33 PM, Jed Brown wrote: >> >>> On Wed, Nov 23, 2011 at 01:06, behzad baghapour < >>> behzad.baghapour at gmail.com> wrote: >>> >>>> for( c=0; c>>> FC->e[c].Q[p] = xx[c*(noe*num)+p]; >>>> >>> >>> You haven't told me about "noe" or "num". Do you mean for this to read >>> xx[c*tot+p]? >>> >> >> noe is the number of equations (equal to number of flow states) and num >> is the number of shape functions in element (according to the order of >> accuracy) >> >>> >>> >>>> >>>> ierr = VecRestoreArray( x, &xx ); CHKERRQ( ierr ); >>>> >>>> interiorFlux( FC->flw, FC->e ); >>>> faceFlux ( FC->flw, FC->f, FC->e ); >>>> >>> >>> The first of these should set FC->e (if you are adding into it, then you >>> need to zero it first) and the second should add into it. >>> >> >> I found that my mistake is that I forgot to set upwind effect in my >> defined field context. The problem was not stable, it is not related to >> SNES setup. >> >> However, I can't find out the difference between "basic" and >> "basicnonorms" in line search method. >> > > Basic takes the full Newton step, but checks for decrease. nonorms just > takes it and moves on. > > >> In addition, Is it possible to set the number of line-search corrections >> or is just decided by the solver? >> > > You can set a custom line search. I am not sure what you mean by this > "number of line search correctons" > > Matt > > >> Thanks a lot, >> BehZad >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Nov 24 05:54:43 2011 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Nov 2011 05:54:43 -0600 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 5:37 AM, behzad baghapour < behzad.baghapour at gmail.com> wrote: > I meant that to do just one or two step in line search correction and let > the solution goes on even if the error reduction may not reach the desired > tolerance. It may help to reduce the computational cost especially in first > iteration or in almost ill-conditioned situation where the Newton iteration > may not holds the stability or does not lead to correct solution and a > change in preconditioning is needed to overcome the situation. > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/SNES/SNESSetMaxNonlinearStepFailures.html Matt > On Thu, Nov 24, 2011 at 2:39 PM, Matthew Knepley wrote: > >> On Thu, Nov 24, 2011 at 5:00 AM, behzad baghapour < >> behzad.baghapour at gmail.com> wrote: >> >>> >>> >>> On Wed, Nov 23, 2011 at 4:33 PM, Jed Brown wrote: >>> >>>> On Wed, Nov 23, 2011 at 01:06, behzad baghapour < >>>> behzad.baghapour at gmail.com> wrote: >>>> >>>>> for( c=0; c>>>> FC->e[c].Q[p] = xx[c*(noe*num)+p]; >>>>> >>>> >>>> You haven't told me about "noe" or "num". Do you mean for this to read >>>> xx[c*tot+p]? 
>>>> >>> >>> noe is the number of equations (equal to number of flow states) and num >>> is the number of shape functions in element (according to the order of >>> accuracy) >>> >>>> >>>> >>>>> >>>>> ierr = VecRestoreArray( x, &xx ); CHKERRQ( ierr ); >>>>> >>>>> interiorFlux( FC->flw, FC->e ); >>>>> faceFlux ( FC->flw, FC->f, FC->e ); >>>>> >>>> >>>> The first of these should set FC->e (if you are adding into it, then >>>> you need to zero it first) and the second should add into it. >>>> >>> >>> I found that my mistake is that I forgot to set upwind effect in my >>> defined field context. The problem was not stable, it is not related to >>> SNES setup. >>> >>> However, I can't find out the difference between "basic" and >>> "basicnonorms" in line search method. >>> >> >> Basic takes the full Newton step, but checks for decrease. nonorms just >> takes it and moves on. >> >> >>> In addition, Is it possible to set the number of line-search corrections >>> or is just decided by the solver? >>> >> >> You can set a custom line search. I am not sure what you mean by this >> "number of line search correctons" >> >> Matt >> >> >>> Thanks a lot, >>> BehZad >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Nov 24 07:14:25 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 24 Nov 2011 14:14:25 +0100 Subject: [petsc-users] questions about matrix preallocation Message-ID: In any case, I use KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN). 1) If I provide only approximation for number of zeros per row (d_nz and o_nz) and I overshoot - will the extra memory still be there or is automatically freed? 2) Do I need to MatMPIAIJSetPreallocation() after MatZeroEntries(A)? MatZeroEntries man page says: If the matrix was not preallocated then a default, likely poor preallocation will be set in the matrix, so this should be called after the preallocation phase. A was preallocated during creation, but unless I call MatMPIAIJSetPreallocation after MatZeroEntries, I get a lot of unneded mallocs. Many thanks for any clarifications. Dominik From jedbrown at mcs.anl.gov Thu Nov 24 07:22:28 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 24 Nov 2011 07:22:28 -0600 Subject: [petsc-users] questions about matrix preallocation In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 07:14, Dominik Szczerba wrote: > In any case, I use KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN). > > 1) If I provide only approximation for number of zeros per row (d_nz > and o_nz) and I overshoot - will the extra memory still be there or is > automatically freed? > It will still be there, but the data structure is compacted so that the "holes" are all at the very end of the arrays, so it does not influence performance at all. Reallocating and copying would cause much higher peak memory usage, so it's usually not desirable. > > 2) Do I need to MatMPIAIJSetPreallocation() after MatZeroEntries(A)? 
> No > MatZeroEntries man page says: > > If the matrix was not preallocated then a default, likely poor > preallocation will be set in the matrix, so this should be called > after the preallocation phase. > > A was preallocated during creation, but unless I call > MatMPIAIJSetPreallocation after MatZeroEntries, I get a lot of unneded > mallocs. > That should not happen. Perhaps the type was not set in your original preallocation, but this function does not change allocation. PetscErrorCode MatZeroEntries_SeqAIJ(Mat A) { Mat_SeqAIJ *a = (Mat_SeqAIJ*)A->data; PetscErrorCode ierr; PetscFunctionBegin; ierr = PetscMemzero(a->a,(a->i[A->rmap->n])*sizeof(PetscScalar));CHKERRQ(ierr); a->idiagvalid = PETSC_FALSE; a->ibdiagvalid = PETSC_FALSE; PetscFunctionReturn(0); } -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Nov 24 07:33:51 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 24 Nov 2011 14:33:51 +0100 Subject: [petsc-users] questions about matrix preallocation In-Reply-To: References: Message-ID: >> A was preallocated during creation, but unless I call >> MatMPIAIJSetPreallocation after MatZeroEntries, I get a lot of unneded >> mallocs. > > That should not happen. Perhaps the type was not set in your original > preallocation, but this function does not change allocation. I create A only once as: MatCreateMPIAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, NTOT, NTOT, 500, PETSC_NULL, 500, PETSC_NULL, &A); (with 500 a big overshoot) then I do in a loop: 1. ierr = MatMPIAIJSetPreallocation(A, 500, PETSC_NULL, 500, PETSC_NULL); CHKERRQ(ierr); 2. ierr = MatZeroEntries(A); CHKERRQ(ierr); Line 1. does not matter the first time in the loop, info.mallocs is always 0, but if it is commented out, info.mallocs is non-zero the second time loop is executed. So I am confused, is the preallocation in MatCreateMPIAIJ not sufficient? Does it need to be refreshed? Thanks Dominik From jedbrown at mcs.anl.gov Thu Nov 24 07:36:59 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 24 Nov 2011 07:36:59 -0600 Subject: [petsc-users] questions about matrix preallocation In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 07:33, Dominik Szczerba wrote: > MatCreateMPIAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, NTOT, > NTOT, 500, PETSC_NULL, 500, PETSC_NULL, &A); > > (with 500 a big overshoot) > > then I do in a loop: > > 1. ierr = MatMPIAIJSetPreallocation(A, 500, PETSC_NULL, 500, > PETSC_NULL); CHKERRQ(ierr); > 2. ierr = MatZeroEntries(A); CHKERRQ(ierr); > > Line 1. does not matter the first time in the loop, info.mallocs is > always 0, but if it is commented out, info.mallocs is non-zero the > second time loop is executed. > So I am confused, is the preallocation in MatCreateMPIAIJ not > sufficient? Does it need to be refreshed? > You must be changing the number of nonzeros in each iteration of the loop. I suggest assembling all possible entries (even the ones that happen to be zero) in the first assembly. Then there will be space and you can reuse the data structure. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dominik at itis.ethz.ch Thu Nov 24 07:46:48 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 24 Nov 2011 14:46:48 +0100 Subject: [petsc-users] questions about matrix preallocation In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 2:36 PM, Jed Brown wrote: > On Thu, Nov 24, 2011 at 07:33, Dominik Szczerba > wrote: >> >> MatCreateMPIAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, NTOT, >> NTOT, 500, PETSC_NULL, 500, PETSC_NULL, &A); >> >> (with 500 a big overshoot) >> >> then I do in a loop: >> >> 1. ierr = MatMPIAIJSetPreallocation(A, 500, PETSC_NULL, 500, >> PETSC_NULL); CHKERRQ(ierr); >> 2. ierr = MatZeroEntries(A); CHKERRQ(ierr); >> >> Line 1. does not matter the first time in the loop, info.mallocs is >> always 0, but if it is commented out, info.mallocs is non-zero the >> second time loop is executed. >> So I am confused, is the preallocation in MatCreateMPIAIJ not >> sufficient? Does it need to be refreshed? > > You must be changing the number of nonzeros in each iteration of the loop. Yes, that is true, but 1) within the overshot preallocation margin 2) I do it at each execution of the loop, so I do not see why only the second time there are mallocs. I display mallocs directly before KSPSolve(). > I suggest assembling all possible entries (even the ones that happen to be > zero) in the first assembly. Then there will be space and you can reuse the > data structure. This will require a bigger change in my code. Is always calling MatMPIAIJSetPreallocation a bad (inefficient) alternative? Thanks Dominik From jedbrown at mcs.anl.gov Thu Nov 24 08:18:03 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 24 Nov 2011 08:18:03 -0600 Subject: [petsc-users] questions about matrix preallocation In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 07:46, Dominik Szczerba wrote: > Yes, that is true, but 1) within the overshot preallocation margin 2) > I do it at each execution of the loop, so I do not see why only the > second time there are mallocs. I display mallocs directly before > KSPSolve(). > The matrix is compacted. The implementation (MatSeqAIJSetPreallocation_SeqAIJ) could be changed to analyze the newly prescribed preallocation relative to the old allocation and determine that it can fit within the old storage. The problem here is that someone might specifically preallocate smaller because they want to free the old storage. So maybe we should just recognize an exact match, but what if the user called MatCreateMPIAIJWithSplitArrays() (so they still own the arrays) and they are calling this preallocation routine because they want to reuse the arrays for something else? If we can settle on some understandable semantics for when freeing and reallocating makes sense, then we can put it in. Or you could insert zeros into those locations that might be used later. > > > I suggest assembling all possible entries (even the ones that happen to > be > > zero) in the first assembly. Then there will be space and you can reuse > the > > data structure. > > This will require a bigger change in my code. Is always calling > MatMPIAIJSetPreallocation a bad (inefficient) alternative? > Depends on the availability of memory, but probably not. Always profile. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From behzad.baghapour at gmail.com Thu Nov 24 08:23:14 2011 From: behzad.baghapour at gmail.com (behzad baghapour) Date: Thu, 24 Nov 2011 17:53:14 +0330 Subject: [petsc-users] Get Stuck in SNES In-Reply-To: References: Message-ID: Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Nov 24 08:47:31 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 24 Nov 2011 15:47:31 +0100 Subject: [petsc-users] questions about matrix preallocation In-Reply-To: References: Message-ID: This all was very useful for me to know, thank you. Dominik On Thu, Nov 24, 2011 at 3:18 PM, Jed Brown wrote: > On Thu, Nov 24, 2011 at 07:46, Dominik Szczerba > wrote: >> >> Yes, that is true, but 1) within the overshot preallocation margin 2) >> I do it at each execution of the loop, so I do not see why only the >> second time there are mallocs. I display mallocs directly before >> KSPSolve(). > > The matrix is compacted. The implementation > (MatSeqAIJSetPreallocation_SeqAIJ) could be changed to analyze the newly > prescribed preallocation relative to the old allocation and determine that > it can fit within the old storage. The problem here is that someone might > specifically preallocate smaller because they want to free the old storage. > So maybe we should just recognize an exact match, but what if the user > called MatCreateMPIAIJWithSplitArrays() (so they still own the arrays) and > they are calling this preallocation routine because they want to reuse the > arrays for something else? > If we can settle on some understandable semantics for when freeing and > reallocating makes sense, then we can put it in. > Or you could insert zeros into those locations that might be used later. > >> >> > I suggest assembling all possible entries (even the ones that happen to >> > be >> > zero) in the first assembly. Then there will be space and you can reuse >> > the >> > data structure. >> >> This will require a bigger change in my code. Is always calling >> MatMPIAIJSetPreallocation a bad (inefficient) alternative? > > Depends on the availability of memory, but probably not. Always profile. From balay at mcs.anl.gov Thu Nov 24 09:42:06 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 24 Nov 2011 09:42:06 -0600 (CST) Subject: [petsc-users] [petsc-maint #97642] Configuration with Intel compilers In-Reply-To: References: Message-ID: 1. hypre requires a c++ compiler 2. --with-mpi-dir=/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_intel-12.0_ib --with-cc=icc --with-fc=ifort Its best to use mpicc/mpif90/mpicxx from the mpi install - and not change compilers for mpi. So use only '--with-mpi-dir=/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_intel-12.0_ib' [so that PETSc automatically uses mpicc from the specified mpi-dir] - or use: --with-cc=/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_intel-12.0_ib/bin/mpicc --with-fc=/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_intel-12.0_ib/bin/mpif90 --with-cxx=/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_intel-12.0_ib/bin/mpicxx Satish On Thu, 24 Nov 2011, Mohamad M. Nasr-Azadani wrote: > Hi, > > I am trying to compile petsc with intel compilers. > This is my configuration: > > ./configure > --with-mpi-dir=/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_intel-12.0_ib > --with-cc=icc --with-fc=ifort --download-f-blas-lapack=1 --with-debugging=0 > --download-hypre=/home/mmnasr/hypre-2.7.0b.tar.gz COPTFLAGS='-O3' > FOPTFLAGS='-O3' > > It does not go through. 
> I have also seen this link > http://www.mcs.anl.gov/petsc/documentation/faq.html#mpi-compilers > and tried --with-mpi-compilers=0. and it did not work either. > The farthest I could get was when I defined --with-cc=icc --with-fc=ifort. > But still it stops at > > > TESTING: CxxMPICheck from > config.packages.MPI(/home/mmnasr/petsc-3.1-p8/config/BuildSystem/config/packages/MPI.py:618) > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > C++ error! MPI_Finalize() could not be located! > ******************************************************************************* > > > I have attached the configure.log file. > Any suggestions? > > Happy thanksgiving, > Best, > Mohamad > > From balay at mcs.anl.gov Thu Nov 24 10:20:43 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 24 Nov 2011 10:20:43 -0600 (CST) Subject: [petsc-users] a simple question about options left In-Reply-To: References: <5F9D8904-8394-48A9-AB07-D4717C1AE751@mcs.anl.gov> Message-ID: You can use PetscOptionsGetInt() for your own command line arguments aswell.. Satish On Wed, 23 Nov 2011, Xiangdong Liang wrote: > Thanks, Barry! Yes, I do have my own command line arguments -1 and 1. > Good to know that it's safe to ignore the warning. > > Xiangdong > > On Wed, Nov 23, 2011 at 11:30 PM, Barry Smith wrote: > > > > On Nov 23, 2011, at 10:26 PM, Xiangdong Liang wrote: > > > >> Hello everyone, > >> > >> When I compile my program with debug mode and ran it, I got > >> > >> WARNING! There are options you set that were not used! > >> WARNING! could be spelling mistake, etc! > >> Option left: name:-1 value: -1 > > > > ? You must have some of your own command line arguments like for example -1 -1 and our argument processing is not smart enough to ignore it. I can reproduce it and will see if there is any reasonable way for us to eliminate the message in this case. > > > > ? ?It is safe to ignore. > > > > > >> > >> I even got the warning if I do not provide any options. What's the option left? > >> > >> However, such warning about the option left is gone if I compile my > >> program in arch-opt mode. ?Thanks. > > > > ? We don't print the warning with optimized code. > > > >> > >> Xiangdong > > > > > From iamkyungjoo at gmail.com Thu Nov 24 11:59:46 2011 From: iamkyungjoo at gmail.com (Kyungjoo Kim) Date: Thu, 24 Nov 2011 11:59:46 -0600 Subject: [petsc-users] Question about preallocation on MPIAIJ Message-ID: <8261BC8B-DD5B-4BB6-B035-A15715DEA339@gmail.com> Dear Petsc experts I have a elemental matrices that are not assembled yet, but those matrices have bijection map from local to global. So far I preallocate memory large enough to cache those unassembled matrices with expecting that Petsc manages the assembly efficiently. There are two cases: 1) Preallocation is large enough for the assembled matrices for each processor 2) More memory space needs for the additional member elements from assembly procedure. This problem is easy to manage by sharing assembly information only. But I am wondering how efficient Petsc assembly procedure is made. Even though I can assembly the matrices by scattering necessary parts to others without using Petsc API, I am not very sure that I am doing correctly with respect to communication cost. And possibly Petsc assembly does same way as I do. ( So far both cases take more time than I expect ). 
FYI: Unlike the example in the Petsc, the node index in the unstructured grid is highly irregular. And the partition of mesh is already defined. Any suggestion or example or reference ? Thank you. Kyungjoo From bsmith at mcs.anl.gov Thu Nov 24 12:25:15 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 24 Nov 2011 12:25:15 -0600 Subject: [petsc-users] Question about preallocation on MPIAIJ In-Reply-To: <8261BC8B-DD5B-4BB6-B035-A15715DEA339@gmail.com> References: <8261BC8B-DD5B-4BB6-B035-A15715DEA339@gmail.com> Message-ID: <6410CE0D-289C-4A7E-968A-24CE7E404492@mcs.anl.gov> If you can call MatMPIAIJSetPreallocation() with the correct preallocation amounts then your approach of storing ALL the element stiffnesses before putting them into the PETSc matrices will require MUCH MUCH more memory then simply calling MatSetValuesLocal() as soon as each element stiffness is computed and your approach will not run faster. Barry On Nov 24, 2011, at 11:59 AM, Kyungjoo Kim wrote: > Dear Petsc experts > > > I have a elemental matrices that are not assembled yet, but those matrices have bijection map from local to global. > So far I preallocate memory large enough to cache those unassembled matrices with expecting that Petsc manages the assembly efficiently. > > There are two cases: > > 1) Preallocation is large enough for the assembled matrices for each processor > 2) More memory space needs for the additional member elements from assembly procedure. > > This problem is easy to manage by sharing assembly information only. > > But I am wondering how efficient Petsc assembly procedure is made. > Even though I can assembly the matrices by scattering necessary parts to others without using Petsc API, I am not very sure that I am doing correctly with respect to communication cost. And possibly Petsc assembly does same way as I do. ( So far both cases take more time than I expect ). > > > FYI: > Unlike the example in the Petsc, the node index in the unstructured grid is highly irregular. And the partition of mesh is already defined. > > Any suggestion or example or reference ? > > > > Thank you. > > > > > Kyungjoo From dominik at itis.ethz.ch Fri Nov 25 02:51:11 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 25 Nov 2011 09:51:11 +0100 Subject: [petsc-users] Question on MatView and VecView Message-ID: Do matrices and vectors saved to a file with MatView and VecView store only their values or also metadata like local sizes, non-zero structure, nnz preallocation, ...? Thanks, Dominik From dominik at itis.ethz.ch Fri Nov 25 05:45:20 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 25 Nov 2011 12:45:20 +0100 Subject: [petsc-users] problems using MUMPS as a linear solver Message-ID: I am using MUMPS as a linear solver in my custom non-linear procedure (basically a loop). I specify the following options: -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 3 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 -ksp_monitor_true_residual -ksp_converged_reason -ksp_view I have noticed, that this does not actually do what I expect: I just want to unconditionally solve my system of equations with MUMPS, but I see that it actually is only a preconditioner. Gmres is in charge, performing iterations. I tried -ksp_max_it 1 but then I get divergence because residue reduction is somehow very poor (?), which does not make sense, because I expect the system be solved down to the epsilon. So is it somehow possible to just solve my Ax=b with MUMPS? 
Regards, Dominik From dominik at itis.ethz.ch Fri Nov 25 06:22:16 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 25 Nov 2011 13:22:16 +0100 Subject: [petsc-users] problems using MUMPS as a linear solver In-Reply-To: References: Message-ID: I have figured out that I need -ksp_type preonly to get rid of fgmres. Petsc is very flexible! Thanks Dominik On Fri, Nov 25, 2011 at 12:45 PM, Dominik Szczerba wrote: > I am using MUMPS as a linear solver in my custom non-linear procedure > (basically a loop). > I specify the following options: > > -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 3 > -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 -ksp_monitor_true_residual > -ksp_converged_reason -ksp_view > > I have noticed, that this does not actually do what I expect: I just > want to unconditionally solve my system of equations with MUMPS, but I > see that it actually is only a preconditioner. Gmres is in charge, > performing iterations. I tried -ksp_max_it 1 but then I get divergence > because residue reduction is somehow very poor (?), which does not > make sense, because I expect the system be solved down to the epsilon. > > So is it somehow possible to just solve my Ax=b with MUMPS? > > Regards, > Dominik > From jedbrown at mcs.anl.gov Fri Nov 25 08:30:12 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 25 Nov 2011 08:30:12 -0600 Subject: [petsc-users] Question on MatView and VecView In-Reply-To: References: Message-ID: On Fri, Nov 25, 2011 at 02:51, Dominik Szczerba wrote: > Do matrices and vectors saved to a file with MatView and VecView store > only their values or also metadata like local sizes, non-zero > structure, nnz preallocation, ...? > It stores size and number of nonzeros per row, but not "local sizes". You can read in on a different number of processes. The format is described in the man pages: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatLoad.html http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Vec/VecLoad.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Nov 25 08:35:47 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 25 Nov 2011 08:35:47 -0600 Subject: [petsc-users] problems using MUMPS as a linear solver In-Reply-To: References: Message-ID: On Fri, Nov 25, 2011 at 05:45, Dominik Szczerba wrote: > I tried -ksp_max_it 1 but then I get divergence > because residue reduction is somehow very poor (?), which does not > make sense, because I expect the system be solved down to the epsilon. > This can happen if the linear system is very ill conditioned. You can run the direct solve inside -ksp_type richardson for a more classical "iterative refinement". Using a Krylov method is generally better. Because of the way GMRES preconditioning works, converging after one iteration still means two preconditioner applications (so much cheaper than factorization that you probably don't notice it). In contrast, one iteration of FGMRES only does one preconditioner application. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Fri Nov 25 15:11:33 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 25 Nov 2011 22:11:33 +0100 Subject: [petsc-users] problems using MUMPS as a linear solver In-Reply-To: References: Message-ID: > This can happen if the linear system is very ill conditioned. 
You can run > the direct solve inside -ksp_type richardson for a more classical "iterative > refinement". Using a Krylov method is generally better. Because of the way > GMRES preconditioning works, converging after one iteration still means two > preconditioner applications (so much cheaper than factorization that you > probably don't notice it). In contrast, one iteration of FGMRES only does > one preconditioner application. Yes, the system is badly conditioned. But am I right with my (experimental) finding that the ultimate way to go is -ksp_type preonly? Will it do what I want: just solve my system exactly once using MUMPS and not Krylov? Thanks Dominik From jedbrown at mcs.anl.gov Fri Nov 25 15:24:51 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 25 Nov 2011 15:24:51 -0600 Subject: [petsc-users] problems using MUMPS as a linear solver In-Reply-To: References: Message-ID: On Fri, Nov 25, 2011 at 15:11, Dominik Szczerba wrote: > Yes, the system is badly conditioned. But am I right with my > (experimental) finding that the ultimate way to go is -ksp_type > preonly? Will it do what I want: just solve my system exactly once > using MUMPS and not Krylov? > Direct solvers are not immune to numerical stability issues. -ksp_type preonly does not check whether the system has been solved, it just uses whatever the preconditioner (direct solve in this case) returned. Putting it inside a Krylov method generally makes it more robust. If the residual is not actually small, you should try different MUMPS flags (consult their user's manual) or try a different solver. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Fri Nov 25 15:48:28 2011 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 25 Nov 2011 22:48:28 +0100 Subject: [petsc-users] problems using MUMPS as a linear solver In-Reply-To: References: Message-ID: > Direct solvers are not immune to numerical stability issues. -ksp_type > preonly does not check whether the system has been solved, it just uses > whatever the preconditioner (direct solve in this case) returned. Putting it > inside a Krylov method generally makes it more robust. This is very useful to know. I now can manage to converge with -ksp_type fgmres and -ksp_max_it 1. It does not seem to perform any slower than -ksp_type preonly. Thanks, Dominik From ataicher at ices.utexas.edu Sat Nov 26 17:07:53 2011 From: ataicher at ices.utexas.edu (Abraham Taicher) Date: Sat, 26 Nov 2011 17:07:53 -0600 Subject: [petsc-users] What's the point of DMDA_BOUNDARY_GHOSTED? In-Reply-To: <2F8BE4E0-A6F2-47A3-97DB-BAC8296781F3@mcs.anl.gov> References: <4ECD505A.6080101@ices.utexas.edu> <2F8BE4E0-A6F2-47A3-97DB-BAC8296781F3@mcs.anl.gov> Message-ID: <4ED17149.6060705@ices.utexas.edu> Hi, It seems like you're implying that there are better ways to include Dirichlet (and Neumann) data that is distributed according to the DMDA object. How would you include Dirichlet BC? It would be really helpful to see an implementation of ghosted boundary. I could not find one on the website. Is there one out there? Thanks, Abraham Taicher On 11/23/2011 05:14 PM, Barry Smith wrote: > On Nov 23, 2011, at 1:58 PM, Abraham Taicher wrote: > >> Hi, >> >> I'm trying to add a Dirichlet boundary condition to my finite element >> code. I want the vector with the boundary data to split across >> processors. I have a rectangular grid so I could just make 4 different >> 1D DA's and split them up according to the way the 2D DA is split. 
Is >> that what DMDA_BOUNDARY_GHOSTED is for? > Yes, it can be used for that purpose. You fill up the "extra" ghost locations with your Dirichlet values and then the stencil computations just use them at the edge of the array. > > There is really no way to use 1D DAs in this case to store Dirichlet boundary conditions. Just stick the values into the right locations in the ghost locations of the 2D DA. > > Barry > > >> If not, what is it for? >> >> thanks, >> >> Abraham Taicher From rongliang.chan at gmail.com Sun Nov 27 12:31:36 2011 From: rongliang.chan at gmail.com (Rongliang Chen) Date: Sun, 27 Nov 2011 11:31:36 -0700 Subject: [petsc-users] ccgraph.c error Message-ID: Hello, I got the following error message when I run my code. Does anyone know what may be the problem? Thanks. Best, Rongliang --------------------------------------------------------------------------------------------------- ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == idxsum(nedges, cadjwgt) 0 91 89 91 0 [850]PETSC ERROR: ------------------------------------------------------------------------ [850]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [850]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [850]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[850]PETSCERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [850]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [850]PETSC ERROR: to get more information on the crash. [850]PETSC ERROR: --------------------- Error Message ------------------------------------ [850]PETSC ERROR: Signal received! [850]PETSC ERROR: ------------------------------------------------------------------------ [850]PETSC ERROR: Petsc Release Version 3.2.0, Patch 4, Sun Oct 23 12:23:18 CDT 2011 [850]PETSC ERROR: See docs/changes/index.html for recent updates. [850]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [850]PETSC ERROR: See docs/index.html for manual pages. [850]PETSC ERROR: ------------------------------------------------------------------------ [850]PETSC ERROR: ./joab on a Janus-nod named node1338 by ronglian Sat Nov 26 21:44:16 2011 [850]PETSC ERROR: Libraries linked from /projects/ronglian/soft/petsc-3.2-p4/Janus-nodebug/lib [850]PETSC ERROR: Configure run at Tue Oct 25 19:27:52 2011 [850]PETSC ERROR: Configure options --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 --with-mpi-shared=1 --known-mpi-shared-libraries=0 --with-debugging=0 --download-f-blas-lapack=1 --download-parmetis=1 --download-superlu_dist=1 --download-superlu=1 --download-scalapack=1 --download-blacs=1 --download-mumps=1 --download-hypre=1 [850]PETSC ERROR: ------------------------------------------------------------------------ [850]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 850 in communicator MPI_COMM_WORLD with errorcode 59. 
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[Ranks 986, 1409, 1233, 1026, and 1326 printed the same ccgraph.c assertion failure and the same SEGV error block as rank 850.]
[1326]PETSC ERROR: ------------------------------------------------------------------------ [1326]PETSC ERROR: ./joab on a Janus-nod named node1276 by ronglian Sat Nov 26 21:44:16 2011 [1326]PETSC ERROR: Libraries linked from /projects/ronglian/soft/petsc-3.2-p4/Janus-nodebug/lib [1326]PETSC ERROR: Configure run at Tue Oct 25 19:27:52 2011 [1326]PETSC ERROR: Configure options --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 --with-mpi-shared=1 --known-mpi-shared-libraries=0 --with-debugging=0 --download-f-blas-lapack=1 --download-parmetis=1 --download-superlu_dist=1 --download-superlu=1 --download-scalapack=1 --download-blacs=1 --download-mumps=1 --download-hypre=1 [1326]PETSC ERROR: ------------------------------------------------------------------------ [1326]PETSC ERROR: User provided function() line 0 in unknown directory unknown file ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == idxsum(nedges, cadjwgt) 0 91 89 91 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Nov 27 12:35:43 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 27 Nov 2011 12:35:43 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: On Sun, Nov 27, 2011 at 12:31, Rongliang Chen wrote: > ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == > idxsum(nedges, cadjwgt) This is part of the graph library Metis. Perhaps an invalid graph was passed to Metis? Without a stack trace (you can configure -with-debugging=1, the default), we don't even know whether this error was reached through PETSc, through your code, or through another library. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmsussman at gmail.com Sun Nov 27 13:21:13 2011 From: mmsussman at gmail.com (Mike Sussman) Date: Sun, 27 Nov 2011 14:21:13 -0500 Subject: [petsc-users] TS snes failure In-Reply-To: References: Message-ID: <1322421673.3615.14.camel@ozhp> Hello, Recently I ran a very simple problem with a known solution using the DAE form in TS. When it got the wrong answer, I looked carefully at its progress and discovered that the nonlinear (snes) solves were failing because of my incorrect Jacobian. BUT these failures were not reported by TS, and there was no indication that the solution was suspect. I tried setting -ts_max_snes_failures 1, but the calculation still proceeded without halting. Am I missing some setting? I am using 3.2p5 from Fortran. From knepley at gmail.com Sun Nov 27 13:23:25 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Nov 2011 13:23:25 -0600 Subject: [petsc-users] TS snes failure In-Reply-To: <1322421673.3615.14.camel@ozhp> References: <1322421673.3615.14.camel@ozhp> Message-ID: On Sun, Nov 27, 2011 at 1:21 PM, Mike Sussman wrote: > Hello, > > Recently I ran a very simple problem with a known solution using the DAE > form in TS. When it got the wrong answer, I looked carefully at its > progress and discovered that the nonlinear (snes) solves were failing > because of my incorrect Jacobian. 
BUT these failures were not reported > by TS, and there was no indication that the solution was suspect. > > I tried setting -ts_max_snes_failures 1, but the calculation still > proceeded without halting. Am I missing some setting? > > I am using 3.2p5 from Fortran. > 1) Are you calling TSStep() or TSSolve()? 2) Do you check all return codes from PETSc calls? Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rongliang.chan at gmail.com Sun Nov 27 14:04:57 2011 From: rongliang.chan at gmail.com (Rongliang Chen) Date: Sun, 27 Nov 2011 13:04:57 -0700 Subject: [petsc-users] ccgraph.c error Message-ID: Hi Jed, Thank you for your reply. I have no idea if I have passed a invalid graph to Metis. This error just appeared when I run my code with some number of processors. I run my code with the debug version of PETSc and I got the following error message. Best, Rongliang ------------------------------------------------------ ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == idxsum(nedges, cadjwgt) 0 107 106 107 0 [313]PETSC ERROR: ------------------------------------------------------------------------ [313]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [313]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [313]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[313]PETSCERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [313]PETSC ERROR: likely location of problem given in stack below [313]PETSC ERROR: --------------------- Stack Frames ------------------------------------ ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == idxsum(nedges, cadjwgt) 0 107 106 107 0 [313]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [313]PETSC ERROR: INSTEAD the line number of the start of the function [313]PETSC ERROR: is given. 
[313]PETSC ERROR: [313] MatLUFactorNumeric_SuperLU_DIST line 280 src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c [313]PETSC ERROR: [313] MatLUFactorNumeric line 2864 src/mat/interface/matrix.c [313]PETSC ERROR: [313] PCApply line 373 src/ksp/pc/interface/precon.c [313]PETSC ERROR: [313] FGMREScycle line 119 src/ksp/ksp/impls/gmres/fgmres/fgmres.c [313]PETSC ERROR: [313] KSPSolve_FGMRES line 282 src/ksp/ksp/impls/gmres/fgmres/fgmres.c [313]PETSC ERROR: [313] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c [313]PETSC ERROR: [313] SNES_KSPSolve line 3394 src/snes/interface/snes.c [313]PETSC ERROR: [313] SNESSolve_LS line 142 src/snes/impls/ls/ls.c [401]PETSC ERROR: ------------------------------------------------------------------------ [401]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [401]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [401]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[401]PETSCERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [401]PETSC ERROR: likely location of problem given in stack below [401]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [313]PETSC ERROR: [313] SNESSolve line 2647 src/snes/interface/snes.c [313]PETSC ERROR: [313] TimeStep line 1554 /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c [313]PETSC ERROR: [313] SolveSteadyState line 1429 /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c [313]PETSC ERROR: [313] ComputefixedBoundary line 588 /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] 
Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 313 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: [313] SetReferenceElement line 424 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [313]PETSC ERROR: --------------------- Error Message ------------------------------------ [313]PETSC ERROR: Signal received! [313]PETSC ERROR: ------------------------------------------------------------------------ [313]PETSC ERROR: Petsc Release Version 3.2.0, Patch 1, Mon Sep 12 16:01:51 CDT 2011 [313]PETSC ERROR: See docs/changes/index.html for recent updates. [313]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [313]PETSC ERROR: See docs/index.html for manual pages. 
[313]PETSC ERROR: ------------------------------------------------------------------------ [313]PETSC ERROR: ./joab on a Janus-deb named node1739 by ronglian Sun Nov 27 12:50:41 2011 [313]PETSC ERROR: Libraries linked from /home/ronglian/soft/petsc-3.2-p1/Janus-debug/lib [313]PETSC ERROR: Configure run at Tue Sep 13 18:32:21 2011 [313]PETSC ERROR: Configure options --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 --with-mpi-shared-libraries=1 --known-mpi-shared-libraries=0 --download-f-blas-lapack=1 --download-hypre=1 --download-superlu=1 --download-parmetis=1 --download-superlu_dist=1 --download-blacs=1 --download-scalapack=1 --with-debugging=1 [313]PETSC ERROR: ------------------------------------------------------------------------ [313]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [401]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [401]PETSC ERROR: INSTEAD the line number of the start of the function [401]PETSC ERROR: is given. [401]PETSC ERROR: [401] MatLUFactorNumeric_SuperLU_DIST line 280 src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c [401]PETSC ERROR: [401] MatLUFactorNumeric line 2864 src/mat/interface/matrix.c [401]PETSC ERROR: [401] PCApply line 373 src/ksp/pc/interface/precon.c [401]PETSC ERROR: [401] FGMREScycle line 119 src/ksp/ksp/impls/gmres/fgmres/fgmres.c [401]PETSC ERROR: [401] KSPSolve_FGMRES line 282 src/ksp/ksp/impls/gmres/fgmres/fgmres.c [401]PETSC ERROR: [401] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c [401]PETSC ERROR: [401] SNES_KSPSolve line 3394 src/snes/interface/snes.c [401]PETSC ERROR: [401] SNESSolve_LS line 142 src/snes/impls/ls/ls.c [401]PETSC ERROR: [401] SNESSolve line 2647 src/snes/interface/snes.c [401]PETSC ERROR: [401] TimeStep line 1554 /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c [401]PETSC ERROR: [401] SolveSteadyState line 1429 /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c [401]PETSC ERROR: [401] ComputefixedBoundary line 588 /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [401]PETSC ERROR: [401] Elements line 290 
[Ranks 401, 511, 404, and 371 printed the same ccgraph.c assertion failure and the same stack trace as rank 313.]
/home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] Elements line 290 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c [371]PETSC ERROR: [371] SetReferenceElement line 424 /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > This is part of the graph library Metis. Perhaps an invalid graph was > passed to Metis? Without a stack trace (you can configure > -with-debugging=1, the default), we don't even know whether this error was > reached through PETSc, through your code, or through another library. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111127/c5040628/attachment.htm > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 35, Issue 84 > ******************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Nov 27 14:11:31 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Nov 2011 14:11:31 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: On Sun, Nov 27, 2011 at 2:04 PM, Rongliang Chen wrote: > Hi Jed, > > Thank you for your reply. > I have no idea if I have passed a invalid graph to Metis. > This error just appeared when I run my code with some number of > processors. > I run my code with the debug version of PETSc and I got the following > error message. > This is a SuperLU problem. Please send configure.log to petsc-maint at mcs.anl.gov Jed, could this be the mismatch between SuperLU and the new ParMetis? 
Matt > Best, > Rongliang > > ------------------------------------------------------ > ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == > idxsum(nedges, cadjwgt) > 0 107 106 107 0 > [313]PETSC ERROR: > ------------------------------------------------------------------------ > [313]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [313]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [313]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[313]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [313]PETSC ERROR: likely location of problem given in stack below > [313]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == > idxsum(nedges, cadjwgt) > 0 107 106 107 0 > [313]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [313]PETSC ERROR: INSTEAD the line number of the start of the > function > [313]PETSC ERROR: is given. > [313]PETSC ERROR: [313] MatLUFactorNumeric_SuperLU_DIST line 280 > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > [313]PETSC ERROR: [313] MatLUFactorNumeric line 2864 > src/mat/interface/matrix.c > [313]PETSC ERROR: [313] PCApply line 373 src/ksp/pc/interface/precon.c > [313]PETSC ERROR: [313] FGMREScycle line 119 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [313]PETSC ERROR: [313] KSPSolve_FGMRES line 282 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [313]PETSC ERROR: [313] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c > [313]PETSC ERROR: [313] SNES_KSPSolve line 3394 src/snes/interface/snes.c > [313]PETSC ERROR: [313] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [401]PETSC ERROR: > ------------------------------------------------------------------------ > [401]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [401]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [401]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[401]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [401]PETSC ERROR: likely location of problem given in stack below > [401]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [313]PETSC ERROR: [313] SNESSolve line 2647 src/snes/interface/snes.c > [313]PETSC ERROR: [313] TimeStep line 1554 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [313]PETSC ERROR: [313] SolveSteadyState line 1429 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [313]PETSC ERROR: [313] ComputefixedBoundary line 588 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > 
[313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 313 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. 
> You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: [313] SetReferenceElement line 424 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [313]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [313]PETSC ERROR: Signal received! > [313]PETSC ERROR: > ------------------------------------------------------------------------ > [313]PETSC ERROR: Petsc Release Version 3.2.0, Patch 1, Mon Sep 12 > 16:01:51 CDT 2011 > [313]PETSC ERROR: See docs/changes/index.html for recent updates. > [313]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [313]PETSC ERROR: See docs/index.html for manual pages. > [313]PETSC ERROR: > ------------------------------------------------------------------------ > [313]PETSC ERROR: ./joab on a Janus-deb named node1739 by ronglian Sun Nov > 27 12:50:41 2011 > [313]PETSC ERROR: Libraries linked from > /home/ronglian/soft/petsc-3.2-p1/Janus-debug/lib > [313]PETSC ERROR: Configure run at Tue Sep 13 18:32:21 2011 > [313]PETSC ERROR: Configure options --known-level1-dcache-size=32768 > --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 > --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 > --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 > --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 > --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 > --with-mpi-shared-libraries=1 --known-mpi-shared-libraries=0 > --download-f-blas-lapack=1 --download-hypre=1 --download-superlu=1 > --download-parmetis=1 --download-superlu_dist=1 --download-blacs=1 > --download-scalapack=1 --with-debugging=1 > [313]PETSC ERROR: > ------------------------------------------------------------------------ > [313]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > [401]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [401]PETSC ERROR: INSTEAD the line number of the start of the > function > [401]PETSC ERROR: is given. 
> [401]PETSC ERROR: [401] MatLUFactorNumeric_SuperLU_DIST line 280 > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > [401]PETSC ERROR: [401] MatLUFactorNumeric line 2864 > src/mat/interface/matrix.c > [401]PETSC ERROR: [401] PCApply line 373 src/ksp/pc/interface/precon.c > [401]PETSC ERROR: [401] FGMREScycle line 119 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [401]PETSC ERROR: [401] KSPSolve_FGMRES line 282 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [401]PETSC ERROR: [401] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c > [401]PETSC ERROR: [401] SNES_KSPSolve line 3394 src/snes/interface/snes.c > [401]PETSC ERROR: [401] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [401]PETSC ERROR: [401] SNESSolve line 2647 src/snes/interface/snes.c > [401]PETSC ERROR: [401] TimeStep line 1554 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [401]PETSC ERROR: [401] SolveSteadyState line 1429 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [401]PETSC ERROR: [401] ComputefixedBoundary line 588 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC 
ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: [401] SetReferenceElement line 424 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [401]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [401]PETSC ERROR: Signal received! > [401]PETSC ERROR: > ------------------------------------------------------------------------ > [401]PETSC ERROR: Petsc Release Version 3.2.0, Patch 1, Mon Sep 12 > 16:01:51 CDT 2011 > [401]PETSC ERROR: See docs/changes/index.html for recent updates. > [401]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [401]PETSC ERROR: See docs/index.html for manual pages. 
> [401]PETSC ERROR: > ------------------------------------------------------------------------ > [401]PETSC ERROR: ./joab on a Janus-deb named node1708 by ronglian Sun Nov > 27 12:50:41 2011 > [401]PETSC ERROR: Libraries linked from > /home/ronglian/soft/petsc-3.2-p1/Janus-debug/lib > [401]PETSC ERROR: Configure run at Tue Sep 13 18:32:21 2011 > [401]PETSC ERROR: Configure options --known-level1-dcache-size=32768 > --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 > --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 > --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 > --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 > --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 > --with-mpi-shared-libraries=1 --known-mpi-shared-libraries=0 > --download-f-blas-lapack=1 --download-hypre=1 --download-superlu=1 > --download-parmetis=1 --download-superlu_dist=1 --download-blacs=1 > --download-scalapack=1 --with-debugging=1 > [401]PETSC ERROR: > ------------------------------------------------------------------------ > [401]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == > idxsum(nedges, cadjwgt) > 0 107 106 107 0 > [511]PETSC ERROR: > ------------------------------------------------------------------------ > [511]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [511]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [511]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[511]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [511]PETSC ERROR: likely location of problem given in stack below > [511]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [511]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [511]PETSC ERROR: INSTEAD the line number of the start of the > function > [511]PETSC ERROR: is given. 
> [511]PETSC ERROR: [511] MatLUFactorNumeric_SuperLU_DIST line 280 > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > [511]PETSC ERROR: [511] MatLUFactorNumeric line 2864 > src/mat/interface/matrix.c > [511]PETSC ERROR: [511] PCApply line 373 src/ksp/pc/interface/precon.c > [511]PETSC ERROR: [511] FGMREScycle line 119 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [511]PETSC ERROR: [511] KSPSolve_FGMRES line 282 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [511]PETSC ERROR: [511] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c > [511]PETSC ERROR: [511] SNES_KSPSolve line 3394 src/snes/interface/snes.c > [511]PETSC ERROR: [511] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [511]PETSC ERROR: [511] SNESSolve line 2647 src/snes/interface/snes.c > [511]PETSC ERROR: [511] TimeStep line 1554 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [511]PETSC ERROR: [511] SolveSteadyState line 1429 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [511]PETSC ERROR: [511] ComputefixedBoundary line 588 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC 
ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: [511] SetReferenceElement line 424 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [511]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [511]PETSC ERROR: Signal received! > [511]PETSC ERROR: > ------------------------------------------------------------------------ > [511]PETSC ERROR: Petsc Release Version 3.2.0, Patch 1, Mon Sep 12 > 16:01:51 CDT 2011 > [511]PETSC ERROR: See docs/changes/index.html for recent updates. > [511]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [511]PETSC ERROR: See docs/index.html for manual pages. 
> [511]PETSC ERROR: > ------------------------------------------------------------------------ > [511]PETSC ERROR: ./joab on a Janus-deb named node1376 by ronglian Sun Nov > 27 12:50:41 2011 > [511]PETSC ERROR: Libraries linked from > /home/ronglian/soft/petsc-3.2-p1/Janus-debug/lib > [511]PETSC ERROR: Configure run at Tue Sep 13 18:32:21 2011 > [511]PETSC ERROR: Configure options --known-level1-dcache-size=32768 > --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 > --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 > --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 > --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 > --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 > --with-mpi-shared-libraries=1 --known-mpi-shared-libraries=0 > --download-f-blas-lapack=1 --download-hypre=1 --download-superlu=1 > --download-parmetis=1 --download-superlu_dist=1 --download-blacs=1 > --download-scalapack=1 --with-debugging=1 > [511]PETSC ERROR: > ------------------------------------------------------------------------ > [511]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == > idxsum(nedges, cadjwgt) > 0 107 106 107 0 > [404]PETSC ERROR: > ------------------------------------------------------------------------ > [404]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [404]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [404]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[404]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [404]PETSC ERROR: likely location of problem given in stack below > [404]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [404]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [404]PETSC ERROR: INSTEAD the line number of the start of the > function > [404]PETSC ERROR: is given. 
> [404]PETSC ERROR: [404] MatLUFactorNumeric_SuperLU_DIST line 280 > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > [404]PETSC ERROR: [404] MatLUFactorNumeric line 2864 > src/mat/interface/matrix.c > [404]PETSC ERROR: [404] PCApply line 373 src/ksp/pc/interface/precon.c > [404]PETSC ERROR: [404] FGMREScycle line 119 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [404]PETSC ERROR: [404] KSPSolve_FGMRES line 282 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [404]PETSC ERROR: [404] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c > [404]PETSC ERROR: [404] SNES_KSPSolve line 3394 src/snes/interface/snes.c > [404]PETSC ERROR: [404] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [404]PETSC ERROR: [404] SNESSolve line 2647 src/snes/interface/snes.c > [404]PETSC ERROR: [404] TimeStep line 1554 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [404]PETSC ERROR: [404] SolveSteadyState line 1429 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [404]PETSC ERROR: [404] ComputefixedBoundary line 588 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC 
ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: [404] SetReferenceElement line 424 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [404]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [404]PETSC ERROR: Signal received! > [404]PETSC ERROR: > ------------------------------------------------------------------------ > [404]PETSC ERROR: Petsc Release Version 3.2.0, Patch 1, Mon Sep 12 > 16:01:51 CDT 2011 > [404]PETSC ERROR: See docs/changes/index.html for recent updates. > [404]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [404]PETSC ERROR: See docs/index.html for manual pages. 
> [404]PETSC ERROR: > ------------------------------------------------------------------------ > [404]PETSC ERROR: ./joab on a Janus-deb named node1708 by ronglian Sun Nov > 27 12:50:41 2011 > [404]PETSC ERROR: Libraries linked from > /home/ronglian/soft/petsc-3.2-p1/Janus-debug/lib > [404]PETSC ERROR: Configure run at Tue Sep 13 18:32:21 2011 > [404]PETSC ERROR: Configure options --known-level1-dcache-size=32768 > --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 > --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 > --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 > --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 > --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-batch=1 > --with-mpi-shared-libraries=1 --known-mpi-shared-libraries=0 > --download-f-blas-lapack=1 --download-hypre=1 --download-superlu=1 > --download-parmetis=1 --download-superlu_dist=1 --download-blacs=1 > --download-scalapack=1 --with-debugging=1 > [404]PETSC ERROR: > ------------------------------------------------------------------------ > [404]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > ***ASSERTION failed on line 169 of file ccgraph.c: cadjwgtsum[cnvtxs] == > idxsum(nedges, cadjwgt) > 0 107 106 107 0 > [371]PETSC ERROR: > ------------------------------------------------------------------------ > [371]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [371]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [371]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[371]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [371]PETSC ERROR: likely location of problem given in stack below > [371]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [371]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [371]PETSC ERROR: INSTEAD the line number of the start of the > function > [371]PETSC ERROR: is given. 
> [371]PETSC ERROR: [371] MatLUFactorNumeric_SuperLU_DIST line 280 > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > [371]PETSC ERROR: [371] MatLUFactorNumeric line 2864 > src/mat/interface/matrix.c > [371]PETSC ERROR: [371] PCApply line 373 src/ksp/pc/interface/precon.c > [371]PETSC ERROR: [371] FGMREScycle line 119 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [371]PETSC ERROR: [371] KSPSolve_FGMRES line 282 > src/ksp/ksp/impls/gmres/fgmres/fgmres.c > [371]PETSC ERROR: [371] KSPSolve line 331 src/ksp/ksp/interface/itfunc.c > [371]PETSC ERROR: [371] SNES_KSPSolve line 3394 src/snes/interface/snes.c > [371]PETSC ERROR: [371] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [371]PETSC ERROR: [371] SNESSolve line 2647 src/snes/interface/snes.c > [371]PETSC ERROR: [371] TimeStep line 1554 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [371]PETSC ERROR: [371] SolveSteadyState line 1429 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [371]PETSC ERROR: [371] ComputefixedBoundary line 588 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/joab.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC 
ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] Elements line 290 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > [371]PETSC ERROR: [371] SetReferenceElement line 424 > /home/rlchen/rlchen/soft/fixedmesh/code_changed/setelement.c > > >> This is part of the graph library Metis. Perhaps an invalid graph was >> passed to Metis? Without a stack trace (you can configure >> -with-debugging=1, the default), we don't even know whether this error was >> reached through PETSc, through your code, or through another library. >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: < >> http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111127/c5040628/attachment.htm >> > >> >> ------------------------------ >> >> _______________________________________________ >> petsc-users mailing list >> petsc-users at mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users >> >> >> End of petsc-users Digest, Vol 35, Issue 84 >> ******************************************* >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Nov 27 14:14:20 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 27 Nov 2011 14:14:20 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: On Sun, Nov 27, 2011 at 14:11, Matthew Knepley wrote: > This is a SuperLU problem. Please send configure.log to > petsc-maint at mcs.anl.gov > > Jed, could this be the mismatch between SuperLU and the new ParMetis? > Perhaps, but this looks like petsc-3.2 which was released before the latest parmetis. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean at mcs.anl.gov Sun Nov 27 14:14:36 2011 From: sean at mcs.anl.gov (Sean Farley) Date: Sun, 27 Nov 2011 14:14:36 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: > > Jed, could this be the mismatch between SuperLU and the new ParMetis? > Looking at his configure line, I don't see --download-metis, which would mean that he isn't using the new parmetis. Also, he didn't compile --with-64-indices, so he's not using any of the custom patch for SuperLU_DIST. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Nov 27 14:16:29 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Nov 2011 14:16:29 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: On Sun, Nov 27, 2011 at 2:14 PM, Sean Farley wrote: > Jed, could this be the mismatch between SuperLU and the new ParMetis? >> > > Looking at his configure line, I don't see --download-metis, which would > mean that he isn't using the new parmetis. Also, he didn't compile > --with-64-indices, so he's not using any of the custom patch for > SuperLU_DIST. > So there would not be the problem with empty domains? I could see it happening for this many processes. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at mcs.anl.gov Sun Nov 27 14:25:53 2011 From: sean at mcs.anl.gov (Sean Farley) Date: Sun, 27 Nov 2011 14:25:53 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: > > So there would not be the problem with empty domains? I could see it > happening for this many processes. I'm saying that he's using the older parmetis (3.1) and SuperLU_DIST (2.5) (and also Jed was right, this looks like 3.2p1), which as far I remember still have the empty domain problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Nov 27 14:28:27 2011 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 27 Nov 2011 14:28:27 -0600 Subject: [petsc-users] ccgraph.c error In-Reply-To: References: Message-ID: On Sun, Nov 27, 2011 at 2:25 PM, Sean Farley wrote: > So there would not be the problem with empty domains? I could see it >> happening for this many processes. > > > I'm saying that he's using the older parmetis (3.1) and SuperLU_DIST (2.5) > (and also Jed was right, this looks like 3.2p1), which as far I remember > still have the empty domain problem. > Okay, executive summary: There is a problem with the interaction of SuperLU and ParMetis which is fixed in petsc-dev, using the latest releases of both packages. Consider upgrading. Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sussmanm at math.pitt.edu Sun Nov 27 14:55:43 2011 From: sussmanm at math.pitt.edu (Mike Sussman) Date: Sun, 27 Nov 2011 15:55:43 -0500 Subject: [petsc-users] TS snes failure In-Reply-To: References: Message-ID: <1322427343.3615.25.camel@ozhp> Thank you, Matt, for your response. I am calling TSSolve(), and I am checking all return codes, using CHKERRQ. 
The case I am running is very similar to the ts tutorial example ex2f.F, re-formulated as a DAE. I am using TSTHETA. The line starting "Timestep =" is from the MyMonitor function in that example. The line starting "(DAE)" is my line, printed after the TSSolve() has completed. For example, I ran a case with 100 time steps, turning on -snes_converged_reason and -snes_monitor. The final several steps are: 46 SNES Function norm 4.129815386097e+00 47 SNES Function norm 4.129815386096e+00 48 SNES Function norm 4.129815386095e+00 49 SNES Function norm 4.129815386095e+00 50 SNES Function norm 4.129815386095e+00 Nonlinear solve did not converge due to DIVERGED_MAX_IT Timestep = 99,time = 0.839 sec, error [2-norm] = 1.132E+00, error [max-norm] = 1.635E+00 0 SNES Function norm 4.106011479715e+00 1 SNES Function norm 4.105987638450e+00 Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH Timestep = 100,time = 0.847 sec, error [2-norm] = 1.144E+00, error [max-norm] = 1.652E+00 (DAE) Number of time-steps 100 final time 8.4746E-01 solution norm 1.0823E+01 -- Mike Sussman From jedbrown at mcs.anl.gov Sun Nov 27 15:02:37 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 27 Nov 2011 15:02:37 -0600 Subject: [petsc-users] TS snes failure In-Reply-To: <1322427343.3615.25.camel@ozhp> References: <1322427343.3615.25.camel@ozhp> Message-ID: Looks like I fixed this after petsc-3.2 was released. http://petsc.cs.iit.edu/petsc/petsc-dev/rev/9d0d4 On Sun, Nov 27, 2011 at 14:55, Mike Sussman wrote: > Thank you, Matt, for your response. > > I am calling TSSolve(), and I am checking all return codes, using > CHKERRQ. > > The case I am running is very similar to the ts tutorial example ex2f.F, > re-formulated as a DAE. I am using TSTHETA. The line starting > "Timestep =" is from the MyMonitor function in that example. The line > starting "(DAE)" is my line, printed after the TSSolve() has completed. > > For example, I ran a case with 100 time steps, turning on > -snes_converged_reason and -snes_monitor. The final several steps are: > > 46 SNES Function norm 4.129815386097e+00 > 47 SNES Function norm 4.129815386096e+00 > 48 SNES Function norm 4.129815386095e+00 > 49 SNES Function norm 4.129815386095e+00 > 50 SNES Function norm 4.129815386095e+00 > Nonlinear solve did not converge due to DIVERGED_MAX_IT > Timestep = 99,time = 0.839 sec, error [2-norm] = 1.132E+00, error > [max-norm] = 1.635E+00 > 0 SNES Function norm 4.106011479715e+00 > 1 SNES Function norm 4.105987638450e+00 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH > Timestep = 100,time = 0.847 sec, error [2-norm] = 1.144E+00, error > [max-norm] = 1.652E+00 > (DAE) Number of time-steps 100 final time 8.4746E-01 solution norm > 1.0823E+01 > > -- > Mike Sussman > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Sun Nov 27 15:27:12 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 27 Nov 2011 22:27:12 +0100 Subject: [petsc-users] Error compiling code when upgrading from 3.1p8 to 3.2p5 Message-ID: <4ED2AB30.2060801@gmail.com> Hi, I have trouble compiling my Fortran codes when I upgrade PETSc from 3.1p8 to 3.2p5. 
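A minimal sketch, in C and against the petsc-3.2-era API, of how the calling code can detect the failed inner nonlinear solves discussed above instead of relying on the monitor output alone; the names ts, u, ftime and the error code chosen are illustrative, and note that this only inspects the SNES state left after the final step, so per-step handling still needs the petsc-dev change Jed links to:

    TS                  ts;     /* time stepper, created and configured elsewhere */
    Vec                 u;      /* solution vector */
    SNES                snes;
    SNESConvergedReason reason;
    PetscReal           ftime;
    PetscErrorCode      ierr;

    ierr = TSSolve(ts, u, &ftime);CHKERRQ(ierr);                /* petsc-3.2 signature */
    ierr = TSGetSNES(ts, &snes);CHKERRQ(ierr);                  /* inner nonlinear solver */
    ierr = SNESGetConvergedReason(snes, &reason);CHKERRQ(ierr);
    if (reason < 0) {
      SETERRQ1(PETSC_COMM_SELF, PETSC_ERR_CONV_FAILED,
               "Nonlinear solve diverged, SNESConvergedReason = %d", (int)reason);
    }

Negative SNESConvergedReason values (DIVERGED_MAX_IT, DIVERGED_LINE_SEARCH, ...) indicate failure, so a check like this at least turns a silently wrong time step into a hard error.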
My code is something like this: module global_data use nrtype implicit none save #include "finclude/petsc.h90" !grid variables integer :: size_x,size_y,size_z,grid_type !size_x1,size_x2,size_x3,size_y1,size_y2,size_y3 real(8), allocatable :: x(:),y(:),z(:),xu(:),yu(:),zu(:),xv(:),yv(:),zv(:),xw(:),yw(:),zw(:),c_cx(:),cu_cx(:),c_cy(:),cv_cy(:),c_cz(:),cw_cz(:) !solver variables ... I tried after compiling with the new 3.2p5 and got the following error: /opt/openmpi-1.5.3/bin/mpif90 -c -g -debug all -implicitnone -warn unused -fp-stack-check -heap-arrays -ftrapuv -check pointers -O0 -save -w90 -w -w95 -O0 -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include -I/opt/openmpi-1.5.3/include -o global.o global.F90 global.F90(205): error #5082: Syntax error, found IDENTIFIER 'FLGG' when expecting one of: ( % : . = => PetscTruth flgg -----------^ global.F90(205): error #6274: This statement must not appear in the specification part of a module PetscTruth flgg ^ global.F90(207): error #6236: A specification statement cannot appear in the executable section. integer(kind=selected_int_kind(5)) reason ^ global.F90(209): error #6236: A specification statement cannot appear in the executable section. integer(kind=selected_int_kind(10)) i_vec ^ global.F90(213): error #6236: A specification statement cannot appear in the executable section. integer :: myid,num_procs,ksta,kend,ksta_ext,kend_ext,ksta_ext0,ksta2,kend2,kend3 ^ global.F90(215): error #6236: A specification statement cannot appear in the executable section. integer :: ijk_sta_p,ijk_end_p,ijk_sta_m,ijk_end_m,ijk_sta_mx,ijk_end_mx,ijk_sta_my,ijk_end_my,ijk_sta_mz,ijk_end_mz ^ global.F90(217): error #6236: A specification statement cannot appear in the executable section. character(2) :: procs ^ global.F90(205): error #6404: This name does not have a type, and must have an explicit type. [PETSCTRUTH] PetscTruth flgg ^ global.F90(205): error #6404: This name does not have a type, and must have an explicit type. [FLGG] PetscTruth flgg -----------^ global.F90(229): error #6404: This name does not have a type, and must have an explicit type. [KSTA] ksta=myid*(size_z/num_procs)+1; kend=(myid+1)*(size_z/num_procs) May I know what's wrong? Thanks! -- Yours sincerely, TAY wee-beng From balay at mcs.anl.gov Sun Nov 27 15:30:23 2011 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 27 Nov 2011 15:30:23 -0600 (CST) Subject: [petsc-users] Error compiling code when upgrading from 3.1p8 to 3.2p5 In-Reply-To: <4ED2AB30.2060801@gmail.com> References: <4ED2AB30.2060801@gmail.com> Message-ID: check http://www.mcs.anl.gov/petsc/documentation/changes/32.html -> Changed PetscTruth to PetscBool satish On Sun, 27 Nov 2011, TAY wee-beng wrote: > Hi, > > I have trouble compiling my Fortran codes when I upgrade PETSc from 3.1p8 to > 3.2p5. > > My code is something like this: > > module global_data > > use nrtype > > implicit none > > save > > #include "finclude/petsc.h90" > > !grid variables > > integer :: size_x,size_y,size_z,grid_type > !size_x1,size_x2,size_x3,size_y1,size_y2,size_y3 > > real(8), allocatable :: > x(:),y(:),z(:),xu(:),yu(:),zu(:),xv(:),yv(:),zv(:),xw(:),yw(:),zw(:),c_cx(:),cu_cx(:),c_cy(:),cv_cy(:),c_cz(:),cw_cz(:) > > !solver variables > > ... 
> > I tried after compiling with the new 3.2p5 and got the following error: > > /opt/openmpi-1.5.3/bin/mpif90 -c -g -debug all -implicitnone -warn unused > -fp-stack-check -heap-arrays -ftrapuv -check pointers -O0 -save -w90 -w -w95 > -O0 -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include > -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include -I/opt/openmpi-1.5.3/include > -o global.o global.F90 > global.F90(205): error #5082: Syntax error, found IDENTIFIER 'FLGG' when > expecting one of: ( % : . = => > PetscTruth flgg > -----------^ > global.F90(205): error #6274: This statement must not appear in the > specification part of a module > PetscTruth flgg > ^ > global.F90(207): error #6236: A specification statement cannot appear in the > executable section. > integer(kind=selected_int_kind(5)) reason > ^ > global.F90(209): error #6236: A specification statement cannot appear in the > executable section. > integer(kind=selected_int_kind(10)) i_vec > ^ > global.F90(213): error #6236: A specification statement cannot appear in the > executable section. > integer :: > myid,num_procs,ksta,kend,ksta_ext,kend_ext,ksta_ext0,ksta2,kend2,kend3 > ^ > global.F90(215): error #6236: A specification statement cannot appear in the > executable section. > integer :: > ijk_sta_p,ijk_end_p,ijk_sta_m,ijk_end_m,ijk_sta_mx,ijk_end_mx,ijk_sta_my,ijk_end_my,ijk_sta_mz,ijk_end_mz > ^ > global.F90(217): error #6236: A specification statement cannot appear in the > executable section. > character(2) :: procs > ^ > global.F90(205): error #6404: This name does not have a type, and must have an > explicit type. [PETSCTRUTH] > PetscTruth flgg > ^ > global.F90(205): error #6404: This name does not have a type, and must have an > explicit type. [FLGG] > PetscTruth flgg > -----------^ > global.F90(229): error #6404: This name does not have a type, and must have an > explicit type. [KSTA] > ksta=myid*(size_z/num_procs)+1; kend=(myid+1)*(size_z/num_procs) > > > May I know what's wrong? > > Thanks! > > From zonexo at gmail.com Sun Nov 27 15:38:35 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 27 Nov 2011 22:38:35 +0100 Subject: [petsc-users] Error compiling code when upgrading from 3.1p8 to 3.2p5 In-Reply-To: References: <4ED2AB30.2060801@gmail.com> Message-ID: <4ED2ADDB.7070600@gmail.com> Got it! Thanks! Yours sincerely, TAY wee-beng On 27/11/2011 10:30 PM, Satish Balay wrote: > check http://www.mcs.anl.gov/petsc/documentation/changes/32.html > > -> Changed PetscTruth to PetscBool > > satish > > On Sun, 27 Nov 2011, TAY wee-beng wrote: > >> Hi, >> >> I have trouble compiling my Fortran codes when I upgrade PETSc from 3.1p8 to >> 3.2p5. >> >> My code is something like this: >> >> module global_data >> >> use nrtype >> >> implicit none >> >> save >> >> #include "finclude/petsc.h90" >> >> !grid variables >> >> integer :: size_x,size_y,size_z,grid_type >> !size_x1,size_x2,size_x3,size_y1,size_y2,size_y3 >> >> real(8), allocatable :: >> x(:),y(:),z(:),xu(:),yu(:),zu(:),xv(:),yv(:),zv(:),xw(:),yw(:),zw(:),c_cx(:),cu_cx(:),c_cy(:),cv_cy(:),c_cz(:),cw_cz(:) >> >> !solver variables >> >> ... 
>> >> I tried after compiling with the new 3.2p5 and got the following error: >> >> /opt/openmpi-1.5.3/bin/mpif90 -c -g -debug all -implicitnone -warn unused >> -fp-stack-check -heap-arrays -ftrapuv -check pointers -O0 -save -w90 -w -w95 >> -O0 -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include >> -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include -I/opt/openmpi-1.5.3/include >> -o global.o global.F90 >> global.F90(205): error #5082: Syntax error, found IDENTIFIER 'FLGG' when >> expecting one of: ( % : . = => >> PetscTruth flgg >> -----------^ >> global.F90(205): error #6274: This statement must not appear in the >> specification part of a module >> PetscTruth flgg >> ^ >> global.F90(207): error #6236: A specification statement cannot appear in the >> executable section. >> integer(kind=selected_int_kind(5)) reason >> ^ >> global.F90(209): error #6236: A specification statement cannot appear in the >> executable section. >> integer(kind=selected_int_kind(10)) i_vec >> ^ >> global.F90(213): error #6236: A specification statement cannot appear in the >> executable section. >> integer :: >> myid,num_procs,ksta,kend,ksta_ext,kend_ext,ksta_ext0,ksta2,kend2,kend3 >> ^ >> global.F90(215): error #6236: A specification statement cannot appear in the >> executable section. >> integer :: >> ijk_sta_p,ijk_end_p,ijk_sta_m,ijk_end_m,ijk_sta_mx,ijk_end_mx,ijk_sta_my,ijk_end_my,ijk_sta_mz,ijk_end_mz >> ^ >> global.F90(217): error #6236: A specification statement cannot appear in the >> executable section. >> character(2) :: procs >> ^ >> global.F90(205): error #6404: This name does not have a type, and must have an >> explicit type. [PETSCTRUTH] >> PetscTruth flgg >> ^ >> global.F90(205): error #6404: This name does not have a type, and must have an >> explicit type. [FLGG] >> PetscTruth flgg >> -----------^ >> global.F90(229): error #6404: This name does not have a type, and must have an >> explicit type. [KSTA] >> ksta=myid*(size_z/num_procs)+1; kend=(myid+1)*(size_z/num_procs) >> >> >> May I know what's wrong? >> >> Thanks! >> >> From mmsussman at gmail.com Sun Nov 27 15:56:04 2011 From: mmsussman at gmail.com (Mike Sussman) Date: Sun, 27 Nov 2011 16:56:04 -0500 Subject: [petsc-users] TS snes failure In-Reply-To: References: Message-ID: <1322430964.3615.30.camel@ozhp> Thank you for the information. Excuse my ignorance, but must I now download the latest developmental version? > Message: 7 > Date: Sun, 27 Nov 2011 15:02:37 -0600 > From: Jed Brown > Subject: Re: [petsc-users] TS snes failure > To: PETSc users list > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Looks like I fixed this after petsc-3.2 was released. > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/9d0d4 > ---- Mike Sussman From jedbrown at mcs.anl.gov Sun Nov 27 15:59:10 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 27 Nov 2011 15:59:10 -0600 Subject: [petsc-users] TS snes failure In-Reply-To: <1322430964.3615.30.camel@ozhp> References: <1322430964.3615.30.camel@ozhp> Message-ID: On Sun, Nov 27, 2011 at 15:56, Mike Sussman wrote: > Thank you for the information. Excuse my ignorance, but must I now > download the latest developmental version? > For that feature to work, yes. You might be able to apply that patch to 3.2, but TS has had a few improvements since the release (e.g. Rosenbrock-W methods with adaptive error control), so it's likely worth using anyway. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Sun Nov 27 17:34:13 2011 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 27 Nov 2011 17:34:13 -0600 Subject: [petsc-users] a simple question about options left In-Reply-To: References: Message-ID: <65749806-311E-4953-B2D0-4A3B5C87752F@mcs.anl.gov> I have fixed the command line parsing in petsc-dev so you will no longer get messages about nonexistent arguments like this. Barry On Nov 23, 2011, at 10:26 PM, Xiangdong Liang wrote: > Hello everyone, > > When I compile my program with debug mode and ran it, I got > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-1 value: -1 > > I even got the warning if I do not provide any options. What's the option left? > > However, such warning about the option left is gone if I compile my > program in arch-opt mode. Thanks. > > Xiangdong From xdliang at gmail.com Sun Nov 27 18:07:46 2011 From: xdliang at gmail.com (Xiangdong Liang) Date: Sun, 27 Nov 2011 19:07:46 -0500 Subject: [petsc-users] a simple question about options left In-Reply-To: <65749806-311E-4953-B2D0-4A3B5C87752F@mcs.anl.gov> References: <65749806-311E-4953-B2D0-4A3B5C87752F@mcs.anl.gov> Message-ID: Great. Thanks a lot, Barry! Xiangdong On Sun, Nov 27, 2011 at 6:34 PM, Barry Smith wrote: > > ? I have fixed the command line parsing in petsc-dev so you will no longer get messages about nonexistent arguments like this. > > ? Barry > > On Nov 23, 2011, at 10:26 PM, Xiangdong Liang wrote: > >> Hello everyone, >> >> When I compile my program with debug mode and ran it, I got >> >> WARNING! There are options you set that were not used! >> WARNING! could be spelling mistake, etc! >> Option left: name:-1 value: -1 >> >> I even got the warning if I do not provide any options. What's the option left? >> >> However, such warning about the option left is gone if I compile my >> program in arch-opt mode. ?Thanks. >> >> Xiangdong > > From hzhang at mcs.anl.gov Sun Nov 27 20:17:51 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sun, 27 Nov 2011 20:17:51 -0600 Subject: [petsc-users] Only one column is in the matrix after solving the inverse matrix In-Reply-To: <1322025195.3509.16.camel@hyyv460> References: <1322025195.3509.16.camel@hyyv460> Message-ID: huyaoyu : This is a bug in petsc library. I've fixed it in petsc-3.2 and patched petsc-dev. You may edit ~petsc-3.2/src/mat/impls/dense/seq/dense.c by following http://petsc.cs.iit.edu/petsc/petsc-dev/rev/c2fe40bf559c or get an updated petsc-3.2, then rebuild your petsc-3.2. Thanks for reporting the problem! Hong > Hong, > > Thank you for your help! > > I tried the example code of petsc-3.2/src/mat/examples/tests/ex1.c. > But the result is the same. I attach the modified code and the > results here. I don't know what is the problem exactly. By the way I am > using PETSc on a 64bit ubuntu 11.04. I complied MPICH2 myself, made > symbolic links in the /usr/bin for mpicc, mpiexec etc, and configured > PETSc use the command line: > ./configure --with-cc=mpicc --with-fc=mpif90 --download-f-blas-lapack=1 > > I break some lines of the code to avoid line wrapping in Evolution mail > program. > >> You can take a look at examples >> petsc-3.2/src/mat/examples/tests/ex1.c, ex125.c and ex125.c. >> >> Hong >> >> On Tue, Nov 22, 2011 at 9:23 AM, Yaoyu Hu wrote: >> > Hi, everyone, >> > >> > I am new to PETSc, and I have just begun to use it together with slepc. It >> > is really fantastic and I like it! 
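A compact sketch of the FAQ pattern discussed in this thread (in-place MatLUFactor followed by MatMatSolve against a dense identity) may be easier to follow than the full program quoted below. The function name and the caller-supplied dimension are illustrative, and, as Hong notes above, the "only the first column" result came from a bug in petsc-3.2's dense MatMatSolve rather than from this usage pattern.

/* Sketch only: invert a small sequential dense matrix via in-place LU
 * and MatMatSolve against an identity right-hand side.  A is overwritten
 * by its factorization; *Ainv receives the inverse. */
#include <petscmat.h>

PetscErrorCode dense_inverse(Mat A, PetscInt n, Mat *Ainv)
{
  Mat            RHS;                  /* dense identity */
  PetscInt       i;
  PetscErrorCode ierr;

  ierr = MatCreateSeqDense(PETSC_COMM_SELF, n, n, PETSC_NULL, &RHS); CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    ierr = MatSetValue(RHS, i, i, 1.0, INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(RHS, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(RHS, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

  ierr = MatDuplicate(RHS, MAT_DO_NOT_COPY_VALUES, Ainv); CHKERRQ(ierr);
  ierr = MatLUFactor(A, PETSC_NULL, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr);  /* in-place LU */
  ierr = MatMatSolve(A, RHS, *Ainv); CHKERRQ(ierr);  /* column i of *Ainv solves A x = e_i */

  ierr = MatDestroy(&RHS); CHKERRQ(ierr);
  return 0;
}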
>> > >> > >> > >> > It?was not from me but one of my colleges who wanted to solve an inverse of >> > a matrix, which had 400 rows and columns. >> > >> > >> > >> > I know that it is not good to design a algorithm that has a process for >> > solving the inverse of a matrix. I just wanted to?give it a try. However, >> > things turned out wired. I followed the instructions on the FAQ web page of >> > PETSc, the one using MatLUFractor() and MatMatSolve(). After I finished the >> > coding, I tried the program. The result I got?was a matrix which only has >> > its first column but nothing else. I did not know what's happened. The >> > following is the codes I used. >> > > =============Modified code begins================= > static char help[] = "Give the inverse of a matrix.\n\n"; > > #include > > #include > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > > #define DIMENSION 2 > > int main(int argc,char **args) > { > ? ? ? ?PetscErrorCode ?ierr; > ? ? ? ?PetscMPIInt ? ? size; > ? ? ? ?Mat ? ? ? ? ? ? A,CA; ? // CA is the copy of A > ? ? ? ?Mat ? ? ? ? ? ? RHS,XX; // XX is the inverse result > ? ? ? ?MatFactorInfo ? mfinfo; > ? ? ? ?PetscScalar* ? ?array_scalar = > ? ? ? ? ? ? ? ?new PetscScalar[DIMENSION*DIMENSION]; > ? ? ? ?PetscScalar* ? ?array_for_RHS; > > ? ? ? ?// clean the values of array_scalar > ? ? ? ?for(int i=0;i ? ? ? ?{ > ? ? ? ? ? ? ? ?array_scalar[i] = 0.0; > ? ? ? ?} > > ? ? ? ?// set up the indices > ? ? ? ?int idxm[DIMENSION]; > ? ? ? ?for(int i=0;i ? ? ? ?{ > ? ? ? ? ? ? ? ?idxm[i] = i; > ? ? ? ?} > > ? ? ? ?PetscInitialize(&argc,&args,(char *)0,help); > ? ? ? ?ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); > ? ? ? ?if (size != 1) SETERRQ(PETSC_COMM_WORLD, > ? ? ? ? ? ? ? ?1,"This is a uniprocessor example only!"); > > ? ? ? ?// RHS & XX > ? ? ? ?ierr = MatCreate(PETSC_COMM_WORLD,&RHS);CHKERRQ(ierr); > ? ? ? ?ierr =MatSetSizes(RHS, > ? ? ? ? ? ? ? ?PETSC_DECIDE,PETSC_DECIDE, > ? ? ? ? ? ? ? ?DIMENSION,DIMENSION);CHKERRQ(ierr); > ? ? ? ?ierr = MatSetType(RHS,MATDENSE);CHKERRQ(ierr); > ? ? ? ?ierr = MatSetFromOptions(RHS);CHKERRQ(ierr); > ? ? ? ?ierr = MatGetArray(RHS,&array_for_RHS); CHKERRQ(ierr); > ? ? ? ?for(int i=0;i ? ? ? ?{ > ? ? ? ? ? ? ? ? ? ? ? ?array_for_RHS[i*DIMENSION + i] = 1.0; > ? ? ? ?} > ? ? ? ?ierr = MatRestoreArray(RHS,&array_for_RHS); CHKERRQ(ierr); > > ? ? ? ?ierr = MatDuplicate(RHS, > ? ? ? ? ? ? ? ?MAT_DO_NOT_COPY_VALUES,&XX);CHKERRQ(ierr); > > ? ? ? ?/* matrix A */ > ? ? ? ?/* read in the file A.txt. It is a single process program so I think it > is OK to read a file like this.*/ > ? ? ? ?std::fstream in_file; > ? ? ? ?double temp_scalar; > ? ? ? ?in_file.open("./A.txt",std::ifstream::in); > ? ? ? ?if(!in_file.good()) > ? ? ? ?{ > ? ? ? ? ? ? ? ?ierr = PetscPrintf(PETSC_COMM_SELF, > ? ? ? ? ? ? ? ? ? ? ? ?"File open failed!\n"); CHKERRQ(ierr); > ? ? ? ? ? ? ? ?return 1; > ? ? ? ?} > > ? ? ? ?for(int i=0;i ? ? ? ?{ > ? ? ? ? ? ? ? ?for(int j=0;j ? ? ? ? ? ? ? ?{ > ? ? ? ? ? ? ? ? ? ? ? ?in_file>>temp_scalar; > > ? ? ? ? ? ? ? ? ? ? ? ?array_scalar[i*DIMENSION + j] = temp_scalar; > ? ? ? ? ? ? ? ?} > ? ? ? ?} > ? ? ? ?in_file.close(); > > ? ? ? ?// matrices creation and initialization > ? ? ? ?ierr = MatCreateSeqDense(PETSC_COMM_WORLD, > ? ? ? ? ? ? ? ?DIMENSION,DIMENSION,PETSC_NULL,&A); CHKERRQ(ierr); > ? ? ? ?ierr = MatCreateSeqDense(PETSC_COMM_WORLD, > ? ? ? ? ? ? ? ?DIMENSION,DIMENSION,PETSC_NULL,&CA); CHKERRQ(ierr); > ? ? ? ?ierr = MatSetValues(A,DIMENSION,idxm,DIMENSION,idxm, > ? ? ? ? ? ? ? ? ? ? ? 
?array_scalar,INSERT_VALUES); CHKERRQ(ierr); > > ? ? ? ?ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > ? ? ? ?ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > ? ? ? ?// create CA > ? ? ? ?ierr = MatDuplicate(A,MAT_COPY_VALUES,&CA); CHKERRQ(ierr); > > ? ? ? ?ierr = PetscPrintf(PETSC_COMM_SELF, > ? ? ? ? ? ? ? ?"The A matrix is:\n"); CHKERRQ(ierr); > ? ? ? ?ierr = MatView(A,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); > > ? ? ? ?// in-place LUFactor > ? ? ? ?ierr = MatLUFactor(A,0,0,0); CHKERRQ(ierr); > ? ? ? ?// solve for the inverse matrix XX > ? ? ? ?ierr = MatMatSolve(A,RHS,XX); CHKERRQ(ierr); > > ? ? ? ?ierr = PetscPrintf(PETSC_COMM_SELF, > ? ? ? ? ? ? ? ?"The inverse of A matrix is:\n"); CHKERRQ(ierr); > ? ? ? ?ierr = MatView(XX,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); > > ? ? ? ?ierr = PetscPrintf(PETSC_COMM_SELF, > ? ? ? ? ? ? ? ?"The multiplied result is:\n"); CHKERRQ(ierr); > ? ? ? ?ierr = MatMatMult(CA,XX,MAT_REUSE_MATRIX,PETSC_DEFAULT,&RHS); > ? ? ? ?ierr = MatView(RHS,PETSC_VIEWER_STDOUT_SELF); CHKERRQ(ierr); > > ? ? ? ?// destroy > ? ? ? ?ierr = MatDestroy(&A); CHKERRQ(ierr); > ? ? ? ?ierr = MatDestroy(&RHS); CHKERRQ(ierr); > ? ? ? ?ierr = MatDestroy(&XX); CHKERRQ(ierr); > ? ? ? ?ierr = MatDestroy(&CA); CHKERRQ(ierr); > > ? ? ? ?ierr = PetscFinalize(); > > ? ? ? ?delete[] array_scalar; > > ? ? ? ?return 0; > } > ===========Modified code ends============== > The input file(DIMENSION = 2) and the results are > A.txt: > 2.0 3.0 > -20.0 55.0 > Results: > The A matrix is: > Matrix Object: 1 MPI processes > ?type: seqdense > 2.0000000000000000e+00 3.0000000000000000e+00 > -2.0000000000000000e+01 5.5000000000000000e+01 > The inverse of A matrix is: > Matrix Object: 1 MPI processes > ?type: seqdense > 3.2352941176470590e-01 -0.0000000000000000e+00 > 1.1764705882352941e-01 0.0000000000000000e+00 > The multiplied result is: > Matrix Object: 1 MPI processes > ?type: seqdense > 1.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 > > > > From zonexo at gmail.com Mon Nov 28 06:27:29 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 28 Nov 2011 13:27:29 +0100 Subject: [petsc-users] Error compiling code when upgrading from 3.1p8 to 3.2p5 In-Reply-To: References: <4ED2AB30.2060801@gmail.com> Message-ID: <4ED37E31.1060303@gmail.com> Hi, The code compiles and works ok. However, when I changed my solver to use HYPRE to solve the poisson equation, I got the error: [hpc12:29772] *** An error occurred in MPI_comm_size [hpc12:29772] *** on communicator MPI_COMM_WORLD [hpc12:29772] *** MPI_ERR_COMM: invalid communicator [hpc12:29772] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) 1.07user 0.12system 0:01.23elapsed 97%CPU (0avgtext+0avgdata 188432maxresident)k 0inputs+35480outputs (0major+11637minor)pagefaults 0swaps -------------------------------------------------------------------------- mpiexec has exited due to process rank 0 with PID 29771 on node hpc12 exiting improperly. There are two reasons this could occur: 1. this process did not call "init" before exiting, but others in the job did. This can cause a job to hang indefinitely while it waits for all processes to call "init". By rule, if one process calls "init", then ALL processes must call "init" prior to termination. 2. this process called "init", but exited without calling "finalize". 
By rule, all processes that call "init" MUST call "finalize" prior to exiting or it will be considered an "abnormal termination" This may have caused other processes in the application to be terminated by signals sent by mpiexec (as reported here). This happens after calling the subroutine call HYPRE_StructStencilCreate(3, 4, stencil_hypre, ierr). Btw, my code is using HYPRE's own function to construct the matrix and solve it. Thanks! Yours sincerely, TAY wee-beng On 27/11/2011 10:30 PM, Satish Balay wrote: > check http://www.mcs.anl.gov/petsc/documentation/changes/32.html > > -> Changed PetscTruth to PetscBool > > satish > > On Sun, 27 Nov 2011, TAY wee-beng wrote: > >> Hi, >> >> I have trouble compiling my Fortran codes when I upgrade PETSc from 3.1p8 to >> 3.2p5. >> >> My code is something like this: >> >> module global_data >> >> use nrtype >> >> implicit none >> >> save >> >> #include "finclude/petsc.h90" >> >> !grid variables >> >> integer :: size_x,size_y,size_z,grid_type >> !size_x1,size_x2,size_x3,size_y1,size_y2,size_y3 >> >> real(8), allocatable :: >> x(:),y(:),z(:),xu(:),yu(:),zu(:),xv(:),yv(:),zv(:),xw(:),yw(:),zw(:),c_cx(:),cu_cx(:),c_cy(:),cv_cy(:),c_cz(:),cw_cz(:) >> >> !solver variables >> >> ... >> >> I tried after compiling with the new 3.2p5 and got the following error: >> >> /opt/openmpi-1.5.3/bin/mpif90 -c -g -debug all -implicitnone -warn unused >> -fp-stack-check -heap-arrays -ftrapuv -check pointers -O0 -save -w90 -w -w95 >> -O0 -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include >> -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include -I/opt/openmpi-1.5.3/include >> -o global.o global.F90 >> global.F90(205): error #5082: Syntax error, found IDENTIFIER 'FLGG' when >> expecting one of: ( % : . = => >> PetscTruth flgg >> -----------^ >> global.F90(205): error #6274: This statement must not appear in the >> specification part of a module >> PetscTruth flgg >> ^ >> global.F90(207): error #6236: A specification statement cannot appear in the >> executable section. >> integer(kind=selected_int_kind(5)) reason >> ^ >> global.F90(209): error #6236: A specification statement cannot appear in the >> executable section. >> integer(kind=selected_int_kind(10)) i_vec >> ^ >> global.F90(213): error #6236: A specification statement cannot appear in the >> executable section. >> integer :: >> myid,num_procs,ksta,kend,ksta_ext,kend_ext,ksta_ext0,ksta2,kend2,kend3 >> ^ >> global.F90(215): error #6236: A specification statement cannot appear in the >> executable section. >> integer :: >> ijk_sta_p,ijk_end_p,ijk_sta_m,ijk_end_m,ijk_sta_mx,ijk_end_mx,ijk_sta_my,ijk_end_my,ijk_sta_mz,ijk_end_mz >> ^ >> global.F90(217): error #6236: A specification statement cannot appear in the >> executable section. >> character(2) :: procs >> ^ >> global.F90(205): error #6404: This name does not have a type, and must have an >> explicit type. [PETSCTRUTH] >> PetscTruth flgg >> ^ >> global.F90(205): error #6404: This name does not have a type, and must have an >> explicit type. [FLGG] >> PetscTruth flgg >> -----------^ >> global.F90(229): error #6404: This name does not have a type, and must have an >> explicit type. [KSTA] >> ksta=myid*(size_z/num_procs)+1; kend=(myid+1)*(size_z/num_procs) >> >> >> May I know what's wrong? >> >> Thanks! 
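One way to sidestep driving Hypre's Struct interface by hand (and the communicator handling that goes with it) is to let PETSc call Hypre through its KSP/PC layer, which is essentially what Matthew suggests further down in this thread. A minimal sketch, assuming the Poisson matrix and vectors are already assembled as ordinary PETSc objects and PETSc was configured with --download-hypre; the function name is illustrative and the KSPSetOperators signature is the petsc-3.2 one.

/* Sketch only: solve a pre-assembled system with Hypre BoomerAMG as the
 * preconditioner via PETSc's KSP, instead of calling the Hypre Struct
 * routines directly. */
#include <petscksp.h>

PetscErrorCode solve_poisson(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCHYPRE); CHKERRQ(ierr);
  ierr = PCHYPRESetType(pc, "boomeramg"); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);   /* allow -ksp_type / -pc_* overrides */
  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
  return 0;
}

The same selection can also be made without code changes, via -pc_type hypre -pc_hypre_type boomeramg on the command line.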
>> >> From zonexo at gmail.com Mon Nov 28 07:54:32 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 28 Nov 2011 14:54:32 +0100 Subject: [petsc-users] binary writing to tecplot In-Reply-To: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> Message-ID: <4ED39298.6060903@gmail.com> Hi Benjamin, I am also outputting tecplot files from my Fortran CFD code. I am using mtd 1 and 2. For mtd 1, I copied data from other procs to the root, and the root write the full data output using tecdat112. As you mentioned, it takes more time. For mtd 2, I write the data from each procs as a separate file. Then in tecplot, load multiple files together to get the full view and data. Or you can combine the files 1st using in command prompt or linux: tec360 -b "1.plt" "2.plt" -p cat_datasets.mcr where cat_datasets.mcr is: !#!MC 1200 !$!WRITEDATASET "final.plt" The above is obtained from http://www.tecplottalk.com/ This mtd is supposed to be much better than the 1st. However, during visualization, you'll see the edges of each data file. In 2D, you can just turn off the "edges" option but in 3D, due to the arrangement of the data, it's much more difficult. Supposed your data is i=1,10, j=1,10 and it's separated in 2 procs: 1. i=1,10, j=1,5 2. i=1,10, j=6,10. If mtd 2 is used, both regions 'll be represented as i=1,10, j=1,5 in tecplot, which explains the problem earlier. If someone knows how to change both regions to j=1,5 and j=6,10, it'll be much better. Hope that helps. Yours sincerely, TAY wee-beng On 22/11/2011 11:40 AM, Benjamin Sanderse wrote: > Hello all, > > I am trying to output parallel data in binary format that can be read by Tecplot. For this I use the TecIO library from Tecplot, which provide a set of Fortran/C subroutines. With these subroutines it is easy to write binary files that can be read by Tecplot, but, as far as I can see, they can not be directly used with parallel Petsc vectors. On a single processor everything works fine, but on more processors it fails. > I am thinking now of different workarounds: > > 1. Create a sequential vector from the parallel vector, and call the TecIO subroutines with this sequential vector. For large problems this will probably be too slow, and actually I don't know how to copy the content of a parallel vector into a sequential one. > 2. Write a tecplot file from each processor, with the data from that processor. The problem is that this requires combining the files afterwards, and this is probably not easy (certainly not in binary format?). > 3. Change the tecplot subroutines or write own binary output with VecView(). It might not be easy to get the output right so that Tecplot understands it. > > Do you have suggestions? Are there other possibilities? > > Thanks, > > Benjamin > > From knepley at gmail.com Mon Nov 28 08:01:05 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Nov 2011 08:01:05 -0600 Subject: [petsc-users] Error compiling code when upgrading from 3.1p8 to 3.2p5 In-Reply-To: <4ED37E31.1060303@gmail.com> References: <4ED2AB30.2060801@gmail.com> <4ED37E31.1060303@gmail.com> Message-ID: On Mon, Nov 28, 2011 at 6:27 AM, TAY wee-beng wrote: > Hi, > > The code compiles and works ok. 
However, when I changed my solver to use > HYPRE to solve the poisson equation, > > I got the error: > > [hpc12:29772] *** An error occurred in MPI_comm_size > [hpc12:29772] *** on communicator MPI_COMM_WORLD > [hpc12:29772] *** MPI_ERR_COMM: invalid communicator > [hpc12:29772] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) > 1.07user 0.12system 0:01.23elapsed 97%CPU (0avgtext+0avgdata > 188432maxresident)k > 0inputs+35480outputs (0major+11637minor)pagefaults 0swaps > ------------------------------**------------------------------** > -------------- > mpiexec has exited due to process rank 0 with PID 29771 on > node hpc12 exiting improperly. There are two reasons this could occur: > > 1. this process did not call "init" before exiting, but others in > the job did. This can cause a job to hang indefinitely while it waits > for all processes to call "init". By rule, if one process calls "init", > then ALL processes must call "init" prior to termination. > > 2. this process called "init", but exited without calling "finalize". > By rule, all processes that call "init" MUST call "finalize" prior to > exiting or it will be considered an "abnormal termination" > > This may have caused other processes in the application to be > terminated by signals sent by mpiexec (as reported here). > > > This happens after calling the subroutine call > HYPRE_StructStencilCreate(3, 4, stencil_hypre, ierr). > > Btw, my code is using HYPRE's own function to construct the matrix and > solve it. > I can only assume you have a bug in your code. Why not just call this through KSPSolve? Matt > Thanks! > > Yours sincerely, > > TAY wee-beng > > > On 27/11/2011 10:30 PM, Satish Balay wrote: > >> check http://www.mcs.anl.gov/petsc/**documentation/changes/32.html >> >> -> Changed PetscTruth to PetscBool >> >> satish >> >> On Sun, 27 Nov 2011, TAY wee-beng wrote: >> >> Hi, >>> >>> I have trouble compiling my Fortran codes when I upgrade PETSc from >>> 3.1p8 to >>> 3.2p5. >>> >>> My code is something like this: >>> >>> module global_data >>> >>> use nrtype >>> >>> implicit none >>> >>> save >>> >>> #include "finclude/petsc.h90" >>> >>> !grid variables >>> >>> integer :: size_x,size_y,size_z,grid_type >>> !size_x1,size_x2,size_x3,size_**y1,size_y2,size_y3 >>> >>> real(8), allocatable :: >>> x(:),y(:),z(:),xu(:),yu(:),zu(**:),xv(:),yv(:),zv(:),xw(:),yw(** >>> :),zw(:),c_cx(:),cu_cx(:),c_**cy(:),cv_cy(:),c_cz(:),cw_cz(:**) >>> >>> !solver variables >>> >>> ... >>> >>> I tried after compiling with the new 3.2p5 and got the following error: >>> >>> /opt/openmpi-1.5.3/bin/mpif90 -c -g -debug all -implicitnone -warn unused >>> -fp-stack-check -heap-arrays -ftrapuv -check pointers -O0 -save -w90 -w >>> -w95 >>> -O0 -I/home/wtay/Lib/petsc-3.2-p5_**mumps_debug/include >>> -I/home/wtay/Lib/petsc-3.2-p5_**mumps_debug/include >>> -I/opt/openmpi-1.5.3/include >>> -o global.o global.F90 >>> global.F90(205): error #5082: Syntax error, found IDENTIFIER 'FLGG' when >>> expecting one of: ( % : . = => >>> PetscTruth flgg >>> -----------^ >>> global.F90(205): error #6274: This statement must not appear in the >>> specification part of a module >>> PetscTruth flgg >>> ^ >>> global.F90(207): error #6236: A specification statement cannot appear in >>> the >>> executable section. >>> integer(kind=selected_int_**kind(5)) reason >>> ^ >>> global.F90(209): error #6236: A specification statement cannot appear in >>> the >>> executable section. 
>>> integer(kind=selected_int_**kind(10)) i_vec >>> ^ >>> global.F90(213): error #6236: A specification statement cannot appear in >>> the >>> executable section. >>> integer :: >>> myid,num_procs,ksta,kend,ksta_**ext,kend_ext,ksta_ext0,ksta2,** >>> kend2,kend3 >>> ^ >>> global.F90(215): error #6236: A specification statement cannot appear in >>> the >>> executable section. >>> integer :: >>> ijk_sta_p,ijk_end_p,ijk_sta_m,**ijk_end_m,ijk_sta_mx,ijk_end_** >>> mx,ijk_sta_my,ijk_end_my,ijk_**sta_mz,ijk_end_mz >>> ^ >>> global.F90(217): error #6236: A specification statement cannot appear in >>> the >>> executable section. >>> character(2) :: procs >>> ^ >>> global.F90(205): error #6404: This name does not have a type, and must >>> have an >>> explicit type. [PETSCTRUTH] >>> PetscTruth flgg >>> ^ >>> global.F90(205): error #6404: This name does not have a type, and must >>> have an >>> explicit type. [FLGG] >>> PetscTruth flgg >>> -----------^ >>> global.F90(229): error #6404: This name does not have a type, and must >>> have an >>> explicit type. [KSTA] >>> ksta=myid*(size_z/num_procs)+**1; kend=(myid+1)*(size_z/num_**procs) >>> >>> >>> May I know what's wrong? >>> >>> Thanks! >>> >>> >>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Sanderse at cwi.nl Mon Nov 28 11:29:30 2011 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Mon, 28 Nov 2011 18:29:30 +0100 Subject: [petsc-users] binary writing to tecplot In-Reply-To: <4ED39298.6060903@gmail.com> References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> <4ED39298.6060903@gmail.com> Message-ID: <33238206-B91D-4975-9125-155E1AA2352C@cwi.nl> Hi Tay, Thanks for your help! Currently I am trying something else: I write my data (simply a Vec) to HDF5 files with PetscViewerHDF5Open() and then I load the data with the HDF5 data loader in Tecplot. This works, but Tecplot does not recognize the fact that my data is 2D, because the HDF5 file does not contain the header information like the PLT files do. I don't know yet how to get the header information in the HDF5 file. If somebody knows a solution for this, that would be great. Benjamin Op 28 nov 2011, om 14:54 heeft TAY wee-beng het volgende geschreven: > Hi Benjamin, > > I am also outputting tecplot files from my Fortran CFD code. > > I am using mtd 1 and 2. For mtd 1, I copied data from other procs to the root, and the root write the full data output using tecdat112. As you mentioned, it takes more time. > > For mtd 2, I write the data from each procs as a separate file. Then in tecplot, load multiple files together to get the full view and data. Or you can combine the files 1st using in command prompt or linux: > > tec360 -b "1.plt" "2.plt" -p cat_datasets.mcr > > where cat_datasets.mcr is: > > !#!MC 1200 > !$!WRITEDATASET "final.plt" > > The above is obtained from http://www.tecplottalk.com/ > > This mtd is supposed to be much better than the 1st. However, during visualization, you'll see the edges of each data file. In 2D, you can just turn off the "edges" option but in 3D, due to the arrangement of the data, it's much more difficult. > > Supposed your data is i=1,10, j=1,10 and it's separated in 2 procs: > > 1. i=1,10, j=1,5 > 2. i=1,10, j=6,10. > > If mtd 2 is used, both regions 'll be represented as i=1,10, j=1,5 in tecplot, which explains the problem earlier. 
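For the gather-to-root variant described above (TAY's "mtd 1"), the copy onto the writing process does not have to be hand-coded; a scatter context does it in two calls. A minimal sketch, assuming a parallel Vec u holding the field to be dumped; the TecIO calls themselves are only indicated by a comment, and for DMDA-based fields one would normally convert to natural ordering before gathering.

/* Sketch only: gather a distributed Vec onto rank 0 so that rank 0 can
 * hand the contiguous array to the TecIO routines (tecdat112 etc.). */
#include <petscvec.h>

PetscErrorCode gather_for_tecplot(Vec u)
{
  Vec            useq;            /* full-length copy, lives on rank 0 */
  VecScatter     ctx;
  PetscMPIInt    rank;
  PetscScalar    *a;
  PetscErrorCode ierr;

  ierr = VecScatterCreateToZero(u, &ctx, &useq); CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx, u, useq, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, u, useq, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
  if (!rank) {
    ierr = VecGetArray(useq, &a); CHKERRQ(ierr);
    /* ... pass a[] to the TecIO writer here ... */
    ierr = VecRestoreArray(useq, &a); CHKERRQ(ierr);
  }
  ierr = VecScatterDestroy(&ctx); CHKERRQ(ierr);
  ierr = VecDestroy(&useq); CHKERRQ(ierr);
  return 0;
}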
> > If someone knows how to change both regions to j=1,5 and j=6,10, it'll be much better. > > Hope that helps. > > Yours sincerely, > > TAY wee-beng > > > On 22/11/2011 11:40 AM, Benjamin Sanderse wrote: >> Hello all, >> >> I am trying to output parallel data in binary format that can be read by Tecplot. For this I use the TecIO library from Tecplot, which provide a set of Fortran/C subroutines. With these subroutines it is easy to write binary files that can be read by Tecplot, but, as far as I can see, they can not be directly used with parallel Petsc vectors. On a single processor everything works fine, but on more processors it fails. >> I am thinking now of different workarounds: >> >> 1. Create a sequential vector from the parallel vector, and call the TecIO subroutines with this sequential vector. For large problems this will probably be too slow, and actually I don't know how to copy the content of a parallel vector into a sequential one. >> 2. Write a tecplot file from each processor, with the data from that processor. The problem is that this requires combining the files afterwards, and this is probably not easy (certainly not in binary format?). >> 3. Change the tecplot subroutines or write own binary output with VecView(). It might not be easy to get the output right so that Tecplot understands it. >> >> Do you have suggestions? Are there other possibilities? >> >> Thanks, >> >> Benjamin >> >> From tim.gallagher at gatech.edu Mon Nov 28 14:20:48 2011 From: tim.gallagher at gatech.edu (Tim Gallagher) Date: Mon, 28 Nov 2011 15:20:48 -0500 (EST) Subject: [petsc-users] Creating HDF5 groups In-Reply-To: <89522281-a624-4444-a42e-599799e8a35d@mail2.gatech.edu> Message-ID: <656ed6c9-84b1-4a76-a7a9-534e5f158aa3@mail2.gatech.edu> Hi, I'm a little confused about why this section of code doesn't work, so hopefully somebody can help me. I'm able to run the vec/vec/examples/tutorials/ex19.c test that creates an HDF5 file, and h5dump verifies that it is making the groups and putting vectors in them. However, my code does not make the group (so it obviously doesn't put the vector in it!). The function is: #undef __FUNCT__ #define __FUNCT__ "Body::SaveGrid" PetscErrorCode Body::SaveGrid(const char filename[]) { PetscViewer hdf5File; Vec coordVec; // Grab the coordinate vector m_lastError = DMDAGetCoordinates(m_nodes,&coordVec); CHKERRXX(m_lastError); m_lastError = PetscObjectSetName((PetscObject) coordVec, "Coordinates"); CHKERRXX(m_lastError); // Open the file m_lastError = PetscViewerHDF5Open(m_commGroup, filename, FILE_MODE_WRITE, &hdf5File); CHKERRXX(m_lastError); // Set the group to dump the grid in m_lastError = PetscViewerHDF5PushGroup(hdf5File, "/grid"); CHKERRXX(m_lastError); // Dump the coordinates m_lastError = VecView(coordVec, hdf5File); CHKERRXX(m_lastError); m_lastError = PetscViewerHDF5PopGroup(hdf5File); CHKERRXX(m_lastError); // Clean up m_lastError = PetscViewerDestroy(&hdf5File); CHKERRXX(m_lastError); } The only thing I can really see different about this and ex19 is that I don't first write any dataset to the root group, but that's not required to be valid HDF5. If I call PetscViewerHDF5GetGroup after the PushGroup call, it returns /grid. Any ideas? 
Thanks, Tim From zonexo at gmail.com Mon Nov 28 14:34:02 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 28 Nov 2011 21:34:02 +0100 Subject: [petsc-users] binary writing to tecplot In-Reply-To: <33238206-B91D-4975-9125-155E1AA2352C@cwi.nl> References: <25074BCB-727C-4297-BB78-C0FECF650B01@cwi.nl> <4ED39298.6060903@gmail.com> <33238206-B91D-4975-9125-155E1AA2352C@cwi.nl> Message-ID: <4ED3F03A.906@gmail.com> Hi Benjamin, There seems to be another mtd to try: http://tecplottalk.com/viewtopic.php?t=366 Yours sincerely, TAY wee-beng On 28/11/2011 6:29 PM, Benjamin Sanderse wrote: > Hi Tay, > > Thanks for your help! Currently I am trying something else: I write my data (simply a Vec) to HDF5 files with PetscViewerHDF5Open() and then I load the data with the HDF5 data loader in Tecplot. This works, but Tecplot does not recognize the fact that my data is 2D, because the HDF5 file does not contain the header information like the PLT files do. I don't know yet how to get the header information in the HDF5 file. If somebody knows a solution for this, that would be great. > > Benjamin > > Op 28 nov 2011, om 14:54 heeft TAY wee-beng het volgende geschreven: > >> Hi Benjamin, >> >> I am also outputting tecplot files from my Fortran CFD code. >> >> I am using mtd 1 and 2. For mtd 1, I copied data from other procs to the root, and the root write the full data output using tecdat112. As you mentioned, it takes more time. >> >> For mtd 2, I write the data from each procs as a separate file. Then in tecplot, load multiple files together to get the full view and data. Or you can combine the files 1st using in command prompt or linux: >> >> tec360 -b "1.plt" "2.plt" -p cat_datasets.mcr >> >> where cat_datasets.mcr is: >> >> !#!MC 1200 >> !$!WRITEDATASET "final.plt" >> >> The above is obtained from http://www.tecplottalk.com/ >> >> This mtd is supposed to be much better than the 1st. However, during visualization, you'll see the edges of each data file. In 2D, you can just turn off the "edges" option but in 3D, due to the arrangement of the data, it's much more difficult. >> >> Supposed your data is i=1,10, j=1,10 and it's separated in 2 procs: >> >> 1. i=1,10, j=1,5 >> 2. i=1,10, j=6,10. >> >> If mtd 2 is used, both regions 'll be represented as i=1,10, j=1,5 in tecplot, which explains the problem earlier. >> >> If someone knows how to change both regions to j=1,5 and j=6,10, it'll be much better. >> >> Hope that helps. >> >> Yours sincerely, >> >> TAY wee-beng >> >> >> On 22/11/2011 11:40 AM, Benjamin Sanderse wrote: >>> Hello all, >>> >>> I am trying to output parallel data in binary format that can be read by Tecplot. For this I use the TecIO library from Tecplot, which provide a set of Fortran/C subroutines. With these subroutines it is easy to write binary files that can be read by Tecplot, but, as far as I can see, they can not be directly used with parallel Petsc vectors. On a single processor everything works fine, but on more processors it fails. >>> I am thinking now of different workarounds: >>> >>> 1. Create a sequential vector from the parallel vector, and call the TecIO subroutines with this sequential vector. For large problems this will probably be too slow, and actually I don't know how to copy the content of a parallel vector into a sequential one. >>> 2. Write a tecplot file from each processor, with the data from that processor. The problem is that this requires combining the files afterwards, and this is probably not easy (certainly not in binary format?). >>> 3. 
Change the tecplot subroutines or write own binary output with VecView(). It might not be easy to get the output right so that Tecplot understands it. >>> >>> Do you have suggestions? Are there other possibilities? >>> >>> Thanks, >>> >>> Benjamin >>> >>> > From knepley at gmail.com Mon Nov 28 15:39:57 2011 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Nov 2011 15:39:57 -0600 Subject: [petsc-users] Creating HDF5 groups In-Reply-To: <656ed6c9-84b1-4a76-a7a9-534e5f158aa3@mail2.gatech.edu> References: <89522281-a624-4444-a42e-599799e8a35d@mail2.gatech.edu> <656ed6c9-84b1-4a76-a7a9-534e5f158aa3@mail2.gatech.edu> Message-ID: On Mon, Nov 28, 2011 at 2:20 PM, Tim Gallagher wrote: > Hi, > > I'm a little confused about why this section of code doesn't work, so > hopefully somebody can help me. I'm able to run the > vec/vec/examples/tutorials/ex19.c test that creates an HDF5 file, and > h5dump verifies that it is making the groups and putting vectors in them. > However, my code does not make the group (so it obviously doesn't put the > vector in it!). The function is: > I commented out the write into the root group in ex19 and it worked fine: knepley:/PETSc3/petsc/petsc-dev$ ./arch-sieve-fdatatypes-debug/bin/h5dump ./ex19.h5 ./arch-sieve-fdatatypes-debug/bin/h5dump ./ex19.h5 HDF5 "./ex19.h5" { GROUP "/" { GROUP "testBlockSize" { DATASET "TestVec" { DATATYPE H5T_IEEE_F64LE DATASPACE SIMPLE { ( 3, 2 ) / ( 3, 2 ) } DATA { (0,0): 1, 1, (1,0): 1, 1, (2,0): 1, 1 } } } GROUP "testTimestep" { DATASET "TestVec" { DATATYPE H5T_IEEE_F64LE DATASPACE SIMPLE { ( 2, 3, 2 ) / ( H5S_UNLIMITED, 3, 2 ) } DATA { (0,0,0): 1, 1, (0,1,0): 1, 1, (0,2,0): 1, 1, (1,0,0): 1, 1, (1,1,0): 1, 1, (1,2,0): 1, 1 } } } } } Matt > #undef __FUNCT__ > #define __FUNCT__ "Body::SaveGrid" > PetscErrorCode Body::SaveGrid(const char filename[]) > { > PetscViewer hdf5File; > Vec coordVec; > > // Grab the coordinate vector > m_lastError = DMDAGetCoordinates(m_nodes,&coordVec); > CHKERRXX(m_lastError); > > m_lastError = PetscObjectSetName((PetscObject) coordVec, "Coordinates"); > CHKERRXX(m_lastError); > > // Open the file > m_lastError = PetscViewerHDF5Open(m_commGroup, filename, FILE_MODE_WRITE, > &hdf5File); > CHKERRXX(m_lastError); > > // Set the group to dump the grid in > m_lastError = PetscViewerHDF5PushGroup(hdf5File, "/grid"); > CHKERRXX(m_lastError); > > // Dump the coordinates > m_lastError = VecView(coordVec, hdf5File); > CHKERRXX(m_lastError); > > m_lastError = PetscViewerHDF5PopGroup(hdf5File); > CHKERRXX(m_lastError); > > // Clean up > m_lastError = PetscViewerDestroy(&hdf5File); > CHKERRXX(m_lastError); > } > > The only thing I can really see different about this and ex19 is that I > don't first write any dataset to the root group, but that's not required to > be valid HDF5. If I call PetscViewerHDF5GetGroup after the PushGroup call, > it returns /grid. > > Any ideas? > > Thanks, > > Tim > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.gallagher at gatech.edu Mon Nov 28 15:49:23 2011 From: tim.gallagher at gatech.edu (Tim Gallagher) Date: Mon, 28 Nov 2011 16:49:23 -0500 (EST) Subject: [petsc-users] Creating HDF5 groups In-Reply-To: Message-ID: <7d3bf843-8af7-4c24-9b3f-eb4b2d124a16@mail2.gatech.edu> I believe it, it works for me that way too. 
But my function to write the data doesn't create the group. I get: |16:45||tgallagher at harpy:MSDM-Build|> h5dump -A test.h5 HDF5 "test.h5" { GROUP "/" { DATASET "Coordinates" { DATATYPE H5T_IEEE_F64LE DATASPACE SIMPLE { ( 10, 10, 10, 3 ) / ( 10, 10, 10, 3 ) } } } } despite the call to PetscViewerHDF5PushGroup I showed in the code snippet. I'm pretty stumped why it's not working. Tim ----- Original Message ----- From: "Matthew Knepley" To: gtg085x at mail.gatech.edu, "PETSc users list" Sent: Monday, November 28, 2011 4:39:57 PM Subject: Re: [petsc-users] Creating HDF5 groups On Mon, Nov 28, 2011 at 2:20 PM, Tim Gallagher < tim.gallagher at gatech.edu > wrote: Hi, I'm a little confused about why this section of code doesn't work, so hopefully somebody can help me. I'm able to run the vec/vec/examples/tutorials/ex19.c test that creates an HDF5 file, and h5dump verifies that it is making the groups and putting vectors in them. However, my code does not make the group (so it obviously doesn't put the vector in it!). The function is: I commented out the write into the root group in ex19 and it worked fine: knepley:/PETSc3/petsc/petsc-dev$ ./arch-sieve-fdatatypes-debug/bin/h5dump ./ex19.h5 ./arch-sieve-fdatatypes-debug/bin/h5dump ./ex19.h5 HDF5 "./ex19.h5" { GROUP "/" { GROUP "testBlockSize" { DATASET "TestVec" { DATATYPE H5T_IEEE_F64LE DATASPACE SIMPLE { ( 3, 2 ) / ( 3, 2 ) } DATA { (0,0): 1, 1, (1,0): 1, 1, (2,0): 1, 1 } } } GROUP "testTimestep" { DATASET "TestVec" { DATATYPE H5T_IEEE_F64LE DATASPACE SIMPLE { ( 2, 3, 2 ) / ( H5S_UNLIMITED, 3, 2 ) } DATA { (0,0,0): 1, 1, (0,1,0): 1, 1, (0,2,0): 1, 1, (1,0,0): 1, 1, (1,1,0): 1, 1, (1,2,0): 1, 1 } } } } } Matt
#undef __FUNCT__ #define __FUNCT__ "Body::SaveGrid" PetscErrorCode Body::SaveGrid(const char filename[]) { PetscViewer hdf5File; Vec coordVec; // Grab the coordinate vector m_lastError = DMDAGetCoordinates(m_nodes,&coordVec); CHKERRXX(m_lastError); m_lastError = PetscObjectSetName((PetscObject) coordVec, "Coordinates"); CHKERRXX(m_lastError); // Open the file m_lastError = PetscViewerHDF5Open(m_commGroup, filename, FILE_MODE_WRITE, &hdf5File); CHKERRXX(m_lastError); // Set the group to dump the grid in m_lastError = PetscViewerHDF5PushGroup(hdf5File, "/grid"); CHKERRXX(m_lastError); // Dump the coordinates m_lastError = VecView(coordVec, hdf5File); CHKERRXX(m_lastError); m_lastError = PetscViewerHDF5PopGroup(hdf5File); CHKERRXX(m_lastError); // Clean up m_lastError = PetscViewerDestroy(&hdf5File); CHKERRXX(m_lastError); } The only thing I can really see different about this and ex19 is that I don't first write any dataset to the root group, but that's not required to be valid HDF5. If I call PetscViewerHDF5GetGroup after the PushGroup call, it returns /grid. Any ideas? Thanks, Tim
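One way to narrow the problem down is to take the DMDA coordinate vector out of the picture and write a plain sequential Vec with exactly the same viewer calls; if the sketch below does create /grid while the coordinate version does not, the difference lies in the vector being viewed rather than in PushGroup itself. File and object names here are illustrative.

/* Minimal sketch, no DMDA involved: write one named Vec into the HDF5
 * group "/grid" using the same PushGroup/VecView/PopGroup sequence. */
#include <petscvec.h>
#include <petscviewer.h>

PetscErrorCode write_minimal(const char *fname)
{
  Vec            v;
  PetscViewer    h5;
  PetscErrorCode ierr;

  ierr = VecCreateSeq(PETSC_COMM_SELF, 8, &v); CHKERRQ(ierr);
  ierr = VecSet(v, 1.0); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)v, "TestVec"); CHKERRQ(ierr);

  ierr = PetscViewerHDF5Open(PETSC_COMM_SELF, fname, FILE_MODE_WRITE, &h5); CHKERRQ(ierr);
  ierr = PetscViewerHDF5PushGroup(h5, "/grid"); CHKERRQ(ierr);
  ierr = VecView(v, h5); CHKERRQ(ierr);
  ierr = PetscViewerHDF5PopGroup(h5); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&h5); CHKERRQ(ierr);
  ierr = VecDestroy(&v); CHKERRQ(ierr);
  return 0;
}

The resulting file can be checked with h5dump exactly as in the messages above.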
-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmsussman at gmail.com Mon Nov 28 18:45:02 2011 From: mmsussman at gmail.com (Mike Sussman) Date: Mon, 28 Nov 2011 19:45:02 -0500 Subject: [petsc-users] TSGL failure Message-ID: <1322527502.3615.71.camel@ozhp> Folks, The tutorial example in TS, ex2f.F, is designed to use the theta method. If, however, I use the runtime option "-ts_type gl", I get a segfault. I also get a segfault with the line CALL TSSetType(ts,TSGL,ierr); in the code. Is this supposed to happen? What am I missing? I am running 3.2p5. The message says: % ex2f -ts_type gl -ts_gl_type irks -ts_gl_rtol 1.e-5 -ts_gl_atol 1.e-10 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] TSGLAdaptChoose line 232 /home/mike/Programming/petsc-3.2-p5/src/ts/impls/implicit/gl/gladapt.c [0]PETSC ERROR: [0] TSGLChooseNextScheme line 774 /home/mike/Programming/petsc-3.2-p5/src/ts/impls/implicit/gl/gl.c [0]PETSC ERROR: [0] TSSolve_GL line 822 /home/mike/Programming/petsc-3.2-p5/src/ts/impls/implicit/gl/gl.c [0]PETSC ERROR: [0] TSSolve line 1824 /home/mike/Programming/petsc-3.2-p5/src/ts/interface/ts.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ex2f on a linux-gnu named ozhp by mike Mon Nov 28 19:38:30 2011 [0]PETSC ERROR: Libraries linked from /home/mike/Programming/petsc-3.2-p5/linux-gnu-c-opt/lib [0]PETSC ERROR: Configure run at Sat Nov 26 21:07:02 2011 [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. 
You may or may not see output from other processes, depending on _______ Mike Sussman From jedbrown at mcs.anl.gov Mon Nov 28 18:56:12 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Nov 2011 18:56:12 -0600 Subject: [petsc-users] TSGL failure In-Reply-To: <1322527502.3615.71.camel@ozhp> References: <1322527502.3615.71.camel@ozhp> Message-ID: On Mon, Nov 28, 2011 at 18:45, Mike Sussman wrote: > The tutorial example in TS, ex2f.F, is designed to use the theta method. > If, however, I use the runtime option "-ts_type gl", I get a segfault. > We had enough robustness problems with TSGL that it wasn't thoroughly tested after rewriting TS ownership semantics. Consequently, TSGL can end up not properly configured. I believe this is fixed in petsc-dev, but you are probably better off with different methods right now. Perhaps most interesting are the Rosenbrock-W methods with adaptive error control in petsc-dev, try -ts_type rosw. -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaia.nisoli at gmail.com Mon Nov 28 19:03:26 2011 From: isaia.nisoli at gmail.com (Nisoli Isaia) Date: Tue, 29 Nov 2011 02:03:26 +0100 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays Message-ID: I have a small program that takes as input a file in UBLAS serialization format of a UBLAS compressed matrix and outputs a Petsc binary file, using MatCreateSeqAIJWithArrays. int convertPETSc(srmatrix &sA, Mat * const Aaddr) { PetscErrorCode ierr; PetscInt size=(sA.A).size1(); unsigned int totnnz=(sA.A).nnz(); (sA.A).complete_index1_data(); long unsigned int *row_ptr =(sA.A).index1_data().begin(); long unsigned int *col_ptr =(sA.A).index2_data().begin(); double * value_ptr = (sA.A).value_data().begin(); unsigned int sizerow_ptr=(sA.A).index1_data().size(); unsigned int sizecol_ptr=(sA.A).index2_data().size(); PetscInt* rowindices; PetscInt* colindices; PetscScalar* values; rowindices=new PetscInt[sizerow_ptr]; colindices=new PetscInt[sizecol_ptr]; values=new PetscScalar[totnnz]; for (unsigned int i=0;i From jedbrown at mcs.anl.gov Mon Nov 28 19:08:17 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Nov 2011 19:08:17 -0600 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays In-Reply-To: References: Message-ID: On Mon, Nov 28, 2011 at 19:03, Nisoli Isaia wrote: > int convertPETSc(srmatrix &sA, Mat * const Aaddr) > { > PetscErrorCode ierr; > PetscInt size=(sA.A).size1(); > unsigned int totnnz=(sA.A).nnz(); > > (sA.A).complete_index1_data(); > long unsigned int *row_ptr =(sA.A).index1_data().begin(); > long unsigned int *col_ptr =(sA.A).index2_data().begin(); > double * value_ptr = (sA.A).value_data().begin(); > unsigned int sizerow_ptr=(sA.A).index1_data().size(); > unsigned int sizecol_ptr=(sA.A).index2_data().size(); > PetscInt* rowindices; > PetscInt* colindices; > PetscScalar* values; > rowindices=new PetscInt[sizerow_ptr]; > This array is probably one too short. It needs to have length nrows+1 (so that the length of the last row is known). > colindices=new PetscInt[sizecol_ptr]; > values=new PetscScalar[totnnz]; > for (unsigned int i=0;i rowindices[i]=PetscInt(row_ptr[i]); > for (unsigned int i=0;i colindices[i]=PetscInt(col_ptr[i]); > for (unsigned int i=0;i values[i]=PetscScalar(value_ptr[i]); > > ierr=MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,size+1,size+1,rowindices,colindices,values,Aaddr);CHKERRQ(ierr); > You haven't explained your API, but you have a lot of "size" things running around. 
You should probably pass in "size" here instead of "size+1". > ierr=MatAssemblyBegin(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr=MatAssemblyEnd(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > return 0; > } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaia.nisoli at gmail.com Mon Nov 28 19:30:08 2011 From: isaia.nisoli at gmail.com (Nisoli Isaia) Date: Tue, 29 Nov 2011 02:30:08 +0100 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays In-Reply-To: References: Message-ID: >>* int convertPETSc(srmatrix &sA, Mat * const Aaddr)*>>* {*>>* PetscErrorCode ierr;*>>* PetscInt size=(sA.A).size1();*>>* unsigned int totnnz=(sA.A).nnz();*>>**>>* (sA.A).complete_index1_data();*>>* long unsigned int *row_ptr =(sA.A).index1_data().begin();*>>* long unsigned int *col_ptr =(sA.A).index2_data().begin();*>>* double * value_ptr = (sA.A).value_data().begin();*>>* unsigned int sizerow_ptr=(sA.A).index1_data().size();*>>* unsigned int sizecol_ptr=(sA.A).index2_data().size();*>>* PetscInt* rowindices;*>>* PetscInt* colindices;*>>* PetscScalar* values;*>>* rowindices=new PetscInt[sizerow_ptr];*>>** >This array is probably one too short. It needs to have length nrows+1 (so >that the length of the last row is known). I tried *rowindices=new PetscInt[sizerow_ptr+1]; *allocating some more memory is not a problem... but I get the same error. >>* colindices=new PetscInt[sizecol_ptr];*>>* values=new PetscScalar[totnnz];*>>* for (unsigned int i=0;i>* rowindices[i]=PetscInt(row_ptr[i]);*>>* for (unsigned int i=0;i>* colindices[i]=PetscInt(col_ptr[i]);*>>* for (unsigned int i=0;i>* values[i]=PetscScalar(value_ptr[i]);*>>**>>* ierr=MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,size+1,size+1,rowindices,colindices,values,Aaddr);CHKERRQ(ierr);*>>** >You haven't explained your API, but you have a lot of "size" things running >around. You should probably pass in "size" here instead of "size+1". I'm sorry, I didn't say that in the first mail, but I want the function to add one last row and one last column of zeros to the matrix. So that the output matrix is (size+1)*(size+1) if the input matrix is size*size. >* ierr=MatAssemblyBegin(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);*>* ierr=MatAssemblyEnd(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);*>* return 0;*>* }*> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Nov 28 19:34:29 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Nov 2011 19:34:29 -0600 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays In-Reply-To: References: Message-ID: On Mon, Nov 28, 2011 at 19:30, Nisoli Isaia wrote: > I tried > *rowindices=new PetscInt[sizerow_ptr+1]; > *allocating some more memory is not a problem... > but I get the same error. > > How is sizerow_ptr related to size? Are you also filling in those indices correctly? > > > >>* colindices=new PetscInt[sizecol_ptr];*>>* values=new PetscScalar[totnnz];*>>* for (unsigned int i=0;i>* rowindices[i]=PetscInt(row_ptr[i]);*>>* for (unsigned int i=0;i>* colindices[i]=PetscInt(col_ptr[i]);*>>* for (unsigned int i=0;i>* values[i]=PetscScalar(value_ptr[i]);*>>**>>* ierr=MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,size+1,size+1,rowindices,colindices,values,Aaddr);CHKERRQ(ierr);*>>** > > >You haven't explained your API, but you have a lot of "size" things running > >around. You should probably pass in "size" here instead of "size+1". 
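To make the two points above concrete (the row-pointer array needs nrows+1 entries, and an empty row is expressed by repeating the previous pointer), here is a small self-contained sketch of the CSR arrays for a 3x3 matrix whose last row and last column are entirely zero; the numerical values are illustrative.

/* Sketch: AIJ/CSR arrays for the 3x3 matrix
 *     [ 2 3 0 ]
 *     [ 0 5 0 ]
 *     [ 0 0 0 ]    last row empty, last column never referenced
 * With n rows the row-pointer array has n+1 entries; row k is empty
 * exactly when i[k+1] == i[k].  The arrays must remain valid for the
 * lifetime of the Mat created from them. */
#include <petscmat.h>

static PetscInt    i3[] = {0, 2, 3, 3};   /* n+1 = 4 row pointers         */
static PetscInt    j3[] = {0, 1, 1};      /* column index of each nonzero */
static PetscScalar a3[] = {2.0, 3.0, 5.0};

PetscErrorCode make_csr_example(Mat *A)
{
  PetscErrorCode ierr;
  ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, 3, 3, i3, j3, a3, A); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  return 0;
}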
> I'm sorry, I didn't say that in the first mail, but I want the function to add one last row and one last column of zeros to the matrix. > So that the output matrix is (size+1)*(size+1) if the input matrix is size*size. > > Well, the matrix has no way to know what should go in that last row. It's likely reading off the end of the array because you didn't make the arrays large enough. Run in Valgrind if you are having trouble. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredva at ifi.uio.no Tue Nov 29 02:38:44 2011 From: fredva at ifi.uio.no (Fredrik Heffer Valdmanis) Date: Tue, 29 Nov 2011 09:38:44 +0100 Subject: [petsc-users] Questions about setting values for GPU based matrices In-Reply-To: References: Message-ID: 2011/10/28 Matthew Knepley > On Fri, Oct 28, 2011 at 10:24 AM, Fredrik Heffer Valdmanis < > fredva at ifi.uio.no> wrote: > >> Hi, >> >> I am working on integrating the new GPU based vectors and matrices into >> FEniCS. Now, I'm looking at the possibility for getting some speedup during >> finite element assembly, specifically when inserting the local element >> matrix into the global element matrix. In that regard, I have a few >> questions I hope you can help me out with: >> >> - When calling MatSetValues with a MATSEQAIJCUSP matrix as parameter, >> what exactly is it that happens? As far as I can see, MatSetValues is not >> implemented for GPU based matrices, neither is the mat->ops->setvalues set >> to point at any function for this Mat type. >> > > Yes, MatSetValues always operates on the CPU side. It would not make sense > to do individual operations on the GPU. > > I have written batched of assembly for element matrices that are all the > same size: > > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValuesBatch.html > > >> - Is it such that matrices are assembled in their entirety on the CPU, >> and then copied over to the GPU (after calling MatAssemblyBegin)? Or are >> values copied over to the GPU each time you call MatSetValues? >> > > That function assembles the matrix on the GPU and then copies to the CPU. > The only time you do not want this copy is when > you are running in serial and never touch the matrix afterwards, so I left > it in. > > >> - Can we expect to see any speedup from using MatSetValuesBatch over >> MatSetValues, or is the batch version simply a utility function? This >> question goes for both CPU- and GPU-based matrices. >> > > CPU: no > > GPU: yes, I see about the memory bandwidth ratio > > > Hi, I have now integrated MatSetValuesBatch in our existing PETSc wrapper layer. I have tested matrix assembly with Poisson's equation on different meshes with elements of varying order. I have timed the single call to MatSetValuesBatch and compared that to the total time consumed by the repeated calls to MatSetValues in the old implementation. I have the following results: Poisson on 1000x1000 unit square, 1st order Lagrange elements: MatSetValuesBatch: 0.88576 s repeated calls to MatSetValues: 0.76654 s Poisson on 500x500 unit square, 2nd order Lagrange elements: MatSetValuesBatch: 0.9324 s repeated calls to MatSetValues: 0.81644 s Poisson on 300x300 unit square, 3rd order Lagrange elements: MatSetValuesBatch: 0.93988 s repeated calls to MatSetValues: 1.03884 s As you can see, the two methods take almost the same amount of time. What behavior and performance should we expect? Is there any way to optimize the performance of batched assembly? 
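For reference, the batched path measured above hands every element matrix to PETSc in a single call; below is a minimal sketch for a chain of Ne one-dimensional linear elements (two rows per element), with the element data built in place. The mesh, the element stiffness values, and the function name are illustrative; the call signature follows the MatSetValuesBatch manual page cited earlier in the thread.

/* Sketch only: batched insertion of Ne identical-size element matrices.
 * elemRows holds Ne*Nl global row indices, elemMats holds Ne*Nl*Nl values
 * (row-major per element). */
#include <petscmat.h>

PetscErrorCode assemble_batched(Mat J, PetscInt Ne)
{
  const PetscInt Nl = 2;                 /* rows per element (1D P1 chain) */
  PetscInt       *elemRows, e;
  PetscScalar    *elemMats;
  PetscErrorCode ierr;

  ierr = PetscMalloc(Ne*Nl*sizeof(PetscInt), &elemRows); CHKERRQ(ierr);
  ierr = PetscMalloc(Ne*Nl*Nl*sizeof(PetscScalar), &elemMats); CHKERRQ(ierr);
  for (e = 0; e < Ne; e++) {
    elemRows[2*e]   = e;                 /* element e couples rows e and e+1 */
    elemRows[2*e+1] = e + 1;
    elemMats[4*e+0] =  1.0; elemMats[4*e+1] = -1.0;   /* 2x2 stiffness of -u'' */
    elemMats[4*e+2] = -1.0; elemMats[4*e+3] =  1.0;
  }
  ierr = MatSetValuesBatch(J, Ne, Nl, elemRows, elemMats); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = PetscFree(elemRows); CHKERRQ(ierr);
  ierr = PetscFree(elemMats); CHKERRQ(ierr);
  return 0;
}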
I also have a problem with Thrust throwing std::bad_alloc on some calls to MatSetValuesBatch. The exception originates in thrust::device_ptr thrust::detail::device::cuda::malloc<0u>(unsigned long). It seems to be thrown when the number of double values I send to MatSetValuesBatch approaches 30 million. I am testing this on a laptop with 4 GB RAM and a GeForce 540 M (1 GB memory), so the 30 million doubles are far away from exhausting my memory, both on the host and device side. Any clues on what causes this problem and how to avoid it? Thanks, Fredrik -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Tue Nov 29 03:27:30 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Tue, 29 Nov 2011 10:27:30 +0100 Subject: [petsc-users] configure with --with-blacs-dir Message-ID: <4ED4A582.3030808@gfz-potsdam.de> Hi PETSc team and others, I am trying to configure PETSc with the following line: ./configure --with-petsc-arch=openmpi-intel-complex-debug-f-mumps --with-fortran-interfaces=1 --download-mumps --download-parmetis --with-scalapack-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t --with-blacs-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t --with-mpi-dir=/opt/mpi/intel/openmpi-1.4.2 --with-scalar-type=complex --with-blas-lapack-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t --with-precision=double And it tells me: =============================================================================== TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:133) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --with-blacs-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t did not work ******************************************************************************* And I see in configure.log (full version is attached): sh: Possible ERROR while running linker: ld: cannot find -lblacs output: ret = 256 error message = {ld: cannot find -lblacs } Even though in specified directory there are: libmkl_blacs_ilp64.a libmkl_blacs_intelmpi_ilp64.a libmkl_blacs_intelmpi_ilp64.so libmkl_blacs_intelmpi_lp64.a libmkl_blacs_intelmpi_lp64.so libmkl_blacs_lp64.a libmkl_blacs_openmpi_ilp64.a libmkl_blacs_openmpi_lp64.a libmkl_blacs_sgimpt_ilp64.a libmkl_blacs_sgimpt_lp64.a So what else do I need? Thanks in advance. Regards, Alexander -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.zip Type: application/x-zip-compressed Size: 300478 bytes Desc: not available URL: From agrayver at gfz-potsdam.de Tue Nov 29 04:24:31 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Tue, 29 Nov 2011 11:24:31 +0100 Subject: [petsc-users] configure with --with-blacs-dir In-Reply-To: <4ED4A582.3030808@gfz-potsdam.de> References: <4ED4A582.3030808@gfz-potsdam.de> Message-ID: <4ED4B2DF.20209@gfz-potsdam.de> Got it. I had to use with-blacs-lib and with-blacs-include to explicitly specify lib. Sorry for bothering you. 
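Spelled out, the fix described just above points configure at one specific MKL BLACS library instead of a directory; it would look something like the line below. The remaining original options are elided, the include path is an assumption, and the choice of the openmpi/lp64 variant is a guess based on the Open MPI build with default 32-bit integers.

./configure ... \
  --with-blacs-include=/opt/intel/Compiler/11.1/072/mkl/include \
  --with-blacs-lib=/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_blacs_openmpi_lp64.a \
  --with-scalapack-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t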
Regards, Alexander On 29.11.2011 10:27, Alexander Grayver wrote: > Hi PETSc team and others, > > I am trying to configure PETSc with the following line: > > ./configure --with-petsc-arch=openmpi-intel-complex-debug-f-mumps > --with-fortran-interfaces=1 --download-mumps --download-parmetis > --with-scalapack-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t > --with-blacs-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t > --with-mpi-dir=/opt/mpi/intel/openmpi-1.4.2 --with-scalar-type=complex > --with-blas-lapack-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t > --with-precision=double > > And it tells me: > > =============================================================================== > > TESTING: check from > config.libraries(config/BuildSystem/config/libraries.py:133) > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > ------------------------------------------------------------------------------- > > --with-blacs-dir=/opt/intel/Compiler/11.1/072/mkl/lib/em64t did not work > ******************************************************************************* > > > And I see in configure.log (full version is attached): > sh: > Possible ERROR while running linker: ld: cannot find -lblacs > output: ret = 256 > error message = {ld: cannot find -lblacs > } > > Even though in specified directory there are: > > libmkl_blacs_ilp64.a > libmkl_blacs_intelmpi_ilp64.a > libmkl_blacs_intelmpi_ilp64.so > libmkl_blacs_intelmpi_lp64.a > libmkl_blacs_intelmpi_lp64.so > libmkl_blacs_lp64.a > libmkl_blacs_openmpi_ilp64.a > libmkl_blacs_openmpi_lp64.a > libmkl_blacs_sgimpt_ilp64.a > libmkl_blacs_sgimpt_lp64.a > > So what else do I need? > > Thanks in advance. > > Regards, > Alexander > From isaia.nisoli at gmail.com Tue Nov 29 05:48:02 2011 From: isaia.nisoli at gmail.com (Nisoli Isaia) Date: Tue, 29 Nov 2011 09:48:02 -0200 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays In-Reply-To: References: Message-ID: So, if I want to add another row, what I should do is that: rowindices[size]=rowindices[size-1]; rowindices[size-1]=size+1; is this last line right? When a CSR matrix has a line 'i' which contains only 0s, how should I set rowindices[i]? Is there any place where the number of columns is taken into account in *PetscInt* rowindices;**PetscInt* colindices;**PetscScalar* values;* reading the documentation about CSR, it does not look like that. Thank you again Isaia On Mon, Nov 28, 2011 at 11:30 PM, Nisoli Isaia wrote: > >>* int convertPETSc(srmatrix &sA, Mat * const Aaddr)*>>* {*>>* PetscErrorCode ierr;*>>* PetscInt size=(sA.A).size1();*>>* unsigned int totnnz=(sA.A).nnz();*>>**>>* (sA.A).complete_index1_data();*>>* long unsigned int *row_ptr =(sA.A).index1_data().begin();*>>* long unsigned int *col_ptr =(sA.A).index2_data().begin();*>>* double * value_ptr = (sA.A).value_data().begin();*>>* unsigned int sizerow_ptr=(sA.A).index1_data().size();*>>* unsigned int sizecol_ptr=(sA.A).index2_data().size();*>>* PetscInt* rowindices;*>>* PetscInt* colindices;*>>* PetscScalar* values;*>>* rowindices=new PetscInt[sizerow_ptr];*>>** > > >This array is probably one too short. It needs to have length nrows+1 (so > >that the length of the last row is known). > I tried > *rowindices=new PetscInt[sizerow_ptr+1]; > *allocating some more memory is not a problem... > but I get the same error. 
> > > >>* colindices=new PetscInt[sizecol_ptr];*>>* values=new PetscScalar[totnnz];*>>* for (unsigned int i=0;i>* rowindices[i]=PetscInt(row_ptr[i]);*>>* for (unsigned int i=0;i>* colindices[i]=PetscInt(col_ptr[i]);*>>* for (unsigned int i=0;i>* values[i]=PetscScalar(value_ptr[i]);*>>**>>* ierr=MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,size+1,size+1,rowindices,colindices,values,Aaddr);CHKERRQ(ierr);*>>** > > >You haven't explained your API, but you have a lot of "size" things running > >around. You should probably pass in "size" here instead of "size+1". > I'm sorry, I didn't say that in the first mail, but I want the function to add one last row and one last column of zeros to the matrix. > So that the output matrix is (size+1)*(size+1) if the input matrix is size*size. > > >* ierr=MatAssemblyBegin(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);*>* ierr=MatAssemblyEnd(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);*>* return 0;*>* }*> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaia.nisoli at gmail.com Tue Nov 29 06:12:25 2011 From: isaia.nisoli at gmail.com (Nisoli Isaia) Date: Tue, 29 Nov 2011 10:12:25 -0200 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays In-Reply-To: References: Message-ID: Sorry, I mixed up things. It is probably simply rowindices[size]=rowindices[size-1]; Is it? Isaia On Tue, Nov 29, 2011 at 9:48 AM, Nisoli Isaia wrote: > So, if I want to add another row, what I should do is that: > > rowindices[size]=rowindices[size-1]; > > rowindices[size-1]=size+1; > is this last line right? > When a CSR matrix has a line 'i' which contains only 0s, how should I set > rowindices[i]? > > Is there any place where the number of columns is taken into account in > > *PetscInt* rowindices;**PetscInt* colindices;**PetscScalar* values;* > > reading the documentation about CSR, it does not look like that. > > Thank you again > Isaia > > > > On Mon, Nov 28, 2011 at 11:30 PM, Nisoli Isaia wrote: > >> >>* int convertPETSc(srmatrix &sA, Mat * const Aaddr)*>>* {*>>* PetscErrorCode ierr;*>>* PetscInt size=(sA.A).size1();*>>* unsigned int totnnz=(sA.A).nnz();*>>**>>* (sA.A).complete_index1_data();*>>* long unsigned int *row_ptr =(sA.A).index1_data().begin();*>>* long unsigned int *col_ptr =(sA.A).index2_data().begin();*>>* double * value_ptr = (sA.A).value_data().begin();*>>* unsigned int sizerow_ptr=(sA.A).index1_data().size();*>>* unsigned int sizecol_ptr=(sA.A).index2_data().size();*>>* PetscInt* rowindices;*>>* PetscInt* colindices;*>>* PetscScalar* values;*>>* rowindices=new PetscInt[sizerow_ptr];*>>** >> >> >This array is probably one too short. It needs to have length nrows+1 (so >> >that the length of the last row is known). >> I tried >> *rowindices=new PetscInt[sizerow_ptr+1]; >> *allocating some more memory is not a problem... >> but I get the same error. >> >> >> >>* colindices=new PetscInt[sizecol_ptr];*>>* values=new PetscScalar[totnnz];*>>* for (unsigned int i=0;i>* rowindices[i]=PetscInt(row_ptr[i]);*>>* for (unsigned int i=0;i>* colindices[i]=PetscInt(col_ptr[i]);*>>* for (unsigned int i=0;i>* values[i]=PetscScalar(value_ptr[i]);*>>**>>* ierr=MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,size+1,size+1,rowindices,colindices,values,Aaddr);CHKERRQ(ierr);*>>** >> >> >You haven't explained your API, but you have a lot of "size" things running >> >around. You should probably pass in "size" here instead of "size+1". 
>> I'm sorry, I didn't say that in the first mail, but I want the function to add one last row and one last column of zeros to the matrix. >> So that the output matrix is (size+1)*(size+1) if the input matrix is size*size. >> >> >* ierr=MatAssemblyBegin(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);*>* ierr=MatAssemblyEnd(*Aaddr,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);*>* return 0;*>* }*> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Nov 29 06:47:19 2011 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 29 Nov 2011 06:47:19 -0600 Subject: [petsc-users] problems with MatCreateSeqAIJWithArrays In-Reply-To: References: Message-ID: On Tue, Nov 29, 2011 at 06:12, Nisoli Isaia wrote: > Sorry, I mixed up things. > It is probably simply > > rowindices[size]=rowindices[size-1]; > That gives you an empty row. Some algorithms require diagonal entries, so it's sometimes better to preallocate them and put an explicit zero there than to skip them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Nov 29 08:09:35 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Nov 2011 08:09:35 -0600 Subject: [petsc-users] Questions about setting values for GPU based matrices In-Reply-To: References: Message-ID: On Tue, Nov 29, 2011 at 2:38 AM, Fredrik Heffer Valdmanis wrote: > 2011/10/28 Matthew Knepley > >> On Fri, Oct 28, 2011 at 10:24 AM, Fredrik Heffer Valdmanis < >> fredva at ifi.uio.no> wrote: >> >>> Hi, >>> >>> I am working on integrating the new GPU based vectors and matrices into >>> FEniCS. Now, I'm looking at the possibility for getting some speedup during >>> finite element assembly, specifically when inserting the local element >>> matrix into the global element matrix. In that regard, I have a few >>> questions I hope you can help me out with: >>> >>> - When calling MatSetValues with a MATSEQAIJCUSP matrix as parameter, >>> what exactly is it that happens? As far as I can see, MatSetValues is not >>> implemented for GPU based matrices, neither is the mat->ops->setvalues set >>> to point at any function for this Mat type. >>> >> >> Yes, MatSetValues always operates on the CPU side. It would not make >> sense to do individual operations on the GPU. >> >> I have written batched of assembly for element matrices that are all the >> same size: >> >> >> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValuesBatch.html >> >> >>> - Is it such that matrices are assembled in their entirety on the CPU, >>> and then copied over to the GPU (after calling MatAssemblyBegin)? Or are >>> values copied over to the GPU each time you call MatSetValues? >>> >> >> That function assembles the matrix on the GPU and then copies to the CPU. >> The only time you do not want this copy is when >> you are running in serial and never touch the matrix afterwards, so I >> left it in. >> >> >>> - Can we expect to see any speedup from using MatSetValuesBatch over >>> MatSetValues, or is the batch version simply a utility function? This >>> question goes for both CPU- and GPU-based matrices. >>> >> >> CPU: no >> >> GPU: yes, I see about the memory bandwidth ratio >> >> >> Hi, > > I have now integrated MatSetValuesBatch in our existing PETSc wrapper > layer. I have tested matrix assembly with Poisson's equation on different > meshes with elements of varying order. 
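Picking up the MatCreateSeqAIJWithArrays exchange above, a minimal sketch of padding a size x size CSR matrix to (size+1) x (size+1) with an explicit zero on the new diagonal entry, as suggested; n, nnz, row_ptr, col_ptr and value_ptr stand in for the uBLAS accessors in the original post, and the arrays must not be deleted while the Mat is alive because MatCreateSeqAIJWithArrays does not copy them:

    PetscInt    *rowindices = new PetscInt[n+2];     /* n+1 rows need n+2 row pointers */
    PetscInt    *colindices = new PetscInt[nnz+1];   /* one extra slot for the explicit zero */
    PetscScalar *values     = new PetscScalar[nnz+1];

    for (PetscInt i = 0; i <= n; ++i) rowindices[i] = PetscInt(row_ptr[i]);
    for (PetscInt k = 0; k < nnz; ++k) {
      colindices[k] = PetscInt(col_ptr[k]);
      values[k]     = PetscScalar(value_ptr[k]);
    }

    colindices[nnz] = n;        /* entry (n,n): the new last row/column */
    values[nnz]     = 0.0;      /* explicit zero keeps the diagonal structurally present */
    rowindices[n+1] = nnz + 1;  /* the new last row holds exactly one entry */

    ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD, n+1, n+1,
                                     rowindices, colindices, values, &A);CHKERRQ(ierr);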
I have timed the single call to > MatSetValuesBatch and compared that to the total time consumed by the > repeated calls to MatSetValues in the old implementation. I have the > following results: > > Poisson on 1000x1000 unit square, 1st order Lagrange elements: > MatSetValuesBatch: 0.88576 s > repeated calls to MatSetValues: 0.76654 s > > Poisson on 500x500 unit square, 2nd order Lagrange elements: > MatSetValuesBatch: 0.9324 s > repeated calls to MatSetValues: 0.81644 s > > Poisson on 300x300 unit square, 3rd order Lagrange elements: > MatSetValuesBatch: 0.93988 s > repeated calls to MatSetValues: 1.03884 s > > As you can see, the two methods take almost the same amount of time. > What behavior and performance should we expect? Is there any way to > optimize the performance of batched assembly? > Almost certainly it is not dispatching to the CUDA version. The regular version just calls MatSetValues() in a loop. Are you using a SEQAIJCUSP matrix? > I also have a problem with Thrust throwing std::bad_alloc on some calls to > MatSetValuesBatch. The exception originates in thrust::device_ptr > thrust::detail::device::cuda::malloc<0u>(unsigned long). It seems to be > thrown when the number of double values I send to MatSetValuesBatch > approaches 30 million. I am testing this on a laptop with 4 GB RAM and a > GeForce 540 M (1 GB memory), so the 30 million doubles are far away from > exhausting my memory, both on the host and device side. Any clues on what > causes this problem and how to avoid it? > It uses more memory that just the values. I would have to look at the specific case, but I assume that the memory is exhausted. Matt > Thanks, > > Fredrik > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Tue Nov 29 08:44:50 2011 From: gdiso at ustc.edu (Gong Ding) Date: Tue, 29 Nov 2011 22:44:50 +0800 (CST) Subject: [petsc-users] Strange crash on VecScatter Message-ID: <29820981.347771322577890726.JavaMail.coremail@mail.ustc.edu> Hi, I meet some very strange problem on AIX6.1 with IBM POE. When I start the simulation with 3 or more processors, the following error will occure. Fatal Error: at line 71 in /gpfs1/cogenda/cogenda/packages/petsc/include/../src/vec/vec/utils/vpscat.h One or two processors are ok. However, when I start my code with -vecscatter_window to avoid the MPI_Startall_irecv call of line 71, everything is ok. 
Some information: The petsc is configured with --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=2 --known-memcmp-ok=1 --known-endian=big --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --download-f-blas-lapack=1 --download-mumps=1 --download-blacs=1 --download-parmetis=1 --download-scalapack=1 --download-superlu=1 --with-debugging=0 --with-clanguage=cxx --with-cc=\"mpcc_r -q64\" --with-fc=\"mpxlf_r -q64\" --with-cxx=\"mpCC_r -q64\" --with-batch=1 --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-x=0 --with-pic=1 And I add #ifndef PETSC_HAVE_MPI_MISSING_TYPESIZE #define PETSC_HAVE_MPI_MISSING_TYPESIZE #endif to petscconf.h for avoiding some mpi compile problem. Does anyone meet this problem? If I have to use this workaround, how to I add -vecscatter_window in the code instead in command line? Gong Ding From knepley at gmail.com Tue Nov 29 08:56:44 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Nov 2011 08:56:44 -0600 Subject: [petsc-users] Strange crash on VecScatter In-Reply-To: <29820981.347771322577890726.JavaMail.coremail@mail.ustc.edu> References: <29820981.347771322577890726.JavaMail.coremail@mail.ustc.edu> Message-ID: 2011/11/29 Gong Ding > Hi, > I meet some very strange problem on AIX6.1 with IBM POE. > > When I start the simulation with 3 or more processors, the following error > will occure. > Fatal Error: at line 71 in > /gpfs1/cogenda/cogenda/packages/petsc/include/../src/vec/vec/utils/vpscat.h > One or two processors are ok. > > However, when I start my code with -vecscatter_window to avoid the > MPI_Startall_irecv call of line 71, everything is ok. > > Some information: > The petsc is configured with > --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 > --known-level1-dcache-assoc=2 --known-memcmp-ok=1 --known-endian=big > --known-sizeof-char=1 > --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 > --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 > --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > --known-mpi-long-double=1 --download-f-blas-lapack=1 > --download-mumps=1 --download-blacs=1 --download-parmetis=1 > --download-scalapack=1 --download-superlu=1 --with-debugging=0 > --with-clanguage=cxx > --with-cc=\"mpcc_r -q64\" --with-fc=\"mpxlf_r -q64\" --with-cxx=\"mpCC_r > -q64\" --with-batch=1 --with-shared-libraries=1 > --known-mpi-shared-libraries=1 > --with-x=0 --with-pic=1 > > And I add > #ifndef PETSC_HAVE_MPI_MISSING_TYPESIZE > #define PETSC_HAVE_MPI_MISSING_TYPESIZE > #endif > to petscconf.h for avoiding some mpi compile problem. > > Does anyone meet this problem? > > If I have to use this workaround, how to I add -vecscatter_window in the > code instead in command line? PetscOptionSetValue('-vecscatter_window', '1') Matt > > Gong Ding > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
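One note on the call above: in the C interface the routine is spelled PetscOptionsSetValue, and it has to run after PetscInitialize() but before the VecScatter is created for the option to be picked up, roughly:

    ierr = PetscOptionsSetValue("-vecscatter_window", "1");CHKERRQ(ierr);

From Fortran the equivalent is call PetscOptionsSetValue('-vecscatter_window', '1', ierr), with the trailing error argument that the Fortran bindings add.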
URL: From fredva at ifi.uio.no Tue Nov 29 10:37:59 2011 From: fredva at ifi.uio.no (Fredrik Heffer Valdmanis) Date: Tue, 29 Nov 2011 17:37:59 +0100 Subject: [petsc-users] Questions about setting values for GPU based matrices In-Reply-To: References: Message-ID: 2011/11/29 Matthew Knepley > On Tue, Nov 29, 2011 at 2:38 AM, Fredrik Heffer Valdmanis < > fredva at ifi.uio.no> wrote: > >> 2011/10/28 Matthew Knepley >> >>> On Fri, Oct 28, 2011 at 10:24 AM, Fredrik Heffer Valdmanis < >>> fredva at ifi.uio.no> wrote: >>> >>>> Hi, >>>> >>>> I am working on integrating the new GPU based vectors and matrices into >>>> FEniCS. Now, I'm looking at the possibility for getting some speedup during >>>> finite element assembly, specifically when inserting the local element >>>> matrix into the global element matrix. In that regard, I have a few >>>> questions I hope you can help me out with: >>>> >>>> - When calling MatSetValues with a MATSEQAIJCUSP matrix as parameter, >>>> what exactly is it that happens? As far as I can see, MatSetValues is not >>>> implemented for GPU based matrices, neither is the mat->ops->setvalues set >>>> to point at any function for this Mat type. >>>> >>> >>> Yes, MatSetValues always operates on the CPU side. It would not make >>> sense to do individual operations on the GPU. >>> >>> I have written batched of assembly for element matrices that are all the >>> same size: >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValuesBatch.html >>> >>> >>>> - Is it such that matrices are assembled in their entirety on the CPU, >>>> and then copied over to the GPU (after calling MatAssemblyBegin)? Or are >>>> values copied over to the GPU each time you call MatSetValues? >>>> >>> >>> That function assembles the matrix on the GPU and then copies to the >>> CPU. The only time you do not want this copy is when >>> you are running in serial and never touch the matrix afterwards, so I >>> left it in. >>> >>> >>>> - Can we expect to see any speedup from using MatSetValuesBatch over >>>> MatSetValues, or is the batch version simply a utility function? This >>>> question goes for both CPU- and GPU-based matrices. >>>> >>> >>> CPU: no >>> >>> GPU: yes, I see about the memory bandwidth ratio >>> >>> >>> Hi, >> >> I have now integrated MatSetValuesBatch in our existing PETSc wrapper >> layer. I have tested matrix assembly with Poisson's equation on different >> meshes with elements of varying order. I have timed the single call to >> MatSetValuesBatch and compared that to the total time consumed by the >> repeated calls to MatSetValues in the old implementation. I have the >> following results: >> >> Poisson on 1000x1000 unit square, 1st order Lagrange elements: >> MatSetValuesBatch: 0.88576 s >> repeated calls to MatSetValues: 0.76654 s >> >> Poisson on 500x500 unit square, 2nd order Lagrange elements: >> MatSetValuesBatch: 0.9324 s >> repeated calls to MatSetValues: 0.81644 s >> >> Poisson on 300x300 unit square, 3rd order Lagrange elements: >> MatSetValuesBatch: 0.93988 s >> repeated calls to MatSetValues: 1.03884 s >> >> As you can see, the two methods take almost the same amount of time. >> What behavior and performance should we expect? Is there any way to >> optimize the performance of batched assembly? >> > > Almost certainly it is not dispatching to the CUDA version. The regular > version just calls MatSetValues() in a loop. Are you > using a SEQAIJCUSP matrix? > Yes. 
The same matrices yields a speedup of 4-6x when solving the system on the GPU. > > >> I also have a problem with Thrust throwing std::bad_alloc on some calls >> to MatSetValuesBatch. The exception originates in thrust::device_ptr >> thrust::detail::device::cuda::malloc<0u>(unsigned long). It seems to be >> thrown when the number of double values I send to MatSetValuesBatch >> approaches 30 million. I am testing this on a laptop with 4 GB RAM and a >> GeForce 540 M (1 GB memory), so the 30 million doubles are far away from >> exhausting my memory, both on the host and device side. Any clues on what >> causes this problem and how to avoid it? >> > > It uses more memory that just the values. I would have to look at the > specific case, but > I assume that the memory is exhausted. > OK, I can look further into it myself as well. Thanks, Fredrik -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Nov 29 10:57:05 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Nov 2011 10:57:05 -0600 Subject: [petsc-users] Questions about setting values for GPU based matrices In-Reply-To: References: Message-ID: On Tue, Nov 29, 2011 at 10:37 AM, Fredrik Heffer Valdmanis < fredva at ifi.uio.no> wrote: > 2011/11/29 Matthew Knepley > >> On Tue, Nov 29, 2011 at 2:38 AM, Fredrik Heffer Valdmanis < >> fredva at ifi.uio.no> wrote: >> >>> 2011/10/28 Matthew Knepley >>> >>>> On Fri, Oct 28, 2011 at 10:24 AM, Fredrik Heffer Valdmanis < >>>> fredva at ifi.uio.no> wrote: >>>> >>>>> Hi, >>>>> >>>>> I am working on integrating the new GPU based vectors and matrices >>>>> into FEniCS. Now, I'm looking at the possibility for getting some speedup >>>>> during finite element assembly, specifically when inserting the local >>>>> element matrix into the global element matrix. In that regard, I have a few >>>>> questions I hope you can help me out with: >>>>> >>>>> - When calling MatSetValues with a MATSEQAIJCUSP matrix as parameter, >>>>> what exactly is it that happens? As far as I can see, MatSetValues is not >>>>> implemented for GPU based matrices, neither is the mat->ops->setvalues set >>>>> to point at any function for this Mat type. >>>>> >>>> >>>> Yes, MatSetValues always operates on the CPU side. It would not make >>>> sense to do individual operations on the GPU. >>>> >>>> I have written batched of assembly for element matrices that are all >>>> the same size: >>>> >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValuesBatch.html >>>> >>>> >>>>> - Is it such that matrices are assembled in their entirety on the CPU, >>>>> and then copied over to the GPU (after calling MatAssemblyBegin)? Or are >>>>> values copied over to the GPU each time you call MatSetValues? >>>>> >>>> >>>> That function assembles the matrix on the GPU and then copies to the >>>> CPU. The only time you do not want this copy is when >>>> you are running in serial and never touch the matrix afterwards, so I >>>> left it in. >>>> >>>> >>>>> - Can we expect to see any speedup from using MatSetValuesBatch over >>>>> MatSetValues, or is the batch version simply a utility function? This >>>>> question goes for both CPU- and GPU-based matrices. >>>>> >>>> >>>> CPU: no >>>> >>>> GPU: yes, I see about the memory bandwidth ratio >>>> >>>> >>>> Hi, >>> >>> I have now integrated MatSetValuesBatch in our existing PETSc wrapper >>> layer. I have tested matrix assembly with Poisson's equation on different >>> meshes with elements of varying order. 
I have timed the single call to >>> MatSetValuesBatch and compared that to the total time consumed by the >>> repeated calls to MatSetValues in the old implementation. I have the >>> following results: >>> >>> Poisson on 1000x1000 unit square, 1st order Lagrange elements: >>> MatSetValuesBatch: 0.88576 s >>> repeated calls to MatSetValues: 0.76654 s >>> >>> Poisson on 500x500 unit square, 2nd order Lagrange elements: >>> MatSetValuesBatch: 0.9324 s >>> repeated calls to MatSetValues: 0.81644 s >>> >>> Poisson on 300x300 unit square, 3rd order Lagrange elements: >>> MatSetValuesBatch: 0.93988 s >>> repeated calls to MatSetValues: 1.03884 s >>> >>> As you can see, the two methods take almost the same amount of time. >>> What behavior and performance should we expect? Is there any way to >>> optimize the performance of batched assembly? >>> >> >> Almost certainly it is not dispatching to the CUDA version. The regular >> version just calls MatSetValues() in a loop. Are you >> using a SEQAIJCUSP matrix? >> > Yes. The same matrices yields a speedup of 4-6x when solving the system on > the GPU. > Please confirm that the correct routine by running wth -info and sending the output. Please send the output of -log_summary so I can confirm the results. You can run KSP ex4 and reproduce my results where I see a 5.5x speedup on the GTX285 Matt > >> >>> I also have a problem with Thrust throwing std::bad_alloc on some >>> calls to MatSetValuesBatch. The exception originates in >>> thrust::device_ptr thrust::detail::device::cuda::malloc<0u>(unsigned >>> long). It seems to be thrown when the number of double values I send to >>> MatSetValuesBatch approaches 30 million. I am testing this on a laptop with >>> 4 GB RAM and a GeForce 540 M (1 GB memory), so the 30 million doubles are >>> far away from exhausting my memory, both on the host and device side. Any >>> clues on what causes this problem and how to avoid it? >>> >> >> It uses more memory that just the values. I would have to look at the >> specific case, but >> I assume that the memory is exhausted. >> > OK, I can look further into it myself as well. Thanks, > > Fredrik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirzadeh at gmail.com Tue Nov 29 13:47:26 2011 From: mirzadeh at gmail.com (Mohammad Mirzadeh) Date: Tue, 29 Nov 2011 11:47:26 -0800 Subject: [petsc-users] Simple question Message-ID: Hi guys, This is rather a simple question. For objects that have both sequential and parallel versions (like Vec, Mat, etc), is there any benefit in directly calling to the sequential version instead of calling to the generic version (like VecCreateSeq instead of VecCreate) and running the code with 1 proc? I've always thought that PETSc would directly call the appropriate function at run time. Is this not the case? I'm writing some wrappers for my code and i'm thinking if I need to consider different classes for seq and parallel or if I could get away by just working with the generic functions. Thanks a lot, Mohammad -------------- next part -------------- An HTML attachment was scrubbed... 
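For readers trying to reproduce the comparison: the batch insertion only runs on the GPU when the matrix really is a CUSP type, so a quick sanity check is to force the types on the command line and look for the CUSP routines in the -info and -log_summary output. A sketch (option names as I understand them for petsc-3.2, with ./app standing in for the actual binary):

    ./app -mat_type seqaijcusp -vec_type seqcusp -info -log_summary

or, in code, something like

    ierr = MatSetType(A, MATSEQAIJCUSP);CHKERRQ(ierr);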
URL: From knepley at gmail.com Tue Nov 29 13:50:18 2011 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Nov 2011 13:50:18 -0600 Subject: [petsc-users] Simple question In-Reply-To: References: Message-ID: On Tue, Nov 29, 2011 at 1:47 PM, Mohammad Mirzadeh wrote: > Hi guys, > > This is rather a simple question. For objects that have both sequential > and parallel versions (like Vec, Mat, etc), is there any benefit in > directly calling to the sequential version instead of calling to the > generic version (like VecCreateSeq instead of VecCreate) and running the > code with 1 proc? I've always thought that PETSc would directly call the > appropriate function at run time. Is this not the case? > Yes, this is the case. There is no benefit. Matt I'm writing some wrappers for my code and i'm thinking if I need to > consider different classes for seq and parallel or if I could get away by > just working with the generic functions. > > Thanks a lot, > Mohammad > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirzadeh at gmail.com Tue Nov 29 13:52:28 2011 From: mirzadeh at gmail.com (Mohammad Mirzadeh) Date: Tue, 29 Nov 2011 11:52:28 -0800 Subject: [petsc-users] Simple question In-Reply-To: References: Message-ID: Alright. Thanks Matt :) On Tue, Nov 29, 2011 at 11:50 AM, Matthew Knepley wrote: > On Tue, Nov 29, 2011 at 1:47 PM, Mohammad Mirzadeh wrote: > >> Hi guys, >> >> This is rather a simple question. For objects that have both sequential >> and parallel versions (like Vec, Mat, etc), is there any benefit in >> directly calling to the sequential version instead of calling to the >> generic version (like VecCreateSeq instead of VecCreate) and running the >> code with 1 proc? I've always thought that PETSc would directly call the >> appropriate function at run time. Is this not the case? >> > > Yes, this is the case. There is no benefit. > > Matt > > I'm writing some wrappers for my code and i'm thinking if I need to >> consider different classes for seq and parallel or if I could get away by >> just working with the generic functions. >> >> Thanks a lot, >> Mohammad >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Nov 29 16:44:45 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 29 Nov 2011 23:44:45 +0100 Subject: [petsc-users] ex45.c and ex45f.F Message-ID: <4ED5605D.5090508@gmail.com> Hi, I am learning to solve the 3D Laplacian using multigrid, or more importantly, to use DA with Fortran. However, it seems that ex45f.F is not the equivalent of ex45.c. It's in 2D; seems to be solving the Laplace equation in 3D. Is there an equivalent of ex45.c? Thanks -- Yours sincerely, TAY wee-beng From agrayver at gfz-potsdam.de Wed Nov 30 03:43:41 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 30 Nov 2011 10:43:41 +0100 Subject: [petsc-users] Symmetric matrix filling Message-ID: <4ED5FACD.3040309@gfz-potsdam.de> Hello, I'm trying to use mumps through PETSc now with symmetric matrix and cholesky factorization. When I use it directly I fill up only upper part of the matrix and set mumid%SYM = 2. 
Is that possible to follow same way with petsc? Which options do I have to choose? Thanks in advance. Regards, Alexander From knepley at gmail.com Wed Nov 30 07:16:54 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Nov 2011 07:16:54 -0600 Subject: [petsc-users] Symmetric matrix filling In-Reply-To: <4ED5FACD.3040309@gfz-potsdam.de> References: <4ED5FACD.3040309@gfz-potsdam.de> Message-ID: On Wed, Nov 30, 2011 at 3:43 AM, Alexander Grayver wrote: > Hello, > > I'm trying to use mumps through PETSc now with symmetric matrix and > cholesky factorization. > When I use it directly I fill up only upper part of the matrix and set > mumid%SYM = 2. Is that possible to follow same way with petsc? > Which options do I have to choose? > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MATSBAIJ.html Matt > Thanks in advance. > > Regards, > Alexander > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Nov 30 07:27:41 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 30 Nov 2011 14:27:41 +0100 Subject: [petsc-users] Symmetric matrix filling In-Reply-To: References: <4ED5FACD.3040309@gfz-potsdam.de> Message-ID: <4ED62F4D.40806@gfz-potsdam.de> Thanks Matt! Regards, Alexander On 30.11.2011 14:16, Matthew Knepley wrote: > On Wed, Nov 30, 2011 at 3:43 AM, Alexander Grayver > > wrote: > > Hello, > > I'm trying to use mumps through PETSc now with symmetric matrix > and cholesky factorization. > When I use it directly I fill up only upper part of the matrix and > set mumid%SYM = 2. Is that possible to follow same way with petsc? > Which options do I have to choose? > > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MATSBAIJ.html > > Matt > > Thanks in advance. > > Regards, > Alexander > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Nov 30 09:10:44 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 30 Nov 2011 16:10:44 +0100 Subject: [petsc-users] Symmetric matrix filling In-Reply-To: <4ED62F4D.40806@gfz-potsdam.de> References: <4ED5FACD.3040309@gfz-potsdam.de> <4ED62F4D.40806@gfz-potsdam.de> Message-ID: <4ED64774.3040005@gfz-potsdam.de> I'm not sure that MAT(S)BAIJ is what I need. My matrix symmetric, but not block. Well, I solve vector equations and there are 2-3 blocks (depending on the problem dimension), but this blocks normaly have size of > 10^5. So which block size should I specify to be efficient? Regards, Alexander On 30.11.2011 14:27, Alexander Grayver wrote: > Thanks Matt! > > Regards, > Alexander > > On 30.11.2011 14:16, Matthew Knepley wrote: >> On Wed, Nov 30, 2011 at 3:43 AM, Alexander Grayver >> > wrote: >> >> Hello, >> >> I'm trying to use mumps through PETSc now with symmetric matrix >> and cholesky factorization. >> When I use it directly I fill up only upper part of the matrix >> and set mumid%SYM = 2. Is that possible to follow same way with >> petsc? >> Which options do I have to choose? >> >> >> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MATSBAIJ.html >> >> Matt >> >> Thanks in advance. 
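A minimal sketch of the SBAIJ route pointed to here, for a symmetric matrix stored by its upper triangle and factored with MUMPS Cholesky; block size 1 since the matrix is not blocked, and n, nz_upper, d_nz, o_nz are placeholder sizes:

    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
    ierr = MatSetType(A, MATSBAIJ);CHKERRQ(ierr);
    ierr = MatSeqSBAIJSetPreallocation(A, 1, nz_upper, PETSC_NULL);CHKERRQ(ierr);
    ierr = MatMPISBAIJSetPreallocation(A, 1, d_nz, PETSC_NULL, o_nz, PETSC_NULL);CHKERRQ(ierr);
    /* insert only entries with column >= row, then assemble as usual */

and at run time something like -pc_type cholesky -pc_factor_mat_solver_package mumps selects MUMPS for the factorization.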
>> >> Regards, >> Alexander >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 30 09:22:00 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Nov 2011 09:22:00 -0600 Subject: [petsc-users] Symmetric matrix filling In-Reply-To: <4ED64774.3040005@gfz-potsdam.de> References: <4ED5FACD.3040309@gfz-potsdam.de> <4ED62F4D.40806@gfz-potsdam.de> <4ED64774.3040005@gfz-potsdam.de> Message-ID: On Wed, Nov 30, 2011 at 9:10 AM, Alexander Grayver wrote: > ** > I'm not sure that MAT(S)BAIJ is what I need. My matrix symmetric, but not > block. > Well, I solve vector equations and there are 2-3 blocks (depending on the > problem dimension), but this blocks normaly have size of > 10^5. So which > block size should I specify to be efficient? > Just use 1. Matt > Regards, > Alexander > > On 30.11.2011 14:27, Alexander Grayver wrote: > > Thanks Matt! > > Regards, > Alexander > > On 30.11.2011 14:16, Matthew Knepley wrote: > > On Wed, Nov 30, 2011 at 3:43 AM, Alexander Grayver < > agrayver at gfz-potsdam.de> wrote: > >> Hello, >> >> I'm trying to use mumps through PETSc now with symmetric matrix and >> cholesky factorization. >> When I use it directly I fill up only upper part of the matrix and set >> mumid%SYM = 2. Is that possible to follow same way with petsc? >> Which options do I have to choose? >> > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MATSBAIJ.html > > Matt > > >> Thanks in advance. >> >> Regards, >> Alexander >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Nov 30 09:43:48 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 30 Nov 2011 16:43:48 +0100 Subject: [petsc-users] MatMumpsSetIcntl from Fortran Message-ID: <4ED64F34.8020509@gfz-potsdam.de> Hi PETSc team, Has anybody tried to use MatMumpsSetIcntl from Fortran? 
Because when I try to call it I fall into infinite recursion in function: PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival) { PetscErrorCode ierr; PetscFunctionBegin; PetscValidLogicalCollectiveInt(F,icntl,2); PetscValidLogicalCollectiveInt(F,ival,3); ierr = PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr); PetscFunctionReturn(0); } At the moment when program crashes call stack looks like: __libc_memalign, FP=7fff342ca010 PetscMallocAlign, FP=7fff342ca080 PetscTrMallocDefault, FP=7fff342ca180 PetscStrallocpy, FP=7fff342ca230 PetscFListGetPathAndFunction, FP=7fff342cb2e0 PetscFListFind, FP=7fff342cb520 PetscObjectQueryFunction_Petsc, FP=7fff342cb590 PetscObjectQueryFunction, FP=7fff342cb620 MatMumpsSetIcntl, FP=7fff342cb720 MatMumpsSetIcntl, FP=7fff342cb820 MatMumpsSetIcntl, FP=7fff342cb920 MatMumpsSetIcntl, FP=7fff342cba20 MatMumpsSetIcntl, FP=7fff342cbb20 MatMumpsSetIcntl, FP=7fff342cbc20 MatMumpsSetIcntl, FP=7fff342cbd20 MatMumpsSetIcntl, FP=7fff342cbe20 MatMumpsSetIcntl, FP=7fff342cbf20 ... (Hundreds of MatMumpsSetIcntl) ... What can I do about that? Regards, Alexander From frtr at risoe.dtu.dk Wed Nov 30 09:59:59 2011 From: frtr at risoe.dtu.dk (Treue, Frederik) Date: Wed, 30 Nov 2011 16:59:59 +0100 Subject: [petsc-users] newbie question on the parallel allocation of matrices Message-ID: Hi everyone, Caveat: I have just started using petsc, so the answer to my question may very well be fairly trivial. I'm trying to run the following bits of code: DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED, DMDA_STENCIL_BOX,10,10,PETSC_DECIDE,PETSC_DECIDE,1,1,PETSC_NULL,PETSC_NULL,&da); [snip] MatCreate(PETSC_COMM_WORLD,&((*FD).ddx)); MatSetSizes((*FD).ddx,PETSC_DECIDE,PETSC_DECIDE,100,100); MatSetFromOptions((*FD).ddx); for (i=0;i<10;i++) { col[0]=i*10;col[1]=i*10+1; row[0]=i*10; val[0]=1;val[1]=1; MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES); for (j=1;j<10-1;j++) { col[0]=i*10+j-1;col[1]=i*10+j+1; row[0]=i*10+j; val[0]=-1;val[1]=1; MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES); } col[0]=i*10+10-2;col[1]=i*10+10-1; row[0]=i*10+10-1; val[0]=-1;val[1]=-1; MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES); } MatAssemblyBegin((*FD).ddx,MAT_FINAL_ASSEMBLY); MatAssemblyEnd((*FD).ddx,MAT_FINAL_ASSEMBLY); MatScale((*FD).ddx,1/(2*(1/9))); [snip] DMCreateGlobalVector(da,&tmpvec2); VecSet(tmpvec2,1.0); VecAssemblyBegin(tmpvec2); VecAssemblyEnd(tmpvec2); DMCreateGlobalVector(da,&tmpvec3); VecSet(tmpvec3,1.0); VecAssemblyBegin(tmpvec3); VecAssemblyEnd(tmpvec3); MatView((*FD).ddx,PETSC_VIEWER_STDOUT_WORLD); VecView(tmpvec2,PETSC_VIEWER_STDOUT_WORLD); MatMult((*FD).ddx,tmpvec2,tmpvec3); VecView(tmpvec3,PETSC_VIEWER_STDOUT_WORLD); int tid,first,last; MPI_Comm_rank(PETSC_COMM_WORLD, &tid); sleep(1); MatGetOwnershipRange((*FD).ddx,&first,&last); printf("rank: %d,first: %d,last: %d\n",tid,first,last); When running it on a single processor, everything works as expected, see attached file seqRes However when running with 4 processors (mpirun -np 4 ./progname) I get the output in mpiRes. Notice that there really is a difference, its not just a surprising division of points between the processes - I checked this with PETSC_VIEWER_DRAW_WORLD. How come? I notice that although in the end each process postulates that it has 25 rows, the result of matview is Matrix Object: 1 MPI processes type: mpiaij Is this OK? And if not, what am I doing wrong, presumably in the matrix allocation code? 
--- yours sincerily Frederik Treue -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: seqRes Type: application/octet-stream Size: 3325 bytes Desc: seqRes URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mpiRes Type: application/octet-stream Size: 3478 bytes Desc: mpiRes URL: From knepley at gmail.com Wed Nov 30 10:05:26 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Nov 2011 10:05:26 -0600 Subject: [petsc-users] newbie question on the parallel allocation of matrices In-Reply-To: References: Message-ID: On Wed, Nov 30, 2011 at 9:59 AM, Treue, Frederik wrote: > Hi everyone,**** > > ** ** > > Caveat: I have just started using petsc, so the answer to my question may > very well be fairly trivial. > See SNES ex5for the right way to interact with the DMDA. We will preallocate the matrix for you and allow you to set values using a stencil. Matt > > > I?m trying to run the following bits of code:**** > > ** ** > > DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, > DMDA_BOUNDARY_GHOSTED, > DMDA_STENCIL_BOX,10,10,PETSC_DECIDE,PETSC_DECIDE,1,1,PETSC_NULL,PETSC_NULL,&da); > **** > > [snip]**** > > MatCreate(PETSC_COMM_WORLD,&((*FD).ddx));**** > > MatSetSizes((*FD).ddx,PETSC_DECIDE,PETSC_DECIDE,100,100);**** > > MatSetFromOptions((*FD).ddx);**** > > ** ** > > for (i=0;i<10;i++) {**** > > col[0]=i*10;col[1]=i*10+1; row[0]=i*10;**** > > val[0]=1;val[1]=1;**** > > MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);**** > > for (j=1;j<10-1;j++) {**** > > col[0]=i*10+j-1;col[1]=i*10+j+1; row[0]=i*10+j;**** > > val[0]=-1;val[1]=1;**** > > MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);**** > > }**** > > col[0]=i*10+10-2;col[1]=i*10+10-1; row[0]=i*10+10-1;**** > > val[0]=-1;val[1]=-1;**** > > MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);**** > > }**** > > MatAssemblyBegin((*FD).ddx,MAT_FINAL_ASSEMBLY);**** > > MatAssemblyEnd((*FD).ddx,MAT_FINAL_ASSEMBLY);**** > > ** ** > > MatScale((*FD).ddx,1/(2*(1/9)));**** > > [snip]**** > > DMCreateGlobalVector(da,&tmpvec2);**** > > VecSet(tmpvec2,1.0);**** > > VecAssemblyBegin(tmpvec2);**** > > VecAssemblyEnd(tmpvec2);**** > > DMCreateGlobalVector(da,&tmpvec3);**** > > VecSet(tmpvec3,1.0);**** > > VecAssemblyBegin(tmpvec3);**** > > VecAssemblyEnd(tmpvec3);**** > > MatView((*FD).ddx,PETSC_VIEWER_STDOUT_WORLD);**** > > VecView(tmpvec2,PETSC_VIEWER_STDOUT_WORLD);**** > > MatMult((*FD).ddx,tmpvec2,tmpvec3);**** > > VecView(tmpvec3,PETSC_VIEWER_STDOUT_WORLD);**** > > int tid,first,last;**** > > MPI_Comm_rank(PETSC_COMM_WORLD, &tid);**** > > sleep(1);**** > > MatGetOwnershipRange((*FD).ddx,&first,&last);**** > > printf("rank: %d,first: %d,last: %d\n",tid,first,last);**** > > ** ** > > When running it on a single processor, everything works as expected, see > attached file seqRes**** > > However when running with 4 processors (mpirun ?np 4 ./progname) I get the > output in mpiRes. Notice that there really is a difference, its not just a > surprising division of points between the processes ? I checked this with > PETSC_VIEWER_DRAW_WORLD. How come? I notice that although in the end each > process postulates that it has 25 rows, the result of matview is**** > > Matrix Object: 1 MPI processes**** > > type: mpiaij**** > > Is this OK? 
And if not, what am I doing wrong, presumably in the matrix > allocation code?**** > > ** ** > > ** ** > > ---**** > > yours sincerily**** > > Frederik Treue**** > > ** ** > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Nov 30 10:08:39 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 30 Nov 2011 17:08:39 +0100 Subject: [petsc-users] Symmetric matrix filling In-Reply-To: References: <4ED5FACD.3040309@gfz-potsdam.de> <4ED62F4D.40806@gfz-potsdam.de> <4ED64774.3040005@gfz-potsdam.de> Message-ID: <4ED65507.8070108@gfz-potsdam.de> On 30.11.2011 16:22, Matthew Knepley wrote: > On Wed, Nov 30, 2011 at 9:10 AM, Alexander Grayver > > wrote: > > I'm not sure that MAT(S)BAIJ is what I need. My matrix symmetric, > but not block. > Well, I solve vector equations and there are 2-3 blocks (depending > on the problem dimension), but this blocks normaly have size of > > 10^5. So which block size should I specify to be efficient? > > > Just use 1. And for d_nz/o_nz also 1? > > Matt > > Regards, > Alexander > > On 30.11.2011 14:27, Alexander Grayver wrote: >> Thanks Matt! >> >> Regards, >> Alexander >> >> On 30.11.2011 14:16, Matthew Knepley wrote: >>> On Wed, Nov 30, 2011 at 3:43 AM, Alexander Grayver >>> > wrote: >>> >>> Hello, >>> >>> I'm trying to use mumps through PETSc now with symmetric >>> matrix and cholesky factorization. >>> When I use it directly I fill up only upper part of the >>> matrix and set mumid%SYM = 2. Is that possible to follow >>> same way with petsc? >>> Which options do I have to choose? >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MATSBAIJ.html >>> >>> Matt >>> >>> Thanks in advance. >>> >>> Regards, >>> Alexander >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to >>> which their experiments lead. >>> -- Norbert Wiener >> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 30 10:13:35 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Nov 2011 10:13:35 -0600 Subject: [petsc-users] Symmetric matrix filling In-Reply-To: <4ED65507.8070108@gfz-potsdam.de> References: <4ED5FACD.3040309@gfz-potsdam.de> <4ED62F4D.40806@gfz-potsdam.de> <4ED64774.3040005@gfz-potsdam.de> <4ED65507.8070108@gfz-potsdam.de> Message-ID: On Wed, Nov 30, 2011 at 10:08 AM, Alexander Grayver wrote: > ** > On 30.11.2011 16:22, Matthew Knepley wrote: > > On Wed, Nov 30, 2011 at 9:10 AM, Alexander Grayver < > agrayver at gfz-potsdam.de> wrote: > >> I'm not sure that MAT(S)BAIJ is what I need. My matrix symmetric, but >> not block. >> Well, I solve vector equations and there are 2-3 blocks (depending on the >> problem dimension), but this blocks normaly have size of > 10^5. So which >> block size should I specify to be efficient? >> > > Just use 1. > > > And for d_nz/o_nz also 1? > No, preallocate for the nonzeros you enter, as normal. Matt > Matt > > >> Regards, >> Alexander >> >> On 30.11.2011 14:27, Alexander Grayver wrote: >> >> Thanks Matt! 
>> >> Regards, >> Alexander >> >> On 30.11.2011 14:16, Matthew Knepley wrote: >> >> On Wed, Nov 30, 2011 at 3:43 AM, Alexander Grayver < >> agrayver at gfz-potsdam.de> wrote: >> >>> Hello, >>> >>> I'm trying to use mumps through PETSc now with symmetric matrix and >>> cholesky factorization. >>> When I use it directly I fill up only upper part of the matrix and set >>> mumid%SYM = 2. Is that possible to follow same way with petsc? >>> Which options do I have to choose? >>> >> >> >> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MATSBAIJ.html >> >> Matt >> >> >>> Thanks in advance. >>> >>> Regards, >>> Alexander >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed Nov 30 10:40:40 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 30 Nov 2011 10:40:40 -0600 Subject: [petsc-users] MatMumpsSetIcntl from Fortran In-Reply-To: <4ED64F34.8020509@gfz-potsdam.de> References: <4ED64F34.8020509@gfz-potsdam.de> Message-ID: Alexander: > Has anybody tried to use MatMumpsSetIcntl from Fortran? We are not aware of it. > Because when I try to call it I fall into infinite recursion in function: Can you give me a short Fortran code that repeats this error for investigating? Meanwhile, you can use runtime option '-mat_mumps_icntl_xxx <>' to get your code run. Hong > > ?PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival) > ?{ > ? PetscErrorCode ierr; > > ? PetscFunctionBegin; > ? PetscValidLogicalCollectiveInt(F,icntl,2); > ? PetscValidLogicalCollectiveInt(F,ival,3); > ? ierr = > PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr); > ? PetscFunctionReturn(0); > ?} > > At the moment when program crashes call stack looks like: > > ? ? __libc_memalign, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? FP=7fff342ca010 > PetscMallocAlign, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342ca080 > PetscTrMallocDefault, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342ca180 > PetscStrallocpy, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? FP=7fff342ca230 > PetscFListGetPathAndFunction, ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb2e0 > PetscFListFind, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb520 > PetscObjectQueryFunction_Petsc, ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb590 > PetscObjectQueryFunction, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb620 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb720 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb820 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb920 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cba20 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbb20 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbc20 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbd20 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 
?FP=7fff342cbe20 > MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbf20 > ... (Hundreds of MatMumpsSetIcntl) ... > > What can I do about that? > > Regards, > Alexander From agrayver at gfz-potsdam.de Wed Nov 30 11:03:13 2011 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 30 Nov 2011 18:03:13 +0100 Subject: [petsc-users] MatMumpsSetIcntl from Fortran In-Reply-To: References: <4ED64F34.8020509@gfz-potsdam.de> Message-ID: <4ED661D1.4030209@gfz-potsdam.de> Hi Hong, Thank you for quick reply. I just rewrote code concerning mumps from this example (lines 150-170): http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex52.c.html Regards, Alexander On 30.11.2011 17:40, Hong Zhang wrote: > Alexander: > >> Has anybody tried to use MatMumpsSetIcntl from Fortran? > We are not aware of it. > >> Because when I try to call it I fall into infinite recursion in function: > Can you give me a short Fortran code that repeats this error for investigating? > Meanwhile, you can use runtime option '-mat_mumps_icntl_xxx<>' to get > your code run. > > Hong >> PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival) >> { >> PetscErrorCode ierr; >> >> PetscFunctionBegin; >> PetscValidLogicalCollectiveInt(F,icntl,2); >> PetscValidLogicalCollectiveInt(F,ival,3); >> ierr = >> PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr); >> PetscFunctionReturn(0); >> } >> >> At the moment when program crashes call stack looks like: >> >> __libc_memalign, FP=7fff342ca010 >> PetscMallocAlign, FP=7fff342ca080 >> PetscTrMallocDefault, FP=7fff342ca180 >> PetscStrallocpy, FP=7fff342ca230 >> PetscFListGetPathAndFunction, FP=7fff342cb2e0 >> PetscFListFind, FP=7fff342cb520 >> PetscObjectQueryFunction_Petsc, FP=7fff342cb590 >> PetscObjectQueryFunction, FP=7fff342cb620 >> MatMumpsSetIcntl, FP=7fff342cb720 >> MatMumpsSetIcntl, FP=7fff342cb820 >> MatMumpsSetIcntl, FP=7fff342cb920 >> MatMumpsSetIcntl, FP=7fff342cba20 >> MatMumpsSetIcntl, FP=7fff342cbb20 >> MatMumpsSetIcntl, FP=7fff342cbc20 >> MatMumpsSetIcntl, FP=7fff342cbd20 >> MatMumpsSetIcntl, FP=7fff342cbe20 >> MatMumpsSetIcntl, FP=7fff342cbf20 >> ... (Hundreds of MatMumpsSetIcntl) ... >> >> What can I do about that? >> >> Regards, >> Alexander From hzhang at mcs.anl.gov Wed Nov 30 14:37:50 2011 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 30 Nov 2011 14:37:50 -0600 Subject: [petsc-users] MatMumpsSetIcntl from Fortran In-Reply-To: <4ED661D1.4030209@gfz-potsdam.de> References: <4ED64F34.8020509@gfz-potsdam.de> <4ED661D1.4030209@gfz-potsdam.de> Message-ID: Alexander : > > I just rewrote code concerning mumps from this example (lines 150-170): > http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex52.c.html Where is your Fortran code? ex52.c works fine. Hong > > On 30.11.2011 17:40, Hong Zhang wrote: >> >> Alexander: >> >>> Has anybody tried to use MatMumpsSetIcntl from Fortran? >> >> We are not aware of it. >> >>> Because when I try to call it I fall into infinite recursion in function: >> >> Can you give me a short Fortran code that repeats this error for >> investigating? >> Meanwhile, you can use runtime option '-mat_mumps_icntl_xxx<>' to get >> your code run. >> >> Hong >>> >>> ?PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival) >>> ?{ >>> ? PetscErrorCode ierr; >>> >>> ? PetscFunctionBegin; >>> ? PetscValidLogicalCollectiveInt(F,icntl,2); >>> ? PetscValidLogicalCollectiveInt(F,ival,3); >>> ? 
ierr = >>> >>> PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr); >>> ? PetscFunctionReturn(0); >>> ?} >>> >>> At the moment when program crashes call stack looks like: >>> >>> ? ? __libc_memalign, >>> FP=7fff342ca010 >>> PetscMallocAlign, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342ca080 >>> PetscTrMallocDefault, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342ca180 >>> PetscStrallocpy, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? FP=7fff342ca230 >>> PetscFListGetPathAndFunction, ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb2e0 >>> PetscFListFind, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb520 >>> PetscObjectQueryFunction_Petsc, ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb590 >>> PetscObjectQueryFunction, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb620 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb720 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb820 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cb920 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cba20 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbb20 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbc20 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbd20 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbe20 >>> MatMumpsSetIcntl, ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?FP=7fff342cbf20 >>> ... (Hundreds of MatMumpsSetIcntl) ... >>> >>> What can I do about that? >>> >>> Regards, >>> Alexander > > From zonexo at gmail.com Wed Nov 30 14:42:07 2011 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 30 Nov 2011 21:42:07 +0100 Subject: [petsc-users] Error compiling code when upgrading from 3.1p8 to 3.2p5 In-Reply-To: References: <4ED2AB30.2060801@gmail.com> <4ED37E31.1060303@gmail.com> Message-ID: <4ED6951F.7040408@gmail.com> Hi Matt, I didn't call this thru KSPSolve since I'm using HYPRE native struct interface. Anyway, I found the ans - It's due to the usage of : call HYPRE_MPI_Comm_f2c(mpi_comm, MPI_COMM_WORLD, ierr) It used to be required when using HYPRE with openmpi. However, now it's not required anymore. Using it cause error. Yours sincerely, TAY wee-beng On 28/11/2011 3:01 PM, Matthew Knepley wrote: > On Mon, Nov 28, 2011 at 6:27 AM, TAY wee-beng > wrote: > > Hi, > > The code compiles and works ok. However, when I changed my solver > to use HYPRE to solve the poisson equation, > > I got the error: > > [hpc12:29772] *** An error occurred in MPI_comm_size > [hpc12:29772] *** on communicator MPI_COMM_WORLD > [hpc12:29772] *** MPI_ERR_COMM: invalid communicator > [hpc12:29772] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) > 1.07user 0.12system 0:01.23elapsed 97%CPU (0avgtext+0avgdata > 188432maxresident)k > 0inputs+35480outputs (0major+11637minor)pagefaults 0swaps > -------------------------------------------------------------------------- > mpiexec has exited due to process rank 0 with PID 29771 on > node hpc12 exiting improperly. There are two reasons this could occur: > > 1. this process did not call "init" before exiting, but others in > the job did. This can cause a job to hang indefinitely while it waits > for all processes to call "init". By rule, if one process calls > "init", > then ALL processes must call "init" prior to termination. > > 2. this process called "init", but exited without calling "finalize". 
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpiexec (as reported here).
>
>
> This happens after the subroutine call
> HYPRE_StructStencilCreate(3, 4, stencil_hypre, ierr).
>
> Btw, my code is using HYPRE's own function to construct the matrix
> and solve it.
>
> I can only assume you have a bug in your code. Why not just call this
> through KSPSolve?
>
>    Matt
>
> Thanks!
>
> Yours sincerely,
>
> TAY wee-beng
>
> On 27/11/2011 10:30 PM, Satish Balay wrote:
> check http://www.mcs.anl.gov/petsc/documentation/changes/32.html
>
> -> Changed PetscTruth to PetscBool
>
> satish
>
> On Sun, 27 Nov 2011, TAY wee-beng wrote:
>
> Hi,
>
> I have trouble compiling my Fortran codes when I upgrade PETSc from 3.1p8 to
> 3.2p5.
>
> My code is something like this:
>
> module global_data
>
> use nrtype
>
> implicit none
>
> save
>
> #include "finclude/petsc.h90"
>
> !grid variables
>
> integer :: size_x,size_y,size_z,grid_type
> !size_x1,size_x2,size_x3,size_y1,size_y2,size_y3
>
> real(8), allocatable ::
> x(:),y(:),z(:),xu(:),yu(:),zu(:),xv(:),yv(:),zv(:),xw(:),yw(:),zw(:),c_cx(:),cu_cx(:),c_cy(:),cv_cy(:),c_cz(:),cw_cz(:)
>
> !solver variables
>
> ...
>
> I tried after compiling with the new 3.2p5 and got the following error:
>
> /opt/openmpi-1.5.3/bin/mpif90 -c -g -debug all -implicitnone -warn unused
> -fp-stack-check -heap-arrays -ftrapuv -check pointers -O0 -save -w90 -w -w95
> -O0 -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include
> -I/home/wtay/Lib/petsc-3.2-p5_mumps_debug/include
> -I/opt/openmpi-1.5.3/include
> -o global.o global.F90
>
> global.F90(205): error #5082: Syntax error, found IDENTIFIER 'FLGG' when
> expecting one of: ( % : . = =>
> PetscTruth flgg
> -----------^
> global.F90(205): error #6274: This statement must not appear in the
> specification part of a module
> PetscTruth flgg
> ^
> global.F90(207): error #6236: A specification statement cannot appear in the
> executable section.
> integer(kind=selected_int_kind(5)) reason
> ^
> global.F90(209): error #6236: A specification statement cannot appear in the
> executable section.
> integer(kind=selected_int_kind(10)) i_vec
> ^
> global.F90(213): error #6236: A specification statement cannot appear in the
> executable section.
> integer :: myid,num_procs,ksta,kend,ksta_ext,kend_ext,ksta_ext0,ksta2,kend2,kend3
> ^
> global.F90(215): error #6236: A specification statement cannot appear in the
> executable section.
> integer :: ijk_sta_p,ijk_end_p,ijk_sta_m,ijk_end_m,ijk_sta_mx,ijk_end_mx,ijk_sta_my,ijk_end_my,ijk_sta_mz,ijk_end_mz
> ^
> global.F90(217): error #6236: A specification statement cannot appear in the
> executable section.
> character(2) :: procs
> ^
> global.F90(205): error #6404: This name does not have a type, and must have an
> explicit type.   [PETSCTRUTH]
> PetscTruth flgg
> ^
> global.F90(205): error #6404: This name does not have a type, and must have an
> explicit type.   [FLGG]
> PetscTruth flgg
> -----------^
> global.F90(229): error #6404: This name does not have a type, and must have an
> explicit type.   [KSTA]
> ksta=myid*(size_z/num_procs)+1;
> kend=(myid+1)*(size_z/num_procs)
>
> May I know what's wrong?
>
> Thanks!
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
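Every error in the list above cascades from the single PetscTruth declaration; under petsc-3.2 only the type name changes, per the changes page Satish links. A minimal sketch (flgg is the variable from the quoted code; the option name '-flag' and the surrounding ierr handling are assumed for illustration):

      PetscBool      flgg      ! petsc-3.1 and earlier: PetscTruth flgg
      PetscErrorCode ierr

      call PetscOptionsHasName(PETSC_NULL_CHARACTER, '-flag', flgg, ierr)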
> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Nov 30 15:01:59 2011 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Nov 2011 15:01:59 -0600 Subject: [petsc-users] MatMumpsSetIcntl from Fortran In-Reply-To: References: <4ED64F34.8020509@gfz-potsdam.de> <4ED661D1.4030209@gfz-potsdam.de> Message-ID: On Wed, Nov 30, 2011 at 2:37 PM, Hong Zhang wrote: > Alexander : > > > > > I just rewrote code concerning mumps from this example (lines 150-170): > > > http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex52.c.html > > Where is your Fortran code? ex52.c works fine. > Its in the version he attached at line 150-170. Matt > Hong > > > > > On 30.11.2011 17:40, Hong Zhang wrote: > >> > >> Alexander: > >> > >>> Has anybody tried to use MatMumpsSetIcntl from Fortran? > >> > >> We are not aware of it. > >> > >>> Because when I try to call it I fall into infinite recursion in > function: > >> > >> Can you give me a short Fortran code that repeats this error for > >> investigating? > >> Meanwhile, you can use runtime option '-mat_mumps_icntl_xxx<>' to get > >> your code run. > >> > >> Hong > >>> > >>> PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival) > >>> { > >>> PetscErrorCode ierr; > >>> > >>> PetscFunctionBegin; > >>> PetscValidLogicalCollectiveInt(F,icntl,2); > >>> PetscValidLogicalCollectiveInt(F,ival,3); > >>> ierr = > >>> > >>> > PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr); > >>> PetscFunctionReturn(0); > >>> } > >>> > >>> At the moment when program crashes call stack looks like: > >>> > >>> __libc_memalign, > >>> FP=7fff342ca010 > >>> PetscMallocAlign, > FP=7fff342ca080 > >>> PetscTrMallocDefault, > FP=7fff342ca180 > >>> PetscStrallocpy, > FP=7fff342ca230 > >>> PetscFListGetPathAndFunction, > FP=7fff342cb2e0 > >>> PetscFListFind, > FP=7fff342cb520 > >>> PetscObjectQueryFunction_Petsc, > FP=7fff342cb590 > >>> PetscObjectQueryFunction, > FP=7fff342cb620 > >>> MatMumpsSetIcntl, > FP=7fff342cb720 > >>> MatMumpsSetIcntl, > FP=7fff342cb820 > >>> MatMumpsSetIcntl, > FP=7fff342cb920 > >>> MatMumpsSetIcntl, > FP=7fff342cba20 > >>> MatMumpsSetIcntl, > FP=7fff342cbb20 > >>> MatMumpsSetIcntl, > FP=7fff342cbc20 > >>> MatMumpsSetIcntl, > FP=7fff342cbd20 > >>> MatMumpsSetIcntl, > FP=7fff342cbe20 > >>> MatMumpsSetIcntl, > FP=7fff342cbf20 > >>> ... (Hundreds of MatMumpsSetIcntl) ... > >>> > >>> What can I do about that? > >>> > >>> Regards, > >>> Alexander > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Nov 30 15:10:15 2011 From: agrayver at gfz-potsdam.de (=?utf-8?B?YWdyYXl2ZXJAZ2Z6LXBvdHNkYW0uZGU=?=) Date: Wed, 30 Nov 2011 22:10:15 +0100 Subject: [petsc-users] =?utf-8?q?MatMumpsSetIcntl_from_Fortran?= Message-ID: Hong, Sorry if I wasn't clear. "c" example works fine, I know, what I meant is that if you try to implement lines 150-171 from it on FORTRAN you will see the problem I reported. If you need particularly my FORTRAN code I can send it tomorrow. 
Regards,
Alexander

----- Reply message -----
From: "Hong Zhang"
To: "PETSc users list"
Subject: [petsc-users] MatMumpsSetIcntl from Fortran
Date: Wed, Nov 30, 2011 9:37 pm

Alexander :
>
> I just rewrote code concerning mumps from this example (lines 150-170):
> http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex52.c.html

Where is your Fortran code? ex52.c works fine.
Hong

>
> On 30.11.2011 17:40, Hong Zhang wrote:
>>
>> Alexander:
>>
>>> Has anybody tried to use MatMumpsSetIcntl from Fortran?
>>
>> We are not aware of it.
>>
>>> Because when I try to call it I fall into infinite recursion in function:
>>
>> Can you give me a short Fortran code that repeats this error for
>> investigating?
>> Meanwhile, you can use runtime option '-mat_mumps_icntl_xxx<>' to get
>> your code run.
>>
>> Hong
>>>
>>> PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival)
>>> {
>>>   PetscErrorCode ierr;
>>>
>>>   PetscFunctionBegin;
>>>   PetscValidLogicalCollectiveInt(F,icntl,2);
>>>   PetscValidLogicalCollectiveInt(F,ival,3);
>>>   ierr = PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr);
>>>   PetscFunctionReturn(0);
>>> }
>>>
>>> At the moment when program crashes call stack looks like:
>>>
>>> __libc_memalign,                FP=7fff342ca010
>>> PetscMallocAlign,               FP=7fff342ca080
>>> PetscTrMallocDefault,           FP=7fff342ca180
>>> PetscStrallocpy,                FP=7fff342ca230
>>> PetscFListGetPathAndFunction,   FP=7fff342cb2e0
>>> PetscFListFind,                 FP=7fff342cb520
>>> PetscObjectQueryFunction_Petsc, FP=7fff342cb590
>>> PetscObjectQueryFunction,       FP=7fff342cb620
>>> MatMumpsSetIcntl,               FP=7fff342cb720
>>> MatMumpsSetIcntl,               FP=7fff342cb820
>>> MatMumpsSetIcntl,               FP=7fff342cb920
>>> MatMumpsSetIcntl,               FP=7fff342cba20
>>> MatMumpsSetIcntl,               FP=7fff342cbb20
>>> MatMumpsSetIcntl,               FP=7fff342cbc20
>>> MatMumpsSetIcntl,               FP=7fff342cbd20
>>> MatMumpsSetIcntl,               FP=7fff342cbe20
>>> MatMumpsSetIcntl,               FP=7fff342cbf20
>>> ... (Hundreds of MatMumpsSetIcntl) ...
>>>
>>> What can I do about that?
>>>
>>> Regards,
>>> Alexander
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hzhang at mcs.anl.gov  Wed Nov 30 16:43:29 2011
From: hzhang at mcs.anl.gov (Hong Zhang)
Date: Wed, 30 Nov 2011 16:43:29 -0600
Subject: [petsc-users] MatMumpsSetIcntl from Fortran
In-Reply-To: 
References: 
Message-ID: 

> Sorry if I wasn't clear. The C example works fine, I know; what I meant is
> that if you try to implement lines 150-171 from it in Fortran you will see
> the problem I reported.
> If you particularly need my Fortran code I can send it tomorrow.

This would save my time :-)
Appreciate.
Hong

>
> Regards,
> Alexander
>
>
> ----- Reply message -----
> From: "Hong Zhang"
> To: "PETSc users list"
> Subject: [petsc-users] MatMumpsSetIcntl from Fortran
> Date: Wed, Nov 30, 2011 9:37 pm
>
>
> Alexander :
>>
>> I just rewrote code concerning mumps from this example (lines 150-170):
>> http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex52.c.html
>
> Where is your Fortran code? ex52.c works fine.
> Hong
>
>> On 30.11.2011 17:40, Hong Zhang wrote:
>>>
>>> Alexander:
>>>
>>>> Has anybody tried to use MatMumpsSetIcntl from Fortran?
>>>
>>> We are not aware of it.
>>>
>>>> Because when I try to call it I fall into infinite recursion in
>>>> function:
>>>
>>> Can you give me a short Fortran code that repeats this error for
>>> investigating?
>>> Meanwhile, you can use runtime option '-mat_mumps_icntl_xxx<>' to get
>>> your code run.
>>>
>>> Hong
>>>>
>>>> PetscErrorCode MatMumpsSetIcntl(Mat F,PetscInt icntl,PetscInt ival)
>>>> {
>>>>   PetscErrorCode ierr;
>>>>
>>>>   PetscFunctionBegin;
>>>>   PetscValidLogicalCollectiveInt(F,icntl,2);
>>>>   PetscValidLogicalCollectiveInt(F,ival,3);
>>>>   ierr = PetscTryMethod(F,"MatMumpsSetIcntl_C",(Mat,PetscInt,PetscInt),(F,icntl,ival));CHKERRQ(ierr);
>>>>   PetscFunctionReturn(0);
>>>> }
>>>>
>>>> At the moment when program crashes call stack looks like:
>>>>
>>>> __libc_memalign,                FP=7fff342ca010
>>>> PetscMallocAlign,               FP=7fff342ca080
>>>> PetscTrMallocDefault,           FP=7fff342ca180
>>>> PetscStrallocpy,                FP=7fff342ca230
>>>> PetscFListGetPathAndFunction,   FP=7fff342cb2e0
>>>> PetscFListFind,                 FP=7fff342cb520
>>>> PetscObjectQueryFunction_Petsc, FP=7fff342cb590
>>>> PetscObjectQueryFunction,       FP=7fff342cb620
>>>> MatMumpsSetIcntl,               FP=7fff342cb720
>>>> MatMumpsSetIcntl,               FP=7fff342cb820
>>>> MatMumpsSetIcntl,               FP=7fff342cb920
>>>> MatMumpsSetIcntl,               FP=7fff342cba20
>>>> MatMumpsSetIcntl,               FP=7fff342cbb20
>>>> MatMumpsSetIcntl,               FP=7fff342cbc20
>>>> MatMumpsSetIcntl,               FP=7fff342cbd20
>>>> MatMumpsSetIcntl,               FP=7fff342cbe20
>>>> MatMumpsSetIcntl,               FP=7fff342cbf20
>>>> ... (Hundreds of MatMumpsSetIcntl) ...
>>>>
>>>> What can I do about that?
>>>>
>>>> Regards,
>>>> Alexander
>>
>>
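For completeness, the runtime-option route Hong suggests earlier in the thread avoids the Fortran binding for MatMumpsSetIcntl altogether; a hypothetical invocation (the executable name is a placeholder, and ICNTL(7)=2 mirrors the value used in ex52.c):

  ./my_app -pc_type lu -pc_factor_mat_solver_package mumps \
           -mat_mumps_icntl_7 2 -ksp_view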