[petsc-users] Does PETSc have DMPlex examples?
Hoang Giang Bui
hgbk2008 at gmail.com
Sun Jul 3 03:49:05 CDT 2016
Thanks Justin, it works. The difference is these parameters:
-vel_petscspace_order 2 -pres_petscspace_order 1. That is quite cool, since
you can play with those orders to see how the LBB condition affects the
results.
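
Concretely, those prefixes control the PetscFE construction inside ex62. A
minimal, untested sketch using the 3.7-era API (recent releases pass an
MPI_Comm instead of a DM as the first argument of PetscFECreateDefault, and
later renamed the option to -petscspace_degree; the DM dm, the dimension
dim, and a simplicial mesh are assumed here):

  PetscFE        fe[2];
  PetscDS        prob;
  PetscErrorCode ierr;

  /* Velocity: dim components; degree read from -vel_petscspace_order */
  ierr = PetscFECreateDefault(dm, dim, dim, PETSC_TRUE, "vel_", PETSC_DEFAULT, &fe[0]);CHKERRQ(ierr);
  /* Pressure: scalar; degree read from -pres_petscspace_order */
  ierr = PetscFECreateDefault(dm, dim, 1, PETSC_TRUE, "pres_", PETSC_DEFAULT, &fe[1]);CHKERRQ(ierr);
  ierr = DMGetDS(dm, &prob);CHKERRQ(ierr);
  ierr = PetscDSSetDiscretization(prob, 0, (PetscObject) fe[0]);CHKERRQ(ierr);
  ierr = PetscDSSetDiscretization(prob, 1, (PetscObject) fe[1]);CHKERRQ(ierr);

With orders 2/1 this is the Taylor-Hood pair, which satisfies the LBB
(inf-sup) condition; dropping to equal order 1/1 gives an unstable pair,
which is exactly what playing with the orders demonstrates.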
Giang
On Sun, Jul 3, 2016 at 10:15 AM, Justin Chang <jychang48 at gmail.com> wrote:
> Hoang, if you run this example as shown in config/builder.py:
>
> ./ex62 -run_type full -refinement_limit 0.00625 -bc_type dirichlet
> -interpolate 1 -vel_petscspace_order 2 -pres_petscspace_order 1 -ksp_type
> fgmres -ksp_gmres_restart 100 -ksp_rtol 1.0e-9 -pc_type fieldsplit
> -pc_fieldsplit_type schur -pc_fieldsplit_schur_factorization_type full
> -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_velocity_ksp_type gmres
> -fieldsplit_velocity_pc_type lu -fieldsplit_pressure_pc_type jacobi
> -snes_monitor_short -ksp_monitor_short -snes_converged_reason
> -ksp_converged_reason -snes_view -show_solution 0
>
>
> it should work.
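>
> For reference, the Schur fieldsplit part of those options can also be
> hard-wired in code; a hedged, untested sketch of just the PC portion
> (assuming an already-configured SNES named snes):
>
>   KSP ksp;
>   PC  pc;
>   ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
>   ierr = KSPSetType(ksp, KSPFGMRES);CHKERRQ(ierr);   /* -ksp_type fgmres */
>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>   ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);  /* -pc_type fieldsplit */
>   ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
>   ierr = PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL);CHKERRQ(ierr);
>
> The inner fieldsplit_velocity_ and fieldsplit_pressure_ solvers are
> easiest left to the options database, as in the command line above.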
>
> On Sun, Jul 3, 2016 at 9:06 AM, Hoang Giang Bui <hgbk2008 at gmail.com>
> wrote:
>
>> Hi Matt
>>
>> I tried to run ex62 with 1 proc (PETSc 3.7.2), but it produces an all-zero solution.
>>
>> The output is:
>> hbui at bermuda:~/workspace/petsc/snes$ ./ex62 -run_type full -bc_type
>> dirichlet -refinement_limit 0.00625 -interpolate 1 -snes_monitor_short
>> -snes_converged_reason -snes_view -ksp_type fgmres -ksp_gmres_restart 100
>> -ksp_rtol 1.0e-9 -ksp_monitor_short -pc_type fieldsplit -pc_fieldsplit_type
>> schur -pc_fieldsplit_schur_factorization_type full
>> -fieldsplit_velocity_ksp_type gmres -fieldsplit_velocity_pc_type lu
>> -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_pressure_pc_type jacobi
>> 0 SNES Function norm 0.265165
>> 0 KSP Residual norm 0.265165
>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
>> SNES Object: 1 MPI processes
>> type: newtonls
>> maximum iterations=50, maximum function evaluations=10000
>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
>> total number of linear solver iterations=0
>> total number of function evaluations=1
>> norm schedule ALWAYS
>> SNESLineSearch Object: 1 MPI processes
>> type: bt
>> interpolation: cubic
>> alpha=1.000000e-04
>> maxstep=1.000000e+08, minlambda=1.000000e-12
>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
>> maximum iterations=40
>> KSP Object: 1 MPI processes
>> type: fgmres
>> GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-09, absolute=1e-50, divergence=10000.
>> right preconditioning
>> using UNPRECONDITIONED norm type for convergence test
>> PC Object: 1 MPI processes
>> type: fieldsplit
>> FieldSplit with Schur preconditioner, factorization FULL
>> Preconditioner for the Schur complement formed from A11
>> Split info:
>> Split number 0 Defined by IS
>> Split number 1 Defined by IS
>> KSP solver for A00 block
>> KSP Object: (fieldsplit_velocity_) 1 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (fieldsplit_velocity_) 1 MPI processes
>> type: lu
>> LU: out-of-place factorization
>> tolerance for zero pivot 2.22045e-14
>> matrix ordering: nd
>> factor fill ratio given 5., needed 1.
>> Factored matrix follows:
>> Mat Object: 1 MPI processes
>> type: seqaij
>> rows=512, cols=512, bs=2
>> package used to perform factorization: petsc
>> total: nonzeros=1024, allocated nonzeros=1024
>> total number of mallocs used during MatSetValues calls =0
>> using I-node routines: found 256 nodes, limit used is 5
>> linear system matrix = precond matrix:
>> Mat Object: (fieldsplit_velocity_) 1 MPI processes
>> type: seqaij
>> rows=512, cols=512, bs=2
>> total: nonzeros=1024, allocated nonzeros=1024
>> total number of mallocs used during MatSetValues calls =0
>> using I-node routines: found 256 nodes, limit used is 5
>> KSP solver for S = A11 - A10 inv(A00) A01
>> KSP Object: (fieldsplit_pressure_) 1 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (fieldsplit_pressure_) 1 MPI processes
>> type: jacobi
>> linear system matrix followed by preconditioner matrix:
>> Mat Object: (fieldsplit_pressure_) 1 MPI processes
>> type: schurcomplement
>> rows=256, cols=256
>> has attached null space
>> Schur complement A11 - A10 inv(A00) A01
>> A11
>> Mat Object: (fieldsplit_pressure_) 1 MPI processes
>> type: seqaij
>> rows=256, cols=256
>> total: nonzeros=256, allocated nonzeros=256
>> total number of mallocs used during MatSetValues calls =0
>> has attached null space
>> not using I-node routines
>> A10
>> Mat Object: 1 MPI processes
>> type: seqaij
>> rows=256, cols=512
>> total: nonzeros=512, allocated nonzeros=512
>> total number of mallocs used during MatSetValues calls =0
>> not using I-node routines
>> KSP of A00
>> KSP Object: (fieldsplit_velocity_) 1 MPI processes
>> type: gmres
>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>> GMRES: happy breakdown tolerance 1e-30
>> maximum iterations=10000, initial guess is zero
>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>> PC Object: (fieldsplit_velocity_) 1 MPI processes
>> type: lu
>> LU: out-of-place factorization
>> tolerance for zero pivot 2.22045e-14
>> matrix ordering: nd
>> factor fill ratio given 5., needed 1.
>> Factored matrix follows:
>> Mat Object: 1 MPI processes
>> type: seqaij
>> rows=512, cols=512, bs=2
>> package used to perform factorization: petsc
>> total: nonzeros=1024, allocated nonzeros=1024
>> total number of mallocs used during MatSetValues calls =0
>> using I-node routines: found 256 nodes, limit used is 5
>> linear system matrix = precond matrix:
>> Mat Object: (fieldsplit_velocity_) 1 MPI processes
>> type: seqaij
>> rows=512, cols=512, bs=2
>> total: nonzeros=1024, allocated nonzeros=1024
>> total number of mallocs used during MatSetValues calls =0
>> using I-node routines: found 256 nodes, limit used is 5
>> A01
>> Mat Object: 1 MPI processes
>> type: seqaij
>> rows=512, cols=256, rbs=2, cbs = 1
>> total: nonzeros=512, allocated nonzeros=512
>> total number of mallocs used during MatSetValues calls =0
>> using I-node routines: found 256 nodes, limit used is 5
>> Mat Object: (fieldsplit_pressure_) 1 MPI processes
>> type: seqaij
>> rows=256, cols=256
>> total: nonzeros=256, allocated nonzeros=256
>> total number of mallocs used during MatSetValues calls =0
>> has attached null space
>> not using I-node routines
>> linear system matrix = precond matrix:
>> Mat Object: 1 MPI processes
>> type: seqaij
>> rows=768, cols=768
>> total: nonzeros=2304, allocated nonzeros=2304
>> total number of mallocs used during MatSetValues calls =0
>> has attached null space
>> using I-node routines: found 256 nodes, limit used is 5
>> Number of SNES iterations = 0
>> L_2 Error: 1.01 [0.929, 0.407]
>> Solution
>> Vec Object: 1 MPI processes
>> type: seq
>> 0.
>> 0.
>> ....
>>
>> Am I doing something wrong?
>>
>> Giang
>>
>> On Tue, May 3, 2016 at 4:44 AM, Matthew Knepley <knepley at gmail.com>
>> wrote:
>>
>>> On Mon, May 2, 2016 at 8:29 PM, ztdepyahoo at 163.com <ztdepyahoo at 163.com>
>>> wrote:
>>>
>>>> Dear Professor,
>>>> I want to write a parallel 3D CFD code based on unstructured grids.
>>>> Does PETSc have DMPlex examples to start with?
>>>>
>>>
>>> SNES ex62 is an unstructured grid Stokes problem discretized with
>>> low-order finite elements.
>>>
>>> Of course, all the different possible choices will impact the design.
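>>>
>>> If you want a skeleton beyond ex62, here is a hedged, untested sketch
>>> of the usual DMPlex startup (DMPlexCreateFromFile and DMPlexDistribute
>>> are the real calls; "mesh.exo" is a placeholder filename, and newer
>>> releases add an extra name argument to DMPlexCreateFromFile):
>>>
>>>   DM dm, dmDist;
>>>   /* Read an unstructured mesh from disk; PETSC_TRUE also creates
>>>      the intermediate edges/faces (an "interpolated" mesh) */
>>>   ierr = DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.exo", PETSC_TRUE, &dm);CHKERRQ(ierr);
>>>   /* Partition and distribute the mesh over all ranks (overlap = 0) */
>>>   ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
>>>   if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
>>>   ierr = DMSetFromOptions(dm);CHKERRQ(ierr);  /* honors -dm_refine, -dm_view, ... */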
>>>
>>> Matt
>>>
>>>
>>>> Regards
>>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>