[petsc-users] (no subject)
Smith, Barry F.
bsmith at mcs.anl.gov
Tue Dec 3 20:34:34 CST 2019
From the code:
  if (snes->lagjacobian == -2) {
    snes->lagjacobian = -1;
    ierr = PetscInfo(snes,"Recomputing Jacobian/preconditioner because lag is -2 (means compute Jacobian, but then never again)\n");CHKERRQ(ierr);
  } else if (snes->lagjacobian == -1) {
    ierr = PetscInfo(snes,"Reusing Jacobian/preconditioner because lag is -1\n");CHKERRQ(ierr);
    ierr = PetscObjectTypeCompare((PetscObject)A,MATMFFD,&flag);CHKERRQ(ierr);
    if (flag) {
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    }
    PetscFunctionReturn(0);
  }
So it does what the manual page says. If you use -2, it will compute the Jacobian the next time it is needed but then never compute it again; the one Jacobian is reused for all time steps. Depending on how much the Jacobian changes, it can make sense to use this with -snes_mf_operator.
To compute the Jacobian just once at the beginning of each new nonlinear solve, you can use, for example, a lag of 100 (this assumes that one nonlinear solve never needs more than 100 linear solves). This can also be used with -snes_mf_operator (or without).
Barry
> On Dec 3, 2019, at 6:33 AM, Li Luo <li.luo at kaust.edu.sa> wrote:
>
> If I use -snes_mf, the linear solver just never converges, and at the second time step it reports an error:
> [0]PETSC ERROR: No support for this operation for this object type!
> [0]PETSC ERROR: Mat type mffd!
>
> I still don't understand "-snes_lag_jacobian -2": from the manual, it seems the Jacobian is computed only once with -1, but it is recomputed once with -2.
> For a time-stepping scheme where each step calls SNESSolve, if using -snes_lag_jacobian -2, will the following time steps reuse the Jacobian built at the first time step?
>
> Best,
> Li
>
>
>
> On Tue, Dec 3, 2019 at 12:10 AM Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>
>
> > On Dec 2, 2019, at 2:30 PM, Li Luo <li.luo at kaust.edu.sa> wrote:
> >
> > -snes_mf fails to converge in my case, but -ds_snes_mf_operator works, with the original analytic matrix still used as the preconditioner.
> > The timing is several times greater than when using the analytic matrix for both the Jacobian and the preconditioner.
>
> OK, how does -snes_mf fail to converge? What does -ksp_monitor show? Does the linear solver just never converge?
>
> Using -snes_mf_operator will also build the Jacobian, so in your case it doesn't make much sense by itself, since building the Jacobian is very expensive.
>
> >
> > For an implicit time-stepping scheme, if using -snes_lag_jacobian -2, is the Jacobian built twice at the first time step and then used for all later time steps? Or is it built twice at every time step?
>
> Check the manual page for SNESSolve; it should just compute the Jacobian once and reuse it forever.
>
> You can also try -snes_mf -snes_lag_jacobian -2, which should compute the Jacobian once, use that original Jacobian to build the preconditioner once, reuse that same preconditioner, and use the matrix-free approximation to define the operator.
>
>
>
> Barry
>
> >
> > Regards,
> > Li
> >
> > On Mon, Dec 2, 2019 at 6:02 PM Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> >
> > OK, it is spending 99+ percent of the time computing the Jacobians.
> >
> >
> > MatFDColorApply 106 1.0 8.7622e+03 1.0 1.31e+08 1.1 6.9e+07 2.6e+03 1.1e+06 99 0 97 81 99 99 0 97 81 99 1
> > MatFDColorFunc 60950 1.0 8.7560e+03 1.0 0.00e+00 0.0 6.9e+07 2.6e+03 1.1e+06 99 0 97 81 99 99 0 97 81 99 0
> >
> > It requires on average 12 KSP iterations per linear solve, so the linear system appears well conditioned; this means that even if you compute the Jacobian analytically, most of the run time will likely still be spent computing Jacobians.
> >
> > Try using -snes_mf with the logging and see what happens. You can also try -snes_lag_jacobian_persists -snes_lag_jacobian -2
> >
> > Note there may be other ways of avoiding the costly computation of the Jacobian at each Newton step.
> >
> > Barry
> >
> >
> > > On Dec 2, 2019, at 6:38 AM, Li Luo <li.luo at kaust.edu.sa> wrote:
> > >
> > > Dear Barry,
> > >
> > > Here is my log.
> > > Because I am using libMesh built on top of PETSc, there is extra information from libMesh in the log file.
> > > I ran 13 time steps for the simulation, so there is repeated snes_view output.
> > > The algorithm is simply NKS.
> > >
> > > Cheers,
> > > Li
> > >
> > > On Mon, Dec 2, 2019 at 4:55 PM Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> > >
> > > Please send a run with optimization turned on (--with-debugging=0 in ./configure) and with -log_view; without the actual timing information we are just guessing where the time is spent.
> > >
> > > If your problem has a natural block size, then using baij should be a bit faster than aij, but not dramatically better.
> > >
> > > Barry
> > >
> > >
> > > > On Dec 2, 2019, at 4:30 AM, Li Luo <li.luo at kaust.edu.sa> wrote:
> > > >
> > > > Thank you very much! It looks like forming an analytic Jacobian is the only choice.
> > > >
> > > > Best,
> > > > Li
> > > >
> > > > On Mon, Dec 2, 2019 at 3:21 PM Matthew Knepley <knepley at gmail.com> wrote:
> > > > On Mon, Dec 2, 2019 at 4:04 AM Li Luo <li.luo at kaust.edu.sa> wrote:
> > > > Thank you for your reply.
> > > >
> > > > The matrix is small, with only 67,500 rows, but it is relatively dense since a second-order discontinuous Galerkin FEM is used: nonzeros=23,036,400.
> > > >
> > > > This is very dense: 0.5% fill, or about 340 nonzeros per row.
> > > >
> > > > The number of colors is 539 as shown by using -mat_fd_coloring_view:
> > > >
> > > > Coloring is not appropriate for this matrix since you have enormous dense blocks (I am guessing). It could work if you statically
> > > > condense them out or had a fast analytic Jacobian. With 540 colors, it takes 540 matvecs to generate the action of the Jacobian.
> > > >
> > > > Thanks,
> > > >
> > > > Matt
> > > >
> > > > MatFDColoring Object: 64 MPI processes
> > > >   type not yet set
> > > >   Error tolerance=1.49012e-08
> > > >   Umin=1.49012e-06
> > > >   Number of colors=539
> > > >   Information for color 0
> > > >     Number of columns 1
> > > >       378
> > > >     Number of rows 756
> > > >       0 1188
> > > >       1 1188
> > > >       2 1188
> > > >       3 1188
> > > >       4 1188
> > > >       5 1188
> > > >   ...
> > > >
> > > > Is this normal?
> > > > When using MC-FD, is there any difference between mpiaij and mpibaij?
> > > >
> > > > Best,
> > > > Li
> > > >
> > > > On Mon, Dec 2, 2019 at 10:03 AM Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> > > >
> > > > How many colors is it requiring? And how long is MatGetColoring() taking? Are you running in parallel? The MATCOLORINGSL coloring uses a sequential algorithm, so if your matrix is large and parallel, the coloring will take a long time. The parallel colorings are MATCOLORINGGREEDY and MATCOLORINGJP.
> > > >
> > > > Barry
> > > >
> > > >
> > > > > On Dec 1, 2019, at 12:56 AM, Li Luo <li.luo at kaust.edu.sa> wrote:
> > > > >
> > > > > Dear Developers,
> > > > >
> > > > > I tried to use the multi-color finite-difference (MC-FD) method for constructing the Jacobians. However, I find it is very slow compared to the exact Jacobian.
> > > > > My implementation of the MC-FD Jacobian is posted below; would you please check whether it is correct? Is anything missing? Thank you!
> > > > >
> > > > > ////////// Setup phase:
> > > > > MatStructure flag;
> > > > > ISColoring   iscoloring;
> > > > > ierr = MatGetColoring(Jac,MATCOLORINGSL,&iscoloring);CHKERRQ(ierr);
> > > > > ierr = MatFDColoringCreate(Jac,iscoloring,&this->matfdcoloring);CHKERRQ(ierr);
> > > > > ierr = MatFDColoringSetFunction(this->matfdcoloring,(PetscErrorCode (*)(void))__libmesh_petsc_snes_residual,(void *)this);CHKERRQ(ierr);
> > > > > ierr = MatFDColoringSetFromOptions(this->matfdcoloring);CHKERRQ(ierr);
> > > > > ierr = ISColoringDestroy(&iscoloring);CHKERRQ(ierr);
> > > > >
> > > > > //////////// Apply:
> > > > > ierr = MatZeroEntries(*jac);CHKERRQ(ierr);
> > > > > ierr = MatFDColoringApply(*jac,solver->matfdcoloring,x,msflag,snes);CHKERRQ(ierr);
> > > > >
> > > > > Best regards,
> > > > > Li Luo
> > > > >
> > > > > This message and its contents, including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email.
> > > >
> > > >
> > > >
> > > > --
> > > > Postdoctoral Fellow
> > > > Extreme Computing Research Center
> > > > King Abdullah University of Science & Technology
> > > > https://sites.google.com/site/rolyliluo/
> > > >
> > > >
> > > >
> > > > --
> > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > > > -- Norbert Wiener
> > > >
> > > > https://www.cse.buffalo.edu/~knepley/
> > > >
> > > >
> > >
> > >
> > >
> > > <slurm-13575838.out>
> >
> >
> >
>
>
>