[petsc-users] Questions on matrix-free GMRES implementation
Matthew Knepley
knepley at gmail.com
Fri Mar 12 09:40:31 CST 2021
On Fri, Mar 12, 2021 at 10:37 AM feng wang <snailsoar at hotmail.com> wrote:
> Hi Matt,
>
> Thanks for your prompt response.
>
> Below are my two versions. One is buggy and the second one works. In the
> first one, I add the diagonal contribution to the true RHS (variable:
> rhs) and then set the base point; afterwards the callback function is
> somehow called twice to compute the Jacobian. In the second one, I simply
> call the callback function manually to recompute everything, and the
> callback is then called once, as expected, to compute the Jacobian. To
> me, both versions should do the same thing, but I don't know why in the
> first one the callback is called twice after I set the base point. What
> could possibly go wrong?
>
If you reset the base point, we need to recompute F(b), so you get two
function calls. Normally, you do not reset the base point
within a Newton iteration. Maybe you should write out mathematically what
you are doing. I cannot understand it right now.
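Concretely, once the base b and F(b) are in place, each Krylov
matrix-vector product should cost one additional function evaluation,

    J(b) v ~ (F(b + h v) - F(b)) / h,

plus the single up-front evaluation of F(b). Any calls beyond that mean
the base point (or its function value) is being invalidated between your
setup and the solve.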
Thanks,
Matt
> Thanks,
> Feng
>
> *//This does not work*
>     fld->cnsv( iqs,iqe, q, aux, csv );
>     //add contribution of time-stepping
>     for(iv=0; iv<nv; iv++)
>     {
>         for(iq=0; iq<nq; iq++)
>         {
>             //use conservative variables here
>             rhs[iv][iq] = -rhs[iv][iq] + csv[iv][iq]*lhsa[nlhs-1][iq]/cfl;
>         }
>     }
>     ierr = petsc_setcsv(petsc_csv); CHKERRQ(ierr);
>     ierr = petsc_setrhs(petsc_baserhs); CHKERRQ(ierr);
>     ierr = MatMFFDSetBase(petsc_A_mf, petsc_csv, petsc_baserhs); CHKERRQ(ierr);
>
> *//This works*
>     fld->cnsv( iqs,iqe, q, aux, csv );
>     ierr = petsc_setcsv(petsc_csv); CHKERRQ(ierr);
>     //this is my callback function, now call it manually
>     ierr = FormFunction_mf(this, petsc_csv, petsc_baserhs);
>     ierr = MatMFFDSetBase(petsc_A_mf, petsc_csv, petsc_baserhs); CHKERRQ(ierr);
>
>
>
> ------------------------------
> *From:* Matthew Knepley <knepley at gmail.com>
> *Sent:* 12 March 2021 15:08
> *To:* feng wang <snailsoar at hotmail.com>
> *Cc:* Barry Smith <bsmith at petsc.dev>; petsc-users at mcs.anl.gov <
> petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Questions on matrix-free GMRES implementation
>
> On Fri, Mar 12, 2021 at 9:55 AM feng wang <snailsoar at hotmail.com> wrote:
>
> Hi Matt,
>
> Thanks for your reply. I will try the parallel implementation.
>
> I've got a serial matrix-free GMRES working, but I would still like to
> understand why my initial matrix-free implementation does not work. I did
> some debugging and found that the callback function that computes the RHS
> for the matrix-free matrix is called twice by PETSc when it computes the
> finite-difference Jacobian, but I expected it to be called only once. I
> don't know why; could you please give some advice?
>
>
> F is called once to calculate the base point and once to get the
> perturbation. The base point is not recalculated, so if you do many
> iterates, it is amortized.
>
> Thanks,
>
> Matt
>
>
> Thanks,
> Feng
>
>
>
> ------------------------------
> *From:* Matthew Knepley <knepley at gmail.com>
> *Sent:* 12 March 2021 12:05
> *To:* feng wang <snailsoar at hotmail.com>
> *Cc:* Barry Smith <bsmith at petsc.dev>; petsc-users at mcs.anl.gov <
> petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Questions on matrix-free GMRES implementation
>
> On Fri, Mar 12, 2021 at 6:02 AM feng wang <snailsoar at hotmail.com> wrote:
>
> Hi Barry,
>
> Thanks for your advice.
>
> You are right on this. Somehow there is an inconsistency when I compute
> the right-hand side (true RHS + time-stepping contribution to the
> diagonal matrix) that is used for the finite-difference Jacobian. If I
> just use the callback function to recompute my RHS before I call
> *MatMFFDSetBase*, then it works like a charm. But now I end up computing
> my RHS three times: the first time to compute the true RHS, and the other
> two for the finite-difference Jacobian.
>
> In my previous buggy version, I only computed the RHS twice. If possible,
> could you elaborate on your comment "Also be careful about
> petsc_baserhs", so I can understand what was going on with my buggy
> version.
>
>
> Our FD implementation is simple. It approximates the action of the
> Jacobian as
>
> J(b) v = (F(b + h v) - F(b)) / h ||v||
>
> where h is some small parameter and b is the base vector, namely the one
> that you are linearizing around. In a Newton step, b is the previous
> solution and v is the proposed solution update.
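> If you want to convince yourself that the matrix-free product does the
> right thing, a rough sketch (reusing your petsc_A_mf, petsc_csv and
> FormFunction_mf; the other vectors are just scratch storage) is to
> compare MatMult() against a difference quotient you form yourself, after
> MatMFFDSetBase() has been called:
>
>     Vec       v, Jv, Fb, Fbp, bp;
>     PetscReal h = 1.0e-6, nrm;
>     ierr = VecDuplicate(petsc_csv, &v);   CHKERRQ(ierr);
>     ierr = VecDuplicate(petsc_csv, &Jv);  CHKERRQ(ierr);
>     ierr = VecDuplicate(petsc_csv, &Fb);  CHKERRQ(ierr);
>     ierr = VecDuplicate(petsc_csv, &Fbp); CHKERRQ(ierr);
>     ierr = VecDuplicate(petsc_csv, &bp);  CHKERRQ(ierr);
>     ierr = VecSet(v, 1.0); CHKERRQ(ierr);                       //any nonzero direction
>     ierr = MatMult(petsc_A_mf, v, Jv); CHKERRQ(ierr);           //matrix-free J(b) v
>     ierr = VecWAXPY(bp, h, v, petsc_csv); CHKERRQ(ierr);        //bp = b + h v
>     ierr = FormFunction_mf(this, bp, Fbp); CHKERRQ(ierr);       //F(b + h v)
>     ierr = FormFunction_mf(this, petsc_csv, Fb); CHKERRQ(ierr); //F(b)
>     ierr = VecAXPY(Fbp, -1.0, Fb); CHKERRQ(ierr);               //F(b + h v) - F(b)
>     ierr = VecScale(Fbp, 1.0/h); CHKERRQ(ierr);                 //hand-rolled difference quotient
>     ierr = VecAXPY(Fbp, -1.0, Jv); CHKERRQ(ierr);
>     ierr = VecNorm(Fbp, NORM_2, &nrm); CHKERRQ(ierr);           //should be small
>
> The two will not agree to machine precision, because PETSc picks its own
> differencing parameter, but they should agree to several digits.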
>
>
> Besides, for a parallel implementation, my code already has its own
> partitioning method. Is it possible to have PETSc read in a user-defined
> partition? If not, what is a better way to do this?
>
>
> Sure
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecSetSizes.html
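>
> That is, if each rank already knows how many unknowns it owns under your
> own partition, you simply pass that local size when you create the PETSc
> vectors and the MFFD matrix, and let PETSc work out the global size. A
> rough sketch, where nlocal is a placeholder for the local count from your
> partitioner:
>
>     Vec x;
>     ierr = VecCreate(*A_COMM_WORLD, &x); CHKERRQ(ierr);
>     ierr = VecSetSizes(x, nlocal*blocksize, PETSC_DECIDE); CHKERRQ(ierr); //local size from your partition
>     ierr = VecSetFromOptions(x); CHKERRQ(ierr);
>     //the matrix-free matrix gets the same local row/column sizes
>     ierr = MatCreateMFFD(*A_COMM_WORLD, nlocal*blocksize, nlocal*blocksize,
>                          PETSC_DETERMINE, PETSC_DETERMINE, &petsc_A_mf); CHKERRQ(ierr);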
>
> Thanks,
>
> Matt
>
>
> Many thanks,
> Feng
>
> ------------------------------
> *From:* Barry Smith <bsmith at petsc.dev>
> *Sent:* 11 March 2021 22:15
> *To:* feng wang <snailsoar at hotmail.com>
> *Cc:* petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Questions on matrix-free GMRES implementation
>
>
> Feng,
>
> The first thing to check is that for each linear solve that involves
> a new operator (values in the base vector) the MFFD matrix knows it is
> using a new operator.
>
> The easiest way is to call MatMFFDSetBase() before each solve that
> involves a new operator (new values in the base vector). Also be careful
> about petsc_baserhs: when you change the base vector's values, you also
> need to update petsc_baserhs to the function evaluation at that point.
>
> If that is correct, I would check with a trivial function evaluator to
> make sure the infrastructure is all set up correctly. For example, use a
> 1 4 1 operator applied matrix-free.
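>
> For instance, something like the following (a purely serial sketch;
> parallel ghost/boundary handling is omitted) has the tridiagonal 1 4 1
> matrix as its exact Jacobian, so if you register it temporarily with
> MatMFFDSetFunction() in place of FormFunction_mf, MatMult() on the MFFD
> matrix can be checked entry by entry against that matrix:
>
>     //toy residual whose exact Jacobian is the 1 4 1 tridiagonal operator
>     PetscErrorCode FormFunction_test(void *ctx, Vec in_vec, Vec out_vec)
>     {
>         const PetscScalar *u;
>         PetscScalar       *f;
>         PetscInt          i, n;
>         PetscErrorCode    ierr;
>
>         ierr = VecGetLocalSize(in_vec, &n); CHKERRQ(ierr);
>         ierr = VecGetArrayRead(in_vec, &u); CHKERRQ(ierr);
>         ierr = VecGetArray(out_vec, &f); CHKERRQ(ierr);
>         for (i = 0; i < n; i++) {
>             f[i] = 4.0*u[i];
>             if (i > 0)   f[i] += u[i-1];
>             if (i < n-1) f[i] += u[i+1];
>         }
>         ierr = VecRestoreArrayRead(in_vec, &u); CHKERRQ(ierr);
>         ierr = VecRestoreArray(out_vec, &f); CHKERRQ(ierr);
>         return 0;
>     }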
>
> Barry
>
>
> On Mar 11, 2021, at 7:35 AM, feng wang <snailsoar at hotmail.com> wrote:
>
> Dear All,
>
> I am new to PETSc and trying to implement a matrix-free GMRES. I have
> assembled an approximate Jacobian matrix just for preconditioning. After
> reading some previous questions on this topic, my approach is as follows.
>
> The matrix-free matrix is created as:
>
>     ierr = MatCreateMFFD(*A_COMM_WORLD, iqe*blocksize, iqe*blocksize,
>                          PETSC_DETERMINE, PETSC_DETERMINE, &petsc_A_mf); CHKERRQ(ierr);
>     ierr = MatMFFDSetFunction(petsc_A_mf, FormFunction_mf, this); CHKERRQ(ierr);
>
> The KSP linear operator is set up as:
>
>     //petsc_A_pre is my assembled preconditioning matrix
>     ierr = KSPSetOperators(petsc_ksp, petsc_A_mf, petsc_A_pre); CHKERRQ(ierr);
>
> Before calling KSPSolve, I do:
>
>     //petsc_csv holds the flow states, petsc_baserhs is the pre-computed right-hand side
>     ierr = MatMFFDSetBase(petsc_A_mf, petsc_csv, petsc_baserhs); CHKERRQ(ierr);
>
> The callback function is defined as:
>
>     PetscErrorCode cFdDomain::FormFunction_mf(void *ctx, Vec in_vec, Vec out_vec)
>     {
>         PetscErrorCode ierr;
>         cFdDomain *user_ctx;
>
>         cout << "FormFunction_mf called\n";
>
>         //in_vec: flow states
>         //out_vec: right hand side + diagonal contributions from CFL number
>
>         user_ctx = (cFdDomain*)ctx;
>
>         //get perturbed conservative variables from petsc
>         user_ctx->petsc_getcsv(in_vec);
>
>         //compute the new right hand side
>         user_ctx->petsc_fd_rhs();
>
>         //copy the new right hand side into the output vector
>         user_ctx->petsc_setrhs(out_vec);
>
>         ierr = 0;
>         return ierr;
>     }
>
> The linear system I am solving is (J+D)x = RHS. J is the Jacobian matrix.
> D is a diagonal matrix used to stabilise the solution at the start; it is
> reduced gradually as the solution advances so that Newton's method is
> recovered. I add D*x to the true right-hand side when the non-linear
> function is computed for the finite-difference Jacobian, so the finite
> difference actually computes (J+D)*dx.
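>
> In other words, the function handed to the differencing is
> G(u) = F(u) + D*u, so (schematically)
>
>     (G(b + h*v) - G(b)) / h = (F(b + h*v) - F(b)) / h + D*v ~ (J + D)*v,
>
> provided the same D and the same base point b are used for the base
> evaluation and for the perturbed evaluations.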
>
> The code runs but diverges in the end. If I don't do matrix-free and use
> my approximate Jacobian matrix, GMRES works. So something is wrong with my
> matrix-free implementation. Have I missed something in my implementation?
> Besides, is there a way to check if the finite difference Jacobian matrix
> is computed correctly in a matrix-free implementation?
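>
> For example, would it be a valid check to ask PETSc to assemble the
> matrix-free operator explicitly (after MatMFFDSetBase has been called)
> and compare it against my approximate Jacobian for a small case? I
> understand MatComputeOperator() is the current name for this, and older
> releases had MatComputeExplicitOperator(). Something along these lines:
>
>     Mat       J_explicit;
>     PetscReal nrm;
>     //assemble the action of the matrix-free operator column by column (small cases only)
>     ierr = MatComputeOperator(petsc_A_mf, MATAIJ, &J_explicit); CHKERRQ(ierr);
>     ierr = MatView(J_explicit, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
>     //compare with the assembled approximate Jacobian; the difference will not be
>     //zero since petsc_A_pre is only approximate, but it should not be huge
>     ierr = MatAXPY(J_explicit, -1.0, petsc_A_pre, DIFFERENT_NONZERO_PATTERN); CHKERRQ(ierr);
>     ierr = MatNorm(J_explicit, NORM_FROBENIUS, &nrm); CHKERRQ(ierr);
>     ierr = MatDestroy(&J_explicit); CHKERRQ(ierr);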
>
> Thanks for your help in advance.
> Feng
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>