[petsc-users] low dimensional sub-problem

Gideon Simpson gideon.simpson at gmail.com
Sat Feb 25 10:12:07 CST 2017


-gideon

> On Feb 25, 2017, at 10:44 AM, Jed Brown <jed at jedbrown.org> wrote:
> 
> Gideon Simpson <gideon.simpson at gmail.com> writes:
> 
>> I’ve been continuing to work on implementing a projection method, which, loosely, looks like the following.  Vec y contains the state vector for my system, y’ = f(y), which is solved with a TS using, for now, rk4.
>> 
>> I have added to this a TSPostStep which does root finding on the nonlinear problem
>> 
>> F(lambda) = F(lambda; y, w) = g(y + lambda * w) - g0
> 
> You might want to treat this as a DAE.

Yeah, we discussed a DAE formulation previously.  For the sake of a reviewer, I’m trying to implement a classical method (explicit time step + projection) rather than a DAE.
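To spell the scheme out: if y_n is the state after the n-th step, the update is

    y_tilde = (one explicit RK4 step from y_n),
    solve g(y_tilde + lambda * w) = g0 for the scalar lambda,
    y_{n+1} = y_tilde + lambda * w,

so the projection only perturbs the state along the direction w.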

> 
>> y and w are vectors that are the size of the system (number of mesh points), and lambda is a scalar perturbation.
>> 
>> Right now, the SNES function for solving this amounts to:
>> 
>> PetscErrorCode projector(SNES snes, Vec lam, Vec f, void *ctx){
>> 
>>  AppCtx *user = (AppCtx *) ctx;
>>  const PetscScalar *lam_value;
>>  PetscScalar *f_value;
>> 
>>  VecGetArrayRead(lam, &lam_value);
>>  VecGetArray(f, &f_value);
>>  /* w = y + lambda * gradMy; collective on the distributed vectors */
>>  VecCopy(user->y, user->w);
>>  VecAXPY(user->w, lam_value[0], user->gradMy);
>> 
>>  /* scalar residual: g(y + lambda * gradMy) - g0 */
>>  f_value[0] = g(user->w) - user->g0;
>>  VecRestoreArrayRead(lam, &lam_value);
>>  VecRestoreArray(f, &f_value);
>> 
>>  return 0;
>> }
>> 
>> To get this all to work, I constructed the SNES and Vec lam with PETSC_COMM_SELF, effectively putting a copy of the nonlinear problem on each process.  Basically, the nonlinear problem is low dimensional, but it parametrically depends on the high-dimensional, distributed vectors y and w.
>> 
>> A couple of things bother me about this:
>> 
>> 1.  It would seem that each process is doing all of these vector operations, which is entirely redundant, as the only thing that really needs to be shared amongst the processes is the value of lambda.  Is that correct?
> 
> Only the SNES part is redundant, but it's super cheap because it's a
> scalar problem.  The Vec operations are using global (parallel,
> distributed) vectors, so they are not redundant.  Of course it is
> critical for the SNES on every process to be configured identically and
> deterministic so that the processes don't diverge.  And g(user->w) must
> return the same value on each process (it probably finishes with an
> MPI_Allreduce or similar).

So within the root-finding problem, when it calls, for instance, VecCopy (which acts on distributed vectors), it’s not doing that once on process 0, once on process 1, once on process 2, etc.?  Or it is, but you’re saying that part is too cheap to matter?

Two things mildly concern me.  First: since user->w changes at each iteration of the solver (because the scalar lambda changes), if the SNES instances are running asynchronously, by virtue of being PETSC_COMM_SELF constructs, couldn’t there be a collision?

The operation g(user->w) does indeed conclude with an MPI_Allreduce, followed by DMDAVecRestoreArray and DMRestoreLocalVector.
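Concretely, g looks roughly like the following (a simplified 1D sketch, not the real code: user->da, user->dx, and the quadrature weight are stand-ins for the actual discretization, and I pass the AppCtx explicitly here just for the sketch):

PetscScalar g(Vec w, AppCtx *user)
{
  Vec                wl;
  const PetscScalar *warr;
  PetscScalar        local = 0.0, global;
  PetscInt           i, xs, xm;

  /* scatter to a local vector so ghost points are available if needed */
  DMGetLocalVector(user->da, &wl);
  DMGlobalToLocalBegin(user->da, w, INSERT_VALUES, wl);
  DMGlobalToLocalEnd(user->da, w, INSERT_VALUES, wl);
  DMDAVecGetArrayRead(user->da, wl, &warr);
  DMDAGetCorners(user->da, &xs, NULL, NULL, &xm, NULL, NULL);
  for (i = xs; i < xs + xm; i++) local += warr[i] * user->dx;  /* local piece of the quadrature */
  /* collective: every rank ends up with the identical value of g */
  MPI_Allreduce(&local, &global, 1, MPIU_SCALAR, MPIU_SUM, PETSC_COMM_WORLD);
  DMDAVecRestoreArrayRead(user->da, wl, &warr);
  DMRestoreLocalVector(user->da, &wl);
  return global;
}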

Second, in the TSPostStep, it runs as:

SNESSolve(user->snes, NULL, user->lam);   /* redundant scalar solve on every rank */
VecGetArray(user->lam, &lam_value);
VecAXPY(y, lam_value[0], user->gradMy);   /* apply the projection to the global state */
VecRestoreArray(user->lam, &lam_value);

Is there any communication issue with ensuring that all the SNESs have finished on each process before proceeding to the vector operations?
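For completeness, the serial pieces are created along these lines (a sketch; r here is just the 1-entry residual work vector):

SNESCreate(PETSC_COMM_SELF, &user->snes);
VecCreateSeq(PETSC_COMM_SELF, 1, &user->lam);
VecDuplicate(user->lam, &r);                      /* residual work vector */
SNESSetFunction(user->snes, r, projector, user);  /* the function above */
SNESSetFromOptions(user->snes);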

I keep wondering if I should just write this as a global problem, with the data structures stored on process 0, just to avoid this kind of headache.

> 
>> 2.  Is there an obvious way around this redundancy?
>> 
>> -gideon
