[petsc-users] Slepc, shell matrix, parallel, halo exchange

Jose E. Roman jroman at dsic.upv.es
Mon Oct 10 10:49:28 CDT 2022



> On 10 Oct 2022, at 17:42, feng wang <snailsoar at hotmail.com> wrote:
> 
> Hi Mat,
> 
> Thanks for your reply. It seems I have to use "VecSetValues" to assign the values of my ghost vector "petsc_dcsv" and then call VecAssemblyBegin/End. If I do it this way, the ghost cells are exchanged correctly.
> 
> Besides, I notice that when I run my code sequentially or with multiple processes, the computed eigenvalues are similar, but the number of iterations needed to reach the specified "-eps_tol" differs and the relative residuals are also slightly different. Is this normal? I am using the default Krylov-Schur solver and double precision.

The number of iterations depends on the initial vector, which is random by default, and the random vector is not the same when you change the number of processes. So yes, it is normal. If you want to get the same convergence history, use a constant initial vector set via EPSSetInitialSpace(), or alternatively use the undocumented option -bv_reproducible_random.
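For example, a minimal sketch of setting a constant initial vector (assuming eps is your EPS object and A is the shell matrix passed to EPSSetOperators; names are illustrative):

    Vec v0;
    PetscCall(MatCreateVecs(A, &v0, NULL));     /* vector with the same parallel layout as A */
    PetscCall(VecSet(v0, 1.0));                 /* any deterministic content will do */
    PetscCall(EPSSetInitialSpace(eps, 1, &v0));
    PetscCall(VecDestroy(&v0));                 /* EPS keeps its own reference */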

Jose

> 
> Thanks,
> Feng
> From: Matthew Knepley <knepley at gmail.com>
> Sent: 09 October 2022 12:11
> To: feng wang <snailsoar at hotmail.com>
> Cc: Jose E. Roman <jroman at dsic.upv.es>; petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>  
> On Fri, Oct 7, 2022 at 5:48 PM feng wang <snailsoar at hotmail.com> wrote:
> Hi Mat,
> 
> I've tried the suggested approach, but somehow the halo cells are not exchanged. Below is how I do it; have I missed anything?
> 
> I create a ghost vector petsc_dcsv as a data member of the class cFdDomain, which is the context of the shell matrix.
> 
>    PetscCall(VecCreateGhostBlock(*A_COMM_WORLD, blocksize, blocksize*nlocal, PETSC_DECIDE, nghost, ighost, &petsc_dcsv));
> 
> blocksize and nv have the same value. nlocal is the number of local cells and nghost is the number of halo cells. ighost contains the ghost cell indices.
> 
> Below is how I compute a matrix-vector product with the shell matrix:
> 
>    PetscErrorCode cFdDomain::mymult_slepc(Mat m, Vec x, Vec y)
>    {
>       void           *ctx;
>       cFdDomain      *myctx;
>       PetscErrorCode  ierr;
> 
>       /* recover the user context stored in the shell matrix */
>       ierr = MatShellGetContext(m, &ctx); CHKERRQ(ierr);
>       myctx = (cFdDomain*)ctx;
> 
>       /* matrix-vector product: y = A*x */
>       ierr = myctx->myfunc(x, y); CHKERRQ(ierr);
> 
>       return 0;
>    }
> 
> 
>    PetscErrorCode cFdDomain::myfunc(Vec in, Vec out)
>    {
>       //some declaration
> 
>       ierr = VecGetArray(petsc_dcsv, &array_g); CHKERRQ(ierr);
>       ierr = VecGetArrayRead(in, &array); CHKERRQ(ierr);
> 
>       //copy in to petsc_dcsv, local cells only (blocked layout: nv values per cell)
>       for(iv=0; iv<nv; iv++)
>       {
>          for(iq=0; iq<nlocal; iq++)
>          {
>             array_g[iv + nv*iq] = array[iv + nv*iq];
>          }
>       }
> 
>       ierr = VecRestoreArray(petsc_dcsv, &array_g); CHKERRQ(ierr);
>       ierr = VecRestoreArrayRead(in, &array); CHKERRQ(ierr);
> 
>       //exchange the halo cells
>       PetscCall(VecGhostUpdateBegin(petsc_dcsv, INSERT_VALUES, SCATTER_FORWARD));
>       PetscCall(VecGhostUpdateEnd(petsc_dcsv, INSERT_VALUES, SCATTER_FORWARD));
>       PetscCall(VecGhostGetLocalForm(petsc_dcsv, &veclocal));
> 
>       //read local and halo values from the local form into dq
>       ierr = VecGetArray(veclocal, &array_ghost); CHKERRQ(ierr);
>       for(iv=0; iv<nv; iv++)
>       {
>          //owned cells
>          for(iq=0; iq<nlocal; iq++)
>          {
>             jq = ilocal[iq];
>             dq[iv][jq] = array_ghost[iv + nv*iq];
>          }
> 
>          //ghost cells are stored after the owned cells in the local form
>          for(iq=nlocal; iq<nlocal+nghost; iq++)
>          {
>             jq = ighost_local[iq-nlocal];
>             dq[iv][jq] = array_ghost[iv + nv*iq];
>          }
>       }
>       ierr = VecRestoreArray(veclocal, &array_ghost); CHKERRQ(ierr);
> 
>       //some computations
> 
>       PetscCall(VecGhostRestoreLocalForm(petsc_dcsv, &veclocal));
> 
>       return 0;
>    }
> 
> 
> So I fill the local part of the ghost vector petsc_dcsv on each rank and then call the ghost update, expecting this to update the halo cells. It does not seem to do that.
> 
> I can only think you are misinterpreting the result. There are many examples, such as
> 
>   src/vec/tutorials/ex9.c (and ex9f.F)
> 
> I would start there and try to change that into the communication you want, since it definitely works. I cannot
> see a problem with the code snippet above.
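> For reference, a minimal standalone check of the ghost exchange in the spirit of that example (a rough sketch; the sizes and names are illustrative, run it on at least two ranks):
> 
>     Vec         gx, lx;
>     PetscInt    nlocal = 2, nghost = 1, ighost[1], rstart, i;
>     PetscScalar *a;
>     PetscMPIInt rank, size;
> 
>     PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
>     PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));
>     ighost[0] = ((rank + 1) % size) * nlocal;            /* first entry owned by the next rank */
>     PetscCall(VecCreateGhost(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE, nghost, ighost, &gx));
> 
>     /* fill only the locally owned entries (value = global index) */
>     PetscCall(VecGetOwnershipRange(gx, &rstart, NULL));
>     PetscCall(VecGetArray(gx, &a));
>     for (i = 0; i < nlocal; i++) a[i] = (PetscScalar)(rstart + i);
>     PetscCall(VecRestoreArray(gx, &a));
> 
>     /* exchange ghost values; in the local form, entries nlocal..nlocal+nghost-1
>        should now hold the values owned by the neighbouring rank */
>     PetscCall(VecGhostUpdateBegin(gx, INSERT_VALUES, SCATTER_FORWARD));
>     PetscCall(VecGhostUpdateEnd(gx, INSERT_VALUES, SCATTER_FORWARD));
>     PetscCall(VecGhostGetLocalForm(gx, &lx));
>     PetscCall(VecView(lx, PETSC_VIEWER_STDOUT_SELF));
>     PetscCall(VecGhostRestoreLocalForm(gx, &lx));
>     PetscCall(VecDestroy(&gx));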
> 
>   Thanks,
> 
>       Matt
>  
> Thanks,
> Feng
> 
> From: Matthew Knepley <knepley at gmail.com>
> Sent: 21 September 2022 14:36
> To: feng wang <snailsoar at hotmail.com>
> Cc: Jose E. Roman <jroman at dsic.upv.es>; petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>  
> On Wed, Sep 21, 2022 at 10:35 AM feng wang <snailsoar at hotmail.com> wrote:
> Hi Jose,
> 
> For your 2nd suggestion on halo exchange, I get the idea and roughly know how to do it, but there are some implementation details I am not quite sure about.
> 
> If I understand it correctly, in MatMult(Mat m, Vec x, Vec y), Vec x is a normal parallel vector and it does not contain halo values. Suppose I create an auxiliary ghost vector x_g and assign the values of x to x_g. The halo values for each partition will not be assigned at this stage.
> 
> But if I call VecGhostUpdateBegin/End(x_g, INSERT_VALUES, SCATTER_FORWARD), this will fill the halo cells of x_g for each partition. Then x_g has local and halo cells assigned correctly and I can use x_g to do my computation. Is this what you mean?
> 
> Yes
> 
>   Matt
>  
> Thanks,
> Feng
> 
> From: Jose E. Roman <jroman at dsic.upv.es>
> Sent: 21 September 2022 13:07
> To: feng wang <snailsoar at hotmail.com>
> Cc: Matthew Knepley <knepley at gmail.com>; petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>  
> 
> 
> > On 21 Sept 2022, at 14:47, feng wang <snailsoar at hotmail.com> wrote:
> > 
> > Thanks Jose, I will try this and will come back to this thread if I have any issue.
> > 
> > Besides, for EPSGetEigenpair, I guess each rank gets its portion of the eigenvector, and I need to put them together afterwards?
> 
> Eigenvectors are stored in parallel vectors, which are used in subsequent parallel computations in most applications. If for some reason you need to gather them on a single MPI process, you can use e.g. VecScatterCreateToZero().
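> For example, a rough sketch (assuming xr is the parallel eigenvector obtained from EPSGetEigenpair; names are illustrative):
> 
>     Vec        xseq;
>     VecScatter scat;
> 
>     /* gather the distributed eigenvector onto rank 0; xseq has the full length
>        on rank 0 and length zero on the other ranks */
>     PetscCall(VecScatterCreateToZero(xr, &scat, &xseq));
>     PetscCall(VecScatterBegin(scat, xr, xseq, INSERT_VALUES, SCATTER_FORWARD));
>     PetscCall(VecScatterEnd(scat, xr, xseq, INSERT_VALUES, SCATTER_FORWARD));
>     /* ... use xseq on rank 0 ... */
>     PetscCall(VecScatterDestroy(&scat));
>     PetscCall(VecDestroy(&xseq));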
> 
> > 
> > Thanks,
> > Feng
> > 
> > From: Jose E. Roman <jroman at dsic.upv.es>
> > Sent: 21 September 2022 12:34
> > To: feng wang <snailsoar at hotmail.com>
> > Cc: Matthew Knepley <knepley at gmail.com>; petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> > Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
> >  
> > If you define the MATOP_CREATE_VECS operation in your shell matrix so that it creates a ghost vector, then all vectors within EPS will be ghost vectors, including those that are received as arguments of MatMult(). Not sure if this will work.
> > 
> > A simpler solution is to store a ghost vector in the context of your shell matrix. In MatMult() you receive a regular parallel vector x; copy it into the auxiliary ghost vector, update the ghost points, do the computation, and store the result in the regular parallel vector y.
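> > 
> > A rough sketch of such a MatMult (ctx->xg stands for the ghost vector stored in the shell matrix context; MyCtx and the other names are only illustrative, in your code the context would be the cFdDomain object):
> > 
> >     PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
> >     {
> >       MyCtx             *ctx;
> >       Vec                xloc;
> >       const PetscScalar *xa;
> >       PetscScalar       *ga;
> >       PetscInt           i, nlocal;
> > 
> >       PetscCall(MatShellGetContext(A, &ctx));
> >       /* copy the owned part of x into the ghost vector kept in the context */
> >       PetscCall(VecGetLocalSize(x, &nlocal));
> >       PetscCall(VecGetArrayRead(x, &xa));
> >       PetscCall(VecGetArray(ctx->xg, &ga));
> >       for (i = 0; i < nlocal; i++) ga[i] = xa[i];
> >       PetscCall(VecRestoreArray(ctx->xg, &ga));
> >       PetscCall(VecRestoreArrayRead(x, &xa));
> >       /* bring the ghost (halo) values up to date */
> >       PetscCall(VecGhostUpdateBegin(ctx->xg, INSERT_VALUES, SCATTER_FORWARD));
> >       PetscCall(VecGhostUpdateEnd(ctx->xg, INSERT_VALUES, SCATTER_FORWARD));
> >       /* the local form contains the owned entries followed by the ghost entries */
> >       PetscCall(VecGhostGetLocalForm(ctx->xg, &xloc));
> >       /* ... apply the operator using xloc and write the result into y ... */
> >       PetscCall(VecGhostRestoreLocalForm(ctx->xg, &xloc));
> >       return 0;
> >     }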
> > 
> > Jose
> > 
> > 
> > > On 21 Sept 2022, at 14:09, feng wang <snailsoar at hotmail.com> wrote:
> > > 
> > > Thanks for your reply. 
> > > 
> > > For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc, it only takes the shell matrix for EPSSetOperators. Suppose the shell matrix of the eigensolver defines MatMult(Mat m, Vec x, Vec y); how does it know Vec x is a ghost vector and how many ghost cells there are?
> > > 
> > > Thanks,
> > > Feng
> > > From: Matthew Knepley <knepley at gmail.com>
> > > Sent: 21 September 2022 11:58
> > > To: feng wang <snailsoar at hotmail.com>
> > > Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> > > Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
> > >  
> > > On Wed, Sep 21, 2022 at 7:41 AM feng wang <snailsoar at hotmail.com> wrote:
> > > Hello,
> > > 
> > > I am using Slepc with a shell matrix. The sequential version seems to be working and now I am trying to make it run in parallel.
> > > 
> > > The partitioning of the domain is done, but I am not sure how to do the halo exchange in the shell matrix in Slepc. I have a parallel version of matrix-free GMRES in my code with Petsc. There I was using VecCreateGhostBlock to create a vector with ghost cells, and then used VecGhostUpdateBegin/End for the halo exchange in the shell matrix. Would this be the same for Slepc?
> > > 
> > > That will be enough for the MatMult(). You would also have to use a SLEPc EPS that only needed MatMult().
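> > > For example, the overall setup could look roughly like this (a sketch only; nloc is the local problem size per rank, ctx your context object, and MyMatMult a function with signature PetscErrorCode (Mat, Vec, Vec) that performs the product; the default Krylov-Schur solver only needs MatMult):
> > > 
> > >     Mat A;
> > >     EPS eps;
> > > 
> > >     /* matrix-free operator: only MatMult is provided */
> > >     PetscCall(MatCreateShell(PETSC_COMM_WORLD, nloc, nloc, PETSC_DETERMINE, PETSC_DETERMINE, (void*)ctx, &A));
> > >     PetscCall(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMatMult));
> > > 
> > >     PetscCall(EPSCreate(PETSC_COMM_WORLD, &eps));
> > >     PetscCall(EPSSetOperators(eps, A, NULL));
> > >     PetscCall(EPSSetProblemType(eps, EPS_NHEP));
> > >     PetscCall(EPSSetFromOptions(eps));       /* picks up -eps_nev, -eps_tol, ... */
> > >     PetscCall(EPSSolve(eps));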
> > > 
> > >   Thanks,
> > > 
> > >      Matt
> > >  
> > > Thanks,
> > > Feng 
> > > 
> > > 
> > > 
> > > 
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/


