[petsc-users] virtual nodes at process boundaries
praveen kumar
praveenpetsc at gmail.com
Sat May 7 09:09:23 CDT 2016
Thanks a lot Barry.
Thanks,
Praveen
On Sat, May 7, 2016 at 12:58 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On May 6, 2016, at 2:15 PM, praveen kumar <praveenpetsc at gmail.com> wrote:
> >
> > I didn't frame the question properly. Suppose we have a grid with vertices
> > numbered 1 to N and we break it into two pieces, (1,N/2) and (N/2+1,N). As
> > it is an FVM code, the DM boundary type is DM_BOUNDARY_GHOSTED. Nodes 0 and
> > N+1 lie to the left of node 1 and to the right of node N, each at a
> > distance of dx/2. Let me call 0 and N+1 virtual nodes, where the boundary
> > conditions are applied. As you know, virtual nodes don't take part in the
> > computation and are different from what we call ghost nodes in
> > parallel-computing terminology. In the serial code the problem is solved by
> > CALL TDMA(0,N+1).
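A minimal setup sketch of the layout described above, assuming a 1D,
single-dof DMDA and the module-based PETSc Fortran interface; the grid size
nx is illustrative, and the exact call sequence varies slightly between PETSc
releases (older ones do not need DMSetFromOptions/DMSetUp).

    program ghosted_da
#include <petsc/finclude/petscdmda.h>
    use petscdmda
    implicit none

    DM             :: da
    Vec            :: gvec, lvec
    PetscErrorCode :: ierr
    PetscInt       :: nx, dof, sw

    call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

    nx  = 100      ! illustrative global number of nodes
    dof = 1
    sw  = 1        ! stencil width: one ghost node on each side

    ! DM_BOUNDARY_GHOSTED adds one extra (virtual) slot outside each physical
    ! boundary; the stencil width gives one ghost node at each interior
    ! process boundary.  Both exist only in the local (ghosted) vector.
    call DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_GHOSTED,nx,dof,sw, &
                      PETSC_NULL_INTEGER,da,ierr)   ! newest releases may spell this PETSC_NULL_INTEGER_ARRAY
    call DMSetFromOptions(da,ierr)
    call DMSetUp(da,ierr)

    call DMCreateGlobalVector(da,gvec,ierr)   ! owned nodes only
    call DMCreateLocalVector(da,lvec,ierr)    ! owned nodes + ghost/virtual slots

    call VecDestroy(lvec,ierr)
    call VecDestroy(gvec,ierr)
    call DMDestroy(da,ierr)
    call PetscFinalize(ierr)
    end program ghosted_da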
>
> I don't know why you want a concept of "virtual nodes" that is
> different from "ghost nodes"?
>
>
> > I've decomposed the domain using PETSc, and I've replaced the indices in
> > the serial code's DO loops with the information from DMDAGetCorners.
> > If I want to solve the problem using TDMA on process 0, it is not
> > possible, as process 0 doesn't contain a virtual node at its right
> > boundary, i.e. CALL TDMA(0,X), where X should be at a distance of dx/2
> > from N/2 but is not there. Similarly, process 1 has no virtual node at
> > its left boundary. So how can I create these virtual nodes at the process
> > boundaries? I want to set the variable value at X to the previous
> > time-step/iteration value.
>
> Why? Don't you have to do some kind of iteration where you update the
> boundary conditions from the other processes, solve the local problem, and
> then repeat until the solution is converged? This is the normal thing
> people do with domain decomposition type solvers and is very easy with
> DMDA. How can you hope to get the solution correct without an iteration
> passing ghost values? If you just put values from some previous time-step
> in the ghost locations, then each process will solve a local problem, but
> the results won't match between processes, so they will be garbage, won't
> they?
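That iteration would look roughly like the sketch below, reusing the DMDA and
vectors from the setup sketch earlier; the averaging loop in the middle is
only a stand-in for the serial CALL TDMA on the local block, and the
iteration cap and tolerance are arbitrary.

    subroutine iterate_blocks(da,gvec,lvec,ierr)
#include <petsc/finclude/petscdmda.h>
    use petscdmda
    implicit none
    DM                   :: da
    Vec                  :: gvec, lvec
    PetscErrorCode       :: ierr

    PetscScalar, pointer :: x(:)
    PetscScalar          :: mone
    PetscReal            :: dnorm
    PetscInt             :: xs, xm, i, it
    Vec                  :: gold

    mone = -1.0
    call VecDuplicate(gvec,gold,ierr)
    call DMDAGetCorners(da,xs,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
                        xm,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,ierr)

    do it = 1, 1000
       call VecCopy(gvec,gold,ierr)

       ! refresh the interior-boundary ghost slots with the neighbours'
       ! latest values (the slots outside the physical domain keep whatever
       ! boundary values the code has stored there)
       call DMGlobalToLocalBegin(da,gvec,INSERT_VALUES,lvec,ierr)
       call DMGlobalToLocalEnd(da,gvec,INSERT_VALUES,lvec,ierr)

       call DMDAVecGetArrayF90(da,lvec,x,ierr)
       ! owned nodes are x(xs) .. x(xs+xm-1); x(xs-1) and x(xs+xm) hold the
       ! ghost/virtual values.  Stand-in for CALL TDMA on the local block:
       do i = xs, xs+xm-1
          x(i) = 0.5*(x(i-1) + x(i+1))
       end do
       call DMDAVecRestoreArrayF90(da,lvec,x,ierr)

       ! copy the owned part of the local solution back to the global vector
       call DMLocalToGlobalBegin(da,lvec,INSERT_VALUES,gvec,ierr)
       call DMLocalToGlobalEnd(da,lvec,INSERT_VALUES,gvec,ierr)

       ! stop when the solution stops changing between sweeps
       call VecAXPY(gold,mone,gvec,ierr)
       call VecNorm(gold,NORM_INFINITY,dnorm,ierr)
       if (dnorm < 1.0e-8) exit
    end do

    call VecDestroy(gold,ierr)
    end subroutine iterate_blocks

With INSERT_VALUES the local-to-global copy writes back only the owned
entries, so the ghost slots never overwrite a neighbour's values.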
>
> Barry
>
>
> > I'm not sure whether my methodology is correct or not. If you think it
> > is very cumbersome, please suggest something else.
> >
>
>
> > Thanks,
> > Praveen
> >
> > On Fri, May 6, 2016 at 8:00 PM, praveen kumar <praveenpetsc at gmail.com> wrote:
> > Thanks Matt, thanks Barry. I'll get back to you.
> >
> > Thanks,
> > Praveen
> >
> > On Fri, May 6, 2016 at 7:48 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > > On May 6, 2016, at 5:08 AM, praveen kumar <praveenpetsc at gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > I am trying to implement PETSc for domain decomposition in a serial
> > > Fortran FVM code. I want to use the solver from the serial code itself.
> > > The solver consists of Gauss-Seidel + TDMA. BCs are given along with the
> > > solver at boundary virtual nodes, e.g. CALL TDMA(0,nx+1), where the BCs
> > > are given at 0 and nx+1, which are virtual nodes (they don't take part
> > > in the computation). I partitioned the domain using DMDACreate and got
> > > the ghost node information using DMDAGetCorners. But how do I create the
> > > virtual nodes at the process boundaries where the BCs are to be given?
> > > Please suggest all the possibilities to fix this, other than using PETSc
> > > for solver parallelization.
> >
> > DMCreateGlobalVector(dm,gvector,ierr);
> > DMCreateLocalVector(dm,lvector,ierr);
> >
> > /* fill up gvector with an initial guess or whatever */
> >
> > DMGlobalToLocalBegin(dm,gvector,INSERT_VALUES,lvector,ierr)
> > DMGlobalToLocalEnd(dm,gvector,INSERT_VALUES,lvector,ierr)
> >
> > Now the vector lvector has the ghost values you can use
> >
> > DMDAVecGetArrayF90(dm,lvector,fortran_array_pointer,ierr)
> > (where fortran_array_pointer has the correct dimension for your problem:
> > 1d, 2d, or 3d)
> >
> > Note that the indexing into the fortran_array_pointer uses the
> > global indexing, not the local indexing. You can use DMDAGetCorners() to
> > get the start and end indices for each process.
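For the 1D, single-dof case discussed in this thread, that indexing works out
roughly as in the fragment below (PETSc's global 0-based numbering;
left_bc_value, right_bc_value, and nx are illustrative names, not PETSc API).

    subroutine set_virtual_nodes(dm,lvector,nx,left_bc_value,right_bc_value,ierr)
#include <petsc/finclude/petscdmda.h>
    use petscdmda
    implicit none
    DM             :: dm
    Vec            :: lvector
    PetscInt       :: nx
    PetscScalar    :: left_bc_value, right_bc_value
    PetscErrorCode :: ierr

    PetscScalar, pointer :: x(:)
    PetscInt             :: xs, xm

    call DMDAGetCorners(dm,xs,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
                        xm,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,ierr)
    call DMDAVecGetArrayF90(dm,lvector,x,ierr)

    ! Global (0-based) indexing: this process owns x(xs) .. x(xs+xm-1); with
    ! stencil width 1 the extra slots are x(xs-1) and x(xs+xm).  At an
    ! interior process boundary they have just been filled by
    ! DMGlobalToLocalBegin/End; at the two ends of the physical domain they
    ! are the virtual nodes and must be set by the code itself:
    if (xs == 0)     x(-1) = left_bc_value    ! virtual node left of node 0
    if (xs+xm == nx) x(nx) = right_bc_value   ! virtual node right of node nx-1

    call DMDAVecRestoreArrayF90(dm,lvector,x,ierr)
    end subroutine set_virtual_nodes

In the 1-based numbering of the original question, PETSc's index -1 plays the
role of virtual node 0 and index nx plays the role of virtual node N+1.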
> >
> > Barry
> >
> >
> >
> > >
> > > Thanks,
> > > Praveen
> > >
> >
> >
> >
>
>