[petsc-users] partial stencil in DMDA?

Fischer, Greg A. fischega at westinghouse.com
Fri Feb 21 16:12:22 CST 2014


Barry,

Thanks for your time. I have a better understanding now. 

Greg 

>-----Original Message-----
>From: Barry Smith [mailto:bsmith at mcs.anl.gov]
>Sent: Friday, February 21, 2014 4:48 PM
>To: Fischer, Greg A.
>Cc: petsc-users at mcs.anl.gov
>Subject: Re: [petsc-users] partial stencil in DMDA?
>
>
>On Feb 21, 2014, at 2:42 PM, Fischer, Greg A. <fischega at westinghouse.com>
>wrote:
>
>> Barry,
>>
>> If I'm interpreting correctly, the 2nd case would work fine for me.
>> However, perhaps I was previously over-complicating the problem. Could
>> I not just:
>>
>> 1. create a DMDA
>> 2. create a global vector with DOF=1
>> 3. call VecDuplicateVecs() against the DOF=1 global vector for my
>>    other ~100 DOFs
>>
>> and then independently manage the duplicated vectors with the array
>> returned by VecDuplicateVecs()?
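>>
>> In code I am picturing roughly this (untested sketch; da is the dof=1
>> DMDA from step 1, and ndof = 100 is a placeholder for my real count):
>>
>>    Vec      base, *fields;
>>    PetscInt ndof = 100;
>>    DMCreateGlobalVector(da, &base);           /* step 2: dof=1 vector */
>>    VecDuplicateVecs(base, ndof - 1, &fields); /* step 3: the others   */
>>    /* ... manage base and fields[0..ndof-2] independently ... */
>>    VecDestroyVecs(ndof - 1, &fields);
>>    VecDestroy(&base);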
>
>   You can do this, and depending on your needs it can be fine. It
>really depends on what computations you are doing on the vectors. We
>refer to the two ways of storing the data as interlaced, x0 y0 z0 x1 y1
>z1 ... xn yn zn (where dof is 3 in this case), and noninterlaced, x0 x1
>... xn  y0 y1 ... yn  z0 z1 ... zn.
>Interlaced can be much faster if you have calculations that involve,
>say, x_p y_p z_p together, since they are stored together in memory and
>you get good use of cache and cache lines. If you have calculations
>that involve some x, then later some y, then later some z, then
>noninterlaced is better. If you have a calculation that involves x_p
>y_p ... z_p where there are 100 dof, then with noninterlaced storage
>just loading these values will touch 100 cache lines and be
>inefficient, so interlaced is better.
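>
>   As a rough illustration of the two access patterns (sketch only; da
>is a dof=ndof DMDA, da1 a dof=1 DMDA over the same grid, xglobal and
>fields[] are vectors from them, and the loop bodies are placeholders):
>
>    PetscScalar ***u;
>    PetscInt    i, j, c, xs, ys, xm, ym;
>    DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);
>    /* interlaced: all ndof values for a point are adjacent in memory */
>    DMDAVecGetArrayDOF(da, xglobal, &u);
>    for (j = ys; j < ys + ym; j++)
>      for (i = xs; i < xs + xm; i++)
>        for (c = 0; c < ndof; c++)
>          u[j][i][c] *= 2.0;
>    DMDAVecRestoreArrayDOF(da, xglobal, &u);
>    /* noninterlaced: each component is its own Vec (VecDuplicateVecs),
>       so sweeping one component at a time stays contiguous */
>    for (c = 0; c < ndof; c++) {
>      PetscScalar **f;
>      DMDAVecGetArray(da1, fields[c], &f);
>      for (j = ys; j < ys + ym; j++)
>        for (i = xs; i < xs + xm; i++)
>          f[j][i] *= 2.0;
>      DMDAVecRestoreArray(da1, fields[c], &f);
>    }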
>>
>> Also, when you mentioned changes to DMSetUp_DA_2D() in your original
>> reply, you're suggesting that I could modify the PETSc source code,
>> not use some pre-existing interface, correct?
>
>   Yes
>
>>
>> Thanks,
>> Greg
>>
>>> -----Original Message-----
>>> From: Barry Smith [mailto:bsmith at mcs.anl.gov]
>>> Sent: Friday, February 21, 2014 3:20 PM
>>> To: Fischer, Greg A.
>>> Cc: petsc-users at mcs.anl.gov
>>> Subject: Re: [petsc-users] partial stencil in DMDA?
>>>
>>>
>>>  Greg,
>>>
>>>    The general mechanism for moving elements of a vector between
>>> processes is VecScatterCreate() with VecScatterBegin/End(); with
>>> them you can indicate exactly which values move and to where, but it
>>> is all based on one-dimensional indexing into the vector.
>>>
>>>   DMDA provides a subset of communication for "structured meshes"
>>> (essentially 1, 2, or 3 dimensional arrays split among processes
>>> with horizontal and vertical cuts). It is pretty much all or nothing
>>> in terms of communicating neighbor information.
>>>
>>>   VecCreateGhost() uses the VecScatter mechanism to set up a SINGLE
>>> communication pattern between a Vec and its ghosted partner Vec.
>>>
>>>  Based on your email: "However, when communicating ghost values, not
>>> all of those degrees of freedom should be exchanged at once," it
>>> sounds like you need several (many) communication patterns, each
>>> requiring different ghost entries.
>>>
>>> 1) The hard general case: if within each communication pattern you
>>> may need several entries from one grid point and a different number
>>> of entries from another grid point, then VecScatterCreate() (called
>>> multiple times, once for each pattern) and VecScatterBegin/End() are
>>> intended for this purpose. However, if you are working with a two
>>> dimensional grid/array that you want to ghost, you first need to map
>>> your ghosting patterns from the 2d indexing to the 1d indexing that
>>> VecScatterCreate() expects, which is a bit painful. You can look at
>>> the routine I mentioned before, DMSetUp_DA_2D(), but it would need
>>> to be heavily modified.
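>>>
>>>  As a very rough sketch of the scatter mechanics (the indices here
>>> are made up for illustration, not computed the way DMSetUp_DA_2D()
>>> would; xglobal is the distributed Vec, xghost a local work Vec):
>>>
>>>    IS         from, to;
>>>    VecScatter scat;
>>>    PetscInt   src[2] = {11, 57};  /* 1d global indices to pull      */
>>>    PetscInt   dst[2] = {0, 1};    /* slots in the local work vector */
>>>    ISCreateGeneral(PETSC_COMM_SELF, 2, src, PETSC_COPY_VALUES, &from);
>>>    ISCreateGeneral(PETSC_COMM_SELF, 2, dst, PETSC_COPY_VALUES, &to);
>>>    VecScatterCreate(xglobal, from, xghost, to, &scat);
>>>    VecScatterBegin(scat, xglobal, xghost, INSERT_VALUES, SCATTER_FORWARD);
>>>    VecScatterEnd(scat, xglobal, xghost, INSERT_VALUES, SCATTER_FORWARD);
>>>    ISDestroy(&from);
>>>    ISDestroy(&to);
>>>
>>>  In real code you would create one such scatter per pattern, keep it
>>> around, and call VecScatterBegin/End() each time it is needed.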
>>>
>>> 2) The easy case: if, for example, you need just the first component
>>> from each point communicated as a ghost, then next the second, and
>>> then the third, you can avoid custom communication patterns
>>> entirely. Just create two DMDAs, one with a dof of 100 (or whatever
>>> it is for your case) and one with a dof of 1. Then use
>>> VecStrideGather() to pull the one component out of the global vector
>>> (with the dof of 100) into a Vec obtained with DMCreateGlobalVector()
>>> from the dof 1 DMDA, and use DMGlobalToLocalBegin/End() on the dof 1
>>> DMDA. Now you have the vector you want, with the single ghost value
>>> at each point.
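>>>
>>>  In code the easy case is roughly (sketch; assumes da100 with
>>> dof=100 and da1 with dof=1 were created over the same grid, and comp
>>> is the component you want ghosted):
>>>
>>>    Vec      g100, g1, l1;
>>>    PetscInt comp = 3;
>>>    DMCreateGlobalVector(da100, &g100);
>>>    DMCreateGlobalVector(da1, &g1);
>>>    DMCreateLocalVector(da1, &l1);
>>>    VecStrideGather(g100, comp, g1, INSERT_VALUES); /* one component */
>>>    DMGlobalToLocalBegin(da1, g1, INSERT_VALUES, l1);
>>>    DMGlobalToLocalEnd(da1, g1, INSERT_VALUES, l1);
>>>    /* l1 now holds just that component, with its ghost points */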
>>>
>>>  Barry
>>>
>>>
>>>
>>>
>>> On Feb 21, 2014, at 1:12 PM, Fischer, Greg A.
>>> <fischega at westinghouse.com>
>>> wrote:
>>>
>>>> Barry,
>>>>
>>>> Thanks!  I have another question. The user manual says:
>>>>
>>>>                PETSc currently provides no container for multiple
>>>> arrays sharing the same distributed array communication; note,
>>>> however, that the dof parameter handles many cases of interest.
>>>>
>>>> In my application, each spatial location will have on the order of
>>>> hundreds of values associated with it (which I believe translates
>>>> to dof = O(100); I don't see "degrees of freedom" explicitly
>>>> defined anywhere). However, when communicating ghost values, not
>>>> all of those degrees of freedom should be exchanged at once. I need
>>>> to be able to exchange one at a time.
>>>>
>>>> It sounds like what I may want to do is use VecCreateGhost(), which
>>>> would allow me to define exactly where the ghost points are, and
>>>> then duplicate that vector using VecDuplicateVecs() for each DOF. I
>>>> can then scatter the vectors individually as the need arises. Does
>>>> that sound reasonable?
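>>>>
>>>> Roughly this (untested sketch; nlocal and the ghost indices are
>>>> made-up placeholders for my real layout):
>>>>
>>>>    Vec      x, *comps;
>>>>    PetscInt ndof = 100, nlocal = 1000, c;
>>>>    PetscInt ghosts[2] = {0, 1};  /* global indices of ghost points */
>>>>    VecCreateGhost(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE, 2,
>>>>                   ghosts, &x);
>>>>    VecDuplicateVecs(x, ndof, &comps); /* one ghosted Vec per DOF */
>>>>    /* whenever component c needs fresh ghost values: */
>>>>    c = 0;
>>>>    VecGhostUpdateBegin(comps[c], INSERT_VALUES, SCATTER_FORWARD);
>>>>    VecGhostUpdateEnd(comps[c], INSERT_VALUES, SCATTER_FORWARD);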
>>>>
>>>> Greg
>>>>
>>>>> -----Original Message-----
>>>>> From: Barry Smith [mailto:bsmith at mcs.anl.gov]
>>>>> Sent: Friday, February 21, 2014 12:21 PM
>>>>> To: Fischer, Greg A.
>>>>> Cc: petsc-users at mcs.anl.gov
>>>>> Subject: Re: [petsc-users] partial stencil in DMDA?
>>>>>
>>>>>
>>>>> On Feb 21, 2014, at 11:02 AM, Fischer, Greg A.
>>>>> <fischega at westinghouse.com>
>>>>> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I'm interested in using PETSc to manage distributed arrays. Based
>>>>>> on my reading about the DMDA objects, I see that ghost points can
>>>>>> be communicated in box-type stencils or star-type stencils.
>>>>>>
>>>>>> For my application, assume that I have a 2D DMDA object with
>>>>>> star-type stencils. For a typical local calculation, I only need
>>>>>> to access ghost values from two of the four directions at a time.
>>>>>> For example, I'd like access to ghost values in the South and
>>>>>> East directions, but not in the North or West directions.
>>>>>> Communicating North and West data would seem to be wasting
>>>>>> bandwidth. Is there any way to accomplish this?
>>>>>
>>>>>  Greg,
>>>>>
>>>>>   There is not anything built in. Here is what I suggest:
>>>>>
>>>>> 1) write your application code without worrying about the fact
>>>>> that DMGlobalToLocalBegin/End() is moving values you don't need
>>>>> (the standard pattern is sketched after 2) below).
>>>>>
>>>>> 2) when your code is running correctly for your problem and giving
>>>>> useful results, if the communication times are impacting how long
>>>>> it takes to run, you can provide a custom communication pattern.
>>>>> It would involve little additional coding, essentially taking
>>>>> DMSetUp_DA_2D(), which creates the list of ghost points, and
>>>>> removing the unneeded ghost points. But it would be premature to
>>>>> do this optimization until you have a full working application
>>>>> code.
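>>>>>
>>>>> For 1), the standard pattern is just (sketch only; grid sizes are
>>>>> placeholders, and the calls follow current PETSc naming):
>>>>>
>>>>>    DM  da;
>>>>>    Vec xg, xl;
>>>>>    DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
>>>>>                 DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE,
>>>>>                 PETSC_DECIDE, 100, 1, NULL, NULL, &da);
>>>>>    DMSetFromOptions(da);
>>>>>    DMSetUp(da);
>>>>>    DMCreateGlobalVector(da, &xg);
>>>>>    DMCreateLocalVector(da, &xl);
>>>>>    /* moves ghost values for all 100 dof, needed or not */
>>>>>    DMGlobalToLocalBegin(da, xg, INSERT_VALUES, xl);
>>>>>    DMGlobalToLocalEnd(da, xg, INSERT_VALUES, xl);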
>>>>>
>>>>>  Barry
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Greg
>>>>>
>>>>>
>>>
>>>
>>
>>
>
>



