[petsc-users] DMDA objects while distributing 3d arrays

Matthew Knepley knepley at gmail.com
Thu Jan 19 21:54:57 CST 2017


On Thu, Jan 19, 2017 at 6:56 PM, Manuel Valera <mvalera at mail.sdsu.edu>
wrote:

> I've read some more, and ex13f90aux from the DM examples seems very
> similar to what I'm looking for. It says:
>
>   !
>   ! The following 4 subroutines handle the mapping of coordinates. I'll
>   ! explain this in detail:
>   !    PETSc gives you local arrays which are indexed using the global
>   ! indices. This is probably handy in some cases, but when you are
>   ! re-writing an existing serial code and want to use DMDAs, you have
>   ! tons of loops going from 1 to imax etc. that you don't want to change.
>   !    These subroutines re-map the arrays so that all the local arrays
>   ! go from 1 to the (local) imax.
>   !
>
> Could someone explain a little more about these functions,
> petsc_to_local() and local_to_petsc(), and especially why
> transform_petsc_us() and transform_us_petsc() are used?
>

This is one way to do things, which I do not necessarily agree with. The
larger point is that a scalable strategy is one where you only compute over
patches rather than the whole grid. This is usually trivial, since the
global bounds just become local bounds, and you are done.
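
For example, with a 3-D DMDA you get the owned patch from DMDAGetCorners()
and loop over it with global indices. A minimal sketch, assuming da and g
are an already-created 3-D DMDA and its global vector:

      PetscErrorCode ierr
      PetscInt xs,ys,zs,xm,ym,zm,i,j,k
      PetscScalar, pointer :: a(:,:,:)

      ! first owned global index (xs,ys,zs) and patch extent (xm,ym,zm)
      call DMDAGetCorners(da,xs,ys,zs,xm,ym,zm,ierr)
      call DMDAVecGetArrayF90(da,g,a,ierr)
      do k = zs, zs+zm-1
        do j = ys, ys+ym-1
          do i = xs, xs+xm-1
            a(i,j,k) = 0.0     ! global indices, owned entries only
          end do
        end do
      end do
      call DMDAVecRestoreArrayF90(da,g,a,ierr)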

With DMDA, we always use global indices, so there is no problem translating
anything written in terms of global indices. However, in parallel you should
note that you can only refer to values on your owned patch of the grid.
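
If you do want local arrays that run from 1, like the ex13f90aux helpers
produce, the underlying idea can be sketched with Fortran 2003 pointer
bounds remapping (the names aglob and aloc below are illustrative, not the
actual helper code):

      PetscScalar, pointer :: aglob(:,:,:)  ! runs xs:xs+xm-1 etc.
      PetscScalar, pointer :: aloc(:,:,:)   ! runs 1:xm, 1:ym, 1:zm

      call DMDAVecGetArrayF90(da,g,aglob,ierr)
      aloc(1:,1:,1:) => aglob               ! re-base lower bounds to 1
      ! ... legacy loops "do i = 1, xm" can now use aloc(i,j,k) ...
      call DMDAVecRestoreArrayF90(da,g,aglob,ierr)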

I hope this answers the question. If not, can you try to explain more
about what is not clear?

  Thanks,

    Matt


> Thanks,
>
> Manuel
>
> On Thu, Jan 19, 2017 at 2:01 PM, Manuel Valera <mvalera at mail.sdsu.edu>
> wrote:
>
>> Hello all,
>>
>> I'm currently pushing forward on the parallelization of my model. The
>> next step is to parallelize all the grids (pressure, temperature,
>> velocities, and such), which are stored as 3D arrays in Fortran.
>>
>> I'm following ex11f90.f, which is a good start. I have a couple of
>> questions about it:
>>
>>    1. In the example, a dummy vector g is created and the array values
>>    are loaded into it. Are the dimensions of this vector variable? The
>>    same dummy vector is used for 1D, 2D, and 3D, so I guess they are. I
>>    was planning to use matrix objects for the 3D arrays, but I guess a
>>    vector of this kind would be better suited?
>>    2. I also notice that a stride from the corners of the DMDA is used.
>>    I'm looking for a way to operate over the global indices of the array
>>    instead; can this be done? Is there a good example to follow? It
>>    would save us a lot of effort if we could just extend the existing
>>    operations on global indices to the DMDA objects.
>>    3. Next, I'm concerned about the degrees of freedom: how can I know
>>    how many dof my model has? We are following an Arakawa C-type grid.
>>    The same goes for the stencil type, which I guess is star type in my
>>    case; we use a 9-point stencil.
>>
>>
>> That is it for now. Thanks for your time,
>>
>> Manuel Valera
>>
>
>
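
As a starting point for the questions above: a DMDA global vector, not a
Mat, is the natural object for a distributed 3-D field, and both the number
of dof and the stencil type are just arguments to DMDACreate3d(). A minimal
sketch using the current Fortran module interface, with the grid sizes and
dof = 1 chosen purely for illustration (you will have to pick the dof that
matches your discretization):

      program dmda_sketch
#include <petsc/finclude/petscdmda.h>
      use petscdmda
      implicit none
      DM             da
      Vec            g
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      ! 10x10x10 global grid, star stencil, 1 dof, stencil width 1;
      ! PETSC_DECIDE lets PETSc choose the process decomposition
      call DMDACreate3d(PETSC_COMM_WORLD,                               &
     &     DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,         &
     &     DMDA_STENCIL_STAR,10,10,10,                                 &
     &     PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,                 &
     &     PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,   &
     &     da,ierr)
      call DMSetUp(da,ierr)
      call DMCreateGlobalVector(da,g,ierr)
      ! ... fill g over the owned patch as sketched earlier ...
      call VecDestroy(g,ierr)
      call DMDestroy(da,ierr)
      call PetscFinalize(ierr)
      end program dmda_sketch

One caveat on the stencil: a width-1 DMDA_STENCIL_STAR gives 5 points in
2-D and 7 points in 3-D; a 9-point stencil that uses corner neighbors needs
DMDA_STENCIL_BOX.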


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

