DA memory consumption

Matthew Knepley knepley at gmail.com
Sun Nov 22 20:41:24 CST 2009


It is not simple, but it is scalable, meaning that in the limit of large N the
memory per processor is constant. When the DA is created, the VecScatter
objects mapping between global and local vectors are created along with it.

  Matt

On Sun, Nov 22, 2009 at 8:29 PM, Denis Teplyashin <denist at al.com.au> wrote:

> Hi guys,
>
> I'm a bit confused about distributed array memory consumption. I did a
> simple test like this one:
>  ierr = DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, 1000,
> 1000, 1000, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 1, 1, PETSC_NULL,
> PETSC_NULL, PETSC_NULL , &da);
> and then checked memory with PetscMemoryGetCurrentUsage and
> PetscMemoryGetMaximumUsage. Running this test under MPI on one core gives me
> this result: current usage 3818 MB and maximum usage 7633 MB. And that is the
> result after creating just the DA, without any actual vectors. Running the
> same test on two cores gives an even more interesting result: rank 0 -
> 9552/11463 MB and rank 1 - 5735/5732 MB.
> Is this what I should expect in general, or am I doing something wrong? Is
> there a simple formula which could show how much memory I would need to
> allocate an array with a given resolution?
>
> Thanks in advance,
> Denis
>



-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener