[petsc-users] memory use of a DMDA

Barry Smith bsmith at mcs.anl.gov
Mon Oct 28 07:08:17 CDT 2013


   The 64-bit integers would at most double the memory usage. It should actually be a bit less than doubling, since some of the memory allocated in DMSetUp_DA() is for doubles, whose size would remain unchanged.  Note that switching to 64-bit indices would not increase the Vec memory usage at all, so the ratio of setup memory usage to a single vector's memory usage would roughly double.
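   A back-of-the-envelope sketch of that point (the array counts and sizes below are hypothetical, purely to illustrate the scaling, not taken from DMSetUp_DA()):

```python
N = 1000000       # hypothetical number of local grid points
idx_arrays = 5    # hypothetical number of per-point PetscInt arrays in setup

def setup_bytes(int_size):
    # Index arrays scale with sizeof(PetscInt); the double-valued part of
    # the setup data (assumed 8 bytes/point here) does not change.
    return N * idx_arrays * int_size + N * 8

setup32 = setup_bytes(4)   # 32-bit indices
setup64 = setup_bytes(8)   # 64-bit indices
vec = N * 8                # a Vec of doubles: identical in both builds

print(setup64 / float(setup32))   # a bit less than 2.0
print(setup64 / float(vec), setup32 / float(vec))  # setup/Vec ratio roughly doubles
```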

   Barry

On Oct 28, 2013, at 5:27 AM, Juha Jäykkä <juhaj at iki.fi> wrote:

>>> For comparison, 3.4.2 gives (from the previous email): 354 MiB for 1
>>> rank, 141 for 2 ranks and 81 for 4 ranks, which is a LOT less. I
>>> suspect this might have something to do with the DA -> DMDA change?
>> Hmm, I wonder where you're getting your numbers from.
> 
> From /proc/self/stat, which (for me at least) gives exactly the same number as 
> psutil.Process(os.getpid()).get_memory_info().rss (I suspect psutil actually 
> reads /proc/self/stat as well).
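> A minimal sketch of that RSS measurement (Linux-specific; /proc/self/statm
> field 2 is resident pages, with a getrusage fallback whose ru_maxrss is the
> peak RSS in KiB on Linux, so the two numbers are not strictly comparable):

```python
import os
import resource

def rss_bytes():
    # Resident set size of the current process, in bytes.
    try:
        with open("/proc/self/statm") as f:
            resident_pages = int(f.read().split()[1])
        return resident_pages * os.sysconf("SC_PAGE_SIZE")
    except (IOError, OSError):
        # Fallback: peak RSS from getrusage (KiB on Linux).
        return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

print("RSS: %.1f MiB" % (rss_bytes() / 1024.0 ** 2))
```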
> 
> Which brings up a point about your numbers:
> 
>>  $ mpirun.hydra -np 1 python2 -c "$CMD"
>>  8 MiB /
>>  29 MiB
>>  226 MiB
> 
> etc. are way lower than mine. They are more or less in line with my 3.2 tests, 
> so something does not add up. My guess is "--with-64-bit-indices": if yours is 
> with 32-bit indices, it's obvious you will get lower memory consumption. Now, I 
> need to apologise: my 3.2 test had 32-bit indices and all the old numbers I 
> had for 3.3 did, too, but my 3.4.2 has 64-bit. I lately had to switch to 
> 64-bit indices and that just happened to coincide with the move to the 3.4 series.
> 
> Perhaps this explains everything? (Of course, Barry's patch is still a good 
> thing.)
> 
> Cheers,
> -Juha
