<div dir="ltr"><div dir="ltr">On Fri, Nov 22, 2024 at 11:36 AM David Scott <<a href="mailto:d.scott@epcc.ed.ac.uk">d.scott@epcc.ed.ac.uk</a>> wrote:</div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello,<br>
<br>
I am using the options mechanism of PETSc to configure my CFD code. I<br>
have introduced options describing the size of the domain etc. I have<br>
noticed that this consumes a lot of memory. I have found that the amount<br>
of memory used scales linearly with the number of MPI processes used.<br>
This restricts the number of MPI processes that I can use.<br></blockquote><div><br></div><div>There are two statements here:</div><div><br></div><div>1) The memory scales linearly with P</div><div><br></div><div>2) This uses a lot of memory</div><div><br></div><div>Let's deal with 1) first. This seems to be trivially true. If I want every process to have</div><div>access to a given option value, that option value must be in the memory of every process.</div><div>The only alternative would be to communicate with some process in order to get values.</div><div>Few codes seem to be willing to make this tradeoff, and we do not offer it.</div><div><br></div><div>Now 2). Looking at the source, for each option we store a PetscOptionItem, which I count</div><div>as having size 37 bytes (12 pointers/ints and a char). However, there is data behind every</div><div>pointer, such as the name, help text, and (sometimes) available values; I could see it being as large</div><div>as 4K. Suppose it is. If I had 256 options, that would be 1 MB. Is this a large amount of memory?</div><div><br></div><div>The way I read the SLURM output, 29K was malloc'd. Is this a large amount of memory?</div><div><br></div><div>I am trying to get an idea of the scale.</div><div><br></div><div>  Thanks,</div><div><br></div><div>     Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Is there anything that I can do about this or do I need to configure my<br>
code in a different way?<br>
<br>
I have attached some code extracted from my application which<br>
demonstrates this along with the output from running it on 2 MPI<br>
processes.<br>
<br>
Best wishes,<br>
<br>
David Scott<br>
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. Is e buidheann carthannais a th’ ann an Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336.<br>
</blockquote></div><div><br clear="all"></div><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>