<div dir="ltr">On Fri, Apr 12, 2013 at 9:18 AM, Chris Kees <span dir="ltr"><<a href="mailto:cekees@gmail.com" target="_blank">cekees@gmail.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I updated the results for the Bratu problem on our SGI. It has 8 cores<br>
per node (two 4-core processors per node), and I ran from 1 to 256<br>
cores. The log_summary output is attached for both studies. Question:<br></blockquote><div><br></div><div style>Strong scaling:</div><div style><br></div><div style> This looks fine. You get the classic memory bandwidth starvation after</div>
<div style>2 cores on the same node (although your scaling does not completely</div><div style>bottom out), and among nodes the scaling is great.</div><div style><br></div><div style>Weak scaling:</div><div style><br></div>
<div style> I have to go through the logs, but obviously something is wrong. I am</div><div style>betting it is the failure to increase the GMG levels with increasing problem</div><div style>size.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> is there anything about the memory usage of that problem that doesn't
> scale? The memory usage looks steady at < 1GB per core based on
> log_summary. I ask because last night I tried to do one more level of
> refinement for weak scaling on 1024 cores and it crashed. I ran the
> same job on 512 cores this morning, and it ran fine so I'm hoping the
> issue was a temporary system problem.

No, the memory usage is scalable.

  Thanks,

    Matt
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Notes:<br>
<br>
There is a shift in the strong scaling curve as it fills up the first<br>
node (i.e. from 1 to 16 cores), then it looks perfect. The shift<br>
seems reasonable due to the sharing of the cache by 4 cores.<br>
<br>
The weak scaling shows slight growth in the wall clock from 6.3<br>
seconds to 17 seconds. I'm going to run that again with a larger<br>
coarse grid in order to increase the runtime to several minutes.<br>
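>
> Roughly what I have in mind is something like the following (a sketch
> only; the sizes here are placeholders, not the grids I will actually
> run):
>
>   # bigger base grid than the 65x65 case in Jed's example below, still
>   # refined with -da_refine so PCMG gets several levels
>   mpiexec -n 4 ./ex5 -da_grid_x 129 -da_grid_y 129 -pc_type mg -log_summary -da_refine 2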
>
> Graphs: https://proteus.usace.army.mil/home/pub/17/
>
> On Thu, Apr 11, 2013 at 12:46 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
> > Chris Kees <cekees@gmail.com> writes:
> >
> >> Thanks a lot. I did a little example with the Bratu problem and posted it here:
> >>
> >> https://proteus.usace.army.mil/home/pub/17/
> >>
> >> I used boomeramg instead of geometric multigrid because I was getting
> >> an error with the options above:
> >>
> >> %mpiexec -np 4 ./ex5 -mx 129 -my 129 -Nx 2 -Ny 2 -pc_type mg -pc_mg_levels 2
> >> [0]PETSC ERROR: --------------------- Error Message
> >> ------------------------------------
> >> [0]PETSC ERROR: Argument out of range!
> >> [0]PETSC ERROR: New nonzero at (66,1) caused a malloc!
> >> [0]PETSC ERROR:
> >> ------------------------------------------------------------------------
> >
> > That test hard-codes evil things (presumably for testing purposes,
> > though maybe the functionality has been subsumed). Please use
> > src/snes/examples/tutorials/ex5.c instead.
> >
> > mpiexec -n 4 ./ex5 -da_grid_x 65 -da_grid_y 65 -pc_type mg -log_summary -da_refine 1
> >
> > Increase '-da_refine 1' to get higher resolution. (This will increase
> > the number of MG levels used by PCMG.)
> >
> > Switch '-da_refine 1' to '-snes_grid_sequence 1' if you want FMG, but
> > note that it's trickier to profile because proportionately more time is
> > spent in coarse levels (although the total solve time is lower).
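> >
> > To make the weak-scaling idea concrete (my own illustration, not runs
> > anyone has done here): in 2D, quadruple the process count each time you
> > add one refinement, e.g.
> >
> >   mpiexec -n 1  ./ex5 -da_grid_x 65 -da_grid_y 65 -pc_type mg -log_summary -da_refine 1
> >   mpiexec -n 4  ./ex5 -da_grid_x 65 -da_grid_y 65 -pc_type mg -log_summary -da_refine 2
> >   mpiexec -n 16 ./ex5 -da_grid_x 65 -da_grid_y 65 -pc_type mg -log_summary -da_refine 3
> >
> > so the unknowns per process stay roughly fixed while the number of MG
> > levels grows with the problem size.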
> >
> >>
> >> I like the ice paper and will try to get the contractor started on
> >> reproducing those results.
> >>
> >> -Chris
> >>
> >> On Wed, Apr 10, 2013 at 1:13 PM, Nystrom, William D <wdn@lanl.gov> wrote:
> >>> Sorry. I overlooked that the URL was using git protocol. My bad.
> >>>
> >>> Dave
> >>>
> >>> ________________________________________
> >>> From: Jed Brown [five9a2@gmail.com] on behalf of Jed Brown [jedbrown@mcs.anl.gov]
> >>> Sent: Wednesday, April 10, 2013 12:10 PM
> >>> To: Nystrom, William D; For users of the development version of PETSc; Chris Kees
> >>> Subject: Re: [petsc-dev] examples/benchmarks for weak and strong scaling exercise
> >>>
> >>> "Nystrom, William D" <wdn@lanl.gov> writes:
> >>>
> >>>> Jed,
> >>>>
> >>>> I tried cloning your tme-ice git repo as follows and it failed:
> >>>>
> >>>> % git clone --recursive git://github.com/jedbrown/tme-ice.git tme_ice
> >>>> Cloning into 'tme_ice'...
> >>>> fatal: unable to connect to github.com:
> >>>> github.com[0: 204.232.175.90]: errno=Connection timed out
> >>>>
> >>>> I'm doing this from an xterm that allows me to clone petsc just fine.
> >>>
> >>> You're using https or ssh to clone PETSc, but the git:// protocol to
> >>> clone tme-ice. The LANL network is blocking that port, so just use
> >>> the https or ssh protocol instead.
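> >>>
> >>> Something like this should work (the same repository, just with the
> >>> https URL in place of git://):
> >>>
> >>>   git clone --recursive https://github.com/jedbrown/tme-ice.git tme_ice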

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener