<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 15, 2019 at 1:06 PM Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 15, 2019 at 2:56 PM Fande Kong <<a href="mailto:fdkong.jd@gmail.com" target="_blank">fdkong.jd@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 15, 2019 at 6:49 AM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Mon, Apr 15, 2019 at 12:41 AM Fande Kong via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov" target="_blank">petsc-dev@mcs.anl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">On Fri, Apr 12, 2019 at 7:27 AM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 11, 2019 at 11:42 PM Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
> On Apr 11, 2019, at 9:07 PM, Mark Adams via petsc-dev <<a href="mailto:petsc-dev@mcs.anl.gov" target="_blank">petsc-dev@mcs.anl.gov</a>> wrote:<br>
> <br>
> Interesting, nice work.<br>
> <br>
> It would be interesting to get the flop counters working.<br>
> <br>
> This looks like GMG, I assume 3D.<br>
> <br>
> The degree of parallelism is not very realistic. You should probably run a 10x smaller problem, at least, or use 10x more processes.<br>
<br>
Why do you say that? He's got his machine with a certain amount of physical memory per node; are you saying he should ignore/not use 90% of that physical memory for his simulation?</blockquote><div><br></div><div>In my experience 1.5M equations/process is about 50x more than applications run, but this is just anecdotal. Some apps are dominated by the linear solver in terms of memory, but some apps use a lot of memory in the physics parts of the code.</div></div></div></blockquote><div><br></div><div>The test case is solving the multigroup neutron transport equations, where each mesh vertex could be associated with a hundred or a thousand variables. The mesh is actually small so that it can be handled efficiently in the physics part of the code. 90% of the memory is consumed by the solver (SNES, KSP, PC). This is the reason I was trying to implement a memory-friendly PtAP.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>The one app that I can think of where the memory usage is dominated by the solver does something like 10 (pseudo) time steps with pretty hard nonlinear solves, so in the end they are not bound by turnaround time. But they are kind of an odd (academic) application and not very representative of what I see in the broader comp sci community. And these guys do have a scalable code, so instead of waiting a week on the queue to run a 10-hour job that uses 10% of the machine, they wait a day to run a 2-hour job that takes 50% of the machine, because centers' scheduling policies work that way.</div></div></div></blockquote><div><br></div><div>Our code is scalable, but unfortunately we do not have a huge machine. </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> He should buy a machine 10x bigger just because it means having fewer degrees of freedom per node (who's footing the bill for this purchase?). At INL they run simulations for a purpose, not just for scalability studies, and there are no dang GPUs or barely used oversized monstrosities sitting around to brag about twice a year at SC.<br></blockquote><div><br></div><div>I guess they are the nuke guys. I've never worked with them or seen this kind of complexity analysis in their talks, but OK, if they fill up memory with the solver then this is representative of a significant (DOE) app.</div></div></div></blockquote><div><br></div><div>You do not see the complexity analysis in the talks because most of the people at INL live in a different community. I will convince more people to give talks in our community in the future. </div><div><br></div><div>We focus on nuclear energy simulations that involve multiphysics (neutron transport, mechanics contact, computational materials, compressible/incompressible flows, two-phase flows, etc.). We are developing a flexible, open-source platform that allows different physics groups to couple their codes together efficiently. <a href="https://mooseframework.inl.gov/old" target="_blank">https://mooseframework.inl.gov/old</a></div></div></div></div></div></blockquote><div><br></div><div>Fande, this is very interesting. Can you tell me:</div><div><br></div><div> 1) A rough estimate of dofs/vertex (or cell or face), depending on where you put unknowns</div></div></div></blockquote><div><br></div><div>The big run (neutron transport equations) posted earlier has 576 variables on each mesh vertex. The physics people think that, at the current stage, 100-1000 variables (the number of energy groups times the number of neutron flight directions) on each mesh vertex will give us an acceptable simulation result; 1000 variables are preferred. </div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div> 2) Are all unknowns on the same vertex coupled together? If not, where do you specify block sparsity?</div></div></div></blockquote><div><br></div><div>Yes, they are physically coupled together through the scattering and fission events. But we are using a matrix-free method, and the variable coupling is ignored in the preconditioning matrix so that the system won't take that much memory.</div>
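(As a rough sketch of what that looks like on the PETSc side, with placeholder names only -- FormFunction, FormApproxJacobian, SetupPJFNK and Pmat are illustrative, not the actual MOOSE routines -- the Krylov operator is applied matrix-free while the preconditioner is built from the cheaper assembled matrix:

#include <petscsnes.h>

/* Placeholder callbacks (illustrative names): FormFunction evaluates the
   transport residual; FormApproxJacobian assembles only the decoupled
   preconditioning matrix, with the coupling among the 576 variables at
   each vertex dropped. */
extern PetscErrorCode FormFunction(SNES, Vec, Vec, void *);
extern PetscErrorCode FormApproxJacobian(SNES, Vec, Mat, Mat, void *);

PetscErrorCode SetupPJFNK(SNES snes, Vec r, Mat Pmat, void *ctx)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESSetFunction(snes, r, FormFunction, ctx);CHKERRQ(ierr);
  /* Pmat is used only to build the preconditioner; with -snes_mf_operator
     the Jacobian itself is applied matrix-free (finite-differenced residual)
     and is never stored. */
  ierr = SNESSetJacobian(snes, Pmat, Pmat, FormApproxJacobian, ctx);CHKERRQ(ierr);
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

A run would then add -snes_mf_operator plus the chosen preconditioner options (GAMG, hypre, ...). With this split the memory footprint is dominated by Pmat and the multigrid hierarchy built from it, which is why a memory-friendly PtAP matters so much here.)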
</div></div></div></blockquote><div><br></div><div>So the preconditioner looks like 576 independent Laplacian solves, mathematically. </div></div></div></blockquote><div><br></div><div>We could think about it this way, but we are solving the whole system simultaneously to get good concurrency. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>So you could reorder your equations and see a block-diagonal matrix with 576 blocks, right?</div></div></div></blockquote><div><br></div><div>I am not sure I understand the question correctly. For each mesh vertex, we have a 576x576 diagonal block. The unknowns are ordered in this way: v0, v1, ..., v575 for vertex 1, then another 576 variables for mesh vertex 2, and so on.</div><div> </div><div>Thanks,</div><div><br>Fande,</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div> 3) How are the coefficients of the equation discretized on the mesh?</div></div></div></blockquote><div><br></div><div>The coefficients (often referred to as cross sections by the neutronics people) can be different for each variable, and they depend entirely on the reactor configuration. My current simulation indeed uses heterogeneous materials.</div><div><br></div><div>I actually have a preprint that presents more details on the simulation. 
<a href="https://arxiv.org/abs/1903.03659" target="_blank">https://arxiv.org/abs/1903.03659</a> </div><div><br></div><div>Thanks,</div><div><br></div><div>Fande, </div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div> Thanks!</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div>Thanks,</div><div><br></div><div>Fande, </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Barry<br>
<br>
<br>
<br>
> I guess it does not matter. This is basically like a one-node run because the subdomains are so large.<br>
> <br>
> And are you sure the numerics are the same with and without hypre? Hypre is 15x slower. Any ideas what is going on?<br>
> <br>
> It might be interesting to scale this test down to a node to see if this is from communication.<br>
> <br>
> Again, nice work,<br>
> Mark<br>
> <br>
> <br>
> On Thu, Apr 11, 2019 at 7:08 PM Fande Kong <<a href="mailto:fdkong.jd@gmail.com" target="_blank">fdkong.jd@gmail.com</a>> wrote:<br>
> Hi Developers,<br>
> <br>
> I just want to share some good news. It is known that PETSc-ptap-scalable takes too much memory for some applications because it needs to build intermediate data structures. Following Mark's suggestions, I implemented the all-at-once algorithm that does not cache any intermediate data.<br>
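(A toy, dense sketch of the difference, with made-up function names and purely for illustration -- the real code works on distributed sparse MPIAIJ matrices and visits only nonzeros -- the cached variant stores the intermediate product AP, while the all-at-once variant never forms it:

void ptap_cached(int n, int m, double A[n][n], double P[n][m],
                 double AP[n][m], double C[m][m])
{
  /* Step 1: build and store the intermediate product AP (n*m extra storage). */
  for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++) {
      AP[i][j] = 0.0;
      for (int k = 0; k < n; k++) AP[i][j] += A[i][k] * P[k][j];
    }
  /* Step 2: C = P^T * (AP). */
  for (int i = 0; i < m; i++)
    for (int j = 0; j < m; j++) {
      C[i][j] = 0.0;
      for (int k = 0; k < n; k++) C[i][j] += P[k][i] * AP[k][j];
    }
}

void ptap_all_at_once(int n, int m, double A[n][n], double P[n][m],
                      double C[m][m])
{
  for (int i = 0; i < m; i++)
    for (int j = 0; j < m; j++) C[i][j] = 0.0;
  /* Each entry A[k][l] scatters its contribution P(row k)^T * A[k][l] * P(row l)
     directly into C; no intermediate product is ever stored. */
  for (int k = 0; k < n; k++)
    for (int l = 0; l < n; l++)
      for (int i = 0; i < m; i++)
        for (int j = 0; j < m; j++)
          C[i][j] += P[k][i] * A[k][l] * P[l][j];
}

In the sparse, parallel setting the all-at-once approach trades some recomputation and communication for that missing intermediate structure, which matches the comparison below: a bit slower, much less memory.)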
> <br>
> I did some comparisons: the new implementation is actually scalable in terms of memory usage and compute time, even though it is still slower than "ptap-scalable". Some memory profiling results are in the attachments. The new all-at-once implementation uses a similar amount of memory to hypre, but it is way faster than hypre.<br>
> <br>
> For example, for a problem with 14,893,346,880 unknowns using 10,000 processor cores, here are the timing results:<br>
> <br>
> Hypre algorithm:<br>
> <br>
> MatPtAP 50 1.0 3.5353e+03 1.0 0.00e+00 0.0 1.9e+07 3.3e+04 6.0e+02 33 0 1 0 17 33 0 1 0 17 0<br>
> MatPtAPSymbolic 50 1.0 2.3969e-02 13.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
> MatPtAPNumeric 50 1.0 3.5353e+03 1.0 0.00e+00 0.0 1.9e+07 3.3e+04 6.0e+02 33 0 1 0 17 33 0 1 0 17 0<br>
> <br>
> PETSc scalable PtAP:<br>
> <br>
> MatPtAP 50 1.0 1.1453e+02 1.0 2.07e+09 3.8 6.6e+07 2.0e+05 7.5e+02 2 1 4 6 20 2 1 4 6 20 129418<br>
> MatPtAPSymbolic 50 1.0 5.1562e+01 1.0 0.00e+00 0.0 4.1e+07 1.4e+05 3.5e+02 1 0 3 3 9 1 0 3 3 9 0<br>
> MatPtAPNumeric 50 1.0 6.3072e+01 1.0 2.07e+09 3.8 2.4e+07 3.1e+05 4.0e+02 1 1 2 4 11 1 1 2 4 11 235011<br>
> <br>
> New implementation of the all-at-once algorithm:<br>
> <br>
> MatPtAP 50 1.0 2.2153e+02 1.0 0.00e+00 0.0 1.0e+08 1.4e+05 6.0e+02 4 0 7 7 17 4 0 7 7 17 0<br>
> MatPtAPSymbolic 50 1.0 1.1055e+02 1.0 0.00e+00 0.0 7.9e+07 1.2e+05 2.0e+02 2 0 5 4 6 2 0 5 4 6 0<br>
> MatPtAPNumeric 50 1.0 1.1102e+02 1.0 0.00e+00 0.0 2.6e+07 2.0e+05 4.0e+02 2 0 2 3 11 2 0 2 3 11 0<br>
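(For readers less used to these logs: the tables above are standard -log_view event lines, produced by running with something like

    mpiexec -n 10000 ./my_app -log_view -memory_view

where my_app stands in for the actual executable. After the event name the columns are, roughly: call count (max and max/min ratio over ranks), time (max and ratio), flops (max and ratio), message count, average message length, reductions, then the percentages of total time/flops/messages/message lengths/reductions for the whole run and for the current stage, and finally the aggregate rate in Mflop/s. A trailing 0 just means no flops were logged for that event, which is why the hypre and all-at-once rows show a zero rate.)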
> <br>
> <br>
> You can see here that the all-at-once algorithm is a bit slower than ptap-scalable, but it uses much less memory.<br>
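(For context, all three sets of numbers go through the same MatPtAP interface; a minimal sketch, with a made-up function name, of how a user or PCGAMG forms the Galerkin coarse operator:

#include <petscmat.h>

/* Form (or re-form) the coarse operator C = P^T A P. */
PetscErrorCode FormCoarseOperator(Mat A, Mat P, PetscBool firsttime, Mat *C)
{
  PetscReal      fill = 2.0; /* estimate of nnz(C)/(nnz(A)+nnz(P)) */
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  if (firsttime) {
    ierr = MatPtAP(A, P, MAT_INITIAL_MATRIX, fill, C);CHKERRQ(ierr);
  } else {
    /* Same nonzero pattern as before: reuse the existing C. */
    ierr = MatPtAP(A, P, MAT_REUSE_MATRIX, fill, C);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

Which of the implementations above actually runs is then selected at run time through the -matptap_via option (e.g. scalable or hypre in current releases; presumably the all-at-once algorithm gets its own value once this work is merged).)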
> <br>
> <br>
> Fande<br>
> <br>
<br>
</blockquote></div></div>
</blockquote></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail-m_-103727268935758862gmail-m_-7917615511367479665gmail-m_4403220940612373707gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div></div></div>
</blockquote></div></div>
</blockquote></div></div></div></div>