<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr">
<div id="divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Helvetica,sans-serif;" dir="ltr">
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">Hi Steffen,</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">Sorry for the delayed response.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">My feeling re. characteristics + 10 PS is the following.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">A) characteristics is designed to trade off more work on advection for less work on the implicit part. The principal implicit cost in NS is the pressure solve. Typically, we see that we spend 50% of our time in advection
when using the characteristics scheme (but this can vary, depending on whether you go for 2nd-order (recommended, as Stefan suggested) or 3rd-order in time (not recommend for characteristics), and on the amount of over-integration for the advection - typically
lxd=3*lx1/2, but you can get away with less and might want to do so for characteristics.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">In the case of multiple PS, you increase your advection work proportionally, but the pressure work stays the same. Thus, an approach that puts more emphasis on the expensive part will ultimately not be a winning strategy.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">Just to give you some estimates:</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">Standard advection (IFCHAR=F):</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0"> 1 advection evaluation per time step for each component of velocity and for each passive scalar:</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">Advection work unit: W ~ 18(MN)^2 ops, where M=3/2N for</p>
<p style="margin-top:0;margin-bottom:0">standard dealiasing. (This is a crude estimate.)</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">So, IFCHAR=F --> (3+PS)*W advection work per step, for "PS"</p>
<p style="margin-top:0;margin-bottom:0">passive scalars, including temperature.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">With IFCHAR=T, assume you are running at a CFL of 2.0 (with</p>
<p style="margin-top:0;margin-bottom:0">p26==1, which implies one RK4 step to achieve CFL=2).</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">For 2nd-order, there are two RK4 steps, each requiring 4 sub steps,</p>
<p style="margin-top:0;margin-bottom:0">so the advection work is 8 x (3+PS)*W.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">If you run 3rd-order, it is 12x(3+PS)*W because 3rd-order characteristics <span style="font-size: 12pt;">requires 3 RK4 steps. More details can be found in some old notes posted here:</span></p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0"><a href="http://www.mcs.anl.gov/~fischer/oifs.pdf" id="LPlnk285986" class="OWAAutoLink" previewremoved="true">http://www.mcs.anl.gov/~fischer/oifs.pdf</a><br>
</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">and in a forthcoming paper that's currently under review.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">B) If the time savings is not significant, I generally prefer to set IFCHAR=F and then run 3rd order. The std. BDFk/EXTk (IFCHAR=F) case requires only (3+PS)W advection work per step, independent of temporal order.</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">hth </p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0">Paul</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
</div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Nek5000-users <nek5000-users-bounces@lists.mcs.anl.gov> on behalf of nek5000-users@lists.mcs.anl.gov <nek5000-users@lists.mcs.anl.gov><br>
<b>Sent:</b> Monday, September 10, 2018 6:12:49 AM<br>
<b>To:</b> nek5000-users@lists.mcs.anl.gov<br>
<b>Subject:</b> Re: [Nek5000-users] Proper setup for AMG solver</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">The slow AMG problem he reported was related to an old (buggy) version of amg_hypre. Note, in my experience AMG is faster at least for non-power-of-two ranks (quite common today) independent of the problem size.<br>
<br>
Regarding OIFS, it's important to use BDF2 with a possibly low lxd; otherwise it's quite expensive. Whether it pays off depends heavily on how expensive the pressure solve is.<br>
<br>
Cheers<br>
Stefan<br>
<br>
> On 10 Sep 2018, at 12:51, "nek5000-users@lists.mcs.anl.gov" <nek5000-users@lists.mcs.anl.gov> wrote:<br>
> <br>
> Hi Steffen,<br>
> It is interesting that you find that the characteristic scheme is indeed <br>
> faster; similar tests with a pipe (a few years ago) have shown the <br>
> opposite, which is why we ran our 2013 pipe using conventional time <br>
> steppers. But I guess we should re-do these tests!<br>
> <br>
> I think your question regarding the temporal evolution is interesting, <br>
> and I guess in the end this has to be judged using physical time scales <br>
> (which scale in plus units for the velocity here). Not being too <br>
> familiar with the characteristic scheme, what is really the smallest <br>
> resolved time scale? Is it the actual time step, or the RK steps, or <br>
> some intermediate scale?<br>
> <br>
> We have found that AMG does indeed surpass XXT at some point, but I <br>
> guess this is machine dependent (and depends on the number of processors <br>
> in addition to the number of elements). Do you have any issues with the setup of XXT taking <br>
> quite some time?<br>
> <br>
> Best,<br>
> Philipp<br>
> <br>
> <br>
>> On 2018-09-02 12:05, nek5000-users@lists.mcs.anl.gov wrote:<br>
>> Hello Stefan & Paul,<br>
>> <br>
>> thanks for your suggestions. I use Nek5000 v17 (mver 17.0.4 in makenek).<br>
>> <br>
>> I have taken a closer look at the logfiles, as suggested by Paul. It seems like I spend the most time on the scalar fields, second most on the velocity fields, and the smallest share on the pressure solve. The numbers below are in seconds for a typical timestep
without calculating statistics or writing out files, at Re_b = 5300 (Re_t = u_t D / nu = 360, 768 cores) and Re_b = 37700 (Re_t = 2000, 6144 cores).<br>
>> I use projection for all 10 scalar fields.<br>
>> <br>
>> Re_t = 360:<br>
>> Scalars done 0.064<br>
>> Fluid done 0.039<br>
>> U-PRES gmres 0.021<br>
>> Step 0.127<br>
>> <br>
>> Re_t = 2000:<br>
>> Scalars done 0.94<br>
>> Fluid done 0.51<br>
>> U-PRES gmres 0.21<br>
>> Step 1.72<br>
>> <br>
>> <br>
>> Paul, could you elaborate on why you would not use characteristics when running 10 scalar fields?<br>
>> In my short tests at Re_b=5300, characteristics appears more time consuming per step, but due to the increase in DT it is worth switching: the time per timestep at targetCFL=2.0 increases by a factor of 3, while DT increases by a factor of 6, i.e. a net gain of about 2x per unit simulated time.<br>
>> Besides, are such increased timesteps still small enough to capture the temporal evolution of the flow?<br>
>> <br>
>> <br>
>> When I am back at my workstation, I will create a different mesh with lx1=8, lxd=10 and run with the settings Stefan suggested. I expect a significant speedup from using the looser tolerances also for velocity and scalars and from changing to characteristics.<br>
>> <br>
>> I know about the 350k-element limit for XXT, as I once commented out the part of the code where this is checked.<br>
>> Since for my setups AMG has always been slower than XXT, I am thinking about sticking with XXT.<br>
>> Is there any reason other than your experience with AMG and XXT at large element counts to enforce AMG?<br>
>> <br>
>> <br>
>> Best Regards,<br>
>> Steffen<br>
>> <br>
>> <br>
>> <br>
>> Message: 5<br>
>> Date: Sat, 1 Sep 2018 17:10:54 +0200<br>
>> From: nek5000-users@lists.mcs.anl.gov<br>
>> To: nek5000-users@lists.mcs.anl.gov <nek5000-users@lists.mcs.anl.gov><br>
>> Subject: Re: [Nek5000-users] Proper setup for AMG solver<br>
>> <br>
>> Try to use<br>
>> <br>
>> lx1=8/lxd=10 with a (potentially) finer mesh<br>
>> BDF2 + OIFS with targetCFL=3.5<br>
>> set dt = 0 (this will adjust dt to match targetCFL)<br>
>> pressure tol = 1e-5 (residual projection turned on); 1e-6 for velocity and scalars (residual projection turned off)<br>
>> <br>
>> Note, a mesh with more than 350k elements requires AMG; the default parameters are fine. A sketch of these settings in .par form follows below.<br>
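>> <br>
>> For concreteness, the corresponding entries in a v17 .par file would look roughly like this (key names quoted from memory, so check them against the documentation; lx1 and lxd are set in the SIZE file, not in the .par):<br>
>> <br>
>> [GENERAL]<br>
>> dt = 0                      # 0 lets the CFL controller pick dt<br>
>> variableDT = yes<br>
>> targetCFL = 3.5<br>
>> timeStepper = bdf2<br>
>> extrapolation = OIFS<br>
>> <br>
>> [PRESSURE]<br>
>> preconditioner = semg_amg   # or semg_xxt below ~350k elements<br>
>> residualTol = 1e-5<br>
>> residualProj = yes<br>
>> <br>
>> [VELOCITY]<br>
>> residualTol = 1e-6<br>
>> residualProj = no<br>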
>> <br>
>> What version of Nek5000 are you using?<br>
>> <br>
>> Cheers,<br>
>> Stefan<br>
>> <br>
>> <br>
>> -----Original message-----<br>
>>> From:nek5000-users@lists.mcs.anl.gov <nek5000-users@lists.mcs.anl.gov><br>
>>> Sent: Saturday 1st September 2018 16:50<br>
>>> To: nek5000-users@lists.mcs.anl.gov<br>
>>> Subject: [Nek5000-users] Proper setup for AMG solver<br>
>>> <br>
>>> Dear Nek users & experts,<br>
>>> <br>
>>> I am currently running a turbulent pipe flow very similar to the simulations from El Khoury et al 2013.<br>
>>> <br>
>>> Additionally, I solve for 10 thermal fields being treated as passive scalars.<br>
>>> <br>
>>> The Reynolds number is the same as the highest in El Khoury (2013), Re_b = 37700.<br>
>>> <br>
>>> As I am using the relaxation term filtering (RT-Filter), I have a slightly lower resolution of about 250,000 elements at N=11 (5 times fewer than El Khoury).<br>
>>> <br>
>>> As the simulation is still very heavy, I have been looking into ways for speeding it up.<br>
>>> I found some good suggestions here:<br>
>>> <a href="http://nek5000.github.io/NekDoc/faq.html?highlight=amg#computational-speed">
http://nek5000.github.io/NekDoc/faq.html?highlight=amg#computational-speed</a><br>
>>> <br>
>>> and here (older version?)<br>
>>> <a href="http://nek5000.github.io/NekDoc/large_scale.html">http://nek5000.github.io/NekDoc/large_scale.html</a><br>
>>> <br>
>>> <br>
>>> However, I have some questions regarding these suggestions.<br>
>>> 1) Dealiasing:<br>
>>> Usually I use lxd = 3/2*lx1. Can I lower that or even use lxd=lx1?<br>
>>> <br>
>>> 2) Tolerances:<br>
>>> I have tested to reduce the tolerance for pressure from 1e-8 to 5e-5 for a run at Re_b=5300 without any significant speedup. Would you consider 5e-5 for pressure accurate enough for evaluating statistics like turbulent kinetic energy budgets, Reynolds shear<br>
>>> stress budgets or budget of turbulent heat fluxes?<br>
>>> <br>
>>> 3) Time discretisation: BDF2 and OIFS with Courant=2-5<br>
>>> If I go from BDF3/EXT3 at C=0.5 to BDF2/OIFS at C=5.0, will I not miss high frequency fluctuations in time, since DT is much larger?<br>
>>> <br>
>>> 4) AMG instead of XXT:<br>
>>> I have tested AMG instead of XXT for both Re_b=5300 and Re_b=37700 without any speedup. Time/timestep is even higher with AMG. My workflow looks like this<br>
>>> 4.1) Set SEMG_AMG in the par file.<br>
>>> 4.2) Run the simulation once to dump amg files.<br>
>>> 4.3) Run amg_hypre (Here I do not know which options to choose, thus I have only used the default settings)<br>
>>> 4.4) Run the simulation.<br>
>>> Maybe I should choose different options for amg_hypre, or should I rather use the amg_matlab2 tools? For the matlab tools I have not found an explanation on how to use them.<br>
>>> <br>
>>> <br>
>>> I am grateful for any advice regarding these aspects.<br>
>>> <br>
>>> Best Regards,<br>
>>> Steffen<br>
>>> <br>
>>> <br>
>>> <br>
>> <br>
>> <br>
>> ------------------------------<br>
>> <br>
>> Message: 6<br>
>> Date: Sat, 1 Sep 2018 15:15:29 +0000<br>
>> From: nek5000-users@lists.mcs.anl.gov<br>
>> To: "nek5000-users@lists.mcs.anl.gov"<br>
>> <nek5000-users@lists.mcs.anl.gov><br>
>> Subject: Re: [Nek5000-users] Proper setup for AMG solver<br>
>> <br>
>> <br>
>> How much time are you spending in your scalar fields?<br>
>> <br>
>> <br>
>> Do you have projection turned on for all these fields?<br>
>> <br>
>> <br>
>> grep tep logfile<br>
>> <br>
>> <br>
>> will tell you how much time per step<br>
>> <br>
>> <br>
>> grep gmr logfile will tell you how much time in the pressure on each step<br>
>> <br>
>> <br>
>> What's left over is mostly passive scalar, unless you are using characteristics.<br>
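>> <br>
>> If you want averages over many steps rather than eyeballing the log, a short Python script along these lines works (a rough sketch: it assumes the time is the last number on each matched line, which you should verify against your logfile):<br>
>> <br>
>> # avg_step_times.py: average per-step timings from a Nek5000 logfile.<br>
>> # Sketch only; assumes the last number on a matched line is a time in<br>
>> # seconds, as in the "Step" and "gmres" lines discussed in this thread.<br>
>> import re, sys<br>
>> <br>
>> NUM = re.compile(r"[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?")<br>
>> <br>
>> def avg_time(lines, key):<br>
>>     hits = [NUM.findall(l) for l in lines if key in l]<br>
>>     times = [float(nums[-1]) for nums in hits if nums]<br>
>>     return sum(times) / len(times) if times else float("nan")<br>
>> <br>
>> lines = open(sys.argv[1]).readlines()<br>
>> for key in ("Step", "gmres"):  # cf. 'grep tep' and 'grep gmr' above<br>
>>     print(f"{key}: {avg_time(lines, key):.4f} s/step")<br>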
>> <br>
>> <br>
>> I would not recommend characteristics when running 10 scalar fields.<br>
>> <br>
>> <br>
>> Paul<br>
>> <br>
>> <br>
<br>
_______________________________________________<br>
Nek5000-users mailing list<br>
Nek5000-users@lists.mcs.anl.gov<br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users">https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users</a><br>
</div>
</span></font></div>
</body>
</html>