On Tue, Oct 10, 2017 at 2:50 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>> On Oct 10, 2017, at 10:52 AM, Bakytzhan Kallemov <bkallemov@lbl.gov> wrote:
>>
>> Hi,
>>
>> My name is Baky Kallemov.
>>
>> Currently, I am working on improving the scalability of the Chombo-PETSc interface on the Cori machine at NERSC.
>>
>> I successfully built the libraries from the master branch with --with-openmp and hypre.
>>
>> However, I have not noticed any difference running my test problem on a single KNL node using the new MATAIJMKL
>
> hypre uses its own matrix operations, so it won't get faster when running PETSc with MATAIJMKL or any other specific matrix type.

Yes Baky, you would need to use '-pc_type gamg' with MKL. Let's stick with hypre.
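To make the distinction concrete, here is a minimal sketch of the runtime options involved (the executable name ./app and the CG solver choice are placeholders, not from this thread):

    # PETSc-native AMG, where the MKL sparse kernels are actually used
    ./app -mat_type aijmkl -pc_type gamg -ksp_type cg -log_view

    # hypre BoomerAMG: the preconditioner uses hypre's own matrices,
    # so -mat_type aijmkl does not change its performance
    ./app -pc_type hypre -pc_hypre_type boomeramg -ksp_type cg -log_view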
<span class="">><br>
><br>
> type for different hybrid mpi+openmp runs compared to regular released version.<br>
>
> What are you comparing? Are you using, say, 32 MPI processes and 2 threads, or 16 MPI processes and 4 threads? How are you controlling the number of OpenMP threads, via the OpenMP environment variable? What parts of the run time are you comparing? You should just use -log_view and compare the times for PCApply() and PCSetUp() between, say, 64 MPI processes/1 thread and 32 MPI processes/2 threads, and send us the output for those two cases.

These folks don't use many MPI processes. I'm not sure what the optimal configuration is with Chombo-Crunch when using all of Cori.

Baky: how many MPI processes per socket are you aiming for on Cori-KNL?
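As a concrete sketch of that comparison (assuming a Slurm environment on Cori-KNL and a placeholder executable ./app):

    # 64 MPI ranks, 1 OpenMP thread per rank
    export OMP_NUM_THREADS=1
    srun -n 64 -c 1 ./app -log_view > mpi_only.log

    # 32 MPI ranks, 2 OpenMP threads per rank
    export OMP_NUM_THREADS=2
    srun -n 32 -c 2 ./app -log_view > hybrid.log

Then compare the PCSetUp and PCApply rows of the -log_view summaries in the two logs.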
<span class=""><br>
><br>
> It seems that it made no difference, so perhaps I am doing something wrong or my build is not configured right.<br>
><br>
> Do you have any example that makes use of threads when running hybrid and show an advantage?<br>
<br>
</span> There is not reason to think that using threads on KNL is faster than just using MPI processes. Despite what the NERSc/LBL web pages may say, just because a website says something doesn't make it true.<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
><br>
> I'd like to test it and make sure that my libs are configured correctly, before start to investigate it further.<br>
><br>
><br>
> Thanks,<br>
><br>
> Baky<br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>