<html dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style id="owaParaStyle" type="text/css">P {margin-top:0;margin-bottom:0;}</style>
</head>
<body ocsi="0" fpstyle="1">
<div style="direction: ltr;font-family: Arial;color: #000000;font-size: 14pt;">Mark,<br>
<br>
I have not run GAMG on Vulcan. Right now, I have been focusing on<br>
doing some strong scaling studies on a variety of machines using<br>
the more advanced capabilities of petsc that are available in the<br>
"next" branch. With threadcomm, I have focused on<br>
using the pthread support and have looked at the scaling on different<br>
machines using 1 to 64 nodes. My testing has uncovered a problem<br>
where the runs do not scale when going from 1 node to 2 nodes; I<br>
have done enough testing with OpenMP to confirm that the problem<br>
occurs with OpenMP as well as with pthreads. The problem only<br>
appears in that 1-to-2-node step - the pthread support scales fine<br>
from 2 to 64 nodes. I have spent some time trying to<br>
debug the problem and fix it. I did find and fix one issue with VecNorm<br>
that was part of the scaling problem but there is still at least one<br>
problem remaining as far as I can tell. I'm currently doing some runs<br>
on Vulcan and on one of our Cray systems to try to assess the<br>
current status of this problem. At the moment, I've run out of ideas<br>
for how to continue debugging, so my plan is to do a few more runs,<br>
generate some scaling plots, and then present the results on<br>
petsc-dev to see whether others have ideas for how to make more<br>
progress on the remaining problem.<br>
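<br>
For reference, a typical run line for these threadcomm tests looked roughly like the<br>
following (the node/task counts, grid size, and thread count are just placeholders,<br>
not the actual values from the studies):<br>
<br>
srun -N 2 -n 32 ./ex2 -m 1000 -n 1000 -ksp_type cg -pc_type jacobi -threadcomm_type pthread -threadcomm_nthreads 4 -log_summary<br>
<br>
Swapping -threadcomm_type pthread for -threadcomm_type openmp gives the OpenMP<br>
runs, and -log_summary gives per-event timings that can be compared between the<br>
1 node and 2 node cases.<br>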
<br>
I've also done a lot of scaling studies for the CUDA GPU support and<br>
some with the OpenCL/ViennaCL support. For the latter, I'm waiting<br>
for Karli to finish some upgrades to ViennaCL. I'm not sure where<br>
that work stands at the moment.<br>
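<br>
For the GPU runs the back end is selected at run time with the usual type options -<br>
something like -vec_type cusp -mat_type aijcusp for the CUDA case and<br>
-vec_type viennacl -mat_type aijviennacl for the OpenCL/ViennaCL case (option names<br>
as of the version of next I have been using; they may change as the upgrades land).<br>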
<br>
On a different note, running the ex2.c problem with CG and Jacobi<br>
preconditioning on Vulcan with only MPI results in really beautiful<br>
strong scaling - much better than on a dual-socket Sandy Bridge<br>
cluster or a Cray XE-6.<br>
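<br>
The MPI-only runs just drop the threadcomm arguments, e.g. something like this<br>
(again with placeholder node/task counts and grid size):<br>
<br>
srun -N 64 -n 1024 ./ex2 -m 2000 -n 2000 -ksp_type cg -pc_type jacobi -log_summary<br>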
<br>
Best regards,<br>
<br>
Dave<br>
<div><br>
</div>
<div style="font-family: Times New Roman; color: rgb(0, 0, 0); font-size: 16px;">
<hr tabindex="-1">
<div style="direction: ltr;" id="divRpF61764"><font color="#000000" face="Tahoma" size="2"><b>From:</b> Mark Adams [mfadams@lbl.gov]<br>
<b>Sent:</b> Friday, November 08, 2013 2:53 PM<br>
<b>To:</b> Nystrom, William D<br>
<b>Cc:</b> For users of the development version of PETSc<br>
<b>Subject:</b> Re: [petsc-dev] Mira<br>
</font><br>
</div>
<div></div>
<div>
<div dir="ltr">Dave, do you have any performance data with openMP? with GAMG would be good too.
<div>Mark</div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Fri, Nov 8, 2013 at 4:20 PM, Nystrom, William D <span dir="ltr">
<<a href="mailto:wdn@lanl.gov" target="_blank">wdn@lanl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div>
<div style="direction: ltr; font-size: 14pt; font-family: Arial;">I've been using the next branch of petsc and have built and run<br>
src/ksp/ksp/examples/tutorials/ex2.c on Vulcan at LLNL with<br>
the threadcomm package using pthreads and OpenMP.<br>
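<br>
The relevant configure options were along these lines (compilers and the BG/Q<br>
specific settings omitted; the exact option names may differ slightly in next):<br>
<br>
./configure --with-threadcomm=1 --with-pthreadclasses=1 --with-openmp=1<br>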
<br>
Dave<br>
<div><br>
<div style="font-family: Tahoma; font-size: 13px;"><font><span style="font-size: 10pt;">--
<br>
Dave Nystrom<br>
LANL HPC-5<br>
Phone: <a href="tel:505-667-7913" value="+15056677913" target="_blank">505-667-7913</a><br>
Email: <a href="mailto:wdn@lanl.gov" target="_blank">wdn@lanl.gov</a><br>
Smail: Mail Stop B272<br>
Group HPC-5<br>
Los Alamos National Laboratory<br>
Los Alamos, NM 87545<br>
</span></font><br>
</div>
</div>
<div style="font-size: 16px; font-family: Times New Roman;">
<hr>
<div style="direction: ltr;"><font color="#000000" face="Tahoma"><b>From:</b> <a href="mailto:petsc-dev-bounces@mcs.anl.gov" target="_blank">
petsc-dev-bounces@mcs.anl.gov</a> [<a href="mailto:petsc-dev-bounces@mcs.anl.gov" target="_blank">petsc-dev-bounces@mcs.anl.gov</a>] on behalf of Mark Adams [<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>]<br>
<b>Sent:</b> Friday, November 08, 2013 10:24 AM<br>
<b>To:</b> For users of the development version of PETSc<br>
<b>Subject:</b> [petsc-dev] Mira<br>
</font><br>
</div>
<div>
<div class="h5">
<div></div>
<div>
<div dir="ltr">I have not been able to get checkout PETSc on Mira with:
<div><br>
</div>
<div>
<div>[adams@miralac1 ~]$ git clone git@bitbucket.org:petsc/petsc.git</div>
<div>Initialized empty Git repository in /gpfs/mira-home/adams/petsc/.git/</div>
<div>Enter passphrase for key '/home/adams/.ssh/id_rsa': </div>
<div>Permission denied (publickey).</div>
<div>fatal: The remote end hung up unexpectedly</div>
</div>
<div><br>
</div>
<div><br>
</div>
<div>But I have been able to get this to work:</div>
<div><br>
</div>
<div>git clone -b maint <a href="https://bitbucket.org/petsc/petsc" target="_blank">
https://bitbucket.org/petsc/petsc</a> petsc<br>
</div>
<div><br>
</div>
<div>With this I am trying to build an OpenMP version, and I get these errors.</div>
<div><br>
</div>
<div>Mark</div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</body>
</html>