[petsc-dev] Mira

Nystrom, William D wdn at lanl.gov
Fri Nov 8 16:15:19 CST 2013


Mark,

I have not run GAMG on Vulcan.  Right now, I have been focusing
on strong scaling studies on a variety of machines, using the
more advanced capabilities of PETSc that are available in the
"next" branch.  With threadcomm, I have focused on the pthread
support and have looked at scaling from 1 to 64 nodes on several
machines.  My testing has uncovered a problem where the runs do
not scale when going from 1 node to 2 nodes.  I have done enough
testing with OpenMP to confirm that the problem occurs with
OpenMP as well as with pthreads.  The scaling problem exists
only in the step from 1 node to 2 nodes; the pthread support
scales fine from 2 to 64 nodes.  I have spent some time trying
to debug and fix the problem.  I did find and fix one issue with
VecNorm that accounted for part of the scaling loss, but as far
as I can tell at least one problem remains.  I'm currently doing
runs on Vulcan and on one of our Cray systems to assess the
current status of that problem.  At the moment, I've run out of
ideas for how to continue debugging, so I plan to do a few more
runs, generate some scaling plots, and then present the results
on petsc-dev in the hope that others have ideas for how to make
progress on the remaining problem.
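
(For concreteness, a representative threadcomm run of ex2 looks
roughly like the following.  This is a sketch, not my exact
command line: the -threadcomm_* options are taken from the
threadcomm documentation, the launcher varies by machine, and
the mesh size and thread count are illustrative.

  mpirun -n 2 ./ex2 -m 1024 -n 1024 -ksp_type cg -pc_type jacobi \
      -threadcomm_type pthread -threadcomm_nthreads 16 \
      -log_summary

Swapping -threadcomm_type pthread for -threadcomm_type openmp
selects the OpenMP backend.)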

I've also done a lot of scaling studies with the CUDA GPU
support and some with the OpenCL/ViennaCL support.  For the
latter, I'm waiting for Karli to finish some upgrades to
ViennaCL; I'm not sure where that work stands at the moment.

On a different note, running the ex2.c problem with CG and
Jacobi preconditioning on Vulcan with only MPI results in really
beautiful strong scaling, much better than on a dual-socket
Sandy Bridge cluster or a Cray XE-6.
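
(The MPI-only runs drop the threadcomm options from the same ex2
invocation, again with an illustrative mesh size and rank count:

  mpirun -n 512 ./ex2 -m 1024 -n 1024 -ksp_type cg -pc_type jacobi \
      -log_summary

with the rank count scaled along with the number of nodes.)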

Best regards,

Dave

________________________________
From: Mark Adams [mfadams at lbl.gov]
Sent: Friday, November 08, 2013 2:53 PM
To: Nystrom, William D
Cc: For users of the development version of PETSc
Subject: Re: [petsc-dev] Mira

Dave, do you have any performance data with OpenMP?  Data with GAMG would be good too.
Mark


On Fri, Nov 8, 2013 at 4:20 PM, Nystrom, William D <wdn at lanl.gov> wrote:
I've been using the next branch of PETSc and have built and run
src/ksp/ksp/examples/tutorials/ex2.c on Vulcan at LLNL with
the threadcomm package using pthreads and OpenMP.
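
(For reference, a threadcomm build is configured along these
lines; this is a sketch based on the configure options named in
the threadcomm documentation, not necessarily my exact line:

  ./configure --with-threadcomm --with-pthreadclasses --with-openmp

The threading backend is then chosen at runtime with
-threadcomm_type pthread or -threadcomm_type openmp.)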

Dave

--
Dave Nystrom
LANL HPC-5
Phone: 505-667-7913
Email: wdn at lanl.gov
Smail: Mail Stop B272
       Group HPC-5
       Los Alamos National Laboratory
       Los Alamos, NM 87545

________________________________
From: petsc-dev-bounces at mcs.anl.gov on behalf of Mark Adams [mfadams at lbl.gov]
Sent: Friday, November 08, 2013 10:24 AM
To: For users of the development version of PETSc
Subject: [petsc-dev] Mira

I have not been able to check out PETSc on Mira with:

[adams at miralac1 ~]$ git clone git@bitbucket.org:petsc/petsc.git
Initialized empty Git repository in /gpfs/mira-home/adams/petsc/.git/
Enter passphrase for key '/home/adams/.ssh/id_rsa':
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
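
(Presumably my public key is not registered with the Bitbucket
account.  A standard fix, assuming a stock git/ssh setup, would
be to add ~/.ssh/id_rsa.pub to the Bitbucket account's SSH keys
and verify the connection with "ssh -T git@bitbucket.org".)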


But I have been able to get this to work:

git clone -b maint https://bitbucket.org/petsc/petsc petsc

With this I am trying to build an OpenMP version, and I get some build errors.

Mark

