[petsc-dev] Mira

Nystrom, William D wdn at lanl.gov
Tue Nov 12 15:23:32 CST 2013


Attached are some results from scaling runs on Vulcan that I posted on another thread.  The
first set of plots shows results from before and after fixing a problem with VecNorm.

The second set of plots compares the pure MPI case with MPI + pthreads.

In both cases, the runs used two threads or ranks per core.
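
For anyone wanting to collect this kind of measurement themselves, a driver along the
following lines times a single KSPSolve; it is only a sketch, written against a recent
PETSc API rather than the petsc-dev of this thread, and the 1-D Laplacian test problem
and its size are placeholders, not the actual problem behind the attached plots.  The
number of ranks and threads per core would be chosen by the job launcher, as in the
runs above.

/* Sketch only: time a single KSPSolve on a placeholder 1-D Laplacian.
 * Written against a recent PETSc API; not the harness used for the plots. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PetscInt       i, Istart, Iend, n = 1000; /* placeholder global size */
  double         t0, t1;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* Assemble a distributed tridiagonal (1-D Laplacian) operator. */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     {ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    if (i < n - 1) {ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, n, &b);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  /* Solver and preconditioner come from the command line as usual. */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);

  /* Time only the solve, which is what the plots compare. */
  t0 = MPI_Wtime();
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  t1 = MPI_Wtime();
  ierr = PetscPrintf(PETSC_COMM_WORLD, "KSPSolve time: %g s\n", t1 - t0);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}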

Dave

________________________________________
From: Jed Brown [jedbrown at mcs.anl.gov]
Sent: Monday, November 11, 2013 12:09 PM
To: Mark Adams
Cc: Nystrom, William D; For users of the development version of PETSc
Subject: Re: [petsc-dev] Mira

Mark Adams <mfadams at lbl.gov> writes:
> I'm not sure they really do have threads, but they just wanted to see if
> PETSc could use threads.  It looks like only 40% of the run time is in
> PETSc, so we are shelving this, but I wanted to try threads anyway.

Okay, some programming is needed to make it function correctly on BG/Q.

If you just want to get a rough sense, you could try the OpenMP branch
from the folks at Imperial.
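
To give a rough idea of what thread parallelism looks like in a kernel such as
VecNorm, here is a plain OpenMP + MPI sketch of a 2-norm: threads reduce the locally
owned entries, then a single MPI_Allreduce combines the per-rank sums.  This is an
illustration only, not code from the Imperial branch or from PETSc itself.

/* Illustration only: a thread-parallel 2-norm in plain OpenMP + MPI.
 * Compile with an MPI wrapper and OpenMP enabled, e.g. "mpicc -fopenmp norm.c -lm". */
#include <math.h>
#include <stdio.h>
#include <mpi.h>

/* 2-norm of a distributed vector: each rank owns nlocal entries of x. */
static double vec_norm_2(const double *x, int nlocal, MPI_Comm comm)
{
  double local = 0.0, global = 0.0;
  int    i;

  /* Threads on this rank reduce the locally owned entries. */
#pragma omp parallel for reduction(+ : local)
  for (i = 0; i < nlocal; i++) local += x[i] * x[i];

  /* One collective per norm combines the per-rank partial sums. */
  MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
  return sqrt(global);
}

int main(int argc, char **argv)
{
  double x[8] = {1, 1, 1, 1, 1, 1, 1, 1};
  double nrm;
  int    rank;

  /* Threads never call MPI here, so plain MPI_Init is sufficient. */
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  nrm = vec_norm_2(x, 8, MPI_COMM_WORLD);
  if (!rank) printf("||x|| = %g\n", nrm);
  MPI_Finalize();
  return 0;
}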

Attachments:
- KSPSolve_Time_vs_Node_Count_CPU_Pthread_6400_vulcan.png (image/png, 16037 bytes)
  <http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20131112/3a0de7a6/attachment.png>
- KSPSolve_Time_vs_Node_Count_CPU_Pthread_MPI_6400_vulcan.png (image/png, 15520 bytes)
  <http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20131112/3a0de7a6/attachment-0001.png>