[petsc-dev] OpenMP/Vec

Jed Brown jedbrown at mcs.anl.gov
Wed Mar 7 09:26:59 CST 2012


On Wed, Mar 7, 2012 at 09:12, Lawrence Mitchell
<lawrence.mitchell at ed.ac.uk> wrote:

> Moving forward, I've done a bunch of this macroisation, which makes
> the code more PETSc-like.
>
> As you point out, it's probably unwise to call omp_get_num_threads()
> on every loop iteration, so I went for the magic names in the end.
> A typical simple loop now looks like:
>
> VecOMPParallelBegin(xin, private(i) shared(x));
> for (i=__start; i<__end;i++) x[i] = work(x[i]);
> VecOMPParallelEnd();
>
> In the --without-openmp case, this transforms into:
>
> do {
>   PetscInt __start = 0, __end = xin->map->n;
>   for ...;
> } while (0);
>
> Rather than adding an nthreads slot to vectors, I put it in the generic
> PetscObject struct -- we're threading matrices too.
>
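
(For concreteness, the OpenMP side of those macros could be defined
roughly as follows. This is a sketch only: the __start/__end names and
the do/while fallback come from the message above, while the
stringization helper and the block-partitioning arithmetic are
assumptions.)

/* Sketch of plausible definitions; assumes <omp.h> is included and
   that PETSC_HAVE_OPENMP is the configure guard. */
#define PetscStr_(x) #x

#if defined(PETSC_HAVE_OPENMP)
#define VecOMPParallelBegin(xin, clauses)               \
  do {                                                  \
    PetscInt __N = (xin)->map->n;                       \
    _Pragma(PetscStr_(omp parallel clauses))            \
    {                                                   \
      PetscInt __nt    = omp_get_num_threads();         \
      PetscInt __tid   = omp_get_thread_num();          \
      PetscInt __start = (__N * __tid) / __nt;          \
      PetscInt __end   = (__N * (__tid + 1)) / __nt;
#define VecOMPParallelEnd() }} while (0)
#else
#define VecOMPParallelBegin(xin, clauses)               \
  do {                                                  \
    PetscInt __start = 0, __end = (xin)->map->n;
#define VecOMPParallelEnd() } while (0)
#endif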

We are going with PetscLayout for the distribution information in the
pthreads implementation. We have also discussed making "thread
communicator" information (how many threads and their affinities) an
attribute of an MPI_Comm. I'd be interested to hear your opinions on that.
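
A minimal sketch of the attribute idea, using the standard MPI-2
attribute-caching calls (the PetscThreadComm struct and the keyval
variable are hypothetical, just to illustrate the mechanism):

#include <mpi.h>

/* Hypothetical payload describing the "thread communicator" */
typedef struct {
  int  nthreads;    /* how many threads service this comm */
  int *affinities;  /* CPU id for each thread */
} PetscThreadComm;

static int Petsc_ThreadComm_keyval = MPI_KEYVAL_INVALID;

/* Attach thread info to a communicator; every object created on this
   comm can then recover it without a new slot in the object struct. */
static int PetscThreadCommAttach(MPI_Comm comm, PetscThreadComm *tc)
{
  if (Petsc_ThreadComm_keyval == MPI_KEYVAL_INVALID) {
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                           &Petsc_ThreadComm_keyval, NULL);
  }
  return MPI_Comm_set_attr(comm, Petsc_ThreadComm_keyval, tc);
}

static PetscThreadComm *PetscThreadCommGet(MPI_Comm comm)
{
  PetscThreadComm *tc;
  int              flag;
  MPI_Comm_get_attr(comm, Petsc_ThreadComm_keyval, &tc, &flag);
  return flag ? tc : NULL;
}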

All, should we consider moving all the "kernels" into their own functions
and just doing parallel region dispatch differently for OpenMP and
pthreads? I think that is the only reasonable way to have just one version
of the code. If we committed to always using separate side-effect-free
kernels, we might be able to reuse many of them with TBB and OpenCL as well.
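
As a sketch of what that could look like (all names and the fixed
kernel signature are invented for illustration): the kernel touches
only its [start,end) slice, and each backend supplies just the
dispatcher.

/* One side-effect-free kernel, written once; assumes petsc.h for
   PetscInt/PetscScalar and <omp.h> for the OpenMP backend. */
typedef void (*VecKernel)(PetscInt start, PetscInt end, void *ctx);

typedef struct { PetscScalar *x; PetscScalar alpha; } ScaleCtx;

static void VecScale_Kernel(PetscInt start, PetscInt end, void *ctx)
{
  ScaleCtx *sc = (ScaleCtx*)ctx;
  PetscInt  i;
  for (i = start; i < end; i++) sc->x[i] *= sc->alpha;
}

#if defined(PETSC_HAVE_OPENMP)
/* OpenMP dispatcher: block-partitions [0,n) over the thread team */
static void VecKernelDispatch(PetscInt n, VecKernel f, void *ctx)
{
  #pragma omp parallel
  {
    PetscInt nt  = omp_get_num_threads();
    PetscInt tid = omp_get_thread_num();
    f((n * tid) / nt, (n * (tid + 1)) / nt, ctx);
  }
}
#else
/* The pthreads backend would hand the same (f, ctx, range) triples to
   its worker pool; serial fallback shown here. */
static void VecKernelDispatch(PetscInt n, VecKernel f, void *ctx)
{
  f(0, n, ctx);
}
#endif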

Is there ever a case where we want to have _both_ OpenMP and Pthreads
objects in the same application? Maybe: e.g., when servicing two
multiphysics applications, one of which chooses OpenMP and the other
of which chooses Pthreads (perhaps on different communicators, so that
we don't have to fight over-subscription).


>
> I don't know how to write the correct tests for configure to probe
> for the existence of _Pragma, so that's still missing.
>

I'll do it.
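
(For reference, the probe can be a plain compile check along these
lines; _Pragma is C99, and the pragma itself is ignored harmlessly
when OpenMP is off:)

/* conftest.c: compiles iff the compiler supports the C99 _Pragma
   operator inside a macro expansion. */
#define TEST_PRAGMA(x) _Pragma(#x)
int main(void)
{
  TEST_PRAGMA(omp parallel)
  { ; }
  return 0;
}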