[mpich-discuss] Reg Shared region across processes
jhammond at alcf.anl.gov
Sat Jul 14 10:00:23 CDT 2012
I see two interpretations of your question.
If you are asking whether MPI can help you allocate shared memory à la
POSIX or SysV in a portable manner, the answer is currently "no," but
this exact feature has been added to MPI-3. Since POSIX shared memory is
quite portable, you might find it useful that MPICH2 1.5 has already
implemented the MPI-3 function for creating a communicator for the
ranks where shared memory works, i.e. a node. Look for
MPIX_Comm_split_type(.., MPIX_COMM_TYPE_SHARED, ..) in the mpix.h
header. You'll still have to allocate the shared memory yourself, but
that's not hard.
If you are asking whether MPI can help you implement a global view of
data, which is to say, the ability to read from and write to memory
owned by other processes, then MPI RMA, aka one-sided communication,
provides that.
I've used this feature extensively in its MPI-2 form, but the MPI-3
functionality will be even better.
Regardless of the implementation, you still have to pay attention to
synchronization and consistency. Shared memory programming is subject
to race conditions. I don't know exactly what you mean by "updates to
the region are uniform." Using load/store within a node via shared
memory will be "uniform" in that every process will be able to access
data with the same protocol, as opposed to load/store within a process
and MPI calls outside of it, but there is no magic that can hide NUMA
effects in the hardware, nor do POSIX, MPI, or most other rational
programming models provide completely coherent and consistent shared
memory across multiple processes. Heck, you can't even get that
within a single process if you're using PowerPC, for example.
Does this answer your question? If you want to learn about MPI-2 RMA,
you should get yourself a copy of "Using MPI-2".
On Fri, Jul 13, 2012 at 6:30 PM, Ramesh Vinayagam
<rvinayagam.85 at gmail.com> wrote:
> Is it possible to create a shared region across processes such that the
> updates to the region are uniform.
> Please let me know,
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381