[petsc-dev] dealing with MPIUNi

Satish Balay balay at mcs.anl.gov
Fri Feb 26 14:01:47 CST 2010


I should have included facets-devel in this discussion. Will cc the
facets-devel list now [there are likely to be e-mail bounces due to
this - I'll handle the bounces on petsc-dev].

The discussion is at:
http://lists.mcs.anl.gov/pipermail/petsc-dev/2010-February/002200.html
http://lists.mcs.anl.gov/pipermail/petsc-dev/2010-February/002270.html

The one usage of petsc+mpiuni I know of is from uedge. I'm guessing
there are other usages from FACETS [I don't really know those other
usages].

Based on the previous discussions on the facets-devel list, I was
advocating the following for sequential builds with multiple packages
that each carry their own internal seq-mpi switch [wrapping normal MPI
code]:

- the best thing to do is to always use a proper MPI implementation,
  even for sequential builds, and make every package use that
  implementation [perhaps with an error message when a parallel run
  is attempted]

- if a proper MPI must be avoided, then one way to minimize conflicts
  is to use the seq-mpi from only one package and have the others use
  it as well. However, this does not work with all packages [as some
  packages really do need MPI - like hypre etc.].

So currently mpiuni is being used as the common seq-mpi [by uedge,
and perhaps other codes].
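
As a rough sketch of what that looks like from the package side
[hypothetical file name and flags - not actual uedge code]: a
sequential package written against the normal MPI API compiles
unchanged once mpiuni's mpi.h is on the include path:

/* pkg_seq.c - hypothetical sequential package code written against
   the standard MPI API. Compiled with something like
   -I$PETSC_DIR/include/mpiuni it picks up mpiuni's mpi.h instead of
   a real MPI's, and the stubs resolve at link time from the PETSc
   libraries. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int size, rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size); /* always 1 under mpiuni */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* always 0 under mpiuni */
  printf("size %d rank %d\n", size, rank);
  MPI_Finalize();
  return 0;
}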

Wrt uedge, I have yet to check how the current changes might affect
it [currently the uedge buildsystem does not work with petsc-dev, due
to the makefile changes]. Will check on this later.

On Fri, 26 Feb 2010, Barry Smith wrote:

> 
>  If we switch back slightly to the old model in the following way, will
> that resolve the problems with FACETS?
> 
> 1) expose the mpiuni include directory by adding the mpiuni include path to
> the list of include search paths

Wrt uedge - the current buildsystem picks up everything from the PETSc
makefiles, so the above change should work.

> 2) DO NOT set PETSC_HAVE_MPI

uedge does not use this - so this change should be fine.

> 3) DO NOT have a separate mpiuni library (which doesn't exist with single
> library anyways).

Since uedge links with petsc+mpi as detected from the PETSc makefiles,
this should work.

>  Slightly related note: is there any reason not to always #if
> defined(MPIUNI_AVOID_MPI_NAMESPACE), so that C MPI symbols don't appear in
> our libraries? Not that this is important, but I still like simple clean
> code without a bunch of confusing ifdefs that aren't needed.

Looks like when this flag is defined, the Fortran mpiuni symbols are
also not built.
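
For reference, my understanding of the mechanism [a simplified sketch,
not the actual mpiuni header]: when the flag is defined, the public
MPI_xxx names get #define-mapped to prefixed ones, so only the
prefixed symbols end up in the library:

/* simplified sketch of the namespace-avoidance idea */
typedef int MPI_Comm;                  /* mpiuni comms are plain ints */

#if defined(MPIUNI_AVOID_MPI_NAMESPACE)
#define MPI_Init      Petsc_MPI_Init
#define MPI_Comm_size Petsc_MPI_Comm_size
#endif

/* mpiuni's own source includes this header, so with the flag set the
   functions below compile - and export - as Petsc_MPI_xxx, which
   cannot clash with another package's MPI_xxx symbols. */
int MPI_Init(int *argc, char ***argv);
int MPI_Comm_size(MPI_Comm comm, int *size);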

satish


> 
> 
>   Barry
> 
> On Feb 26, 2010, at 1:00 PM, Barry Smith wrote:
> 
> > 
> > 1)  So FACETS Fortran code includes mpif.h directly AND includes PETSc
> > includes in the same file? Can those extra includes just be removed from
> > FACETS?
> > 2)  Some FACETS codes, or other packages that FACETS uses THAT DO NOT use
> > PETSc, can/do use PETSc's mpiuni mpif.h?
> > 
> > Are these the ONLY conflicts between the new MPIUni model and FACETS, or
> > are there other conflicts?
> > 
> > Is there any issue with not having a separate libmpiuni.a?
> > 
> > Barry
> > 
> > On Feb 26, 2010, at 12:46 PM, Satish Balay wrote:
> > 
> > > On Fri, 26 Feb 2010, Barry Smith wrote:
> > > 
> > > > 
> > > > On Feb 26, 2010, at 11:22 AM, Satish Balay wrote:
> > > > 
> > > > > I think eliminating mpi.h, mpi.mod etc. is not a good change. It's
> > > > > likely to break users' codes. I suspect it breaks FACETS [and
> > > > > perhaps pflotran].
> > > > 
> > > > Suspect isn't good enough. HOW does it break FACETS? It won't break
> > > > pflotran unless they have #include "mpi.h" directly in their code.
> > > 
> > > Just for this reason [prohibiting the use of 'mpi.h'/'mpif.h' from user
> > > code if one wants --with-mpi=0], the current change is bad.
> > > 
> > > 
> > > I've explained the facets/uedge model below. I'll check the uedge
> > > usage; I'll have to ask the facets folks to check the other usages.
> > > 
> > > > > 
> > > > > With this change we will continue to have MPI symbols [esp. from
> > > > > Fortran] in libpetsc.a. So this is not really a clean absorption of
> > > > > mpiuni into petsc. And I don't think we can avoid having these MPI
> > > > > symbols in -lpetsc.
> > > > 
> > > > Yes the fake "symbols" are in libpetsc.a, but what is wrong with
> > > > that? What is the disadvantage over having them in libmpiuni.a?
> > > > 
> > > > > 
> > > > > The fact that MPI API symbols exist in libpetsc.a makes PETSc not
> > > > > really a pure plant. It's still a plant/animal - so the goal of this
> > > > > code reorganization is still not met.
> > > > 
> > > > As I said in my previous email, only Matt's plan is perfect.
> > > 
> > > My argument is that the changed model is still imperfect, but now at
> > > the cost of breaking some user codes [so it's not worth the cost].
> > > 
> > > > > 
> > > > > [I might not have raised this issue earlier wrt merging mpiuni
> > > > > completely into petsc - that's my omission, sorry about that.]
> > > > > 
> > > > > 
> > > > > 
> > > > > I suspect the following usage in FACETS/UEDGE will break with this
> > > > > change:
> > > > >
> > > > > If you have package-A doing the same type of thing for its
> > > > > sequential implementation, it will have its own mpi_comm_size() etc.
> > > > > internally. With this, mixing package-A with sequential petsc will
> > > > > cause duplicate symbol conflicts.
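> > > > >
> > > > > [A minimal sketch of the clash - the file names are made up:]
> > > > >
> > > > > /* pkga_mpi.c - package-A's internal seq-mpi */
> > > > > int MPI_Comm_size(int comm, int *size) { *size = 1; return 0; }
> > > > >
> > > > > /* mpiuni/mpi.c - PETSc's stub defines the very same symbol */
> > > > > int MPI_Comm_size(int comm, int *size) { *size = 1; return 0; }
> > > > >
> > > > > /* linking both objects into one executable fails with something
> > > > >    like: multiple definition of `MPI_Comm_size' */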
> > > > 
> > > > The fix is to not have any symbols in MPIUNI that are called
> > > > MPI_xxx. This is handled in mpiuni's mpi.h with
> > > > #if defined(MPIUNI_AVOID_MPI_NAMESPACE)
> > > > we should just ALWAYS avoid the MPI namespace
> > > > 
> > > > > 
> > > > > The way I'm currently resolving this issue is: only one package
> > > > > should provide the [internal] MPI - the other packages are compiled
> > > > > with MPI enabled.
> > > > >
> > > > > I.e. PETSc is compiled with MPIUNI [but says MPI is enabled].
> > > > > Package-A is then compiled as MPI-enabled - but links with PETSc,
> > > > > which provides MPIUNI as the MPI.
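> > > > >
> > > > > [Roughly, with made-up names and flags, the layering looks like:]
> > > > >
> > > > > /* package-A is built as if a real MPI were present, but its
> > > > >    include path points at mpiuni - e.g. -I$PETSC_DIR/include/mpiuni
> > > > >    - and the stub implementations come from libpetsc at link
> > > > >    time: */
> > > > > #include <mpi.h>               /* resolves to mpiuni/mpi.h */
> > > > >
> > > > > void pkga_init(void)           /* hypothetical package-A routine */
> > > > > {
> > > > >   MPI_Init((int *)0, (char ***)0); /* satisfied by -lpetsc */
> > > > > }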
> > > > > 
> > > > > Also due to this, we added more stuff to MPIUNI to cover all the
> > > > > MPI symbol usage from FACETS.
> > > > > 
> > > > > 
> > > > > So - the previous MPIUNI implementation scheme, even though
> > > > > slightly inconsistent, provided a good user interface with minimal
> > > > > maintenance overhead. Matt's scheme makes everything consistent -
> > > > > but with extra maintenance overhead.
> > > > > 
> > > > 
> > > > We can certainly go back to the way it was if we need to. But you
> > > > have not demonstrated that need, you've only speculated.
> > > > Here is how it should work.
> > > > 1) Several packages each use their own fake MPI; everything is fine
> > > > unless two different mpi.h's get included, but that should not happen.
> > > 
> > > One of the ways it can conflict is:
> > >
> > > include 'package1/mpif.h'
> > > include 'mpiuni/mpif.h'
> > >
> > > and there will be duplicate variable conflicts for constants
> > > like MPI_COMM_WORLD, and others.
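> > >
> > > [The C analog of the same clash - a sketch with made-up headers:]
> > >
> > > /* package1's fake mpi.h declares, say: */
> > > typedef int MPI_Comm;
> > > #define MPI_COMM_WORLD 1
> > >
> > > /* and then mpiuni/mpi.h in the same translation unit: */
> > > typedef int MPI_Comm;    /* error: redefinition of 'MPI_Comm' */
> > > #define MPI_COMM_WORLD 2 /* warning/error: macro redefined    */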
> > > 
> > > I'll admit the old way of using mpiuni with other packages does not
> > > work with all packages [only with the select few codes that the facets
> > > folks tested with].
> > > 
> > > 
> > > > 2) Several packages each use PETSc's fake MPI. To support this we
> > > > only need to have them add -I$PETSC_DIR/include/mpiuni to their
> > > > compiles
> > > 
> > > In this case the user has to be conscious that he needs mpiuni from
> > > user code - and enable another petsc build option? [or create an
> > > unportable makefile]. One more additional cost of the current
> > > imperfect change.
> > > 
> > > So I still prefer the previous scheme.
> > > 
> > > Satish
> > > 
> > > > 
> > > > > So I still prefer the previous scheme. If that's not acceptable -
> > > > > then I guess we have to go with Matt's scheme [split up mpiuni into
> > > > > a separate package, add build/make support to it - and deal with all
> > > > > the petsc-maint traffic that creates..]. But the current change
> > > > > really breaks things - so we should not do this.
> > > > > 
> > > > > Satish
> > > > > 
> > > > > On Thu, 25 Feb 2010, Barry Smith wrote:
> > > > > 
> > > > > > 
> > > > > > After listening to our discussion of the half-plant/half-animal
> > > > > > handling of MPIUni, I have adopted the plant :-) model.
> > > > > > 
> > > > > > Background: All the packages in PETSc/packages and
> > > > > > BuildSystem/config/packages have the following paradigm, except
> > > > > > MPI and BlasLapack.
> > > > > >
> > > > > > 1) PETSc can be built --with-package or --with-package=0
> > > > > > 2) If --with-package=0 then there is no PETSC_HAVE_package
> > > > > > defined, no extra libraries to link against, and no extra include
> > > > > > paths to look in
> > > > > > 
> > > > > > BlasLapack breaks this paradigm in only one way! You cannot use
> > > > > > --with-blaslapack=0
> > > > > > 
> > > > > > MPI breaks the paradigm more completely. You can use --with-mpi=0,
> > > > > > BUT if you use --with-mpi=0 then PETSC_HAVE_MPI is STILL
> > > > > > defined!!!!!! There is an extra library to link against and an
> > > > > > extra include path to look in.
> > > > > > 
> > > > > > The two possible solutions to resolve this perverse beast are
> > > > > > 1) make mpiuni be a --download-mpiuni replacement for MPI, as we
> > > > > > do with --download-c-blaslapack (this would replace the current
> > > > > > --with-mpi=0 support).
> > > > > > 2) better support --with-mpi=0 without breaking the paradigm
> > > > > > 
> > > > > > I agree with Matt that 1 is the more elegant solution, since it
> > > > > > fits the paradigm perfectly. But having --with-mpi=0 is easier and
> > > > > > more understandable to the user than explaining about downloading
> > > > > > a dummy MPI.
> > > > > > 
> > > > > > Thus I have implemented and pushed 2). When you use --with-mpi=0
> > > > > > 1) the macro PETSC_HAVE_MPI is not set
> > > > > > 2) the list of include directories is not added to
> > > > > > 3) the list of libraries linked against is not added to.
> > > > > > 
> > > > > > I have implemented 2) and 3) by having in petsc.h (Fortran also)
> > > > > > #if defined(PETSC_HAVE_MPI)
> > > > > > #include "mpi.h"
> > > > > > #else
> > > > > > #include "mpiuni/mpi.h"
> > > > > > #endif
> > > > > > and by always putting the dummy MPI stubs into the PETSc libraries,
> > > > > > for both single-library and multiple-library PETSc installs.
> > > > > > 
> > > > > > Note: this means one cannot have an #include "mpi.h" in the uni
> > > > > > case, which bothered me initially, but then Lisandro convinced me
> > > > > > it was not a bad thing.
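> > > > > >
> > > > > > [So the portable pattern for user source becomes, roughly - a
> > > > > > sketch, not FACETS code:]
> > > > > >
> > > > > > /* let petsc.h pick the right header for the build mode: the
> > > > > >    real mpi.h with MPI, mpiuni/mpi.h under --with-mpi=0 */
> > > > > > #include <petsc.h>
> > > > > >
> > > > > > int main(int argc, char **argv)
> > > > > > {
> > > > > >   PetscInitialize(&argc, &argv, 0, 0);
> > > > > >   /* MPI_COMM_WORLD etc. are usable here either way */
> > > > > >   PetscFinalize();
> > > > > >   return 0;
> > > > > > }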
> > > > > > 
> > > > > > The actual code changes to implement this were, of course, tiny.
> > > > > > It is not perfect (only --download-mpiuni would be perfect :-),
> > > > > > but it is better than before.
> > > > > > 
> > > > > > 
> > > > > > Sorry Matt,
> > > > > > 
> > > > > > 
> > > > > > Barry
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > 
> > > > 
> > > 
> > 
> 



