[mpich2-dev] Missing symbols when configuring with --disable-f77 --disable-f90

Lisandro Dalcin dalcinl at gmail.com
Mon Aug 3 22:08:00 CDT 2009

Dave, sorry, looking at it again, my last mail seems rather rude. If
that was your impression, I really apologize...

I was just trying to stress a point: if a library does not provide
some symbols, the companion header should not declare them. I think
you agree with that, right? When you build an executable, you likely
get a linker error, but if you build a shared library (like a Python
extension module) you do not notice the problem until runtime.

Of course, the other way (providing the call but making it generate an
error) has some precedent. Open MPI does this, and Microsoft HPC Pack
also takes this approach for some calls. This behavior is convenient
for my project (and for other MPI wrappers targeting other languages),
though I understand it may not be convenient for all codes out there.

On Mon, Aug 3, 2009 at 7:03 PM, Dave Goodell<goodell at mcs.anl.gov> wrote:
> On Aug 3, 2009, at 3:31 PM, Lisandro Dalcin wrote:
>> On Mon, Aug 3, 2009 at 3:44 PM, Dave Goodell<goodell at mcs.anl.gov> wrote:
>>> On Aug 3, 2009, at 1:23 PM, Lisandro Dalcin wrote:
>>> AFAIK there isn't a good way to provide these functions when f77/f90
>>> are disabled.
>>>  Furthermore I don't think that we can write safe "dummy" implementations
>>> since we aren't actually checking the Fortran build environment.
>> What about providing the symbols, but make the call generate an error?
>> Other MPI implementations do this, for example OpenMPI without ROMIO;
>> also Microsoft MPI precisely for MPI_Type_create_f90_XXX ...
> We could probably do something like that eventually.  Let me spend more time
> looking at it.  I suspect it won't be that straightforward because of the
> code generation involved in the src/binding/f90 directory.

I took a look at it, and perhaps you are right... A bare 'make'
succeeds at building the C sources, though the whole make step ended
up failing.

BTW, is src/binding/f90 the right place for these routines? Perhaps
"src/mpi/datatype" would be a more appropriate location for (at least
the high-level part of) that code? FYI, Open MPI seems to do something
like that, more or less (the language-specific binding code is
separated from the actual implementations).

Now, you can ignore the rest of my comments below... They are just the
product of insomnia...

>>> IMO the right fixes in this case are on either the user's side (don't
>>> disable f77/f90)
>> Many people out there are using a pre-built MPICH2. They do not have
>> control over how MPICH2 is built, so they cannot make the choice of
>> enabling/disabling Fortran. They just have to live with whatever the
>> sysadmin or the Linux distro provides...
> I understand that.  I was just pointing it out as one possible solution in
> some cases.  Also, if we can't rely on the user to change their MPI
> implementation then we can't rely on any fix that is provided on the MPICH2
> side for a year or so, which is why I recommended an mpi4py-side solution.

OK, understood. Fortunately, mpi4py already has the solution... It's
just that inexperienced users have trouble figuring out what's going on
and what to do next. I'll have to write better FAQs/docs :-)

After all, the root of this issue seems to be that some MPICH2
packager in some Linux distro decided that end MPI users are unlikely
to use Fortran as a programming language. Can you believe it?

> I wasn't suggesting that you check for all 555 functions.  Rather, I was
> trying to suggest that you check for these 5 functions, which should take
> quite a bit less time.

Unfortunately, that does not work in general :-(... mpi4py intends to
support MPI-1/MPI-2 implementations. If you take the union of the
missing/broken MPI-2 stuff in MPICH1/LAM/Open MPI/MPICH2 (and derived
implementations like Deino, Microsoft, Sun, SGI), you end up having to
test for a lot of stuff... Also take into account the changes from
release to release in each of these... After that, you realize that
you cannot "trust" any MPI implementation, and the only safe thing to
do, now and in the future, is to test for everything you use.

I do not know a priori what MPI implementation the end-user is going
to employ. But mpi4py intends to support that user (with minimal work
from her side)... After all, MPI is supposed to be portable...

> There is also another possibility here.  If you don't actually need
> interoperability with fortran you could just always skip these symbols in
> your wrapper.

Well, it's not me but the end users who could potentially need Fortran
interoperability. For their convenience, mpi4py tarball releases ship
auto-generated C wrapper code for almost all MPI calls, regardless of
whether users will actually need/use them.

> Also, remember that neither of these options deal with the users who are
> stuck on 1.0.8, 1.1.0, or 1.1.1 for the next year or so for various reasons.
>  You will still probably need to fix this on your end regardless of what we
> do.

That's a very valid point... However, it would be really nice if, for
the next MPICH2 release, I could say that MPICH2 works "out of the
box". After all, MPICH2 is currently, IMHO, the most robust
implementation; I have never had any issue with it... You have done a
pretty good job from the very beginning :-) .

Lisandro Dalcín
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
