ML with OpenMPI
Jed Brown
jed at 59A2.org
Tue Mar 25 10:42:48 CDT 2008
On Tue 2008-03-25 12:11, Lisandro Dalcin wrote:
> Well, then that would mean that I was using ML through PETSc in
> PARALLEL runs with no MPI support !!! Do you believe that scenario is
> possible?
No, ML was built correctly. The build output has -DHAVE_CONFIG_H on every build
line. What *is* happening is that the headers included by ksp/pc/impls/ml/ml.c
were essentially for a non-MPI build because HAVE_CONFIG_H was not defined.
That is, including ml_config.h defines the autoconf'd macros (like HAVE_MPI) and
ml_common.h uses them to set ML-local macros (like ML_MPI) *only* if
HAVE_CONFIG_H is defined. So when we include ml_include.h without defining
HAVE_CONFIG_H, we see the interface for a default (non-MPI) build. This
interface is (apparently) the same as an MPI build with MPICH2, but not with
OpenMPI. Since the library itself was built with MPI, there was no dangerous
type casting, and since you were running it with MPI, there was no problem.
With OpenMPI, however, the compiler sees a conflict between ML's dummy MPI
interface and OpenMPI's real one, because ML's dummy interface and MPICH2 both
use MPI_Comm = int while OpenMPI uses an opaque pointer type.
> Looking at the ML configure script and generated makefiles, there is
> a line saying:
>
> DEFS = -DHAVE_CONFIG_H
>
> Do you have that line? Next, this $(DEFS) is included in the compiler
> command definition.
>
> Additionally, I did
>
> $ nm -C libml.a | grep MPI
>
> and undefined references to the MPI functions appeared as expected.
>
>
> Sorry about my insistence, but I believe we need to figure out what
> exactly is going on.
No problem. I agree it is important. Does my explanation above make sense to
you?
Jed
More information about the petsc-dev mailing list