[petsc-users] building PETSc-3.2p5 with openMPI-1.5.4 under Windows

Barry Smith bsmith at mcs.anl.gov
Fri Nov 18 16:02:19 CST 2011


   Thanks for the complete detailed error report.

On Nov 18, 2011, at 3:49 PM, NovA wrote:

> Hello everybody!
> 
> Recently I tried to build PETSc-3.2p5 under WindowsXP-x64 against the
> openMPI-1.5.4 binary package using the Intel C++ 11.1 compiler. I ran
> into a couple of strange problems but managed to resolve them, and I
> want to share the solutions. I hope this can help improve the
> documentation or the configuration procedure.
> 
> After a long stage of trial and error I managed to provide
> ./configure.py with options that worked. Under Cygwin the
> configuration takes forever... Anyway, the configure stage finished
> successfully, but the build stage ran into the following problems.
> 
> (1) Building stops at src/sys/viewer/impls/ascii/filev.c with syntax
> errors in mpi.h at the lines:
> OMPI_DECLSPEC  MPI_Fint MPI_Comm_c2f(MPI_Comm comm);
> and
> OMPI_DECLSPEC  MPI_Comm MPI_Comm_f2c(MPI_Fint comm);
> 
> Tedious investigation showed that this resulted from the substitutions
> in petscfix.h:
> #define MPI_Comm_f2c(a) (a)
> #define MPI_Comm_c2f(a) (a)
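> 
> For example, with these macros in effect the openMPI prototype
>   OMPI_DECLSPEC  MPI_Comm MPI_Comm_f2c(MPI_Fint comm);
> is rewritten by the preprocessor to
>   OMPI_DECLSPEC  MPI_Comm (MPI_Fint comm);
> which is exactly the kind of syntax error the compiler reports.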
> 
> These macro definitions came in turn from "TEST configureConversion"
> in configure.py. That test did indeed fail, but with an unresolved
> external symbol "ompi_mpi_comm_world" (NOT the symbol it verifies),
> simply because not all openMPI libraries were specified on the link
> command line (only -lmpi)...
> 
> So the real cause of the problem is incorrect MPI-related options
> passed to ./configure. I used "--with-mpi-dir=..." and
> "--with-cc="win32fe icl"" (the win32fe wrapper can't handle
> mpicc.exe). I expected these to configure the openMPI includes and
> libraries correctly, but they silently got the libraries wrong.
> 
> My solution is to remove the --with-mpi-dir configure option and
> replace it with --with-mpi-include="$MPI_DIR/include",
> --with-mpi-lib="[libmpi.lib,libopen-pal.lib,libopen-rte.lib]" (no
> spaces), --CC_LINKER_FLAGS="-L$MPI_DIR/lib" and --CFLAGS="-DOMPI_IMPORTS
> -DOPAL_IMPORTS -DORTE_IMPORTS". The values are taken from "mpicc.exe
> --showme".
> 
  Satish,

     Can you add testing of this more complicated configuration so it comes before the simpler -lmpi one, to prevent this problem?
> 
> (2) Building stops at src/sys/error/fp.c with errors that an
> integer/pointer expression is expected at the line
> if (feclearexcept(FE_ALL_EXCEPT)) ...
> 
> It seems the reason is that "Intel C++ 11.1" provides a "fenv.h"
> header, which is detected by ./configure, so PETSC_HAVE_FENV_H gets
> defined. But the function is declared in Intel's fenv.h as void:
> extern void _FENV_PUBAPI feclearexcept (int excepts) ;
> 
> For now I just commented out PETSC_HAVE_FENV_H in petscconf.h. Is
> there a better way to work around this?

   Satish,

     Can you add a test HAVE_FECLEAREXCEPT_RETURN_INT to configure, and then in the code, if it is not defined, call feclearexcept
  without the error checking?
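
     Something along these lines in fp.c (just a sketch; the PETSC_ prefix
  follows the PETSC_HAVE_FENV_H convention and the error macro used here
  is only illustrative):

  #if defined(PETSC_HAVE_FECLEAREXCEPT_RETURN_INT)
    if (feclearexcept(FE_ALL_EXCEPT)) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_LIB,"Cannot clear floating point exception flags");
  #else
    feclearexcept(FE_ALL_EXCEPT);  /* Intel's fenv.h declares it void, so there is no return value to check */
  #endif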

> 
> (3) Building stops at src\sys\objects\pinit.c with "expression must
> have a constant value" errors concerning MPI_COMM_NULL.
> I already reported this on the mailing list a year and a half ago for
> openMPI-1.4.1 and PETSc-3.1. Satish Balay then filed the bug with the
> openMPI developers ( https://svn.open-mpi.org/trac/ompi/ticket/2368 ).
> Unfortunately, the ticket is still open and the workaround for PETSc
> is the same: replace all occurrences of 'MPI_COMM_NULL' in pinit.c
> with '0'. It's probably worth refreshing that ticket somehow.

  Satish,

    Can this be fixed by having configure check for MPI_COMM_NULL and generate an MPI_COMM_NULL in petscfix.h if it doesn't exist?
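
    For instance, configure could generate something like this in
 petscfix.h when the check fails (sketch only; the guard name is
 hypothetical):

 #if !defined(PETSC_HAVE_MPI_COMM_NULL)
 #define MPI_COMM_NULL 0
 #endif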


   Let me know the results of each of these fixes (in 3.2) on petsc-maint.

    Thanks

   Barry


> 
> 
> Best regards,
>  Andrey


