[petsc-users] petsc without MPI

Povolotskyi, Mykhailo mpovolot at purdue.edu
Tue Nov 19 14:07:45 CST 2019


I see.

Actually, my goal is to compile PETSc without a real MPI so I can use it with libMesh.

You are saying that PETSC_HAVE_MPI is not a sign that PETSc is built with a real MPI. It means you have MPIUNI, which is serial code but has the interface of MPI.

Correct?
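
As an illustration, a minimal sketch of what MPIUNI provides, assuming a PETSc configured with --with-mpi=0 (an example written for this discussion, not taken from the build log): the MPI calls below compile against the stubs and always report a single rank.

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscMPIInt size;

  PetscInitialize(&argc, &argv, NULL, NULL);
  /* With MPIUNI these are stub implementations: no real MPI library
     is linked, and the communicator always holds exactly one rank. */
  MPI_Comm_size(PETSC_COMM_WORLD, &size);
  PetscPrintf(PETSC_COMM_WORLD, "number of ranks: %d\n", (int)size); /* prints 1 */
  PetscFinalize();
  return 0;
}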

On 11/19/2019 3:00 PM, Matthew Knepley wrote:
On Tue, Nov 19, 2019 at 2:58 PM Povolotskyi, Mykhailo <mpovolot at purdue.edu> wrote:

Let me explain the problem.

This log file contains

#ifndef PETSC_HAVE_MPI
#define PETSC_HAVE_MPI 1
#endif

while I need PETSc without MPI.

If you do not provide MPI, we provide MPIUNI. Do you see it linking to an MPI implementation, or using mpi.h?

    Matt
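
As a hedged aside: petscconf.h should also define PETSC_HAVE_MPIUNI in such a serial build, so code that must distinguish MPIUNI from a real MPI at compile time can check it along these lines (a sketch under that assumption):

#include <petscconf.h>

#if defined(PETSC_HAVE_MPIUNI)
/* Serial build: PETSC_HAVE_MPI is still 1, but the "MPI" is PETSc's stubs. */
#else
/* Build against a real MPI implementation. */
#endif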

On 11/19/2019 2:55 PM, Matthew Knepley wrote:
The log you sent shows configure completing successfully. Please retry and send the log for a failed run.

  Thanks,

     Matt

On Tue, Nov 19, 2019 at 2:53 PM Povolotskyi, Mykhailo via petsc-users <petsc-users at mcs.anl.gov> wrote:
Why did it not work, then?

On 11/19/2019 2:51 PM, Balay, Satish wrote:
> And I see from configure.log that you are using the correct option.
>
> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-scalar-type=real --with-x=0 --with-hdf5=0 --with-single-library=1 --with-shared-libraries=0 --with-log=0 --with-mpi=0 --with-clanguage=C++ --with-cxx-dialect=C++11 --CXXFLAGS="-fopenmp -fPIC" --CFLAGS="-fopenmp -fPIC" --with-fortran=0 --FFLAGS="-fopenmp -fPIC" --with-64-bit-indices=0 --with-debugging=0 --with-cc=gcc --with-fc=gfortran --with-cxx=g++ COPTFLAGS= CXXOPTFLAGS= FOPTFLAGS= --download-metis=0 --download-superlu_dist=0 --download-parmetis=0 --with-valgrind-dir=/apps/brown/valgrind/3.13.0_gcc-4.8.5 --download-mumps=1 --with-mumps-serial=1 --with-fortran-kernels=0 --with-blaslapack-lib="-Wl,-rpath,/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64  -L/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core " --with-blacs-lib=/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so --with-blacs-include=/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/include --with-scalapack=0
> <<<<<<<
>
> And configure completed successfully. What issue are you encountering? Why do you think it's activating MPI?
>
> Satish
>
>
> On Tue, 19 Nov 2019, Balay, Satish via petsc-users wrote:
>
>> On Tue, 19 Nov 2019, Povolotskyi, Mykhailo via petsc-users wrote:
>>
>>> Hello,
>>>
>>> I'm trying to build PETSC without MPI.
>>>
>>> Even if I specify --with_mpi=0, the configuration script still activates
>>> MPI.
>>>
>>> I attach the configure.log.
>>>
>>> What am I doing wrong?
>> The option is --with-mpi=0
>>
>> Satish
>>
>>
>>> Thank you,
>>>
>>> Michael.
>>>
>>>
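
For reference, a minimal serial configure line with the corrected option might look as follows (compilers taken from the options quoted above; the many other flags in the full log are omitted):

./configure --with-mpi=0 --with-cc=gcc --with-cxx=g++ --with-fortran=0 --with-debugging=0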



--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

