[petsc-users] Warning while compiling Fortran with PETSc

Bojan Niceno bojan.niceno.scientist at gmail.com
Fri Feb 11 00:26:28 CST 2022


What does seem to work is:

use mpi_f08

and using:

  call Mpi_Init_08(error)

  call Mpi_Comm_Size_08(MPI_COMM_WORLD, n_proc, error)

  call Mpi_Comm_Rank_08(MPI_COMM_WORLD, this_proc, error)


That is, subroutine calls with the _08 suffix.
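
For the record, here is a minimal, self-contained sketch of this pattern
(the program name and the Mpi_Finalize_08 call are my additions by
analogy; with a standard mpi_f08 module the unsuffixed names MPI_Init,
MPI_Comm_size and MPI_Comm_rank should resolve as well):

  program mpi_f08_demo
    use mpi_f08
    implicit none
    integer :: error, n_proc, this_proc

    call Mpi_Init_08(error)
    call Mpi_Comm_Size_08(MPI_COMM_WORLD, n_proc, error)
    call Mpi_Comm_Rank_08(MPI_COMM_WORLD, this_proc, error)

    print *, 'process', this_proc, 'of', n_proc

    call Mpi_Finalize_08(error)  ! by analogy with the calls above
  end program mpi_f08_demo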





On Fri, Feb 11, 2022 at 6:51 AM Bojan Niceno <
bojan.niceno.scientist at gmail.com> wrote:

> Thanks for the example.
>
> I do use -i8 -r8 (I know it is bad practice and am planning to get rid of
> it in the mid-term), and I did suspect that, but when I tried to compile
> without those options, the warning remained.
>
> When I switched from "include mpif.h" to:
>
> use mpi,
>
> or even:
>
> #include <petsc/finclude/petscsys.h>
>
>   use petscmpi
>
>   use petscsys
>
>
> I get the following type of messages:
>
> Error: There is no specific subroutine for the generic ‘mpi_barrier’ at (1)
> Comm_Mod/Parallel/Start.f90:11:22:
>
> Error: There is no specific subroutine for the generic ‘mpi_init’ at (1)
> Comm_Mod/Parallel/Start.f90:14:52:
>
> Error: There is no specific subroutine for the generic ‘mpi_comm_size’ at (1)
> Comm_Mod/Parallel/Start.f90:17:54:
>
> I was googling for a solution, but StackOverflow seems to be down for
> maintenance at the moment :-(
>
> I did manage to find that "include mpif.h" is obsolete, which I didn't
> know before :-)
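>
> For anyone finding this in the archives: those "no specific subroutine
> for the generic" errors are presumably the -i8 kind mismatch again,
> since the specific subroutines behind the mpi module's generics were
> compiled for default-kind integers.  A sketch of what the module header
> should look like with default kinds (the Comm_Start body here is
> illustrative, not my actual code):
>
> #include <petsc/finclude/petscsys.h>
>
>   module Comm_Mod
>     use petscmpi   ! MPI bindings matching the PETSc build
>     use petscsys
>     implicit none
>   contains
>     subroutine Comm_Start(n_proc, this_proc)
>       integer, intent(out) :: n_proc, this_proc  ! default kind, no -i8
>       integer :: error
>       call Mpi_Init(error)
>       call Mpi_Comm_Size(MPI_COMM_WORLD, n_proc, error)
>       call Mpi_Comm_Rank(MPI_COMM_WORLD, this_proc, error)
>     end subroutine Comm_Start
>   end module Comm_Mod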
>
>
>
>
>
>
>
>
>
> On Fri, Feb 11, 2022 at 5:29 AM Satish Balay <balay at mcs.anl.gov> wrote:
>
>> 1. you can call MPI_Init() before calling PetscInitialize().  For
>> example, check src/sys/tutorials/ex4f90.F90
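>>
>> A minimal sketch of that ordering (error handling trimmed; 'ierr' is a
>> plain integer here):
>>
>>   call MPI_Init(ierr)
>>   call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
>>   ! ... set up matrices, solve, etc. ...
>>   call PetscFinalize(ierr)
>>   call MPI_Finalize(ierr)
>>
>> PetscInitialize() detects that MPI is already initialized and does not
>> call MPI_Init() again; correspondingly, PetscFinalize() leaves
>> MPI_Finalize() to the caller.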
>>
>> 2. Are you using -i8 -r8 type flags when compiling your code? That
>> might cause issues when using mpif.h. Perhaps you can switch from
>> "include 'mpif.h'" to "use mpi" - in your module file - and see if
>> that helps.
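>>
>> (Roughly why: mpif.h places MPI_BOTTOM and similar sentinels in named
>> COMMON blocks sized by the default integer kind, so - as a sketch - the
>> two sides end up declaring:
>>
>>   ! in the MPI library, built without -i8: integer is 4 bytes
>>   integer :: mpi_bottom
>>   common /mpi_fortran_bottom/ mpi_bottom   ! block size 4 bytes
>>
>>   ! in your code, built with -i8: integer is 8 bytes
>>   integer :: mpi_bottom
>>   common /mpi_fortran_bottom/ mpi_bottom   ! block size 8 bytes
>>
>> which is exactly gfortran's "4 vs 8 bytes" complaint.)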
>>
>> Satish
>>
>> On Fri, 11 Feb 2022, Bojan Niceno wrote:
>>
>> > Dear both,
>> >
>> > Allow me to update you on the issue.  I tried to re-compile PETSc with
>> > different configuration options, as Satish suggested, and went further
>> > by pointing to the exact location of the OpenMPI libraries and include
>> > files installed by PETSc (for those configurations in which I used
>> > "--download-openmpi=1"), but the original problem, the warning "Named
>> > COMMON block ‘mpi_fortran_bottom’ at (1) shall be of the same size as
>> > elsewhere (4 vs 8 bytes)", persisted.
>> >
>> > In desperation, I completely removed OpenMPI from my workstation to
>> > make sure that only the one downloaded with PETSc is used, yet the
>> > warning was still there.  (That did resolve the Invalid
>> > MIT-MAGIC-COOKIE-1 warning, at least.)
>> >
>> > Now I am wondering if the problem originates from the fact that I
>> > already have all the necessary MPI routines implemented in Fortran.
>> > All calls, including the basic MPI_Init, MPI_Comm_Size and
>> > MPI_Comm_Rank, are made from Fortran.  I actually have a module called
>> > Comm_Mod which makes all MPI-related calls, and this module contains
>> > the line include 'mpif.h'.  That include statement does pick up the
>> > file from the PETSc installation, as no other MPI installation is left
>> > on my system, but it still somehow seems to be the origin of the
>> > common-block warning I observe.  I am also wondering whether the
>> > include 'mpif.h' in Fortran somehow collides with the line include
>> > ${PETSC_DIR}/lib/petsc/conf/variables that I put in my makefile in
>> > order to compile with PETSc.
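>> >
>> > For concreteness, a minimal makefile along the lines I am using would
>> > look roughly like this (a sketch, not my exact file; FLINKER and
>> > PETSC_LIB are defined by the included PETSc files, and the target and
>> > object names are just examples):
>> >
>> > include ${PETSC_DIR}/lib/petsc/conf/variables
>> > include ${PETSC_DIR}/lib/petsc/conf/rules
>> >
>> > Process: Main.o Comm_Mod.o
>> > 	${FLINKER} -o Process Main.o Comm_Mod.o ${PETSC_LIB}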
>> >
>> > I am really not sure whether it is possible to have the main program
>> > and all MPI initialization done from Fortran (as I have now) and then
>> > plug PETSc in on top of it.  Should that be possible?
>> >
>> >     Kind regards,
>> >
>> >     Bojan
>> >
>> > P.S. The sequential version works fine: I can compile without warnings
>> > and call PETSc solvers from Fortran without a glitch.
>> >
>> > On Thu, Feb 10, 2022 at 5:08 PM Bojan Niceno <
>> > bojan.niceno.scientist at gmail.com> wrote:
>> >
>> > > Dear Satish,
>> > >
>> > > Thanks for the advice.  I will try in a few hours, because it is
>> > > almost dinner time here (I am in Europe) and I am supposed to go out
>> > > with a friend this evening.
>> > >
>> > > Will let you know.  Thanks for the help; I highly appreciate it.
>> > >
>> > >
>> > >     Kind regards,
>> > >
>> > >     Bojan
>> > >
>> > >
>> > > On Thu, Feb 10, 2022 at 5:06 PM Satish Balay <balay at mcs.anl.gov>
>> wrote:
>> > >
>> > >> Hm - this is strange.
>> > >>
>> > >> Do you have 'xauth' installed?
>> > >>
>> > >> I would make sure xauth is installed, delete ~/.Xauthority, and
>> > >> reboot [or restart the X server].
>> > >>
>> > >> Yeah - it might not work - but perhaps worth a try..
>> > >>
>> > >> Or perhaps it's not X11 related..
>> > >>
>> > >> I would also try 'strace' on an application that is producing this
>> > >> message - to see if I can narrow it down further..
>> > >>
>> > >> Do you get this message with both of these runs?
>> > >>
>> > >> cd src/ksp/ksp/tutorials
>> > >> make ex2
>> > >> mpiexec -n 1 ./ex2
>> > >> ./ex2
>> > >>
>> > >> Satish
>> > >>
>> > >> On Thu, 10 Feb 2022, Bojan Niceno wrote:
>> > >>
>> > >> > Dear both,
>> > >> >
>> > >> > I work on an ASUS ROG laptop and don't use any NFS.  Everything
>> > >> > is on one computer, one disk.  That is why I couldn't resolve the
>> > >> > Invalid Magic Cookie issue: all the advice I've found about it
>> > >> > concerns remote access/display, which is not an issue for me.  My
>> > >> > laptop has an Nvidia GeForce RTX graphics card; maybe the Ubuntu
>> > >> > drivers are simply not able to cope with it.  I am out of ideas,
>> > >> > really.
>> > >> >
>> > >> >
>> > >> >     Cheers,
>> > >> >
>> > >> >     Bojan
>> > >> >
>> > >> > On Thu, Feb 10, 2022 at 4:53 PM Satish Balay <balay at mcs.anl.gov>
>> wrote:
>> > >> >
>> > >> > > Do the compute nodes and frontend share the same NFS?
>> > >> > >
>> > >> > > I would try the following [to see if they work]:
>> > >> > >
>> > >> > > - delete ~/.Xauthority [first check with 'xauth list']
>> > >> > > - set up ssh to not use X - i.e. add the following to ~/.ssh/config
>> > >> > >
>> > >> > > ForwardX11 no
>> > >> > > ForwardX11Trusted no
>> > >> > >
>> > >> > > [this can be tailored to apply only to your specific compute
>> > >> > > nodes - if needed]
>> > >> > >
>> > >> > > Satish
>> > >> > >
>> > >> > > On Thu, 10 Feb 2022, Matthew Knepley wrote:
>> > >> > >
>> > >> > > > On Thu, Feb 10, 2022 at 10:40 AM Bojan Niceno <
>> > >> > > > bojan.niceno.scientist at gmail.com> wrote:
>> > >> > > >
>> > >> > > > > Thanks a lot, now I feel much better.
>> > >> > > > >
>> > >> > > > > By the way, I can't get around the invalid magic cookie.  It
>> > >> > > > > has been occurring ever since I installed the OS (Ubuntu
>> > >> > > > > 20.04), so I eventually gave up and decided to live with it :-D
>> > >> > > > >
>> > >> > > >
>> > >> > > >
>> > >> > > > https://unix.stackexchange.com/questions/199891/invalid-mit-magic-cookie-1-key-when-trying-to-run-program-remotely
>> > >> > > >
>> > >> > > >   Thanks,
>> > >> > > >
>> > >> > > >     Matt
>> > >> > > >
>> > >> > > >
>> > >> > > > >     Cheers,
>> > >> > > > >
>> > >> > > > >     Bojan
>> > >> > > > >
>> > >> > > > > On Thu, Feb 10, 2022 at 4:37 PM Matthew Knepley <
>> > >> knepley at gmail.com>
>> > >> > > wrote:
>> > >> > > > >
>> > >> > > > >> On Thu, Feb 10, 2022 at 10:34 AM Bojan Niceno <
>> > >> > > > >> bojan.niceno.scientist at gmail.com> wrote:
>> > >> > > > >>
>> > >> > > > >>> Dear Satish,
>> > >> > > > >>>
>> > >> > > > >>> Thanks for the answer.  Your suggestion makes a lot of
>> > >> > > > >>> sense, but this is what I get as a result of that:
>> > >> > > > >>>
>> > >> > > > >>> Running check examples to verify correct installation
>> > >> > > > >>> Using PETSC_DIR=/home/niceno/Development/petsc-debug and PETSC_ARCH=arch-linux-c-debug
>> > >> > > > >>> Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process
>> > >> > > > >>> See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> > >> > > > >>> Invalid MIT-MAGIC-COOKIE-1 keylid velocity = 0.0016, prandtl # = 1., grashof # = 1.
>> > >> > > > >>> Number of SNES iterations = 2
>> > >> > > > >>> Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI processes
>> > >> > > > >>> See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> > >> > > > >>> Invalid MIT-MAGIC-COOKIE-1 keylid velocity = 0.0016, prandtl # = 1., grashof # = 1.
>> > >> > > > >>> Number of SNES iterations = 2
>> > >> > > > >>> Possible error running Fortran example src/snes/tutorials/ex5f with 1 MPI process
>> > >> > > > >>> See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> > >> > > > >>> Invalid MIT-MAGIC-COOKIE-1 keyNumber of SNES iterations =     4
>> > >> > > > >>> Completed test examples
>> > >> > > > >>>
>> > >> > > > >>> I am getting the "Possible error running Fortran example"
>> > >> > > > >>> warning with this.  It somehow looks more severe to me, but
>> > >> > > > >>> I could be wrong.
>> > >> > > > >>>
>> > >> > > > >>
>> > >> > > > >> You are getting this message because your MPI implementation
>> > >> > > > >> is printing
>> > >> > > > >>
>> > >> > > > >>   Invalid MIT-MAGIC-COOKIE-1 key
>> > >> > > > >>
>> > >> > > > >> It is still running fine, but this is an MPI configuration issue.
>> > >> > > > >>
>> > >> > > > >>   Thanks,
>> > >> > > > >>
>> > >> > > > >>      Matt
>> > >> > > > >>
>> > >> > > > >>> Any suggestions on what to do?
>> > >> > > > >>>
>> > >> > > > >>>
>> > >> > > > >>>     Kind regards,
>> > >> > > > >>>
>> > >> > > > >>>     Bojan
>> > >> > > > >>>
>> > >> > > > >>>
>> > >> > > > >>>
>> > >> > > > >>> On Wed, Feb 9, 2022 at 5:49 PM Satish Balay <
>> balay at mcs.anl.gov>
>> > >> > > wrote:
>> > >> > > > >>>
>> > >> > > > >>>> To clarify:
>> > >> > > > >>>>
>> > >> > > > >>>> you are using --download-openmpi=yes with petsc.  However,
>> > >> > > > >>>> you say:
>> > >> > > > >>>>
>> > >> > > > >>>> > > The mpif90 command which
>> > >> > > > >>>> > > I use to compile the code, wraps gfortran with OpenMPI
>> > >> > > > >>>>
>> > >> > > > >>>> This suggests a different install of OpenMPI is used to
>> > >> > > > >>>> build your code.
>> > >> > > > >>>>
>> > >> > > > >>>> One way to resolve this is: delete the current build of
>> > >> > > > >>>> PETSc and rebuild it with this same MPI [that you are using
>> > >> > > > >>>> with your application]:
>> > >> > > > >>>>
>> > >> > > > >>>> ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-metis --download-parmetis --download-cmake
>> > >> > > > >>>>
>> > >> > > > >>>> Also, PETSc provides a makefile format that minimizes such
>> > >> > > > >>>> conflicts:
>> > >> > > > >>>>
>> > >> > > > >>>> https://petsc.org/release/docs/manual/getting_started/#writing-c-c-or-fortran-applications
>> > >> > > > >>>>
>> > >> > > > >>>> Satish
>> > >> > > > >>>>
>> > >> > > > >>>> On Wed, 9 Feb 2022, Balay, Satish via petsc-users wrote:
>> > >> > > > >>>>
>> > >> > > > >>>> > Are you using the same MPI to build both PETSc and your
>> > >> > > > >>>> > application?
>> > >> > > > >>>> >
>> > >> > > > >>>> > Satish
>> > >> > > > >>>> >
>> > >> > > > >>>> > On Wed, 2022-02-09 at 05:21 +0100, Bojan Niceno wrote:
>> > >> > > > >>>> > > To whom it may concern,
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > I am working on a Fortran (2003) computational fluid
>> > >> > > > >>>> > > dynamics solver which is actually quite mature, was
>> > >> > > > >>>> > > parallelized with MPI from the very beginning, and comes
>> > >> > > > >>>> > > with its own suite of Krylov solvers.  Although the code
>> > >> > > > >>>> > > is self-contained, I am inclined to believe that it
>> > >> > > > >>>> > > would be better to use PETSc instead of my own
>> > >> > > > >>>> > > home-grown solvers.
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > In the attempt to do so, I have installed PETSc 3.16.4
>> > >> > > > >>>> > > with the following options:
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > ./configure --with-debugging=yes --download-openmpi=yes --download-fblaslapack=yes --download-metis=yes --download-parmetis=yes --download-cmake=yes
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > on a workstation running Ubuntu 20.04 LTS.  The mpif90
>> > >> > > > >>>> > > command which I use to compile the code wraps gfortran
>> > >> > > > >>>> > > with OpenMPI, hence the option "--download-openmpi=yes"
>> > >> > > > >>>> > > when configuring PETSc.
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > Anyhow, the installation of PETSc went fine and I
>> > >> > > > >>>> > > managed to link and run it with my code, but I am
>> > >> > > > >>>> > > getting the following messages during compilation:
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > Petsc_Mod.f90:18:6:
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >    18 |   use PetscMat, only: tMat, MAT_FINAL_ASSEMBLY
>> > >> > > > >>>> > >       |      1
>> > >> > > > >>>> > > Warning: Named COMMON block ‘mpi_fortran_bottom’ at (1)
>> > >> > > > >>>> > > shall be of the same size as elsewhere (4 vs 8 bytes)
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > Petsc_Mod.f90 is a module I wrote for interfacing with
>> > >> > > > >>>> > > PETSc.  Everything works, but these messages give me
>> > >> > > > >>>> > > reason to worry.
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > Can you tell what causes these warnings?  I would guess
>> > >> > > > >>>> > > they might appear if one mixes OpenMPI with MPICH, but I
>> > >> > > > >>>> > > don't think I even have MPICH on my system.
>> > >> > > > >>>> > >
>> > >> > > > >>>> > > Please let me know what you think.
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >     Cheers,
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >     Bojan
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >
>> > >> > > > >>>> > >
>> > >> > > > >>>> >
>> > >> > > > >>>> >
>> > >> > > > >>>>
>> > >> > > > >>>
>> > >> > > > >>
>> > >> > > > >> --
>> > >> > > > >> What most experimenters take for granted before they begin
>> > >> > > > >> their experiments is infinitely more interesting than any
>> > >> > > > >> results to which their experiments lead.
>> > >> > > > >> -- Norbert Wiener
>> > >> > > > >>
>> > >> > > > >> https://www.cse.buffalo.edu/~knepley/
>> > >> > > > >> <http://www.cse.buffalo.edu/~knepley/>
>> > >> > > > >>
>> > >> > > > >
>> > >> > > >
>> > >> > > >
>> > >> > >
>> > >> >
>> > >>
>> > >
>> >
>>
>

