<div dir="ltr">Dear both,<div><br></div><div>It was the compiler options for integer lengths after all, thanks. Now I have corrected it all in my code: all integers have explicitly defined lengths, and I am using the mpi_f08 module instead of the obsolete mpif.h. It was a daunting task (&gt; 800 files, &gt; 64,000 lines of code), but I am happy with the outcome. Now I can continue with PETSc :-)</div><div><br></div><div> Cheers</div><div><br></div><div> Bojan</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 11, 2022 at 7:26 AM Bojan Niceno <<a href="mailto:bojan.niceno.scientist@gmail.com">bojan.niceno.scientist@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">What does seem to work is:<div><br></div><div><font face="monospace">use mpi_f08</font></div><div><br></div><div>and using:</div><div><br></div><div><font face="monospace"> call Mpi_Init_08(error) <br> call Mpi_Comm_Size_08(MPI_COMM_WORLD, n_proc, error) <br> call Mpi_Comm_Rank_08(MPI_COMM_WORLD, this_proc, error) </font> <br></div><div><br></div><div>That is, function calls with the _08 extension.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 11, 2022 at 6:51 AM Bojan Niceno <<a href="mailto:bojan.niceno.scientist@gmail.com" target="_blank">bojan.niceno.scientist@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Thanks for the example.<div><br></div><div>I do use -i8 -r8 (I know it is a bad practice and am planning to get rid of it in the mid term), and I did suspect that, but when I tried to compile without those options, the warning remained.</div><div><br></div><div>When I switched from "include mpif.h" to:
</div><div><br></div><div><font face="monospace">use mpi</font>,</div><div><br></div><div>or even:</div><div><br></div><div><font face="monospace">#include &lt;petsc/finclude/petscsys.h&gt; <br> use petscmpi <br> use petscsys </font> <br></div><div><br></div><div>I get the following type of messages:</div><div><br></div><div><font face="monospace">Error: There is no specific subroutine for the generic ‘mpi_barrier’ at (1)<br>Comm_Mod/Parallel/Start.f90:11:22:<br><br>Error: There is no specific subroutine for the generic ‘mpi_init’ at (1)<br>Comm_Mod/Parallel/Start.f90:14:52:<br><br>Error: There is no specific subroutine for the generic ‘mpi_comm_size’ at (1)<br>Comm_Mod/Parallel/Start.f90:17:54:</font><br><br></div><div>I was googling for a solution, but StackOverflow seems to be down for maintenance at the moment :-(</div><div><br></div><div>I did manage to find that "include mpif.h" is obsolete, which I didn't know before :-)</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 11, 2022 at 5:29 AM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">1. you can call MPI_Init() before calling PetscInitialize(). For<br>
example - check src/sys/tutorials/ex4f90.F90<br>
<br>
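Point 1 can be sketched like this (a minimal sketch in the spirit of ex4f90.F90, not the tutorial verbatim; program and variable names here are illustrative). PETSc notices that MPI is already initialized and attaches to it instead of calling MPI_Init itself:<br>
<br>
```fortran
program init_outside_petsc
#include <petsc/finclude/petscsys.h>
      use petscsys
      use mpi
      implicit none
      PetscErrorCode :: ierr
      integer        :: mpi_err

      ! The application owns MPI startup ...
      call MPI_Init(mpi_err)

      ! ... and PETSc attaches to the already-running MPI.
      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)

      ! ... PETSc calls go here ...

      call PetscFinalize(ierr)

      ! Since the application called MPI_Init, it also owns MPI shutdown.
      call MPI_Finalize(mpi_err)
end program init_outside_petsc
```
<br>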
2. Are you using -i8 -r8 type flags when compiling your code? That<br>
might cause issues when using mpif.h. Perhaps you can switch from<br>
"include 'mpif.h'" to "use mpi" - in your module file - and see if<br>
that helps.<br>
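The suggested switch might look like this inside a module such as Comm_Mod (a hedged sketch using the subroutine and variable names from this thread). Note that with "use mpi" the compiler checks argument kinds against the library's interfaces, so -i8 style flags that widen the default integer trigger "no specific subroutine for the generic" errors unless the MPI arguments have explicit lengths:<br>
<br>
```fortran
module Comm_Mod
      use mpi                        ! replaces: include 'mpif.h'
      implicit none

      ! Explicit 4-byte integers stay compatible with the MPI
      ! interfaces even when the rest of the code uses -i8.
      integer(4) :: n_proc, this_proc, error

contains

      subroutine Comm_Mod_Start()
            call MPI_Init(error)
            call MPI_Comm_Size(MPI_COMM_WORLD, n_proc,    error)
            call MPI_Comm_Rank(MPI_COMM_WORLD, this_proc, error)
      end subroutine

end module Comm_Mod
```
<br>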
<br>
Satish<br>
<br>
On Fri, 11 Feb 2022, Bojan Niceno wrote:<br>
<br>
> Dear both,<br>
> <br>
> Allow me to update you on the issue. I tried to re-compile PETSc with<br>
> different configuration options as Satish suggested, and went further on by<br>
> specifying exact location of OpenMPI libraries and include files to the<br>
> ones installed by PETSc (for those configurations for which I used<br>
> "--download-openmpi=1") and the original problem, the warning Named COMMON<br>
> block ‘mpi_fortran_bottom’ at (1) shall be of the same size as elsewhere (4<br>
> vs 8 bytes), prevailed.<br>
> <br>
> In desperation, I completely removed OpenMPI from my workstation to make<br>
> sure that only those which are downloaded with PETSc are used, yet the<br>
> warning was still there. (That resolved the Invalid MIT-MAGIC-COOKIE-1<br>
> warning at least)<br>
> <br>
> Now I am wondering if the problem originates from the fact that I already<br>
> have all the necessary MPI routines developed in Fortran? All calls,<br>
> including the basic MPI_Init, MPI_Comm_Size and MPI_Comm_Rank, are done<br>
> from Fortran. I actually have a module called Comm_Mod which does all<br>
> MPI-related calls, and this module contains line include 'mpif.h'. That<br>
> include statement does take the file from PETSc installation as no other<br>
> MPI installation is left on my system, but still it somehow seems to be the<br>
> origin of the warning on common blocks I observe. Now I am wondering if<br>
> the include 'mpif.h' from Fortran somehow collides with the option include<br>
> ${PETSC_DIR}/lib/petsc/conf/variables I put in my makefile in order to<br>
> compile with PETSc.<br>
> <br>
> I am really not sure if it is possible to have main program and all MPI<br>
> initialization done from Fortran (as I have now) and then plug PETSc on top<br>
> of it? Should that be possible?<br>
> <br>
> Kind regards,<br>
> <br>
> Bojan<br>
> <br>
> P.S. The sequential version works fine, I can compile without warning and<br>
> can call PETSc solvers from Fortran without a glitch.<br>
> <br>
> On Thu, Feb 10, 2022 at 5:08 PM Bojan Niceno <<br>
> <a href="mailto:bojan.niceno.scientist@gmail.com" target="_blank">bojan.niceno.scientist@gmail.com</a>> wrote:<br>
> <br>
> > Dear Satish,<br>
> ><br>
> > Thanks for the advice. I will try in a few hours because it is almost<br>
> > dinner time with me (I am in Europe) and I am supposed to go out with a<br>
> > friend this evening.<br>
> ><br>
> > Will let you know. Thanks for help, I highly appreciate it.<br>
> ><br>
> ><br>
> > Kind regards,<br>
> ><br>
> > Bojan<br>
> ><br>
> ><br>
> > On Thu, Feb 10, 2022 at 5:06 PM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br>
> ><br>
> >> Hm - this is strange.<br>
> >><br>
> >> Do you have 'xauth' installed?<br>
> >><br>
> >> I would make sure xauth is installed, delete ~/.Xauthority - and reboot<br>
> >> [or restart the X server]<br>
> >><br>
> >> Yeah - it might not work - but perhaps worth a try..<br>
> >><br>
> >> Or perhaps its not X11 related..<br>
> >><br>
> >> I would also try 'strace' on an application that is producing this<br>
> >> message - to see if I can narrow down further..<br>
> >><br>
> >> Do you get this message with both (runs)?:<br>
> >><br>
> >> cd src/ksp/ksp/tutorials<br>
> >> make ex2<br>
> >> mpiexec -n 1 ./ex2<br>
> >> ./ex2<br>
> >><br>
> >> Satish<br>
> >><br>
> >> On Thu, 10 Feb 2022, Bojan Niceno wrote:<br>
> >><br>
> >> > Dear both,<br>
> >> ><br>
> >> > I work on an ASUS ROG laptop and don't use any NFS. Everything is on<br>
> >> one<br>
> >> > computer, one disk. That is why I couldn't resolve the Invalid Magic<br>
> >> > Cookie, because all the advice I've found about it concerns the remote<br>
> >> > access/display. It is not an issue for me. My laptop has an Nvidia<br>
> >> > GeForce RTX graphical card, maybe Ubuntu drivers are simply not able to<br>
> >> > cope with it. I am out of ideas, really.<br>
> >> ><br>
> >> ><br>
> >> > Cheers,<br>
> >> ><br>
> >> > Bojan<br>
> >> ><br>
> >> > On Thu, Feb 10, 2022 at 4:53 PM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br>
> >> ><br>
> >> > > Do the compute nodes and frontend share the same NFS?<br>
> >> > ><br>
> >> > > I would try the following [to see if they work):<br>
> >> > ><br>
> >> > > - delete ~/.Xauthority [first check with 'xauth list')<br>
> >> > > - setup ssh to not use X - i.e add the following to ~/.ssh/config<br>
> >> > ><br>
> >> > > ForwardX11 no<br>
> >> > > ForwardX11Trusted no<br>
> >> > ><br>
> >> > > [this can be tailored to apply only to your specific compute nodes -<br>
> >> if<br>
> >> > > needed]<br>
> >> > ><br>
> >> > > Satish<br>
> >> > ><br>
> >> > > On Thu, 10 Feb 2022, Matthew Knepley wrote:<br>
> >> > ><br>
> >> > > > On Thu, Feb 10, 2022 at 10:40 AM Bojan Niceno <<br>
> >> > > > <a href="mailto:bojan.niceno.scientist@gmail.com" target="_blank">bojan.niceno.scientist@gmail.com</a>> wrote:<br>
> >> > > ><br>
> >> > > > > Thanks a lot, now I feel much better.<br>
> >> > > > ><br>
> >> > > > > By the way, I can't get around the invalid magic cookie. It is<br>
> >> > > occurring<br>
> >> > > > > ever since I installed the OS (Ubuntu 20.04) so I eventually gave<br>
> >> up<br>
> >> > > and<br>
> >> > > > > decided to live with it :-D<br>
> >> > > > ><br>
> >> > > ><br>
> >> > > ><br>
> >> > ><br>
> >> <a href="https://unix.stackexchange.com/questions/199891/invalid-mit-magic-cookie-1-key-when-trying-to-run-program-remotely" rel="noreferrer" target="_blank">https://unix.stackexchange.com/questions/199891/invalid-mit-magic-cookie-1-key-when-trying-to-run-program-remotely</a><br>
> >> > > ><br>
> >> > > > Thanks,<br>
> >> > > ><br>
> >> > > > Matt<br>
> >> > > ><br>
> >> > > ><br>
> >> > > > > Cheers,<br>
> >> > > > ><br>
> >> > > > > Bojan<br>
> >> > > > ><br>
> >> > > > > On Thu, Feb 10, 2022 at 4:37 PM Matthew Knepley <<br>
> >> <a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>><br>
> >> > > wrote:<br>
> >> > > > ><br>
> >> > > > >> On Thu, Feb 10, 2022 at 10:34 AM Bojan Niceno <<br>
> >> > > > >> <a href="mailto:bojan.niceno.scientist@gmail.com" target="_blank">bojan.niceno.scientist@gmail.com</a>> wrote:<br>
> >> > > > >><br>
> >> > > > >>> Dear Satish,<br>
> >> > > > >>><br>
> >> > > > >>> Thanks for the answer. Your suggestion makes a lot of sense,<br>
> >> but<br>
> >> > > this<br>
> >> > > > >>> is what I get as a result of that:<br>
> >> > > > >>><br>
> >> > > > >>> Running check examples to verify correct installation<br>
> >> > > > >>> Using PETSC_DIR=/home/niceno/Development/petsc-debug and<br>
> >> > > > >>> PETSC_ARCH=arch-linux-c-debug<br>
> >> > > > >>> Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI<br>
> >> > > process<br>
> >> > > > >>> See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html</a><br>
> >> > > > >>> Invalid MIT-MAGIC-COOKIE-1 keylid velocity = 0.0016, prandtl #<br>
> >> = 1.,<br>
> >> > > > >>> grashof # = 1.<br>
> >> > > > >>> Number of SNES iterations = 2<br>
> >> > > > >>> Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI<br>
> >> > > processes<br>
> >> > > > >>> See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html</a><br>
> >> > > > >>> Invalid MIT-MAGIC-COOKIE-1 keylid velocity = 0.0016, prandtl #<br>
> >> = 1.,<br>
> >> > > > >>> grashof # = 1.<br>
> >> > > > >>> Number of SNES iterations = 2<br>
> >> > > > >>> Possible error running Fortran example src/snes/tutorials/ex5f<br>
> >> with 1<br>
> >> > > > >>> MPI process<br>
> >> > > > >>> See <a href="http://www.mcs.anl.gov/petsc/documentation/faq.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/documentation/faq.html</a><br>
> >> > > > >>> Invalid MIT-MAGIC-COOKIE-1 keyNumber of SNES iterations = 4<br>
> >> > > > >>> Completed test examples<br>
> >> > > > >>><br>
> >> > > > >>> I am getting the "Possible error running Fortran example"<br>
> >> warning<br>
> >> > > with<br>
> >> > > > >>> this. This somehow looks more severe to me. But I could be<br>
> >> wrong.<br>
> >> > > > >>><br>
> >> > > > >><br>
> >> > > > >> You are getting this message because your MPI implementation is<br>
> >> > > printing<br>
> >> > > > >><br>
> >> > > > >> Invalid MIT-MAGIC-COOKIE-1 key<br>
> >> > > > >><br>
> >> > > > >> It is still running fine, but this is an MPI configuration issue.<br>
> >> > > > >><br>
> >> > > > >> Thanks,<br>
> >> > > > >><br>
> >> > > > >> Matt<br>
> >> > > > >><br>
> >> > > > >> Any suggestions what to do?<br>
> >> > > > >>><br>
> >> > > > >>><br>
> >> > > > >>> Kind regards,<br>
> >> > > > >>><br>
> >> > > > >>> Bojan<br>
> >> > > > >>><br>
> >> > > > >>><br>
> >> > > > >>><br>
> >> > > > >>> On Wed, Feb 9, 2022 at 5:49 PM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>><br>
> >> > > wrote:<br>
> >> > > > >>><br>
> >> > > > >>>> To clarify:<br>
> >> > > > >>>><br>
> >> > > > >>>> you are using --download-openmpi=yes with petsc. However you<br>
> >> say:<br>
> >> > > > >>>><br>
> >> > > > >>>> > > The mpif90 command which<br>
> >> > > > >>>> > > I use to compile the code, wraps gfortran with OpenMPI<br>
> >> > > > >>>><br>
> >> > > > >>>> This suggests a different install of OpenMPI is used to build<br>
> >> your<br>
> >> > > code.<br>
> >> > > > >>>><br>
> >> > > > >>>> One way to resolve this is - delete current build of PETSc -<br>
> >> and<br>
> >> > > > >>>> rebuild it with this same MPI [that you are using with your<br>
> >> > > application]<br>
> >> > > > >>>><br>
> >> > > > >>>> ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90<br>
> >> > > > >>>> --download-fblaslapack --download-metis --download-parmetis<br>
> >> > > --download-cmake<br>
> >> > > > >>>><br>
> >> > > > >>>> Also PETSc provides makefile format that minimizes such<br>
> >> conflicts..<br>
> >> > > > >>>><br>
> >> > > > >>>><br>
> >> > > > >>>><br>
> >> > ><br>
> >> <a href="https://petsc.org/release/docs/manual/getting_started/#writing-c-c-or-fortran-applications" rel="noreferrer" target="_blank">https://petsc.org/release/docs/manual/getting_started/#writing-c-c-or-fortran-applications</a><br>
> >> > > > >>>><br>
> >> > > > >>>> Satish<br>
> >> > > > >>>><br>
> >> > > > >>>> On Wed, 9 Feb 2022, Balay, Satish via petsc-users wrote:<br>
> >> > > > >>>><br>
> >> > > > >>>> > Are you using the same MPI to build both PETSc and your<br>
> >> > > appliation?<br>
> >> > > > >>>> ><br>
> >> > > > >>>> > Satish<br>
> >> > > > >>>> ><br>
> >> > > > >>>> > On Wed, 2022-02-09 at 05:21 +0100, Bojan Niceno wrote:<br>
> >> > > > >>>> > > To whom it may concern,<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > I am working on a Fortran (2003) computational fluid<br>
> >> dynamics<br>
> >> > > > >>>> solver,<br>
> >> > > > >>>> > > which is actually quite mature, was parallelized with MPI<br>
> >> from<br>
> >> > > the<br>
> >> > > > >>>> > > very beginning and it comes with its own suite of Krylov<br>
> >> > > solvers.<br>
> >> > > > >>>> > > Although the code is self-sustained, I am inclined to<br>
> >> believe<br>
> >> > > that<br>
> >> > > > >>>> it<br>
> >> > > > >>>> > > would be better to use PETSc instead of my own home-grown<br>
> >> > > solvers.<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > In the attempt to do so, I have installed PETSc 3.16.4 with<br>
> >> > > > >>>> following<br>
> >> > > > >>>> > > options:<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > ./configure --with-debugging=yes --download-openmpi=yes<br>
> >> > > --download-<br>
> >> > > > >>>> > > fblaslapack=yes --download-metis=yes<br>
> >> --download-parmetis=yes --<br>
> >> > > > >>>> > > download-cmake=yes<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > on a workstation running Ubuntu 20.04 LTS. The mpif90<br>
> >> command<br>
> >> > > which<br>
> >> > > > >>>> > > I use to compile the code, wraps gfortran with OpenMPI,<br>
> >> hence<br>
> >> > > the<br>
> >> > > > >>>> > > option "--download-openmpi=yes" when configuring PETSc.<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Anyhow, installation of PETSc went fine, I managed to link<br>
> >> and<br>
> >> > > run<br>
> >> > > > >>>> it<br>
> >> > > > >>>> > > with my code, but I am getting the following messages<br>
> >> during<br>
> >> > > > >>>> > > compilation:<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Petsc_Mod.f90:18:6:<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > 18 | use PetscMat, only: tMat, MAT_FINAL_ASSEMBLY<br>
> >> > > > >>>> > > | 1<br>
> >> > > > >>>> > > Warning: Named COMMON block ‘mpi_fortran_bottom’ at (1)<br>
> >> shall<br>
> >> > > be of<br>
> >> > > > >>>> > > the same size as elsewhere (4 vs 8 bytes)<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Petsc_Mod.f90 is a module I wrote for interfacing PETSc.<br>
> >> All<br>
> >> > > works,<br>
> >> > > > >>>> > > but these messages give me a reason to worry.<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Can you tell what causes this warnings? I would guess they<br>
> >> > > might<br>
> >> > > > >>>> > > appear if one mixes OpenMPI with MPICH, but I don't think<br>
> >> I even<br>
> >> > > > >>>> have<br>
> >> > > > >>>> > > MPICH on my system.<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Please let me know what you think about it?<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Cheers,<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > > Bojan<br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > ><br>
> >> > > > >>>> > ><br>
> >> > > > >>>> ><br>
> >> > > > >>>> ><br>
> >> > > > >>>><br>
> >> > > > >>><br>
> >> > > > >><br>
> >> > > > >> --<br>
> >> > > > >> What most experimenters take for granted before they begin their<br>
> >> > > > >> experiments is infinitely more interesting than any results to<br>
> >> which<br>
> >> > > their<br>
> >> > > > >> experiments lead.<br>
> >> > > > >> -- Norbert Wiener<br>
> >> > > > >><br>
> >> > > > >> <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
> >> > > > >> <<a href="http://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">http://www.cse.buffalo.edu/~knepley/</a>><br>
> >> > > > >><br>
> >> > > > ><br>
> >> > > ><br>
> >> > > ><br>
> >> > ><br>
> >> ><br>
> >><br>
> ><br>
> <br>
</blockquote></div>
</blockquote></div>
</blockquote></div>