MPICH vs. AIX-MPI on frost?
John Tannahill
tannahill1 at llnl.gov
Wed Jul 23 19:53:00 CDT 2003
Tyce,
I won't be in tomorrow, back on Friday. I am able to run my Fortran
test code now on seaborg by linking in the extra library mentioned below
(this is a workaround for now). Next step is debugging it and getting
it to match the results from the C code. I don't recall getting an
"unrecognized fortran name mapping" on frost, but who knows? Don't
think that issue is related to mine anyway; believe I am straight
IBM on seaborg, unless something is going on behind the scenes.
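
For reference, the workaround link step looks something like this (the
compiler driver and paths here are illustrative, not the exact command
I used):

    mpxlf90 -o testf testf.f90 -I<pnetcdf>/include \
        -L<pnetcdf>/lib -lpnetcdf -lnetcdf

The point is just that both archives get linked, since the nfmpi_xxx
symbols currently live in libnetcdf.a rather than libpnetcdf.a.
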
Regards,
John
Tyce Mclarty wrote:
> John,
>
> It looked to me like there was a problem with mixing IBM and mpich
> scripts and/or libraries with the approach described in the
> README.frost and INSTALL files that come with the distribution. I
> have not gotten a pure mpich configure to work, but I did work out a
> pure IBM one.
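>
> For concreteness, a pure-IBM configure amounts to something like this
> (the compiler and wrapper names are illustrative; the exact ones may
> differ on frost):
>
>     env CC=xlc F77=xlf MPICC=mpcc_r MPIF77=mpxlf_r \
>         ./configure --prefix=<dir>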
>
> What you found may be a problem, but I think we have something more
> basic. When I get configure to run, there is a warning about an
> "unrecognized fortran name mapping". So it sounds like the IBM compilers
> are doing something unexpected with fortran names. Not too surprising.
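>
> A quick way to see what a compiler actually does to Fortran names
> (standard tools; t.f is just a dummy file containing "subroutine foo"
> and "end"):
>
>     xlf -c t.f
>     nm t.o | grep -i foo
>
> By default xlf emits lowercase names with no trailing underscore,
> which may be exactly the mapping the configure test doesn't
> recognize.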
>
> This would explain why C works fine, but not Fortran. I can do some
> digging through the AIX documentation or catch the guy over here who
> will know right away in the AM.
>
> Tyce
>
> At 04:23 PM 7/23/2003 -0700, John Tannahill wrote:
>
>> More info on my problem. There are two libraries that get built in
>> src/lib: libpnetcdf.a and libnetcdf.a. It appears that libnetcdf.a
>> has all of the nfmpi_xxx routines in it. My guess is that this
>> libnetcdf.a library should actually be part of libpnetcdf.a; then I
>> believe I would have all the routines I need to link against. I
>> think there was a name change a while back, when libnetcdf.a got
>> renamed to libpnetcdf.a? Maybe a netcdf didn't get changed to a
>> pnetcdf somewhere where it needed to be?
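>>
>> An easy way to confirm where the symbols ended up (standard tools;
>> run from the top of the source tree):
>>
>>     nm src/lib/libpnetcdf.a | grep -i nfmpi_
>>     nm src/lib/libnetcdf.a  | grep -i nfmpi_
>>
>> The first grep comes up empty for me, while the second shows all of
>> the nfmpi_xxx entry points.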
>>
>> Regards,
>> John
>>
>> John Tannahill wrote:
>>
>>> Tyce,
>>> Actually, I am now running the parallel-netcdf code on an IBM SP at
>>> NERSC (seaborg). I don't believe that mpich is being used at all.
>>> I run on seaborg because I can easily get interactive time, which I
>>> don't think I can get on frost. Anyway, you missed out on some
>>> of my previous messages, so I will try to bring you up to date. The
>>> C interface seems to work fine. I put together a test code in C and
>>> it compiled, linked, and ran. I then converted this test code to F90.
>>> I have gotten this version to compile, but it won't link because it
>>> can't find any nfmpi_xxx routines in the library (the C code uses
>>> ncmpi_xxx routines and they are there). So the question seems to be
>>> how to build the library so that the nfmpi_xxx routines are there?
>>> Perhaps you have some experience with autoconf and can figure out
>>> what needs to be done?
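>>>
>>> The failure is easy to reproduce with any F90 code that calls an
>>> nfmpi_xxx routine; roughly (the driver and file names here are
>>> illustrative):
>>>
>>>     mpxlf90 -o testf testf.f90 -L<dir>/lib -lpnetcdf
>>>
>>> The link fails with unresolved nfmpi_xxx symbols, while the
>>> equivalent C build against the ncmpi_xxx routines goes through
>>> cleanly.
>>>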
>>> Earlier today, I contacted the one other person who apparently had
>>> some interest in trying the Fortran interface a couple of months ago
>>> (Troy Baer at the OSC), but it now appears that we may be the first
>>> real users (i.e., it's a little unclear, but I believe that Troy is
>>> still trying to carve out some time to investigate it). I am sure
>>> the ANL/Northwestern people are trying to track things down as well.
>>> It's certainly possible that it may be something quite simple that
>>> I am just missing. I did try the --with-fortran=yes flag when
>>> running configure, but that did not seem to help. I have not been
>>> able to look at things much today, but will get back into it now.
>>> Regards,
>>> John
>>>
>>> JB Gallagher wrote:
>>>
>>>> All,
>>>>
>>>> Just to let you know: I just built and installed parallel-netcdf-0.8.4,
>>>> built the benchmark code against it, and it ran fine, so for me anyway
>>>> the tarball from the MCS website works the same way. Again, I only
>>>> tested the C interface; I don't use any Fortran for the I/O. When I
>>>> first installed netcdf on frost, I had some mixing of the libraries
>>>> that caused errors when my code ran. I didn't save the output of
>>>> those errors, but they were a result of having mpich in the mix.
>>>>
>>>> So I was able to build the pnetcdf code off of the MCS website the
>>>> same way I said before: I set all the compiler environment variables
>>>> to the normal compilers (xlc, etc.) and then ran configure like so:
>>>>
>>>>     ./configure --prefix=<dir> --with-mpi=/usr/local/tools/mpich/mpich-1.2.5mpl
>>>>
>>>> Then I edited macros.make and replaced all the compiler variables
>>>> (especially the mpich ones) with the newmp ones (xlc, newmpcc, etc.).
>>>>
>>>> This creates a library that is linked against the AIX MPI implementation.
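>>>>
>>>> In other words, roughly (the macros.make variable names are from
>>>> memory and may not match exactly):
>>>>
>>>>     env CC=xlc ./configure --prefix=<dir> \
>>>>         --with-mpi=/usr/local/tools/mpich/mpich-1.2.5mpl
>>>>     # in macros.make, swap the mpich compilers for the newmp
>>>>     # wrappers (e.g. mpicc -> newmpcc), then:
>>>>     make && make install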
>>>>
>>>> --Brad
>>>>
>>>>
>>>> Rob Ross wrote:
>>>>
>>>>
>>>>> There's no reason that I know of why it wouldn't work with IBM's MPI;
>>>>> Brad, do you have any input on this?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Rob
>>>>>
>>>>> On Wed, 23 Jul 2003, Tyce Mclarty wrote:
>>>>>
>>>>>
>>>>>> Rob,
>>>>>>
>>>>>> I'm trying to help John Tannahill get PnetCDF going here with his
>>>>>> code. Looking at the README.frost in the latest release, it looks
>>>>>> like he may be mixing mpich and IBM libraries.
>>>>>>
>>>>>> When you ran tests on frost, did you use mpich?
>>>>>>
>>>>>> Is there any reason it should not work with IBM's MPI?
>>>>>>
>>>>>> Thanks,
>>>>>> Tyce
--
============================
John R. Tannahill
Lawrence Livermore Nat. Lab.
P.O. Box 808, M/S L-103
Livermore, CA 94551
925-423-3514
Fax: 925-423-4908
============================