[mpich-discuss] A runtime warning message with version 1.4.1p1

CB cbalways at gmail.com
Tue Dec 20 11:03:10 CST 2011


It's resolved.

It turned out that an older version of libxml2 was installed on the compute
nodes, even though the same libxml2 was installed on the node where I
launched mpirun.

After installing the same libxml2 library on the compute nodes, the warning
went away.
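
For anyone else who hits this, here is a quick way to confirm the mismatch
(just a sketch; it assumes rpm and readelf are available on the nodes and
reuses the hostsfile from the run quoted below):

# report the packaged libxml2 on every node, using the same launcher
$ mpirun -np 2 -machinefile ./hostsfile rpm -q libxml2

# show which libxml2 version symbols hydra_pmi_proxy was linked against
$ readelf -V /usr/local/MPI/mpich2/bin/hydra_pmi_proxy | grep -A 2 libxml2

# check whether the libxml2 on a compute node defines any version symbols
$ readelf -V /usr/lib64/libxml2.so.2 | grep 'Version definition'

The "no version information available" warning is printed by the dynamic
linker when the last check comes up empty, i.e. the library found at run
time lacks the version definitions the proxy was built against.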

Thanks,
- Chansup

On Tue, Dec 20, 2011 at 11:39 AM, CB <cbalways at gmail.com> wrote:

> I built the MPICH2 package on a VM running Fedora Core 11 and installed it
> on a cluster of Dell servers running the same OS.
>
> Both the VM and the Dell servers have the same version of the libxml2 library.
>
> Do you need any other information?
>
> Thanks,
> - Chansup
>
>
> On Tue, Dec 20, 2011 at 11:21 AM, Darius Buntinas <buntinas at mcs.anl.gov>wrote:
>
>> I'm not sure.  There might be something wrong with your libxml2 library.
>>  What kind of machine are you using?
>>
>> -d
>>
>>
>> On Dec 20, 2011, at 9:58 AM, CB wrote:
>>
>> >
>> > When I run an example hello program on a node with two MPI processes,
>> it generates warnings about missing version information, although the
>> program runs successfully.
>> >
>> > I am wondering how to suppress these warning messages.
>> >
>> > $ mpirun -np 2  -machinefile ./hostsfile ./hello-mpich2-c.exe
>> >
>> > /usr/local/MPI/mpich2/bin/hydra_pmi_proxy: /usr/lib64/libxml2.so.2: no
>> version information available (required by
>> /usr/local/MPI/mpich2/bin/hydra_pmi_proxy)
>> > /usr/local/MPI/mpich2/bin/hydra_pmi_proxy: /usr/lib64/libxml2.so.2: no
>> version information available (required by
>> /usr/local/MPI/mpich2/bin/hydra_pmi_proxy)
>> > /usr/local/MPI/mpich2/bin/hydra_pmi_proxy: /usr/lib64/libxml2.so.2: no
>> version information available (required by
>> /usr/local/MPI/mpich2/bin/hydra_pmi_proxy)
>> > /usr/local/MPI/mpich2/bin/hydra_pmi_proxy: /usr/lib64/libxml2.so.2: no
>> version information available (required by
>> /usr/local/MPI/mpich2/bin/hydra_pmi_proxy)
>> > Processor 0 on compute-0-0 out of 2
>> > Processor 1 on compute-0-2 out of 2
>> >
>> >
>> >
>> > Here is my build information:
>> >
>> > $ mpirun -info
>> > HYDRA build details:
>> >     Version:                                 1.4.1p1
>> >     Release Date:                            Thu Sep  1 13:53:02 CDT 2011
>> >     CC:                              gcc
>> >     CXX:                             c++
>> >     F77:                             gfortran
>> >     F90:                             f95
>> >     Configure options:
>> '--prefix=/usr/local/MPI/mpich2-1.4.1p1' '--enable-shared'
>> '--enable-checking=release' '--disable-option-checking' 'CC=gcc' 'CFLAGS=
>> -O2' 'LDFLAGS= ' 'LIBS=-lrt -lpthread ' 'CPPFLAGS=
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpl/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpl/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/openpa/src
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/openpa/src
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/common/datatype
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/common/datatype
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/common/locks
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/common/locks
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/include
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/util/wrappers
>> -I/home/x/Documents/MPICH2/mpich2-1.4.1p1/src/util/wrappers'
>> >     Process Manager:                         pmi
>> >     Launchers available:                      ssh rsh fork slurm ll lsf sge manual persist
>> >     Topology libraries available:              hwloc plpa
>> >     Resource management kernels available:    user slurm ll lsf sge pbs
>> >     Checkpointing libraries available:
>> >     Demux engines available:                  poll select
>> >
>> >
>> > Thanks,
>> > - Chansup
>> >
>> >
>>
>>
>
>

