[mpich-discuss] Regarding MPICH2-1.1.1p1 testing basing on open-mx
Guillaume Mercier
mercierg at mcs.anl.gov
Mon Mar 22 05:49:45 CDT 2010
I tried the MX module with the IMB test suite and it passed the tests
fine (on two nodes).
I'll try to replace the MX library with Open-MX just to see what happens.
Guillaume
Dave Goodell a écrit :
> From the names they do look like MPI one-sided tests, but I don't have
> a copy of the IMB benchmarks handy. As I recall the ch3:nemesis:mx
> netmod does support the MPI one-sided APIs, although it's probably
> implemented in terms of two-sided operations under the hood.
> Guillaume would know for sure.
>
> -Dave
>
> On Mar 19, 2010, at 1:08 PM, Scott Atchley wrote:
>
>> Dave,
>>
>> Are these one-sided tests? Does Nemesis/MX support one-sided?
>>
>> Scott
>>
>> On Mar 19, 2010, at 1:59 PM, Brice Goglin wrote:
>>
>>> Some bugs were reported in the past about some MPICH2 tests not
>>> working,
>>> but we never reproduced them with recent MPICH2 and Open-MX versions.
>>> I'd like to know what kind of interfaces, hosts and kernels were used
>>> here. And also how many processes per node were used.
>>>
>>> Brice
>>>
>>>
>>>
>>> Dave Goodell wrote:
>>>> I don't think that we have tested Open-MX with the mx netmod, so I'm
>>>> not sure if there are any bugs there. I've CCed the primary
>>>> developers of both Open-MX and our mx netmod in case they have any
>>>> information on this.
>>>>
>>>> Do simpler tests work? The "examples/cpi" program in your MPICH2
>>>> build directory is a good simple sanity test.
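[A sanity check like the one Dave describes might look like this; the build-tree path is an assumption, and cpi is built by default in an MPICH2 build directory:]

```shell
# Hypothetical path -- adjust to wherever you built MPICH2.
cd /path/to/mpich2-build

# cpi computes pi across ranks; a clean run means basic point-to-point
# communication over the netmod works before any one-sided tests run.
mpiexec -n 2 ./examples/cpi
```

If cpi itself hangs or aborts, the problem is in the basic transport, not in the one-sided benchmarks.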
>>>>
>>>> -Dave
>>>>
>>>> On Mar 19, 2010, at 3:31 AM, 李俊丽 wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> Just do:
>>>>> ./configure --with-device=ch3:nemesis:mx
>>>>> --with-mx-lib=/opt/open-mx/lib/
>>>>> --with-mx-include=/opt/open-mx/include/
>>>>>
>>>>> make
>>>>>
>>>>> make install
>>>>>
>>>>> Then I start the open-mx service and test MPICH2 over Open-MX.
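[Collected in one place, the steps above are roughly as follows; the omx_init script location and the install prefix are assumptions and may differ between Open-MX versions:]

```shell
# Build MPICH2 against Open-MX's MX-compatible library (paths as reported).
./configure --with-device=ch3:nemesis:mx \
            --with-mx-lib=/opt/open-mx/lib/ \
            --with-mx-include=/opt/open-mx/include/
make
make install

# Load the open-mx kernel driver / start the service on every node
# (script name and path assumed -- check your Open-MX installation).
/opt/open-mx/sbin/omx_init start

# Run the IMB one-sided benchmark that triggers the failure.
mpiexec -n 4 /usr/lib64/mpich2/bin/mpitests-IMB-EXT Unidir_Get
```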
>>>>>
>>>>>
>>>>>
>>>>> [root at cu02 ~]# mpiexec -n 4 /usr/lib64/mpich2/bin/mpitests-IMB-EXT
>>>>> Unidir_Get
>>>>>
>>>>>
>>>>>
>>>>> It fails with this error message:
>>>>>
>>>>> rank 0 in job 8 cu02.hpc.com_54277 caused collective abort of all ranks
>>>>> exit status of rank 0: killed by signal 9
>>>>>
>>>>> The same error occurs with "mpiexec -n 4
>>>>> /usr/lib64/mpich2/bin/mpitests-IMB-EXT Bidir_Get".
>>>>>
>>>>> Is there any way to solve this problem?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Regards
>>>>>
>>>>> Lily
>>>>> _______________________________________________
>>>>> mpich-discuss mailing list
>>>>> mpich-discuss at mcs.anl.gov
>>>>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>>>
>>>
>>>
>>
>