[mpich-discuss] cpu binding with 1.5 hydra
Jain, Rohit
Rohit_Jain at mentor.com
Mon Oct 29 19:13:34 CDT 2012
Thanks Pavan.
Do you mean the '-binding' option is also being deprecated? Is it replaced by '-bind-to'?
Regards,
Rohit
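[Editor's note: the old-to-new option mapping discussed in this thread can be sketched as a small shell helper. This is hypothetical: only the cpu:sockets -> socket equivalence is confirmed by Pavan below; the other entries are assumptions based on the 1.5 option names listed later in the thread, and `./a.out` is a placeholder binary.]

```shell
#!/bin/sh
# Hypothetical helper: map a 1.4.1 "-binding" value to its assumed
# 1.5 "-bind-to" equivalent. Only cpu:sockets -> socket is confirmed
# in this thread; the remaining entries are assumptions.
translate_binding() {
  case "$1" in
    cpu:sockets) echo "socket" ;;
    cpu:cores)   echo "core" ;;
    cache:l1)    echo "l1" ;;
    cache:l2)    echo "l2" ;;
    cache:l3)    echo "l3" ;;
    *)           echo "unrecognized binding: $1" >&2; return 1 ;;
  esac
}

# Usage sketch (placeholder binary):
#   mpiexec -bind-to "$(translate_binding cpu:sockets)" -n 4 ./a.out
```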
-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Pavan Balaji
Sent: Monday, October 29, 2012 4:18 PM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] cpu binding with 1.5 hydra
Or "-binding socket", though -binding is being deprecated.
-- Pavan
On 10/29/2012 06:16 PM, Pavan Balaji wrote:
>
> Yes, the binding options have changed in 1.5 to make them richer.
> You can still get the same functionality using "-bind-to socket", though.
>
> -- Pavan
>
> On 10/29/2012 04:42 PM, Jain, Rohit wrote:
>> I see that all of the 1.4.1 binding options have changed. Please confirm, though.
>>
>> CPU based options:
>>
>> sockets -- allocate processes to one socket at a time (allocating all processing units on the socket to each process)
>> cores -- allocate processes to one core at a time (allocating all processing units on the core to each process)
>> rr:sockets -- allocate processes to one socket at a time (rotating between the processing units on each socket)
>> rr:cores -- allocate processes to one core at a time (rotating between the processing units on each core)
>>
>> Cache based options:
>>
>> l1 -- allocate processes to one L1 cache domain at a time (allocating all processing units on the cache domain to each process)
>> l2 -- allocate processes to one L2 cache domain at a time (allocating all processing units on the cache domain to each process)
>> l3 -- allocate processes to one L3 cache domain at a time (allocating all processing units on the cache domain to each process)
>> rr:l1 -- allocate processes to one L1 cache domain at a time (rotating between the processing units on each cache domain)
>> rr:l2 -- allocate processes to one L2 cache domain at a time (rotating between the processing units on each cache domain)
>> rr:l3 -- allocate processes to one L3 cache domain at a time (rotating between the processing units on each cache domain)
>>
>> Regards,
>> Rohit
>>
>>
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Jain, Rohit
>> Sent: Monday, October 29, 2012 2:25 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: [mpich-discuss] cpu binding with 1.5 hydra
>>
>> Are the cpu:sockets and cpu:cores bindings deprecated in the 1.5
>> release? They used to work with 1.4.1.
>>
>> mpiexec -binding cpu:sockets -n 4 ls
>> [proxy:0:0 at gretel] handle_bitmap_binding (./tools/topo/hwloc/topo_hwloc.c:385): unrecognized binding string "cpu:sockets"
>> [proxy:0:0 at gretel] HYDT_topo_hwloc_init (./tools/topo/hwloc/topo_hwloc.c:527): error binding with bind "cpu:sockets" and map "(null)"
>> [proxy:0:0 at gretel] HYDT_topo_init (./tools/topo/topo.c:60): unable to initialize hwloc
>> [proxy:0:0 at gretel] launch_procs (./pm/pmiserv/pmip_cb.c:516): unable to initialize process topology
>> [proxy:0:0 at gretel] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:890): launch_procs returned error
>> [proxy:0:0 at gretel] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
>> [proxy:0:0 at gretel] main (./pm/pmiserv/pmip.c:210): demux engine error waiting for event
>> [mpiexec at gretel] control_cb (./pm/pmiserv/pmiserv_cb.c:201): assert (!closed) failed
>> [mpiexec at gretel] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
>> [mpiexec at gretel] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:196): error waiting for event
>> [mpiexec at gretel] main (./ui/mpich/mpiexec.c:325): process manager error waiting for completion
>>
>>
>> mpiexec -binding cpu:cores -n 4 ls
>> [proxy:0:0 at gretel] handle_bitmap_binding (./tools/topo/hwloc/topo_hwloc.c:385): unrecognized binding string "cpu:cores"
>> [proxy:0:0 at gretel] HYDT_topo_hwloc_init (./tools/topo/hwloc/topo_hwloc.c:527): error binding with bind "cpu:cores" and map "(null)"
>> [proxy:0:0 at gretel] HYDT_topo_init (./tools/topo/topo.c:60): unable to initialize hwloc
>> [proxy:0:0 at gretel] launch_procs (./pm/pmiserv/pmip_cb.c:516): unable to initialize process topology
>> [proxy:0:0 at gretel] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:890): launch_procs returned error
>> [proxy:0:0 at gretel] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
>> [proxy:0:0 at gretel] main (./pm/pmiserv/pmip.c:210): demux engine error waiting for event
>> [mpiexec at gretel] control_cb (./pm/pmiserv/pmiserv_cb.c:201): assert (!closed) failed
>> [mpiexec at gretel] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
>> [mpiexec at gretel] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:196): error waiting for event
>> [mpiexec at gretel] main (./ui/mpich/mpiexec.c:325): process manager error waiting for completion
>>
>> Regards,
>> Rohit
>> _______________________________________________
>> mpich-discuss mailing list mpich-discuss at mcs.anl.gov
>> To manage subscription options or unsubscribe:
>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji