[mpich-discuss] Mapping process to specific node

Fernando Luz fernando_luz at tpn.usp.br
Mon Feb 6 10:57:34 CST 2012


Hi Pavan, 

I will do that now.

Thanks :-)

Fernando Luz

On Sun, 2012-02-05 at 14:19 -0600, Pavan Balaji wrote:

> This might be a bug in the Hydra code.  I looked through the code.  I 
> didn't specifically try the example reported, but I think I know what's 
> going wrong here.
> 
> Fernando: can you please create a ticket for this?
> 
> https://trac.mcs.anl.gov/projects/mpich2/newticket
> 
> Thanks,
> 
>   -- Pavan
> 
> On 02/05/2012 02:15 PM, Rajeev Thakur wrote:
> > Not sure what the problem is. You may want to check with the SLURM folks. Does it work without the -f host.txt?
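> >
> > (For concreteness, that check would just be the same run with the host file dropped, inside the same salloc allocation:
> >
> > $ mpiexec -n 10 ./hostname_test
> >
> > so that Hydra simply uses whatever nodes SLURM allocated.)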
> >
> > Rajeev
> >
> > On Feb 3, 2012, at 2:53 PM, Fernando Luz wrote:
> >
> >> Hi again,
> >>
> >> One more piece of information: I'm using SLURM, and if I execute without reserving the nodes, it works.
> >>
> >> With SLURM, I first made the salloc reservation.
> >>
> >> $ salloc -N 3 --exclusive
> >>
> >> And I receive 3 nodes with 8 cores each.
> >>
> >> $ echo $SLURM_NODELIST
> >> machine01
> >> machine02
> >> machine03
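> >>
> >> (For reference, SLURM often reports the node list in a compressed form such as machine[01-03]. A minimal sketch, assuming scontrol is available inside the allocation, for expanding it into the one-host-per-line file that -f expects:
> >>
> >> $ scontrol show hostnames $SLURM_NODELIST > host.txt
> >> $ cat host.txt
> >> machine01
> >> machine02
> >> machine03
> >>
> >> )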
> >>
> >> and if I run mpiexec
> >>
> >> $ mpiexec -n 10 -f host.txt ./hostname_test
> >> srun: error: Only allocated 3 nodes asked for 6
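> >>
> >> (Since Hydra is detecting the SLURM allocation and launching through srun, one hedged workaround to try, assuming this Hydra build supports the -launcher option and that ssh to the allocated nodes works, is to force a launcher other than srun:
> >>
> >> $ mpiexec -launcher ssh -n 10 -f host.txt ./hostname_test
> >>
> >> That keeps the host file but bypasses srun; it is a sketch to test, not a confirmed fix.)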
> >>
> >> Regards
> >>
> >> Fernando Luz
> >>
> >> On Fri, 2012-02-03 at 18:31 -0200, Fernando Luz wrote:
> >>> Hello,
> >>>
> >>> I created the following program:
> >>>
> >>> =========================================================
> >>> #include "mpi.h"
> >>> #include <cstdlib>
> >>> #include <iostream>
> >>> #include <iomanip>
> >>> #include <ctime>
> >>>
> >>> int main ( int argc, char *argv[] )
> >>> {
> >>>    int rank;
> >>>    int size;
> >>>    char hostname[255];
> >>>    int size_hostname;
> >>>    double wtime;
> >>>
> >>>    // MPI C++ bindings: initialize, then query size, rank and host name
> >>>    MPI::Init ( argc, argv );
> >>>    size = MPI::COMM_WORLD.Get_size();
> >>>    rank = MPI::COMM_WORLD.Get_rank();
> >>>    MPI::Get_processor_name(hostname, size_hostname);
> >>>
> >>>    if ( rank == 0 ){
> >>>      std::cout << "  print size = " << size << std::endl;
> >>>    }
> >>>
> >>>    // each rank reports which node it ended up on
> >>>    std::cout << "I am rank=" << rank << " and my hostname=" << hostname << std::endl;
> >>>
> >>>    MPI::Finalize();
> >>>
> >>>    return 0;
> >>> }
> >>> =========================================================
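> >>>
> >>> (The MPI:: C++ bindings used above were deprecated in MPI-2.2; a minimal equivalent sketch using the plain C API, still built as C++, would look like this:
> >>>
> >>> =========================================================
> >>> #include <mpi.h>
> >>> #include <iostream>
> >>>
> >>> int main(int argc, char *argv[])
> >>> {
> >>>   int rank, size, len;
> >>>   char hostname[MPI_MAX_PROCESSOR_NAME];
> >>>
> >>>   MPI_Init(&argc, &argv);                 // start up MPI
> >>>   MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of ranks
> >>>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process' rank
> >>>   MPI_Get_processor_name(hostname, &len); // node this rank landed on
> >>>
> >>>   if (rank == 0)
> >>>     std::cout << "  print size = " << size << std::endl;
> >>>
> >>>   std::cout << "I am rank=" << rank << " and my hostname=" << hostname << std::endl;
> >>>
> >>>   MPI_Finalize();
> >>>   return 0;
> >>> }
> >>> =========================================================
> >>> )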
> >>>
> >>> In my test, I planned to run this program using this command line:
> >>>
> >>> $ mpiexec -n 10 -f host.txt ./hostname_test
> >>>
> >>> and this host.txt file:
> >>> ========================================================
> >>> machine01  # rank 0 runs on machine01
> >>> machine02  # rank 1 runs on machine02
> >>> machine03  # rank 2 runs on machine03
> >>> machine03  # rank 3 runs on machine03
> >>> machine02  # rank 4 runs on machine02
> >>> machine03  # rank 5 runs on machine03
> >>> machine01  # rank 6 runs on machine01
> >>> machine01  # rank 7 runs on machine01
> >>> machine03  # rank 8 runs on machine03
> >>> machine01  # rank 9 runs on machine01
> >>> ========================================================
> >>>
> >>> But I didn't have success.
> >>>
> >>> Is it possible to assign a process to run on a specific node?
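> >>>
> >>> (One related option, a hedged sketch based on Hydra's documented host file format rather than a confirmed answer: Hydra also accepts a hostname:count form, which pins a fixed number of processes to each host, e.g.
> >>>
> >>> machine01:4
> >>> machine02:2
> >>> machine03:4
> >>>
> >>> though that gives a block of consecutive ranks per host rather than the exact rank-by-rank interleaving in host.txt above.)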
> >>>
> >>> Regards
> >>>
> >>> Fernando Luz
> >>
> >
> 



