[mpich-discuss] thread MPI calls

Pavan Balaji balaji at mcs.anl.gov
Fri Jul 24 20:00:05 CDT 2009


If only the master thread is making MPI calls, you should run it in 
MPI_THREAD_FUNNELED mode.

MPI_THREAD_MULTIPLE is the most general option, where all threads can 
make MPI calls -- it can handle all kinds of threaded applications. But 
this generality comes with a performance penalty. In your case, since 
only one thread is making MPI calls, you don't have to pay that 
penalty.
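For reference, a minimal sketch of requesting the funneled threading level at startup; the check on `provided` and the error handling are illustrative additions, not something from the original discussion:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask only for FUNNELED: other threads may exist, but only the
     * thread that called MPI_Init_thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    /* The implementation may grant less than requested; check it. */
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI_THREAD_FUNNELED not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... application: spawn worker threads here, but funnel every
     * MPI call through this (main) thread ... */

    MPI_Finalize();
    return 0;
}
```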

  -- Pavan

On 07/24/2009 07:49 PM, chong tan wrote:
> 
> sorry, I over-cleaned my inbox; this is the only message I have left 
> to continue this thread.
>  
> After cleaning up some overlooked situations and making sure that we 
> are running with MPI_Init_thread and MPI_THREAD_MULTIPLE, we reran a 
> few tests.  We are seeing up to a 10% performance drop compared to the 
> unthreaded code.  We used affinity; the main thread and the spawned 
> thread are running on different cores of the same physical CPU.
>  
> We use POSIX threads to create our thread; is that bad for MPI?
>  
> ta
> 
>  
> ------------------------------------------------------------------------
> *From:* chong tan <chong_guan_tan at yahoo.com>
> *To:* mpich-discuss at mcs.anl.gov
> *Sent:* Thursday, July 23, 2009 3:40:51 PM
> *Subject:* [mpich-discuss] thread MPI calls
> 
> An x86 box running Linux.  MPICH2 1.1 configured with
>  
>  --disable-f77 --disable-f90 --with-device=ch3:nemesis  --enable-fast
>  
> machine: 16 cores, loads of memory.  Running with mpiexec -n 4.
> The master process (MPICH rank 0) has 2 threads, main and recv; the 
> locks are constructed using pthreads.   The master's
> recv function waits for the recv thread to be done with all receiving, 
> then processes the data, and re-enables the recv thread,
> like this:
>  
>              main                                recv thread
> 
>              recvFunc() {                        wait for 'run'
>                wait for read-done                for n processes:
>                apply data                          call MPI_Irecv
>                signal run                        MPI_Waitall
>              }                                   signal read-done
>         
> recvFunc is repeatedly called during the life of my applications.  (In a 
> few tests I have, it is called 100+ billion times)
>  
> Compared to the same application with no threads and using Irecv or 
> Recv, the threaded version can run 20+% slower.  From the process 
> monitoring, it looks like the MPI_Irecv calls may be threaded.  This 
> performance degradation is in line with 2.0.6 configured using 
> --enable-threads=funneled.  The application ran for many hours, so 
> 20% is significant.
>  
> Questions :
> -  Is MPICH2 1.1 self-sensing?  That is, can it detect whether it 
> has to use thread-multiple?
> -  Are MPI_Irecv and MPI_Waitall also threaded?
> -  Has anyone experimented with this before?
>  
> thanks
> tan
>  
> ------------------------------------------------------------------------
> *From:* Rajeev Thakur <thakur at mcs.anl.gov>
> *To:* mpich-discuss at mcs.anl.gov
> *Sent:* Friday, June 26, 2009 12:12:18 PM
> *Subject:* Re: [mpich-discuss] does 1.1 support real threading of MPI call ?
> 
> Yes, it supports MPI_THREAD_MULTIPLE as defined by the MPI standard.
>  
> Rajeev
> 
>     ------------------------------------------------------------------------
>     *From:* mpich-discuss-bounces at mcs.anl.gov
>     [mailto:mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *chong tan
>     *Sent:* Friday, June 26, 2009 2:04 PM
>     *To:* mpich-discuss at mcs.anl.gov
>     *Subject:* [mpich-discuss] does 1.1 support real threading of MPI call ?
> 
>     Does anyone know if the 1.1 release supports real threaded MPI
>     calls.  That is, a process, say 1, may contain n threads, each
>     calling MPI_Send/Recv to other processes, while all still
>     appearing as process 1?
>      
>     I am not looking for the funneled solution; that does not help me.
>      
>     thanks
>     tan

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

