[petsc-dev] always calling MPI_Init_thread?

Barry Smith bsmith at mcs.anl.gov
Thu Feb 14 13:25:57 CST 2013



   My concern was that a bad implementation might always do some terrible thing with locks (even if only one thread would ever exist), thus hurting performance.

   Is there no way to just tell MPI that there ARE and WILL BE no threads?
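
   That is, could we request MPI_THREAD_SINGLE instead of MPI_THREAD_FUNNELED? A minimal sketch of a plain MPI program making that promise and checking what the implementation reports back (the printed message is only illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc,char **argv)
{
  int provided;
  /* Requesting MPI_THREAD_SINGLE promises the implementation that no additional threads will ever exist */
  MPI_Init_thread(&argc,&argv,MPI_THREAD_SINGLE,&provided);
  if (provided != MPI_THREAD_SINGLE) printf("Implementation reported thread level %d instead\n",provided);
  MPI_Finalize();
  return 0;
}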

   Barry

On Feb 14, 2013, at 1:22 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Thu, Feb 14, 2013 at 1:16 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
> 
>    Do we really ALWAYS want to call this version even if we are not monkeying with threads at ALL?
> 
>     Thanks
> 
>    Barry
> 
>   ierr = MPI_Initialized(&flag);CHKERRQ(ierr);
>   if (!flag) {
>     if (PETSC_COMM_WORLD != MPI_COMM_NULL) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"You cannot set PETSC_COMM_WORLD if you have not initialized MPI first");
> #if defined(PETSC_HAVE_MPI_INIT_THREAD)
>     {
>       PetscMPIInt provided;
>       ierr = MPI_Init_thread(argc,args,MPI_THREAD_FUNNELED,&provided);CHKERRQ(ierr);
>     }
> #else
>     ierr = MPI_Init(argc,args);CHKERRQ(ierr);
> #endif
> 
> The implementation of MPI_Init just probes around the environment and calls MPI_Init_thread. There is no performance advantage of MPI_THREAD_SINGLE over MPI_THREAD_FUNNELED, so I think the code here is fine.
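> 
> For what it is worth, MPI_Query_thread reports the level that was actually granted, so this is easy to confirm after startup. A minimal sketch, assuming the usual PetscInitialize/PetscFinalize wrapping (the printed message is only illustrative):
> 
> #include <petscsys.h>
> 
> int main(int argc,char **argv)
> {
>   PetscErrorCode ierr;
>   PetscMPIInt    provided;
> 
>   ierr = PetscInitialize(&argc,&argv,NULL,NULL);CHKERRQ(ierr);
>   /* ask MPI which thread-support level was granted at initialization */
>   ierr = MPI_Query_thread(&provided);CHKERRQ(ierr);
>   ierr = PetscPrintf(PETSC_COMM_WORLD,"MPI provided thread level %d (MPI_THREAD_FUNNELED is %d)\n",(int)provided,(int)MPI_THREAD_FUNNELED);CHKERRQ(ierr);
>   ierr = PetscFinalize();
>   return ierr;
> }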



