[petsc-dev] always calling MPI_Init_thread?
Jed Brown
jedbrown at mcs.anl.gov
Thu Feb 14 13:36:17 CST 2013
On Thu, Feb 14, 2013 at 1:25 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>
> My concern was that a bad implementation might always do something terrible
> with locks (even if only one thread would ever exist), hurting
> performance.
>
> Is there no way to just tell MPI there ARE and WILL be no threads?
>
Yeah, MPI_THREAD_SINGLE, but there is nothing to lock when using
MPI_THREAD_FUNNELED. I figure that if the user intends to make MPI calls from
threads, they should initialize MPI themselves before PetscInitialize().
>
> Barry
>
> On Feb 14, 2013, at 1:22 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
> > On Thu, Feb 14, 2013 at 1:16 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> >
> > Do we really ALWAYS want to call this version even if we are not
> monkeying with threads at ALL?
> >
> > Thanks
> >
> > Barry
> >
> >   ierr = MPI_Initialized(&flag);CHKERRQ(ierr);
> >   if (!flag) {
> >     if (PETSC_COMM_WORLD != MPI_COMM_NULL) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"You cannot set PETSC_COMM_WORLD if you have not initialized MPI first");
> > #if defined(PETSC_HAVE_MPI_INIT_THREAD)
> >     {
> >       PetscMPIInt provided;
> >       ierr = MPI_Init_thread(argc,args,MPI_THREAD_FUNNELED,&provided);CHKERRQ(ierr);
> >     }
> > #else
> >     ierr = MPI_Init(argc,args);CHKERRQ(ierr);
> > #endif
> >
> > The implementation of MPI_Init just probes around the environment and
> > calls MPI_Init_thread. There is no performance advantage of
> > MPI_THREAD_SINGLE over MPI_THREAD_FUNNELED, so I think the code here is fine.
>
>