[petsc-users] PETSc and AMPI

Barry Smith bsmith at mcs.anl.gov
Sun Feb 1 21:24:27 CST 2015


> On Feb 1, 2015, at 7:58 PM, Jed Brown <jed at jedbrown.org> wrote:
> 
> Barry Smith <bsmith at mcs.anl.gov> writes:
>> Porting to AMPI
>> ---------------
>> Global and static variables are unusable in virtualized AMPI programs, because
>> a separate copy would be needed for each VP. Therefore, to run with more than
>> 1 VP per processor, all globals and statics must be modified to use local 
>> storage.
> 
> This is more than is needed for thread safety, but removing all
> variables with static linkage is one reliable way to ensure thread
> safety.
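
   For concreteness, a minimal sketch of the failure mode described above (hypothetical user code, not PETSc). Under plain MPI each rank is an OS process and gets its own copy of the static; under virtualized AMPI several VPs share one process and race on the single copy:

#include <mpi.h>
#include <stdio.h>

static int count = 0;   /* one copy per OS process, NOT per VP */

int main(int argc, char **argv)
{
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  count++;              /* unsynchronized read-modify-write across VPs */
  printf("VP %d sees count = %d\n", rank, count);
  MPI_Finalize();
  return 0;
}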

   For our current support for --with-threadsafety, all "global" variables are created, initialized, and destroyed only within PetscInitialize()/PetscFinalize(), so the user is free to use threads between PetscInitialize() and PetscFinalize(). And, of course, profiling is turned off.
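
   As a minimal sketch of that usage pattern (assuming a build configured --with-threadsafety), each thread creates and destroys only its own objects on PETSC_COMM_SELF between the two calls:

#include <petscvec.h>
#include <pthread.h>

void *worker(void *arg)
{
  Vec         x;
  PetscScalar sum;

  /* each thread uses only its own private objects */
  VecCreateSeq(PETSC_COMM_SELF, 100, &x);
  VecSet(x, 1.0);
  VecSum(x, &sum);
  VecDestroy(&x);
  return NULL;
}

int main(int argc, char **argv)
{
  pthread_t t[4];

  PetscInitialize(&argc, &argv, NULL, NULL);  /* all "global" setup happens here */
  for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
  for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
  PetscFinalize();                            /* all "global" teardown happens here */
  return 0;
}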

   We could possibly "cheat" with AMPI: have PetscInitialize()/PetscFinalize() run through most of their code only on thread 0 (assuming we have a way of determining thread 0) to create the "global" data structures (that is, the registration of objects, classids, etc.), and have all threads call only MPI_Init() and whatever else must be called by every one of them. It may not be too ugly, but it is yet another "special case" adding complexity to the code base.
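
   A rough sketch of that "cheat" (the thread-rank query and the registration routine below are hypothetical placeholders, not existing API):

#include <petscsys.h>

extern int  AMPI_Thread_rank_hypothetical(void);        /* hypothetical: identify a VP within its OS process */
extern void PetscRegisterAllClasses_hypothetical(void); /* hypothetical: stand-in for the classid/object registration */

PetscErrorCode PetscInitializeAMPISketch(int *argc, char ***args)
{
  MPI_Init(argc, args);                    /* every VP must call this */
  if (AMPI_Thread_rank_hypothetical() == 0) {
    /* one VP per process creates the shared "global" state */
    PetscRegisterAllClasses_hypothetical();
  }
  /* a barrier among the VPs in this process would be needed here
     before any of them touches the shared state */
  return 0;
}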

   I'd be happy to see a branch attempting this.

  Barry



>  This would require a library context of some sort, which
> unfortunately doesn't interact well with profiling and debugging and
> makes it more difficult to load plugins (the abstraction leaks because
> dlopen has global effects, as does IO).  I don't know if we've thought
> carefully about the usability cost of eradicating all variables with
> static linkage.


