[petsc-dev] BuildSystem fixes needed

Kai Germaschewski kai.germaschewski at unh.edu
Tue Nov 1 17:01:10 CDT 2011


On Mon, Oct 31, 2011 at 11:05 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Mon, Oct 31, 2011 at 21:02, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>>  A-MPI "replaces" each single Unix process with a collection of threads
>> (each of which behaves like a process with regard to MPI). Hence the user's
>> entry point cannot be called main(), because main() is called once by the OS
>> when it starts up the Unix process. Thus the A-MPI mpi.h changes the user's
>> main() routine to a different name, and the A-MPI startup system calls this
>> new routine for each of these threads/pseudo-MPI processes.
>
>
> Okay, does signal handling still work alright?
>

I'm quite sure it does not -- in fact, last I looked, any non-automatic
variables (global or static) shared the same memory between MPI
"processes". It's basically turning MPI processes into shared-memory
threads (with private stacks, of course), which can have lots of
unintended side effects. You have to avoid any such variables and
privatize them by having each thread allocate dynamic memory and use
that instead. That's easy enough to do for a small Fortran program with
a few common blocks, but doing it for PETSc sounds like a major project
to me.

I haven't looked at AMPI in a while, though, maybe things changed.

--Kai


-- 
Kai Germaschewski
Assistant Professor, Dept of Physics / Space Science Center
University of New Hampshire, Durham, NH 03824
office: Morse Hall 245E
phone:  +1-603-862-2912
fax: +1-603-862-2771

