NetBSD OpenMPI - SGE - PETSc - PISM
Kevin.Buckley at ecs.vuw.ac.nz
Thu Dec 17 16:55:03 CST 2009
A whole swathe of people have been made aware of the issues
that have arisen as a result of a researcher here looking to
run PISM, which sits on top of PETSc, which sits on top of
OpenMPI.
I am happy to be able to inform you that the problems we were
seeing appear to have been arising down at the OpenMPI level.
If I remove any acknowledgement of IPv6 from within the OpenMPI
code, then both the PETSc examples and the PISM application run
on my initial 8-processor parallel environment when submitted as
a Sun Grid Engine job.
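
In case it helps anyone reproduce the workaround: "removing any
acknowledgement of IPv6" amounts to having the interface discovery
pass keep only AF_INET entries. What follows is a minimal
stand-alone sketch of that idea against getifaddrs(), not the
actual edit made to the OpenMPI sources:

  /*
   * Sketch only, not the OpenMPI patch: interface discovery that
   * ignores IPv6 entirely by keeping only AF_INET entries.
   */
  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <ifaddrs.h>

  int main(void)
  {
      struct ifaddrs *ifap, *ifa;

      if (getifaddrs(&ifap) != 0) {
          perror("getifaddrs");
          return 1;
      }
      for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
          if (ifa->ifa_addr == NULL)
              continue;
          /* Skip anything that is not IPv4: IPv6 goes unacknowledged. */
          if (ifa->ifa_addr->sa_family != AF_INET)
              continue;
          printf("usable IPv4 interface: %s\n", ifa->ifa_name);
      }
      freeifaddrs(ifap);
      return 0;
  }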
I guess this means that the PISM and PETSc guys can "stand easy"
whilst the OpenMPI community needs to follow up on why an
"addr.sa_len=0" is creeping through the interface inspection
code (on NetBSD at least) when it passes through the various
IPv6 stanzas.
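
To make the failure mode concrete: on 4.4BSD-derived systems such
as NetBSD, code walking a SIOCGIFCONF buffer advances its cursor
by each entry's sa_len, so an entry that comes back with sa_len=0
leaves the cursor stuck, or mis-aligns every entry after it. The
stand-alone sketch below shows that walking pattern and the guard
it needs; it illustrates the class of code involved rather than
reproducing OpenMPI's own interface inspection routine:

  /*
   * Sketch of the classic 4.4BSD SIOCGIFCONF walk, which steps
   * through the buffer by each entry's sa_len.  An entry reported
   * with sa_len=0 would leave "cur" stuck in place, hence the clamp.
   */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/ioctl.h>
  #include <net/if.h>

  int main(void)
  {
      char buf[8192];
      struct ifconf ifc;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);

      if (fd < 0) {
          perror("socket");
          return 1;
      }
      ifc.ifc_len = sizeof(buf);
      ifc.ifc_buf = buf;
      if (ioctl(fd, SIOCGIFCONF, &ifc) < 0) {
          perror("SIOCGIFCONF");
          return 1;
      }

      char *cur = buf;
      char *end = buf + ifc.ifc_len;
      while (cur < end) {
          struct ifreq *ifr = (struct ifreq *)cur;
          size_t salen = ifr->ifr_addr.sa_len;   /* BSD-only field */

          printf("%s: family %d, sa_len %zu\n",
                 ifr->ifr_name, ifr->ifr_addr.sa_family, salen);

          /* Without this clamp, sa_len=0 means cur never advances. */
          if (salen < sizeof(struct sockaddr))
              salen = sizeof(struct sockaddr);
          cur += sizeof(ifr->ifr_name) + salen;
      }
      close(fd);
      return 0;
  }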
Thanks for all the feedback on this from all quarters,
Kevin
--
Kevin M. Buckley                          Room:  CO327
School of Engineering and                 Phone: +64 4 463 5971
Computer Science
Victoria University of Wellington
New Zealand