[OMPI users] NetBSD OpenMPI - SGE - PETSc - PISM
Jeff Squyres
jsquyres at cisco.com
Thu Dec 17 17:03:30 CST 2009
On Dec 17, 2009, at 5:55 PM, <Kevin.Buckley at ecs.vuw.ac.nz> wrote:
> I am happy to be able to inform you that the problems we were
> seeing would seem to have been arising down at the OpenMPI
> level.
Happy for *them*, at least. ;-)
> If I remove any acknowledgement of IPv6 within the OpenMPI
> code, then both the PETSc examples and the PISM application
> have been seen to run on my initial 8-processor
> parallel environment when submitted as a Sun Grid Engine
> job.
Ok, that's good.
> I guess this means that the PISM and PETSc guys can "stand easy"
> whilst the OpenMPI community needs to follow up on why there's
> a "addr.sa_len=0" creeping through the interface inspection
> code (upon NetBSD at least) when it passes thru the various
> IPv6 stanzas.
Ok. We're still somewhat at a loss here, because we don't have any NetBSD systems to test on. :-( We're happy to provide any help that we can, and just like you, we'd love to see this problem resolved -- but NetBSD still isn't on any of our core competency lists. :-(
FWIW, we might want to move this discussion to the devel at open-mpi.org mailing list...
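In case it helps anyone poking at this, below is a minimal, standalone sketch -- NOT the actual Open MPI interface-inspection code -- of the kind of defensive check one might put around a zero sa_len while walking the getifaddrs() list. The __NetBSD__ guard and the skip-on-zero behaviour are illustrative assumptions about where the bad value could be caught, not a fix anyone has committed.

/* Sketch only: walk the interface list with getifaddrs() and skip
 * entries whose sa_len is 0, which is where a bogus IPv6 entry could
 * otherwise slip into later bookkeeping on a BSD-style stack. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct ifaddrs *ifap, *ifa;
    char buf[INET6_ADDRSTRLEN];

    if (getifaddrs(&ifap) != 0) {
        perror("getifaddrs");
        return 1;
    }

    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL) {
            continue;                        /* interface has no address */
        }
#ifdef __NetBSD__
        /* BSD-derived stacks carry sa_len in struct sockaddr; a value
         * of 0 should never appear for a real address, so skip it
         * rather than copy garbage further along (assumed guard). */
        if (ifa->ifa_addr->sa_len == 0) {
            fprintf(stderr, "skipping %s: sa_len == 0\n", ifa->ifa_name);
            continue;
        }
#endif
        if (ifa->ifa_addr->sa_family == AF_INET6) {
            struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *) ifa->ifa_addr;
            inet_ntop(AF_INET6, &sin6->sin6_addr, buf, sizeof(buf));
            printf("%s: inet6 %s\n", ifa->ifa_name, buf);
        }
    }

    freeifaddrs(ifap);
    return 0;
}

Running something like that on the affected NetBSD box and comparing its output against what Open MPI's own interface walk reports might help narrow down where the zero length is coming from.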
--
Jeff Squyres
jsquyres at cisco.com