jaguar and mercurial
Richard Tran Mills
rmills at ornl.gov
Tue Nov 6 14:16:35 CST 2007
There are now two systems that you can log into, "jaguar" and "jaguarcnl".
The names are a poor choice at this point, though, because both systems are
now running CNL! The machine was originally partitioned into jaguar and
jaguarcnl so that CNL could be tested on just part of it, but CNL is now
installed on both. The machine will remain partitioned, however, because the
quad-core upgrade will be done one partition at a time. Both partitions are
available now and are running the same OS, scheduler, etc. All of the
faster XT4 nodes are in the "jaguar" partition.
Mercurial 0.9.3 is on both partitions. You just need to do a "module load
mercurial". I guess I need to install 0.9.4 since that's out now...
PETSc has been built for CNL. Use

  PETSC_DIR=/apps/PETSC/petsc-2.3.3-hg

and one of the following for PETSC_ARCH:

  cray-xt3-cnl_g     (debugging)
  cray-xt3-cnl_fast  (most aggressive optimization)
  cray-xt3-cnl_O3    (-O3 optimization)
  cray-xt3-cnl_O2    (-O2 optimization)
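So, for example, to build against the debugging configuration you would set
both variables before running make. A minimal sketch (the KSP tutorial
directory and target here are just the standard PETSc examples, not anything
specific to this install):

  export PETSC_DIR=/apps/PETSC/petsc-2.3.3-hg
  export PETSC_ARCH=cray-xt3-cnl_g
  cd $PETSC_DIR/src/ksp/ksp/examples/tutorials
  make ex2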
All of the example PETSc codes I have tried work, but PFLOW is dying with a
seg fault on the simple THC example. I am currently trying to track down the
problem.
There are two immediate differences between Catamount and CNL:
1) 'yod' has been replaced by the 'aprun' command.
2) Compute nodes cannot see any files that are not on the Lustre
scratch space (/tmp/work/$USERNAME).
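In practice that means you copy your executable and input files into the
scratch space and launch from there with aprun. A rough sketch (the
executable name, input file, and process count are just placeholders; see
the aprun man page for the full set of options):

  cp pflow pflow.in /tmp/work/$USERNAME/
  cd /tmp/work/$USERNAME
  aprun -n 64 ./pflow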
--Richard
Peter Lichtner wrote:
> Looks like CNL does not have mercurial. Also Richard are you going to
> recompile petsc etc. for CNL? It looks like both systems are still
> available.
> ...Peter
--
Richard Tran Mills, Ph.D.
Computational Scientist             E-mail: rmills at ornl.gov
Computational Earth Sciences Group  Phone:  (865) 241-3198
Oak Ridge National Laboratory       Fax:    (865) 574-0405