[petsc-dev] Making a PetSc Roll and Rocks Module

Matthew Knepley knepley at gmail.com
Thu Jan 24 13:18:28 CST 2013


On Thu, Jan 24, 2013 at 1:12 PM, Satish Balay <balay at mcs.anl.gov> wrote:

> On Thu, 24 Jan 2013, Barry Smith wrote:
>
> >
> >
> > Begin forwarded message:
> >
> > > From: Philip Papadopoulos <philip.papadopoulos at gmail.com>
> > > Subject: Re: [petsc-maint #149715] Making a PetSc Roll and Rocks Module
> > > Date: January 24, 2013 9:48:36 AM CST
> > > To: "Schneider, Barry I." <bschneid at nsf.gov>
> > > Cc: Barry Smith <bsmith at mcs.anl.gov>
> > >
> > > Dear Barry^2,
> > >
> > > The major observation about the build process is that building the
> software so that it can live in a standard OS package is more work than
> it should be. Here is the standard "meme".
> > > Suppose you have three directories:
> > > sw-src
> tmproot/sw-install
> > > sw-install
> > >
> > > sw-src: the source directory tree for building the sw package (e.g.
> petsc-3.3-p5)
> > > sw-install:  the directory where one wants sw installed on the final
> system. We use something
> > >                  like
> /opt/petsc/compiler/mpi/interconnect/petsc-version/petsc-arch
> > >
> > > The remaining directory, tmproot/sw-install, is an artifact of the way
> many people build software for packaging. Instead of
> > > installing into sw-install directly, you install into a
> tmproot/sw-install.  Then you direct the package
> > > manager to grab all files in tmproot/sw-install to put in the package.
>  The package itself will
> > > strip off the tmproot leading directory. In other words, the package
> labels all files as sw-install/.
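> > >
> > > A concrete sketch of that meme (paths purely illustrative, not from
> > > an actual roll):
> > >
> > >   ./configure --prefix=/tmproot/sw-install   # install straight into the staging area
> > >   make
> > >   make install
> > >   (cd /tmproot && tar czf sw.tar.gz sw-install)  # packager grabs the staged tree
> > >   # the package strips the tmproot prefix, so files unpack to /sw-install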
> > >
> > > So the build issue that I ran into is that the build directory and/or
> the tmproot directory path becomes embedded into a large number of files
> (include files, mod files, etc). To get around this,
> > > I did a bind mount of (tmproot/sw-install --> /sw-install) and told
> petsc to install into /sw-install.  I consider that to be a "hack", but
> petsc isn't the only software that I've run into that requires this
> > > mounting bit of legerdemain.
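> > >
> > > For reference, the bind-mount hack amounts to roughly this
> > > (illustrative commands, run as root):
> > >
> > >   mkdir -p /sw-install
> > >   mount --bind tmproot/sw-install /sw-install   # staging area appears at the final path
> > >   ./configure --prefix=/sw-install && make && make install
> > >   umount /sw-install   # staged files now record only /sw-install paths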
> > >
> > > Many software packages support a dest= or DESTDIR variable for
> "make install" that stages the installation into a tmproot directory
> but leaves no trace of tmproot in any of its configuration files. (
> http://git.rocksclusters.org/cgi-bin/gitweb.cgi?p=core/python/.git;a=blob;f=src/python-2/Makefile;h=ced6cc1c6eb6a4f70dd0f3e1b0ccd1ac1e40f989;hb=HEAD)
>  shows a Makefile for python that supports this.
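> > >
> > > In shell terms the convention is (a sketch; the prefix is a placeholder):
> > >
> > >   ./configure --prefix=/opt/sw    # /opt/sw is what gets embedded in files
> > >   make
> > >   make install DESTDIR=/tmproot   # files land under /tmproot/opt/sw,
> > >                                   # yet no installed file mentions /tmproot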
>
> We do support this install with DESTDIR. It might have rough edges -
> but it's supposed to work the same way as any other package that
> supports it.
>
> One difference though is - since we also support the alternate
> organization [default] for in-place install with PETSC_ARCH - one has
> to use this PETSC_ARCH during the prefix build process as well.  [but
> then PETSC_ARCH is no longer used/needed after that]
>
> i.e.
> configure
> --prefix=/opt/petsc/compiler/mpi/interconnect/petsc-version/arch1
> PETSC_ARCH=build1
> make PETSC_ARCH=build1 all
> make install PETSC_ARCH=build1 DESTDIR=/tmp/dest1
> <package up from DESTDIR, and install as root:>
> Now user does:
> make PETSC_DIR=/opt/petsc/compiler/mpi/interconnect/petsc-version/arch1 ex1
>
> configure [different options]
> --prefix=/opt/petsc/compiler/mpi/interconnect/petsc-version/arch2
> PETSC_ARCH=build2
> make PETSC_ARCH=build2 all
> make install PETSC_ARCH=build2 DESTDIR=/tmp/dest2
> <package up from DESTDIR, and install as root:>
> Now user does:
> make PETSC_DIR=/opt/petsc/compiler/mpi/interconnect/petsc-version/arch2 ex1
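>
> [for the <package up from DESTDIR> step above, one plain-tar
> possibility - purely illustrative, rocks/rpm tooling would do its own
> equivalent:
>
>   cd /tmp/dest2 && tar czf petsc-arch2.tar.gz opt
>
> and then, as root on the target machines:
>
>   tar xzf petsc-arch2.tar.gz -C /
> ]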
>

Satish, would you mind putting this little blurb on the installation page
in the section about
using prefix? I could not find this anywhere in our documentation.

  Thanks,

    Matt


> etc.
>
> There could be rough edges to this process as most folks use in-place
> install. But if it doesn't work for you, we should fix it.
>
> > > Ultimately we create packages of software to simplify a number of
> things, including updating installed software on many machines.
> > >
> > > The other "issue" is not really an issue, but a request. When petsc is
> built it creates a petscvariables (or similarly named) file with lots of
> environment variables.  It would be terrific if it could also create an
> environment modules file with this same information.  Users often want
> different versions/different configs of the same software, and environment
> modules is now standard in Red Hat/CentOS distributions.
>
> The stuff petsc creates in petscvariables is for consumption by
> 'make', and the only env variable the user needs [in this case with a
> prefix install] is PETSC_DIR [as shown above].
>
> Since this PETSC_DIR value is a prefix specified to configure [e.g.
> --prefix=/opt/petsc/compiler/mpi/interconnect/petsc-version/arch2], its
> value is known only to the rocks package builder. So it would know how
> to create the correct module [or whatever mechanism is used to
> identify/use it] for a given petsc install.
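>
> [if the package builder did want to emit a modulefile at build time, a
> minimal sketch - assuming Tcl environment-modules, with the
> placeholder path from above - would be:
>
>   #%Module1.0
>   set prefix /opt/petsc/compiler/mpi/interconnect/petsc-version/arch2
>   setenv       PETSC_DIR        $prefix
>   prepend-path LD_LIBRARY_PATH  $prefix/lib
>
> but only PETSC_DIR is strictly required for the make-based usage shown
> above]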
>
> Satish
>
> > >
> > > Hope this helps.
> > >
> > >
> > > On Thu, Jan 24, 2013 at 2:44 AM, Schneider, Barry I. <bschneid at nsf.gov>
> wrote:
> > > Barry,
> > > I am sending this to Phil Papadopoulos and he can make much more
> precise recommendations.  Glad you have thick skins.  Being at NSF requires
> the same physiology.
> > >
> > > -----Original Message-----
> > > From: Barry Smith [mailto:bsmith at mcs.anl.gov]
> > > Sent: Wednesday, January 23, 2013 10:19 PM
> > > To: Schneider, Barry I.
> > > Subject: Re: [petsc-maint #149715] Making a PetSc Roll and Rocks Module
> > >
> > >
> > >    Barry,
> > >
> > >    By far the most common email we get is regarding installation, so
> we are always trying to understand how to improve that process. If we can
> eliminate just 20 percent of installation problems, that would save us a
> great deal of work, so yes, any critique/suggestions are greatly
> appreciated, and after all these years we have thick skins so can survive
> even the harshest suggestions.
> > >
> > >    Barry
> > >
> > > On Jan 23, 2013, at 6:13 PM, "Schneider, Barry I." <bschneid at nsf.gov>
> wrote:
> > >
> > > > Barry,
> > > > First, thanks for getting back to me so quickly.   I view this as a
> way to make things easier for everyone for a library that is really
> invaluable to many of us.  Phil P. is a pro at making rolls and modules for
> the Rocks distribution of CentOS that is widely used in the NSF
> cyberinfrastructure and by others.  I am both a program manager for the
> XSEDE project and a large user, so I appreciate the ability to use
> the library in its most transparent fashion.  After Phil did his thing we
> tested it and it worked perfectly on our cluster.  Basically, you load the
> module and any associated modules such as the Intel compiler and
> appropriate MPI module and then you just use it.  No screwing around with
> building and the rest of it.  That's the good news.  Of course, you need to
> make a roll for every specific combination of compiler and MPI but I
> believe what Phil has learned makes that pretty straightforward to do.
>  While I cannot speak for Phil, I would expect that anything he
> learned would be available to you folks if that's what you wanted.
>  The next step for us will be to incorporate what we did in the Rocks
> distribution, which will propagate to lots of users.  Eventually,
> there will be an XSEDE distribution which will be used by an even larger
> number of sites.  We are talking about many thousands of users.  So, I
> guess what I am really asking is whether you are interested enough in what
> Phil learned to perhaps modify the current PetSc so that it is easier for
> users to install, or to make available the technology for folks to make their
> own rolls.  Of course, you could decide that is not on your agenda and
> that's fine.  But if you capitalize on our experience and your great
> software that would be wonderful and perhaps offer users alternatives to
> the current way of doing things that would make life easier.
> > > >
> > > > **************************************************
> > > > Dr Barry I. Schneider
> > > > Program Director for Cyberinfrastructure Past Chair, APS Division of
> > > > Computational Physics Physics Division National Science Foundation
> > > > 4201 Wilson Blvd.
> > > > Arlington, VA 22230
> > > > Phone:(703) 292-7383
> > > > FAX:(703) 292-9078
> > > > Cell:(703) 589-2528
> > > > **************************************************
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: Barry Smith [mailto:bsmith at mcs.anl.gov]
> > > > Sent: Wednesday, January 23, 2013 5:56 PM
> > > > To: petsc-maint at mcs.anl.gov; Schneider, Barry I.
> > > > Subject: Re: [petsc-maint #149715] Making a PetSc Roll and Rocks
> > > > Module
> > > >
> > > >
> > > >   Barry,
> > > >
> > > >     Thanks for the feedback. We actually desire to support "system"
> installs just as easily as "home directory" installs, though our
> documentation emphasizes "home directory" installs since we feel that is
> most practical for most users. We'd be interested in more specifics on the
> difficulties and any suggestions there may be to make the process
> (especially for multiple compilers/mpis) easier. That is, what could we
> have done differently to make it easier for you?
> > > >
> > > >    Thanks
> > > >
> > > >    Barry
> > > >
> > > > On Jan 23, 2013, at 1:59 PM, "Schneider, Barry I." <bschneid at nsf.gov>
> wrote:
> > > >
> > > >> I thought I might pass on the following to the folks maintaining
> PetSc.  Recently we had the occasion to make a Rocks roll and module to be
> used on a local cluster here at NSF.  Phil Papadopoulos, the developer of
> Rocks at SDSC, took the time to help us do that, not simply because we
> wanted it but also because he wanted to know how to do it and
> distribute it to a much larger set of users of NSF resources and also on our
> NSF-supported platforms.  Here are his comments.  If you feel there
> are things that could be made easier, that would be great.  It also could be
> useful to you directly.
> > > >>
> > > >> BTW, Petsc is kind of a bear to package -- they really expect you
> to build it in your home directory :-).
> > > >> I took the time to make the roll pretty flexible in terms of
> compiler/mpi/network support to build various versions.
> > > >> This was modeled after other Triton rolls so that it is consistent
> with other software -- it will likely become part of the standard SDSC software
> stack, so this was a good thing to do all the way around.
> > > >>
> > > >> I'm about ready to add this to our code repository; it will show up
> on git.rocksclusters.org tomorrow morning.
> > > >>
> > > >>
> > > >> **************************************************
> > > >> Dr Barry I. Schneider
> > > >> Program Director for Cyberinfrastructure Past Chair, APS Division of
> > > >> Computational Physics Physics Division National Science Foundation
> > > >> 4201 Wilson Blvd.
> > > >> Arlington, VA 22230
> > > >> Phone:(703) 292-7383
> > > >> FAX:(703) 292-9078
> > > >> Cell:(703) 589-2528
> > > >> **************************************************
> > > >>
> > > >>
> > > >
> > >
> > >
> > >
> > >
> > > --
> > > Philip Papadopoulos, PhD
> > > University of California, San Diego
> > > 858-822-3628 (Ofc)
> > > 619-331-2990 (Fax)
> >
> >
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener