[petsc-dev] Multiple MPICH/fblaslapack Installs For Multiple Arches

Jacob Faibussowitsch jacob.fai at gmail.com
Sun Mar 22 16:45:05 CDT 2020


> Wrt MPICH - my suggestion is to install with --download-mpich --prefix [as mentioned]. The primary reason is that we use --with-device=ch3:sock - which is good for 'oversubscribe' usage [running "mpiexec -n 8 ./exe" on a dual-core box] - which is what we normally do during development. It is also valgrind clean.

Is the prefix where the source is downloaded to, or where the libs/binaries are put? Ideally I would like my source to be in /my/custom/path and the resulting binaries and libraries to be in /usr/bin or /usr/local/bin, where all my other downloaded packages are.
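
For concreteness, the invocation I'm picturing is something like the following (the prefix path here is just an example):

    ./configure --prefix=/my/custom/path --download-mpich --download-fblaslapack

and what I want to understand is whether that path ends up holding the downloaded sources, the installed libs/binaries, or both.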

Best regards,

Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)
Cell: (312) 694-3391

> On Mar 22, 2020, at 4:37 PM, Satish Balay <balay at mcs.anl.gov> wrote:
> 
> On Sun, 22 Mar 2020, Jacob Faibussowitsch wrote:
> 
>>> Yes on all points regarding MPI and BLAS/Lapack.  I recommend installing
>>> a current MPICH and/or Open MPI system-wide, preferably hooked up to
>>> ccache (see replies to this thread:
>>> https://lists.mcs.anl.gov/pipermail/petsc-dev/2020-January/025505.html),
>>> as well as BLAS/Lapack system-wide.  It's the other packages that are
>>> more likely to depend on int/scalar configuration, but even many of
>>> those (HDF5, SuiteSparse, etc.) aren't built specially for PETSc.
>> 
>> Are the Homebrew MPICH, OpenBLAS, and LAPACK sufficient here? Or is it recommended to build all three from source?
> 
> On Mac - configure defaults to using VecLib [Apple's default blas/lapack].
> 
> Wrt MPICH - my suggestion is to install with --download-mpich --prefix [as mentioned]. The primary reason is that we use --with-device=ch3:sock - which is good for 'oversubscribe' usage [running "mpiexec -n 8 ./exe" on a dual-core box] - which is what we normally do during development. It is also valgrind clean.
> 
> And then we recommend Xcode gcc/g++ with gfortran from brew - so MPICH can be built with this combination of compilers [in this mode].
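> 
> For illustration, something along these lines [just an example invocation, and the prefix path is a placeholder]:
> 
>   ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --prefix=/my/custom/path
> 
> [here gcc/g++ resolve to the Xcode compilers and gfortran comes from brew]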
> 
> Satish
> 
>> 
>> Best regards,
>> 
>> Jacob Faibussowitsch
>> (Jacob Fai - booss - oh - vitch)
>> Cell: (312) 694-3391
>> 
>>> On Mar 22, 2020, at 4:25 PM, Jed Brown <jed at jedbrown.org> wrote:
>>> 
>>> Jacob Faibussowitsch <jacob.fai at gmail.com> writes:
>>> 
>>>> Hello all,
>>>> 
>>>> As part of development, I have several arch folders lying around in my PETSC_DIR, namely: 32-bit OSX, 64-bit OSX, 32-bit Linux with valgrind, 64-bit Linux with valgrind, and a 32-bit arch up to date with current master. All of these use --download-mpich --download-fblaslapack and hence have their own copy of each (so that’s 5 copies of each, plus other duplicated packages, I’m sure). At this stage, even getting the bare minimum of these arches ready for dev work after a rebase/git pull takes ages, as package versions or conf settings change and force a rebuild of the same packages multiple times.
>>>> 
>>>> My question(s):
>>>> What PETSc ./configure options are necessary to change the
>>>> configuration of each library w.r.t. PETSc? I.e., can my 64-bit arches
>>>> use my 32-bit MPICH/fblaslapack and vice versa?
>>> 
>>> Yes on all points regarding MPI and BLAS/Lapack.  I recommend installing
>>> a current MPICH and/or Open MPI system-wide, preferably hooked up to
>>> ccache (see replies to this thread:
>>> https://lists.mcs.anl.gov/pipermail/petsc-dev/2020-January/025505.html),
>>> as well as BLAS/Lapack system-wide.  It's the other packages that are
>>> more likely to depend on int/scalar configuration, but even many of
>>> those (HDF5, SuiteSparse, etc.) aren't built specially for PETSc.
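>>> 
>>> As a sketch of what sharing could look like [directories and exact option
>>> spellings here are from memory - check ./configure --help]:
>>> 
>>>   ./configure PETSC_ARCH=arch-int32 --with-mpi-dir=/usr/local --with-blaslapack-dir=/usr/local
>>>   ./configure PETSC_ARCH=arch-int64 --with-64-bit-indices --with-mpi-dir=/usr/local --with-blaslapack-dir=/usr/local
>>> 
>>> i.e. both arches point at the same system-wide MPI and BLAS/Lapack install.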
>>> 
>>>> Does this change when I have --with-debug on or off? If so, what other
>>>> packages have a similar ability? Is there anywhere in ./configure
>>>> --help where this kind of information would be documented?
>>>> 
>>>> I suspect that this hasn’t been fully explored since it’s primarily a developer “problem” and not one the average user will run into or care about (since they usually aren’t building PETSc multiple times). I’m sure everyone has their own ways of tackling this problem; I’d love to hear them.
>>>> 
>>>> Best regards,
>>>> 
>>>> Jacob Faibussowitsch
>>>> (Jacob Fai - booss - oh - vitch)
>>>> Cell: (312) 694-3391
>> 
>> 
