[petsc-users] Installation question
Pham Pham
pvsang002 at gmail.com
Fri May 5 10:18:29 CDT 2017
Hi Satish,
It runs now, but the speedup is poor:
Please help me improve this.
Thank you.
On Fri, May 5, 2017 at 10:02 PM, Satish Balay <balay at mcs.anl.gov> wrote:
> With Intel MPI - it's best to use mpiexec.hydra [and not mpiexec]
>
> So you can do:
>
> make PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5
> PETSC_ARCH=arch-linux-cxx-opt MPIEXEC=mpiexec.hydra test
>
>
> [you can also specify --with-mpiexec=mpiexec.hydra at configure time]
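
For reference, a minimal sketch of that configure-time alternative; the --with-mpi-dir path below is only a placeholder for whichever cluster MPI install is actually used, and all other configure options should stay as in the original configure line:

    cd /home/svu/mpepvs/petsc/petsc-3.7.5
    ./configure PETSC_ARCH=arch-linux-cxx-opt \
        --with-mpi-dir=/path/to/cluster/mpi \
        --with-mpiexec=mpiexec.hydra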
>
> Satish
>
>
> On Fri, 5 May 2017, Pham Pham wrote:
>
> > Hi,
> > I can configure now, but it fails when testing:
> >
> > [mpepvs at atlas7-c10 petsc-3.7.5]$ make
> > PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5 PETSC_ARCH=arch-linux-cxx-opt test
> > Running test examples to verify correct installation
> > Using PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5 and
> > PETSC_ARCH=arch-linux-cxx-opt
> > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI
> > process
> > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> > mpiexec_atlas7-c10: cannot connect to local mpd (/tmp/mpd2.console_mpepvs);
> > possible causes:
> > 1. no mpd is running on this host
> > 2. an mpd is running but was started without a "console" (-n option)
> > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI
> > processes
> > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> > mpiexec_atlas7-c10: cannot connect to local mpd (/tmp/mpd2.console_mpepvs);
> > possible causes:
> > 1. no mpd is running on this host
> > 2. an mpd is running but was started without a "console" (-n option)
> > Possible error running Fortran example src/snes/examples/tutorials/ex5f
> > with 1 MPI process
> > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> > mpiexec_atlas7-c10: cannot connect to local mpd (/tmp/mpd2.console_mpepvs);
> > possible causes:
> > 1. no mpd is running on this host
> > 2. an mpd is running but was started without a "console" (-n option)
> > Completed test examples
> > =========================================
> > Now to evaluate the computer systems you plan use - do:
> > make PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5
> > PETSC_ARCH=arch-linux-cxx-opt streams
> >
> >
> >
> >
> > Please help with this.
> > Many thanks!
> >
> >
> > On Thu, Apr 20, 2017 at 2:02 AM, Satish Balay <balay at mcs.anl.gov> wrote:
> >
> > > Sorry - should have mentioned:
> > >
> > > do 'rm -rf arch-linux-cxx-opt' and rerun configure.
> > >
> > > The MPICH install from the previous build [currently in
> > > arch-linux-cxx-opt/]
> > > is conflicting with --with-mpi-dir=/app1/centos6.3/gnu/mvapich2-1.9/
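
A minimal sketch of that clean rebuild, assuming the remaining configure options are kept exactly as in the original configure invocation:

    cd /home/svu/mpepvs/petsc/petsc-3.7.5
    rm -rf arch-linux-cxx-opt
    ./configure PETSC_ARCH=arch-linux-cxx-opt \
        --with-mpi-dir=/app1/centos6.3/gnu/mvapich2-1.9/ \
        [remaining options from the original configure line]
    make PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5 PETSC_ARCH=arch-linux-cxx-opt all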
> > >
> > > Satish
> > >
> > >
> > > On Wed, 19 Apr 2017, Pham Pham wrote:
> > >
> > > > I reconfigured PETSc with the installed MPI; however, I got a serious error:
> > > >
> > > > **************************ERROR*************************************
> > > > Error during compile, check arch-linux-cxx-opt/lib/petsc/conf/make.log
> > > > Send it and arch-linux-cxx-opt/lib/petsc/conf/configure.log to
> > > > petsc-maint at mcs.anl.gov
> > > > ********************************************************************
> > > >
> > > > Could you please explain what is happening?
> > > >
> > > > Thank you very much.
> > > >
> > > >
> > > >
> > > >
> > > > On Wed, Apr 19, 2017 at 11:43 PM, Satish Balay <balay at mcs.anl.gov> wrote:
> > > >
> > > > > Presumably your cluster already has a recommended MPI to use [which is
> > > > > already installed]. So you should use that - instead of
> > > > > --download-mpich=1
> > > > >
> > > > > Satish
> > > > >
> > > > > On Wed, 19 Apr 2017, Pham Pham wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I just installed petsc-3.7.5 on my university cluster. When evaluating
> > > > > > the computer system, PETSc reports "It appears you have 1 node(s)". I do
> > > > > > not understand this, since the system is a multi-node system. Could you
> > > > > > please explain this to me?
> > > > > >
> > > > > > Thank you very much.
> > > > > >
> > > > > > S.
> > > > > >
> > > > > > Output:
> > > > > > =========================================
> > > > > > Now to evaluate the computer systems you plan use - do:
> > > > > > make PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5
> > > > > > PETSC_ARCH=arch-linux-cxx-opt streams
> > > > > > [mpepvs at atlas7-c10 petsc-3.7.5]$ make
> > > > > > PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5 PETSC_ARCH=arch-linux-cxx-opt
> > > > > > streams
> > > > > > cd src/benchmarks/streams; /usr/bin/gmake --no-print-directory
> > > > > > PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5 PETSC_ARCH=arch-linux-cxx-opt
> > > > > > streams
> > > > > > /home/svu/mpepvs/petsc/petsc-3.7.5/arch-linux-cxx-opt/bin/mpicxx -o
> > > > > > MPIVersion.o -c -Wall -Wwrite-strings -Wno-strict-aliasing
> > > > > > -Wno-unknown-pragmas -fvisibility=hidden -g -O
> > > > > > -I/home/svu/mpepvs/petsc/petsc-3.7.5/include
> > > > > > -I/home/svu/mpepvs/petsc/petsc-3.7.5/arch-linux-cxx-opt/include
> > > > > > `pwd`/MPIVersion.c
> > > > > > Running streams with
> > > > > > '/home/svu/mpepvs/petsc/petsc-3.7.5/arch-linux-cxx-opt/bin/mpiexec' using
> > > > > > 'NPMAX=12'
> > > > > > Number of MPI processes 1 Processor names atlas7-c10
> > > > > > Triad: 9137.5025 Rate (MB/s)
> > > > > > Number of MPI processes 2 Processor names atlas7-c10 atlas7-c10
> > > > > > Triad: 9707.2815 Rate (MB/s)
> > > > > > Number of MPI processes 3 Processor names atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 13559.5275 Rate (MB/s)
> > > > > > Number of MPI processes 4 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 14193.0597 Rate (MB/s)
> > > > > > Number of MPI processes 5 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 14492.9234 Rate (MB/s)
> > > > > > Number of MPI processes 6 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15476.5912 Rate (MB/s)
> > > > > > Number of MPI processes 7 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15148.7388 Rate (MB/s)
> > > > > > Number of MPI processes 8 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15799.1290 Rate (MB/s)
> > > > > > Number of MPI processes 9 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15671.3104 Rate (MB/s)
> > > > > > Number of MPI processes 10 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15601.4754 Rate (MB/s)
> > > > > > Number of MPI processes 11 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15434.5790 Rate (MB/s)
> > > > > > Number of MPI processes 12 Processor names atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10 atlas7-c10
> > > > > > Triad: 15134.1263 Rate (MB/s)
> > > > > > ------------------------------------------------
> > > > > > np speedup
> > > > > > 1 1.0
> > > > > > 2 1.06
> > > > > > 3 1.48
> > > > > > 4 1.55
> > > > > > 5 1.59
> > > > > > 6 1.69
> > > > > > 7 1.66
> > > > > > 8 1.73
> > > > > > 9 1.72
> > > > > > 10 1.71
> > > > > > 11 1.69
> > > > > > 12 1.66
> > > > > > Estimation of possible speedup of MPI programs based on Streams benchmark.
> > > > > > It appears you have 1 node(s)
> > > > > > Unable to plot speedup to a file
> > > > > > Unable to open matplotlib to plot speedup
> > > > > > [mpepvs at atlas7-c10 petsc-3.7.5]$
> > > > > > [mpepvs at atlas7-c10 petsc-3.7.5]$
> > > > > >
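
As an aside on the output quoted above: every rank ran on atlas7-c10, so only that one hostname appears and only one node is reported. A hedged sketch of re-running the benchmark over more than one node, assuming the streams target honors the same make-level MPIEXEC override suggested for the test target earlier in this thread (the hostfile name and NPMAX value here are placeholders, not taken from this thread):

    make PETSC_DIR=/home/svu/mpepvs/petsc/petsc-3.7.5 PETSC_ARCH=arch-linux-cxx-opt \
         MPIEXEC='mpiexec.hydra -hostfile myhosts' NPMAX=24 streams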
> > > > >
> > > > >
> > > >
> > >
> > >
> >
>
>