[petsc-users] petsc configure time

Satish Balay balay at mcs.anl.gov
Thu Mar 13 18:04:37 CDT 2014


Hm - configure should first print the 'executing' message - and then
run the command.

If it's hanging at the 'pushing language' message - I'm not sure what
the cause is.

Perhaps the python stdout-buffer-flush is out of sync [and it's
icc/ifort that's hanging]. Or there is a problem with python on this
machine?
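To see how that buffering theory could produce exactly this symptom, here is a minimal sketch (not configure's actual code - the child script and the 'pushing language C' string are just illustrative): a progress message written without a flush can sit in a pipe buffer while the real work runs, so the message and the work appear out of sync to the reader.

```python
import subprocess
import sys

# Child script: announces a step, then does some "work" before exiting.
# There is no flush after the write, mimicking a progress message that
# can sit in a buffer while the real work (a compiler run) happens.
child_src = (
    "import sys, time\n"
    "sys.stdout.write('pushing language C\\n')\n"
    "time.sleep(0.1)\n"
)

# Running the child with -u forces unbuffered stdout, so the message
# reaches the parent as soon as it is written instead of at exit.
out = subprocess.run(
    [sys.executable, "-u", "-c", child_src],
    capture_output=True, text=True,
).stdout
print(out.strip())
```

This is why trying a different python (or an unbuffered one) is worth a shot: if the buffering is the only problem, configure is merely *reporting* late, and the actual slowness is in the commands it runs.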

Two things you can try to confirm:

1. Run configure with gcc/gfortran and see if that's quicker.
If so - then the Intel compilers are the cause of the slowdown.

2. Try configure with the option --useThreads=0 and see if this makes a difference.
[Or try a different python.]
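As a concrete sketch, the two checks might look like this (the option sets are illustrative - keep the rest of your usual options the same in each run so the comparison is fair):

```shell
# 1. Baseline with the GNU compilers; if this run is fast, the Intel
#    compilers (e.g. a slow license-server checkout) are the likely cause.
./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++

# 2. Same Intel-compiler configure, but with configure's worker threads
#    disabled, to rule out a python threading/buffering interaction.
./configure --with-cc=mpiicc --with-fc=mpiifort --with-cxx=mpiicpc --useThreads=0
```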

Satish

On Thu, 13 Mar 2014, Dharmendar Reddy wrote:

> I see that at every 'pushing language' message, the screen stays
> there for a while, and then the compile/execute commands quickly
> appear and go....
> 
> 
> On Thu, Mar 13, 2014 at 4:23 PM, Balay, Satish <balay at mcs.anl.gov> wrote:
> > You can try "tail -f configure.log" to see where it's hanging
> > during the run.
> >
> > Intel compilers can hang waiting for response from license server.
> >
> > Satish
> > ________________________________
> > From: Dharmendar Reddy
> > Sent: 3/13/2014 2:03 PM
> > To: Smith, Barry F.
> > Cc: PETSc users list
> > Subject: Re: [petsc-users] petsc configure time
> >
> > Yes, my home directory is mounted on NFS, and I have configured and
> > installed petsc many times on my laptop and on TACC Stampede (which
> > also has my home directory mounted on a network file system).  But
> > the particular computer that I am working on now has been extremely
> > slow when it comes to petsc configure. Any suggestions on how I can
> > fix this? I do not have the choice of not having my home on NFS.
> >
> >
> > Otherwise, I do not see a big disk I/O impact even when I visualize
> > large ( > 100 MB ) files.
> >
> >
> >
> > On Thu, Mar 13, 2014 at 3:55 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>
> >>    The long time is pretty much always due to a slow file system (it
> >> takes about 3 minutes on my laptop using the local disk), but on a
> >> desktop machine using a network file system it can take up to 20
> >> minutes.  We generally build on a local disk; since disk space is so
> >> cheap now, pretty much any machine has gigabytes of free disk space
> >> that you can use to build on.
> >>
> >>    I think two hours is totally unacceptably long. What type of
> >> system are you building on, and where is the file system? My guess is
> >> /home/reddy is on some slow filesystem away from the machine you are
> >> compiling on.
> >>
> >>    Barry
> >>
> >> On Mar 13, 2014, at 3:29 PM, Dharmendar Reddy <dharmareddy84 at gmail.com>
> >> wrote:
> >>
> >>> Hello,
> >>>         How long does it take to configure petsc? I understand that
> >>> it depends on the options, but I find that the particular version I
> >>> have takes a very long time (nearly 2 hours) before it begins
> >>> configuring packages.
> >>>
> >>> I am using Intel MPI and Intel compilers.
> >>>
> >>> I am using the following config opts:
> >>> PETSC_VERSION   = petsc-3.4.3
> >>> MPICC=mpiicc
> >>> MPIF90=mpiifort
> >>> MPICXX=mpiicpc
> >>> COMPILERS = --with-cc="$(MPICC)" --with-fc="$(MPIF90)"
> >>> --with-cxx="$(MPICXX)" COPTFLAGS="$(O_LEVEL)" CXXOPTFLAGS="$(O_LEVEL)"
> >>> FOPTFLAGS="$(O_LEVEL)"
> >>> # COMPILERS = --with-mpi-dir=$(MPI_HOME)
> >>>
> >>> BLAS_LAPACK     = $(PETSC_BLAS_LAPACK_DIR)
> >>> PETSCExtPackagePath = /home/reddy/libs/petsc
> >>> METISPATH=$(PETSCExtPackagePath)/metis-5.0.2-p3.tar.gz
> >>> MUMPSPATH=$(PETSCExtPackagePath)/MUMPS_4.10.0-p3.tar.gz
> >>> PARMETISPATH=$(PETSCExtPackagePath)/parmetis-4.0.2-p5.tar.gz
> >>> SUPERLUPATH=$(PETSCExtPackagePath)/superlu_dist_3.3.tar.gz
> >>> SCALPACKINC=$(MKLHOME)/include
> >>> SCALPACKLIB="$(MKLROOT)/lib/intel64/libmkl_scalapack_lp64.a
> >>> -Wl,--start-group $(MKLROOT)/lib/intel64/libmkl_intel_lp64.a
> >>> $(MKLROOT)/lib/intel64/libmkl_core.a
> >>> $(MKLROOT)/lib/intel64/libmkl_sequential.a -Wl,--end-group
> >>> $(MKLROOT)/lib/intel64/libmkl_blacs_intelmpi_lp64.a -lpthread -lm"
> >>> #BLACSINC=$(MKLHOME)/include
> >>> #BLACSLIB=$(MKLHOME)/lib/intel64/libmkl_blacs_intelmpi_lp64.a
> >>> confOptsCommon = --with-x=0 --with-make-np=12 --with-hdf5
> >>> --with-hdf5-dir=$(HDF5_DIR) --with-single-library=0  --with-pic=1
> >>> --with-shared-libraries=0 --with-blas-lapack-dir=$(BLAS_LAPACK)
> >>> --with-clanguage=C++ --with-fortran --with-debugging=1 $(COMPILERS)
> >>> --download-metis=$(METISPATH) --download-parmetis=$(PARMETISPATH)
> >>> --download-superlu_dist=$(SUPERLUPATH) --download-mumps=$(MUMPSPATH)
> >>> --with-scalapack-include=$(SCALPACKINC)
> >>> --with-scalapack-lib=$(SCALPACKLIB)
> >>> #--with-blacs-include=$(BLACSINC) --with-blacs-lib=$(BLACSLIB)
> >>>
> >>> ### configure command
> >>> ./configure --with-scalar-type=real $(confOptsCommon)
> >>
> 


