[petsc-users] petsc configure time

Dharmendar Reddy dharmareddy84 at gmail.com
Thu Mar 13 16:17:08 CDT 2014


Also, I find it very slow even when I try to configure in the local
/tmp folder. How can I diagnose this?
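One quick check (a sketch; the directory, file count, and paths below are only examples): configure does a very large number of small-file operations, so timing small-file creation on the suspect filesystem is usually more revealing than large-file throughput.

```shell
# Time many small-file creations, which is closer to what configure does
# than streaming I/O. Point "dir" at the filesystem you want to test
# (e.g. /tmp vs. your NFS home); 200 files is an arbitrary example count.
dir=$(mktemp -d /tmp/fsbench.XXXXXX)
start=$(date +%s)
i=1
while [ $i -le 200 ]; do
    echo test > "$dir/f$i"
    i=$((i + 1))
done
end=$(date +%s)
rm -rf "$dir"
echo "200 small-file creations took $((end - start)) seconds"
```

If the same loop is much slower in your home directory than in /tmp, the filesystem (not PETSc) is the bottleneck.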

On Thu, Mar 13, 2014 at 4:03 PM, Dharmendar Reddy
<dharmareddy84 at gmail.com> wrote:
> Yes, my home directory is mounted on NFS, and I have configured and
> installed PETSc many times on my laptop and on TACC Stampede (which
> also has home directories mounted on a network file system). But the
> particular computer I am working on now has been extremely slow when
> it comes to the PETSc configure. Any suggestions on how I can fix
> this? I do not have the option of moving my home directory off NFS.
>
>
> Otherwise, I do not see a big disk I/O impact even when I visualize
> large (> 100 MB) files.
>
>
>
> On Thu, Mar 13, 2014 at 3:55 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>    The long time is pretty much always due to a slow file system (it takes about 3 minutes on my laptop using the local disk), but on a desktop machine using a network file system it can take up to 20 minutes.  We generally always build on a local disk; since disk space is so cheap now, pretty much any machine has gigabytes of free disk space that you can use to build on.
>>
>>    I think two hours is totally unacceptably long. What type of system are you building on and where is the file system? My guess is /home/reddy is off on some slow filesystem away from the machine you are compiling on.
>>
>>    Barry
>>
>> On Mar 13, 2014, at 3:29 PM, Dharmendar Reddy <dharmareddy84 at gmail.com> wrote:
>>
>>> Hello,
>>>         How long does it take to configure petsc ? I understand that
>>> it depends on the options, but i am find the particular version i have
>>> is taking very long time (nearly 2 hours) before it begins configuring
>>> packages.
>>>
>>> I am using Intel MPI and Intel compilers.
>>>
>>> I am using the following config opts:
>>> PETSC_VERSION   = petsc-3.4.3
>>> MPICC=mpiicc
>>> MPIF90=mpiifort
>>> MPICXX=mpiicpc
>>> COMPILERS = --with-cc="$(MPICC)" --with-fc="$(MPIF90)"
>>> --with-cxx="$(MPICXX)" COPTFLAGS="$(O_LEVEL)" CXXOPTFLAGS="$(O_LEVEL)"
>>> FOPTFLAGS="$(O_LEVEL)"
>>> # COMPILERS = --with-mpi-dir=$(MPI_HOME)
>>>
>>> BLAS_LAPACK     = $(PETSC_BLAS_LAPACK_DIR)
>>> PETSCExtPackagePath = /home/reddy/libs/petsc
>>> METISPATH=$(PETSCExtPackagePath)/metis-5.0.2-p3.tar.gz
>>> MUMPSPATH=$(PETSCExtPackagePath)/MUMPS_4.10.0-p3.tar.gz
>>> PARMETISPATH=$(PETSCExtPackagePath)/parmetis-4.0.2-p5.tar.gz
>>> SUPERLUPATH=$(PETSCExtPackagePath)/superlu_dist_3.3.tar.gz
>>> SCALPACKINC=$(MKLHOME)/include
>>> SCALPACKLIB="$(MKLROOT)/lib/intel64/libmkl_scalapack_lp64.a
>>> -Wl,--start-group $(MKLROOT)/lib/intel64/libmkl_intel_lp64.a
>>> $(MKLROOT)/lib/intel64/libmkl_core.a
>>> $(MKLROOT)/lib/intel64/libmkl_sequential.a -Wl,--end-group
>>> $(MKLROOT)/lib/intel64/libmkl_blacs_intelmpi_lp64.a -lpthread -lm"
>>> #BLACSINC=$(MKLHOME)/include
>>> #BLACSLIB=$(MKLHOME)/lib/intel64/libmkl_blacs_intelmpi_lp64.a
>>> confOptsCommon = --with-x=0 --with-make-np=12 --with-hdf5
>>> --with-hdf5-dir=$(HDF5_DIR) --with-single-library=0  --with-pic=1
>>> --with-shared-libraries=0 --with-blas-lapack-dir=$(BLAS_LAPACK)
>>> --with-clanguage=C++ --with-fortran --with-debugging=1 $(COMPILERS)
>>> --download-metis=$(METISPATH) --download-parmetis=$(PARMETISPATH)
>>> --download-superlu_dist=$(SUPERLUPATH) --download-mumps=$(MUMPSPATH)
>>> --with-scalapack-include=$(SCALPACKINC)
>>> --with-scalapack-lib=$(SCALPACKLIB)
>>> #--with-blacs-include=$(BLACSINC) --with-blacs-lib=$(BLACSLIB)
>>>
>>> ### configure command
>>> ./configure --with-scalar-type=real $(confOptsCommon)
>>
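Along the lines of the suggestion above to build on a local disk, a minimal sketch might look like the following. The tarball location, build directory, and compiler wrappers are examples only; adjust them to your system.

```shell
# Sketch: unpack and configure PETSc on local disk instead of an NFS home.
# BUILD_DIR and TARBALL are hypothetical paths; substitute your own.
BUILD_DIR=$(mktemp -d /tmp/petsc-build.XXXXXX)
TARBALL="$HOME/petsc-3.4.3.tar.gz"      # example tarball location
if [ -f "$TARBALL" ]; then
    tar -xzf "$TARBALL" -C "$BUILD_DIR"
    cd "$BUILD_DIR/petsc-3.4.3"
    # Same kind of options as in the quoted makefile, passed directly:
    ./configure --with-scalar-type=real --with-cc=mpiicc --with-fc=mpiifort
else
    echo "tarball not found: $TARBALL"
fi
```

Because every file configure touches then lives on local disk, the many small reads, writes, and compiler test runs avoid NFS round trips entirely.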

