[petsc-users] configure fails with batch+scalapack
Santiago Andres Triana
repepo at gmail.com
Sun Dec 17 17:07:26 CST 2017
After the last attempt, I tried the --with-batch option, hoping it would pick up the ScaLAPACK that had compiled and installed successfully earlier. But configure still fails. configure.log attached.
Thanks!
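[Editor's note: for PETSc of this vintage (3.8.x), --with-batch changes the configure workflow rather than just skipping tests: configure stops after emitting a small test binary that must be run on a compute node through the job manager. A minimal sketch of that workflow is below; the srun launcher and PETSC_ARCH name are assumptions, substitute your cluster's batch submission command and arch.]

```shell
cd /home/trianas/petsc-3.8.3

# 1. With --with-batch, configure stops early and generates a test
#    executable named conftest-$PETSC_ARCH instead of running tests itself.
./configure --with-batch [other options]

# 2. Run the generated conftest on a compute node via the job manager
#    (srun is an assumption; use qsub/bsub/sbatch as appropriate).
#    This produces reconfigure-$PETSC_ARCH.py.
srun -n 1 ./conftest-arch-linux2-c-debug

# 3. Finish configuring back on the login node.
./reconfigure-arch-linux2-c-debug.py
```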
On Sun, Dec 17, 2017 at 11:55 PM, Santiago Andres Triana <repepo at gmail.com>
wrote:
> Thanks for your quick responses!
>
> Attached is the configure.log obtained without using the --with-batch
> option. Configures without errors but fails at the 'make test' stage. A
> snippet of the output with the error (which I attributed to the job
> manager) is:
>
>
>
> > Local host: hpca-login
> > Registerable memory: 32768 MiB
> > Total memory: 65427 MiB
> >
> > Your MPI job will continue, but may be behave poorly and/or hang.
> > --------------------------------------------------------------------------
> 3c25
> < 0 KSP Residual norm 0.239155
> ---
> > 0 KSP Residual norm 0.235858
> 6c28
> < 0 KSP Residual norm 6.81968e-05
> ---
> > 0 KSP Residual norm 2.30906e-05
> 9a32,33
> > [hpca-login:38557] 1 more process has sent help message help-mpi-btl-openib.txt / reg mem limit low
> > [hpca-login:38557] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
> /home/trianas/petsc-3.8.3/src/snes/examples/tutorials
> Possible problem with ex19_fieldsplit_fieldsplit_mumps, diffs above
> =========================================
> Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> --------------------------------------------------------------------------
> WARNING: It appears that your OpenFabrics subsystem is configured to only
> allow registering part of your physical memory. This can cause MPI jobs to
> run with erratic performance, hang, and/or crash.
>
> This may be caused by your OpenFabrics vendor limiting the amount of
> physical memory that can be registered. You should investigate the
> relevant Linux kernel module parameters that control how much physical
> memory can be registered, and increase them to allow registering all
> physical memory on your machine.
>
> See this Open MPI FAQ item for more information on these Linux kernel
> module
> parameters:
>
> http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
>
> Local host: hpca-login
> Registerable memory: 32768 MiB
> Total memory: 65427 MiB
>
> Your MPI job will continue, but may be behave poorly and/or hang.
> --------------------------------------------------------------------------
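[Editor's note: the warning above comes from Open MPI's openib BTL. A sketch of how to inspect the limits it complains about is below; the mlx4_core parameter names are an assumption that the nodes use a Mellanox HCA with the mlx4 driver, as other hardware exposes different knobs.]

```shell
# Locked-memory limit: should be "unlimited" on properly configured HPC nodes.
ulimit -l

# Registerable memory is bounded roughly by:
#   (2^log_num_mtt) * (2^log_mtts_per_seg) * PAGE_SIZE
# (mlx4-specific; assumption that the mlx4 driver is in use)
cat /sys/module/mlx4_core/parameters/log_num_mtt
cat /sys/module/mlx4_core/parameters/log_mtts_per_seg

# An administrator can raise the limit via a modprobe option, e.g.:
#   options mlx4_core log_num_mtt=24 log_mtts_per_seg=3
# in /etc/modprobe.d/mlx4_core.conf, followed by a driver reload.
```

Note this warning is benign for correctness here (the job continues), but it explains the "may behave poorly and/or hang" message and is worth reporting to the cluster administrators.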
> Number of SNES iterations = 4
> Completed test examples
> =========================================
> Now to evaluate the computer systems you plan use - do:
> make PETSC_DIR=/home/trianas/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-debug streams
>
>
>
>
> On Sun, Dec 17, 2017 at 11:32 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
>
>> On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana <repepo at gmail.com> wrote:
>>
>>> Dear petsc-users,
>>>
>>> I'm trying to install petsc in a cluster that uses a job manager. This
>>> is the configure command I use:
>>>
>>> ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex
>>> --with-mumps=1 --download-mumps --download-parmetis
>>> --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl
>>> --download-metis --with-scalapack=1 --download-scalapack --with-batch
>>>
>>> This fails when including the option --with-batch together with
>>> --download-scalapack:
>>>
>>
>> We need configure.log
>>
>>
>>> ===============================================================================
>>>              Configuring PETSc to compile on your system
>>> ===============================================================================
>>> TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158)
>>> *******************************************************************************
>>>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
>>> -------------------------------------------------------------------------------
>>> Unable to find scalapack in default locations!
>>> Perhaps you can specify with --with-scalapack-dir=<directory>
>>> If you do not want scalapack, then give --with-scalapack=0
>>> You might also consider using --download-scalapack instead
>>> *******************************************************************************
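[Editor's note: since the configure line already uses MKL for BLAS/LAPACK, and MKL bundles ScaLAPACK and BLACS, one possible workaround for the batch-mode failure is to point configure at MKL's ScaLAPACK instead of relying on --download-scalapack. A hedged sketch follows; the exact library names are assumptions for this MKL version, so check mkl/lib/intel64 and the MKL link line advisor.]

```shell
# Assumption: MPI is Intel MPI or MPICH-compatible, so the
# mkl_blacs_intelmpi_lp64 BLACS variant applies; Open MPI needs
# mkl_blacs_openmpi_lp64 instead.
MKL=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl

./configure --with-batch --with-scalar-type=complex \
    --with-blaslapack-dir=$MKL \
    --with-scalapack-lib="-L$MKL/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" \
    --with-scalapack-include=$MKL/include
```

This sidesteps the --download-scalapack build entirely, which is the step that interacts badly with --with-batch here.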
>>>
>>>
>>> However, if I omit the --with-batch option, the configure script manages
>>> to succeed (it downloads and compiles scalapack; the install then fails
>>> later at the 'make test' stage because of the job manager).
>>>
>>
>> Can you send this failure as well?
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> Any help or suggestion is highly appreciated. Thanks in advance!
>>>
>>> Andres
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure.log
Type: application/octet-stream
Size: 5309938 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20171218/dc5bac9d/attachment-0001.obj>