[petsc-dev] running test harness under batch system

Satish Balay balay at mcs.anl.gov
Mon Jun 25 12:43:32 CDT 2018


Yes - that's how I did the spack/xsdk build on Theta.

However - it took a very long time, and this interferes with the
queues and their time limits.

I was told [later] that most likely the 'make -j200' job was being
scheduled on a single core of the Theta node - and I needed to change
some setting to change this behavior. [something I need to figure out]

Satish

On Mon, 25 Jun 2018, Smith, Barry F. wrote:

> 
>     This assumes that the compilers are all available and working on the compute nodes, correct?
> 
>      Thanks
> 
>       Barry
> 
>    Do the compilers work on the compute nodes of Theta?
> 
> 
> 
> > On Jun 25, 2018, at 12:03 PM, Zhang, Hong <hongzhang at anl.gov> wrote:
> > 
> > Yes, it is possible. I have run the test harness on Cori by submitting the following script:
> > 
> > #!/bin/bash -l
> > 
> > #SBATCH -N 1                # Use 1 node
> > #SBATCH -t 02:00:00         # Set the time limit
> > #SBATCH -p regular          # Submit to the regular 'partition'
> > #SBATCH -C knl,quad,cache   # Use KNL nodes
> > 
> > make PETSC_ARCH=arch-cori-avx512-opt MPIEXEC='srun -c 4 --cpu_bind=cores' -f gmakefile test
> > 
> > The most important thing is probably setting MPIEXEC according to the system. Note that there are often limits on the number of nodes on large machines; for example, Theta requires a minimum of 128 nodes.
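> > 
> > [For comparison, on a Cray ALPS system such as Theta the same idea
> > might look roughly like the line below - an untested sketch; the
> > aprun options and the arch name arch-theta-opt are assumptions, not
> > something that has been run:]
> > 
> > # arch-theta-opt is a placeholder PETSC_ARCH; aprun flags untested
> > make PETSC_ARCH=arch-theta-opt MPIEXEC='aprun -cc depth -d 4 -j 1' -f gmakefile test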
> > 
> > Hong (Mr.) 
> > 
> > 
> >> On Jun 22, 2018, at 11:41 AM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> >> 
> >> 
> >>  Has anyone run the entire test harness under a batch system? Is this possible, and does it require specific commands that should be documented in the users manual?
> >> 
> >>   Thanks
> >> 
> >>      Barry
> >> 
> > 
> 


