[petsc-users] debugging with no X11 available

Aron Ahmadia aron.ahmadia at kaust.edu.sa
Mon Dec 12 07:44:28 CST 2011


Dear Dominik,

One trick for getting around this that works on LoadLeveler (and I suspect
SLURM) is running:

xterm

instead of the usual "mpirun" when your batch script gets executed. As
long as the scheduler's batch script inherits your X11 environment and
runs on the login node, you will then have an xterm with full access to
mpirun, ssh, etc.
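
For concreteness, a minimal sketch of such a batch script (assuming
SLURM; the directives, and whether $DISPLAY is actually inherited, are
site-dependent assumptions):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --time=01:00:00
    # Launch an xterm instead of mpirun; if the script runs on the
    # login node and inherits your DISPLAY, the xterm pops up there
    # and gives you an interactive shell inside the allocation.
    xterm
    # From inside that xterm you can then run, e.g.:
    #   mpirun -n 4 ./ex2 -start_in_debugger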

Good luck,
Aron

On Mon, Dec 12, 2011 at 4:35 PM, Dominik Szczerba <dominik at itis.ethz.ch> wrote:

> Dear all,
>
> Many thanks for your suggestions, but there is one fundamental
> obstacle: there is no mpiexec available at all; everything must go
> through a scheduler.
>
> I will look into what Satish proposed, i.e. first getting hold of known
> nodes. But I am afraid this may fail, because those nodes run a very
> limited system.
>
> Regards,
> Dominik
>
> On Mon, Dec 12, 2011 at 2:21 PM, Satish Balay <balay at mcs.anl.gov> wrote:
> > On Mon, 12 Dec 2011, Matthew Knepley wrote:
> >
> >> On Mon, Dec 12, 2011 at 7:11 AM, Dominik Szczerba <dominik at itis.ethz.ch> wrote:
> >>
> >> > Thanks for your answers. Meanwhile I have clarified the situation a bit:
> >> >
> >> > I can bring up xterm manually from the command line, but the job is
> >> > run through a scheduler (SLURM). It then gets executed on arbitrary
> >> > nodes (running some stripped-down Linux) which apparently cannot
> >> > make X11 connections.
> >> >
> >>
> >> Sometimes you can set the environment on the compute nodes and get
> >> DISPLAY right.
> >
> >
> > With compute nodes it's not easy. Even if they have X11 installed, you
> > might have to do several things:
> >
> > 1. Allocate nodes.
> > 2. Create ssh X11 tunnels to each node that is allocated [and hope each
> > ends up with the same localhost:10 value].
> > 3. Start up the parallel job with this display:
> >
> > mpiexec -n 4 ./ex2 -start_in_debugger -display localhost:10
> >
> > Satish
> >
>
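
For reference, a rough shell sketch of Satish's three steps above (the
salloc invocation, node names, and display number are illustrative
assumptions, not a tested recipe):

    # 1. Allocate nodes (here via SLURM).
    salloc -N 4

    # 2. Open an X11-forwarded ssh session to each allocated node in the
    #    background, hoping each one is assigned the same DISPLAY value
    #    (e.g. localhost:10).
    for node in node01 node02 node03 node04; do
        ssh -X -f "$node" sleep 3600
    done

    # 3. Start the parallel job, pointing the debugger xterms at that display.
    mpiexec -n 4 ./ex2 -start_in_debugger -display localhost:10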