HP XC System 2.x Software User Manual

Figure 7-1: How LSF-HPC and SLURM Launch and Manage a Job

[Figure 7-1 depicts the numbered flow: a user on login node n16 submits
$ bsub -n4 -ext "SLURM[nodes=4]" -o output.out ./myscript; the LSF execution host
lsfhost.localdomain obtains a SLURM allocation (SLURM_JOBID=53, SLURM_NPROCS=4)
and starts the job through job_starter.sh ($ srun -n1 myscript); myscript then runs
hostname, srun hostname, and mpirun -srun ./hellompi on compute nodes n1 through n4.]
1. A user logs in to login node n16.
2. The user executes the following LSF bsub command on login node n16:
$ bsub -n4 -ext "SLURM[nodes=4]" -o output.out ./myscript
This bsub command launches a request for four CPUs (from the -n4 option of the bsub
command) across four nodes (from the -ext "SLURM[nodes=4]" option); the job is
launched on those CPUs. The script, myscript, which is shown here, runs the job:
#!/bin/sh
hostname
srun hostname
mpirun -srun ./hellompi
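For reference, here is myscript again with comments sketching what each command does
under this four-node allocation; the comments are annotations, not part of the script:
#!/bin/sh
# Runs once on the node where LSF-HPC starts the script
# and prints that node's hostname.
hostname
# srun runs hostname as one task per allocated processor; with this
# allocation it prints the names of all four compute nodes (n1 through n4).
srun hostname
# mpirun -srun launches the MPI program hellompi across
# the same SLURM allocation.
mpirun -srun ./hellompi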
3. LSF-HPC schedules the job and monitors the state of the resources (compute nodes) in
the SLURM lsf partition. When the LSF-HPC scheduler determines that the required
resources are available, LSF-HPC allocates those resources in SLURM and obtains a
SLURM job identifier (jobID) that corresponds to the allocation.
In this example, four processors spread over four nodes (n1, n2, n3, n4) are allocated for
myscript, and the SLURM job ID of 53 is assigned to the allocation.
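As Figure 7-1 shows, the allocation is exposed to the job through the SLURM_JOBID and
SLURM_NPROCS environment variables (53 and 4 in this example). A minimal sketch, assuming
only these two variables, of how a job script could report its allocation:
#!/bin/sh
# Report the SLURM allocation that LSF-HPC obtained for this job.
# In the Figure 7-1 example, SLURM_JOBID is 53 and SLURM_NPROCS is 4.
echo "SLURM job ID: ${SLURM_JOBID}"
echo "Number of processes: ${SLURM_NPROCS}"
# srun commands inside the job inherit this allocation automatically,
# so no job ID needs to be passed explicitly.
srun hostname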