Slurm quick reference

Slurm is the job scheduler deployed on JASMIN. It allows users to submit, monitor, and control jobs on the LOTUS (CPU) and ORCHID (GPU) clusters. This article lists common Slurm commands and directives alongside their equivalents in the LSF scheduler, which Slurm replaced in September 2020: this webinar gave details of the transition.
LSF | Slurm | Description |
---|---|---|
bsub < script_file | sbatch script_file | Submit a job script to the scheduler |
bqueues | sinfo | Show available scheduling queues |
bjobs | squeue -u <username> | List the user's pending and running jobs |
bsub -n 1 -q test -Is /bin/bash | srun -n 1 -p test --pty /bin/bash | Request an interactive session on LOTUS |
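The commands above combine into a typical submission workflow. A minimal sketch, assuming a job script named my_job.sh and a partition called test (both placeholders, not site defaults):

```shell
# Submit a job script; sbatch prints the assigned job ID
sbatch my_job.sh

# Show available partitions (queues) and their state
sinfo

# List your own pending and running jobs
squeue -u "$USER"

# Request a one-core interactive session on the (assumed) test partition
srun -n 1 -p test --pty /bin/bash
```

These commands must be run on a host with Slurm client tools configured for the cluster.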
LSF | Slurm | Description |
---|---|---|
#BSUB | #SBATCH | Scheduler directive |
-q queue_name | --partition=queue_name or -p queue_name | Specify the scheduling queue |
-W hh:mm:ss | --time=hh:mm:ss or -t hh:mm:ss | Set the maximum runtime limit |
-We hh:mm:ss | --time-min=hh:mm:ss | Set an estimated runtime |
-J job_name | --job-name=jobname | Specify a name for the job |
-o filename, -e filename | --output=filename or -o filename, --error=filename or -e filename. The default file name is "slurm-%j.out", where "%j" is replaced by the job ID. For job arrays, the default file name is "slurm-%A_%a.out", where "%A" is replaced by the job ID and "%a" by the array index. | Standard output and error files. Output is appended by default. |
-oo filename, -eo filename | --open-mode=append or --open-mode=truncate | Write mode for output/error files |
%J | %j | Job ID placeholder in output/error file names |
%I | %a | Job array index placeholder |
-R "rusage[mem=XXX]" | --mem=XXX | Request XXX memory for the job. Default units are megabytes |
-J job_name[index_list] | --array=index_list (e.g. --array=1-10) | Specify a job array |
-J job_name[index_list]%number-of-simultaneous-jobs | --array=index_list%ArrayTaskThrottle (e.g. --array=1-15%4 limits the number of simultaneously running tasks from this job array to 4) | A maximum number of simultaneously running tasks from the job array may be specified using a "%" separator |
-cwd directory | --chdir=directory or -D directory | Set the working directory of the batch script to directory before it is executed |
-x | --exclusive | Exclusive execution mode |
-w 'dependency_expression' | --dependency=<dependency_list> | Defer the start of this job until the specified dependencies have been satisfied |
-n number-of-cores | --ntasks=number-of-cores or -n number-of-cores | Number of CPU cores |
-m host-group-name | --constraint="<host-group-name>" | Select a node with a specific processor model |
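Putting several of the directives above together, a minimal Slurm job script might look like the following. The partition name, runtime, memory, and job name are illustrative assumptions, not site defaults:

```shell
#!/bin/bash
#SBATCH --partition=test        # scheduling queue (LSF: -q)
#SBATCH --job-name=example      # job name (LSF: -J)
#SBATCH --time=00:10:00         # maximum runtime limit (LSF: -W)
#SBATCH --mem=1000              # memory in megabytes (LSF: -R "rusage[mem=...]")
#SBATCH --ntasks=1              # number of CPU cores (LSF: -n)
#SBATCH --output=%j.out         # standard output, %j replaced by the job ID (LSF: -o)
#SBATCH --error=%j.err          # standard error (LSF: -e)

# Commands to run follow the directives
echo "Running on $(hostname)"
```

Submit it with sbatch script_file: the #SBATCH lines are read by the scheduler and treated as comments by the shell.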
LSF | Slurm | Description |
---|---|---|
bkill <jobid> | scancel <jobid> | Kill a job |
bjobs -l <jobid> | scontrol show job <jobid> | Show detailed job information |
bmod <jobid> | scontrol update job <jobid> | Modify a pending job |
bkill 0 | scancel --user=<username> | Kill all jobs owned by a user |
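A short session using the monitoring commands above, with the job ID 12345 as a placeholder (the new time limit is also illustrative):

```shell
scontrol show job 12345                        # show detailed job information
scontrol update job 12345 TimeLimit=01:00:00   # modify a pending job
scancel 12345                                  # kill a single job
scancel --user="$USER"                         # kill all of your own jobs
```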
LSF | Slurm | Description |
---|---|---|
$LSB_JOBID | $SLURM_JOBID | Job identifier number |
$LSB_JOBID | $SLURM_ARRAY_JOB_ID | Job array's master job ID |
$LSB_JOBINDEX | $SLURM_ARRAY_TASK_ID | Job array index |
$LSB_JOBINDEX_END | $SLURM_ARRAY_TASK_MAX | Last index number within a job array |
$LSB_MAX_NUM_PROCESSORS | $SLURM_NTASKS | Number of processors allocated |
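Inside a job script, these variables can be used to label output per task. A hypothetical job-array sketch (the array range is an assumption):

```shell
#!/bin/bash
#SBATCH --array=1-10

# Each array task shares SLURM_ARRAY_JOB_ID but has its own SLURM_ARRAY_TASK_ID
echo "Array job ${SLURM_ARRAY_JOB_ID}: task ${SLURM_ARRAY_TASK_ID} of ${SLURM_ARRAY_TASK_MAX}"
echo "This task's own job ID is ${SLURM_JOBID}, with ${SLURM_NTASKS:-1} task(s) allocated"
```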