SLURM quick reference

This article lists the most common SLURM commands alongside their LSF equivalents (LSF was replaced by SLURM on JASMIN in September 2020). Note: the recording and PowerPoint presentation of the first webinar on transitioning from LSF to SLURM are also available.

The SLURM Scheduler

SLURM (originally the Simple Linux Utility for Resource Management) is the job scheduler deployed on JASMIN. It allows users to submit, monitor, and control jobs on the CentOS7 LOTUS sub-cluster.

Table 1 Essential LSF/SLURM commands

LSF | SLURM | Description
bsub < script_file | sbatch script_file | Submit a job script to the scheduler
bqueues | sinfo | Show available scheduling queues
bjobs | squeue -u <username> | List user's pending and running jobs
bsub -n 1 -q test -Is /bin/bash | srun -n 1 -p test --pty /bin/bash | Request an interactive session on LOTUS
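For example, the basic submit-and-monitor workflow looks like this (a minimal sketch; the script name my_job.sh and the use of the test partition are illustrative assumptions):

    # Submit a batch script to the scheduler (assumed script name)
    sbatch my_job.sh

    # Show the available partitions (queues) and their state
    sinfo

    # List your own pending and running jobs
    squeue -u <username>

    # Request a single-core interactive session on LOTUS in the test partition
    srun -n 1 -p test --pty /bin/bash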

Table 2 Job specification 

LSF | SLURM | Description
#BSUB | #SBATCH | Scheduler directive
-q queue_name | --partition=queue_name or -p queue_name | Specify the scheduling queue
-W hh:mm:ss | --time=hh:mm:ss or -t hh:mm:ss | Set the maximum runtime limit
-We hh:mm:ss | --time-min=hh:mm:ss | Set an estimated runtime
-J job_name | --job-name=jobname | Specify a name for the job
-o filename, -e filename | --output=filename or -o filename, --error=filename or -e filename | Standard job output and error output. The default file name is "slurm-%j.out", where "%j" is replaced by the job ID. For job arrays, the default file name is "slurm-%A_%a.out", where "%A" is replaced by the job ID and "%a" by the array index.
-oo/-eo filename | --open-mode=append or truncate | Overwrite (truncate) or append to the job output/error files. Default is append.
%J | %j | Job ID placeholder in the -oo/-eo or --output/--error file name
%I | %a | Job array index placeholder in the output/error file name
-R "rusage[mem=XXX]" | --mem=XXX | Memory XXX is required for the job. Default units are megabytes
-J job_name[index_list] | --array=index_list (e.g. --array=1-10) | Specify a job array
-J job_name[index_list]%number-of-simultaneous-jobs | --array=index_list%ArrayTaskThrottle (e.g. --array=1-15%4 limits the number of simultaneously running tasks from this job array to 4) | Limit the maximum number of simultaneously running tasks from the job array using a "%" separator
-cwd directory | -D directory or --chdir=directory | Set the working directory of the batch script to directory before it is executed
-x | --exclusive | Exclusive execution mode
-w 'dependency_expression' | --dependency=<dependency_list> | Defer the start of this job until the specified dependencies have been satisfied
-n number-of-cores | --ntasks=number-of-cores or -n number-of-cores | Number of CPU cores
-m <host-group-name> | --constraint="<host-group-name>" | Select a node with a specific processor model
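Putting several of these directives together, a batch script might look like the following sketch (the partition name, job name, runtime, memory, and core count are illustrative assumptions to be adapted to your own workload):

    #!/bin/bash
    #SBATCH --partition=test          # scheduling queue (assumed partition name)
    #SBATCH --job-name=example_job    # job name (illustrative)
    #SBATCH --time=00:30:00           # maximum runtime limit (hh:mm:ss)
    #SBATCH --time-min=00:10:00       # estimated runtime
    #SBATCH --ntasks=1                # number of CPU cores
    #SBATCH --mem=1000                # memory in megabytes
    #SBATCH --output=%j.out           # standard output file, %j = job ID
    #SBATCH --error=%j.err            # standard error file

    # Commands to run go here, for example:
    echo "Running on $(hostname)"

Submit the script with sbatch script_file, as shown in Table 1.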

Table 3 Job control commands 

LSF | SLURM | Description
bkill <jobid> | scancel <jobid> | Kill a job
bjobs -l <jobid> | scontrol show job <jobid> | Show detailed job information
bmod <jobid> | scontrol update job <jobid> | Modify a pending job
bkill 0 | scancel --user=<username> | Kill all jobs owned by a user
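For example, assuming a job with ID 12345 (the job ID, attribute, and new value below are illustrative):

    # Kill a single job
    scancel 12345

    # Show detailed information about a job
    scontrol show job 12345

    # Modify a pending job, e.g. change its time limit
    scontrol update JobId=12345 TimeLimit=01:00:00

    # Kill all jobs owned by a user
    scancel --user=<username>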

Table 4 Job environment variables

LSF | SLURM | Description
$LSB_JOBID | $SLURM_JOBID | Job identifier number
$LSB_JOBID | $SLURM_ARRAY_JOB_ID | Job array identifier
$LSB_JOBINDEX | $SLURM_ARRAY_TASK_ID | Job array index
$LSB_JOBINDEX_END | $SLURM_ARRAY_TASK_MAX | Last index number within a job array
$LSB_MAX_NUM_PROCESSORS | $SLURM_NTASKS | Number of processors allocated
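As a sketch of how these variables are typically used, the job-array script below echoes them for each task (the partition name and array range are illustrative assumptions):

    #!/bin/bash
    #SBATCH --partition=test        # assumed partition name
    #SBATCH --ntasks=1              # one core per array task
    #SBATCH --array=1-5             # illustrative index range
    #SBATCH --output=%A_%a.out      # %A = array job ID, %a = array task index

    # Each array task sees the shared array job ID plus its own index
    echo "Job ID:               $SLURM_JOBID"
    echo "Array job ID:         $SLURM_ARRAY_JOB_ID"
    echo "Array task index:     $SLURM_ARRAY_TASK_ID"
    echo "Last array index:     $SLURM_ARRAY_TASK_MAX"
    echo "Processors allocated: $SLURM_NTASKS"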
