LSF to SLURM quick reference

This article shows the SLURM commands equivalent to common LSF commands. Note: the recording and PPT presentation of the first webinar on transitioning from LSF to SLURM are now available at

The SLURM Scheduler

SLURM (originally an acronym for Simple Linux Utility for Resource Management) is the job scheduler deployed on JASMIN. It allows users to submit, monitor, and control jobs on the CentOS 7 LOTUS sub-cluster.

Table 1 Essential LSF/SLURM commands

| LSF | SLURM | Description |
|-----|-------|-------------|
| bsub < script_file | sbatch script_file | Submit a job script to the scheduler |
| bqueues | sinfo | Show available scheduling queues |
| bjobs | squeue | List the user's pending and running jobs |
| bsub -n 1 -q test -Is /bin/bash | srun -n 1 -p test --pty /bin/bash | Request an interactive session on LOTUS |
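
For example, the submission commands above translate directly; the script name my_job.sh is illustrative, and the "test" queue/partition is taken from the table:

```shell
# LSF submits a job script read from standard input:
#   bsub < my_job.sh
# SLURM's sbatch takes the script name as an argument instead:
sbatch my_job.sh

# LSF interactive session on the "test" queue:
#   bsub -n 1 -q test -Is /bin/bash
# SLURM equivalent; note that -p (partition) replaces -q (queue):
srun -n 1 -p test --pty /bin/bash
```

One practical difference to remember: sbatch does not read the script from standard input the way bsub does, so drop the "<" redirection when converting submission commands.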

Table 2 Job specification 

| LSF | SLURM | Description |
|-----|-------|-------------|
| #BSUB | #SBATCH | Scheduler directive |
| -q queue_name | -p queue_name | Specify the scheduling queue |
| -W hh:mm:ss | --time=hh:mm:ss or -t hh:mm:ss | Set the maximum runtime limit |
| -We hh:mm:ss | --time-min=hh:mm:ss | Set an estimated runtime |
| -J job_name | --job-name=jobname | Specify a name for the job |
| -o filename, -e filename (append by default); -oo filename, -eo filename (overwrite) | --output=filename or -o filename; --error=filename or -e filename | Standard job output and error output. In SLURM the default file name is "slurm-%j.out", where "%j" is replaced by the job ID. For job arrays the default file name is "slurm-%A_%a.out", where "%A" is replaced by the job ID and "%a" by the array index. |
| -R "rusage[mem=XXX]" | --mem=XXX | Memory XXX required for the job. Default units are megabytes |
| -J job_name[index_list] | --array=index_list (e.g. --array=1-10) | Specify a job array |
| -J job_name[index_list]%number-of-simultaneous-jobs | --array=index_list%ArrayTaskThrottle (e.g. --array=1-15%4 limits the number of simultaneously running tasks from this job array to 4) | Limit the number of simultaneously running tasks from the job array, specified with a "%" separator |
| -cwd directory | -D directory or --chdir=directory | Set the working directory of the job |
| -x | --exclusive | Exclusive execution mode |
| -P project | -A account-name or --account=account-name | Charge resources used by this job to the specified account or project. Note: the account has to be defined by the SLURM administrator and the user assigned to the account. |
| -n number-of-cores | --ntasks=number-of-cores or -n number-of-cores | Number of CPU cores |
| -m <host-group-name> | --constraint="<host-group-name>" | Select a node with a specific processor model and memory |
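
Putting the Table 2 directives together, a minimal SLURM job script might look like the sketch below; the queue name, job name, resource values, and command are illustrative, with the corresponding LSF directive shown in each comment:

```shell
#!/bin/bash
#SBATCH --partition=test          # was: #BSUB -q test
#SBATCH --time=01:00:00           # was: #BSUB -W 01:00:00
#SBATCH --job-name=myjob          # was: #BSUB -J myjob
#SBATCH -o %j.out                 # was: #BSUB -o output.%J
#SBATCH -e %j.err                 # was: #BSUB -e error.%J
#SBATCH --ntasks=4                # was: #BSUB -n 4
#SBATCH --mem=2000                # was: #BSUB -R "rusage[mem=2000]"

# The job's commands follow the directives, exactly as in an LSF script
echo "Running with $SLURM_NTASKS tasks"
```

Note that SLURM's filename pattern uses a lowercase "%j" for the job ID, whereas LSF uses an uppercase "%J".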

Table 3 Job control commands 

| LSF | SLURM | Description |
|-----|-------|-------------|
| bkill jobid | scancel jobid | Kill a job |
| bjobs -l jobid | scontrol show job jobid | Show detailed job information |
| bmod jobid | scontrol update job jobid | Modify a pending job |
| bkill 0 | scancel --user=username | Kill all jobs owned by a user |
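
As a concrete example, scontrol update modifies attributes of a pending job by naming them on the command line; the job ID 1234 and the new time limit are illustrative:

```shell
# Inspect a job in detail (LSF: bjobs -l 1234)
scontrol show job 1234

# Modify a pending job (LSF: bmod), e.g. shorten its time limit
scontrol update JobId=1234 TimeLimit=00:30:00

# Cancel the job (LSF: bkill 1234)
scancel 1234
```
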

Table 4 Job environment variables

| LSF | SLURM | Description |
|-----|-------|-------------|
| $LSB_JOBID | $SLURM_JOBID | Job identifier number |
| $LSB_JOBINDEX_END | $SLURM_ARRAY_TASK_MAX | Last index number within a job array |
| $LSB_MAX_NUM_PROCESSORS | $SLURM_NTASKS | Number of processors allocated |
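
These variables are set by SLURM at run time and can be used directly inside a job script. A sketch of an array task reporting its allocation follows; $SLURM_ARRAY_TASK_ID (the per-task counterpart of LSF's $LSB_JOBINDEX) is not in the table above but is included here for completeness:

```shell
#!/bin/bash
#SBATCH --array=1-10

# $SLURM_ARRAY_TASK_ID  - this task's index within the array
# $SLURM_ARRAY_TASK_MAX - last index in the array (LSF: $LSB_JOBINDEX_END)
echo "Job $SLURM_JOBID: task $SLURM_ARRAY_TASK_ID of $SLURM_ARRAY_TASK_MAX"
echo "Allocated $SLURM_NTASKS processors"
```
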