Slurm quick reference

 

Slurm commands and environment variables


Slurm Scheduler  

Slurm is the job scheduler deployed on JASMIN. It allows users to submit, monitor, and control jobs on the LOTUS (CPU) and ORCHID (GPU) clusters.

Essential Slurm commands  

| Slurm command | Description |
| --- | --- |
| sbatch my_batch_script.sh | Submit a job script to the scheduler |
| sinfo | Show available scheduling queues |
| squeue -u <username> | List user's pending and running jobs |
| salloc -p debug -q debug -A mygws | Request an interactive session on LOTUS |
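
Taken together, a typical interaction looks like the sketch below. The account, partition and QoS names (mygws, debug) are placeholders taken from the table above; replace them with values your project is entitled to use.

```bash
# Submit a job script; sbatch prints the job ID, e.g. "Submitted batch job 1234567"
sbatch my_batch_script.sh

# Check your own pending and running jobs
squeue -u $USER

# Request a short interactive session (replace account/partition/QoS as appropriate)
salloc -p debug -q debug -A mygws
```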

Job specification  

Where an argument has both a long and a short form, the two are separated by a comma below. A minimal example script combining several of these directives is sketched at the end of this section.

#SBATCH  
  • Scheduler directive - goes in front of the arguments below in a job script file
  • An example Slurm job script file is available here
--account=GWS_NAME, -A GWS_NAME  
  • Specify which project’s account to log the compute with by replacing GWS_NAME
  • To choose the right one, please read about the new Slurm job accounting by project
--partition=PARTITION_NAME, -p PARTITION_NAME  
  • Specify the scheduling partition by replacing PARTITION_NAME
  • See the list of partitions that you can use
--qos=QOS_NAME, -q QOS_NAME  
  • Specify what Quality of Service your task needs by replacing QOS_NAME
  • See the list of QoS that you can use
--time=hh:mm:ss, -t hh:mm:ss  
  • Set the maximum runtime limit by replacing hh:mm:ss
--time-min=hh:mm:ss  
  • Set an estimated runtime by replacing hh:mm:ss
--job-name=JOB_NAME  
  • Specify a name for the job by replacing JOB_NAME
--output=FILE_NAME, -o FILE_NAME  
  • Standard job output - where your program prints to normally (stdout)
  • Defaults: appends to the file and file name is slurm-%j.out, where %j is replaced by the job ID
--error=FILE_NAME, -e FILE_NAME  
  • Standard error output - where your program prints to if an error occurs (stderr)
  • Defaults: appends to the file and file name is slurm-%j.err, where %j is replaced by the job ID
--open-mode=append|truncate  
  • Write mode for error/output files
  • Pick either append or truncate
--mem=XXX  
  • Specify that XXX memory is required for the job. Default units are megabytes (e.g. --mem=250 means 250MB) but you can specify the unit, e.g. --mem=5G for 5 GB.
--array=INDEX  
  • Specify a job array, e.g. --array=1-10 - for an example submission script, see this page
  • The default standard output file name is slurm-%A_%a.out, where %A is replaced by the job ID and %a with the array index
  • To change this, use --output and --error as above with %A and %a instead of %j
--array=INDEX%ArrayTaskThrottle  
  • A maximum number of simultaneously running tasks from the job array may be specified using a % separator
  • For example, --array=1-15%4 will limit the number of simultaneously running tasks from this job array to 4
--chdir=DIRECTORY, -D DIRECTORY  
  • Set the working directory of the batch script to DIRECTORY before it is executed
--exclusive  
  • Exclusive execution mode
--dependency=<dependency_list>  
  • Defer the start of this job until the specified dependencies have been satisfied as completed
  • See the Slurm documentation  for examples
--ntasks=NUMBER_OF_CORES, -n NUMBER_OF_CORES  
  • Number of CPU cores
--constraint=HOST_GROUP_NAME  
  • To select a node with a specific processor model
  • A list of host groups that you can use is available here
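
As a rough illustration of how these directives fit together, here is a minimal job script sketch. The account, partition and QoS names (mygws, standard) are placeholders; substitute the ones your project is allowed to use (see the accounting, partition and QoS pages linked above).

```bash
#!/bin/bash
#SBATCH --job-name=my_analysis        # name shown by squeue
#SBATCH --account=mygws               # placeholder: your project's Slurm account
#SBATCH --partition=standard          # placeholder: a partition you may use
#SBATCH --qos=standard                # placeholder: a QoS you may use
#SBATCH --time=00:30:00               # maximum runtime limit (hh:mm:ss)
#SBATCH --mem=1G                      # memory required for the job
#SBATCH --ntasks=1                    # number of CPU cores
#SBATCH -o %j.out                     # stdout file; %j is replaced by the job ID
#SBATCH -e %j.err                     # stderr file

# Commands to run go here
echo "Running on $(hostname)"
```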

Job control commands  

| Slurm command | Description |
| --- | --- |
| scancel <jobid> | Kill a job |
| scontrol show job <jobid> | Show detailed job information |
| scontrol update job <jobid> | Modify a pending job |
| scancel --user=<username> | Kill all jobs owned by a user |
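
For example, to inspect why a job is still pending and then adjust it while it waits (the job ID 1234567 is a placeholder; which fields you can change on your own pending jobs is limited by the scheduler):

```bash
scontrol show job 1234567                         # full details, including the pending Reason
scontrol update job 1234567 TimeLimit=00:30:00    # reduce the requested runtime of a pending job
scancel 1234567                                   # kill the job if it is no longer needed
```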

Job environment variables  

| Slurm variable | Description |
| --- | --- |
| $SLURM_JOBID | Job identifier number |
| $SLURM_ARRAY_JOB_ID | Job array's master job ID |
| $SLURM_ARRAY_TASK_ID | Job array index of this task |
| $SLURM_ARRAY_TASK_MAX | Last index number within the job array |
| $SLURM_NTASKS | Number of processors allocated |
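
These variables are mainly useful inside job scripts, particularly job arrays, where each task can select its own input from the array index. A minimal sketch (the script and file names are hypothetical, and the usual account/partition/QoS directives are omitted for brevity):

```bash
#!/bin/bash
#SBATCH --array=1-10
#SBATCH -o slurm-%A_%a.out    # %A = array job ID, %a = array task index

# Each array task reports its identity and processes its own input file
echo "Array job ${SLURM_ARRAY_JOB_ID}, task ${SLURM_ARRAY_TASK_ID} of ${SLURM_ARRAY_TASK_MAX}"
python process.py "input_${SLURM_ARRAY_TASK_ID}.nc"    # hypothetical script and file names
```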