
Slurm queues

 


Slurm queues/partitions for batch job submissions to the LOTUS & ORCHID clusters


Queue names  

The Slurm queues in the LOTUS cluster are:

  • standard
  • debug

Each queue has attributes of run-length limits (e.g. short, long) and resources. A full breakdown of each queue and its associated resources, such as run time limits and memory limits, is shown in Table 1 below.

Queue details  

Queues represent a set of pending jobs, lined up in a defined order and waiting for their opportunity to use resources. The queue is specified in the job script file using a Slurm scheduler directive like this:

#SBATCH -p <queue_name>

where <queue_name> is the name of the queue/partition (Table 1, column 1).

Table 1: LOTUS/Slurm queues and their specifications

Queue name   Max run time   Default run time   Default memory per CPU
standard     24 hrs         1 hr               1 GB
debug        1 hr           30 mins            1 GB

Note 1: Resources requested by a job must be within the resource allocation limits of the selected queue.

Note 2: If your job exceeds its run time limit, it will be terminated by the Slurm scheduler.
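For illustration, a minimal job script targeting the debug queue might look like the sketch below (the job name, time limit and command are placeholders; depending on your project you may also need the --qos and --account directives described later on this page):

#!/bin/bash
#SBATCH -p debug              # queue/partition from Table 1
#SBATCH --job-name=test-job   # illustrative job name
#SBATCH --time=00:10:00       # request 10 minutes, within the 1 hr queue limit
#SBATCH -o %j.out             # write output to <jobid>.out

echo "Hello from $(hostname)"

Submitting this with sbatch test-job.sh places the job on the debug partition.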

State of queues  

The Slurm command sinfo reports the state of queues and nodes managed by Slurm. It has a wide variety of filtering, sorting, and formatting options.

sinfo
PARTITION AVAIL  TIMELIMIT  NODES STATE NODELIST
...
standard*    up 1-00:00:00    262  idle host[1004-1276]
debug*       up    1:00:00      3  idle host[1001-1003]
...
 
Queues other than standard and debug should be ignored, as they implement different job scheduling and control policies.

sinfo output field description  

By default, the Slurm command sinfo displays the following information:

  • PARTITION: Partition name followed by * for the default queue/partition.
  • AVAIL: State/availability of a queue/partition: up or down.
  • TIMELIMIT: The maximum run time limit per job in each queue/partition is shown in days-hours:minutes:seconds, e.g. 2-00:00:00 is two days maximum runtime limit.
  • NODES: Count of nodes with this particular configuration e.g. 48 nodes.
  • STATE: State of the nodes. Possible states include: allocated, down, drained, and idle. For example, the state idle means that the node is not allocated to any jobs and is available for use.
  • NODELIST: List of node names associated with this queue/partition.

The sinfo example below reports more complete information about the debug partition/queue:

sinfo --long --partition=debug
PARTITION AVAIL TIMELIMIT   JOB_SIZE ROOT OVERSUBS GROUPS  NODES STATE RESERVATION NODELIST
debug        up   1:00:00 1-infinite   no       NO    all      3  idle             host[1001-1003]
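The output can also be tailored with sinfo's --format option. For example, the following sketch (field specifiers as documented for sinfo) reproduces the default columns for the standard partition only:

sinfo --partition=standard --format="%P %a %l %D %t %N"

Here %P is the partition, %a its availability, %l the time limit, %D the node count, %t the node state and %N the node list.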

Queues and QoS  

Queues/partitions are further divided into Qualities of Service (QoS), which impose further restrictions on your job, for example how long it can run or how many CPU cores it can use.

Different partitions on LOTUS have different allowed QoS as shown below:

Partition   Allowed QoS
standard    standard, short, long, high
debug       debug

A summary of the different QoS is given below:

QoS        Priority   Max CPUs per job   Max wall time
standard   500        1                  24 hours
short      550        1                  4 hours
long       350        1                  5 days
high       450        96                 2 days
debug      500        8                  1 hour

Once you have chosen the partition and QoS you need, specify the partition in the --partition directive and the QoS in the --qos directive in your job script, as in the sketch below.
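For example, a single-CPU job expected to finish within four hours could request the short QoS on the standard partition. The directives below are a sketch with an illustrative time limit:

#SBATCH --partition=standard
#SBATCH --qos=short
#SBATCH --time=02:00:00   # must be within the 4 hour limit of the short QoS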

How to choose a QoS  

Debug QoS  

The debug QoS can be used to test new workflows and to help new users familiarise themselves with the Slurm batch system. Use this QoS when you are unsure of a job's resource requirements or runtime behaviour: it has a confined set of LOTUS nodes that is not shared with the other standard LOTUS queues.

QoS     Priority   Max CPUs per job   Max wall time   Max jobs per user
debug   500        8                  1 hour          32

Standard QoS  

The standard QoS is the most common QoS to use, with a maximum of a single CPU per job and a runtime under 24 hours.

QoS        Priority   Max CPUs per job   Max wall time   Max jobs per user
standard   500        1                  24 hours        4000

Short QoS  

The short QoS is for shorter jobs (under 4 hours) and has a maximum of a single CPU per job.

QoS     Priority   Max CPUs per job   Max wall time   Max jobs per user
short   550        1                  4 hours         2000

Long QoS  

The long QoS is for jobs that will take longer than 24 hours but will have a lower priority than standard. It also has a maximum of a single CPU per job.

QoS    Priority   Max CPUs per job   Max wall time   Max jobs per user
long   350        1                  5 days          1350

High QoS  

The high QoS is for jobs with larger resource requirements, for example more CPUs per job or more memory.

QoS    Priority   Max CPUs per job   Max wall time
high   450        96                 2 days

New Slurm job accounting hierarchy  

Slurm accounting by project has been introduced as a means of monitoring compute usage by projects on JASMIN. These projects align with group workspaces (GWSs), and you will automatically be added to Slurm accounts corresponding to any GWS projects that you belong to.

To find which Slurm accounts and Qualities of Service (QoS) you have access to, use the useraccounts command on any sci machine. The output should be similar to one or more of the lines below.

useraccounts
# sacctmgr show user fred withassoc format=user,account,qos%-50
User       Account        QOS
---------- -------------- -------------------------------------
      fred  mygws         debug,high,long,short,standard
      fred  orchid        debug,high,long,short,standard

You should use the relevant account for your project’s task with the --account directive in your job script.

Users who do not belong to any group workspaces will be assigned the no-project account and should use that in their job submissions. Please ignore and do not use the group shobu.
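Putting this together, the Slurm directives for a job charged to a group workspace account might look like the sketch below (mygws is the illustrative account from the output above; substitute your own account, or no-project if you belong to no group workspace):

#SBATCH --account=mygws
#SBATCH --partition=standard
#SBATCH --qos=standard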
