How to submit an MPI parallel job

 


Submitting MPI parallel Slurm jobs

Not yet reviewed for compatibility with the new cluster (April 2025). Please adapt using the new submission instructions. This alert will be removed once the page has been updated.

What is an MPI parallel job?  

An MPI parallel job runs on more than one core, and potentially on more than one host, using the Message Passing Interface (MPI) library for communication between the cores. A simple script, such as the one given below, could be saved as my_script_name.sbatch:

#!/bin/bash
#SBATCH --job-name="My MPI job"
#SBATCH --time=00:30:00
#SBATCH --mem=100M
#SBATCH --account=mygws
#SBATCH --partition=par-multi
#SBATCH --qos=debug
#SBATCH -n 36
#SBATCH -o %j.log
#SBATCH -e %j.err

# Load a module for the gcc OpenMPI library  (needed for mpi_myname.exe)
module load eb/OpenMPI/gcc/4.0.0

# Start the job running using OpenMPI's "mpirun" job launcher
mpirun ./mpi_myname.exe

-n specifies the number of tasks (cores) you wish to run on. The remaining #SBATCH options shown above, and many more besides, are described in the sbatch manual page and in the related articles. The par-multi partition must only be used for parallel jobs using MPI.
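
For example, further options and the limits of the par-multi partition can be looked up with the standard commands:

man sbatch                          # manual page listing all #SBATCH / sbatch options
scontrol show partition par-multi   # time, node and core limits of the partition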

For those familiar with LOTUS running Platform MPI and Platform LSF, please note that the job is started using OpenMPI's native mpirun command, not the mpirun.lotus wrapper script that was previously required. We have provided an mpirun.lotus script for backward compatibility, but it simply runs the native mpirun.

To submit the job, do not run the script directly; instead, pass it to the sbatch command, like so:

sbatch --exclusive my_script_name.sbatch

The --exclusive flag ensures that the hosts allocated to the job are not shared with any other jobs. This gives the best MPI communication consistency and bandwidth/latency for the job, and prevents interference from other users' jobs that might otherwise be running on those hosts.
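
Once submitted, the job can be monitored and its output inspected with the usual Slurm commands; a short sketch, where <jobid> stands for the job number printed by sbatch:

squeue -u $USER             # check whether the job is pending or running
scontrol show job <jobid>   # full details of the job, including the allocated nodes
cat <jobid>.log             # standard output, written via the -o %j.log directive
cat <jobid>.err             # standard error, written via the -e %j.err directive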

MPI implementation and Slurm  

OpenMPI is the only supported MPI library on the cluster. OpenMPI v3.1.1 and v4.0.0 are provided, both of which are fully MPI-3 compliant. MPI I/O features are fully supported only on the LOTUS /work/scratch-pw* volumes, as these use a fully parallel Panasas file system. The MPI implementation on CentOS7 LOTUS/Slurm is available via the module environment for each compiler, as listed below:

eb/OpenMPI/gcc/3.1.1 
eb/OpenMPI/gcc/4.0.0       
eb/OpenMPI/intel/3.1.1

Note: an Intel-compiler build of OpenMPI 4.0.0 is due shortly, as is OpenMPI 4.0.3.
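
To see which of these builds are currently installed and load one, the standard module commands can be used, for example:

module avail eb/OpenMPI            # list the available OpenMPI builds
module load eb/OpenMPI/gcc/4.0.0   # load the gcc 4.0.0 build used in the job script above
module list                        # confirm which modules are now loaded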

Parallel MPI compiler with OpenMPI  

Compile and link against the OpenMPI libraries using the mpif90, mpif77, mpicc and mpiCC wrapper scripts, which are on the default Unix path. These scripts detect which compiler you are using (GNU or Intel) from the compiler environment module loaded, and add the relevant compiler and library switches. For example:

module load intel/20.0.0
module load eb/OpenMPI/intel/3.1.1
mpif90

will use the Intel Fortran compiler ifort and OpenMPI 3.1.1, whereas

module load eb/OpenMPI/gcc/3.1.1
mpicc

will call the GNU C compiler gcc and OpenMPI 3.1.1.
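
As an illustration (the source file mpi_myname.c is hypothetical, named to match the mpi_myname.exe executable used in the job script above), a complete compile step might look like:

# Load the GNU compiler build of OpenMPI listed above
module load eb/OpenMPI/gcc/4.0.0

# Build the executable used in the job script from a hypothetical C source file
mpicc -o mpi_myname.exe mpi_myname.c

# A Fortran source would use the corresponding wrapper instead, e.g.
# mpif90 -o mpi_myname.exe mpi_myname.f90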

The OpenMPI User Guides can be found at https://www.open-mpi.org/doc/.

Last updated on 2025-04-09 as part of:  Initial review of how to submit MPI jobs (7c23b8688)