# Migration to Rocky Linux 9 (2024)
Software and operating system changes - migration to Rocky Linux 9 (Summer 2024)
As with a previous migration completed in 2020, the change of operating system version is needed to make sure that the version in use is current and fully supported, i.e. that package updates are available and important security updates can be obtained and applied to keep the platform secure.
The current operating system, CentOS7, is officially end-of-life as of the end of June 2024. We will be moving from CentOS7 to Rocky Linux 9, which is supported until May 2032. Rocky 9 should provide a very similar user experience to CentOS7, but with more recent software packages. Some software may have been removed or replaced during this transition.
This change affects JASMIN and CEDA services in several ways, including but not limited to the following:

- Interactive compute servers (`login`, `sci`, `xfer` and LOTUS nodes) all need to be redeployed with the new operating system.
- Software provided via the `module` system and under `/apps` needs to be made available in versions compatible with Rocky 9. Some software may need to be recompiled (see the sketch after this list).

Much of this work is already underway by teams in CEDA and STFC's Scientific Computing Department. As a result of extensive work by these teams in recent years to improve the way services are deployed and managed, we are now in a much better position to undertake this kind of migration with as little disruption to users as possible. Some disruption and adaptation by users will be inevitable, however.
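As an illustration of the `module` system mentioned above, here is a minimal sketch of checking what is available and loading an environment; the module name `jaspy` is an example and the names/versions offered may differ on the Rocky 9 hosts:

```bash
# List the centrally-provided software available via the module system
module avail

# Load an environment, e.g. the Jaspy Python stack (name/version may
# differ between the CentOS7 and Rocky 9 hosts)
module add jaspy
python --version
```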
Some services have already been migrated and are running under Rocky 9, but there is still much work to be done over the coming weeks, so please watch this space as we do our best to keep you informed of the progress we're making, and of any actions you may need to take to minimise disruption to your work on JASMIN.
The move to Rocky Linux 9 (abbreviated to "Rocky 9" or "R9" from here on) involves many changes at lower levels that are transparent to users, so we will focus here on those most relevant to how services on JASMIN are accessed and used. The reasons for the choice of Rocky 9 itself, and for some of the associated changes to software, machines and services provided, will not be covered in detail, but have been influenced by a number of factors.
## Login servers

The list of new login nodes is as follows:

| name | status |
|---|---|
| `login-01.jasmin.ac.uk` | ready to use |
| `login-02.jasmin.ac.uk` | ready to use |
| `login-03.jasmin.ac.uk` | ready to use |
| `login-04.jasmin.ac.uk` | ready to use |
Notes:

- All of the new servers have names in `*.jasmin.ac.uk` domains: please raise any issues connecting to `*.ac.uk` domains with the JASMIN team (exception: `hpxfer`, see below).
- The old login servers (including `login2`) will be retired in due course.
- Check the version of your SSH client with `ssh -V`. If it's significantly older than `OpenSSH_8.7p1, OpenSSL 3.0.7`, speak to your local admin team as it may need to be updated before you can connect securely to JASMIN.
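For example, run this on your own machine (not on JASMIN) to check your client version:

```bash
# Print the local OpenSSH client and OpenSSL library versions
ssh -V
# e.g. output: OpenSSH_8.7p1, OpenSSL 3.0.7 1 Nov 2022
```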
## NoMachine (NX) servers

| name | status |
|---|---|
| `nx1.jasmin.ac.uk` | Ready, but new setup steps recommended |
| `nx2.jasmin.ac.uk` | Ready, but new setup steps recommended |
| `nx3.jasmin.ac.uk` | Ready, but new setup steps recommended |
| `nx4.jasmin.ac.uk` | Not yet moved to Rocky 9 (works as previously for now) |
Notes:

- New setup steps are recommended for `nx[1-3]`; `nx4` is unchanged in this respect.
- Use these servers for a remote graphical desktop, connecting onward to a `sci` server for using graphics-intensive applications.

## `sci` servers
We have introduced a new naming convention which helps identify virtual and physical/high-memory `sci` servers. The new list is as follows:
| name | status | specs |
|---|---|---|
| **Virtual servers** | | |
| `sci-vm-01.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk) |
| `sci-vm-02.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk) |
| `sci-vm-03.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk) |
| `sci-vm-04.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk) |
| `sci-vm-05.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk) |
| `sci-vm-06.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk) |
| **Physical servers** | | |
| `sci-ph-01.jasmin.ac.uk` | Ready to use | 48 CPU AMD EPYC 74F3 / 2 TB RAM / 2 x 446 GB SATA SSD |
| `sci-ph-02.jasmin.ac.uk` | Ready to use | 48 CPU AMD EPYC 74F3 / 2 TB RAM / 2 x 446 GB SATA SSD |
Notes:

- `lxterminal` has been replaced with `xfce-terminal`.
- Errors such as `xterm: Xt error: Can't open display:` or `xterm: DISPLAY is not set` indicate that your session has no X display: connect with X11 forwarding enabled, or use the NoMachine graphical desktop.
- As before, these are interactive `sci` servers, with limited outward connectivity.
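A minimal sketch of obtaining an X display over SSH (assuming a standard OpenSSH client; the username and choice of hosts are examples):

```bash
# Enable agent and X11 forwarding (-A, -X) on each hop so DISPLAY is set
ssh -AX <username>@login-01.jasmin.ac.uk
ssh -AX sci-vm-01.jasmin.ac.uk
xterm    # should now open a window on your local display
```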
## `xfer` servers
| name | status | notes |
|---|---|---|
| `xfer-vm-01.jasmin.ac.uk` | ready to use | Virtual server |
| `xfer-vm-02.jasmin.ac.uk` | ready to use | Virtual server |
| `xfer-vm-03.jasmin.ac.uk` | ready to use | Virtual server, has `cron` |
Notes:

- To run cron jobs on `xfer-vm-03`, you must use `crontamer`.
- The `jasmin-xfer` package has now been added to these servers, providing these tools: `emacs-nox`, `ftp`, `lftp`, `parallel`, `python3-requests`, `python3.11`, `python3.11-requests`, `rclone`, `rsync`, `s3cmd`, `screen`, `xterm`.
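As an example of using one of these tools, here is a hypothetical `rsync` pull over SSH via one of the new servers (the username and paths are placeholders):

```bash
# Copy a directory from JASMIN to the current local directory via xfer-vm-01
rsync -av <username>@xfer-vm-01.jasmin.ac.uk:/gws/nopw/j04/<project>/data/ ./data/
```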
## `hpxfer` servers
| name | status | notes |
|---|---|---|
| `hpxfer3.jasmin.ac.uk` | ready to use | Physical server |
| `hpxfer4.jasmin.ac.uk` | ready to use | Physical server |
Notes:

- These servers support `sshftp` (GridFTP over SSH) transfers from ARCHER2.
- The `jasmin-xfer` package is available as per the `xfer` servers, above.
- The `hpxfer` access role is no longer required for these new servers (the role will be retired along with the old servers in due course, so there is no need to renew it if you move to the new servers).

For users of certificate-based GridFTP only (specifically, `gsiftp://` using the `globus-url-copy` client), there is a new server:
| name | status |
|---|---|
| `gridftp2.jasmin.ac.uk` | Not yet ready |
Notes:

- Use `slcs.jasmin.ac.uk` as the short-lived credentials server, with your JASMIN account credentials. CEDA identities can no longer be used for authentication with this server.
- `globus-url-copy` has nothing to do with the Globus service: they are now very separate things.

Where possible you should now use the Globus data transfer service for any data transfer in or out of JASMIN: this is now the recommended method, which will give you the best performance and has a number of advantages over logging into a server and doing transfers manually.
As introduced earlier this year, the following Globus collections are available to all users of JASMIN, with no special access roles required:

| name | uuid | status | notes |
|---|---|---|---|
| JASMIN Default Collection | `a2f53b7f-1b4e-4dce-9b7c-349ae760fee0` | Ready to use | Best performance: currently has 2 physical Data Transfer Nodes (DTNs). |
| JASMIN STFC Internal Collection | `9efc947f-5212-4b5f-8c9d-47b93ae676b7` | Ready to use | For transfers involving other collections inside the STFC network. 2 DTNs: 1 physical, 1 virtual. Can be used by any user in case of issues with the above collection. |
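As a sketch of how these collections can be used with the Globus CLI (the destination collection and the paths shown are hypothetical placeholders):

```bash
# Transfer a file from the JASMIN Default Collection to another collection
# (assumes you have already authenticated via: globus login)
SRC=a2f53b7f-1b4e-4dce-9b7c-349ae760fee0   # JASMIN Default Collection
DST=<destination-collection-uuid>          # a collection you can write to
globus transfer "$SRC:/gws/nopw/j04/<project>/file.nc" "$DST:/incoming/file.nc"
```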
## Software

Please see the table below and accompanying notes, which together summarise the upcoming changes to software on JASMIN:
| Software | CentOS7 | Rocky 9 |
|---|---|---|
| IDL versions<br>IDL licence server<br>(see Note 1) | 8.2, 8.5 (D), 8.5, 8.6<br>Flexnet | 8.9, 9.0 (8.6?)<br>Next Generation |
| Cylc<br>Cylc UI visualisation<br>(see Note 2) | 7.8.14 and 8.3.3-1<br>UI functionality integrated | 8.3.3-1<br>UI via browser: discussion ongoing |
| Jaspy<br>Jasr<br>jasmin-sci | 2.7, 3.7*, 3.10* (*: all variants)<br>3.6, 4.0 (all variants), 4.2<br>see the page listing the packages | 3.11<br>4.3<br>rpm/Glibc compatibility tba |
| Intel compilers | 12.1.5-20.0.0 (11 variants) | Intel oneAPI |
| MPI library / OpenMPI versions / compiler<br>(see Note 3) | 3.1.1/Intel,GNU<br>4.0.0, 4.1.[0-1,4-5]/Intel<br>4.1.2, 5.0.1, 5.1.2 | 4.1.5/Intel/gcc & 5.0.4/Intel/gcc<br>Possibility to support MPICH or Intel MPI |
| NetCDF C library<br>NetCDF Fortran binding lib. | netcdf/gnu/4.4.7, netcdf/intel/14.0<br>netcdff/gnu/4.4.7/*, netcdff/intel/4.4.7<br>parallel-netcdf/gnu/20141122<br>parallel-netcdf/intel/20141122 | A new module env for serial and parallel versions: GNU and Intel oneAPI builds of NetCDF against either OpenMPI and/or Intel MPI |
| GNU compilers | 7.2.0, 8.1.0, 8.2.0<br>13.2.0 conda-forge (12.1.0 from legacy Jaspy) | 11.4.1 (OS)<br>13.2.0 conda-forge via Jaspy |
| JULES<br>(see Note 4) | | Information to follow |
1. IDL: We will not support IDL 8.5 and older versions on Rocky 9, but we might continue to support IDL 8.6 if there is a need from the user community: we are still assessing that. The present version of IDL 8.6 must be migrated from the current "Flexnet" to the new "Next Generation" licensing system. We have obtained IDL 8.9 and IDL 9.0 from NV5 and are in the process of setting up "Next Generation" licensing to activate the licence. Once this is done on server and client machines and testing is completed, a new module environment will be created for users of IDL 8.9 and 9.0 on the new sci machines and a subset of the new LOTUS Rocky 9 nodes. The default `module add idl` will then load IDL 8.9 instead of IDL 8.6.
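Once that module environment exists, checking which version you get could look like this (a sketch; the module name `idl` is as used today):

```bash
module add idl                      # expected to load IDL 8.9 on Rocky 9
idl -e 'print, !VERSION.RELEASE'    # print the running IDL version
```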
2. Cylc: Note that Cylc 8 differs from Cylc 7 in many ways: architecture, scheduling algorithm, security, UIs, working practices and more. The Cylc 8 web UI requires the use of a browser (e.g. Firefox in the NoMachine desktop service).
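For instance, with Cylc 8 the UI is started from a terminal and viewed in a browser (a sketch, assuming a standard Cylc 8 installation with the UI Server component):

```bash
# Cylc 8: serve the web UI and open it in a browser,
# e.g. from a terminal inside a NoMachine desktop session
cylc gui
```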
3. MPI: (further details to follow)
4. JULES: (further details to follow)
## LOTUS

Preliminary node specification (further info to follow):
| type | selector | status | specs |
|---|---|---|---|
| standard | tbc | Not yet available | 190 CPU / 1.5 TB RAM / 480 GB SATA SSD + 800 GB NVMe SSD |
| high-mem | tbc | Not yet available | 190 CPU / 6 TB RAM / 480 GB SATA SSD + 800 GB NVMe SSD |
Notes:

- Jobs for the new Rocky 9 LOTUS nodes will be submitted from the new `sci` machines: `sci-vm-[01-06]` and `sci-ph-[01-02]`, not from the old `sci` machines `sci[1-8]`.
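Once the selectors are announced, submission from one of the new `sci` machines might look like this (entirely hypothetical; the partition name and resource values are placeholders):

```bash
# Submit from e.g. sci-vm-01 to the new Rocky 9 nodes (selector tbc)
sbatch --partition=<rocky9-partition> --time=01:00:00 --mem=8G myjob.sh
```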
Further information to follow.