# Software and operating system changes: migration to Rocky Linux 9 (Summer 2024)
As with the previous migration completed in 2020, the change of operating system version is needed to ensure that the version in use is current and fully supported, i.e. that package updates are available and important security updates can be obtained and applied to keep the platform secure.

The current operating system, CentOS7, is officially end-of-life as of the end of June 2024. We will be moving from CentOS7 to Rocky Linux 9, which is supported until May 2032. Rocky 9 should provide a very similar user experience to CentOS7, but with more recent software packages. Some software may have been removed or replaced during this transition.
This change affects JASMIN and CEDA services in several ways, including but not limited to the following:

- Servers (`login`/`sci`/`xfer` and LOTUS nodes) all need to be redeployed.
- Software under the `module` system and under `/apps` needs to be made available in versions compatible with Rocky 9. Some software may need to be recompiled.

Much of this work is already underway by teams in CEDA and STFC's Scientific Computing Department. As a result of extensive work by these teams in recent years to improve the way services are deployed and managed, we are now in a much better position to undertake this kind of migration with as little disruption to users as possible. Some disruption and adaptation by users will be inevitable, however.
Some services have already been migrated and are already running under Rocky 9, but there is still much work to be done over the coming weeks so please watch this space as we do our best to keep you informed of the progress we’re making, and of any actions you may need to take to minimise disruption to your work on JASMIN.
The move to Rocky Linux 9 (abbreviated to “Rocky 9” or “R9” from here on) involves many changes at lower levels transparent to users, so we will focus here on those most relevant to how services on JASMIN are accessed and used. The reasons for the choice of Rocky 9 itself, and for some of the associated changes to software, machines and services provided, have been influenced by a number of factors and will not be covered in detail here.
### Login servers

The list of new login nodes is as follows:

name | status
---|---
`login-01.jasmin.ac.uk` | ready to use
`login-02.jasmin.ac.uk` | ready to use
`login-03.jasmin.ac.uk` | ready to use
`login-04.jasmin.ac.uk` | ready to use
Notes:

- … `*.ac.uk` domains with the JASMIN team (exception: `hpxfer`)
- … `login2`
- Check the version of your local SSH client with `ssh -V`. If it's significantly older than `OpenSSH_8.7p1, OpenSSL 3.0.7`, speak to your local admin team, as it may need to be updated before you can connect securely to JASMIN.

### NX (NoMachine) servers

name | status
---|---
`nx1.jasmin.ac.uk` | Ready for use, update your SSH key
`nx2.jasmin.ac.uk` | Ready for use, update your SSH key
`nx3.jasmin.ac.uk` | Ready for use, update your SSH key
`nx4.jasmin.ac.uk` | Old server, closing soon: see retirement timetable
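If you want to script the `ssh -V` check mentioned in the notes above (for example, across several machines), the version string can be parsed with standard tools. A minimal sketch, using an illustrative sample string:

```shell
# `ssh -V` prints its version to stderr; a typical line looks like this sample.
# To capture your real client's string instead, use: sample=$(ssh -V 2>&1)
sample='OpenSSH_8.7p1, OpenSSL 3.0.7 1 Nov 2022'
# Extract the major.minor OpenSSH version for easy comparison:
version=$(printf '%s' "$sample" | sed -E 's/^OpenSSH_([0-9]+\.[0-9]+).*/\1/')
echo "$version"   # prints 8.7 for the sample above
```

Compare the result against 8.7: anything much older suggests your client predates the ciphers and key types expected by the new servers.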
Notes:

- … `nx4` … in this respect.
- Use these to connect onward to a `sci` server for using graphics-intensive applications.

### `sci` servers
We have introduced a new naming convention which helps identify virtual and physical/high-memory `sci` servers. The new list is as follows:
name | status | specs
---|---|---
**Virtual servers** | |
`sci-vm-01.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk)
`sci-vm-02.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk)
`sci-vm-03.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk)
`sci-vm-04.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk)
`sci-vm-05.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk)
`sci-vm-06.jasmin.ac.uk` | Ready to use | 8 CPU / 32 GB RAM / 80 GB (virtual disk)
**Physical servers** | |
`sci-ph-01.jasmin.ac.uk` | Ready to use | 48 CPU AMD EPYC 74F3 / 2 TB RAM / 2 x 446 GB SATA SSD
`sci-ph-02.jasmin.ac.uk` | Ready to use | 48 CPU AMD EPYC 74F3 / 2 TB RAM / 2 x 446 GB SATA SSD
Notes:

- `lxterminal` has been replaced with `xfce-terminal`.
- If you see errors such as `xterm: Xt error: Can't open display:` or `xterm: DISPLAY is not set`, check that your session provides an X display (for example, connect with `ssh -X`, or use a NoMachine graphical desktop session).
- … `sci` servers, with limited outward connectivity.

### `xfer` servers
name | status | notes
---|---|---
`xfer-vm-01.jasmin.ac.uk` | ready to use | Virtual server
`xfer-vm-02.jasmin.ac.uk` | ready to use | Virtual server
`xfer-vm-03.jasmin.ac.uk` | ready to use | Virtual server, has `cron`
Notes:

- To run `cron` jobs on `xfer-vm-03`, you must use `crontamer`.
- The `jasmin-xfer` package has now been added to these servers, providing these tools:
  - `emacs-nox`
  - `ftp`
  - `lftp`
  - `parallel`
  - `python3-requests`
  - `python3.11`
  - `python3.11-requests`
  - `rclone`
  - `rsync`
  - `s3cmd`
  - `screen`
  - `xterm`
### `hpxfer` servers
name | status | notes
---|---|---
`hpxfer3.jasmin.ac.uk` | ready to use | Physical server
`hpxfer4.jasmin.ac.uk` | ready to use | Physical server
Notes:

- These servers support `sshftp` (GridFTP over SSH) transfers from ARCHER2.
- The `jasmin-xfer` package is available, as per the `xfer` servers above.
- The `hpxfer` access role is no longer required for these new servers. The role will be retired along with the old servers in due course, so there is no need to renew it if you move to the new servers.

Due to difficulties installing and configuring the suite of legacy components needed to support “old-style” GridFTP, we will not now be providing a replacement for the old server `gridftp1`. Please familiarise yourself with using Globus, see below: this provides equivalent (and better) functionality.

Note that this does not affect GridFTP-over-SSH (`sshftp`), which is available on the new `hpxfer` nodes in the same way as on their predecessors, see above.
### Globus data transfer service

Where possible, you should now use the Globus data transfer service for any data transfer in or out of JASMIN. This is now the recommended method: it will get you the best performance, and has a number of advantages over logging into a server and doing transfers manually.

As introduced earlier this year, the following Globus collections are available to all users of JASMIN, with no special access roles required:
name | uuid | status | notes
---|---|---|---
JASMIN Default Collection | `a2f53b7f-1b4e-4dce-9b7c-349ae760fee0` | Ready to use | Best performance; currently has 2 physical Data Transfer Nodes (DTNs).
JASMIN STFC Internal Collection | `9efc947f-5212-4b5f-8c9d-47b93ae676b7` | Ready to use | For transfers involving other collections inside the STFC network. 2 DTNs (1 physical, 1 virtual). Can be used by any user in case of issues with the above collection.
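For scripted transfers, the collection UUIDs above can be used with the Globus CLI. A minimal sketch, assuming you have the CLI installed and have authenticated with `globus login`; the destination UUID and both paths are placeholders you would replace with your own:

```shell
# UUID of the JASMIN Default Collection (from the table above):
SRC="a2f53b7f-1b4e-4dce-9b7c-349ae760fee0"
# Placeholder: replace with the UUID of your own destination collection.
DST="00000000-0000-0000-0000-000000000000"
# The transfer command to issue (source/destination paths are hypothetical):
cmd="globus transfer --recursive $SRC:/gws/nopw/j04/myproject/data $DST:/incoming/data"
echo "$cmd"
```

Running the echoed command submits an asynchronous transfer task; Globus then manages retries and integrity checking for you, which is a large part of its advantage over manual `scp`/`rsync` sessions.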
### Software changes

Please see the table below and the accompanying notes, which together summarise the upcoming changes to software on JASMIN:
Software | CentOS7 | Rocky 9
---|---|---
IDL versions; IDL licence server (see Note 1) | IDL: 8.2, 8.5 (D), 8.5, 8.6; licence server: Flexnet | IDL: 8.9, 9.1 (D); licence server: next generation
Cylc; Cylc UI visualisation (see Note 2) | Cylc: 7.8.14 and 8.3.3-1; UI functionality integrated | Cylc: 8.3.3-1; UI via browser: discussion ongoing
Jaspy; Jasr; jasmin-sci | Jaspy: 2.7, 3.7*, 3.10* (*: all variants); Jasr: 3.6, 4.0 (all variants), 4.2; jasmin-sci: URL page of the packages | Jaspy: 3.11; Jasr: 4.3; jasmin-sci: rpm/Glibc compatibility tba
Intel compilers | 12.1.5-20.0.0 (11 variants) | Intel oneAPI
MPI library: OpenMPI versions/compiler (see Note 3) | 3.1.1/Intel, GNU; 4.0.0, 4.1.[0-1,4-5]/Intel; 4.1.2, 5.0.1, 5.1.2 | 4.1.5 and 5.0.4 (Intel/GCC builds); possibility to support MPICH or Intel MPI
NetCDF C library; NetCDF Fortran binding lib. | netcdf/gnu/4.4.7, netcdf/intel/14.0; netcdff/gnu/4.4.7/*, netcdff/intel/4.4.7; parallel-netcdf/gnu/20141122, parallel-netcdf/intel/20141122 | A new module env for serial and parallel versions; GNU and Intel oneAPI builds of NetCDF against either OpenMPI and/or Intel MPI
GNU compilers | 7.2.0, 8.1.0, 8.2.0; 13.2.0 conda-forge (12.1.0 from legacy Jaspy) | 11.4.1 (OS); 13.2.0 conda-forge via Jaspy
JULES (see Note 4) | Information to follow |
1. IDL: …
2. Cylc: Note that Cylc 8 differs from Cylc 7 in many ways: architecture, scheduling algorithm, security, UIs, working practices and more. The Cylc 8 web UI requires the use of a browser (e.g. Firefox in the NoMachine desktop service).
3. MPI: further details to follow.
4. JULES: further details to follow.
### New LOTUS nodes

Preliminary node specification (further info to follow):
type | selector | status | specs
---|---|---|---
standard | tbc | Not yet available | 190 CPU / 1.5 TB RAM / 480 GB SATA SSD + 800 GB NVMe SSD
high-mem | tbc | Not yet available | 190 CPU / 6 TB RAM / 480 GB SATA SSD + 800 GB NVMe SSD
Notes:

- … the new `sci` machines: `sci-vm-[01-06]` and `sci-ph-[01-02]`
- … the old `sci` machines: `sci[1-8]`
### Retirement timetable

Please find below a timetable of planned host retirements in line with our move to Rocky Linux 9. Please start moving your work NOW so that any issues can be resolved and disruption minimised.
Host | retirement date
---|---
**Group A** |
`cron1.ceda` (aka `cron.jasmin`), `xfer3`, `nx-login[2,3]` | 21/11/2024 16:00
**Group B** |
`nx4` (aka `nx-login4`) | 6/12/2024 16:00
**Group C** |
`xfer1`, `hpxfer1` (already shut down due to technical issue), `sci[1,2,4]`, `login[1,2]` | 6/12/2024 16:00
**Group D** |
`xfer2`, `hpxfer2`, `sci[5,6,8]`, `login[3,4]`, `gridftp1` | 13/12/2024 16:00
All the hosts listed have new Rocky 9 equivalents described in the document above. Please check back regularly to keep up to date with this schedule.