Access to storage

This article provides information about JASMIN storage. It covers:

  • Home directory
  • JASMIN disk mounts
  • Where to write data
  • Access to the CEDA archive
  • Tape access
  • Advice on inter-volume symlinks in JASMIN storage

Home directory

Every JASMIN user is allocated a HOME directory located at /home/users/<user_id>. This directory is available across most of the interactive and batch computing resources, including the JASMIN login and transfer servers.

Each home directory has a default quota of 100 GB (as of JASMIN Phase 4). Please note that instructions for checking your quota are still pending, owing to a change of storage technology for this area.

You may exceed this limit only for a very brief period of time. If you continue to exceed it, you will be unable to add any more files or run jobs, and will be required to reduce your usage.
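Until the quota-checking mechanism for the new storage is documented, you can gauge your own usage with standard tools such as du. A minimal sketch:

```shell
# Summarise the total size of your home directory (human-readable).
du -sh "$HOME"

# Break usage down by top-level entry, largest first, to find what
# is taking the space (errors from unreadable files are suppressed).
du -sk "$HOME"/* 2>/dev/null | sort -rn | head
```

Note that du reports space actually used, which may differ slightly from how the quota system accounts for it.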

Backups of your home directory

There is a daily incremental and weekly full backup of your home directory. Your home directory is the ONLY storage which is automatically backed up.

Additionally, "snapshots" provide a quick, self-service method for you to restore files or directories that have been accidentally deleted.

Recovering snapshots of your home directory data

Users can access snapshots to recover files/directories that have been accidentally deleted. These are stored in /home/users/.snapshot/homeusers.SNAP-<date-time>/<userid>

For example:

Listing /home/users/.snapshot/ will show several homeusers directories, each containing files as they were on the date in its name:

# ls -l /home/users/.snapshot/
total 0
drwxr-xr-x 1 root root 0 May 25 14:00 homeusers.SNAP-20180529-034739
drwxr-xr-x 1 root root 0 May 29 11:53 homeusers.SNAP-20180530-041314
drwxr-xr-x 1 root root 0 May 29 11:53 homeusers.SNAP-20180531-033357
drwxr-xr-x 1 root root 0 May 29 11:53 homeusers.SNAP-20180601-032046
drwxr-xr-x 1 root root 0 Jun  1 16:00 homeusers.SNAP-20180602-035938
drwxr-xr-x 1 root root 0 Jun  1 16:00 homeusers.SNAP-20180603-034807
drwxr-xr-x 1 root root 0 Jun  1 16:00 homeusers.SNAP-20180604-032210
drwxr-xr-x 1 root root 0 Jun  4 12:00 homeusers.SNAP-20180605-032549
drwxr-xr-x 1 root root 0 Jun  4 12:00 homeusers.SNAP-20180606-031754
drwxr-xr-x 1 root root 0 Jun  4 12:00 homeusers.SNAP-20180607-033318
drwxr-xr-x 1 root root 0 Jun  4 12:00 homeusers.SNAP-20180608-040210
drwxr-xr-x 1 root root 0 Jun  8 20:00 homeusers.SNAP-20180609-035733
drwxr-xr-x 1 root root 0 Jun  8 20:00 homeusers.SNAP-20180610-034747
drwxr-xr-x 1 root root 0 Jun  8 20:00 homeusers.SNAP-20180611-032921
drwxr-xr-x 1 root root 0 Jun 11 10:00 homeusers.SNAP-20180612-033432
drwxr-xr-x 1 root root 0 Jun 12 18:00 homeusers.SNAP-20180613-034632
drwxr-xr-x 1 root root 0 Jun 13 11:00 homeusers.SNAP-20180614-040410
drwxr-xr-x 1 root root 0 Jun 14 17:00 homeusers.SNAP-20180615-035424
drwxr-xr-x 1 root root 0 Jun 14 17:00 homeusers.SNAP-20180616-035724
drwxr-xr-x 1 root root 0 Jun 14 17:00 homeusers.SNAP-20180617-040553
drwxr-xr-x 1 root root 0 Jun 14 17:00 homeusers.SNAP-20180618-040519

Within each of these, look for your own userid to find snapshot directories for your data. You can then copy files from one of these directories back to their original location yourself:

# ls -l /home/users/.snapshot/homeusers.SNAP-20180529-034739/joebloggs/
total 1170964
-rw-r--r-- 1 joebloggs users              104857600 Jun 26  2017 100M.dat
-rw-r--r-- 1 joebloggs users             1024000000 Feb  1  2017 1G.dat
-rw-r--r-- 1 joebloggs users                      0 Dec 18 12:09 6181791.err

# cp /home/users/.snapshot/homeusers.SNAP-20180529-034739/joebloggs/100M.dat ~/100M.dat

Home directories should not be used for storing large amounts of data. See below for guidance on where to write your data.

Please note advice on inter-volume symlinks, below: these are to be avoided.

JASMIN disk mounts

There is a common file system layout that underpins most of the JASMIN infrastructure. However, access to different parts of the file system depends on where you are logged in. Table 1 outlines the key disk mounts, where they are accessible from and the type of access (read and/or write).

Table 1.  List of common disk mounts and their availability on JASMIN

Disk mount                    login   sci    transfer   LOTUS   Parallel-write
/home/users                   R/W     R/W    R/W        R/W     No
/group_workspaces/jasmin2     No      R/W    R/W        R/W     Yes
/group_workspaces/jasmin4     No      R/W    R/W        R/W     No
/gws/nopw/j04                 No      R/W    R/W        R/W     No (hence "nopw")
/work/scratch                 No      R/W    No         R/W     Yes
/work/scratch-nompiio         No      R/W    No         R/W     No
/apps/contrib                 No      RO     No         RO      n/a
/badc, /neodc (archives)      No      RO     RO         RO      n/a


login = login servers: jasmin-login1.ceda.ac.uk, cems-login1.cems.ac.uk
sci = scientific analysis servers: jasmin-sci[1-5].ceda.ac.uk, cems-sci[1-2].cems.rl.ac.uk
transfer = data transfer servers: jasmin-xfer[1-2].ceda.ac.uk, cems-xfer1.cems.rl.ac.uk
LOTUS = LOTUS batch processing cluster (all cluster nodes)
Disks are mounted read/write ("R/W") or read-only ("RO").

Where to write data

As indicated in table 1 there are three main disk mounts where data can be written. Please follow these general principles when deciding where to write your data:

  1. HOME directories (/home/users) are relatively small (100GB as of Phase 4) and should NOT be used for storing large data volumes.
  2. Group Workspaces (/group_workspaces/*/<project> and /gws/nopw/*/<project>) are usually the correct place to write your data. Please refer to the Group Workspace documentation for details, but please note that Group Workspaces are NOT backed up.
    1. /group_workspaces/jasmin2 volumes are parallel-write-capable storage from Phases 2 and 3 of JASMIN. Some of this storage is due for retirement by the end of 2018 with data to be migrated to new volumes on /gws/nopw/j04
    2. /group_workspaces/jasmin4 volumes are "Scale out Filesystem" (SOF) storage which is not parallel-write-capable.
    3. /gws/nopw/j04 volumes are "Scale out Filesystem" (SOF) storage which is not parallel-write-capable. This new naming convention will be used for all new volumes and whenever existing volumes are migrated to SOF storage from now on.
  3. The "scratch" areas (/work/scratch, /work/scratch-nompiio) are available as a temporary file space for jobs running on LOTUS (see next section below).
  4. The /tmp directory is not an appropriate location to write your data (see next section below).

The /work/scratch, /work/scratch-nompiio and /tmp directories

The scratch areas /work/scratch and /work/scratch-nompiio are temporary file spaces shared across the entire LOTUS cluster and the scientific analysis servers.

These scratch areas are ideal for processes that generate intermediate data files that are consumed by other parts of the processing before being deleted. Please remember that these volumes are resources shared between all users, so consider other users and remember to clean up after your jobs. Any data that you wish to keep should be written to a Group Workspace.

There are now two types of scratch storage available:

  •  /work/scratch-nompiio, suitable for most users (250 TB available, introduced in JASMIN Phase 4)
    • Please use this area unless you have a good reason not to. The flash-based storage has significant performance benefits particularly for operations involving lots of small files, but is not suitable for MPI-IO operations which attempt to write in parallel to different parts of the same file. Please be aware of this if your code (perhaps inadvertently?) writes to a shared log file.
  •  /work/scratch, for users with a specific need for storage which is capable of shared-file writes with MPI-IO (75 TB available)
    • Please use this area ONLY if you know that your code has a parallel-write requirement.

When using the "scratch" areas, please create a sub-directory (e.g. /work/scratch-nompiio/<user_id>) and write your data there.
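For example, a job script might set up its own working area on scratch before doing anything else. A sketch, in which the base path falls back to a temporary directory and the LSB_JOBID variable is illustrative (use whatever job identifier your scheduler provides):

```shell
# In a real LOTUS job you would set SCRATCH_BASE=/work/scratch-nompiio;
# here it falls back to a temporary directory so the sketch runs anywhere.
SCRATCH_BASE="${SCRATCH_BASE:-$(mktemp -d)}"

# One directory per user, and per job where a job ID is available
# (LSB_JOBID is illustrative; substitute your scheduler's variable).
SCRATCH_DIR="$SCRATCH_BASE/${USER:-$(id -un)}/job_${LSB_JOBID:-manual}"
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"

# ... write intermediate files here, and clean up when the job finishes ...
```

Using a per-job subdirectory also makes it straightforward to remove everything a job created in one step.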

In contrast to the "scratch" space, the /tmp directories are all local directories, one per cluster node (or interactive server). These can be used to store small volumes of temporary data for a job that only needs to be read by the local process.

Cleaning up the scratch and /tmp directories

Please make sure that your jobs delete any files under the /tmp and scratch directories when they complete (especially if jobs have not completed normally!).

Data in the /tmp , /work/scratch and /work/scratch-nompiio directories are temporary and may be arbitrarily removed at any point once your job has finished running. Do not use them to store important output for any significant length of time. Any important data should be written to a group workspace so that you do not lose it, or to your home directory if appropriate.
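One robust way to guarantee cleanup, even when a job fails part-way through, is a shell trap that removes the working directory on exit. A minimal sketch:

```shell
# Create a private temporary directory; mktemp honours $TMPDIR, so the
# same pattern works for node-local /tmp or, with TMPDIR set, a scratch area.
WORKDIR=$(mktemp -d)

# Remove the directory when the script exits, even after an error.
trap 'rm -rf "$WORKDIR"' EXIT

# ... job body writes its temporary files under $WORKDIR ...
echo "intermediate data" > "$WORKDIR/part1.dat"
```

The EXIT trap fires on normal completion and on most failures, so files are not left behind for others to clean up.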

The  /work/scratch and /work/scratch-nompiio areas are NOT available on the xfer or login servers.

Access to the CEDA archive

The CEDA archive is mounted read-only under /badc (British Atmospheric Data Centre) and /neodc (NERC Earth Observation Data Centre). The archive includes a range of data sets that are provided under varying licences. Access to these data sets is managed through standard Unix groups. Information about the data and their access restrictions is available from the CEDA Data Catalogue. As a JASMIN user it is your responsibility to ensure that you have the correct permissions to access data in the CEDA archive.

Tape access

Group workspace managers also have access to a tape library (Elastic Tape service) for making secondary copies and managing storage between online and near-line storage.

Number of files in a single directory

It is highly recommended that you do not exceed 100,000 - 200,000 files in a single directory on any type of storage on JASMIN. Large numbers of files place unnecessary load on components of the file system and can cause slow performance for you and for other users of the system. To count the number of files, please note the advice in "Slow 'ls' response" below, or use an alternative command such as find.
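To count files without the per-file stat calls that an aliased ls --color makes, you can use find, or bypass the alias with a backslash. A sketch (the example directory is created purely for illustration):

```shell
# Build a small example directory to count (illustrative).
DIR=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$DIR/file_$i.dat"; done

# Count entries directly below $DIR with find, which does not need to
# stat each file the way a colourising ls does.
count=$(find "$DIR" -mindepth 1 -maxdepth 1 | wc -l)
echo "$count"

# Equivalent count with un-aliased ls: the backslash bypasses any alias.
\ls "$DIR" | wc -l
```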

Slow 'ls' response

This can be due to a number of reasons (see above advice regarding number of files in a single directory, and below regarding inter-volume symlinks). To speed up the response (useful if you want to count the number of files) it often helps to un-alias ls, e.g. by placing a backslash in front of the command: \ls.

Advice on inter-volume symlinks in JASMIN storage

We strongly recommend that you do not use symbolic links in your home directory to other parts of the JASMIN file systems, such as Group Workspaces or scratch areas. Under certain conditions these links can make the petabyte-scale JASMIN storage unusable for all users; there is a more technical explanation below. We advise path substitution using environment variables instead.

Symlinks in users' home directories which point to other volumes (for example group workspaces) make matters worse when there are problems on the jasmin-sci/cems-sci servers and other shared machines, and/or when the metadata servers responsible for particular storage volumes themselves become overloaded. The simplest advice we can currently give is to avoid using them.
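As an example of the environment-variable approach, instead of a symlink such as ~/mygws pointing at a Group Workspace, you can define a variable in your shell start-up file and use it in paths and scripts. A sketch (the workspace path and variable name are illustrative):

```shell
# In ~/.bashrc (the workspace path and variable name are illustrative):
export MYGWS=/group_workspaces/jasmin2/myproject

# Then use the variable wherever you would previously have followed
# the symlink, for example:
#   cd "$MYGWS/data"
#   cp results.nc "$MYGWS/output/"
echo "$MYGWS"
```

Unlike a symlink, expanding the variable involves no extra file system metadata lookups in your home directory.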

In more detail:

This issue is particularly apparent when ls is aliased to ls --color (as is the default on 99% of JASMIN systems) AND one of the colorisation options specified is for an orphaned link. Running ls on symlinks causes the metadata servers at the far end of each symlink to be called (to provide the stat file system metadata), in addition to the metadata server for the home directory. If those metadata servers at the far end are under load, or have some other problem, the ls on the home directory can hang; this also hangs other users who may be trying to ls their own home directories (even if theirs contain no symlinks). The situation can then escalate out of control as more and more users try and fail.

This is already recognised as an issue particularly with the older part of our storage estate, but especially where one or more of the volumes involved contains large numbers of small files.

There are likely other issues at play as well, some of which may be addressed by our current upgrade plans which involve replacing older parts of the storage over the next few months.

Still need help? Contact Us