
LONI Storage Policy

Last revised: 5 January 2011

1. Storage Systems

Available storage on LONI high performance computing machines is divided into six file systems (Table 1).

Table 1. Storage File Systems
File System | Description
Scratch     | Storage on a node that is available only during job execution.
Work        | Storage provided for the input, intermediate, and output files of a job.
Home        | Storage provided to each active user account.
Project     | Storage provided for a specific time to hold project-specific data.
PetaShare   | Storage funded via NSF.
Archival    | Long-term storage (TeraGrid users only).

All of the file systems share common characteristics. When a file system becomes too full, system performance begins to suffer. Similarly, placing too many individual files in a single directory degrades performance. System management activities aim at keeping each file system below 85% of capacity. Users are strongly encouraged to keep file counts below 1,000, and never above 10,000, per directory. If atypical usage begins to impact performance, individual users may be contacted and asked to help resolve the issue. When performance or capacity issues become significant, system managers may begin purging files, stopping jobs, or taking other actions to ensure continued system operation.
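
For illustration only, the following Python sketch reports directories whose file counts approach these limits; the starting path and the thresholds shown are examples, not part of this policy:

    #!/usr/bin/env python
    # Sketch: report directories whose file counts approach the recommended limits.
    import os
    import sys

    RECOMMENDED = 1000   # recommended per-directory maximum
    HARD_LIMIT = 10000   # count that should never be exceeded

    def report(root):
        for dirpath, dirnames, filenames in os.walk(root):
            count = len(filenames)
            if count > HARD_LIMIT:
                print(f"over hard limit: {dirpath} ({count} files)")
            elif count > RECOMMENDED:
                print(f"warning: {dirpath} ({count} files)")

    if __name__ == "__main__":
        # Example: python check_file_counts.py /path/to/your/work/directory
        report(sys.argv[1] if len(sys.argv) > 1 else ".")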

Management makes every effort to avoid data loss and, in the event of a failure, to recover data. However, the volume of data housed makes it impractical to provide system-wide data backup. Users are expected to safeguard their own data by ensuring that all important code, scripts, documents, and data are transferred to another location in a timely manner.
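
One possible approach (shown here as a Python sketch; the directory names are placeholders chosen for illustration) is to bundle important files into a single dated archive that can then be copied off the system with the user's preferred transfer tool:

    #!/usr/bin/env python
    # Sketch: bundle selected directories into a dated tar archive so that the
    # archive can be transferred to another location for safekeeping.
    import os
    import tarfile
    import time

    def make_archive(src_dirs, out_dir="."):
        # Name the archive with today's date, e.g. backup-20110105.tar.gz
        name = os.path.join(out_dir, "backup-" + time.strftime("%Y%m%d") + ".tar.gz")
        with tarfile.open(name, "w:gz") as tar:
            for d in src_dirs:
                tar.add(d)  # adds each directory recursively
        return name

    if __name__ == "__main__":
        # The directories listed here are placeholders for whatever must be kept.
        print(make_archive(["src", "scripts"]))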


2. File System Details

2.1 Scratch

Scratch space is available on all systems. You are free to use the scratch space, but files are subject to deletion once a job ends. The size of this file system will vary from system to system, and possibly across nodes within a system. This is an ideal place to put any intermediate data file that your job may need only while it is executing. Users should not have any expectation that files will exist after a job terminates.

2.2 Work

Work is the primary storage that you will use when running jobs. It is a common file system that all nodes can access, and it is ideal for input files, checkpoint files, and output. Some systems may enforce a storage quota per user, but Work space may be exempt from this quota. However, Work is a shared resource that all jobs utilize, and no single user should consume an excessive amount of space for long periods of time.

On some systems, Work has quotas enforced per user. This does not mean that each user should plan to consume the full quota, since the quota system uses overbooking to optimize the available space. The quota serves as a hard upper limit that prevents a single user from disrupting the performance of the entire system. Files on Work will persist for some time after a job completes, which allows users to retrieve results and move them to a more permanent location. Users should not have any expectation of backup.

Work may be purged periodically, which means users should have no expectation that the storage will persist for long periods of time. The criterion for purging files is based on age, such as removing any file not accessed in the last 30 days. This age limit may be shortened or combined with other criteria, such as file size, as necessary to maintain the system’s operational status.
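
To see which files might fall under such an age-based purge, a user could run something along the lines of the following Python sketch; the 30-day window mirrors the example above, and the starting path is a placeholder:

    #!/usr/bin/env python
    # Sketch: list files under a directory whose last access time is older than
    # a given number of days, i.e. candidates for an age-based purge.
    import os
    import sys
    import time

    def stale_files(root, days=30):
        cutoff = time.time() - days * 86400
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.stat(path).st_atime < cutoff:
                        yield path
                except OSError:
                    pass  # skip files that vanish or cannot be read

    if __name__ == "__main__":
        for path in stale_files(sys.argv[1] if len(sys.argv) > 1 else "."):
            print(path)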

2.3 Home

All user home directories are located in the Home file system. Home is intended for the user to store source code, executables, and scripts. Home may be considered “persistent” storage: the data should remain as long as the user has a valid account on the system. Home will always have a storage quota, and that quota is a hard limit. While Home is not subject to the management activities that control capacity and performance, it should not be considered permanent storage, as a system failure may result in the loss of information. Users should arrange for backing up their own data.

2.4 Project

The Project file system may be available on some systems and provides space dedicated to a specific project. Project space allocations must be requested, and they are available for a limited time period, typically 6 months or less. Two months before an allocation expires, the user will be notified by email. Users may request to have the allocation extended; renewal requests should be submitted at least 1 month prior to expiration to allow time for a decision and planning. If the storage allocation is not extended, the user will have 1 month after the expiration date to off-load their data. Users should have no expectation that data will persist after a project’s expiration, so alternative safekeeping and data protection measures must be taken in advance.

2.5 Archival (TeraGrid Users Only)

Archival space is intended for long-term storage of user data. Users will be granted an amount of space for a period of time; allocations can be expected to last 1 or more years. Archival storage provides a single copy of data.

2.6 PetaShare

PetaShare is an NSF-funded project led by Dr. Tevfik Kosar. His group is responsible for all allocations and establishes the usage policies. Please visit http://www.petashare.org/ for more information.


3. System Specific Information

Table 2. System Specific File System Information
System                                                         | File System    | Storage (TB) | Quota (GB) | Purge File Limit (Million)
Dell x86 5TF Clusters (Eric, Oliver, Louie, Poseidon, Painter) | Work (Lustre)  | 9            | 100        | 2
"                                                              | Home (NFS)     |              | 5          | N/A
IBM P5-575 (Bluedawg, Ducky, Zeke, Neptune, Lacumba)           | Work (NFS)     | 0.27 / 1.8   | 20 / 40    | 0.5
"                                                              | Home (NFS)     |              | 0.5        | N/A
Dell x86 50TF Cluster (QueenBee)                               | Work (Lustre)  | 60           | (none)     | 8
"                                                              | Home (NFS)     |              | 5          | N/A
"                                                              | Project (LPFS) | 60           | By request | N/A

As discussed in Section 1, the file systems are also subject to purging if usage exceeds 85% of capacity.


4. Job Use

On all systems, jobs must be run from the Work file system, not from the Home or Project file systems. Files should be copied from Home or Project space to Work before a job is executed, and copied back when the job terminates, to avoid excessive I/O during execution that would degrade system performance.
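
For illustration, the following Python sketch shows this staging pattern; every path, the user-name fallback, and the solver command are placeholders, and in practice the copies are usually performed directly in the job script:

    #!/usr/bin/env python
    # Sketch: stage input from Home into Work, run a job there, then copy the
    # results back to Home. All paths and the command are illustrative only.
    import os
    import shutil
    import subprocess

    home_input = os.path.expanduser("~/my_run/input")              # placeholder
    work_dir = os.path.join("/work", os.environ.get("USER", "me"), "my_run")
    home_results = os.path.expanduser("~/my_run/results")          # placeholder

    # Stage input from Home into Work before the job runs.
    if os.path.exists(work_dir):
        shutil.rmtree(work_dir)
    shutil.copytree(home_input, work_dir)

    # Run the job from the Work file system (placeholder command).
    subprocess.check_call([os.path.join(work_dir, "my_solver"), "input.dat"],
                          cwd=work_dir)

    # Copy results back to Home when the job finishes.
    shutil.copytree(os.path.join(work_dir, "output"), home_results)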


5. Project Allocation Requests

Space on the Project file system is allocated by request only, for periods of 6 months. A request for an initial or renewal allocation may be made by sending an email to sys-help@loni.org. The email must include the name of the PI, a valid contact email address, any existing allocation account information (CPU or storage), and a justification statement. Allocations are divided into 3 classes, each with a separate approval authority (Table 3). All storage allocation requests are limited by the available space, and the justification must be commensurate with the amount of space requested.

Table 3. Project Allocation Class and Approval Authority
Class  | Size                  | Approval Authority
Small  | Up to 100GB           | HPC@LSU Operations Manager
Medium | Between 100GB and 1TB | HPC@LSU Director
Large  | Over 1TB              | LONI Resource Allocation Committee

Requests that must be approved by the LONI Resource Allocation Committee will be forwarded to allocations@loni.org.
