HPC – High Performance Computing System
2018-01-10

The power of HPC

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in many fields, including for example:

  • Quantum mechanics

  • Weather forecasting

  • Climate research

  • Oil and gas exploration

  • Molecular modeling

  • Physical simulations

  • Airplane and spacecraft aerodynamics

  • Nuclear fusion

QStar Solutions for High Performance Computing allow you to implement massive storage hybrid systems at very competitive costs compared to traditional storage products available today.
Our HPC storage solutions can help you manage the escalating data capacities and complexities created by data-intensive applications such as simulation and modeling, combined with the most popular HPC file systems.
QStar Software solutions let you maximize productivity, improve storage efficiencies, and increase ROI.
HPC systems process enormous amounts of data for research. Current HPC systems operate at a multi-petabyte scale, which presents a major challenge: traditional backup methods cannot complete within an adequate backup window. More generally, there is little point in repeatedly backing up data that never changes (typically 90% of research data) just to capture the 10% of files that do.

The first task when designing an architecture for data archiving is to discover what changes have occurred in the file system, that is, which files were:

  • Created

  • Modified

  • Deleted

  • Renamed
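Where the file system offers no better mechanism, a generic way to discover these four kinds of change is to compare two metadata snapshots taken at the start and end of a monitoring period. The sketch below is purely illustrative (all names are our own, not part of any QStar product) and assumes renames preserve the inode number, as they do on POSIX file systems:

```python
import os

def snapshot(root):
    """Map each file path under root to (inode, mtime, size)."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = (st.st_ino, st.st_mtime, st.st_size)
    return state

def diff(old, new):
    """Classify changes between two snapshots into the four categories."""
    created = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    # A rename shows up as a delete plus a create sharing one inode.
    old_inodes = {old[p][0]: p for p in deleted}
    renamed = [(old_inodes[new[p][0]], p) for p in created
               if new[p][0] in old_inodes]
    return created, modified, deleted, renamed
```

A real implementation would also persist snapshots between runs and handle files that change while being scanned; this shows only the classification logic.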

QStar provides a set of software products which let you monitor HPC file systems and archive only those files which have changed during the latest monitoring period. Selection of the files to be processed is driven by flexible policies based on standard file attributes such as modification and access times, file size, content, ownership, names and extensions. The first time the system is installed it naturally takes a while to build a base copy of the file system, but in subsequent periods only files that satisfy the policy are processed.
The above approach provides huge advantages when compared to a full backup in terms of archive capacity and performance.
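Attribute-based selection of this kind can be sketched as a simple predicate over file metadata. The field names below are assumptions for illustration, not QStar's actual policy schema:

```python
import fnmatch
import time
from dataclasses import dataclass

@dataclass
class Policy:
    """An illustrative archive policy (fields are assumptions,
    not QStar's real policy format)."""
    min_age_days: float = 30.0    # untouched for at least this long
    min_size: int = 0             # optionally skip tiny files
    patterns: tuple = ("*",)      # name/extension globs to include

def matches(policy, name, size, mtime, now=None):
    """Return True if a file qualifies for archiving this period."""
    now = time.time() if now is None else now
    age_days = (now - mtime) / 86400
    return (age_days >= policy.min_age_days
            and size >= policy.min_size
            and any(fnmatch.fnmatch(name, pat) for pat in policy.patterns))
```

For example, `matches(Policy(min_age_days=30, patterns=("*.dat",)), "run1.dat", ...)` selects `.dat` files untouched for a month; a monitoring pass would apply such a predicate only to the files reported as changed.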
Archiving performance depends largely on the type of HPC file system, because the file system determines how monitoring can be performed. Scanning the whole file system for changes is highly inefficient, but if the file system provides no better tools it is the only route possible. Some advanced file systems offer special interfaces for monitoring file system status, such as the DMAPI interface (IBM Spectrum Scale and HyperFs), although this architecture is not widely supported and degrades system performance.

The latest trend in file system design is to provide a change journal, which records information about file system changes without involving user-level DMAPI processing. Microsoft NTFS, Lustre and some other file systems offer this feature, enabling efficient file system monitoring software to be built on top of it.
QStar QNM supports many HPC file systems and performs monitoring in the most efficient way afforded by the file system.
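A journal-based monitor only has to read the records appended since the last period, rather than rescanning the tree. A toy sketch, assuming a simplified one-line record format (real journals such as Lustre's changelog or the NTFS USN journal carry many more fields and are accessed through their own APIs):

```python
# Each record is assumed to be "index optype name" — a simplified
# stand-in for real change-journal formats.
def consume(records, last_index):
    """Yield (index, op, name) for records newer than last_index,
    so each monitoring period resumes where the previous one ended."""
    for line in records:
        idx_s, op, name = line.split(maxsplit=2)
        idx = int(idx_s)
        if idx > last_index:
            yield idx, op, name
```

The monitor persists the highest index it has processed; on the next run it passes that value as `last_index` and sees only new activity.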

The second task is moving the data to archive storage efficiently.

Archive storage is usually less expensive than HPC storage, but it must offer enormous capacity and, in most cases, provide replicated local and remote copies of the data for disaster recovery purposes. Tape-based systems solve this problem by providing vast capacity and low power consumption for data which is not actively used. A tape library is the least expensive way to archive data, and the choice of tape media is based primarily on the data recovery requirements. Another important benefit is that tape media can be easily copied for reliability and offsite storage. On the downside, it may take up to 4 minutes to load the appropriate media into a drive and locate the position of the first file byte. Other storage targets for HPC data archiving can be lower-cost SATA or SAS disk systems, but these require more power and continuous monitoring of disk status.

Cloud systems may provide greater savings in local IT infrastructure management, but achieving reasonable performance will probably mean using a private cloud, which in the end still requires infrastructure maintenance.
No matter what technology is used for HPC data archiving, the storage needs to be managed by specialized software. The QStar ASM product line provides this gateway functionality, ultimately offering a file system interface to all kinds of storage technologies, such as tape libraries, disk systems, optical jukeboxes and hybrid cloud. With QStar’s HPC storage solutions, organizations can drastically reduce investments in storage infrastructure to advance their HPC projects.

Related Solutions

QStar LTFS as NAS architecture virtualizes a Tape Library, effectively converting it into network-attached storage (NAS) that can be shared by multiple users and applications. The solution supports common networking protocols (SMB and NFS) plus S3-compatible API commands and is integrated on either a Windows or Linux server. Files stored in an LTFS as NAS environment are retrieved in the same manner as from the native operating system, even though the data is actually stored in a Tape Library. Users do not realize that the volume (file system) they are accessing was created on tape rather than on disk; through a sophisticated cache architecture, read/write activity is managed so effectively that performance is comparable to a NAS device. Transparency is such that the architecture is also supported by virtual machine (VM) environments, even though VM environments are not designed to support tape drives; existing applications installed in a VM can access the Tape as NAS architecture just like a standard NAS disk. Data can be accessed transparently over the network.


The QStar Kaleidos is an S3-compliant object storage platform that enables enterprises and service providers to build private, hybrid or public cloud storage environments that deliver reliability, security and unlimited scalability.
Kaleidos offers enterprise-class customers a substantial reduction in TCO compared with traditional NAS/SAN storage, along with unlimited scalability, superior data protection and flexibility, while enabling improved customer retention through enhanced service delivery (SLAs), reduced acquisition costs, and greater agility through global accessibility across all end-user devices.
Kaleidos Object Storage is built from standard high-performance, high-capacity servers. QStar Object Storage Manager (OSM) software runs on all server nodes, forming a cluster that provides a single pool of storage resources across all nodes.
