The Panasas File System is an advanced hybrid scale-out parallel file system that scales linearly to maximize aggregate throughput, making it an ideal choice for advanced computing environments. Its modular, shelf-based design makes these systems easy to expand and easy to maintain.
Using the Panasas® PanFS® operating environment, the Panasas ActiveStor® architecture utilizes all storage media, smoothly integrating them into a maximum-performance storage system. As file sizes and workloads change, PanFS’s Dynamic Data Acceleration technology delivers consistently superior performance, even for today’s most demanding workloads. A scaled-out object backend permits unlimited scaling, while a balanced internal architecture optimizes data placement — all with hassle-free deployment, operation, and maintenance.
In a single seamless system, PanFS Dynamic Data Acceleration technology automatically maximizes the performance of any workload, regardless of use case or workload mix.
Object Storage Devices (OSDs) are used in PanFS to provide parallel file access. ActiveStor storage nodes function as OSDs, while director nodes act as metadata managers. OSD data can be accessed directly through the Panasas DirectFlow® protocol.
All PanFS data services are built on the ActiveStor storage node OSDs. Running the Object Storage Device File System (OSDFS), storage nodes store and retrieve objects, with reads and writes performed over the DirectFlow protocol. Advanced caching capabilities streamline storage.
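The parallel-access model described above can be illustrated conceptually: file data is striped as objects across multiple OSDs, and a client reads those objects in parallel rather than funneling all traffic through a single server. The sketch below is illustrative only — `FakeOSD`, `striped_write`, and `striped_read` are hypothetical names, not the Panasas DirectFlow API.

```python
# Conceptual sketch (NOT Panasas code): striping file data across OSDs
# and reading the stripes back in parallel.
from concurrent.futures import ThreadPoolExecutor

STRIPE_UNIT = 4  # bytes per stripe unit; tiny, for demonstration only


class FakeOSD:
    """Stands in for one storage-node OSD holding component objects."""
    def __init__(self):
        self.chunks = {}  # stripe-unit index -> bytes

    def write(self, idx, data):
        self.chunks[idx] = data

    def read(self, idx):
        return self.chunks.get(idx, b"")


def striped_write(osds, data):
    # Round-robin stripe units across the OSDs.
    for i in range(0, len(data), STRIPE_UNIT):
        unit = i // STRIPE_UNIT
        osds[unit % len(osds)].write(unit, data[i:i + STRIPE_UNIT])


def striped_read(osds, length):
    units = range((length + STRIPE_UNIT - 1) // STRIPE_UNIT)
    # Issue reads against all OSDs in parallel, then reassemble in order.
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda u: osds[u % len(osds)].read(u), units)
    return b"".join(parts)[:length]


osds = [FakeOSD() for _ in range(3)]
payload = b"parallel file system data"
striped_write(osds, payload)
assert striped_read(osds, len(payload)) == payload
```

Because each client talks to many OSDs at once, aggregate bandwidth grows as storage nodes are added — the essence of a parallel file system.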
PanFS distributes metadata management across director and storage nodes, rather than centralizing it in a single server as most storage operating systems do.
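One common way to distribute metadata ownership — shown here only as a hedged illustration, not as PanFS internals — is to hash each path and map it deterministically onto a director node, so no single metadata server becomes a bottleneck. All names below are hypothetical.

```python
# Illustrative sketch (NOT PanFS code): deterministic assignment of
# metadata ownership to director nodes by hashing the path.
import hashlib

DIRECTORS = ["director-1", "director-2", "director-3"]  # hypothetical


def metadata_owner(path: str) -> str:
    # Hash the path and map the digest onto one director node.
    digest = hashlib.sha256(path.encode("utf-8")).digest()
    return DIRECTORS[int.from_bytes(digest[:8], "big") % len(DIRECTORS)]


# The same path always resolves to the same director, while different
# paths spread across the cluster.
assert metadata_owner("/proj/sim/run1") == metadata_owner("/proj/sim/run1")
```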
The PanFS global namespace can be configured into one or more logical volumes, and additional volumes can be created easily as needs grow.
As part of the PanFS platform, files carry several types of metadata, including the usual user-visible attributes such as size, owner, and modification time.
The OSDFS software manages block-level metadata with a delayed allocation scheme, which groups data, block pointers, and object descriptors into large write operations.
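The delayed-allocation idea above can be modeled simply: small incoming writes are buffered, then flushed together as one large sequential operation instead of allocating blocks per write. The sketch below is a conceptual model under assumed names (`DelayedAllocator`, `flush_threshold`), not OSDFS code.

```python
# Illustrative model (NOT OSDFS code) of a delayed-allocation write path:
# buffer writes, then flush them as one large batched operation.
class DelayedAllocator:
    def __init__(self, flush_threshold: int):
        self.flush_threshold = flush_threshold
        self.pending = []        # buffered (object_id, data) writes
        self.pending_bytes = 0
        self.device_writes = []  # each entry models one large device write

    def write(self, object_id, data):
        self.pending.append((object_id, data))
        self.pending_bytes += len(data)
        if self.pending_bytes >= self.flush_threshold:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # Group data, block pointers, and object descriptors (modeled here
        # as the buffered records) into a single large write operation.
        self.device_writes.append(list(self.pending))
        self.pending.clear()
        self.pending_bytes = 0


alloc = DelayedAllocator(flush_threshold=64)
for i in range(8):
    alloc.write(i, b"x" * 16)  # 8 small 16-byte writes, 128 bytes total
alloc.flush()
# 8 small writes collapse into 2 large device writes at this threshold
assert len(alloc.device_writes) == 2
```

Batching like this turns many small random I/Os into a few large sequential ones, which is what keeps hard-drive-backed OSDs efficient.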
As storage architectures scale up, so do complexity and management costs. PanFS keeps things simple with a single global namespace that scales easily from one to many ActiveStor units. As storage volumes grow, automated capacity balancing and centralized management further streamline administration.
Cluster management software manages configurations, monitors hardware, and provides overall control: it identifies healthy nodes, handles failures, and coordinates software upgrades.
Realm management software keeps directors fully engaged in workload processing by dynamically clustering them in active-active configurations.
Panasas PanActive Manager simplifies initial configuration, monitoring and reporting, and troubleshooting through user-friendly HTML and command-line interfaces.
Consistent, Fast Storage Performance that Automatically Adapts to Evolving HPC and AI Workloads
With ActiveStor Ultra and PanFS Dynamic Data Acceleration, tiered HPC storage systems are simplified and automated. Dynamic Data Acceleration integrates diverse storage media into a single seamless storage system to maximize performance. ActiveStor Ultra with PanFS provides the industry’s best price/performance in an appliance that maximizes simplicity, boosts reliability, and has the lowest total cost of ownership. With PanFS, you get the fastest parallel file system at any price point and the most manageable parallel file system ever.
Superior Performance, Manageability, and Reliability
ActiveStor Prime is a hybrid storage system that combines flash drives and helium-filled hard drives to deliver superior performance for unstructured sequential files and mixed workloads, with rapid access to both large and small files.
Each enclosure holds 11 blades. Configurations range from 11 storage blades to a mix of 3 director blades and 8 storage blades, optimized for unique workflows. Director blades manage file system activity, while storage blades hold all file system data and carry 100 percent of the data traffic.
| PER-SHELF | |
| --- | --- |
| CAPACITY | 93 or 285 TB |
| HDD CAPACITY | 88 or 264 TB |
| SSD CAPACITY | 5.3 or 21 TB |
| DRIVE CONFIGURATION | 22 x 3.5” enterprise SATA HDD + 11 x 2.5” SSD |
| ECC MEMORY (GB OF CACHE)1 | 176 GB |
| ETHERNET CONNECTIVITY | Two switch modules per shelf; uplinks per shelf: 2 x 10GbE SFP/CX4 supporting high-availability link aggregation with network failover support |
| INFINIBAND ROUTER CAPABILITY | Yes |

| GLOBAL NAMESPACE | |
| --- | --- |
| SUITABILITY | Mixed workloads: small and large file throughput, IOPS performance, lowest cost per TB |
| MAXIMUM SYSTEM CAPACITY2 | 57 PB |
| MAXIMUM SYSTEM THROUGHPUT2 | 360 GB/s |
| MAXIMUM SYSTEM IOPS, 4KB FILE, RANDOM READ2 | >2,600,000 IOPS |