STORAGE
Object storage ensures high scalability
By Garth Gibson, Network World -- Fueled by the computational power of Linux clusters, data-intensive applications are pushing the limits of traditional storage architectures. Whether mapping the human genome, imaging the earth's substructure to find new energy reserves, or generating the latest blockbuster animated feature, these applications require extraordinary throughput.

Object storage is an emerging architecture uniquely suited to complement Linux compute clusters. The technology taps commodity processing, networking and storage components such as Serial ATA drives.

At the core of this new architecture are storage objects, the fundamental unit of data storage. Unlike the files or blocks that serve as the basic components of conventional architectures, an object is a combination of application (file) data and storage attributes (metadata). These attributes define, on a per-file basis, the data layout, usage information, RAID level and other information the system uses to ensure quality of service.
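As a rough illustration, a storage object can be modeled as a small record that pairs the file data with its per-file storage attributes. The field names below are hypothetical, chosen for readability; they are not the attribute names defined by the OSD specification:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a storage object carries both application
# (file) data and per-file storage attributes (metadata).
@dataclass
class StorageObject:
    object_id: int
    data: bytes                                      # application (file) data
    attributes: dict = field(default_factory=dict)   # storage attributes

obj = StorageObject(
    object_id=42,
    data=b"genome fragment ...",
    attributes={
        "raid_level": 5,        # per-file RAID level
        "stripe_unit_kb": 64,   # data-layout hint
        "access_count": 0,      # usage information for quality of service
    },
)
```

Because the attributes travel with the object, each device holding it can make layout and caching decisions per file rather than per volume.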
Object storage provides two benefits. First, it gives clients direct access to network-attached object storage devices through parallel data paths, supporting high concurrency and scalable data access. Second, it distributes file-system metadata - such as file names, directories and file ownership - via a scalable, clustered metadata manager that supports standard file-system operations over an out-of-band control path. Together, these features yield a highly scalable storage system that can sustain high-performance I/O to hundreds of clients simultaneously.

The objects are stored on object-based storage devices (OSDs) that contain processors, memory and network interfaces, letting them manage their local set of objects and autonomously serve data to network-attached clients. Because these intelligent drives understand the organization and relationships of their data objects, they can exploit local processing and memory to optimize data layout and to pre-fetch and cache application data. A standard OSD command set for managing and accessing these storage objects over TCP/IP has been defined and is being adopted by the ANSI T10 technical committee.

With this new storage technology, files and directories are built from objects that are physically distributed across a cluster of OSDs. Data access is granted through a metadata manager - file-system software running on commodity PC hardware that orchestrates the interaction of clients with the objects on the OSDs through traditional file-system semantics: Portable Operating System Interface (POSIX), Network File System (NFS) and Common Internet File System (CIFS). The metadata manager also provides key file-system services, including authentication and access control, file locking and distributed cache consistency. Separating file and storage metadata management overcomes the file-sharing limitations of storage-area networks (SAN) and the data-path bottleneck common in NAS systems.
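The out-of-band control path can be sketched in a few lines. In this hedged illustration, a metadata manager performs the lookup and access-control check and then hands the client an object map, after which it steps out of the data path entirely. All class, field and path names here are invented for the sketch and do not reflect the T10 OSD interface:

```python
# Illustrative sketch of the control path (not the T10 OSD command set).
class MetadataManager:
    def __init__(self):
        # File-system metadata: names, ownership and the per-file map of
        # (object_id, osd) pairs. The data itself lives on the OSDs.
        self.files = {
            "/seismic/shot_0001.seg": {
                "owner": "geo",
                "object_map": [(7, "osd-1"), (8, "osd-2")],
            }
        }

    def open(self, path, user):
        entry = self.files[path]
        if user != entry["owner"]:   # access control on the control path
            raise PermissionError(path)
        return entry["object_map"]   # client proceeds directly to the OSDs

mgr = MetadataManager()
object_map = mgr.open("/seismic/shot_0001.seg", "geo")
print(object_map)  # -> [(7, 'osd-1'), (8, 'osd-2')]
```

The key design point is that the manager touches only metadata; once the map is granted, file bytes never pass through it, so it does not become a data-path bottleneck.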
The object storage architecture is well-suited to Linux cluster computing applications. The compute cluster and the OSD storage cluster are connected through a scalable Gigabit Ethernet fabric. Client applications running on the compute-cluster nodes make independent file-access requests of the metadata manager. The metadata manager returns an object map (a set of object IDs and the OSDs on which they reside) that the client caches and uses to access data objects stored on the OSDs. Once the client has obtained the map, all subsequent file activity occurs directly between the client and the OSDs.

The object storage architecture thus allows storage systems that combine the traditional sharing and management features of NAS systems with the resource consolidation and scalability of SAN systems. This combination of performance, scalability, manageability and security could be achieved only by creating an entirely new paradigm in storage architecture.

Gibson is co-founder and CTO of Panasas. He can be reached at garth@panasas.com.
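The data path that follows the map exchange can be sketched similarly. Assuming the client already holds a cached object map, it reads each constituent object directly from the OSD that stores it, with no further involvement from the metadata manager; because different clients (and different objects) hit different OSDs, reads scale with the size of the storage cluster. The OSDs here are simulated as in-memory dicts, and all names are illustrative:

```python
# Minimal sketch of the data path. Each OSD is simulated as a dict of
# object_id -> object data; in a real system these are network devices.
osds = {
    "osd-a": {101: b"AAAA"},
    "osd-b": {102: b"BBBB"},
    "osd-c": {103: b"CC"},
}

# Object map as cached from the metadata manager: an ordered list of
# (object_id, osd) pairs that together make up one file.
object_map = [(101, "osd-a"), (102, "osd-b"), (103, "osd-c")]

def read_file(object_map):
    # All I/O goes directly client -> OSD; the metadata manager is no
    # longer on the path, so many clients can stream concurrently.
    return b"".join(osds[osd][oid] for oid, osd in object_map)

print(read_file(object_map))  # -> b'AAAABBBBCC'
```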