Converging Storage: CephFS Now Production Ready

Datetime: 2016-08-23 04:08:11          Topic: Ceph

Over four years in the making and arriving in the nick of time, the Ceph open source distributed object store now has all the components needed to offer enterprises a one-stop shop for block, object and now file storage.

The new “Jewel” edition of Ceph (v10.2.0), released Thursday, includes a file system component (“CephFS”) that is now production ready, according to Greg Farnum, the Red Hat technical lead for CephFS, who spoke at the Linux Foundation’s Vault storage conference, taking place in Raleigh, North Carolina this week.

What “production-ready” means here is that CephFS will not break the one fundamental rule that all file systems must abide by: never lose any of the data being stored. And by offering a fully POSIX-compliant file system, Ceph can now support most Unix and Linux applications.

It still has a long way to go before it is enterprise-friendly, though. No major vendors offer commercially supported editions of CephFS yet, and some performance improvements still have to be made. Still, Ceph now has the potential to disrupt the enterprise storage software market in a major way, by setting the stage for converged storage systems.

Ceph can now support all three major types of storage: block-based storage, object-based storage and file storage. This means the enterprise doesn’t have to buy an expensive SAN (Storage Area Network) array for its time-sensitive applications and a NAS (Network Attached Storage) box for office workers.

Instead, an enterprise could cluster all of its storage servers into one giant pool of storage, which can then be allocated on an as-needed basis, and probably at a lower cost than if each, probably proprietary, storage system were procured individually.
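As a rough sketch of what that consolidation looks like in practice, a single Jewel-era cluster can be provisioned for all three storage types with the standard Ceph tooling. The pool names, image names, sizes and placement-group counts below are made up for illustration:

```shell
# Block storage: create a pool and an RBD image in it,
# e.g. to back a virtual machine disk (size in MB).
ceph osd pool create vms 128
rbd create vms/vm-disk-1 --size 10240

# Object storage: served by the RADOS Gateway (RGW), which exposes
# S3/Swift-compatible APIs once a gateway daemon is running.
radosgw-admin user create --uid=alice --display-name="Alice"

# File storage: CephFS needs a data pool, a metadata pool,
# and at least one running metadata server (MDS) daemon.
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 32
ceph fs new myfs cephfs_metadata cephfs_data
```

All three front ends sit on the same RADOS object store underneath, which is what makes the single-pool model possible.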

In another talk at Vault, Intel senior software engineer Jian Zhang noted that Ceph has found use in a number of Chinese telecommunications companies (OpenStack is big in China), due in no small part to the fact that they can have a single storage infrastructure for all three types of storage.

This pooled approach readily falls into line with the architectural ideas behind service-oriented cloud systems being built out on OpenStack, in which storage can be offered as an on-demand service like any other IT resource. Ceph RBD is already the most popular block device driver among OpenStack users, according to the most recent OpenStack User Survey. Adding a file system component will help OpenStack users more easily bring their legacy applications into an OpenStack environment.

Ceph is also very cloud-friendly in another major way, in that it is a scale-out technology. If you need more storage, simply add in another server. The big cloud vendors all offer scale-out storage, as do a number of proprietary vendors, but this ability to scale out can now be deployed by all, thanks to the fact that Ceph is open source.

Besides potentially untangling the mostly-proprietary storage industry, distributed file systems offer a revolutionary change in how storage is handled. Created by Sage Weil, Ceph is one of a number of distributed-architecture file systems, a design choice that gives it this scale-out capability. Traditional file systems are typically confined by a single storage manager, either on the storage server itself or in a gateway to an array. All data piped between the storage and the client must flow through this manager, which acts as a natural bottleneck.

In contrast, distributed file systems — others include GlusterFS, Lustre, and HDFS — can spread a single file system namespace across multiple servers. In Ceph’s case, a metadata server (MDS), maintained in the working memory of a single node, keeps track of the data across all the storage nodes, each of which is managed by an object storage daemon (OSD). If a node drops out, or more nodes are added, the changes are managed by the MDS.
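The division of labor above can be pictured in a few lines of Python. This is a deliberately toy model: real Ceph computes placement with the CRUSH algorithm rather than keeping a per-file lookup table, and the `ToyMDS` class and its methods are invented here for illustration.

```python
import hashlib

class ToyMDS:
    """Drastically simplified stand-in for Ceph's metadata server: it
    tracks which OSD holds each file. (Real Ceph computes placement
    with CRUSH instead of storing a per-file table like this.)"""

    def __init__(self, osd_ids):
        self.osds = sorted(osd_ids)
        self.location = {}  # file path -> OSD id

    def place(self, path):
        # Deterministically pick an OSD by hashing the path.
        digest = int(hashlib.sha256(path.encode()).hexdigest(), 16)
        osd = self.osds[digest % len(self.osds)]
        self.location[path] = osd
        return osd

    def osd_failed(self, failed_id):
        # A node fell out: drop it and re-place the files it held.
        self.osds.remove(failed_id)
        for path, osd in list(self.location.items()):
            if osd == failed_id:
                self.place(path)

mds = ToyMDS([0, 1, 2])
home = mds.place("/home/alice/report.txt")
mds.osd_failed(home)  # simulate losing that storage node
moved_to = mds.location["/home/alice/report.txt"]
```

The key point the sketch preserves: the MDS holds only the map, never the file contents, so membership changes are metadata updates rather than data rewrites.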

When a client needs to read or write data on the file system, it consults the MDS for the data’s location and the appropriate permissions. The client then connects with the OSD directly, eliminating the need to route all traffic through a central storage management server. A sophisticated locking system ensures consistency as multiple parties read and write data.

Giving clients direct contact with the OSDs, along with some intelligent management by the MDS, is what gives Ceph its virtually unlimited scalability and throughput. If a user wants to increase the performance of an application, for instance, then the app’s data can be striped across more OSDs.
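Striping can be pictured as round-robin placement of fixed-size stripe units. The `stripe` helper below is a hypothetical sketch; in real Ceph the layout is governed by the file’s stripe unit, stripe count and object size settings:

```python
def stripe(data, stripe_unit, osd_ids):
    """Assign consecutive stripe units of a file to OSDs round-robin,
    so reads and writes can hit all of them in parallel."""
    chunks = [data[i:i + stripe_unit]
              for i in range(0, len(data), stripe_unit)]
    placement = {osd: [] for osd in osd_ids}
    for idx, chunk in enumerate(chunks):
        placement[osd_ids[idx % len(osd_ids)]].append(chunk)
    return placement

# Ten bytes in two-byte units across three OSDs:
layout = stripe(b"abcdefghij", 2, [0, 1, 2])
```

Adding a fourth OSD to the list spreads the same data across more spindles, which is exactly the performance lever the paragraph describes.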

The next step for the evolution of Ceph, obviously, is incorporating it into other infrastructure tools and software. To this end, Red Hat engineer John Spray detailed at Vault a new CephFS driver for the Manila OpenStack storage management service. Manila is an offshoot of the Cinder OpenStack storage management service that focuses on the use of distributed file systems. With a Manila client driver, an OpenStack user can request a share (a private partition within a single namespace) for their workload.
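From the user’s side, requesting such a share might look like the following sketch of the Manila CLI of that era. The share name and share-type name are hypothetical, and exact flags vary by release and backend configuration:

```shell
# Request a 1 GB CephFS-backed share (the share type must map to a
# backend configured with the CephFS driver).
manila create CEPHFS 1 --name myshare --share-type cephfstype

# Grant a client access; the CephFS native driver authenticates
# clients using cephx identities.
manila access-allow myshare cephx alice
```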

Feature Image: John Spray and Sage Weil, at the Linux Foundation’s Vault conference.
