– an attempt at Flocker 2.0

Datetime: 2017-04-17 05:16:17 | Topic: Ceph

At the end of 2016, ClusterHQ, the developers of Flocker, decided to shut down the project with a rather blunt blog post (link).  Rather than take additional funding, CEO Mark Davis (and presumably the board) decided to shut the company down because it wasn't clear where future revenue would come from.

Based on my attempts to install Flocker, I can say the software wasn't straightforward to deploy, and the idea of building out what was basically a failover management process could perhaps have been implemented more simply.  Now we have the emergence of a new project: a scale-out, open-source storage solution that sits atop Ceph.

The idea seems to be to provide an interface between orchestration platforms like Kubernetes and the Ceph storage layer.  Rather than re-invent yet another scale-out storage platform (and goodness knows we have enough of those), the project acts as the glue that makes the two interoperate, allowing storage to be orchestrated through the Kubernetes command line.
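To make the "glue" idea concrete, here is a rough sketch of what consuming Ceph-backed storage through Kubernetes looks like with the in-tree RBD provisioner.  All names (the StorageClass, pool, monitor address and secret) are illustrative assumptions, not taken from the project's actual documentation:

```yaml
# Hypothetical example: a StorageClass backed by Ceph RBD, plus a claim
# against it.  An application pod then references the claim by name and
# never needs to know about Ceph directly.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-block
provisioner: kubernetes.io/rbd      # in-tree Ceph RBD provisioner
parameters:
  monitors: 10.0.0.1:6789           # Ceph monitor address (assumed)
  pool: mypool                      # RBD pool to carve images from (assumed)
  adminId: admin
  adminSecretName: ceph-secret
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: ceph-block
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```

From there, `kubectl create -f` is the whole storage workflow from the operator's point of view, which is exactly the appeal being described.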

Using Ceph as a foundation for storage is an interesting choice.  The website (and GitHub notes) imply that Ceph has had 10 years of production deployments.  This seems a little generous, as the first agreed "stable" version of Ceph was only released in 2012.  However, putting that issue aside, Ceph does at least provide object, block and file interfaces, even if none is the most efficient implementation of its kind.
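For readers less familiar with Ceph, the three interfaces mentioned above are all exposed through standard client tools.  A minimal sketch, assuming a running cluster with client keyrings in place (pool, image and monitor names are illustrative):

```shell
# Object: store a raw object in the RADOS object store
rados -p mypool put hello ./hello.txt

# Block: create a 1 GiB RBD image and expose it as a local block device
rbd create mypool/disk1 --size 1024
rbd map mypool/disk1                # appears as /dev/rbd0

# File: mount the CephFS shared file system via the kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret
```

The point is that one cluster serves all three access patterns, which is the main argument for building on it despite the efficiency caveats.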

So therein lies the problem.  Ceph isn't going to be the platform of choice for everyone; in fact, it's a big assumption that anyone would want to use Ceph at all, especially with the proliferation of SDS and hardware-based solutions on the market.  Here's another thought: one of the issues with Flocker (in my opinion) was its focus on block storage for deploying application data.  Flocker mapped a block device to a host, then formatted the device with an ext3/4 file system before mounting it on the host.  This becomes really restrictive when thinking about sharing data between (for example) Windows and Linux platforms.  It also represents a management overhead if the original LUN size is miscalculated.
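The pattern described above can be sketched in a few commands.  This is not Flocker's actual code, just the generic attach/format/mount flow it automated; the device path and mount point are assumptions:

```shell
# Attach a block device (LUN) to the host, format it once, and mount it
# for a single host's containers to use.
mkfs.ext4 /dev/xvdf                    # one-time format of the attached LUN
mkdir -p /var/lib/flocker/vol1
mount /dev/xvdf /var/lib/flocker/vol1

# The result is host-local: the filesystem cannot be mounted read-write
# on a second node (or read by a Windows host), and if the LUN turns out
# too small, you must resize it and then grow the filesystem:
resize2fs /dev/xvdf
```

Those last two lines are exactly the sharing and sizing overheads the paragraph above complains about.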

The Architect’s View

Having tighter integration with the orchestration layer is a good idea for fixing the persistent-storage-for-containers problem.  However, fixing on a single platform and a single storage layer seems architecturally restrictive.  Perhaps the intention is to start small and grow out from a basic configuration.  Certainly the software isn't intended to sit in the data path; rather, it merely acts as a management conduit.

Have you looked at the project?  Do you have an opinion?  Once I've had an attempt at installation, I'll come back with a little more detail.  In the meantime, feel free to comment if you have experience or thoughts in this area.

Related Links

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.

Copyright (c) 2009-2017 – Chris M Evans, first published on , do not reproduce without permission.