[Coming Soon] Dynamic Provisioning of GlusterFS Volumes in Kubernetes/OpenShift!!

Datetime: 2016-08-23 01:42:15          Topic: Kubernetes

In this post I am talking about the dynamic provisioning capability of the ‘glusterfs’ plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS. At present there are no network storage provisioners in Kubernetes, even though provisioners exist for the cloud providers. The idea here is to make the glusterfs plugin capable of provisioning volumes on demand from Kubernetes/OpenShift. Cool, isn't it? Indeed, this is a nice feature to have. In short, an OSE user requests some space, for example 20G, and the glusterfs plugin takes this request, creates a 20G volume, and binds it to the claim. The plugin can use any REST service; the example patch is based on ‘heketi’.

Here is the workflow. Start your Kubernetes controller manager with the options shown below; the important ones are --enable-network-storage-provisioner, --storage-config and --net-provider:

...kube controller-manager --v=3 --service-account-private-key-file=/tmp/kube-serviceaccount.key
--root-ca-file=/var/run/kubernetes/apiserver.crt --enable-hostpath-provisioner=false
--enable-network-storage-provisioner=true --storage-config=/tmp --net-provider=glusterfs
--pvclaimbinder-sync-period=15s --cloud-provider= --master=127.0.0.1:8080

Let's create a file called `gluster.json` in the `/tmp` directory (the path given to `--storage-config`). The important fields in this config file are ‘endpoint’ and ‘resturl’. The endpoint has to be defined and must match your setup. The `resturl` points to the REST service that takes the request and creates a gluster volume in the backend; as mentioned earlier, I am using `heketi` for this.


[hchiramm@dhcp35-111 tmp]$ cat gluster.json
{
  "endpoint": "glusterfs-cluster",
  "resturl": "http://127.0.0.1:8081",
  "restauthenabled": false,
  "restuser": "",
  "restuserkey": ""
}
[hchiramm@dhcp35-111 tmp]$
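
Before wiring this into Kubernetes, it may be worth confirming that the `resturl` is reachable. A minimal check, assuming heketi is running at that address and exposes its /hello health endpoint:

# quick sanity check of the resturl value (assumes heketi's /hello health endpoint)
curl http://127.0.0.1:8081/hello

If this does not respond, fix the heketi service or the `resturl` value before going further.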

We have to define an ENDPOINT and a SERVICE. Below are the example configuration files.

ENDPOINT:

"ip" has to be filled with your gluster trusted pool IP.


[hchiramm@dhcp35-111 ]$ cat glusterfs-endpoint.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}
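
If you are not sure which IP(s) to put in the "ip" fields, the members of the gluster trusted pool can be listed from any of the gluster servers. A minimal check, run on one of the gluster nodes:

# lists the UUID, hostname/IP and state of every peer in the trusted storage pool
gluster pool list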

SERVICE:

Please note that the Service name matches the ENDPOINT name.


[hchiramm@dhcp35-111 ]$ cat gluster-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
[hchiramm@dhcp35-111 ]$

Finally, we have a Persistent Volume Claim file as shown below:

NOTE: The size of the volume is mentioned as '20Gi':


[hchiramm@dhcp35-111 ]$ cat gluster-pvc.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterc",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "glusterfs"
    }
  },
  "spec": {
    "accessModes": [
      "ReadOnlyMany"
    ],
    "resources": {
      "requests": {
        "storage": "20Gi"
      }
    }
  }
}
[hchiramm@dhcp35-111 ]$

Let's start defining the endpoint, service, and PVC.


[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-endpoint.json
endpoints "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-service.json
service "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl get ep,service
NAME ENDPOINTS AGE
ep/glusterfs-cluster 10.36.6.105:1 14s
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/glusterfs-cluster 10.0.0.10 1/TCP 9s
svc/kubernetes 10.0.0.1 443/TCP 13m
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

Now, let's request a claim!

[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-pvc.json
persistentvolumeclaim "glusterc" created
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv/pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c 20Gi ROX Bound default/glusterc 2s
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/glusterc Bound pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c 0 3s
[hchiramm@dhcp35-111 ]$

Awesome! Based on the request, it created a PV and bound it to the PV claim!


[hchiramm@dhcp35-111 ]$ ./kubectl describe pv pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Name: pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Labels:
Status: Bound
Claim: default/glusterc
Reclaim Policy: Delete
Access Modes: ROX
Capacity: 20Gi
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: glusterfs-cluster
Path: vol_038b56756f4e3ab4b07a87494097941c
ReadOnly: false
No events.
[hchiramm@dhcp35-111 ]$

Verify that the volume exists in the backend:

[root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c

038b56756f4e3ab4b07a87494097941c

[root@ ~]#
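
For more detail than the bare listing, heketi can also describe the volume it created. A minimal sketch, assuming the same heketi-cli environment used above:

# show details of the volume that backs the PV (volume id taken from the PV's Path above)
heketi-cli volume info 038b56756f4e3ab4b07a87494097941c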

Let's delete the PV claim:


[hchiramm@dhcp35-111 ]$ ./kubectl delete pvc glusterc

persistentvolumeclaim "glusterc" deleted

[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc

[hchiramm@dhcp35-111 ]$

It got deleted! Verify it from the backend:


[root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c

[root@ ~]# 

Hereafter we can use the volume in application pods by referring to the claim name, as sketched below.
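
A minimal sketch of such a pod definition, assuming a hypothetical pod named "gluster-app" running the stock "nginx" image; the only part that matters here is the persistentVolumeClaim reference to the "glusterc" claim:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "gluster-app"
  },
  "spec": {
    "containers": [
      {
        "name": "app",
        "image": "nginx",
        "volumeMounts": [
          {
            "name": "glustervol",
            "mountPath": "/usr/share/nginx/html"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glustervol",
        "persistentVolumeClaim": {
          "claimName": "glusterc"
        }
      }
    ]
  }
}

Creating this with ./kubectl create -f, as with the other objects above, should give the pod the gluster volume that backs the claim, mounted at the given mountPath.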

I hope this is a nice feature to have! Please let me know if you have any comments/suggestions.

Also, the patch ( github.com/kubernetes/kubernetes/compare/master…humblec:gluster-wip-prov?expand=1 ) is under review upstream, as mentioned earlier, and hopefully it will soon make it into a Kubernetes release. I will provide an update here as soon as it is available upstream.
