Flexible Persistent Disks With Flocker and Google Compute Engine

Datetime: 2016-08-22 21:46:27          Topics: OpenSSL, Docker

In Flocker 1.11 we added support for Google Compute Engine (GCE). See our recent blog post for more information. This means that Flocker supports GCE persistent disks, and you can now run persistent-disk-enabled containers automatically on top of your GCE infrastructure.

What is GCE?

GCE allows you to run virtual machines on thousands of virtual CPUs, and it is designed to be fast and reliable for many types of workloads. GCE offers a number of capabilities, including block storage with varying levels of performance, as well as networking that lets you scale and keep your applications connected. Read more here.

Getting Started

Flocker does a great job of orchestrating data volumes around a cluster of machines and automatically moving those volumes between nodes when your containers move.

GCE provides Flocker with the machines and persistent disks, which it can then manage automatically for you and for the containers running on those machines.

Combining GCE infrastructure with a volume manager like Flocker gives you the ultimate flexibility for your persistent containerized workloads in a microservices environment.

The Flocker driver for GCE has the following features:

  • Support for account authentication (both via the VM and from authentication credentials).
  • Verified testing on large clusters.
  • Support for Flocker profiles (bronze, silver, gold): silver and gold are persistent disks backed by SSD, while bronze is a persistent disk backed by spinning disk (see the example below).
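
For instance, with the Flocker Docker plugin (used later in this walkthrough), a profile is requested as a driver option. The volume name and size here are illustrative:

$ docker volume create -d flocker --name fast-volume -o size=10G -o profile=gold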

You can read more about our integration with GCE and how to use it by visiting the GCE configuration section of our docs. Feel free to reach out on IRC or send an email to support@clusterhq.com.

Deploying Flocker On GCE

We will be using this repository for the following demo.

Here is an example of deploying Flocker 1.11 onto GCE with Ansible, using the new GCE driver together with Docker. Feel free to watch the following recording (no audio) if you want to see what it is like to get started with Flocker on GCE, or try it yourself in the step-by-step walkthrough below!

Walkthrough

The first thing you will need to do is create a GCE account. At the time of writing, you can receive a $300 credit for 60 days on GCE.

Second, create a workspace on your local machine.

$ mkdir ~/gce-demo

Pull down the demo repository from GitHub.

$ git clone https://github.com/ClusterHQ/gce-ansible-demo

Install the gcloud command-line tool. You can visit here for more on downloading and installing it.

On a Mac, this can be achieved with the following commands.

$ curl https://sdk.cloud.google.com | bash
$ exec -l $SHELL
$ gcloud init

You will need Python 2.7 and virtualenv installed (pip install virtualenv && pip install virtualenvwrapper), as well as flockerctl to interact with your cluster.

If you have a local Docker daemon, you can install flockerctl with the following command. You can also install it directly on Mac OS X if you would like.

$ curl -sSL https://get.flocker.io/ | sh
$ flockerctl --help
Usage: flockerctl [options]
Options:
      --cluster-yml=      Location of cluster.yml file (makes other options
                          unnecessary) [default: ./cluster.yml]
      --certs-path=       Path to certificates folder [default: .]
      --user=             Name of user for which .key and .crt files exist
                          [default: user]
      --cluster-crt=      Name of cluster cert file [default: cluster.crt]
      --control-service=  Hostname or IP of control service
      --control-port=     Port for control service REST API [default: 4523]
      --version           Display Twisted version and exit.
      --help              Display this help and exit.
Commands:
    create          create a flocker dataset
    destroy         mark a dataset to be deleted
    list            list flocker datasets
    list-nodes      show list of nodes in the cluster
    ls              list flocker datasets
    move            move a dataset from one node to another
    status          show list of nodes in the cluster
    version         show version information

Next, enter the directory of the repository pulled from GitHub and create a virtual environment.

$ cd gce-ansible-demo
$ virtualenv ./virtual-env
$ source ./virtual-env/bin/activate
(virtual-env)$

Next, login to GCE and set some environment variables.

$ gcloud auth login

# The name of the GCP project in which to bring up the instances.
$ export PROJECT=<gcp-project-for-instances>

# The name of the zone in which to bring up the instances.
$ export ZONE=<gcp-zone-for-instances>

# A tag to add to the names of each of the instances.
# Must be all lowercase letters or dashes.
# This is used so you can identify the instances used in this tutorial.
$ export TAG=<my-gce-test-string>

# The number of nodes to put in the cluster you are bringing up.
$ export CLUSTER_SIZE=<number-of-nodes>

# Double check all environment variables are set correctly.
$ for instance in $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE); do echo "Will create: $PROJECT/$ZONE/$instance"; done

Example output:

$ export PROJECT=gce-demo-test
$ export ZONE=us-east1-c
$ export TAG=gce-demo
$ export CLUSTER_SIZE=3
$ for instance in $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE); do echo "Will create: $PROJECT/$ZONE/$instance"; done
Will create:  gce-demo-test/us-east1-c/instance-gce-demo-1
Will create:  gce-demo-test/us-east1-c/instance-gce-demo-2
Will create:  gce-demo-test/us-east1-c/instance-gce-demo-3

Next, create a firewall and launch your instances.

Note: in the gcloud compute instances create command, the --scopes https://www.googleapis.com/auth/compute flag is what gives our VMs permission to create and delete volumes, so we can skip adding specific credentials to the agent.yml.
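
If you want to double-check the scopes once an instance is up, gcloud can print the instance's service accounts (the --format projection here is an assumption about your gcloud version; plain describe output works too):

# Show the service accounts (including OAuth scopes) attached to an instance.
$ gcloud compute instances describe instance-${TAG}-1 \
  --project $PROJECT --zone $ZONE \
  --format="yaml(serviceAccounts)"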

# Create firewall.
$ gcloud compute firewall-rules create \
  allow-all-incoming-traffic \
  --allow tcp \
  --target-tags incoming-traffic-permitted \
  --project $PROJECT

# Launch the instances.
$ gcloud compute instances create \
  $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE) \
  --image ubuntu-14-04 \
  --project $PROJECT \
  --zone $ZONE \
  --machine-type n1-standard-1 \
  --tags incoming-traffic-permitted \
  --scopes https://www.googleapis.com/auth/compute

Created [https://www.googleapis.com/compute/v1/projects/clusterhq-acceptance/zones/us-east1-c/instances/instance-gce-demo-1].
Created [https://www.googleapis.com/compute/v1/projects/clusterhq-acceptance/zones/us-east1-c/instances/instance-gce-demo-3].
Created [https://www.googleapis.com/compute/v1/projects/clusterhq-acceptance/zones/us-east1-c/instances/instance-gce-demo-2].
NAME                 ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
instance-gce-demo-1  us-east1-c  n1-standard-1               10.240.0.15  104.123.123.123  RUNNING
instance-gce-demo-3  us-east1-c  n1-standard-1               10.240.0.14  104.123.123.124  RUNNING
instance-gce-demo-2  us-east1-c  n1-standard-1               10.240.0.16  104.123.123.125  RUNNING

Next, configure SSH access to your VMs.

$ gcloud compute config-ssh --project $PROJECT
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):

There are many ways to install Flocker on a cluster of nodes; for the sake of this tutorial we are using ClusterHQ's Ansible Galaxy role. If you already use Ansible Galaxy, this provides a nice way to install Flocker in your existing system. If you are not interested in using the Ansible role to install Flocker, you can read our installation docs on how to install Flocker and skip down to the steps after Ansible. The Ansible Galaxy role simply automates some of the steps.

Install the requirements for setting up a Flocker cluster with Ansible. This involves pip-installing Flocker (for flocker-ca) and Ansible (for ansible-galaxy), as well as fetching the roles from Ansible Galaxy that install Docker and Flocker on the nodes.

Here we go: install the needed tools from within your virtual environment.

Note: You can also install flocker-ca using another technique here instead of using pip to install the .whl.

$ pip install ansible
$ pip install https://clusterhq-archive.s3.amazonaws.com/python/Flocker-1.11.0-py2-none-any.whl

# You may see a compile error such as:
#   #include <openssl/aes.h>
#   ^
#   1 error generated.
# On Mac OS X, you may need to install and link openssl for the above to succeed:
#   brew install openssl
#   brew link openssl --force
# You may also need to set LDFLAGS, e.g. LDFLAGS="-L/usr/local/opt/openssl/lib" pip install ...

$ ansible-galaxy install marvinpinto.docker -p ./roles
$ ansible-galaxy install ClusterHQ.flocker -p ./roles

Next, use the provided script in the repository to help create an inventory and agent.yml for Flocker.

Note: 104.123.123.123 and the other addresses shown are fake; your IPs will look different.

$ gcloud compute instances list \
  $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE) \
  --project $PROJECT  --zone $ZONE --format=json | \
  python create_inventory.py
# Inspect the results of those commands:
$ cat ansible_inventory
[flocker_control_service]
104.123.123.123

[flocker_agents]
104.123.123.123
104.123.123.124
104.123.123.125

[nodes:children]
flocker_control_service
flocker_agents

Note: this is the exact agent.yml transferred to our VMs. It's no trick that we are not adding credentials to the dataset portion of the YAML: because we used --scopes https://www.googleapis.com/auth/compute when creating the instances, we don't need to.

$ cat agent.yml
version: 1

control-service:
  hostname: 104.123.123.123
  port: 4524

dataset:
  backend: gce

# Note the control node's IP address and save it in an environment variable.
$ export CONTROL_NODE=<control-node-ip-from-agent-yml>
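
If you would rather not copy it by hand, a quick one-liner (a convenience, not part of the original demo) can pull it out of agent.yml:

# Extract the control-service hostname from agent.yml.
$ export CONTROL_NODE=$(awk '/hostname:/ {print $2}' agent.yml)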

Next, we can install Flocker on our GCE nodes using the Ansible playbook.

$ ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook \
  --key-file ~/.ssh/google_compute_engine \
  -i ./ansible_inventory \
  ./gce-flocker-installer.yml  \
  --extra-vars "flocker_agent_yml_path=${PWD}/agent.yml"

PLAY ***************************************************************************

TASK [setup] *******************************************************************
ok: [104.123.123.123]
ok: [104.123.123.124]
ok: [104.123.123.125]

TASK [marvinpinto.docker : Install apt-transport-https] ************************
ok: [104.123.123.123]
ok: [104.123.123.124]
ok: [104.123.123.125]
.
.
[output snipped]
.
.
PLAY RECAP *********************************************************************
104.123.123.123            : ok=36   changed=14   unreachable=0    failed=0
104.123.123.124            : ok=36   changed=14   unreachable=0    failed=0
104.123.123.125            : ok=58   changed=33   unreachable=0    failed=0

Note: if you see errors, you can try re-running the Ansible playbook. If there are errors specifically around openssl or cryptography and you are on Mac OS X, you will likely need to add LDFLAGS="-L/usr/local/opt/openssl/lib" as mentioned above when you pip install Flocker.

Next, you should be able to get the status of your Flocker cluster using flockerctl .

$ flockerctl --user api_user \
   --control-service $CONTROL_NODE \
   --certs-path ${PWD}/certs \
   list-nodes
SERVER     ADDRESS
db0ed24c   10.240.0.15
1d18cb64   10.240.0.16
47308038   10.240.0.14

Next, we can create volumes and attach them to our nodes.

$ flockerctl --user api_user \
   --control-service $CONTROL_NODE \
   --certs-path ${PWD}/certs \
   create -m "name=gce-volume" -s 10G -n 47308038

$ flockerctl --user api_user \
   --control-service $CONTROL_NODE \
   --certs-path ${PWD}/certs \
   ls
DATASET                                SIZE     METADATA          STATUS         SERVER
bab52077-20c4-4e93-84c6-c56ee0206dd7   10.00G   name=gce-volume   attached ✅   47308038 (10.240.0.14)
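
Because this is Flocker, the dataset is not pinned to that node: moving it to another node from the list-nodes output is a single command. The flag names below are an assumption; check flockerctl move --help for the exact syntax your version expects.

# Hypothetical invocation of the move command shown in `flockerctl --help`.
$ flockerctl --user api_user \
   --control-service $CONTROL_NODE \
   --certs-path ${PWD}/certs \
   move --dataset=bab52077-20c4-4e93-84c6-c56ee0206dd7 --destination=1d18cb64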

We can see our volume in the GCE Console as well.
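
You can also confirm the disk exists from the command line; filtering on "flocker" assumes the driver includes it in the disk name, which may vary:

# List the project's GCE persistent disks and look for the Flocker-created one.
$ gcloud compute disks list --project $PROJECT | grep -i flocker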

You can also log in to one of the GCE nodes and use Docker to create a volume.

# First upload the API certificates to a node you want to use.
$ scp -i ~/.ssh/google_compute_engine certs/api_user.* ubuntu@104.123.123.125:/home/ubuntu/

# Login to that node.
$ ssh -i ~/.ssh/google_compute_engine ubuntu@104.123.123.125
$ sudo su

# Place the certificates in the right directory and start the plugin.
$ cp /home/ubuntu/api_user.crt /etc/flocker/plugin.crt
$ cp /home/ubuntu/api_user.key /etc/flocker/plugin.key
$ service flocker-docker-plugin start

# Next, create a volume.
$ docker volume create -d flocker --name gce-demo-volume -o size=20G -o profile=gold
gce-demo-volume

$ docker volume ls
DRIVER              VOLUME NAME
flocker             gce-demo-volume

# See the volume on the host.
$ df -h | grep flocker
/dev/sdb         20G   44M   19G   1% /flocker/cf7830e5-4770-4788-b30a-a9d6ee1ff17f
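
With the volume created through the plugin, any container can mount it by name. For example (an extra step beyond the original demo):

# Mount the Flocker-backed volume into a throwaway container.
$ docker run --rm -v gce-demo-volume:/data busybox df -h /data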

Next, we can destroy our dataset and clean up our cluster.

# Destroy the dataset.
$ flockerctl --user api_user \
   --control-service $CONTROL_NODE \
   --certs-path ${PWD}/certs \
   destroy --dataset=bab52077-20c4-4e93-84c6-c56ee0206dd7

# Make sure it's deleted.
$ flockerctl --user api_user \
   --control-service $CONTROL_NODE \
   --certs-path ${PWD}/certs \
   ls
DATASET   SIZE   METADATA   STATUS   SERVER

# Remove the SSH configuration added earlier.
$ gcloud compute config-ssh --project $PROJECT --remove

# Delete instances.
$ gcloud compute instances delete \
   $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE) \
   --project $PROJECT \
   --zone $ZONE
The following instances will be deleted. Attached disks configured to
be auto-deleted will be deleted unless they are attached to any other
instances. Deleting a disk is irreversible and any data on the disk
will be lost.
 - [instance-gce-demo-1] in [us-east1-c]
 - [instance-gce-demo-2] in [us-east1-c]
 - [instance-gce-demo-3] in [us-east1-c]
 Do you want to continue (Y/n)?  Y
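
Note that deleting the instances does not remove the firewall rule created at the start of the walkthrough, so delete it separately:

# Delete the firewall rule created earlier.
$ gcloud compute firewall-rules delete allow-all-incoming-traffic --project $PROJECT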

Where to Go From Here

If you would like to use GCE with Flocker, you can visit our documentation to get started.

We’d love to hear your feedback!




