How to install Ceph Object Storage with the Rados Gateway using Docker on a single VM

Datetime: 2016-08-23 04:07:28          Topic: Ceph, Docker

For this tutorial on using Ceph with Docker in a single virtual machine you will need:

  • Vagrant
  • Virtualbox
  • VMware (only if you wish to port the VM to VMware)

VAGRANT

Create an empty directory to hold the Vagrant files and the VM:

mkdir ubuntu-ceph

Change into the directory you just created:

cd ubuntu-ceph

Fetch and boot a clean Ubuntu image:

vagrant init ubuntu/trusty64; vagrant up --provider virtualbox

Now let’s SSH in:

vagrant ssh

Sudo to gain root privileges:

sudo -i

Install Docker:

apt-get install docker.io

Make Docker start automatically at boot:

update-rc.d docker enable

Now let’s add a second NIC, which will be used by Ceph’s internal network.

Go to the VirtualBox GUI and select your VM (its name will resemble your directory name). Shut it down.

Click Settings -> Network -> Adapter 2 -> Enable Network Adapter.

Attach it to a Host-Only Adapter (it does not matter which one).

We also need to add a new hard drive, which will be used for our OSD.

Go to Settings -> Storage -> Add new hard drive (1 GB is enough for testing purposes, but configure as you see fit).
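If you prefer the command line, the same GUI steps can be done from the host with VBoxManage. This is only a sketch: the VM name and the storage controller name are assumptions, so check yours first with `VBoxManage list vms` and `VBoxManage showvminfo <name>`.

```shell
# Run these on the HOST, not inside the VM.
VM="ubuntu-ceph_default"            # hypothetical name; yours will differ

# Power the VM off before changing its hardware
VBoxManage controlvm "$VM" poweroff

# Enable a second NIC attached to a host-only adapter
VBoxManage modifyvm "$VM" --nic2 hostonly --hostonlyadapter2 vboxnet0

# Create a 1 GB disk for the OSD and attach it as a second SATA device
VBoxManage createhd --filename ceph-osd.vdi --size 1024
VBoxManage storageattach "$VM" --storagectl "SATAController" \
    --port 1 --device 0 --type hdd --medium ceph-osd.vdi

# Boot the VM again
VBoxManage startvm "$VM" --type headless
```
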

After that, start your VM, SSH in, and sudo to the root user.

Check that the new NIC is available:

ifconfig eth1

You should see something similar to the below:

eth1      Link encap:Ethernet  HWaddr 08:00:27:53:4d:6b
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Create a config file for this interface to assign the IP address (note that the network and broadcast values must match the 192.168.99.0/24 subnet of the address):

vim /etc/network/interfaces.d/eth1.cfg
auto eth1
iface eth1 inet static
    address 192.168.99.99
    network 192.168.99.0
    netmask 255.255.255.0
    broadcast 192.168.99.255

Now let’s bring up interface eth1:

ifup eth1
ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:53:4d:6b
          inet addr:192.168.99.99  Bcast:192.168.99.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe53:4d6b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:150 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:20991 (20.9 KB)  TX bytes:578 (578.0 B)

It has an assigned IP, and this interface is host-only.

DOCKER

Once you’ve got eth1 configured properly, we’re ready to start working with Docker.

Let’s create our monitor (mon) container:

docker run -d --restart=always --net=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -e MON_IP=192.168.99.99 -e CEPH_PUBLIC_NETWORK=192.168.99.0/24 ceph/daemon mon
root@vagrant-ubuntu-trusty-64:~# docker run -d --restart=always --net=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -e MON_IP=192.168.99.99 -e CEPH_PUBLIC_NETWORK=192.168.99.0/24  ceph/daemon mon
Unable to find image 'ceph/daemon:latest' locally
latest: Pulling from ceph/daemon
5b270125d016: Pull complete
...
aa666a9848ed: Pull complete
Digest: sha256:26b7b1ef482d5435534f362dbfbe54d24270436b23b9f711ac05a77b1f9b7395
Status: Downloaded newer image for ceph/daemon:latest
c26f2e9e948b43878334bc764a9b7761044757321d243742c7085b8ca69fffe0
root@vagrant-ubuntu-trusty-64:~# docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS                          PORTS               NAMES
c26f2e9e948b        ceph/daemon:latest   "/entrypoint.sh mon"   50 seconds ago      Restarting (1) 23 seconds ago                       stupefied_goldstine

c26f2e9e948b is the ID of our Docker container.

Let’s stop our container and change some default settings according to our needs.

docker stop c26f2e9e948b

And edit the ceph config file.

vim /etc/ceph/ceph.conf

At the end of the file, add these two lines:

osd pool default size = 1
osd pool default min_size = 1

This disables replication, which is set to 3 copies by default (we will use just one OSD).

Now you can again start your container:

docker start c26f2e9e948b

Now let’s start our OSD container.

docker run -d --net=host \
--pid=host \
--restart=always \
--privileged=true \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-e OSD_DEVICE=/dev/sdb \
-e OSD_FORCE_ZAP=1 \
ceph/daemon osd
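Once the OSD container is up, you can optionally confirm that the OSD registered with the monitor (run inside the VM as root, using the mon container ID from earlier):

```shell
# Should list one OSD in the CRUSH tree, up and in
docker exec c26f2e9e948b ceph osd tree
```
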

After that, we need to change the pool settings for the default pool, which was created before our ceph.conf change took effect.

root@vagrant-ubuntu-trusty-64:~# docker exec c26f2e9e948b ceph osd lspools 
0 rbd,

root@vagrant-ubuntu-trusty-64:~# docker exec c26f2e9e948b ceph osd pool set rbd size 1
set pool 0 size to 1
root@vagrant-ubuntu-trusty-64:~# docker exec c26f2e9e948b ceph osd pool set rbd min_size 1
set pool 0 min_size to 1


root@vagrant-ubuntu-trusty-64:~# docker exec c26f2e9e948b ceph -s
cluster c2ebac60-3762-41cf-904d-b0700c73aba8
 health HEALTH_OK
 monmap e1: 1 mons at {vagrant-ubuntu-trusty-64=192.168.99.99:6789/0}
        election epoch 1, quorum 0 vagrant-ubuntu-trusty-64
 osdmap e8: 1 osds: 1 up, 1 in
        flags sortbitwise
  pgmap v12: 64 pgs, 1 pools, 0 bytes data, 0 objects
        34128 kB used, 884 MB / 918 MB avail
              64 active+clean

As you can see, the status shows HEALTH_OK, so we can proceed.

The last step is to start the Ceph RADOS Gateway (RGW):

docker run -d --net=host \
--restart=always \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /etc/ceph:/etc/ceph \
ceph/daemon rgw

Wait a few seconds for the RGW to start up. Then you can check that it is working:

root@vagrant-ubuntu-trusty-64:~# curl localhost:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
root@vagrant-ubuntu-trusty-64:~#
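Rather than waiting a fixed few seconds, you can poll the gateway until it responds. A minimal sketch, assuming curl is installed and the default port 8080:

```shell
# Poll the RADOS gateway until it answers, giving up after ~10 seconds.
RGW_URL=${RGW_URL:-http://localhost:8080}
for i in $(seq 1 10); do
    if curl -s -o /dev/null "$RGW_URL"; then
        echo "rgw is up"
        break
    fi
    sleep 1
done
```
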

RADOSGW

Check the container ID for the radosgw.

root@vagrant-ubuntu-trusty-64:~# docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              PORTS               NAMES
c0252978d13a        ceph/daemon:latest   "/entrypoint.sh rgw"   About an hour ago   Up About an hour                        sleepy_wozniak
b407d1ed0841        ceph/daemon:latest   "/entrypoint.sh osd"   About an hour ago   Up About an hour                        tender_babbage
c26f2e9e948b        ceph/daemon:latest   "/entrypoint.sh mon"   About an hour ago   Up About an hour                        stupefied_goldstine

It is c0252978d13a in my case.

We need to create a user for the radosgw.

docker exec CONTAINER_ID radosgw-admin user create --uid="SME" --display-name="SME"
root@vagrant-ubuntu-trusty-64:~# docker exec c0252978d13a radosgw-admin user create --uid="SME" --display-name="SME"
{
    "user_id": "SME",
    "display_name": "SME",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "SME",
            "access_key": "ARFYM6XCP2OGDZ1GLET1",
            "secret_key": "IW8JQUMsUAxfdCVgt8ghddX22HeHihcFLbSll6fa"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}

As you can see, the output contains the access_key and secret_key, which you should save somewhere safe (they can be retrieved later with radosgw-admin user info --uid=SME, but it is best to record them now).
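If you want to capture the keys in shell variables instead of copying them by hand, you can parse the JSON with python3. A sketch: here we parse a saved sample mirroring the output above; in the VM you would pipe the radosgw-admin output in directly.

```shell
# Save a copy of the JSON that radosgw-admin printed (sample from above).
cat > user.json <<'EOF'
{"keys": [{"user": "SME",
           "access_key": "ARFYM6XCP2OGDZ1GLET1",
           "secret_key": "IW8JQUMsUAxfdCVgt8ghddX22HeHihcFLbSll6fa"}]}
EOF

# Extract the first key pair into shell variables.
ACCESS_KEY=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["access_key"])' < user.json)
SECRET_KEY=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["secret_key"])' < user.json)
echo "access_key=$ACCESS_KEY"
```
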

You can register and download the SME Appliance Trial and use the Storage Made Easy OpenS3 provider with the Rados S3 Gateway.
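Any S3-compatible client should also work against the gateway. For example, with s3cmd (not part of the original setup; install it with apt-get install s3cmd and substitute your own keys) you could create a bucket:

```shell
# Create a test bucket through the local RADOS gateway
s3cmd --access_key="ARFYM6XCP2OGDZ1GLET1" \
      --secret_key="IW8JQUMsUAxfdCVgt8ghddX22HeHihcFLbSll6fa" \
      --host="localhost:8080" \
      --host-bucket="localhost:8080" \
      --no-ssl \
      mb s3://test-bucket
```
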

VMWARE

To use our Ceph-on-Docker setup on VMware, we need to make a package.

Go to the directory where your VM is located (from the first step):

cd ubuntu-ceph

And issue the “vagrant package” command:

vagrant package

It will stop your VM and make a package of it.

$ vagrant package
==> default: Attempting graceful shutdown of VM...
==> default: Clearing any previously set forwarded ports...
==> default: Exporting VM...
==> default: Compressing package to: /Users/evilroot/vag-new/package.box

It will create a package.box file.

Let’s create a directory called vmware.

mkdir vmware

and move our package.box file there

mv package.box vmware

Now we can unarchive this file:

cd vmware; tar zxvf package.box

It will produce a couple of files:

$ tar zxvf package.box
x ./box-disk1.vmdk
x ./box-disk2.vmdk
x ./box.ovf
x ./vagrant_private_key
x ./Vagrantfile

Now you can import box.ovf into VMware. Everything should be working once the import is complete.
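If you prefer the command line, VMware’s ovftool (shipped with VMware products, or downloadable separately) can convert the OVF directly. A sketch; the output path is an assumption:

```shell
# Convert the exported OVF into a VMX-based VM
ovftool box.ovf ceph-vm/ceph.vmx
```
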

If you want to skip the steps and simply download the Ceph VM to run, you can do so here.

The login for this VM is vagrant/vagrant and the S3 keys are:

Key: HO53FN1FRT0X6C6WDTI3

Secret key: PW9fAOSyrh3JCEKZmTQQ5OXr5JqLGQaIYWBrxFzB

The endpoint URL is the assigned IP address on port 8080, e.g. http://172.16.5.179:8080. If you access this from a browser, you will see an XML response document from the S3 service.

Here ends the tutorial!




