(DRAFT)
Provisioning Ceph RBD on Kubernetes
We followed the guide on https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning
Configuration (Admin side)
Log on to the kubernetes-master node that hosts the Ceph admin credentials.
Read the file /etc/ceph/ceph.conf and take note of the ceph-mon addresses, e.g.:
mon host = 10.4.4.162:6789 10.4.4.58:6789 10.4.4.62:6789
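To avoid scanning the file by eye, the same line can be pulled out with a quick grep (a convenience sketch, not part of the original guide):

$ grep 'mon host' /etc/ceph/ceph.conf
mon host = 10.4.4.162:6789 10.4.4.58:6789 10.4.4.62:6789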
Read the file /etc/ceph/ceph.client.admin.keyring and take note of the client.admin key:
[client.admin]
    key = ABCD...
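Alternatively, the ceph CLI can print just the key; assuming the admin keyring is in place on this node, the following should return the same value:

$ ceph auth get-key client.admin
ABCD...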
Create the non-privileged user “kube” and the pool “kube” where the RBD images will be created and managed:
$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
The output of the ceph auth get-or-create command will be:
[client.kube]
    key = EFGH...
Take note of the client.kube key.
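Before moving on, it may be worth double-checking the new pool and client with a couple of read-only commands (a sketch; output formats vary slightly across Ceph releases):

$ ceph osd lspools           # the new "kube" pool should appear in the list
$ ceph auth get client.kube  # shows the key and the mon/osd caps granted above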
From the admin key value obtained at point 3 we will now create a Secret. The Ceph admin Secret must be created in the namespace defined in our StorageClass; in this example we've set the namespace to kube-system.
Log in to a kube client with admin credentials (e.g. kubernetes@10.4.0.212).
Issue the following command:
$ kubectl create secret generic ceph-secret-admin --from-literal=key='ABCD...' --namespace=kube-system --type=kubernetes.io/rbd
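Before referencing the Secret from the StorageClass, a quick check that it landed in the right namespace can save debugging later (our addition, not part of the original guide):

$ kubectl get secret ceph-secret-admin --namespace=kube-system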
Create the file rbd-storage-class.yaml, which describes the storage class for our cluster:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hdd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.4.4.162:6789,10.4.4.58:6789,10.4.4.62:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: "kube-system"
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
Here the userId and pool are the Ceph client and pool created at point 4.
Now create the storage class “hdd” defined in the file:
$ kubectl create -f rbd-storage-class.yaml
Check the status of the storage class:
$ kubectl describe StorageClass hdd
Name:               hdd
IsDefaultClass:     No
Annotations:        <none>
Provisioner:        kubernetes.io/rbd
Parameters:         adminId=admin,adminSecretName=ceph-secret-admin,adminSecretNamespace=kube-system,monitors=10.4.4.162:6789,10.4.4.58:6789,10.4.4.62:6789,pool=kube,userId=kube,userSecretName=ceph-secret-user
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>
Enable user namespaces (Admin side)
In order to enable users to create volumes, we need to add a Secret with the client.kube key in the user namespace.
Log on to the kubernetes-master node that hosts the Ceph admin credentials.
Issue the following command:
$ ceph auth get-or-create client.kube
The output of the ceph auth get-or-create command will be:
[client.kube]
    key = EFGH...
Copy the key value shown in the output above and paste it into the following command:
$ kubectl create secret generic ceph-secret-user --from-literal=key='EFGH...' --namespace=<USER_NAMESPACE> --type=kubernetes.io/rbd
where <USER_NAMESPACE> is the destination namespace.
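If the ceph CLI and kubectl are available on the same host (as on our kubernetes-master), the key retrieval and the Secret creation can be combined in a single command; this is a sketch under that assumption, with <USER_NAMESPACE> again left as a placeholder:

$ kubectl create secret generic ceph-secret-user \
    --from-literal=key="$(ceph auth get-key client.kube)" \
    --namespace=<USER_NAMESPACE> --type=kubernetes.io/rbd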
Volume claim example (user side)
Here is an example of how users can claim RBD volumes and use them in their deployments.
Create the following file claim.json where we define the volume name and size:
{ "kind": "PersistentVolumeClaim", "apiVersion": "v1", "metadata": { "name": "myvol" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "4Gi" } }, "storageClassName": "hdd" } }
Execute the command:
$ kubectl create -f claim.json
Check that the 4Gi volume “myvol” has been created:
$ kubectl get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myvol   Bound    pvc-81c512af-91a9-11e8-b43d-74e6e266c8e1   4Gi        RWO            hdd            17s
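Behind the scenes the provisioner also created a PersistentVolume and bound it to the claim; these read-only commands show the details (volume names will differ per cluster):

$ kubectl get pv
$ kubectl describe pvc myvol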
Create the file pod.yaml, which defines a ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: server
spec:
  replicas: 1
  selector:
    role: server
  template:
    metadata:
      labels:
        role: server
    spec:
      containers:
      - name: server
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: myvol
      volumes:
      - name: myvol
        persistentVolumeClaim:
          claimName: myvol
Execute the command:
$ kubectl create -f pod.yaml
This will create a server based on an nginx image, with the volume myvol mounted on /var/lib/www/html.
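As an aside, on recent Kubernetes versions a Deployment is the more idiomatic controller than a ReplicationController; an equivalent manifest might look like the sketch below. We have not tested this variant; note the Recreate strategy, which matters because the RBD volume is ReadWriteOnce, so a rolling update would leave the replacement pod waiting for the volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  strategy:
    type: Recreate        # avoid two pods contending for the RWO volume
  selector:
    matchLabels:
      role: server
  template:
    metadata:
      labels:
        role: server
    spec:
      containers:
      - name: server
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: myvol
      volumes:
      - name: myvol
        persistentVolumeClaim:
          claimName: myvol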
Check the creation of the pod:
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
server-fjbcw   1/1       Running   0          15s

$ kubectl describe pod server-fjbcw
Name:         server-fjbcw
Namespace:    colla
Node:         ba1-r2-s15/10.4.4.113
...
    Mounts:
      /var/lib/www/html from myvol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g98dg (ro)
...
Volumes:
  myvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myvol
    ReadOnly:   false
...
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               30m   default-scheduler        Successfully assigned colla/server2-fjbcw to ba1-r2-s15
  Normal  SuccessfulAttachVolume  30m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-81c512af-91a9-11e8-b43d-74e6e266c8e1"
  Normal  Pulling                 30m   kubelet, ba1-r2-s15      pulling image "nginx"
  Normal  Pulled                  30m   kubelet, ba1-r2-s15      Successfully pulled image "nginx"
  Normal  Created                 30m   kubelet, ba1-r2-s15      Created container
  Normal  Started                 30m   kubelet, ba1-r2-s15      Started container
Log on to the container and check that the volume is mounted:
$ kubectl exec -it server-fjbcw bash
root@server2-fjbcw:/#
root@server2-fjbcw:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/rbd0       3.9G  8.0M  3.8G   1% /var/lib/www/html
Now our pod has an RBD mount!
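As a final sanity check, data written to the volume should survive a pod restart; the following is a sketch (the replacement pod gets a new generated name, shown here as <NEW_POD_NAME>):

root@server2-fjbcw:/# echo hello > /var/lib/www/html/test.txt
root@server2-fjbcw:/# exit
$ kubectl delete pod server-fjbcw
$ kubectl get pods    # the ReplicationController starts a replacement pod
$ kubectl exec -it <NEW_POD_NAME> -- cat /var/lib/www/html/test.txt
hello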