Kubernetes Deployment Test¶
Ensure you have a proper configuration file (typically in ~/.kube/config) containing the application credentials created from the OpenStack dashboard.
Check that kubectl is properly configured by issuing:
$ kubectl get pods
No resources found.
Quick Tutorial¶
Create an App¶
Let’s run our first app on Kubernetes with the kubectl create deployment command. We need to provide the deployment name and the app image location (include the full repository URL for images hosted outside Docker Hub):
$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
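The same deployment can also be created declaratively with a manifest and kubectl apply. A minimal sketch of an equivalent manifest (the app=kubernetes-bootcamp label key is our choice here; depending on the kubectl version, the imperative command may generate run=kubernetes-bootcamp instead, as seen later in this tutorial):

```yaml
# Sketch of a declarative equivalent of the create deployment command.
# Label key "app" is an assumption; older tooling generates "run" labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080    # port the bootcamp app listens on
```

Saved as deployment.yaml, it would be applied with kubectl apply -f deployment.yaml.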
To list your deployments use the get deployments command:
$ kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           6m
We see that there is 1 deployment running a single instance of your app. Pods running inside Kubernetes are on a private, isolated network: by default they are visible to other pods and services within the same Kubernetes cluster, but not from outside that network. When we use kubectl, we interact with our application through an API endpoint.
Start kubectl proxy:
$ kubectl proxy
The kubectl proxy listens on port 8001 by default, so we can simply do:
$ curl http://localhost:8001/version
{
"major": "1",
"minor": "9",
"gitVersion": "v1.9.2",
"gitCommit": "5fa2db2bd46ac79e5e00a4e6ed24191080aa463b",
"gitTreeState": "clean",
"buildDate": "2018-01-18T09:42:01Z",
"goVersion": "go1.9.2",
"compiler": "gc",
"platform": "linux/amd64"
}
The API server will automatically create an endpoint for each pod, based on the pod name, that is also accessible through the proxy.
Explore the App¶
We’ll use the kubectl get command and look for existing Pods:
$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-86647cdf87-z7stb   1/1     Running   0          3m
Now we can make an HTTP request to the application running in that pod (N.B. set $NAMESPACE to the namespace defined in your ~/.kube/config file):
$ POD_NAME=kubernetes-bootcamp-86647cdf87-z7stb
$ curl http://localhost:8001/api/v1/namespaces/$NAMESPACE/pods/$POD_NAME/
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kubernetes-bootcamp-86647cdf87-z7stb",
"generateName": "kubernetes-bootcamp-86647cdf87-",
"namespace": "colla",
"selfLink": "/api/v1/namespaces/colla/pods/kubernetes-bootcamp-86647cdf87-z7stb",
"uid": "063cdfdc-90c7-11e8-8674-74e6e266c8e1",
"resourceVersion": "2279308",
"creationTimestamp": "2018-07-26T11:28:31Z",
"labels": {
"pod-template-hash": "4220378943",
"run": "kubernetes-bootcamp"
},
"ownerReferences": [
{
"apiVersion": "apps/v1",
"kind": "ReplicaSet",
"name": "kubernetes-bootcamp-86647cdf87",
"uid": "063bc686-90c7-11e8-8674-74e6e266c8e1",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-g98dg",
"secret": {
"secretName": "default-token-g98dg",
"defaultMode": 420
}
}
...
Verify the status of the pod:
$ kubectl describe pod $POD_NAME
Name: kubernetes-bootcamp-86647cdf87-z7stb
Namespace: colla
Node: ba1-r3-s05/10.4.4.119
Start Time: Thu, 26 Jul 2018 13:28:31 +0200
Labels: pod-template-hash=4220378943
run=kubernetes-bootcamp
Annotations: <none>
Status: Running
IP: 10.111.4.7
...
You can retrieve the pod logs using the kubectl logs command:
$ kubectl logs $POD_NAME
Kubernetes Bootcamp App Started At: 2018-02-10T19:02:29.336Z | Running On: kubernetes-bootcamp-5d7f968ccb-8ngld
You can execute commands directly on the container once the Pod is up and running. For this, we use the exec command and use the name of the Pod as a parameter. Let’s list the environment variables:
$ kubectl exec $POD_NAME -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubernetes-bootcamp-86647cdf87-z7stb
KUBERNETES_SERVICE_HOST=10.152.183.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=6.3.1
HOME=/root
Next let’s start a bash session in the Pod’s container:
$ kubectl exec -ti $POD_NAME -- bash
We now have an open console on the container where we run our NodeJS application. The source code of the app is in the server.js file:
root@kubernetes-bootcamp-86647cdf87-z7stb:/# cat server.js
var http = require('http');
var requests=0;
var podname= process.env.HOSTNAME;
var startTime;
var host;
var handleRequest = function(request, response) {
response.setHeader('Content-Type', 'text/plain');
response.writeHead(200);
response.write("Hello Kubernetes bootcamp! | Running on: ");
response.write(host);
response.end(" | v=1\n");
console.log("Running On:" ,host, "| Total Requests:", ++requests,"| App Uptime:", (new Date() - startTime)/1000 , "seconds", "| Log Time:",new Date());
}
var www = http.createServer(handleRequest);
www.listen(8080,function () {
startTime = new Date();;
host = process.env.HOSTNAME;
console.log ("Kubernetes Bootcamp App Started At:",startTime, "| Running On: " ,host, "\n" );
});
You can check that the application is up by running a curl command:
$ curl localhost:8080
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-86647cdf87-z7stb | v=1
To close your container connection, type exit.
Expose the Service¶
We have a Service called kubernetes that is created by default in the cluster. To create a new service and expose it to external traffic we’ll use the expose command with LoadBalancer as the type:
$ kubectl expose deployment/kubernetes-bootcamp --type="LoadBalancer" --port 8080
service "kubernetes-bootcamp" exposed
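The expose command can also be written as a Service manifest. A minimal sketch of the equivalent object (the run=kubernetes-bootcamp selector matches the label the Deployment applied to its Pods in this cluster):

```yaml
# Sketch of a declarative equivalent of the expose command.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: LoadBalancer          # ask the platform for an external IP
  selector:
    run: kubernetes-bootcamp  # route traffic to Pods carrying this label
  ports:
  - port: 8080                # port exposed by the Service
    targetPort: 8080          # port the container listens on
```
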
Let’s run the get services command:
$ kubectl get services
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
kubernetes-bootcamp   LoadBalancer   10.152.183.172   90.147.161.37   8080:30850/TCP   3s
To find out which port was opened externally (a LoadBalancer Service also allocates a NodePort) we’ll run the describe service command:
$ kubectl describe services/kubernetes-bootcamp
Name: kubernetes-bootcamp
Namespace: pisa
Labels: run=kubernetes-bootcamp
Annotations: <none>
Selector: run=kubernetes-bootcamp
Type: LoadBalancer
IP: 10.152.183.172
LoadBalancer Ingress: 90.147.161.37
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30850/TCP
Endpoints: 10.111.4.116:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type    Reason       Age    From                Message
----    ------       ----   ----                -------
Normal  IPAllocated  2m27s  metallb-controller  Assigned IP "90.147.161.37"
Now we can test that the app is exposed outside of the cluster using curl with the external IP and port (modify to match your values):
$ SERVICEIP=90.147.161.37
$ SERVICEPORT=8080
$ curl $SERVICEIP:$SERVICEPORT
The Deployment automatically created a label for our Pod. With the describe deployment command you can see the name of that label:
$ kubectl describe deployments/kubernetes-bootcamp
Name: kubernetes-bootcamp
Namespace: pisa
CreationTimestamp: Thu, 10 Jan 2019 15:02:15 +0000
Labels: run=kubernetes-bootcamp
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=kubernetes-bootcamp
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=kubernetes-bootcamp
Containers:
kubernetes-bootcamp:
Image: docker.io/jocatalin/kubernetes-bootcamp:v1
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: kubernetes-bootcamp-86647cdf87 (1/1 replicas created)
Events:
Type    Reason             Age    From                   Message
----    ------             ----   ----                   -------
Normal  ScalingReplicaSet  6m51s  deployment-controller  Scaled up replica set kubernetes-bootcamp-86647cdf87 to 1
Let’s use this label to query our list of Pods. We’ll use the kubectl get pods command with -l as a parameter, followed by the label value:
$ kubectl get pods -l run=kubernetes-bootcamp
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-86647cdf87-z7stb   1/1     Running   0          1h
You can do the same to list the existing services:
$ kubectl get services -l run=kubernetes-bootcamp
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
kubernetes-bootcamp   LoadBalancer   10.152.183.172   90.147.161.37   8080:30850/TCP   6m
To apply a new label we use the label command followed by the object type, object name and the new label:
$ kubectl label pod $POD_NAME app=v1
pod "kubernetes-bootcamp-86647cdf87-z7stb" labeled
We can query now the list of pods using the new label:
$ kubectl get pods -l app=v1
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-86647cdf87-z7stb   1/1     Running   0          39m
To delete Services you can use the delete service command. Labels can also be used here:
$ kubectl delete service -l run=kubernetes-bootcamp
You can confirm that the app is still running with a curl inside the pod:
$ kubectl exec -ti $POD_NAME -- curl localhost:8080
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-86647cdf87-z7stb | v=1
Scaling the App¶
Let’s scale the Deployment to 4 replicas. We’ll use the kubectl scale command, followed by the deployment type, name and desired number of instances:
$ kubectl scale deployments/kubernetes-bootcamp --replicas=4
deployment "kubernetes-bootcamp" scaled
We have 4 instances of the application available. Next, let’s check if the number of Pods changed:
$ kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE
kubernetes-bootcamp-86647cdf87-254vq   1/1     Running   0          20s   10.111.54.11   ba1-r3-s04
kubernetes-bootcamp-86647cdf87-nqzkk   1/1     Running   0          20s   10.111.37.5    ba1-r2-s15
kubernetes-bootcamp-86647cdf87-qxgng   1/1     Running   0          20s   10.111.4.8     ba1-r3-s05
kubernetes-bootcamp-86647cdf87-z7stb   1/1     Running   0          1h    10.111.4.7     ba1-r3-s05
We’ll do a curl to the exposed IP and port. Execute the command multiple times:
$ curl $SERVICEIP:$SERVICEPORT
We hit a different Pod with every request. This demonstrates that the load-balancing is working.
To scale down the Deployment to 2 replicas, run the scale command again:
$ kubectl scale deployments/kubernetes-bootcamp --replicas=2
deployment "kubernetes-bootcamp" scaled
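Scaling can also be expressed declaratively: the replica count is just the spec.replicas field of the Deployment, so editing it in the manifest and re-applying has the same effect. A fragment for illustration:

```yaml
# Fragment of the Deployment manifest: only spec.replicas changes
# when scaling (shown here for 2 replicas).
spec:
  replicas: 2
```
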
Perform a Rolling Update¶
Rolling updates allow a Deployment’s update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.
To update the image of the application to version 2, use the set image command, followed by the deployment name and the new image version:
$ kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
deployment "kubernetes-bootcamp" image updated
Check the status of the new Pods, and view the old one terminating with the get pods command:
$ kubectl get pods
The update can be confirmed by running a rollout status command:
$ kubectl rollout status deployments/kubernetes-bootcamp
deployment "kubernetes-bootcamp" successfully rolled out
Check which version of the image is used by the deployment:
$ kubectl get deployments -o wide
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS            IMAGES                             SELECTOR
kubernetes-bootcamp   2         2         2            2           16m   kubernetes-bootcamp   jocatalin/kubernetes-bootcamp:v2   run=kubernetes-bootcamp
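The same image change can be made declaratively by editing the container image in the Deployment’s Pod template and re-applying the manifest; changing the Pod template is what triggers the rolling update. A fragment for illustration:

```yaml
# Fragment of the Deployment manifest: the rolling update is
# triggered by changing the Pod template's container image.
spec:
  template:
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: jocatalin/kubernetes-bootcamp:v2
```
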
Or run a describe command against the Pods (N.B. we cannot use $POD_NAME because that pod was terminated during scale-down):
$ kubectl describe pod kubernetes-bootcamp-b9cdd8865-djstf
Name: kubernetes-bootcamp-b9cdd8865-djstf
Namespace: colla
Node: ba1-r3-s04/10.4.4.118
Start Time: Thu, 26 Jul 2018 15:05:49 +0200
Labels: pod-template-hash=657884421
run=kubernetes-bootcamp
Annotations: <none>
Status: Running
IP: 10.111.54.12
Controlled By: ReplicaSet/kubernetes-bootcamp-b9cdd8865
Containers:
kubernetes-bootcamp:
Container ID: docker://3acefe4b62d3cf1ad41e65951029bb1aa98951083ef7431cb3d3ce067119ee53
Image: jocatalin/kubernetes-bootcamp:v2
Persistent Volume Claim Example¶
Here is an example of how to claim RBD volumes and use them on a deployment.
Create the following file claim.json defining the volume name and size:
{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "myvol"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "4Gi"
            }
        },
        "storageClassName": "hdd"
    }
}
Note the storageClassName parameter, which for our platform is “hdd”.
Execute the command:
$ kubectl create -f claim.json
Check that the 4Gi volume “myvol” has been created:
$ kubectl get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myvol   Bound    pvc-81c512af-91a9-11e8-b43d-74e6e266c8e1   4Gi        RWO            hdd            17s
Create the file pod.yaml which defines a ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: server
spec:
  replicas: 1
  selector:
    role: server
  template:
    metadata:
      labels:
        role: server
    spec:
      containers:
      - name: server
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: myvol
      volumes:
      - name: myvol
        persistentVolumeClaim:
          claimName: myvol
Execute the command:
$ kubectl create -f pod.yaml
This will create a server based on an nginx image, with the volume myvol mounted on /var/lib/www/html.
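ReplicationController is a legacy controller; on current clusters the same workload is usually written as a Deployment. A sketch of an equivalent manifest (same image, claim and mount path as in pod.yaml above; with a ReadWriteOnce volume, keep replicas at 1):

```yaml
# Sketch: Deployment equivalent of the ReplicationController example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1                  # RWO volume: only one Pod can mount it
  selector:
    matchLabels:
      role: server
  template:
    metadata:
      labels:
        role: server
    spec:
      containers:
      - name: server
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: myvol
      volumes:
      - name: myvol
        persistentVolumeClaim:
          claimName: myvol     # the PVC created from claim.json
```
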
Check the creation of the pod:
$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
server-fjbcw   1/1     Running   0          15s
$ kubectl describe pod server-fjbcw
Name:       server-fjbcw
Namespace:  colla
Node:       ba1-r2-s15/10.4.4.113
...
    Mounts:
      /var/lib/www/html from myvol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g98dg (ro)
...
Volumes:
  myvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myvol
    ReadOnly:   false
...
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               30m   default-scheduler        Successfully assigned colla/server-fjbcw to ba1-r2-s15
  Normal  SuccessfulAttachVolume  30m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-81c512af-91a9-11e8-b43d-74e6e266c8e1"
  Normal  Pulling                 30m   kubelet, ba1-r2-s15      pulling image "nginx"
  Normal  Pulled                  30m   kubelet, ba1-r2-s15      Successfully pulled image "nginx"
  Normal  Created                 30m   kubelet, ba1-r2-s15      Created container
  Normal  Started                 30m   kubelet, ba1-r2-s15      Started container
Log in to the container and check that the volume is mounted:
$ kubectl exec -it server-fjbcw -- bash
root@server-fjbcw:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/rbd0       3.9G  8.0M  3.8G   1% /var/lib/www/html
Now our pod has an RBD mount!
Cleanup¶
Delete the pods:
$ kubectl delete deployment kubernetes-bootcamp
deployment "kubernetes-bootcamp" deleted
$ kubectl get deployments
No resources found.
$ kubectl delete replicationcontrollers/server
replicationcontroller "server" deleted
$ kubectl get replicationcontrollers
No resources found.
Note: to delete the myvol PVC, the server ReplicationController must be deleted first.