Deploy JupyterHub on Kubernetes with Helm¶
Install Helm¶
Install the Helm client:
$ snap install helm --classic
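To check that the client installed correctly (only the client half will answer at this point, since Tiller is not deployed yet):
$ helm version --client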
Install Tiller in your namespace¶
Initialize Helm. This installs Tiller in your namespace:
$ helm init --wait --tiller-namespace $USER_NS
where $USER_NS is your namespace.
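All commands below assume $USER_NS is set in your shell. If the namespace does not exist yet, create it first (the name colla is only an example, chosen to match the sample output further down):
$ export USER_NS=colla
$ kubectl create namespace $USER_NS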
Secure Tiller:
$ kubectl patch deployment tiller-deploy --namespace=$USER_NS --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'
Note
Tiller’s port is exposed inside the cluster without authentication, so anyone who probes the port directly (i.e. bypassing helm) can exploit Tiller’s permissions. This step forces Tiller to accept commands only from localhost (i.e. from helm), so that, for example, other pods inside the cluster cannot ask Tiller to install a chart that grants them arbitrary, elevated RBAC privileges.
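As a quick, optional sanity check that the patch took effect, you can inspect the container command of the patched deployment; the output should include the --listen=localhost:44134 flag:
$ kubectl get deployment tiller-deploy --namespace $USER_NS \
    -o jsonpath='{.spec.template.spec.containers[0].command}'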
Let’s query Helm. Note that you have to pass your namespace (via --tiller-namespace) to every helm command:
$ helm version --tiller-namespace $USER_NS
Client: &version.Version{SemVer:"v2.15.1", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.1", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Install JupyterHub¶
Generate a random hex string representing 32 bytes to use as a security token:
$ openssl rand -hex 32
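If you want to script the following steps, you can capture the token in a shell variable instead of copying it by hand ($SECRET_TOKEN is just an illustrative name):
$ SECRET_TOKEN=$(openssl rand -hex 32)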
Create and start editing a file called config.yaml:
$ nano config.yaml
Write the following into the config.yaml file, replacing <RANDOM_HEX> with the hex string you generated above:
proxy:
  secretToken: "<RANDOM_HEX>"
Save the config.yaml file.
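Equivalently, if you stored the token in $SECRET_TOKEN as suggested earlier, you can create the file non-interactively (a sketch; the unquoted heredoc delimiter lets the shell expand the variable):
$ cat > config.yaml <<EOF
proxy:
  secretToken: "$SECRET_TOKEN"
EOF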
Make Helm aware of the JupyterHub Helm chart repository:
$ helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
$ helm repo update
Install the chart, configured by your config.yaml, by running this command from the directory that contains it:
$ helm upgrade --install $RELEASE jupyterhub/jupyterhub --namespace $USER_NS --version=0.8.2 --values config.yaml --tiller-namespace $USER_NS
where $RELEASE is a name you choose for this installation (e.g. ‘jhub’); it appears in the output of the following command:
$ helm list --tiller-namespace $USER_NS
NAME  REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
jhub  1         Wed Mar 27 16:18:03 2019  DEPLOYED  jupyterhub-0.8.0  0.9.4        colla
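While the release is coming up, you can also watch its pods; with this chart you should eventually see a hub-... and a proxy-... pod reach the Running state:
$ kubectl get pod --namespace $USER_NS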
Finally, let’s query the services to find the public IP assigned to JupyterHub:
$ kubectl get svc --namespace $USER_NS
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
hub             ClusterIP      10.152.183.140   <none>          8081/TCP                     6d17h
proxy-api       ClusterIP      10.152.183.90    <none>          8001/TCP                     6d17h
proxy-public    LoadBalancer   10.152.183.32    90.147.190.18   80:30630/TCP,443:30680/TCP   6d17h
tiller-deploy   ClusterIP      10.152.183.185   <none>          44134/TCP                    6d18h
The public IP is the EXTERNAL-IP of proxy-public, i.e. 90.147.190.18 in our case. Enter it in a browser and log in with admin/admin. Happy JupyterHubbing!
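If you prefer to check from a terminal first, a plain HTTP request against the proxy-public IP (using the example address above) should get an answer from the hub, typically a redirect toward the login page:
$ curl -sI http://90.147.190.18 | head -n 1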
Cleanup¶
Once you finish using your JupyterHub release, you may tear it down with:
$ helm delete $RELEASE --purge --tiller-namespace $USER_NS
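Note that persistent volume claims created for user storage survive helm delete. If nothing else of yours lives in the namespace, the simplest way to remove every leftover resource is to delete the namespace itself:
$ kubectl delete namespace $USER_NS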
Reference¶
Setting up Helm: http://z2jh.jupyter.org/en/latest/setup-helm.html
Setting up JupyterHub: http://z2jh.jupyter.org/en/latest/setup-jupyterhub.html