Deploy JupyterHub on Kubernetes with Helm

Install Helm

Install the helm client:

$ snap install helm --classic

Install Tiller in your namespace

  1. Initialize helm. This will install tiller in your namespace:

    $ helm init --wait --tiller-namespace $USER_NS

    where $USER_NS is your namespace.

  2. Secure helm:

    $ kubectl patch deployment tiller-deploy --namespace=$USER_NS --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'


Tiller’s port is exposed inside the cluster without authentication, so anyone who probes it directly (i.e. bypassing helm) can exploit Tiller’s permissions. This step forces Tiller to accept commands from localhost (i.e. from helm) only, so that, for example, other pods in the cluster cannot ask Tiller to install a chart granting them arbitrary, elevated RBAC privileges.

  3. Query helm to confirm Tiller is running. Note that you have to specify your namespace in every helm command:

    $ helm version --tiller-namespace $USER_NS
    Client: &version.Version{SemVer:"v2.15.1", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.15.1", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
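
If you want to confirm the patch from step 2 took effect, you can inspect the Tiller deployment directly. This is an optional sketch; it assumes your kubeconfig points at the cluster and $USER_NS is set as above:

    # Wait for the patched Tiller pod to finish rolling out.
    $ kubectl rollout status deployment/tiller-deploy --namespace $USER_NS
    # Print the container command; it should include --listen=localhost:44134.
    $ kubectl get deployment tiller-deploy --namespace $USER_NS -o jsonpath='{.spec.template.spec.containers[0].command}'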

Install JupyterHub

  1. Generate a random hex string representing 32 bytes to use as a security token:

    $ openssl rand -hex 32
  2. Create and start editing a file called config.yaml:

    $ nano config.yaml
  3. Write the following into the config.yaml file, pasting the hex string you generated in step 1 in place of <RANDOM_HEX> (note that in the 0.8.x chart the token lives under the proxy key):

      proxy:
        secretToken: "<RANDOM_HEX>"
  4. Save the config.yaml file.

  5. Make Helm aware of the JupyterHub Helm chart repository:

    $ helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
    $ helm repo update
  6. Install the chart configured by your config.yaml by running this command from the directory that contains your config.yaml:

    $ helm upgrade --install $RELEASE jupyterhub/jupyterhub --namespace $USER_NS --version=0.8.2   --values config.yaml --tiller-namespace $USER_NS

    where $RELEASE is a name you choose for this installation (e.g. ‘jhub’); it will appear in the output of:

    $ helm list --tiller-namespace $USER_NS
    NAME       REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
    jhub       1               Wed Mar 27 16:18:03 2019        DEPLOYED        jupyterhub-0.8.0        0.9.4           colla
  7. Finally, let’s query the services to find the public IP assigned to jupyterhub:

    $ kubectl get svc
    NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
    hub             ClusterIP      ...          <none>        8081/TCP                     6d17h
    proxy-api       ClusterIP      ...          <none>        8001/TCP                     6d17h
    proxy-public    LoadBalancer   ...          ...           80:30630/TCP,443:30680/TCP   6d17h
    tiller-deploy   ClusterIP      ...          <none>        44134/TCP                    6d18h

    The public IP is the EXTERNAL-IP of the proxy-public service. Type it in a browser and log in with admin/admin. Happy JupyterHubbing!
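
Steps 1–4 above (token generation and config.yaml creation) can also be combined into a single snippet. This is just a convenience sketch of the same commands; note the token goes under the proxy key in the 0.8.x chart:

```shell
# Generate a 32-byte hex token and write it straight into config.yaml
# (same content as produced by steps 1-4 above).
SECRET_TOKEN=$(openssl rand -hex 32)
cat > config.yaml <<EOF
proxy:
  secretToken: "$SECRET_TOKEN"
EOF
# Show the result for inspection.
cat config.yaml
```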


Once you finish using your JupyterHub release, you may tear it down with:

$ helm delete $RELEASE --purge --tiller-namespace $USER_NS
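
If you also want to remove Tiller from your namespace afterwards, helm v2 provides the helm reset command for this. A possible cleanup, assuming the same $RELEASE and $USER_NS variables as above:

    # Delete the JupyterHub release, then uninstall Tiller itself.
    $ helm delete $RELEASE --purge --tiller-namespace $USER_NS
    $ helm reset --tiller-namespace $USER_NS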