Daniel's Blog

Installing Kubernetes using Kubespray

Initial setup

I'm installing Kubernetes on a bare-metal cluster that I have access to over ssh. I'm not local to the machines, so everything is done through an ssh connection.

To set this up, I have a machine that I've set aside as my ansible coordinator. This machine is set up with a private ssh key that is only used within the LAN, so my user can deploy to machines from the ansible coordinator. When running these commands, I'm using a docker image provided by Quay. If I wasn't using that, I could use the ssh key from my local machine via the ssh agent, but there isn't a way to inject the ssh agent into the docker container so it can use the key (unless the docker container is running sshd, in which case you could ssh into the remote ansible coordinator, run the docker container in daemon mode, then ssh into the docker container).
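
For comparison, the local-key route would rely on ssh agent forwarding from my workstation to the coordinator. A minimal sketch of what that would look like, with a hypothetical key path and hostname:

local$ eval "$(ssh-agent -s)"              # start an agent on the local machine
local$ ssh-add ~/.ssh/id_ed25519           # hypothetical local key
local$ ssh -A daniel@ansible-coordinator   # -A forwards the agent over the connection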

So these things need to be in place: the kubespray repo checked out at a release tag, the matching kubespray docker image from Quay, and the LAN-only ssh key on the ansible coordinator.

Cloning the repo and getting the release version

$ git clone git@github.com:kubernetes-sigs/kubespray.git
$ cd kubespray
$ git checkout v2.20.0

Getting the correct version of the docker image to generate the config files

When running this, the checked-out code is mounted into the container as a volume.

$ docker pull quay.io/kubespray/kubespray:v2.20.0
$ docker run --rm -it --volume "$(pwd)":/kubespray quay.io/kubespray/kubespray:v2.20.0 bash

Generating hosts file
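
If the sample inventory hasn't already been copied, the Kubespray docs copy it into place first so the group_vars defaults sit alongside the generated hosts file:

$ cp -rfp inventory/sample inventory/mycluster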

$ declare -a IPS=(10.1.53.1 10.1.53.2 10.1.53.3 10.1.53.4 10.1.53.5)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
$ exit

Running the kubespray container

To do this, a persistent terminal session is needed, so tmux is used. That way, if my ssh session gets terminated due to a network issue, the install will continue and I can reconnect.

$ tmux new -s run_kubespray
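
If the connection does drop, the session can be picked up again later by reattaching to it by name:

$ tmux attach -t run_kubespray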

Once inside the tmux session, I run the kubespray container, mounting the ssh key and code into the container.

$ docker run --rm -it --volume "$(pwd)":/kubespray --volume "$HOME"/.ssh/:/root/.ssh quay.io/kubespray/kubespray:v2.20.0 bash

Running kubespray to install the cluster

Once inside the docker container, I can run kubespray to install the cluster. First, I set up an ssh agent so I don't have to repeatedly type in the ssh key passphrase, then I add the key to the agent using ssh-add. After that I run the playbook.

container$ eval "$(ssh-agent -s)"
Agent pid 27456

container$ ssh-add
Enter passphrase for /root/.ssh/id_rsa:
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)

container$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root --ask-become-pass --extra-vars "ansible_ssh_user=<local user>" cluster.yml

Arguments:

-i inventory/mycluster/hosts.yaml             The generated hosts file
--become                                      Switch user after connecting
--become-user=root                            Become the root user after connecting
--ask-become-pass                             Prompt for the sudo password used to become root
--extra-vars "ansible_ssh_user=<local user>"  Connect as <local user> instead of root, which is the user the container runs as
cluster.yml                                   The Kubespray ansible playbook that sets up the cluster
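
Before kicking off the full install, it can be worth checking that ansible can actually reach every host in the inventory. This wasn't part of my run, but a quick connectivity test with ansible's ping module would look like:

container$ ansible all -i inventory/mycluster/hosts.yaml -m ping --extra-vars "ansible_ssh_user=<local user>"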

Testing the cluster

The next step is to test the cluster:

ansible_coordinator$ ssh <local_user>@10.1.53.1
Last login: Thu Dec XX XX:XX:XX 2022

10.1.53.1$ sudo kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   28m   v1.24.6
node2   Ready    control-plane   28m   v1.24.6
node3   Ready    control-plane   27m   v1.24.6
node4   Ready    <none>          26m   v1.24.6
node6   Ready    <none>          26m   v1.24.6
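
As an extra sanity check (not shown in the run above), the system pods can be listed to confirm the control plane and networking components have come up:

10.1.53.1$ sudo kubectl get pods --all-namespaces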

The nodes are all up and running and ready. Time to start deploying some software to the cluster.