Huge shout out to my friend Mike Levan for the inspiration for this project; I used his Ubuntu-based guide as the basis for mine. Alright, so up until now, all of my work with Kubernetes has involved leveraging a cloud provider's implementation (Azure AKS and AWS EKS).

Being the cost-conscious (okay, cheapskate) individual that I am, I wanted to build a local environment for my own lab without having to pay for an Azure or AWS subscription. This is where kubeadm comes into play. Kubeadm gives us the ability to run an on-prem Kubernetes cluster.

My local cluster will be a three-node setup running on Hyper-V, consisting of one master and two worker nodes.

  • master = 192.168.168.71
  • worker-1 = 192.168.168.72
  • worker-2 = 192.168.168.73

Prerequisites

  1. Three virtual machines running CentOS 7.
  2. Static IPs on all of the machines.
  3. Host entries on each machine so every VM can reach the others by hostname (see the example after this list).
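
For reference, here's roughly what those host entries could look like on each VM, using the hostnames and IPs above (adjust to match your own environment):

cat <<EOF >> /etc/hosts
192.168.168.71 master
192.168.168.72 worker-1
192.168.168.73 worker-2
EOF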

Install Phase

Going forward, unless otherwise specified, run these commands on both the MASTER and WORKER nodes!

1. Disable SELinux.

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

2. Enable the br_netfilter kernel module.

modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
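
The echo above only lasts until the next reboot. To make the module and the sysctl setting persistent, a sketch along these lines should work on CentOS 7:

# Load br_netfilter automatically on boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf

# Persist the bridge sysctl settings
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system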

3. Disable swap memory.

swapoff -a

You may also have to comment out the swap entry in /etc/fstab so swap stays disabled after a reboot.
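
A one-liner along these lines should take care of it (double-check /etc/fstab afterwards):

# Comment out any swap entry in /etc/fstab
sed -i '/\sswap\s/ s/^/#/' /etc/fstab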

4. Install Docker-CE's dependencies. Please note that Docker-CE is needed as the container runtime for kubeadm to function properly.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the docker repo.

yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

Install Docker-CE.

yum update -y && yum install -y docker-ce-18.06.2.ce

Create a docker directory.

mkdir /etc/docker

Setup the docker daemon.

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

Restart Docker.

systemctl daemon-reload
systemctl restart docker && systemctl enable docker
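
To confirm Docker picked up the settings from daemon.json, check the cgroup driver it reports:

docker info | grep -i 'cgroup driver'
# Should print: Cgroup Driver: systemd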

5. Add the kubernetes repository.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

6. Install kubelet, kubeadm, and kubectl.

yum install -y kubelet kubeadm kubectl
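
This installs the latest packages from the repo. If you'd rather keep the nodes on a known release (the output later in this post is from v1.14.3), yum accepts an explicit version, something like:

yum install -y kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3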

7. Start up kubelet and enable it.

systemctl start kubelet && systemctl enable kubelet
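
Don't be alarmed if kubelet sits in a restart loop at this point; it won't settle down until kubeadm init (or kubeadm join) generates its configuration. You can check its state with:

systemctl status kubelet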

8. Initialize the MASTER ONLY by executing the command below. Capture the kubeadm join command and the kube configuration instructions printed at the end and save them in your text editor.

kubeadm init --apiserver-advertise-address=192.168.168.71 --pod-network-cidr=10.1.0.0/16

9. Create a kube configuration directory and copy the admin configuration. These commands are also shown in the output from initializing the master.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
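
At this point kubectl on the master should be able to reach the API server; a quick sanity check (the master will likely report NotReady until the network plugin is deployed in the next step):

kubectl cluster-info
kubectl get nodes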

10. Deploy flannel to manage the kubernetes network.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
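
One caveat: at the time of writing, the flannel manifest hard-codes its network as 10.244.0.0/16 in the kube-flannel-cfg ConfigMap. Since step 8 used --pod-network-cidr=10.1.0.0/16, you may need to download the manifest and adjust it to match before applying (or simply use 10.244.0.0/16 in step 8). Roughly:

curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Point flannel's net-conf.json "Network" at the pod network CIDR from step 8
sed -i 's#10.244.0.0/16#10.1.0.0/16#' kube-flannel.yml
kubectl apply -f kube-flannel.yml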

11. Check that the flannel pods are running correctly with kubectl get pods --all-namespaces. You should see output similar to the following.

[root@master phil]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-sppcm                  1/1     Running   0          2m53s
kube-system   coredns-fb8b8dccf-zwcfp                  1/1     Running   0          2m53s
kube-system   etcd-kube-centos-01                      1/1     Running   0          2m7s
kube-system   kube-apiserver-kube-centos-01            1/1     Running   0          117s
kube-system   kube-controller-manager-kube-centos-01   1/1     Running   0          104s
kube-system   kube-flannel-ds-amd64-tm88c              1/1     Running   0          26s
kube-system   kube-proxy-j87d8                         1/1     Running   0          2m53s
kube-system   kube-scheduler-kube-centos-01            1/1     Running   0          2m

12. Join the worker nodes by issuing the kubeadm join command that was provided in step 8. Run this ONLY on the worker nodes.

kubeadm join 192.168.168.71:6443 --token c3k1ul.qgucm9j991ynf6jf \
    --discovery-token-ca-cert-hash sha256:669b898fb62fcf0cc7d7dd966868cbf3d6b5bad14cd7d220900def56a1993884
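
If you didn't save the join command from step 8, or the token has expired (they're only valid for 24 hours by default), you can print a fresh one from the master:

kubeadm token create --print-join-command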

13. Now go back to the master node and check that the worker nodes are recognized by the master.

[root@master phil]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master     Ready    master   8m33s   v1.14.3
worker-1   Ready    <none>   5m1s    v1.14.3
worker-2   Ready    <none>   4m59s   v1.14.3

Alright, the Kubernetes cluster is up and running. We now have a local Kubernetes cluster for development purposes without having to pay the major cloud players.

However, by no means should this setup be used in production. If you do need to deploy an application to production, using the AWS, Azure, or GCP implementation of Kubernetes is the preferred option.
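
As a quick smoke test of the new cluster, you could deploy something simple (nginx here is just an example) and expose it on a NodePort:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx

Browsing to any node's IP on the assigned NodePort should return the nginx welcome page.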


Bonus - Troubleshooting IP Address Change

Cool, now that you have a Kubernetes cluster rocking and rolling along, I'll walk you through a common issue: an IP address change. During my testing, my nodes kept picking up new IP addresses even after I had set static IPs. I eventually nailed the issue down to a glitch with Hyper-V's virtual switch, but it was a painful troubleshooting process that forced me to burn and rebuild multiple Kubernetes clusters.

Here is what you need to do to recover from an IP address change.

1. Delete your .kube directory.

rm -rf ~/.kube

2. Reinitialize the master with the new IP address. (See Step 8. You will likely need to wipe the old state first; see the sketch after this list.)

3. Redeploy flannel. (See Step 10)

4. Rejoin the worker nodes. (See Step 12)
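
Before reinitializing the master and rejoining the workers, you will most likely need to wipe the old cluster state on every node; a rough sketch:

# Run on the master and each worker
kubeadm reset
# Optionally flush leftover iptables rules (kubeadm reset suggests this as well)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X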