DevOpsDocs Help

Install K8s with kubeadm: full guide

Server (Node/Host) Preparation

First, run the following commands on every node (app, db, and monitoring):

Config hosts, timezone, etckeeper, swapoff, ufw

# Set this node's hostname first: app, db, or monit
hostname=monit
echo "127.0.0.1 $hostname" >> /etc/hosts
timedatectl set-timezone Asia/Tehran
apt update
apt install etckeeper
apt upgrade
ufw limit 22
ufw enable
swapoff -a
sed -i -e '/ swap /d' /etc/fstab

Install Node.js and the dns package

First, get the NodeSource GPG key:

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg

Create the deb repo:

# Optional: NODE_MAJOR can be changed depending on the version you need.
NODE_MAJOR=18
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list

Install Node.js

sudo apt-get update
sudo apt-get install nodejs -y

Install the dns package

sudo npm i -g @codetoz/dns
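
This dns CLI is used later in this guide to switch the system resolver before pulling packages from blocked registries. Its usage, as it appears in the kube tools step below:

# Switch the resolver to the shecan service
dns -s shecan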

Setup K8s

After preparing the nodes, you can install the K8s cluster with the following instructions:

Install Containerd

First, install containerd on every node:

Source: kubernetes.io documentation

Install and configure prerequisites:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
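
You can verify that the modules are loaded and the sysctl values are applied before moving on:

# Confirm the kernel modules are loaded
lsmod | grep -e overlay -e br_netfilter

# Confirm the sysctl settings took effect
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables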

Install containerd:

CONTAINERD_VERSION="1.6.9-1"
wget https://download.docker.com/linux/ubuntu/dists/focal/pool/stable/amd64/containerd.io_${CONTAINERD_VERSION}_amd64.deb
dpkg -i containerd.io_${CONTAINERD_VERSION}_amd64.deb
mkdir -p files/pkgs && mv containerd.io_${CONTAINERD_VERSION}_amd64.deb files/pkgs/
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
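
Instead of editing the file by hand, a one-liner can flip the flag; a minimal sketch, assuming the default config generated above contains SystemdCgroup = false:

# Flip SystemdCgroup from false to true in the generated default config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml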

Finally:

sudo systemctl restart containerd
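
To confirm containerd restarted cleanly, you can check:

systemctl status containerd --no-pager
sudo ctr version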

Install kube tools

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Installing kubeadm, kubectl, kubelet
KUBE_VER=1.27.5
dns -s shecan
sudo apt-get update
sudo apt-get install -y kubelet=$KUBE_VER-00 kubeadm=$KUBE_VER-00 kubectl=$KUBE_VER-00
sudo apt-mark hold kubelet kubeadm kubectl

# Installing kubecolor
wget -c https://github.com/hidetatz/kubecolor/releases/download/v0.0.25/kubecolor_0.0.25_Linux_x86_64.tar.gz -O - | tar -xz
cp kubecolor /bin
mv kubecolor ~/files/pkgs

Set up some aliases and auto-completion for the kube commands:

kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm > /dev/null

# Copy the .bash_aliases file from the infra dir to the node hosts
SSH_HOST_ALIAS=sa3  # sa3, sd3, sm3
scp infra/.bash_aliases.prod $SSH_HOST_ALIAS:~/.bash_aliases
source ~/.bashrc
k version --output=yaml
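
The contents of .bash_aliases.prod are not shown in this guide; a minimal sketch of what such a file might define, assuming k is an alias for the kubecolor binary installed above (hypothetical contents):

# Hypothetical ~/.bash_aliases -- adjust to match your infra repo's actual file
alias k='kubecolor'
# Reuse kubectl's bash completion for the k alias
complete -o default -F __start_kubectl k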

Initialize K8s on App, the Control Plane (master)

# Allow access to some ports in the control-plane
ufw allow 6443/tcp
ufw allow 2379:2380/tcp
ufw allow 10250/tcp
ufw allow 10259/tcp
ufw allow 10257/tcp

# App host static IP
SERVER_STATIC_IP=94.182.195.209
# Same as in the previous step
KUBE_VER=1.27.5

kubeadm config images pull
kubeadm init \
  --kubernetes-version=v$KUBE_VER \
  --pod-network-cidr=10.0.0.0/8 \
  --control-plane-endpoint $SERVER_STATIC_IP \
  --apiserver-advertise-address=$SERVER_STATIC_IP \
  --cri-socket unix:///run/containerd/containerd.sock

# Configure kubectl access to the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
k get node

Join the db and monit hosts as worker nodes

On the DB and Monit hosts, we need to join the K8s cluster initialized on the App host:

# Allow access to the kubelet API on the db and monit workers
ufw allow 10250/tcp

CONTROL_PLANE_IP=94.182.195.209
kubeadm join \
  --discovery-token nqrru2.xxxxxxxxxxxxxxx \
  --discovery-token-ca-cert-hash sha256:0f7aexxxxx...3f56 \
  $CONTROL_PLANE_IP:6443

First, you must get the token by running the following command on the control-plane (app) node:

kubeadm token list

If the token is expired, run:

kubeadm token create
kubeadm token list
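
Alternatively, kubeadm can mint a fresh token and print the complete join command in one step:

kubeadm token create --print-join-command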

If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'

Config Nodes to Be Able to Schedule Pods

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl label node db node-role.kubernetes.io/worker=worker
kubectl label node monit node-role.kubernetes.io/worker=worker
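
To confirm the taints were removed and the worker roles were applied:

kubectl describe nodes | grep -i taints
kubectl get nodes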

Label Nodes

kubectl label nodes app dedicated=app
kubectl label nodes db dedicated=db
kubectl label nodes monit dedicated=monit
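
These labels let you pin workloads to a specific node; a minimal sketch, using a hypothetical pod that should only run on the db node:

# Hypothetical example: schedule a pod on the node labeled dedicated=db
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: example-db-pod
spec:
  nodeSelector:
    dedicated: db
  containers:
    - name: app
      image: nginx
EOF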

Increase the pod count per node

By default, every worker node can schedule 110 pods, and our use case needs more pods per worker.

To increase the maximum pod count, first edit the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add the --max-pods option to the ExecStart configuration as follows:

$KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --max-pods=245
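
As an alternative to editing the drop-in directly, the same flag can be supplied through KUBELET_EXTRA_ARGS, which 10-kubeadm.conf sources from /etc/default/kubelet on Debian-based systems; a minimal sketch:

# /etc/default/kubelet is read by the kubeadm drop-in via EnvironmentFile
echo 'KUBELET_EXTRA_ARGS="--max-pods=245"' | sudo tee /etc/default/kubelet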

Then restart kubelet service:

systemctl restart kubelet
systemctl daemon-reload
# Wait 20s to ensure the daemon is up and the new config is applied, then run again:
systemctl restart kubelet

Finally, run the command below on the app node to make sure the changes were applied:

kubectl describe node NODE_NAME | grep -i capacity -A 13

Install K8s main components

Installing HELM and Helmfile on App

Install HELM with apt

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /etc/apt/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

To install auto-completion, run:

helm completion bash > /etc/bash_completion.d/helm

Add helm-diff plugin to HELM:

helm plugin install https://github.com/databus23/helm-diff

Install Helmfile

wget -c https://github.com/helmfile/helmfile/releases/download/v0.156.0/helmfile_0.156.0_linux_amd64.tar.gz -O - | tar -xz
cp helmfile /bin
mv helmfile ~/files/pkgs

Apply a Helmfile using:

helmfile -f helmfile.yml apply

or:

helmfile -f helmfile.yml sync
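
The helmfile.yml itself is not included in this guide; a minimal sketch of the shape such a file takes (the repository and release below are illustrative, not taken from this setup):

# Hypothetical helmfile.yml
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx

releases:
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx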

Installing Krew, the kubectl package manager

Install Krew using:

(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)

Then add the Krew PATH to your .bashrc (or .bash_aliases) using:

echo "PATH=\"${KREW_ROOT:-$HOME/.krew}/bin:$PATH\"" >> .bash_aliases source .bashrc

Prepare Control-Plane for Storage Plugin, OpenEBS

Before you install OpenEBS, you must set up iSCSI. You can run:

# Source: https://openebs.io/docs/user-guides/prerequisites#ubuntu
sudo apt update
sudo apt install -y open-iscsi
systemctl enable --now iscsid
systemctl status iscsid

Allow Calico ports

ufw allow 179
ufw allow 5473

KubeCTX and KubeNS

For installing ctx and ns you can use Krew:

kubectl krew install ctx
kubectl krew install ns
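
Once installed, both run as kubectl subcommands:

kubectl ctx          # list contexts
kubectl ctx NAME     # switch to context NAME
kubectl ns           # list namespaces
kubectl ns NAME      # switch the current namespace to NAME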

K9s and Kdash, K8s CLI Dashboards

For installing K9s:

wget -c https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_amd64.tar.gz -O - | tar -xz
cp k9s /bin
mv k9s ~/files/pkgs

For installing Kdash:

curl https://raw.githubusercontent.com/kdash-rs/kdash/main/deployment/getLatest.sh | bash

Components and add-ons

You must install some components before the other tools and app services. The list below shows what to install, in order (a hedged install sketch follows the list):

  • openebs

  • calico

  • ingress-nginx

  • cert-manager

  • dashboard

  • gitlab-agent

  • gitlab-runner

  • k8tz
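
As an illustration, the first two items could be installed straight from their upstream charts; a hedged sketch (the chart repos below are the upstream defaults, not taken from this setup's helmfile):

# Hedged sketch: install openebs and calico from their upstream Helm charts
helm repo add openebs https://openebs.github.io/charts
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace
helm install calico projectcalico/tigera-operator -n tigera-operator --create-namespace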

Setup manifests for releases

After installing the components, you may need to apply the manifests in the _manifest dir:

k apply -f _manifest/

Config the components

After installing the components, you must run some config commands:

# Make `openebs-hostpath` the default StorageClass
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
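
To confirm the change, the default class is flagged with (default) in the list:

kubectl get storageclass
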
Last modified: 07 March 2024