Install Kubernetes with `kubeadm`

Node Preparation

hostnamectl set-hostname sy1
# Add the new hostname to /etc/hosts
echo '127.0.0.1 sy1' >> /etc/hosts
timedatectl set-timezone Asia/Tehran
apt update
apt install etckeeper
apt upgrade
# Rate-limit SSH: deny IP addresses that attempt 6 or more connections within 30 seconds
ufw limit 22
ufw enable
swapoff -a
# Remove the swap lines from /etc/fstab for permanent effect
sed -i -e '/ swap /d' /etc/fstab

To configure proxy, run:

curl https://raw.githubusercontent.com/freedomofdevelopers/fod/master/fodcmd/fod.sh >> ~/.bashrc && source ~/.bashrc
echo 'Acquire::http::Proxy::apt.kubernetes.io "http://fodev.org:8118/";' >> /etc/apt/apt.conf.d/10proxy
echo 'Acquire::http::Proxy::packages.cloud.google.com "http://fodev.org:8118/";' >> /etc/apt/apt.conf.d/10proxy

Installation

Containerd

Source

Install and configure prerequisites:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Setup required sysctl params; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

Install containerd:

CONTAINERD_VERSION=1.6.9-1
wget https://download.docker.com/linux/ubuntu/dists/focal/pool/stable/amd64/containerd.io_${CONTAINERD_VERSION}_amd64.deb
dpkg -i containerd.io_${CONTAINERD_VERSION}_amd64.deb
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

To proxy docker.io, edit /etc/containerd/config.toml and prepend "https://registry.docker.ir" to the endpoint list under [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"].
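The result should look roughly like this sketch (keeping the upstream registry as a fallback endpoint is an assumption, not part of this setup):

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    # registry.docker.ir is tried first; the upstream registry remains as fallback
    endpoint = ["https://registry.docker.ir", "https://registry-1.docker.io"]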

Finally, restart containerd for changes to take effect:

sudo systemctl restart containerd

To route containerd through a proxy:

# Create the drop-in directory if it does not exist yet
mkdir -p /lib/systemd/system/containerd.service.d
cat <<EOF >/lib/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=socks5://127.0.0.1:9050"
Environment="HTTPS_PROXY=socks5://127.0.0.1:9050"
Environment="NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com"
EOF
sudo systemctl daemon-reload
sudo systemctl restart containerd

Docker (optional)

If using Docker, check its Cgroup Driver in the output of docker system info | grep -i driver. If it is not systemd, run:

# Switch the Docker daemon's cgroup driver from cgroupfs to systemd, then restart
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker

Kubes

# Source: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
KUBE_VER=1.26.4
sudo apt-get install -y kubelet=$KUBE_VER-00 kubeadm=$KUBE_VER-00 kubectl=$KUBE_VER-00
sudo apt-mark hold kubelet kubeadm kubectl

# Source: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm > /dev/null
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc

# May need to use Shecan DNS
kubeadm init \
  --kubernetes-version=v$KUBE_VER \
  --pod-network-cidr=10.0.0.0/8 \
  --control-plane-endpoint 167.235.35.185 \
  --apiserver-advertise-address=167.235.35.185 \
  --cri-socket unix:///run/containerd/containerd.sock

# Configure kubectl access to the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Helm

Installation guide. Using Apt:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

To install manually, download the latest compatible version from https://github.com/helm/helm/releases and then run:

tar -xzf helm*
mv linux-amd64/helm /usr/local/bin/helm
rm -r helm* linux-amd64

To install auto-completion, run:

helm completion bash > /etc/bash_completion.d/helm

Network Plugin (Calico)

Using Helm:

kubectl create namespace tigera-operator
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.24.3 --namespace tigera-operator

TODO: Move to helmfile

Manually:

curl https://docs.projectcalico.org/manifests/calico.yaml -O
sed -i -e 's/docker.io/registry.docker.ir/g' calico.yaml
kubectl apply -f calico.yaml

Next Steps

If you want to be able to schedule Pods on the control-plane node, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-

To join additional nodes, follow: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#join-nodes
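As a sketch, you can print a fresh join command on the control-plane node and run its output on each worker (the endpoint shown follows from the kubeadm init flags above):

# On the control plane: print a ready-to-use join command
kubeadm token create --print-join-command
# On each worker node, run the printed output, which looks like:
# kubeadm join 167.235.35.185:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>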

GitLab Agent

helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install main-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --create-namespace \
  --set image.tag=v15.5.1 \
  --set config.token=<AGENT_TOKEN> \
  --set config.kasAddress=wss://[example]/-/kubernetes-agent/

TODO: Move to helmfile

Update the agent

helm upgrade main-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --reuse-values \
  --set image.tag=v15.6.0

Configure

Projects using the agent must have the KUBE_CONTEXT variable set to <AGENT_REPOSITORY_PATH>:<AGENT_NAME>, e.g. administration/gitlab-agent:main-agent.

Note: These projects and the agent must be in the same root group.
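A minimal, hypothetical .gitlab-ci.yml job using this context (the job name and image are illustrative, not part of this setup):

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # Select the agent-provided context before issuing kubectl commands
    - kubectl config use-context "$KUBE_CONTEXT"
    - kubectl get pods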

GitLab Runner

Create gitlab-runner-values.yaml with the following content:

# Source: https://gitlab.com/gitlab-org/charts/gitlab-runner/blob/main/values.yaml
gitlabUrl: https://git.weblite.me/
rbac:
  create: true
runners:
  # Run all containers with the privileged flag enabled
  # This flag allows the docker:dind image to run if you need to run Docker commands
  # Read the docs before turning this on:
  # https://docs.gitlab.com/runner/executors/kubernetes.html#using-dockerdind
  privileged: true
  secret: runner-secret
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "ubuntu:20.04"

Run:

kubectl create ns gitlab-runner
kubectl create secret generic runner-secret \
  --namespace gitlab-runner \
  --from-literal=runner-registration-token="<REGISTRATION_TOKEN>" \
  --from-literal=runner-token=""
helm install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner \
  --version 0.46.0 \
  -f gitlab-runner-values.yaml

TODO: Move to helmfile

Dashboard

Use Cluster Management Project to install the dashboard. Then:

echo "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard namespace: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: dashboard-kubernetes-dashboard namespace: kubernetes-dashboard " | kubectl apply -f - kubectl -n kubernetes-dashboard create token dashboard-kubernetes-dashboard --duration 24h

Now run kubectl proxy in a separate terminal and open the Dashboard in your browser through the proxy (typically http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:dashboard-kubernetes-dashboard:https/proxy/ for this release name). Use the token printed above as the Bearer Token for authentication.

Metrics Server

TODO: https://github.com/kubernetes-sigs/metrics-server
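Until then, a possible installation sketch using the upstream Helm chart (the --kubelet-insecure-tls argument is an assumption; it is often needed on kubeadm clusters whose kubelets use self-signed certificates):

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system \
  --set args={--kubelet-insecure-tls}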

Ingress

Use Cluster Management Project

Cert-Manager

Cert-Manager Installation

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.0 \
  --set startupapicheck.timeout=5m \
  --set installCRDs=true

Configuration

Note: This section assumes you want to use free Let's Encrypt certificates.

Create a YAML file named letsencrypt-staging.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: example@domain.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx

Create the resource with:

kubectl create -f letsencrypt-staging.yaml

Do the same for letsencrypt-prod.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: tech.sahab.im@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

Usage

Edit your Ingress resource so that it matches the following snippet:

metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
    - hosts:
        - example.domain.me
      secretName: example-tls

You can run kubectl describe ingress and check the Events section to confirm successful certificate creation; issuance may take up to a minute.
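You can also watch the Certificate resource that cert-manager creates for the Ingress (assuming it is named after the secretName above and lives in the Ingress's namespace):

# READY turns True once the certificate has been issued
kubectl get certificate example-tls -w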

OpenEBS

# Source: https://openebs.io/docs/user-guides/prerequisites#ubuntu
sudo apt update
sudo apt install -y open-iscsi
sudo systemctl enable --now iscsid

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs --namespace openebs openebs/openebs --create-namespace

# Make `openebs-hostpath` the default StorageClass
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Note: Cluster Management Project can be used instead of helm commands.

TODO: Move prerequisite & post-installation steps to helmfile.

TODO: Backup using Velero
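To verify that the default StorageClass provisions volumes, a minimal PersistentVolumeClaim sketch (the claim name is illustrative; with WaitForFirstConsumer binding, the claim stays Pending until a Pod consumes it):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-test-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi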

Troubleshooting

AppArmor

If you encounter the following error:

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

and the kubelet logs (journalctl -xeu kubelet) contain:

failed to create containerd container: get apparmor_parser version: exec: \"apparmor_parser\": executable file not found in $PATH

then run the following, as mentioned here:

apt install apparmor apparmor-utils

Tools

# Print the first containerPort of the first container in the pod
kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
# Forward local port 28015 to port 27017 of the pod
kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017

Krew

https://github.com/ahmetb/kubectx

  • kubectx is a tool to switch between contexts (clusters) on kubectl faster.

  • kubens is a tool to switch between Kubernetes namespaces (and configure them for kubectl) easily.

https://github.com/stern/stern

  • stern provides multi-pod and multi-container log tailing.
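Assuming Krew itself is already installed, these tools can be installed as kubectl plugins (plugin names per the Krew index):

kubectl krew install ctx ns stern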

Scratch

backup etcd: https://etcd.io/docs/v3.5/op-guide/maintenance/#snapshot-backup
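A sketch of a one-off snapshot, assuming the default kubeadm certificate paths on the control-plane node and an etcdctl binary on the PATH:

# Save a point-in-time snapshot of etcd
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key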

conformance test: https://kubernetes.io/docs/setup/best-practices/node-conformance/

istio config for calico: https://projectcalico.docs.tigera.io/getting-started/kubernetes/installation/config-options#about-customizing-application-layer-policy-manifests

The Falco Project (Kubernetes threat detection engine)

OpenEBS for PostgreSQL: https://v1-24.docs.kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/

Metallb

  • https://metallb.universe.tf/configuration/calico/

  • https://metallb.universe.tf/installation/

  • https://github.com/metallb/metallb/blob/main/charts/metallb/values.yaml

  • https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

Last modified: 07 March 2024