debian_k8s_heketi_glusterfs

Ultimate setup for K8S on Debian 10 with Cilium as the CNI, GlusterFS as the default storage class, and MetalLB for public connectivity

Prepare the Debian 10 setup for K8S

Master components

In this category of components we have (a quick check of these components on a running cluster is sketched right after this list):

    API Server: exposes the Kubernetes API. It is the entry point for the Kubernetes control plane.

    Scheduler: watches newly created pods and selects a node for them to run on, managing resource allocation.

    Controller-manager: runs the controllers, control loops that watch the state of the cluster and move it towards the desired state.

    ETCD: consistent and highly available key-value store used as Kubernetes' backing store for all cluster data. This topic deserves its own post, so we will cover it separately.
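
Once the cluster is initialised, a quick way to see these components on a kubeadm setup (a sketch, assuming the default layout where they run as static pods on the master) is:

# control-plane components run as static pods in the kube-system namespace
kubectl get pods -n kube-system | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'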

Node components

In every node we have (a quick health check follows this list):

    Kubelet: agent that makes sure that the containers described in PodSpecs are running and healthy. It is the link between the node and the control plane.

    Kube-proxy: network proxy running on each node, enabling the Kubernetes service abstraction by maintaining network rules and performing connection forwarding.

    Cilium: the CNI of this setup, in charge of pod networking.

    MetalLB: provides the public connectivity of the cluster.
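
Once nodes have joined the cluster, a quick health check of these per-node components (a sketch) is:

# kubelet runs as a systemd service on every node
systemctl status kubelet

# kube-proxy and cilium run as DaemonSets, one pod per node
kubectl get daemonset -n kube-system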

On 3 of the K8S nodes you have to provide storage space in a dedicated partition for GlusterFS auto-provisioning.

nodes with GlusterFS:

 / 6 GB
 /var 10 GB
 /home 1 GB
 no swap!
 /boot 300 MB
 /tmp 512 MB

 a dedicated partition for the future Gluster storage
 /dev/sdc ~40 GB

nodes without GlusterFS:

 / 6 GB
 /var 10 GB
 /home 1 GB
 no swap!
 /boot 300 MB
 /tmp 512 MB

Make sure the partitions are OK.

There is no real need for space in /home, but you do need space in /var/lib/docker.

For example, a good partition adjustment for the nodes looks like this:

# shrink /home to 1 GB: filesystem first, then the logical volume
umount /home
e2fsck -f /dev/vg00/home
resize2fs /dev/vg00/home 1G
lvreduce -L1G /dev/vg00/home
mount /home

# grow / and /var: logical volume first, then the filesystem
lvextend -L+2G /dev/vg00/root
resize2fs /dev/vg00/root
lvextend -L+6G /dev/vg00/var
resize2fs /dev/vg00/var
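
After resizing, verify the layout and make sure swap is really off (a quick check using standard tools only):

# confirm the logical volumes and mounted sizes match the plan
lvs vg00
df -h / /var /home /tmp /boot

# swap must stay disabled for the kubelet; this should print nothing
swapon --show
# if anything shows up, turn it off and remove the swap line from /etc/fstab
swapoff -a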

/etc/fstab needs a bpffs mount entry for eBPF performance with the Cilium CNI:

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/vg00-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=ba58143c-33ef-4123-8da0-8d05a4bd6570 /boot           ext2    defaults        0       2
/dev/mapper/vg00-home /home           ext4    defaults        0       2
/dev/mapper/vg00-tmp /tmp            ext4    defaults        0       2
/dev/mapper/vg00-var /var            ext4    defaults        0       2
bpffs                      /sys/fs/bpf             bpf     defaults 0 0
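
To mount the BPF filesystem immediately without rebooting (it picks up the fstab entry above):

mount /sys/fs/bpf
# verify the mount is present
mount | grep /sys/fs/bpf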

Mandatory tools need to be installed:

apt-get install -y less emacs-nox screen moreutils net-tools strace mlocate apt-transport-https ca-certificates curl gnupg2 software-properties-common
apt-get install -y iptables arptables ebtables

logrotate, a local resolver, and NTP are mandatory on the master and on all the nodes:

apt-get install logrotate openntpd unbound
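
Make sure the time and DNS services start at boot (a small sketch using systemd):

systemctl enable --now openntpd unbound
systemctl status openntpd unbound --no-pager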

/etc/resolv.conf

search domain.com #replace here
nameserver 127.0.0.1
nameserver 83.243.18.142 #replace here with your dns provider
nameserver 89.30.11.5 #replace here with your dns provider

options rotate
options timeout:1
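
Quick check that the local resolver answers (using getent so no extra packages are needed; kubernetes.io is just an example name):

getent hosts kubernetes.io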

Switch to the legacy iptables backend (instead of nftables) on the master and on all the nodes:

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
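
Verify that the legacy backend is now in use; on Debian 10 the version string should mention legacy rather than nf_tables:

iptables --version
# expected output similar to: iptables v1.8.2 (legacy)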

firewalld also needs to be disabled because it generates issues:

systemctl disable firewalld

Deactivate AppArmor too:

/etc/init.d/apparmor stop && update-rc.d -f apparmor remove

Install Docker, in the latest available version, on the master and on all the nodes:

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

# or, equivalently, add the repository by hand:
# echo "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" >> /etc/apt/sources.list.d/docker.list

apt-get update
apt-get install -y linux-headers-4.19.0-6-all
apt-get install -y docker-ce docker-ce-cli containerd.io
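
Before rebooting, it is worth enabling Docker at boot and, as recommended by the kubeadm documentation, switching it to the systemd cgroup driver (a sketch; adjust if you already have an /etc/docker/daemon.json):

cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
systemctl enable docker
systemctl restart docker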

You need to reboot all the nodes so the new kernel headers and modules are picked up and the dependency tree is consistent:

reboot

Check on all the nodes and on the master that everything is OK.

After the reboot, check that Docker is running:

docker version

The answer should look like this on all nodes:

Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea838
 Built:             Wed Nov 13 07:25:38 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea838
  Built:            Wed Nov 13 07:24:09 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

If you have an issue at this stage, you cannot continue the process until it is fixed!

Install kubeadm, kubectl and kubelet:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
systemctl daemon-reload
systemctl restart kubelet
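
To avoid an accidental upgrade of the cluster packages by a routine apt-get upgrade, hold them at the installed version:

apt-mark hold kubelet kubeadm kubectl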

Test that kubeadm can pull the control-plane images from gcr.io on the master node:

kubeadm config images pull

Initiate the K8S cluster. You have to use this --pod-network-cidr option for Cilium to work:

kubeadm init --pod-network-cidr=10.217.0.0/16
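
kubeadm init prints a kubeadm join command at the end; run that exact command on every worker node to join it to the cluster. On the master, set up kubectl access for your user as suggested by the same output:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config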

Install the CNI:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.6/install/kubernetes/quick-install.yaml

At this stage you should have something like this:

kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
cilium-22x9q                                 1/1     Running   0          14d
cilium-2pb88                                 1/1     Running   0          14d
cilium-mcvxp                                 1/1     Running   0          14d
cilium-operator-55658fb5c4-4d5mc             1/1     Running   9          15d
cilium-rbrjn                                 1/1     Running   0          14d
cilium-vk5z8                                 1/1     Running   0          14d
coredns-6955765f44-cmhrq                     1/1     Running   6          14d
coredns-6955765f44-fmn27                     1/1     Running   1          15d
etcd-kaiko-k8s-ny4-nod1                      1/1     Running   15         15d
kube-apiserver-kaiko-k8s-ny4-nod1            1/1     Running   28         15d
kube-controller-manager-kaiko-k8s-ny4-nod1   1/1     Running   20         15d
kube-proxy-6fbcn                             1/1     Running   4          15d
kube-proxy-8gflj                             1/1     Running   3          15d
kube-proxy-c6wjm                             1/1     Running   5          15d
kube-proxy-x8sqr                             1/1     Running   3          15d
kube-proxy-zs8pc                             1/1     Running   14         15d
kube-scheduler-kaiko-k8s-ny4-nod1            1/1     Running   19         15d

Then activate automatic node CIDR allocation (the enable-automatic-node-cidr-allocation option).

Deploy the Cilium connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.6/examples/kubernetes/connectivity-check/connectivity-check.yaml

It will deploy a simple probe and echo server running with multiple replicas. The probe will only report readiness while it can successfully reach the echo server:

kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
echo-585798dd9d-ck5xc    1/1     Running   0          75s
echo-585798dd9d-jkdjx    1/1     Running   0          75s
echo-585798dd9d-mk5q8    1/1     Running   0          75s
echo-585798dd9d-tn9t4    1/1     Running   0          75s
echo-585798dd9d-xmr4p    1/1     Running   0          75s
probe-866bb6f696-9lhfw   1/1     Running   0          75s
probe-866bb6f696-br4dr   1/1     Running   0          75s
probe-866bb6f696-gv5kf   1/1     Running   0          75s
probe-866bb6f696-qg2b7   1/1     Running   0          75s
probe-866bb6f696-tb926   1/1     Running   0          75s

Install the MetalLB load-balancing capability:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml

MetalLB configuration to apply in your setup. For the addresses you have to put your external IPv4 subnet: the start address and the end address.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - x.x.x.x-x.x.x.x

Save this ConfigMap as metallb_config.yaml, then apply it:

kubectl apply -f metallb_config.yaml

MetalLB examples of more complex configurations

Check that all is OK:

kubectl get pods -n metallb-system

You should see:

NAME                          READY   STATUS    RESTARTS   AGE
controller-65895b47d4-ztcmx   1/1     Running   0          15d
speaker-7d82f                 1/1     Running   3          15d
speaker-d4wgp                 1/1     Running   0          14d
speaker-fp8zt                 1/1     Running   3          15d
speaker-p7286                 1/1     Running   0          14d
speaker-wc2ln                 1/1     Running   10         15d
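
To check that MetalLB really hands out addresses from the pool, expose a throwaway deployment as a LoadBalancer service and look for an EXTERNAL-IP taken from your range (nginx is just an example image):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc nginx
# clean up afterwards
kubectl delete svc,deployment nginx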

To upgrade an existing Kubernetes setup (version or compute capability)
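
A rough sketch of the usual kubeadm upgrade flow (replace 1.x.y with the target version; drain one node at a time):

# on the master: upgrade kubeadm first, then the control plane
apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.x.y-00
kubeadm upgrade plan
kubeadm upgrade apply v1.x.y

# on each node (master included): upgrade kubelet and kubectl, then restart kubelet
kubectl drain <node> --ignore-daemonsets
apt-get install -y --allow-change-held-packages kubelet=1.x.y-00 kubectl=1.x.y-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node>

# to add compute capacity, generate a fresh join command on the master and run it on the new node
kubeadm token create --print-join-command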

