How to Install and Setup Kubernetes Cluster with Flannel CNI in CentOS 7

Posted on December 3, 2021 (updated July 1, 2023) by Waqar Azeem

Kubernetes, also written as K8s, is an open-source solution for the management and orchestration of containerized applications. This article assumes you have one minimally installed CentOS 7 machine as the master node and two worker nodes, all with internet access. Let's proceed with the installation. Execute the following commands on all nodes (master and workers).

Update all the systems.

yum update -y

Enable epel repository, install utilities and dependencies.

yum install epel-release yum-utils bash-completion net-tools device-mapper-persistent-data lvm2 -y

Disable SELinux.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
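To apply the change immediately without waiting for the reboot below, SELinux can also be set to permissive at runtime; a small optional sketch:

```shell
# Switch SELinux to permissive for the running system; the config
# edit above keeps it disabled after the next reboot.
setenforce 0
getenforce
```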

Disable swap.

free -m && swapoff -a && free -m
sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab && cat /etc/fstab

Reboot the machine to apply the SELinux change and boot into the latest kernel.

reboot

Add and enable Kubernetes official repository.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Add host entries or DNS records for name resolution.

cat <<EOF >> /etc/hosts
10.14.7.51 k8s-master.induslevel.com k8s-master
10.14.7.53 k8s-node01.induslevel.com k8s-node01
10.14.7.54 k8s-node02.induslevel.com k8s-node02
EOF

cat /etc/hosts

Add firewalld rules on the master and worker nodes.

firewall-cmd --permanent --add-port=8285/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload && firewall-cmd --list-all
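The rules above are oriented toward the control plane. On the worker nodes, a hedged sketch of the additional ports usually needed (kubelet, the default NodePort range, and Flannel's VXLAN backend, which uses UDP rather than TCP):

```shell
# Worker-node ports: kubelet API, NodePort service range, Flannel VXLAN.
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload && firewall-cmd --list-all
```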

For pod networking to work, we need to tweak sysctl parameters so that bridged traffic is passed through iptables.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
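These bridge sysctls only exist once the br_netfilter kernel module is loaded; if sysctl --system reports them as missing, load the module first and make it persistent:

```shell
# Load br_netfilter now and on every boot so the bridge sysctls apply.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system
```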

Install the required packages on all nodes (master and workers).

yum install -y kubelet kubeadm kubectl
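To keep a routine yum update from later pulling in a newer, unplanned Kubernetes version, the packages can be excluded from normal updates and installed explicitly; a hedged sketch:

```shell
# Exclude the Kubernetes packages from routine updates, then install
# them while bypassing the exclude for this one transaction.
echo "exclude=kubelet kubeadm kubectl" >> /etc/yum.repos.d/kubernetes.repo
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
```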

We will use Docker as the container engine. Enable the Docker repository.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker on all nodes. Note that the package provided by the Docker CE repository added above is named docker-ce, not docker.

yum install docker-ce -y

Enable and start the Docker service.

systemctl start docker && systemctl enable docker && systemctl status docker
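kubeadm expects the kubelet and the container runtime to use the same cgroup driver, and it recommends systemd; a hedged sketch that points Docker at the systemd driver (assuming the default /etc/docker/daemon.json location):

```shell
# Align Docker's cgroup driver with the one kubeadm recommends.
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup
```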

Start the kubelet service on all nodes, master included, and enable it at boot. Note that kubelet will keep restarting until kubeadm init (or kubeadm join) supplies its configuration; that is expected.

systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

Initialize the cluster on master.

kubeadm init --apiserver-advertise-address 10.14.7.51 --pod-network-cidr=172.16.0.0/16

Copy the last lines of the output; they contain the join command needed to connect the worker nodes to the master.

[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rwc-k8s-master1.i2cinc.com] and IPs [10.96.0.1 10.14.7.51]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rwc-k8s-master1.i2cinc.com] and IPs [10.14.7.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rwc-k8s-master1.i2cinc.com] and IPs [10.14.7.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.002213 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rwc-k8s-master1.i2cinc.com as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node rwc-k8s-master1.i2cinc.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: k6ft3u.5ybf6kpmxjmr8wwo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.14.7.51:6443 --token k6ft3u.5ybf6kpmxjmr8wwo \
        --discovery-token-ca-cert-hash sha256:1939449bb8aec7dae9cf877127c99abb07b5d12515c617603969dcc0ebb795f2

Create the .kube directory so the cluster can be managed by a non-root user.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Install the Flannel CNI on the master so pods can communicate with each other. Note that the Flannel manifest defaults its pod network to 10.244.0.0/16; since this cluster was initialized with --pod-network-cidr=172.16.0.0/16, update the Network field in the manifest's net-conf.json section to 172.16.0.0/16 before applying it.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
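Before joining the workers, it is worth confirming that the Flannel pods come up cleanly; a short check, assuming the app=flannel label this manifest puts on its DaemonSet:

```shell
# The flannel DaemonSet pod on each node should reach Running state.
kubectl get pods -n kube-system -l app=flannel -o wide
```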

Join the worker nodes to the master. Execute the join command printed by kubeadm init on each worker node.

kubeadm join 10.14.7.51:6443 --token k6ft3u.5ybf6kpmxjmr8wwo \
        --discovery-token-ca-cert-hash sha256:1939449bb8aec7dae9cf877127c99abb07b5d12515c617603969dcc0ebb795f2
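Bootstrap tokens expire after 24 hours by default. If the original join command no longer works, a fresh one can be generated on the master:

```shell
# Print a complete, ready-to-paste join command with a new token.
kubeadm token create --print-join-command
```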

Check the nodes' status with the following commands on the master node. The cluster is fully configured once every node reaches the Ready state.

kubectl get nodes -o wide
kubectl get pods --all-namespaces

Now we will deploy an nginx Deployment to test our setup.

mkdir -p /root/nginx/ && cd /root/nginx/
cat > /root/nginx/nginx-deployment.yaml <<"EOF"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.0
        ports:
        - containerPort: 80
EOF

Now create the Deployment using kubectl on the master node.

kubectl create -f /root/nginx/nginx-deployment.yaml

Check the status of the Deployment. Once it is complete, we will create a Service to expose the container's port.

kubectl get deployments
kubectl describe deployment nginx-deployment
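To wait for the rollout rather than polling manually, kubectl can block until all replicas are available:

```shell
# Blocks until the Deployment's three replicas are ready (or it fails).
kubectl rollout status deployment/nginx-deployment
```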

Create a new Service to expose the nginx port to the outside world.

cat > /root/nginx/nginx-service.yaml <<"EOF"
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    run: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
EOF

Create the Service from this YAML file.

kubectl create -f /root/nginx/nginx-service.yaml

Check the status using the following commands.

kubectl get service
kubectl describe service nginx-service

After the Service comes up, verify it with curl.

NodePort=$(kubectl describe service nginx-service |grep 'NodePort:'|awk '{print $3}'|awk -F/ '{print $1}') && curl k8s-node01:$NodePort
NodePort=$(kubectl describe service nginx-service |grep 'NodePort:'|awk '{print $3}'|awk -F/ '{print $1}') && curl k8s-node02:$NodePort
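As a more robust alternative to the grep/awk pipeline, the assigned nodePort can be read straight from the API with jsonpath:

```shell
# Ask the API server for the assigned nodePort field directly.
NodePort=$(kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://k8s-node01:$NodePort
```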


2 thoughts on “How to Install and Setup Kubernetes Cluster with Flannel CNI in CentOS 7”

  1. Owais Khaleeq says:
    December 3, 2021 at 3:07 pm

    It's great! Will test this out for sure. Just a quick question: why do we need to disable swap space?

    1. Waqar Azeem says:
      December 3, 2021 at 4:38 pm

      To ensure guaranteed performance, i.e. use the machine at 100% with predictable performance. Details are mentioned here:

      https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2400-node-swap
      https://support.f5.com/csp/article/K82655201
