Kubeadm with Calico Pod Network
The kubeadm tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices.
The kubeadm tool is good if you need:
- A simple way for you to try out Kubernetes, possibly for the first time.
- A way for existing users to automate setting up a cluster and test their application.
- A building block in other ecosystem and/or installer tools with a larger scope.
Before you begin
To follow this guide, you need:
- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
- 2 GiB or more of RAM per machine; any less leaves little room for your apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a public or a private network.
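A quick way to check a machine against these minimums is the sketch below. It assumes a Linux host with `nproc` and `/proc/meminfo` available; it is not part of the official guide.

```shell
# Sketch: verify the CPU and RAM minimums listed above.
# Assumes a Linux host with nproc and /proc/meminfo.
cpus=$(nproc)
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

echo "CPUs: $cpus (control-plane nodes need at least 2)"
if [ "$mem_kib" -ge $((2 * 1024 * 1024)) ]; then
  echo "RAM: $((mem_kib / 1024)) MiB (meets the 2 GiB minimum)"
else
  echo "RAM: $((mem_kib / 1024)) MiB (below the 2 GiB minimum)"
fi
```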
Check required ports
Control-plane node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|---------|---------|
| TCP | Inbound | 6443* | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |
Worker node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|---------|---------|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |
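You can probe the control-plane ports from the tables above with a short loop like the one below. `CP_HOST` is a placeholder for your control-plane node's address, and the sketch assumes `bash` (for `/dev/tcp`) and `timeout` are installed.

```shell
# Sketch: probe the control-plane ports listed above.
# CP_HOST is a placeholder; replace it with your control-plane node's address.
CP_HOST=127.0.0.1
for port in 6443 2379 2380 10250 10251 10252; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/$CP_HOST/$port" 2>/dev/null; then
    echo "port $port: reachable"
  else
    echo "port $port: closed or filtered"
  fi
done
```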
Installing runtime
By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your chosen container runtime.
If you don't specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of well known Unix domain sockets.
| Runtime | Path to Unix domain socket |
|---------|----------------------------|
| Docker | /var/run/docker.sock |
| containerd | /run/containerd/containerd.sock |
| CRI-O | /var/run/crio/crio.sock |
If both Docker and containerd are detected, Docker takes precedence. This is needed because Docker 18.09 ships with containerd, so both are detectable even if you only installed Docker. If two or more other runtimes are detected, kubeadm exits with an error.
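The detection logic described above can be illustrated with the loop below. This is a sketch of the idea, not kubeadm's actual code; it scans the well-known socket paths from the table and reports what it finds.

```shell
# Illustration of socket-based runtime detection (not kubeadm's actual code):
# check each well-known socket path and report the ones that exist.
found=0
for sock in /var/run/docker.sock /run/containerd/containerd.sock /var/run/crio/crio.sock; do
  if [ -S "$sock" ]; then
    echo "detected runtime socket: $sock"
    found=$((found + 1))
  fi
done
[ "$found" -eq 0 ] && echo "no container runtime socket found" || true
```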
Installing kubeadm, kubelet and kubectl
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
- kubectl: the command-line utility to talk to your cluster.
Infrastructure
Let's create 3 Virtual Machines (VMs): 1 master node and 2 worker nodes. There must be network connectivity among these VMs.
Installation on Ubuntu (Both on Master and Worker Nodes)
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
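After the install, you can sanity-check that the tools are present and the packages are held. The checks below are guarded with `command -v` so a missing tool is reported instead of aborting; exact version-output formats vary by release.

```shell
# Sanity-check the install on each node; guarded so missing tools are
# reported rather than aborting the script.
command -v kubeadm >/dev/null && kubeadm version -o short || echo "kubeadm: not installed"
command -v kubectl >/dev/null && kubectl version --client || echo "kubectl: not installed"
command -v kubelet >/dev/null && kubelet --version || echo "kubelet: not installed"
apt-mark showhold 2>/dev/null | grep -E 'kube(let|adm|ctl)' || echo "no kube packages on hold"
```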
Installation on RHEL/CentOS (Both on Master and Worker Nodes)
If you are using CentOS/RHEL, run:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
Create Master Server
On the master machine, run the commands below:
1. kubeadm init --apiserver-advertise-address=<<Master Server IP>> --pod-network-cidr=192.168.0.0/16
2. mkdir -p $HOME/.kube
3. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
4. sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. Apply the pod network yaml file: kubectl apply -f <.yml>
6. Run the join command on each worker node that you want to join to the cluster.
7. Run the kubectl get nodes command on the master node to verify that all nodes have joined.
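The join command for the worker nodes is printed at the end of kubeadm init. It looks like the sketch below; the address, token, and hash are placeholder values, so use the ones from your own init output.

```shell
# Run on each worker node. All values below are placeholders printed by
# `kubeadm init` on your master; do not copy these literally.
sudo kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

# Back on the master, each worker should appear within a minute or so:
kubectl get nodes
```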