
Thursday, March 1, 2018

Getting started with Kubernetes

#######################################################################
Installing Kubernetes
#######################################################################

1) Install and start Docker

yum install -y docker
systemctl enable docker && systemctl start docker
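Before moving on, it is worth a quick sanity check that the Docker daemon actually came up:

```shell
# The service should report "active" and the daemon should answer.
systemctl is-active docker
docker version
```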

2) Install kubeadm, kubelet and kubectl

kubeadm: the command to bootstrap the cluster.

kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.

kubectl: the command line util to talk to your cluster.

a) Add the Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


b) Disable SELinux by running setenforce 0. This is required to allow containers to access the host filesystem, which pod networks need. You have to do this until SELinux support is improved in the kubelet.

setenforce 0
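Note that setenforce 0 only lasts until the next reboot. A common companion step (sketched below) is to also set SELinux to permissive in its config file so the change persists:

```shell
# Make SELinux permissive now and across reboots.
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```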

c) yum install -y kubelet kubeadm kubectl

d) systemctl enable kubelet && systemctl start kubelet
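At this point a quick version check confirms the tools installed correctly. (The kubelet service will restart in a loop until 'kubeadm init' runs; that is expected.)

```shell
# The three versions should match.
kubeadm version
kubelet --version
kubectl version --client
```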


3) Initialize the Kubernetes master with 'kubeadm init'. Run the command below to initialize and set up the Kubernetes master.


kubeadm init

Note : the first run may fail pre-flight checks, for example:

[root@vn2 Downloads]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "vn2" could not be reached
        [WARNING Hostname]: hostname "vn2" lookup vn2 on <dns>:53: server misbehaving
        [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

##### If you hit the swap error above, turn off swap with the command "swapoff -a" and re-run kubeadm init
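swapoff -a only disables swap until the next reboot. A common way to keep it off permanently (sketched here; check your /etc/fstab before editing it) is to comment out the swap entry as well:

```shell
# Disable swap now, and comment out swap lines in fstab so it
# stays disabled after a reboot.
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab
```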


-----------------------------------------------------------------------------------------------
k8s start up message
-----------------------------------------------------------------------------------------------
[root@vn2 Downloads]# kubeadm init
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "vn2" could not be reached
        [WARNING Hostname]: hostname "vn2" lookup vn2 on <dns>:53: server misbehaving
        [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vn2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [<dns1> <hostname>]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 109.002035 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node vn2 as master by adding a label and a taint
[markmaster] Master vn2 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 3adc53.1b2e17141aa72bd5
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 3adc53.1b2e17141aa72bd5 <hostname>:6443 --discovery-token-ca-cert-hash sha256:eed7e3015187d202bd50ff7879e6cdecea1d809a2e154c4bf44c238727c1550b
-----------------------------------------------------------------------------------------------
k8s message end
-----------------------------------------------------------------------------------------------

4) To start using your cluster, you need to run the following as a regular user (the commands below are run as root, so sudo is omitted):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Note : Alternatively, if you are the root user, you could run this:

export KUBECONFIG=/etc/kubernetes/admin.conf
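Either way, a quick check confirms that kubectl can reach the API server:

```shell
# Should print the master endpoint and the KubeDNS service URL.
kubectl cluster-info
```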


5) Get the status of the nodes

[root@vn2 Downloads]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
vn2       NotReady   master    13m       v1.9.3
[root@vn2 Downloads]# kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-vn2                      1/1       Running   0          13m
kube-system   kube-apiserver-vn2            1/1       Running   0          12m
kube-system   kube-controller-manager-vn2   1/1       Running   0          13m
kube-system   kube-dns-6f4fd4bdf-f779f      0/3       Pending   0          13m
kube-system   kube-proxy-644ld              1/1       Running   0          13m
kube-system   kube-scheduler-vn2            1/1       Running   0          13m


6) Deploy a pod network to the cluster (Weave Net in this example)

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
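The network pods take a short while to come up; you can watch the kube-system namespace until everything reports Running:

```shell
# Press Ctrl-C to stop watching once all pods are Running.
kubectl get pods --all-namespaces --watch
```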


7) Now run the following commands to verify the status

[root@vn2 Downloads]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
vn2       Ready     master    31m       v1.9.3
[root@vn2 Downloads]# kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-vn2                      1/1       Running   0          30m
kube-system   kube-apiserver-vn2            1/1       Running   0          29m
kube-system   kube-controller-manager-vn2   1/1       Running   0          30m
kube-system   kube-dns-6f4fd4bdf-f779f      3/3       Running   0          30m
kube-system   kube-proxy-644ld              1/1       Running   0          30m
kube-system   kube-scheduler-vn2            1/1       Running   0          30m
kube-system   weave-net-v9472               2/2       Running   0          4m
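A side note: if you are experimenting on a single machine and want ordinary pods scheduled on the master too, you can remove the taint kubeadm applied (not recommended for production clusters):

```shell
# The trailing '-' removes the NoSchedule taint from all nodes.
kubectl taint nodes --all node-role.kubernetes.io/master-
```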



****Now let's add worker nodes to the Kubernetes master.


#############################################################################
PERFORM THE FOLLOWING STEPS ON EACH WORKER NODE
#############################################################################

1) setenforce 0

2) echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
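As with the master-side settings, writing to /proc only lasts until reboot. The usual way to persist this (a sketch, following the standard sysctl.d convention) is:

```shell
# Persist the bridge netfilter settings across reboots.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```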

3) Configure the Kubernetes repo on each worker node
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


4) Install the kubeadm and docker packages on the worker node

yum install -y kubeadm docker


5) Start and enable the docker service, and enable kubelet

systemctl enable docker && systemctl start docker
systemctl enable kubelet.service

6) Now join the worker node to the master using the kubeadm join command printed in step #3

  kubeadm join --token 3adc53.1b2e17141aa72bd5 <hostname>:6443 --discovery-token-ca-cert-hash sha256:eed7e3015187d202bd50ff7879e6cdecea1d809a2e154c4bf44c238727c1550b
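If you have lost the join command, or the token has expired (bootstrap tokens are valid for 24 hours by default), you can generate a fresh one on the master:

```shell
# Prints a complete 'kubeadm join ...' line with a new token.
kubeadm token create --print-join-command
```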

7) Now verify the node status from the master node using kubectl

[root@vn2 Downloads]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
vn2       Ready     master    49m       v1.9.3
vn3       Ready     <none>    2m        v1.9.3
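As a final smoke test, you can deploy something and check that it gets scheduled onto the worker (in this Kubernetes version, 'kubectl run' creates a Deployment):

```shell
# '-o wide' shows which node each pod landed on.
kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide
```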
