
install kubernetes v1.25.5 on ubuntu 22.10 - kubeadm

PSAwesome 2022. 12. 17. 23:16

Official documentation

https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

 

# Hardware info
hwinfo --short

OS                  RAM(GB)  CPU cores  GPU
Ubuntu 22.04.1 LTS  64       8          3080 - 8G
Ubuntu 22.10        16       8          3070 - 8G
Ubuntu 22.10        12       4          -

 

Kubernetes and container runtime info

kubectl version --short

Client Version: v1.25.5
Kustomize Version: v4.5.7
Server Version: v1.25.5

kubernetes  kustomize-version  kernel-version     container-runtime
v1.25.5     v4.5.7             5.15.0-56-generic  containerd://1.6.12
                               5.19.0-26-generic

 

State after the installation is complete

 

 

1. delete any existing Kubernetes setup

# run as root (or prefix each command with sudo)
kubeadm reset
systemctl stop kubelet
systemctl disable kubelet
# removes the CNI config (including Calico's 10-calico.conflist and calico-kubeconfig) and the user kubeconfig
rm -rf /etc/cni/net.d $HOME/.kube/config

 

ufw - master node

# vim /etc/ufw/applications.d/kubernetes-profiles
# file contents:

[k8s-master]
title=master
description=required master ports: API server, etcd client API, kubelet API, kube-scheduler, kube-controller-manager, Calico
ports=53,443,6443,8080,5978,6783,6379,9153,10250,10251,10252,10259,10257/tcp|2379:2380/tcp|6783,6784/udp

 

sudo ufw enable
sudo ufw allow from 192.168.0.0/24 to any app k8s-master && sudo ufw reload
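A quick way to confirm ufw picked up the profile and the rule is active (my own check, not from the original steps):

# list the profile and the ports it opens
sudo ufw app info k8s-master
# show the active rules
sudo ufw status verbose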

 

ufw - worker

# sudo vim /etc/ufw/applications.d/kubernetes-profiles
# file contents:

[k8s-worker]
title=worker
description=kubelet API, NodePort services
ports=10250/tcp|30000:32767/tcp

sudo ufw enable
sudo ufw allow from 192.168.0.0/24 to any app k8s-worker && sudo ufw reload

 

swap off

sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
  • Turns swap off now and deletes the swap entry in /etc/fstab so it stays off after a reboot.
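To confirm swap is really off (swapon prints nothing when no swap device is active):

swapon --show
free -h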

 

add apt repository

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
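Note that apt-key is deprecated on recent Ubuntu releases. A keyring-based equivalent, as a sketch (the keyring path and list file name are my choices, not from the original post):

# store the key in a dedicated keyring and reference it with signed-by
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list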

 

install kubernetes components

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
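Since the post targets v1.25.5 specifically, the packages can also be pinned to that version (assuming the 1.25.5-00 package revision; the exact revision string may differ):

sudo apt install -y kubelet=1.25.5-00 kubeadm=1.25.5-00 kubectl=1.25.5-00
sudo apt-mark hold kubelet kubeadm kubectl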

 

run kubeadm init - master node

sudo su
kubeadm init --control-plane-endpoint=node1:6443 --upload-certs

# back to the regular user
exit

mkdir -p $HOME/.kube &&
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config &&
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# output on success

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join node1:6443 --token rctgqt.znm2tic0twtuboad \
	--discovery-token-ca-cert-hash sha256:948c14abdcf65cbf974d6755d9d71f26d24fa9394c30b4ba1f7c13b20549a1c7 \
	--control-plane --certificate-key 40ba65b22475e22982f48e7116d72d819454bcb1d0cf20688dde08e4d8575dac

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

 

 

join control-plane - master node2

sudo su
kubeadm join node1:6443 --token rctgqt.znm2tic0twtuboad \
	--discovery-token-ca-cert-hash sha256:948c14abdcf65cbf974d6755d9d71f26d24fa9394c30b4ba1f7c13b20549a1c7 \
	--control-plane --certificate-key 40ba65b22475e22982f48e7116d72d819454bcb1d0cf20688dde08e4d8575dac

 

set taint

kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
  • Removes the taints so pods can also be scheduled on the master nodes.

Or, to restore the taint on node1:

kubectl taint nodes node1 node-role.kubernetes.io/control-plane:NoSchedule
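To confirm which taints remain on the node:

kubectl describe node node1 | grep -i taints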

 

join worker

sudo kubeadm join node1:6443 --token rctgqt.znm2tic0twtuboad \
	--discovery-token-ca-cert-hash sha256:948c14abdcf65cbf974d6755d9d71f26d24fa9394c30b4ba1f7c13b20549a1c7
  • The worker must be able to resolve node1 (add a hosts entry if DNS does not provide it).
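If the token has expired (kubeadm tokens last 24 hours by default), a fresh join command can be generated on the master:

# run on the master node
kubeadm token create --print-join-command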

 

 

The cluster build is done.

Now for a simple nginx test.

1. Download or apply the deployment

# nginx deploy
kubectl apply -f https://k8s.io/examples/application/deployment.yaml

# or, download first and then apply

wget https://k8s.io/examples/application/deployment.yaml
kubectl apply -f deployment.yaml


# check
kubectl get pod -l app=nginx -o wide

2. service

# first-nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: first-nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP

kubectl apply -f first-nginx-service.yaml
kubectl get svc

service ip

 

kubectl describe svc first-nginx-service

  • Two pods are connected as endpoints:
    • 172.17.104.10:80
    • 172.17.135.21:80

 

Failed curl call

  • A ClusterIP service is reachable only from inside the cluster:
    the curl from outside the cluster (1-1) times out,
    while the curl from a cluster node (1-2) succeeds.

In Wireshark, the request from the Mac (source IP 192.168.0.7) to the ClusterIP 10.111.137.240 never gets a response.
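One way to test the ClusterIP service from inside the cluster is a throwaway pod (the busybox image is my choice, not from the post; the service DNS name assumes CoreDNS is healthy):

kubectl run curl-test --rm -it --restart=Never --image=busybox -- wget -qO- http://first-nginx-service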

 

3. deploy ingress controller

git clone https://github.com/kubernetes/ingress-nginx.git

kubectl apply -f ingress-nginx/deploy/static/provider/baremetal/deploy.yaml
  • Deploys the controller pod that will handle every Ingress.
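Before moving on, make sure the controller pod is Running (the admission jobs complete and terminate, which is normal):

kubectl get pods -n ingress-nginx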

4. deploy ingress-class

# vim default-ingress-class.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx-example
  #namespace: ingress-nginx
  #annotations:
  #  ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx

kubectl apply -f default-ingress-class.yaml
  • Deploying the IngressClass gives Ingress resources a class to bind to.
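To confirm the class exists and which controller it points at:

kubectl get ingressclass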

5. deploy ingress

# vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  #namespace: ingress-nginx
  annotations:
    app.kubernetes.io/name: ingress-nginx
spec:
  ingressClassName: nginx-example # the IngressClass name created in the previous step
  rules:
  - host: "first.nginx.org"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: first-nginx-service
            port:
              number: 80

kubectl apply -f ingress.yaml
  • The Ingress rule maps the host and path to the service.

 

6. Configure and verify the Ingress

kubectl get ing

  • The ADDRESS column should show an IP.

kubectl describe ing

  • The Backends section should list first-nginx-service.

 

kubectl edit svc -n ingress-nginx ingress-nginx-controller
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.5.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"externalTrafficPolicy":"Local","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"appProtocol":"http","name":"http","nodePort":30080,"port":80,"protocol":"TCP","targetPort":"http"},{"appProtocol":"https","name":"https","nodePort":30443,"port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"NodePort"}}
  creationTimestamp: "2022-12-16T06:53:52Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "225002"
  uid: 010bd03d-a991-4caa-a4b4-4d74c1ce2b02
spec:
  clusterIP: 10.101.15.106
  clusterIPs:
  - 10.101.15.106
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 30080 # add or change the port
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 30443 # add or change the port
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: NodePort # changed from ClusterIP (or LoadBalancer) to NodePort
status:
  loadBalancer: {}

 

kubectl get svc -n ingress-nginx

Change the service type from ClusterIP to NodePort.

  • Add -n ingress-nginx to query the right namespace.
  • The output should now show:
    • TYPE: NodePort
    • PORT(S): 80:30080/TCP, 443:30443/TCP
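As a non-interactive alternative to kubectl edit, the same change can be applied with a strategic merge patch (a sketch using the same ports as above; the patch merges by port number):

kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{"spec":{"type":"NodePort","ports":[{"port":80,"nodePort":30080},{"port":443,"nodePort":30443}]}}'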

 

# sudo vim /etc/hosts

192.168.0.10 node1 first.nginx.org # add this host entry on the cluster nodes (and on the client)

 

7. Final check - curl
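With the NodePort and hosts entry above, the request can be sent from the client machine (values taken from the earlier steps):

# via the hosts entry
curl http://first.nginx.org:30080/
# or with an explicit Host header
curl -H "Host: first.nginx.org" http://192.168.0.10:30080/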

 

Checking the response (Wireshark)

The HTTP request and its response are both visible. Scaling the deployment puts more pod endpoints behind the service:

kubectl scale --replicas=7 deployment nginx-deployment

kubectl describe svc

kubectl get pod -o wide

 

 

Ingress relationships, learned the hard way

  • A defaultBackend with no IngressClass also works, but splitting ingresses by class was easier for me to understand.

 

kubectl shell completion

echo "source <(kubectl completion zsh)" >> ~/.zshrc

# or 

echo "source <(kubectl completion bash") >> ~/.bashrc

 

 

Retrospective

While working on the Ingress, I mistyped the ingressClassName (entering ingress-nginx instead of nginx-example), so the Ingress never bound to its class.

I also went around in circles because the relationships between the ingress controller Pod, IngressClass, and Ingress, and whether annotations and defaultBackend are strictly required, were not clear to me at first.

 

Posting a follow-up on domain setup with GoDaddy and ipTIME would be a good next step.

 

 

 

 

Commands used

remove all kubeadm packages

sudo su 

systemctl stop kubelet
systemctl disable kubelet
rm -rf /etc/cni/net.d $HOME/.kube/config
sudo rm /usr/local/bin/kubectl
# kubeadm
apt-get remove kubeadm -y

# kubelet
apt-get remove kubelet -y

# dependency
apt-get remove cri-tools -y
apt-get remove kubernetes-cni -y
apt-get remove libltdl7 -y
apt-get remove libslirp0 -y
apt-get remove pigz -y
apt-get remove slirp4netns -y
apt-get remove wmdocker -y

 

 

Troubleshooting

error: error loading config file "/etc/kubernetes/admin.conf": open /etc/kubernetes/admin.conf: permission denied
# as the regular user
export KUBECONFIG=$HOME/.kube/config
  • Switch back to the regular user and point KUBECONFIG at the user kubeconfig.

kubernetes node STATUS NotReady

wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

 

calico not running

# 1. disable ufw
sudo ufw disable
kubectl delete -f calico.yaml
kubectl apply -f calico.yaml

# 2. restart containerd and kubelet
sudo systemctl restart containerd kubelet

  • Even with the ufw ports listed above open, Calico would not start; turning ufw off completely and re-enabling it later made the problem go away.

 

 
