The name Kubernetes is quite a mouthful, so it's commonly shortened to k8s — there are 8 letters between the first "k" and the last "s".
It's a container orchestration system you really need to know when working in a cloud server environment.
→ Docker, container, and image concepts (see my earlier post)
When I googled how to study k8s, it looked like I would have to set up several servers and spend a lot of time,
so I kept putting it off and never got started.
While searching, though, I found that there are a few websites where you can practice k8s right away.
I was a bit disappointed to discover them so late, but better late than never.
I'm sharing this for anyone who, like me, has been reluctant to build a Kubernetes cluster by hand.
Play with Kubernetes
Play with Kubernetes is a website provided by Docker where you can freely experiment with Kubernetes.
There is nothing to install and no setup time to carve out —
you just open the site and start using it, so it's an easy, low-pressure way to study.
One thing you really should know:
a session only lasts 4 hours after you connect. ^^;;
Four hours... if you think about it, that's plenty of time for studying.
If you come back to the site, you get another 4 hours —
just keep in mind that you can't resume the work from your previous session.
You also need a Docker account.
Signing up with an email address only takes a moment, so don't let that hold you back.
A Kubernetes cluster basically consists of a Master node and Worker nodes.
The Master node's job is to manage the Worker nodes:
when you run a command on the Master node, the corresponding work is carried out on the Worker nodes.
You can have one Worker node, or two, three, and so on,
and on instructions from the Master node the number of Pods — the unit of work —
can be scaled up and down automatically.
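For example, scaling Pods from the Master node can look like the commands below. This is only a minimal sketch: the Deployment name my-app is a hypothetical placeholder, not something we create in this post, and automatic scaling also assumes a metrics source is available in the cluster.

[node1 ~]$ kubectl scale deployment my-app --replicas=3                           # manually scale out to 3 Pods (hypothetical Deployment)
[node1 ~]$ kubectl autoscale deployment my-app --min=1 --max=5 --cpu-percent=80   # let Kubernetes scale 1-5 Pods based on CPU usage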
Let's build a cluster with one Master node and two Worker nodes.
Each node is a Linux server. We'll create three nodes:
node1 will be the Master node (= control-plane),
and node2 and node3 will be Worker node 1 and Worker node 2.
node1 -> Master node
node2 -> Worker node1
node3 -> Worker node2
● Accessing Play with Kubernetes
https://labs.play-with-k8s.com
Log in and click Start.
● First screen after logging in
In the top-left corner you can already see the 4-hour countdown ticking.
Click 「ADD NEW INSTANCE」 to add a node.
We need three nodes, so click it three times.
node1, node2, and node3 now appear below.
Now let's set up the Master node.
Click node1 in the menu on the left.
In the terminal window on the right you'll see three suggested steps.
The terminal contents are reproduced below as shown.
This is a sandbox environment. Using personal credentials is HIGHLY! discouraged.
Any consequences of doing so, are completely the user's responsibilites.

You can bootstrap a cluster as follows:

♥ Kubernetes initialization command that designates the Master node
 1. Initializes cluster master node:

 kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16

♥ Command that installs the Kubernetes network interface (installs the CNI)
 2. Initialize cluster networking:

 kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

♥ Command that deploys the nginx web server
 3. (Optional) Create an nginx deployment:

 kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml

                The PWK team.

[node1 ~]$
● Designating the Master node
To designate node1 as the Master node (= control-plane),
run the Kubernetes initialization command.
Just copy the first command above as-is and run it on node1.
[node1 ~]$ kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
Initializing machine ID from random generator.
I1008 09:28:38.348560     344 version.go:251] remote version is much newer: v1.25.2; falling back to: stable-1.20
[init] Using Kubernetes version: v1.20.15
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.0.23]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.23 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.503192 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3wchch.bzdc2ii7rqv57ojn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.23:6443 --token 3wchch.bzdc2ii7rqv57ojn \
    --discovery-token-ca-cert-hash sha256:698a9c85aacc2cdd9dbc45eb88138f5c0cbda5b9120ab340c3d7b9e61508e20a
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found
[node1 ~]$
At the very end of the init output you can see the kubeadm join ... part.
Running this command on a Worker node connects it to the Master node.
Keep it handy.
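If you happen to lose that output, you can print a fresh join command on the Master node at any time — this is a standard kubeadm command, not something specific to Play with Kubernetes:

[node1 ~]$ kubeadm token create --print-join-command   # prints a new "kubeadm join ..." line with a valid token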
● Installing a CNI
CNI stands for Container Network Interface.
So that Pods — the execution unit of Kubernetes — can communicate smoothly without network conflicts,
you need to install a separate network plugin called a CNI.
Just copy the second command above as-is and run it on node1 (the Master node).
[node1 ~]$ kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
configmap/kube-router-cfg created
daemonset.apps/kube-router created
serviceaccount/kube-router created
clusterrole.rbac.authorization.k8s.io/kube-router created
clusterrolebinding.rbac.authorization.k8s.io/kube-router created
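If you want to confirm that the CNI is actually running, an optional quick check is to list the Pods in the kube-system namespace — the kube-router Pods created above should show up there:

[node1 ~]$ kubectl get pods -n kube-system   # kube-router, coredns, kube-proxy, etc. should reach the Running state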
With just these two commands, the Master node is fully prepared.
All that's left is to connect the Worker nodes to the Master node
so they can communicate with each other.
The command that makes the connection is the kubeadm join ... line mentioned above.
● Worker node1
[node2 ~]$ kubeadm join 192.168.0.23:6443 --token 3wchch.bzdc2ii7rqv57ojn \
>     --discovery-token-ca-cert-hash sha256:698a9c85aacc2cdd9dbc45eb88138f5c0cbda5b9120ab340c3d7b9e61508e20a
Initializing machine ID from random generator.
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[node2 ~]$
● Worker node2
[node3 ~]$ kubeadm join 192.168.0.23:6443 --token 3wchch.bzdc2ii7rqv57ojn \
>     --discovery-token-ca-cert-hash sha256:698a9c85aacc2cdd9dbc45eb88138f5c0cbda5b9120ab340c3d7b9e61508e20a
Initializing machine ID from random generator.
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[node3 ~]$
The kubeadm join command has now been run on both Worker nodes.
Let's check from the Master node that the Worker nodes are connected.
♥ List the nodes connected to the Master node

[node1 ~]$ kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   65m     v1.20.1
node2   Ready    <none>                 9m19s   v1.20.1
node3   Ready    <none>                 9m3s    v1.20.1

☞ node1 shows up as master, and node2 and node3 are connected and in the Ready state.

♥ Use the -o wide option to see more information

[node1 ~]$ kubectl get nodes -o wide
NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION      CONTAINER-RUNTIME
node1   Ready    control-plane,master   66m   v1.20.1   192.168.0.23   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1
node2   Ready    <none>                 10m   v1.20.1   192.168.0.22   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1
node3   Ready    <none>                 10m   v1.20.1   192.168.0.21   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1

☞ Now you can also see each node's IP, OS image, and kernel version. ^^
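If you want to take it one step further, the third (optional) command from the PWK welcome message deploys nginx. Here is a rough sketch of trying it and checking where the Pods land — the exact resource names are whatever that manifest defines, so they may differ:

[node1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
[node1 ~]$ kubectl get pods -o wide    # shows which Worker node each nginx Pod was scheduled on
[node1 ~]$ kubectl get deploy,svc      # the Deployment and Service created by the manifest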
And with that, everything is truly ready to start studying Kubernetes.
Thank you for reading.