
[K8s Deploy] Kubeadm Deep Dive

안녕유지 2026. 1. 24. 19:23
This post was written while following week 3 of the Cloudnet K8s Deploy study.

 

 

What Is Kubeadm

Kubeadm is the official cluster bootstrap tool provided by Kubernetes. It installs and configures the control-plane components (kube-apiserver, etcd, kube-controller-manager, kube-scheduler) in a standardized way.

It is designed so that a Kubernetes cluster can be created quickly and consistently with just the kubeadm init and kubeadm join commands, with no complex manual setup.

Kubeadm is maintained by SIG Cluster Lifecycle and serves as the foundation for the entire cluster lifecycle, including:

  • creation (init / join)
  • upgrades
  • reconfiguration
  • teardown (reset)
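As a rough sketch of how these lifecycle stages map to kubeadm subcommands (the flags below are illustrative choices, not taken from this post):

```shell
# Lifecycle sketch (illustrative flags; adjust to your environment):
# kubeadm init --pod-network-cidr=10.244.0.0/16    # create the control plane
# kubeadm token create --print-join-command        # print a fresh join command for workers
# kubeadm upgrade plan                             # preview available upgrades
# kubeadm upgrade apply v1.32.11                   # upgrade the control plane
# kubeadm reset -f                                 # tear a node's cluster state back down
```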

 

Building K8s with Kubeadm: Procedure

The component versions used for this kubeadm-built cluster are as follows.

Item          Version    k8s version compatibility
Rocky Linux   10.0-1.6   RHEL 10 source-based distribution; refer to RHEL compatibility info
containerd    v2.1.5     CRI version v1; supports k8s 1.32~1.35 - Link
runc          v1.3.3     needs further investigation - https://github.com/opencontainers/runc
kubelet       v1.32.11   see the k8s version policy docs - Docs
kubeadm       v1.32.11   same as above
kubectl       v1.32.11   same as above
helm          v3.18.6    supports k8s 1.30.x ~ 1.33.x - Docs
flannel CNI   v0.27.3    k8s 1.28 and later - Release

 

1. [Common] Prerequisites

1-1. Switch to the root account (login shell)

1-2. Time/NTP configuration: every node needs a synchronized clock for certificate expiry, log timestamps, and so on

(⎈|HomeLab:N/A) root@k8s-ctr:~# sudo su -
Last login: Sat Jan 24 17:26:50 KST 2026 on pts/1

# Check timedatectl status
(⎈|HomeLab:N/A) root@k8s-ctr:~# timedatectl status
               Local time: Sat 2026-01-24 17:27:09 KST
           Universal time: Sat 2026-01-24 08:27:09 UTC
                 RTC time: Sat 2026-01-24 10:49:41
                Time zone: Asia/Seoul (KST, +0900)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
(⎈|HomeLab:N/A) root@k8s-ctr:~# timedatectl set-local-rtc 0
(⎈|HomeLab:N/A) root@k8s-ctr:~# timedatectl status
               Local time: Sat 2026-01-24 17:27:16 KST
           Universal time: Sat 2026-01-24 08:27:16 UTC
                 RTC time: Sat 2026-01-24 10:49:49
                Time zone: Asia/Seoul (KST, +0900)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
          
# Set the system timezone to Korea (KST, UTC+9): the system clock stays on UTC; only the display is converted to KST
(⎈|HomeLab:N/A) root@k8s-ctr:~# date
(⎈|HomeLab:N/A) root@k8s-ctr:~# timedatectl set-timezone Asia/Seoul
(⎈|HomeLab:N/A) root@k8s-ctr:~# date
Sat Jan 24 05:27:27 PM KST 2026

# systemd is configured to manage the time-sync service (chronyd): chrony is used instead of ntpd (the default on Rocky 9/10)
(⎈|HomeLab:N/A) root@k8s-ctr:~# date
Sat Jan 24 05:27:27 PM KST 2026
(⎈|HomeLab:N/A) root@k8s-ctr:~# timedatectl status
               Local time: Sat 2026-01-24 17:28:19 KST
           Universal time: Sat 2026-01-24 08:28:19 UTC
                 RTC time: Sat 2026-01-24 10:50:51
                Time zone: Asia/Seoul (KST, +0900)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
(⎈|HomeLab:N/A) root@k8s-ctr:~# timedatectl set-ntp true


# Check with chronyc
# Shows which NTP servers chrony knows about and which one it currently tracks.
## Stratum 2: a highly reliable server
## Reach 377: the last 8 polls all succeeded (the maximum value)
(⎈|HomeLab:N/A) root@k8s-ctr:~# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 240b:400d:3:3300:aeda:71>     2   8   377   144    +45ms[  +45ms] +/-  102ms
^- mail.innotab.com              3   7   377    94  +6840us[+6840us] +/-   42ms
^* 211.108.117.211               2   8   377   165   -518us[-1073us] +/- 8817us
^- 2401:c080:1c00:24a1:5400>     2   8   377    38  +3131us[+3131us] +/-   78ms

# A summary report of how accurate the current system time is
(⎈|HomeLab:N/A) root@k8s-ctr:~# chronyc tracking
Reference ID    : D36C75D3 (211.108.117.211)
Stratum         : 3
Ref time (UTC)  : Sat Jan 24 08:26:04 2026
System time     : 0.000317448 seconds slow of NTP time
Last offset     : -0.000554537 seconds
RMS offset      : 0.520825684 seconds
Frequency       : 7.186 ppm fast
Residual freq   : -0.053 ppm
Skew            : 4.019 ppm
Root delay      : 0.010728337 seconds
Root dispersion : 0.003451450 seconds
Update interval : 256.3 seconds
Leap status     : Normal
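The `System time` line above is the number worth watching. As a small sketch (the sample output from above is inlined via a heredoc so this runs standalone; on a live node you would pipe `chronyc tracking` directly), the offset can be extracted and checked against a threshold:

```shell
# Extract the current offset from `chronyc tracking` output (sample inlined)
offset=$(awk '/^System time/ {print $4}' <<'EOF'
Reference ID    : D36C75D3 (211.108.117.211)
Stratum         : 3
System time     : 0.000317448 seconds slow of NTP time
Leap status     : Normal
EOF
)
# Flag drift above 0.1 s (awk handles the float comparison)
awk -v o="$offset" 'BEGIN { exit !(o < 0.1) }' && echo "clock OK (offset ${offset}s)"
```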

 

1-3. Configure SELinux, disable firewalld

# SELinux: Permissive mode is recommended for Kubernetes
(⎈|HomeLab:N/A) root@k8s-ctr:~# getenforce
Permissive
(⎈|HomeLab:N/A) root@k8s-ctr:~# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33
(⎈|HomeLab:N/A) root@k8s-ctr:~# setenforce 0
(⎈|HomeLab:N/A) root@k8s-ctr:~# getenforce
Permissive
(⎈|HomeLab:N/A) root@k8s-ctr:~# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

# Keep Permissive mode across reboots
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/selinux/config | grep ^SELINUX
SELINUX=permissive
SELINUXTYPE=targeted
(⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/selinux/config | grep ^SELINUX
SELINUX=permissive
SELINUXTYPE=targeted

# Disable firewalld
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)

Jan 24 14:57:10 localhost systemd[1]: Starting firewalld.service - firewalld - dynamic firewall daemon...
Jan 24 14:57:11 localhost systemd[1]: Started firewalld.service - firewalld - dynamic firewall daemon.
Jan 24 14:57:23 k8s-ctr systemd[1]: Stopping firewalld.service - firewalld - dynamic firewall daemon...
Jan 24 14:57:24 k8s-ctr systemd[1]: firewalld.service: Deactivated successfully.
Jan 24 14:57:24 k8s-ctr systemd[1]: Stopped firewalld.service - firewalld - dynamic firewall daemon.
Jan 24 14:57:24 k8s-ctr systemd[1]: firewalld.service: Consumed 541ms CPU time, 70.7M memory peak.
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl disable --now firewalld
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)

Jan 24 14:57:10 localhost systemd[1]: Starting firewalld.service - firewalld - dynamic firewall daemon...
Jan 24 14:57:11 localhost systemd[1]: Started firewalld.service - firewalld - dynamic firewall daemon.
Jan 24 14:57:23 k8s-ctr systemd[1]: Stopping firewalld.service - firewalld - dynamic firewall daemon...
Jan 24 14:57:24 k8s-ctr systemd[1]: firewalld.service: Deactivated successfully.
Jan 24 14:57:24 k8s-ctr systemd[1]: Stopped firewalld.service - firewalld - dynamic firewall daemon.
Jan 24 14:57:24 k8s-ctr systemd[1]: firewalld.service: Consumed 541ms CPU time, 70.7M memory peak.

 

1-4. Disable swap

# Disable swap
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   64G  0 disk
├─sda1   8:1    0  600M  0 part /boot/efi
└─sda3   8:3    0 59.6G  0 part /
(⎈|HomeLab:N/A) root@k8s-ctr:~# free -h
               total        used        free      shared  buff/cache   available
Mem:           2.8Gi       911Mi       471Mi        19Mi       1.5Gi       1.9Gi
Swap:             0B          0B          0B
(⎈|HomeLab:N/A) root@k8s-ctr:~# free -h | grep Swap
Swap:             0B          0B          0B
(⎈|HomeLab:N/A) root@k8s-ctr:~# swapoff -a
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   64G  0 disk
├─sda1   8:1    0  600M  0 part /boot/efi
└─sda3   8:3    0 59.6G  0 part /
(⎈|HomeLab:N/A) root@k8s-ctr:~# free -h | grep Swap
Swap:             0B          0B          0B


# Remove the swap line from /etc/fstab so swap stays disabled after reboot
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/fstab | grep swap
(⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i '/swap/d' /etc/fstab
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/fstab | grep swap
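An alternative to deleting the swap line is commenting it out, which is reversible. A minimal sketch on a temporary copy (the fstab content below is a made-up sample, not from this node):

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 /    xfs  defaults 0 0
/dev/sda2      none swap defaults 0 0
EOF
# Comment out any line that mounts a swap filesystem
sed -ri '/\sswap\s/ s/^/#/' "$tmp"
result=$(grep swap "$tmp")
echo "$result"   # the swap line is now prefixed with '#'
rm -f "$tmp"
```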

 

1-5. Kernel modules and kernel parameters

# Check kernel modules
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -iE 'overlay|br_netfilter'
br_netfilter           32768  0
bridge                327680  1 br_netfilter
overlay               200704  16

# Load the kernel modules
(⎈|HomeLab:N/A) root@k8s-ctr:~# modprobe overlay
(⎈|HomeLab:N/A) root@k8s-ctr:~# modprobe br_netfilter
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -iE 'overlay|br_netfilter'
br_netfilter           32768  0
bridge                327680  1 br_netfilter
overlay               200704  16


(⎈|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
overlay
br_netfilter
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/modules-load.d/
/etc/modules-load.d/
└── k8s.conf

1 directory, 1 file


# Kernel parameters: network settings so that bridged traffic passes through iptables
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/sysctl.d/
/etc/sysctl.d/
├── 99-sysctl.conf -> ../sysctl.conf
└── k8s.conf

1 directory, 2 files

# Apply the settings
(⎈|HomeLab:N/A) root@k8s-ctr:~# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/10-map-count.conf ...
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
* Applying /usr/lib/sysctl.d/50-redhat.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
kernel.yama.ptrace_scope = 0
vm.max_map_count = 1048576
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
kernel.core_pipe_limit = 16
fs.suid_dumpable = 2
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.cni0.rp_filter = 2
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.flannel/1.rp_filter = 2
net.ipv4.conf.lo.rp_filter = 2
net.ipv4.conf.veth066523cb.rp_filter = 2
net.ipv4.conf.veth96506d84.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.cni0.accept_source_route = 0
net.ipv4.conf.enp0s8.accept_source_route = 0
net.ipv4.conf.enp0s9.accept_source_route = 0
net.ipv4.conf.flannel/1.accept_source_route = 0
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.veth066523cb.accept_source_route = 0
net.ipv4.conf.veth96506d84.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.cni0.promote_secondaries = 1
net.ipv4.conf.enp0s8.promote_secondaries = 1
net.ipv4.conf.enp0s9.promote_secondaries = 1
net.ipv4.conf.flannel/1.promote_secondaries = 1
net.ipv4.conf.lo.promote_secondaries = 1
net.ipv4.conf.veth066523cb.promote_secondaries = 1
net.ipv4.conf.veth96506d84.promote_secondaries = 1
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
net.core.optmem_max = 81920
kernel.pid_max = 4194304
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.cni0.rp_filter = 1
net.ipv4.conf.enp0s8.rp_filter = 1
net.ipv4.conf.enp0s9.rp_filter = 1
net.ipv4.conf.flannel/1.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.veth066523cb.rp_filter = 1
net.ipv4.conf.veth96506d84.rp_filter = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1


(⎈|HomeLab:N/A) root@k8s-ctr:~# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
(⎈|HomeLab:N/A) root@k8s-ctr:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
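The three keys can also be verified in one pass. A sketch that parses the key names out of the conf content (inlined here so it runs standalone; on the node, read /etc/sysctl.d/k8s.conf instead and replace the echo with a real `sysctl -n` check):

```shell
# Pull the parameter names out of the k8s.conf content
keys=$(awk -F= '/=/ {gsub(/ /,"",$1); print $1}' <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
)
for k in $keys; do
  # On a real node: [ "$(sysctl -n "$k")" = 1 ] || echo "FAIL: $k"
  echo "check: $k"
done
```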

 

1-6. Configure /etc/hosts

# Configure hosts
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts
# Loopback entries; do not change.
# For historical reasons, localhost precedes localhost.localdomain:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# See hosts(5) for proper format and other examples:
# 192.168.1.10 foo.example.org foo
# 192.168.1.13 bar.example.org bar
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2
(⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF >> /etc/hosts
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2
EOF
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts
# Loopback entries; do not change.
# For historical reasons, localhost precedes localhost.localdomain:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# See hosts(5) for proper format and other examples:
# 192.168.1.10 foo.example.org foo
# 192.168.1.13 bar.example.org bar
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2


# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-ctr
PING k8s-ctr (192.168.10.100) 56(84) bytes of data.
64 bytes from k8s-ctr (192.168.10.100): icmp_seq=1 ttl=64 time=0.050 ms

--- k8s-ctr ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w1
PING k8s-w1 (192.168.10.101) 56(84) bytes of data.
64 bytes from k8s-w1 (192.168.10.101): icmp_seq=1 ttl=64 time=0.644 ms

--- k8s-w1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms
(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w2
PING k8s-w2 (192.168.10.102) 56(84) bytes of data.
64 bytes from k8s-w2 (192.168.10.102): icmp_seq=1 ttl=64 time=0.511 ms

--- k8s-w2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms
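Note that the `cat /etc/hosts` output above shows the three entries twice, because they were already present before the append. A `grep` guard makes the append idempotent; a sketch on a temporary file:

```shell
hosts=$(mktemp)
echo "192.168.10.100 k8s-ctr" > "$hosts"   # pretend this entry already exists
for entry in "192.168.10.100 k8s-ctr" "192.168.10.101 k8s-w1" "192.168.10.102 k8s-w2"; do
  # -x matches the whole line, -F treats the entry as a fixed string
  grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
lines=$(wc -l < "$hosts")
echo "$lines"   # 3, not 4: the existing entry was skipped
rm -f "$hosts"
```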

 

 

2. [Common] Install the CRI: containerd

2-1. Install containerd (runc) v2.1.5

# dnf == yum; check repository and version info
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf repolist
repo id                                                                                   repo name
appstream                                                                                 Rocky Linux 10 - AppStream
baseos                                                                                    Rocky Linux 10 - BaseOS
docker-ce-stable                                                                          Docker CE Stable - aarch64
extras                                                                                    Rocky Linux 10 - Extras
kubecolor                                                                                 packages for kubecolor
kubernetes                                                                                Kubernetes
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/yum.repos.d/
/etc/yum.repos.d/
├── docker-ce.repo
├── kubecolor.repo
├── kubernetes.repo
├── rocky-addons.repo
├── rocky-devel.repo
├── rocky-extras.repo
└── rocky.repo

1 directory, 7 files
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf repolist
repo id                                                                                   repo name
appstream                                                                                 Rocky Linux 10 - AppStream
baseos                                                                                    Rocky Linux 10 - BaseOS
docker-ce-stable                                                                          Docker CE Stable - aarch64
extras                                                                                    Rocky Linux 10 - Extras
kubecolor                                                                                 packages for kubecolor
kubernetes                                                                                Kubernetes
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/yum.repos.d/
/etc/yum.repos.d/
├── docker-ce.repo
├── kubecolor.repo
├── kubernetes.repo
├── rocky-addons.repo
├── rocky-devel.repo
├── rocky-extras.repo
└── rocky.repo

1 directory, 7 files
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://download.docker.com/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://download.docker.com/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg


# List every installable containerd.io version
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates containerd.io
Docker CE Stable - aarch64                                                                                                                                   398  B/s | 2.0 kB     00:05
Installed Packages
containerd.io.aarch64                                                                    2.1.5-1.el10                                                                       @docker-ce-stable
Available Packages
containerd.io.aarch64                                                                    1.7.23-3.1.el10                                                                    docker-ce-stable
containerd.io.aarch64                                                                    1.7.24-3.1.el10                                                                    docker-ce-stable
containerd.io.aarch64                                                                    1.7.25-3.1.el10                                                                    docker-ce-stable
containerd.io.aarch64                                                                    1.7.26-3.1.el10                                                                    docker-ce-stable
containerd.io.aarch64                                                                    1.7.27-3.1.el10                                                                    docker-ce-stable
containerd.io.aarch64                                                                    1.7.28-1.el10                                                                      docker-ce-stable
containerd.io.aarch64                                                                    1.7.28-2.el10                                                                      docker-ce-stable
containerd.io.aarch64                                                                    1.7.29-1.el10                                                                      docker-ce-stable
containerd.io.aarch64                                                                    2.1.5-1.el10                                                                       docker-ce-stable
containerd.io.aarch64                                                                    2.2.0-2.el10                                                                       docker-ce-stable
containerd.io.aarch64                                                                    2.2.1-1.el10                                                                       docker-ce-stable


# Install containerd
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y containerd.io-2.1.5-1.el10
Last metadata expiration check: 0:00:44 ago on Sat 24 Jan 2026 05:39:48 PM KST.
Package containerd.io-2.1.5-1.el10.aarch64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!

# Check the installed binaries
(⎈|HomeLab:N/A) root@k8s-ctr:~# which runc && runc --version
/usr/bin/runc
runc version 1.3.3
commit: v1.3.3-0-gd842d771
spec: 1.2.1
go: go1.24.9
libseccomp: 2.5.3
(⎈|HomeLab:N/A) root@k8s-ctr:~# which containerd && containerd --version
/usr/bin/containerd
containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
(⎈|HomeLab:N/A) root@k8s-ctr:~# which containerd-shim-runc-v2 && containerd-shim-runc-v2 -v
/usr/bin/containerd-shim-runc-v2
containerd-shim-runc-v2:
  Version:  v2.1.5
  Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
  Go version: go1.24.9

(⎈|HomeLab:N/A) root@k8s-ctr:~# which ctr && ctr --version
/usr/bin/ctr
ctr containerd.io v2.1.5


# Generate the default config and enable SystemdCgroup (critical)
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/containerd/config.toml | grep -i systemdcgroup
            SystemdCgroup = false
(⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/containerd/config.toml | grep -i systemdcgroup
            SystemdCgroup = true
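One caveat (not from this post): containerd 2.x ships a minimal config.toml, so the `SystemdCgroup` key may not exist for sed to replace. In that case, generate the full default config first with `containerd config default > /etc/containerd/config.toml`, then flip the flag. A sketch on a temp file, using the runc options section name as it appears in containerd 2.x defaults:

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
# On the node the target would be /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
val=$(grep SystemdCgroup "$cfg")
echo "$val"
rm -f "$cfg"
```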
            
            
# Start and enable containerd
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl enable --now containerd
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl status containerd --no-pager
● containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: disabled)
     Active: active (running) since Sat 2026-01-24 14:58:47 KST; 2h 45min ago
 Invocation: 15a09f3b93124c9986fb4913578e0154
       Docs: https://containerd.io
   Main PID: 5008 (containerd)
      Tasks: 121
     Memory: 855.5M (peak: 873.2M)
        CPU: 2min 6.573s
     CGroup: /system.slice/containerd.service
             ├─5008 /usr/bin/containerd
             ├─5931 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id c91cc3779b6d53b845614ee32d79e7281e7e6004cabd8ddcd0bfbd27bde927c2 -address /run/containerd/containerd.sock
             ├─5952 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id c3a07a12edde49cc5ce299cbd849e91e2c58b00b215b8991ffb2e1577cf88e92 -address /run/containerd/containerd.sock
             ├─6022 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7f865707e4b038307dedfeb9cb8a49a094c9daf1462517efecbff651fc5c6619 -address /run/containerd/containerd.sock
             ├─6039 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id aa47ccd8f7693f24dde5d70646bbf83580e12abffe191f2cbb3b03984ee2e1d9 -address /run/containerd/containerd.sock
             ├─6368 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4b69ed2f8eeec3eb4db3fee2291bfa38b54b2de942109ef9409cd48bd4fbc0b4 -address /run/containerd/containerd.sock
             ├─6862 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id fc0ef56b33f8291bbbd5fe23fba807c3347f0991d80a97da022107f3ddfaa55e -address /run/containerd/containerd.sock
             ├─7280 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id cad15631e844399414452ca6597a58165af7c4096d66804ce632c54d54f1c756 -address /run/containerd/containerd.sock
             └─7290 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 188e8bdda1af31fe0d3591245aeef1bec7258a23ead3d0c423d372905e7d3a5f -address /run/containerd/containerd.sock
             
             
             
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl status containerd --no-pager
(⎈|HomeLab:N/A) root@k8s-ctr:~# journalctl -u containerd.service --no-pager
(⎈|HomeLab:N/A) root@k8s-ctr:~# pstree -alnp
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemd-cgls --no-pager


# Check containerd's Unix domain socket: used by kubelet, and by the three containerd clients (ctr, nerdctl, crictl)
(⎈|HomeLab:N/A) root@k8s-ctr:~# containerd config dump | grep -n containerd.sock
11:  address = '/run/containerd/containerd.sock'
(⎈|HomeLab:N/A) root@k8s-ctr:~# ls -l /run/containerd/containerd.sock
srw-rw----. 1 root root 0 Jan 24 14:58 /run/containerd/containerd.sock
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -xl | grep containerd
u_str LISTEN 0      4096                                                /run/containerd/containerd.sock.ttrpc 17656             * 0
u_str LISTEN 0      4096                                                      /run/containerd/containerd.sock 18544             * 0
u_str LISTEN 0      4096   /run/containerd/s/6a1eec9193b24526a055dd3c67beb7a02bb755b4bdef487dfbb58ff75e2dc31c 27398             * 0
u_str LISTEN 0      4096   /run/containerd/s/3c5471325c662956ea7099bf0f607aed6f94b4d5ed7de841d808da09647fd65f 21167             * 0
u_str LISTEN 0      4096   /run/containerd/s/323bfd1bd6ebd2ce9a3420b50eae4533fe80584ab525e264c058efda3c5be61a 23173             * 0
u_str LISTEN 0      4096   /run/containerd/s/05d2b657bfce55010895d0fcbf5ef9c84cbcdb22d51de9420dc5d009153345e8 21609             * 0
u_str LISTEN 0      4096   /run/containerd/s/59d7141628d54c0532476756e3589d8ed513fa390c889248db7d1474c5d10011 32850             * 0
u_str LISTEN 0      4096   /run/containerd/s/96e904766632d214a6e4f0a072e9cb3f650edbe5f67d9ab6d2a7a95a685680ae 31855             * 0
u_str LISTEN 0      4096   /run/containerd/s/6439866fe5db71cda35a979ec11ff91aebc5066eb838654ba8a3e502089642f8 21775             * 0
u_str LISTEN 0      4096   /run/containerd/s/97c57dfb95ff3f013f11b4dc2f2b0e00f04e2b0c6b93c7e67ed34b1cd962e667 21782             * 0
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -xnp | grep containerd
u_str ESTAB 0      0                                                                                       * 21789             * 21790  users:(("containerd",pid=5008,fd=43))
u_str ESTAB 0      0      /run/containerd/s/323bfd1bd6ebd2ce9a3420b50eae4533fe80584ab525e264c058efda3c5be61a 23262             * 23966  users:(("containerd-shim",pid=6368,fd=12))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 20049             * 21731  users:(("containerd",pid=5008,fd=32))
u_str ESTAB 0      0                                                                                       * 21170             * 19349  users:(("containerd",pid=5008,fd=29))
u_str ESTAB 0      0                                                                                       * 19443             * 20065  users:(("containerd",pid=5008,fd=40))
u_str ESTAB 0      0                                                                                       * 23962             * 22331  users:(("containerd-shim",pid=6368,fd=11))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 21728             * 21727  users:(("containerd",pid=5008,fd=31))
u_str ESTAB 0      0                                                                                       * 17083             * 18535  users:(("containerd",pid=5008,fd=2),("containerd",pid=5008,fd=1))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 21910             * 22665  users:(("containerd",pid=5008,fd=55))
u_str ESTAB 0      0                                                                                       * 21727             * 21728  users:(("containerd-shim",pid=5931,fd=3))
u_str ESTAB 0      0                                                                                       * 22665             * 21910  users:(("containerd-shim",pid=6039,fd=11))
u_str ESTAB 0      0      /run/containerd/s/97c57dfb95ff3f013f11b4dc2f2b0e00f04e2b0c6b93c7e67ed34b1cd962e667 21790             * 21789  users:(("containerd-shim",pid=6039,fd=10))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 22331             * 23962  users:(("containerd",pid=5008,fd=15))
u_str ESTAB 0      0      /run/containerd/s/6439866fe5db71cda35a979ec11ff91aebc5066eb838654ba8a3e502089642f8 20065             * 19443  users:(("containerd-shim",pid=6022,fd=10))
u_str ESTAB 0      0                                                                                       * 21731             * 20049  users:(("containerd-shim",pid=5952,fd=11))
u_str ESTAB 0      0      /run/containerd/s/3c5471325c662956ea7099bf0f607aed6f94b4d5ed7de841d808da09647fd65f 19349             * 21170  users:(("containerd-shim",pid=5931,fd=10))
u_str ESTAB 0      0                                                                                       * 23966             * 23262  users:(("containerd",pid=5008,fd=79))
u_str ESTAB 0      0                                                                                       * 28060             * 29048  users:(("containerd",pid=5008,fd=23))
u_str ESTAB 0      0                                                                                       * 21291             * 21292  users:(("containerd",pid=5008,fd=54))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 28118             * 25493  users:(("containerd",pid=5008,fd=28))
u_str ESTAB 0      0                                                                                       * 19354             * 21612  users:(("containerd",pid=5008,fd=30))
u_str ESTAB 0      0      /run/containerd/s/96e904766632d214a6e4f0a072e9cb3f650edbe5f67d9ab6d2a7a95a685680ae 32955             * 32954  users:(("containerd-shim",pid=7290,fd=12))
u_str ESTAB 0      0                                                                                       * 21284             * 21285  users:(("containerd",pid=5008,fd=51))
u_str ESTAB 0      0                                                                                       * 25493             * 28118  users:(("containerd-shim",pid=6862,fd=11))
u_str ESTAB 0      0      /run/containerd/s/05d2b657bfce55010895d0fcbf5ef9c84cbcdb22d51de9420dc5d009153345e8 21292             * 21291  users:(("containerd-shim",pid=5952,fd=3))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 30422             * 32924  users:(("containerd",pid=5008,fd=99))
u_str ESTAB 0      0                                                                                       * 30332             * 32854  users:(("containerd",pid=5008,fd=96))
u_str ESTAB 0      0      /run/containerd/s/97c57dfb95ff3f013f11b4dc2f2b0e00f04e2b0c6b93c7e67ed34b1cd962e667 23561             * 23560  users:(("containerd-shim",pid=6039,fd=12))
u_str ESTAB 0      0      /run/containerd/s/6a1eec9193b24526a055dd3c67beb7a02bb755b4bdef487dfbb58ff75e2dc31c 29048             * 28060  users:(("containerd-shim",pid=6862,fd=10))
u_str ESTAB 0      0                                                                                       * 32954             * 32955  users:(("containerd",pid=5008,fd=111))
u_str ESTAB 0      0                                                         /run/containerd/containerd.sock 22260             * 20330  users:(("containerd",pid=5008,fd=19))
u_str ESTAB 0      0      /run/containerd/s/323bfd1bd6ebd2ce9a3420b50eae4533fe80584ab525e264c058efda3c5be61a 23950             * 23947  users:(("containerd-shim",pid=6368,fd=10))
u_str ESTAB 0      0      /run/containerd/s/3c5471325c662956ea7099bf0f607aed6f94b4d5ed7de841d808da09647fd65f 21285             * 21284  users:(("containerd-shim",pid=5931,fd=12))
u_str ESTAB 0      0                                                                                       * 30051             * 30810  users:(("containerd",pid=5008,fd=95))
u_str ESTAB 0      0                                                                                       * 32924             * 30422  users:(("containerd-shim",pid=7290,fd=11))
u_str ESTAB 0      0                                                                                       * 23947             * 23950  users:(("containerd",pid=5008,fd=14))
u_str ESTAB 0      0                                                                                       * 23560             * 23561  users:(("containerd",pid=5008,fd=69))
u_str ESTAB 0      0      /run/containerd/s/59d7141628d54c0532476756e3589d8ed513fa390c889248db7d1474c5d10011 32854             * 30332  users:(("containerd-shim",pid=7280,fd=10))
u_str ESTAB 0      0      /run/containerd/s/6a1eec9193b24526a055dd3c67beb7a02bb755b4bdef487dfbb58ff75e2dc31c 30810             * 30051  users:(("containerd-shim",pid=6862,fd=12))
u_str ESTAB 0      0                                                         /run/containerd/containerd.sock 23882             * 20331  users:(("containerd",pid=5008,fd=20))
u_str ESTAB 0      0      /run/containerd/s/05d2b657bfce55010895d0fcbf5ef9c84cbcdb22d51de9420dc5d009153345e8 21612             * 19354  users:(("containerd-shim",pid=5952,fd=10))
u_str ESTAB 0      0      /run/containerd/s/59d7141628d54c0532476756e3589d8ed513fa390c889248db7d1474c5d10011 31941             * 31940  users:(("containerd-shim",pid=7280,fd=12))
u_str ESTAB 0      0                                                                                       * 21956             * 21957  users:(("containerd",pid=5008,fd=72))
u_str ESTAB 0      0      /run/containerd/s/6439866fe5db71cda35a979ec11ff91aebc5066eb838654ba8a3e502089642f8 21957             * 21956  users:(("containerd-shim",pid=6022,fd=12))
u_str ESTAB 0      0      /run/containerd/s/96e904766632d214a6e4f0a072e9cb3f650edbe5f67d9ab6d2a7a95a685680ae 31859             * 31857  users:(("containerd-shim",pid=7290,fd=10))
u_str ESTAB 0      0                                                                                       * 21915             * 21916  users:(("containerd-shim",pid=6022,fd=11))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 21916             * 21915  users:(("containerd",pid=5008,fd=56))
u_str ESTAB 0      0                                                                                       * 31940             * 31941  users:(("containerd",pid=5008,fd=115))
u_str ESTAB 0      0                                                                                       * 31857             * 31859  users:(("containerd",pid=5008,fd=97))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 31930             * 30421  users:(("containerd",pid=5008,fd=98))
u_str ESTAB 0      0                                                                                       * 30421             * 31930  users:(("containerd-shim",pid=7280,fd=11))


# Check the containerd version and plugin status
(⎈|HomeLab:N/A) root@k8s-ctr:~# ctr --address /run/containerd/containerd.sock version
Client:
  Version:  v2.1.5
  Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
  Go version: go1.24.9

Server:
  Version:  v2.1.5
  Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
  UUID: d37e58f0-2cca-4578-8f78-74c328cb60e5
(⎈|HomeLab:N/A) root@k8s-ctr:~# ctr plugins ls
TYPE                                      ID                       PLATFORMS         STATUS
io.containerd.content.v1                  content                  -                 ok
io.containerd.image-verifier.v1           bindir                   -                 ok
io.containerd.internal.v1                 opt                      -                 ok
io.containerd.warning.v1                  deprecations             -                 ok
io.containerd.snapshotter.v1              blockfile                linux/arm64/v8    skip
io.containerd.snapshotter.v1              devmapper                linux/arm64/v8    skip
io.containerd.snapshotter.v1              erofs                    linux/arm64/v8    skip
io.containerd.snapshotter.v1              native                   linux/arm64/v8    ok
io.containerd.snapshotter.v1              overlayfs                linux/arm64/v8    ok
io.containerd.snapshotter.v1              zfs                      linux/arm64/v8    skip
io.containerd.event.v1                    exchange                 -                 ok
io.containerd.monitor.task.v1             cgroups                  linux/arm64/v8    ok
io.containerd.metadata.v1                 bolt                     -                 ok
io.containerd.gc.v1                       scheduler                -                 ok
io.containerd.differ.v1                   erofs                    linux/arm64/v8    ok
io.containerd.differ.v1                   walking                  linux/arm64/v8    ok
io.containerd.lease.v1                    manager                  -                 ok
io.containerd.service.v1                  containers-service       -                 ok
io.containerd.service.v1                  content-service          -                 ok
io.containerd.service.v1                  diff-service             -                 ok
io.containerd.service.v1                  images-service           -                 ok
io.containerd.service.v1                  introspection-service    -                 ok
io.containerd.service.v1                  namespaces-service       -                 ok
io.containerd.service.v1                  snapshots-service        -                 ok
io.containerd.shim.v1                     manager                  -                 ok
io.containerd.runtime.v2                  task                     linux/arm64/v8    ok
io.containerd.service.v1                  tasks-service            -                 ok
io.containerd.grpc.v1                     containers               -                 ok
io.containerd.grpc.v1                     content                  -                 ok
io.containerd.grpc.v1                     diff                     -                 ok
io.containerd.grpc.v1                     events                   -                 ok
io.containerd.grpc.v1                     images                   -                 ok
io.containerd.grpc.v1                     introspection            -                 ok
io.containerd.grpc.v1                     leases                   -                 ok
io.containerd.grpc.v1                     namespaces               -                 ok
io.containerd.sandbox.store.v1            local                    -                 ok
io.containerd.transfer.v1                 local                    -                 ok
io.containerd.cri.v1                      images                   -                 ok
io.containerd.cri.v1                      runtime                  linux/arm64/v8    ok
io.containerd.podsandbox.controller.v1    podsandbox               -                 ok
io.containerd.sandbox.controller.v1       shim                     -                 ok
io.containerd.grpc.v1                     sandbox-controllers      -                 ok
io.containerd.grpc.v1                     sandboxes                -                 ok
io.containerd.grpc.v1                     snapshots                -                 ok
io.containerd.streaming.v1                manager                  -                 ok
io.containerd.grpc.v1                     streaming                -                 ok
io.containerd.grpc.v1                     tasks                    -                 ok
io.containerd.grpc.v1                     transfer                 -                 ok
io.containerd.grpc.v1                     version                  -                 ok
io.containerd.monitor.container.v1        restart                  -                 ok
io.containerd.tracing.processor.v1        otlp                     -                 skip
io.containerd.internal.v1                 tracing                  -                 skip
io.containerd.ttrpc.v1                    otelttrpc                -                 ok
io.containerd.grpc.v1                     healthcheck              -                 ok
io.containerd.nri.v1                      nri                      -                 ok
io.containerd.grpc.v1                     cri                      -                 ok
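In this list, the entries kubelet actually depends on are the CRI runtime plugin and the `overlayfs` snapshotter; `skip` entries (zfs, devmapper, etc.) are harmless. A small sketch of how that check could be automated is below. Note this is a hedged sketch: the embedded sample stands in for real `ctr plugins ls` output so it runs anywhere; on the node you would pipe the real command instead.

```shell
# Embedded sample stands in for `ctr plugins ls` output so the sketch runs
# standalone; on the node, replace the printf with: ctr plugins ls
sample='io.containerd.snapshotter.v1  overlayfs  linux/arm64/v8  ok
io.containerd.snapshotter.v1  zfs        linux/arm64/v8  skip
io.containerd.cri.v1          runtime    linux/arm64/v8  ok'

status=ok
for p in overlayfs runtime; do
  # column 2 is the plugin ID, last column is its state
  s=$(printf '%s\n' "$sample" | awk -v p="$p" '$2==p {print $NF}')
  echo "$p: $s"
  [ "$s" = ok ] || status=bad
done
echo "plugin check: $status"
```

If either plugin showed `skip` or `error` here, `kubeadm init` preflight would later fail at the CRI socket check, so it is cheap to catch early.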


3. [Common] Install kubeadm, kubelet, and kubectl

3-1. Install kubeadm, kubelet, and kubectl v1.32.11

# Add the Kubernetes package repo
## exclude=... : prevents kubelet from being upgraded accidentally during a dnf update
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf repolist
repo id                                                                                   repo name
appstream                                                                                 Rocky Linux 10 - AppStream
baseos                                                                                    Rocky Linux 10 - BaseOS
docker-ce-stable                                                                          Docker CE Stable - aarch64
extras                                                                                    Rocky Linux 10 - Extras
kubecolor                                                                                 packages for kubecolor
kubernetes                                                                                Kubernetes

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/yum.repos.d/
/etc/yum.repos.d/
├── docker-ce.repo
├── kubecolor.repo
├── kubernetes.repo
├── rocky-addons.repo
├── rocky-devel.repo
├── rocky-extras.repo
└── rocky.repo

1 directory, 7 files

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf makecache
Docker CE Stable - aarch64                                                                                                                                   394  B/s | 2.0 kB     00:05
packages for kubecolor                                                                                                                                       305  B/s | 1.5 kB     00:05
Kubernetes                                                                                                                                                   165  B/s | 1.7 kB     00:10
Rocky Linux 10 - BaseOS                                                                                                                                      412  B/s | 4.3 kB     00:10
Rocky Linux 10 - AppStream                                                                                                                                   371  B/s | 4.3 kB     00:12
Rocky Linux 10 - Extras                                                                                                                                      294  B/s | 3.1 kB     00:10

# Install
## --disableexcludes=... : ignores the exclude rule defined in the kubernetes repo for this command only (a one-off override)
## Check the installable versions
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates kubelet
Last metadata expiration check: 0:01:25 ago on Sat 24 Jan 2026 05:49:23 PM KST.
Installed Packages
kubelet.aarch64                                                                        1.32.11-150500.1.1                                                                         @kubernetes
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates kubelet --disableexcludes=kubernetes
Last metadata expiration check: 0:01:28 ago on Sat 24 Jan 2026 05:49:23 PM KST.
Installed Packages
kubelet.aarch64                                                                        1.32.11-150500.1.1                                                                         @kubernetes
Available Packages
kubelet.aarch64                                                                        1.32.0-150500.1.1                                                                          kubernetes
kubelet.ppc64le                                                                        1.32.0-150500.1.1                                                                          kubernetes
...
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates kubeadm --disableexcludes=kubernetes
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates kubectl --disableexcludes=kubernetes


## If no version is specified, the latest available version in the repo is installed.
(⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Last metadata expiration check: 0:03:51 ago on Sat 24 Jan 2026 05:49:23 PM KST.
Package kubelet-1.32.11-150500.1.1.aarch64 is already installed.
Package kubeadm-1.32.11-150500.1.1.aarch64 is already installed.
Package kubectl-1.32.11-150500.1.1.aarch64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
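If you want to pin the exact release instead of taking whatever is newest in the repo, the version-suffixed package names can be passed explicitly. The sketch below only builds and prints the command (side-effect free); the version string is the one shown in the repo listing above.

```shell
# Build the pinned install command and echo it instead of running it,
# so the sketch is safe to execute anywhere.
V='1.32.11-150500.1.1'
cmd="dnf install -y kubelet-${V} kubeadm-${V} kubectl-${V} --disableexcludes=kubernetes"
echo "$cmd"
```

Pinning all three to the same patch release matters because kubeadm enforces version-skew rules against kubelet and kubectl.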


# Enable kubelet (actual startup happens after kubeadm init)
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl enable --now kubelet
(⎈|HomeLab:N/A) root@k8s-ctr:~# ps -ef |grep kubelet
root        6124    5952  5 15:03 ?        00:08:47 kube-apiserver --advertise-address=192.168.10.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root        6304       1  3 15:03 ?        00:05:54 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.100 --pod-infra-container-image=registry.k8s.io/pause:3.10
root       53912   42408  0 17:53 pts/2    00:00:00 grep --color=auto kubelet

# Verify the installed binaries
(⎈|HomeLab:N/A) root@k8s-ctr:~# which kubeadm && kubeadm version -o yaml
/usr/bin/kubeadm
clientVersion:
  buildDate: "2025-12-16T18:06:36Z"
  compiler: gc
  gitCommit: 2195eae9e91f2e72114365d9bb9c670d0c08de12
  gitTreeState: clean
  gitVersion: v1.32.11
  goVersion: go1.24.11
  major: "1"
  minor: "32"
  platform: linux/arm64
(⎈|HomeLab:N/A) root@k8s-ctr:~# which kubectl && kubectl version --client=true
/usr/bin/kubectl
Client Version: v1.32.11
Kustomize Version: v5.5.0
(⎈|HomeLab:N/A) root@k8s-ctr:~# which kubelet && kubelet --version
/usr/bin/kubelet
Kubernetes v1.32.11
(⎈|HomeLab:N/A) root@k8s-ctr:~# which crictl && crictl version
/usr/bin/crictl
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v2.1.5
RuntimeApiVersion:  v1

# Write the /etc/crictl.yaml file
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
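Without this file, crictl falls back to probing a list of default endpoints and prints deprecation warnings, so it is worth a quick sanity check that both endpoints point at the same containerd socket. The sketch below writes the same config to a temp path so it can run off-node; on the node you would grep /etc/crictl.yaml directly.

```shell
# Re-create the config in a temp file and count the endpoint lines that
# point at containerd's socket (expect 2: runtime-endpoint and image-endpoint).
tmp=$(mktemp)
cat << EOF > "$tmp"
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
n=$(grep -c 'unix:///run/containerd/containerd.sock' "$tmp")
echo "endpoint lines: $n"
rm -f "$tmp"
```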


# kubernetes-cni : check the CNI plugin binaries used for Pod networking
(⎈|HomeLab:N/A) root@k8s-ctr:~# ls -al /opt/cni/bin
total 66036
drwxr-xr-x. 2 root root    4096 Jan 24 15:05 .
drwxr-xr-x. 3 root root      17 Jan 24 14:59 ..
-rwxr-xr-x. 1 root root 3239200 Dec 12  2024 bandwidth
-rwxr-xr-x. 1 root root 3731632 Dec 12  2024 bridge
-rwxr-xr-x. 1 root root 9123544 Dec 12  2024 dhcp
-rwxr-xr-x. 1 root root 3379872 Dec 12  2024 dummy
-rwxr-xr-x. 1 root root 3742888 Dec 12  2024 firewall
-rwxr-xr-x. 1 root root 2903098 Jan 24 15:05 flannel
-rwxr-xr-x. 1 root root 3383408 Dec 12  2024 host-device
-rwxr-xr-x. 1 root root 2812400 Dec 12  2024 host-local
-rwxr-xr-x. 1 root root 3380928 Dec 12  2024 ipvlan
-rw-r--r--. 1 root root   11357 Dec 12  2024 LICENSE
-rwxr-xr-x. 1 root root 2953200 Dec 12  2024 loopback
-rwxr-xr-x. 1 root root 3448024 Dec 12  2024 macvlan
-rwxr-xr-x. 1 root root 3312488 Dec 12  2024 portmap
-rwxr-xr-x. 1 root root 3524072 Dec 12  2024 ptp
-rw-r--r--. 1 root root    2343 Dec 12  2024 README.md
-rwxr-xr-x. 1 root root 3091976 Dec 12  2024 sbr
-rwxr-xr-x. 1 root root 2526944 Dec 12  2024 static
-rwxr-xr-x. 1 root root 3516272 Dec 12  2024 tap
-rwxr-xr-x. 1 root root 2956032 Dec 12  2024 tuning
-rwxr-xr-x. 1 root root 3380544 Dec 12  2024 vlan
-rwxr-xr-x. 1 root root 3160560 Dec 12  2024 vrf
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /opt/cni
/opt/cni
└── bin
    ├── bandwidth
    ├── bridge
    ├── dhcp
    ├── dummy
    ├── firewall
    ├── flannel
    ├── host-device
    ├── host-local
    ├── ipvlan
    ├── LICENSE
    ├── loopback
    ├── macvlan
    ├── portmap
    ├── ptp
    ├── README.md
    ├── sbr
    ├── static
    ├── tap
    ├── tuning
    ├── vlan
    └── vrf

2 directories, 21 files


(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/
/etc/cni/
└── net.d
    └── 10-flannel.conflist

2 directories, 1 file
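For reference, the conflist that the kube-flannel DaemonSet drops into this directory typically looks like the fragment below. This is illustrative content based on Flannel's default cni-conf.json, not a dump from this node; verify with `cat /etc/cni/net.d/10-flannel.conflist`.

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": { "hairpinMode": true, "isDefaultGateway": true }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The `flannel` entry delegates actual interface plumbing to the `bridge` plugin seen in /opt/cni/bin above, and `portmap` provides hostPort support.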


(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl is-active kubelet
active
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl status kubelet --no-pager
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sat 2026-01-24 15:03:08 KST; 2h 53min ago
 Invocation: 5d6b1a8fdda84197b96c991789afd05f
       Docs: https://kubernetes.io/docs/
   Main PID: 6304 (kubelet)
      Tasks: 13 (limit: 18742)
     Memory: 45.7M (peak: 50.1M)
        CPU: 6min 3.091s
     CGroup: /system.slice/kubelet.service
             └─6304 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container…



(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /usr/lib/systemd/system/kubelet.service
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
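The outputs of the two cat commands above are omitted. On a kubeadm-managed node, the 10-kubeadm.conf drop-in typically looks like the fragment below (representative content from kubeadm's RPM packaging; verify on the node with `systemctl cat kubelet`). It clears the unit's default ExecStart and relaunches kubelet with the kubeconfig and config flags visible in the ps output earlier.

```ini
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```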

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/kubernetes
/etc/kubernetes
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── healthcheck-client.crt
│   │   ├── healthcheck-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
├── scheduler.conf
└── super-admin.conf

4 directories, 31 files
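All of the PKI material above is generated by kubeadm, with leaf certificates valid for one year by default (the built-in check is `kubeadm certs check-expiration`). The same expiry information can be read straight off a certificate with openssl. The sketch below generates a throwaway self-signed cert so it runs anywhere; on the node you would point the second openssl command at e.g. /etc/kubernetes/pki/apiserver.crt.

```shell
# Create a disposable cert as a stand-in for /etc/kubernetes/pki/apiserver.crt
# and print its expiry date.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
  -keyout "$d/key.pem" -out "$d/cert.pem" -days 365 2>/dev/null
exp=$(openssl x509 -in "$d/cert.pem" -noout -enddate)
echo "$exp"
rm -rf "$d"
```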

# Check cgroup and namespace info
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemd-cgls --no-pager
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsns

# Check containerd's Unix domain socket : used by kubelet, and also by the three containerd clients (ctr, nerdctl, crictl)
(⎈|HomeLab:N/A) root@k8s-ctr:~# ls -l /run/containerd/containerd.sock
srw-rw----. 1 root root 0 Jan 24 14:58 /run/containerd/containerd.sock
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -xl | grep containerd
u_str LISTEN 0      4096                                                /run/containerd/containerd.sock.ttrpc 17656             * 0
u_str LISTEN 0      4096                                                      /run/containerd/containerd.sock 18544             * 0
u_str LISTEN 0      4096   /run/containerd/s/6a1eec9193b24526a055dd3c67beb7a02bb755b4bdef487dfbb58ff75e2dc31c 27398             * 0
u_str LISTEN 0      4096   /run/containerd/s/3c5471325c662956ea7099bf0f607aed6f94b4d5ed7de841d808da09647fd65f 21167             * 0
u_str LISTEN 0      4096   /run/containerd/s/323bfd1bd6ebd2ce9a3420b50eae4533fe80584ab525e264c058efda3c5be61a 23173             * 0
u_str LISTEN 0      4096   /run/containerd/s/05d2b657bfce55010895d0fcbf5ef9c84cbcdb22d51de9420dc5d009153345e8 21609             * 0
u_str LISTEN 0      4096   /run/containerd/s/59d7141628d54c0532476756e3589d8ed513fa390c889248db7d1474c5d10011 32850             * 0
u_str LISTEN 0      4096   /run/containerd/s/96e904766632d214a6e4f0a072e9cb3f650edbe5f67d9ab6d2a7a95a685680ae 31855             * 0
u_str LISTEN 0      4096   /run/containerd/s/6439866fe5db71cda35a979ec11ff91aebc5066eb838654ba8a3e502089642f8 21775             * 0
u_str LISTEN 0      4096   /run/containerd/s/97c57dfb95ff3f013f11b4dc2f2b0e00f04e2b0c6b93c7e67ed34b1cd962e667 21782             * 0
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -xnp | grep containerd
u_str ESTAB 0      0                                                                                       * 21789             * 21790  users:(("containerd",pid=5008,fd=43))
u_str ESTAB 0      0      /run/containerd/s/323bfd1bd6ebd2ce9a3420b50eae4533fe80584ab525e264c058efda3c5be61a 23262             * 23966  users:(("containerd-shim",pid=6368,fd=12))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 20049             * 21731  users:(("containerd",pid=5008,fd=32))
u_str ESTAB 0      0                                                                                       * 21170             * 19349  users:(("containerd",pid=5008,fd=29))
u_str ESTAB 0      0                                                                                       * 19443             * 20065  users:(("containerd",pid=5008,fd=40))
u_str ESTAB 0      0                                                                                       * 23962             * 22331  users:(("containerd-shim",pid=6368,fd=11))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 21728             * 21727  users:(("containerd",pid=5008,fd=31))
u_str ESTAB 0      0                                                                                       * 17083             * 18535  users:(("containerd",pid=5008,fd=2),("containerd",pid=5008,fd=1))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 21910             * 22665  users:(("containerd",pid=5008,fd=55))
u_str ESTAB 0      0                                                                                       * 21727             * 21728  users:(("containerd-shim",pid=5931,fd=3))
u_str ESTAB 0      0                                                                                       * 22665             * 21910  users:(("containerd-shim",pid=6039,fd=11))
u_str ESTAB 0      0      /run/containerd/s/97c57dfb95ff3f013f11b4dc2f2b0e00f04e2b0c6b93c7e67ed34b1cd962e667 21790             * 21789  users:(("containerd-shim",pid=6039,fd=10))
u_str ESTAB 0      0                                                   /run/containerd/containerd.sock.ttrpc 22331             * 23962  users:(("containerd",pid=5008,fd=15))
u_str ESTAB 0      0      /run/containerd/s/6439866fe5db71cda35a979ec11ff91aebc5066eb838654ba8a3e502089642f8 20065             * 19443  users:(("containerd-shim",pid=6022,fd=10))
u_str ESTAB 0      0                                                                                       * 21731             * 20049  users:(("containerd-shim",pid=5952,fd=11))
u_str ESTAB 0      0      /run/containerd/s/3c5471325c662956ea7099bf0f607aed6f94b4d5ed7de841d808da09647fd65f 19349             * 21170  users:(("containerd-shim",pid=5931,fd=10))
u_str ESTAB 0      0                                                                                       * 23966             * 23262  users:(("containerd",pid=5008,fd=79))
u_str ESTAB 0      0                                                                                       * 28060             * 29048  users:(("containerd",pid=5008,fd=23))
u_str ESTAB 0      0                                                                                       * 21291             * 21292  users:(("containerd",pid=5008,fd=54))
...


4. [Control plane node] Build the k8s cluster with kubeadm → install Flannel CNI → install conveniences and verify

4-1. Run kubeadm init

https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/
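The page linked above describes kubeadm init as an ordered sequence of phases, each of which can also be run individually via `kubeadm init phase <name>`. The sketch just prints the major phase names as I read them from the v1.32 docs; ordering details vary slightly between releases, so run `kubeadm init --help` on the node for the authoritative list.

```shell
# Major kubeadm init phase names, roughly in execution order
# (confirm with `kubeadm init --help` on the node).
phases='preflight certs kubeconfig kubelet-start control-plane etcd
upload-config upload-certs mark-control-plane bootstrap-token
kubelet-finalize addon'
i=0
for ph in $phases; do i=$((i+1)); echo "$i. $ph"; done
```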


# Save a snapshot of the baseline environment
(⎈|HomeLab:N/A) root@k8s-ctr:~# crictl images
IMAGE                                     TAG                 IMAGE ID            SIZE
ghcr.io/flannel-io/flannel-cni-plugin     v1.7.1-flannel1     127562bd9047f       5.14MB
ghcr.io/flannel-io/flannel                v0.27.3             d84558c0144bc       33.1MB
registry.k8s.io/coredns/coredns           v1.11.3             2f6c962e7b831       16.9MB
registry.k8s.io/etcd                      3.5.24-0            1211402d28f58       21.9MB
registry.k8s.io/kube-apiserver            v1.32.11            58951ea1a0b5d       26.4MB
registry.k8s.io/kube-controller-manager   v1.32.11            82766e5f2d560       24.2MB
registry.k8s.io/kube-proxy                v1.32.11            dcdb790dc2bfe       27.6MB
registry.k8s.io/kube-scheduler            v1.32.11            cfa17ff3d6634       19.2MB
registry.k8s.io/pause                     3.10                afb61768ce381       268kB
(⎈|HomeLab:N/A) root@k8s-ctr:~# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                               NAMESPACE
2e4e767a2f4d6       2f6c962e7b831       3 hours ago         Running             coredns                   0                   cad15631e8443       coredns-668d6bf9bc-smll6          kube-system
70ec7b4c12bde       2f6c962e7b831       3 hours ago         Running             coredns                   0                   188e8bdda1af3       coredns-668d6bf9bc-hk2cc          kube-system
603ffaa0d45b7       d84558c0144bc       3 hours ago         Running             kube-flannel              0                   fc0ef56b33f82       kube-flannel-ds-xlgfs             kube-flannel
6d44b873d3cfd       dcdb790dc2bfe       3 hours ago         Running             kube-proxy                0                   4b69ed2f8eeec       kube-proxy-jn9xt                  kube-system
587e6a63c5b4b       82766e5f2d560       3 hours ago         Running             kube-controller-manager   0                   7f865707e4b03       kube-controller-manager-k8s-ctr   kube-system
ab4f5f0704d1e       cfa17ff3d6634       3 hours ago         Running             kube-scheduler            0                   aa47ccd8f7693       kube-scheduler-k8s-ctr            kube-system
d2b3157d33f9d       58951ea1a0b5d       3 hours ago         Running             kube-apiserver            0                   c3a07a12edde4       kube-apiserver-k8s-ctr            kube-system
1a37a2f0d42f1       1211402d28f58       3 hours ago         Running             etcd                      0                   c91cc3779b6d5       etcd-k8s-ctr                      kube-system

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/sysconfig/kubelet
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/kubernetes  | tee -a etc_kubernetes-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /var/lib/kubelet | tee -a var_lib_kubelet-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /run/containerd/ -L 3 | tee -a run_containerd-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# pstree -alnp | tee -a pstree-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# systemd-cgls --no-pager | tee -a systemd-cgls-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsns | tee -a lsns-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip addr | tee -a ip_addr-1.txt 
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | tee -a ss-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# df -hT | tee -a df-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# findmnt | tee -a findmnt-1.txt
(⎈|HomeLab:N/A) root@k8s-ctr:~# sysctl -a | tee -a sysctl-1.txt
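All of these `-1.txt` snapshots pay off after `kubeadm init`, when the same commands are captured again and diffed. A self-contained sketch of that diff step (the `snap-*.txt` names are throwaway stand-ins for the real `*-1.txt` / `*-2.txt` pairs):

```shell
# Demo of the snapshot-diff idea with throwaway files; in practice the
# inputs are the *-1.txt (pre-init) and *-2.txt (post-init) captures.
printf 'a\nb\n'    > snap-1.txt   # pre-init state
printf 'a\nb\nc\n' > snap-2.txt   # post-init state

# Count lines that appeared only after "init"
added=$(diff snap-1.txt snap-2.txt | grep -c '^>')
echo "lines added: ${added}"   # → lines added: 1
```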


# Write the kubeadm configuration file
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
bootstrapTokens:
- token: "123456.1234567890123456"
  ttl: "0s"
  usages:
  - signing
  - authentication
nodeRegistration:
  kubeletExtraArgs:
    - name: node-ip
      value: "192.168.10.100"  # if unset, the NAT IP 10.0.2.15 is picked up
  criSocket: "unix:///run/containerd/containerd.sock"
localAPIEndpoint:
  advertiseAddress: "192.168.10.100"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: "1.32.11"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/16"
EOF

# (Optional) Pre-pull the container images: do this especially before upgrades to shorten the work window
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm config images pull
W0124 18:17:42.180559   61813 version.go:109] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0124 18:17:42.180725   61813 version.go:110] falling back to the local client version: v1.32.11
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.32.11
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.32.11
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.32.11
[config/images] Pulled registry.k8s.io/kube-proxy:v1.32.11
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.3
[config/images] Pulled registry.k8s.io/pause:3.10
[config/images] Pulled registry.k8s.io/etcd:3.5.24-0

# Run the k8s control plane initialization
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm init --config="kubeadm-init.yaml"
[init] Using Kubernetes version: v1.32.11
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0124 18:56:07.483873   72139 checks.go:843] detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-ctr kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-ctr localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-ctr localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.219864ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.002549553s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-ctr as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-ctr as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 123456.1234567890123456
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token 123456.1234567890123456 \
	--discovery-token-ca-cert-hash sha256:ab5c260aff4f6a47ac284ea1576fa616c6b4fc29a048e0c34f93969b16bfb221
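The `--discovery-token-ca-cert-hash` in the join command is simply the SHA-256 digest of the cluster CA's public key in DER form, so it can be recomputed with openssl at any time. The sketch below generates a throwaway CA so it runs anywhere; on the control plane the input would be `/etc/kubernetes/pki/ca.crt`:

```shell
# Recompute a kubeadm-style discovery hash: sha256 over the DER-encoded
# public key of the CA certificate. The throwaway CA is only here so the
# demo is self-contained; use /etc/kubernetes/pki/ca.crt on a real node.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
  -out demo-ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

hash=$(openssl x509 -pubkey -in demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //')

echo "sha256:${hash}"
```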
    
    
# Check with crictl
(⎈|HomeLab:N/A) root@k8s-ctr:~# crictl images
IMAGE                                     TAG                 IMAGE ID            SIZE
ghcr.io/flannel-io/flannel-cni-plugin     v1.7.1-flannel1     127562bd9047f       5.14MB
ghcr.io/flannel-io/flannel                v0.27.3             d84558c0144bc       33.1MB
registry.k8s.io/coredns/coredns           v1.11.3             2f6c962e7b831       16.9MB
registry.k8s.io/etcd                      3.5.24-0            1211402d28f58       21.9MB
registry.k8s.io/kube-apiserver            v1.32.11            58951ea1a0b5d       26.4MB
registry.k8s.io/kube-controller-manager   v1.32.11            82766e5f2d560       24.2MB
registry.k8s.io/kube-proxy                v1.32.11            dcdb790dc2bfe       27.6MB
registry.k8s.io/kube-scheduler            v1.32.11            cfa17ff3d6634       19.2MB
registry.k8s.io/pause                     3.10                afb61768ce381       268kB

(⎈|HomeLab:N/A) root@k8s-ctr:~# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                               NAMESPACE
d97aba979ea45       2f6c962e7b831       43 seconds ago      Running             coredns                   0                   71e8a6f3b251d       coredns-668d6bf9bc-qdwrj          kube-system
9d0e7906023a1       2f6c962e7b831       43 seconds ago      Running             coredns                   0                   12705c71a3d83       coredns-668d6bf9bc-c2g8k          kube-system
d1de205be7f67       dcdb790dc2bfe       43 seconds ago      Running             kube-proxy                0                   f0c4ffcef3bbb       kube-proxy-6gfjf                  kube-system
12407e81bc3b3       cfa17ff3d6634       54 seconds ago      Running             kube-scheduler            0                   9b742330f7f07       kube-scheduler-k8s-ctr            kube-system
374b448fdda3c       1211402d28f58       54 seconds ago      Running             etcd                      0                   b354af512cfe1       etcd-k8s-ctr                      kube-system
852224a5098d3       58951ea1a0b5d       54 seconds ago      Running             kube-apiserver            1                   f9c341e85832b       kube-apiserver-k8s-ctr            kube-system
118aeaecb21fc       82766e5f2d560       54 seconds ago      Running             kube-controller-manager   0                   dd4d7205db8c6       kube-controller-manager-k8s-ctr   kube-system


# Write the kubeconfig
(⎈|HomeLab:N/A) root@k8s-ctr:~# mkdir -p /root/.kube
(⎈|HomeLab:N/A) root@k8s-ctr:~# cp -i /etc/kubernetes/admin.conf /root/.kube/config
cp: overwrite '/root/.kube/config'?
(⎈|HomeLab:N/A) root@k8s-ctr:~# chown $(id -u):$(id -g) /root/.kube/config

# Verify
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS   ROLES           AGE    VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   4m6s   v1.32.11   192.168.10.100   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5

# Check the coredns Service name: kube-dns
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m43s

# The cluster-info ConfigMap is public: cluster-info is the 'minimal trust-bootstrap data, available before identity verification'
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info
NAME           DATA   AGE
cluster-info   2      4m56s
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info -o yaml
apiVersion: v1
data:
  jws-kubeconfig-123456: eyJhbGciOiJIUzI1NiIsImtpZCI6IjEyMzQ1NiJ9..K4CA_0Z5R0GWkao4kj4kgPHTmKl1f-G5GQQUDTX2Lek
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://192.168.10.100:6443
      name: ""
    contexts: null
    current-context: ""
    kind: Config
    preferences: {}
    users: null
kind: ConfigMap
metadata:
  creationTimestamp: "2026-01-24T09:56:15Z"
  name: cluster-info
  namespace: kube-public
  resourceVersion: "300"
  uid: 5c22db5f-a97c-4a31-a4c3-643b4fd74f74
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' | grep certificate-authority-data | cut -d ':' -f2 | tr -d ' ' | base64 -d | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2535030548539110019 (0x232e3cda4c6ff283)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jan 24 09:51:07 2026 GMT
            Not After : Jan 22 09:56:07 2036 GMT
        Subject: CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d8:64:b9:f1:a7:54:1a:a3:57:1f:f5:e0:3a:55:
                    d8:d5:a6:87:44:cf:3a:77:f7:cc:1d:d5:73:e8:c1:
                    47:01:99:bc:3a:e4:36:15:18:6f:6c:51:ea:95:fc:
                    de:b3:41:10:c5:23:66:cf:c8:44:83:e4:8d:ce:bd:
                    22:a8:4d:d5:79:fe:48:13:46:26:ab:2d:86:9a:ff:
                    c5:c2:68:ed:fd:08:d3:a3:89:fd:bb:64:04:57:92:
                    57:2c:f6:cd:9b:f3:73:52:46:ba:76:6f:a6:da:eb:
                    e1:a7:17:49:82:e5:0c:8e:94:c9:de:4b:34:be:ef:
                    c5:2f:fe:b8:af:3a:00:a4:77:0d:01:20:3f:e7:4e:
                    13:7b:68:6c:9a:e3:f0:93:22:25:41:d8:48:40:c9:
                    e3:9a:54:71:03:1b:12:b6:31:8e:71:57:f8:aa:57:
                    48:53:5f:05:a9:bb:ac:5b:81:4d:6f:41:e9:e2:c6:
                    f0:74:c9:2e:f6:f3:69:d0:8f:5d:ae:25:cb:b5:6d:
                    11:b4:63:e0:a5:4b:7a:19:d3:ed:17:0d:ff:6d:4a:
                    0e:4d:c4:4e:33:8a:1b:fa:d6:20:18:c3:6a:52:1b:
                    72:39:34:16:95:5a:80:a6:9c:15:31:92:70:40:9b:
                    86:5e:ae:8e:38:49:3b:ca:48:93:b5:c3:2a:c3:de:
                    bc:a7
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                1B:C4:2B:E8:D7:F6:3C:42:70:2C:B9:AE:28:67:35:0D:9F:41:68:CB
            X509v3 Subject Alternative Name:
                DNS:kubernetes
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        9b:74:c2:34:3d:aa:db:d2:6e:86:b7:86:78:44:c1:64:a8:6c:
        2f:38:43:2e:f5:77:93:78:c5:0a:8f:35:89:21:04:c4:d4:96:
        42:8f:51:5f:c8:69:a1:02:0b:0f:84:33:ea:3d:99:7a:af:66:
        a8:8a:4f:56:6a:79:af:be:57:77:b4:55:c8:8e:7b:4e:dc:5a:
        d5:90:d2:dd:0f:ee:b4:55:47:75:4a:7f:df:a0:93:33:55:b2:
        bb:f6:3e:62:4d:09:39:89:b4:7a:b6:48:b3:1d:58:0d:2a:82:
        f2:65:bf:7c:23:31:3c:18:3a:93:e9:bb:af:76:84:f7:95:1d:
        67:51:94:8c:4e:b1:24:e8:ab:a9:f3:4d:9c:e1:cc:2e:2c:63:
        82:f8:66:f9:60:7e:a8:b6:0c:32:c3:1d:5c:84:ba:65:c5:50:
        5d:24:f9:89:5a:fe:2a:ab:a9:bf:6a:f7:91:74:ae:a8:72:ea:
        f7:51:c9:21:6d:a6:84:a7:28:69:45:d0:b4:eb:76:2a:e9:be:
        c1:c6:85:98:fe:d7:50:0b:c2:f8:0f:92:54:90:89:73:32:80:
        70:ef:2c:db:91:4d:87:e0:3c:c0:57:7d:29:7a:32:51:b9:44:
        48:fa:a0:1a:a8:70:e8:8c:f5:73:b2:c1:e7:72:9a:f4:69:23:
        54:1d:77:20


(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info | jq
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "cluster-info",
    "namespace": "kube-public",
    "uid": "5c22db5f-a97c-4a31-a4c3-643b4fd74f74",
    "resourceVersion": "300",
    "creationTimestamp": "2026-01-24T09:56:15Z",
    "managedFields": [
      {
        "manager": "kubeadm",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2026-01-24T09:56:15Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:data": {
            ".": {},
            "f:kubeconfig": {}
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2026-01-24T09:56:21Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:data": {
            "f:jws-kubeconfig-123456": {}
          }
        }
      }
    ]
  },
  "data": {
    "jws-kubeconfig-123456": "eyJhbGciOiJIUzI1NiIsImtpZCI6IjEyMzQ1NiJ9..K4CA_0Z5R0GWkao4kj4kgPHTmKl1f-G5GQQUDTX2Lek",
    "kubeconfig": "apiVersion: v1\nclusters:\n- cluster:\n    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n    server: https://192.168.10.100:6443\n  name: \"\"\ncontexts: null\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers: null\n"
  }
}


# Objects created during kubeadm init
- Namespace: kube-public
- ConfigMap: cluster-info
- Role + RoleBinding
>> Subject: system:unauthenticated (unauthenticated users)
>> Permission: get on configmaps/cluster-info
👉 Needed so that a (worker) node that does not yet hold cluster certificates can reach the API server for the first time (before kubeadm join) and fetch the minimal information (endpoint + CA)
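The `jws-kubeconfig-<token-id>` entry seen earlier is how a joining node verifies that the cluster-info payload was not tampered with: it is a detached-payload JWS signed (HS256) with the bootstrap token secret. Its header portion is plain base64 and can be decoded offline (value copied from the ConfigMap output above):

```shell
# Decode the header of the detached-payload JWS stored in cluster-info.
# Format: <base64url(header)>..<signature>; the payload (the kubeconfig)
# travels separately in the same ConfigMap.
jws='eyJhbGciOiJIUzI1NiIsImtpZCI6IjEyMzQ1NiJ9..K4CA_0Z5R0GWkao4kj4kgPHTmKl1f-G5GQQUDTX2Lek'
header=$(printf '%s' "$jws" | cut -d. -f1 | base64 -d)
echo "$header"   # → {"alg":"HS256","kid":"123456"}
```

Note how `kid` is exactly the token ID (`123456`) from the InitConfiguration, which tells the joining node which bootstrap token secret to verify the signature with.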

 

 

4-2. [k8s-ctr] Convenience setup for k8s work: install shell completion, kubecolor, kubectx, kubens, kube-ps1, helm, k9s

# Convenience settings for k8s work
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# echo "sudo su -" >> /home/vagrant/.bashrc
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# source <(kubectl completion bash)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# source <(kubeadm completion bash)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# alias k=kubectl
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# complete -o default -F __start_kubectl k
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# echo 'alias k=kubectl' >> /etc/profile
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# echo 'complete -o default -F __start_kubectl k' >> /etc/profile

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# dnf install -y 'dnf-command(config-manager)'
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# dnf config-manager --add-repo https://kubecolor.github.io/packages/rpm/kubecolor.repo
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# dnf repolist
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# dnf install -y kubecolor

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# dnf install -y git
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# git clone https://github.com/ahmetb/kubectx /opt/kubectx
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ln -s /opt/kubectx/kubens /usr/local/bin/kubens
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat << "EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
function get_cluster_short() {
  echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
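`get_cluster_short` above only exists to keep long cluster names short in the prompt: it keeps the first dot-separated label. A quick standalone check (the example cluster name is made up):

```shell
# Same helper as in .bash_profile: keep the first dot-separated label.
get_cluster_short() {
  echo "$1" | cut -d . -f1
}

short=$(get_cluster_short "homelab.internal.example.com")
echo "$short"   # → homelab
```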

 

 

4-3. [k8s-ctr] Install Flannel CNI v0.27.3

# Check the cluster-wide pod CIDR of the current k8s cluster
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kc describe pod -n kube-system kube-controller-manager-k8s-ctr
Name:                 kube-controller-manager-k8s-ctr
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 k8s-ctr/192.168.10.100
Start Time:           Sat, 24 Jan 2026 18:56:16 +0900
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 7314ab3f0ec6401c196ca943fad44a05
                      kubernetes.io/config.mirror: 7314ab3f0ec6401c196ca943fad44a05
                      kubernetes.io/config.seen: 2026-01-24T18:56:16.097383940+09:00
                      kubernetes.io/config.source: file
Status:               Running
SeccompProfile:       RuntimeDefault
IP:                   192.168.10.100
IPs:
  IP:           192.168.10.100
Controlled By:  Node/k8s-ctr
Containers:
  kube-controller-manager:
    Container ID:  containerd://118aeaecb21fc60deeb9956f919b4b6c8be283fb50b61ad2f6b4402c11101960
    Image:         registry.k8s.io/kube-controller-manager:v1.32.11
    Image ID:      registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/16
      --use-service-account-credentials=true
     
     
# Check the per-node pod CIDR
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr	10.244.0.0/24
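`--allocate-node-cidrs=true` together with `--cluster-cidr=10.244.0.0/16` means kube-controller-manager carves a per-node subnet out of the cluster CIDR; with the default node mask size of /24 this yields 10.244.0.0/24, 10.244.1.0/24, … as nodes join. A sketch of that carving (the ordering is illustrative; the controller simply hands out the next free block):

```shell
# Sketch of the default node CIDR allocation: /24 blocks carved out of
# the 10.244.0.0/16 cluster CIDR, one per node (assignment order is
# illustrative, not guaranteed).
subnets=""
for i in 0 1 2; do
  subnets="${subnets}10.244.${i}.0/24 "
done
echo "$subnets"   # → 10.244.0.0/24 10.244.1.0/24 10.244.2.0/24
```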

# Deploying Flannel with Helm
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm repo add flannel https://flannel-io.github.io/flannel
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metrics-server" chart repository
...Successfully got an update from the "flannel" chart repository
Update Complete. ⎈Happy Helming!⎈
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl create namespace kube-flannel
namespace/kube-flannel created

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat << EOF > flannel.yaml
podCidr: "10.244.0.0/16"
flannel:
  cniBinDir: "/opt/cni/bin"
  cniConfDir: "/etc/cni/net.d"
  args:
  - "--ip-masq"
  - "--kube-subnet-mgr"
  - "--iface=enp0s9"
  backend: "vxlan"
EOF

# Install the chart with the values file (the install command was not captured above; it would be run as follows)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm install flannel flannel/flannel -n kube-flannel -f flannel.yaml

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kc describe ds -n kube-flannel
Name:           kube-flannel-ds
Selector:       app=flannel
Node-Selector:  <none>
Labels:         app=flannel
                app.kubernetes.io/managed-by=Helm
                tier=node
Annotations:    deprecated.daemonset.template.generation: 1
                meta.helm.sh/release-name: flannel
                meta.helm.sh/release-namespace: kube-flannel
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 1
Number of Nodes Misscheduled: 0
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=flannel
                    tier=node
  Service Account:  flannel
  Init Containers:
   install-cni-plugin:
    Image:      ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    Environment:  <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
   install-cni:
    Image:      ghcr.io/flannel-io/flannel:v0.27.3
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    Environment:  <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
  Containers:
   kube-flannel:
    Image:      ghcr.io/flannel-io/flannel:v0.27.3
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/bin/flanneld
      --ip-masq
      --kube-subnet-mgr
      --iface=enp0s9
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:                    (v1:metadata.name)
      POD_NAMESPACE:               (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:          5000
      CONT_WHEN_CACHE_NOT_READY:  false
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
  Volumes:
   run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
   cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
   cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
   flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
   xtables-lock:
    Type:               HostPath (bare host directory volume)
    Path:               /run/xtables.lock
    HostPathType:       FileOrCreate
  Priority Class Name:  system-node-critical
  Node-Selectors:       <none>
  Tolerations:          :NoExecute op=Exists
                        :NoSchedule op=Exists
Events:
  Type    Reason            Age    From                  Message
  ----    ------            ----   ----                  -------
  Normal  SuccessfulCreate  3m57s  daemonset-controller  Created pod: kube-flannel-ds-mqn8t

# Check that the flannel CNI binary is installed
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -l /opt/cni/bin/
total 66032
-rwxr-xr-x. 1 root root 3239200 Dec 12  2024 bandwidth
-rwxr-xr-x. 1 root root 3731632 Dec 12  2024 bridge
-rwxr-xr-x. 1 root root 9123544 Dec 12  2024 dhcp
-rwxr-xr-x. 1 root root 3379872 Dec 12  2024 dummy
-rwxr-xr-x. 1 root root 3742888 Dec 12  2024 firewall
-rwxr-xr-x. 1 root root 2903098 Jan 24 19:13 flannel
-rwxr-xr-x. 1 root root 3383408 Dec 12  2024 host-device
-rwxr-xr-x. 1 root root 2812400 Dec 12  2024 host-local
-rwxr-xr-x. 1 root root 3380928 Dec 12  2024 ipvlan
-rw-r--r--. 1 root root   11357 Dec 12  2024 LICENSE
-rwxr-xr-x. 1 root root 2953200 Dec 12  2024 loopback
-rwxr-xr-x. 1 root root 3448024 Dec 12  2024 macvlan
-rwxr-xr-x. 1 root root 3312488 Dec 12  2024 portmap
-rwxr-xr-x. 1 root root 3524072 Dec 12  2024 ptp
-rw-r--r--. 1 root root    2343 Dec 12  2024 README.md
-rwxr-xr-x. 1 root root 3091976 Dec 12  2024 sbr
-rwxr-xr-x. 1 root root 2526944 Dec 12  2024 static
-rwxr-xr-x. 1 root root 3516272 Dec 12  2024 tap
-rwxr-xr-x. 1 root root 2956032 Dec 12  2024 tuning
-rwxr-xr-x. 1 root root 3380544 Dec 12  2024 vlan
-rwxr-xr-x. 1 root root 3160560 Dec 12  2024 vrf

# Check the CNI configuration
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/
└── 10-flannel.conflist

1 directory, 1 file
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/cni/net.d/10-flannel.conflist | jq
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
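The conflist above defines a CNI plugin chain: the container runtime invokes `flannel` first to wire up the pod interface, then `portmap` to implement `hostPort` mappings. As a minimal sketch, the chain order can be read with `jq` — here against an inline, abbreviated copy of the config so it runs without a cluster:

```shell
# Inline, abbreviated copy of the plugin chain from 10-flannel.conflist
conflist='{"name":"cbr0","cniVersion":"0.3.1","plugins":[{"type":"flannel"},{"type":"portmap"}]}'
# Plugins are executed in array order by the container runtime
echo "$conflist" | jq -r '.plugins[].type'
```

On the node itself, the same query against the real file would be `jq -r '.plugins[].type' /etc/cni/net.d/10-flannel.conflist`.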

# Confirm the CoreDNS pods are running
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -owide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
coredns-668d6bf9bc-c2g8k          1/1     Running   0          21m   10.244.0.4       k8s-ctr   <none>           <none>
coredns-668d6bf9bc-qdwrj          1/1     Running   0          21m   10.244.0.5       k8s-ctr   <none>           <none>
etcd-k8s-ctr                      1/1     Running   0          21m   192.168.10.100   k8s-ctr   <none>           <none>
kube-apiserver-k8s-ctr            1/1     Running   1          21m   192.168.10.100   k8s-ctr   <none>           <none>
kube-controller-manager-k8s-ctr   1/1     Running   0          21m   192.168.10.100   k8s-ctr   <none>           <none>
kube-proxy-6gfjf                  1/1     Running   0          21m   192.168.10.100   k8s-ctr   <none>           <none>
kube-scheduler-k8s-ctr            1/1     Running   0          21m   192.168.10.100   k8s-ctr   <none>           <none>

# Check network information
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ip -c route | grep 10.244.
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:90:ea:eb brd ff:ff:ff:ff:ff:ff
    altname enx08002790eaeb
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s8
       valid_lft 70715sec preferred_lft 70715sec
    inet6 fd17:625c:f037:2:a00:27ff:fe90:eaeb/64 scope global dynamic mngtmpaddr proto kernel_ra
       valid_lft 86384sec preferred_lft 14384sec
    inet6 fe80::a00:27ff:fe90:eaeb/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
3: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:e6:47:2d brd ff:ff:ff:ff:ff:ff
    altname enx080027e6472d
    inet 192.168.10.100/24 brd 192.168.10.255 scope global noprefixroute enp0s9
       valid_lft forever preferred_lft forever
    inet6 fe80::6edd:7039:2b9b:4df9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 46:07:ca:bc:29:2a brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::4407:caff:febc:292a/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:80:fe:79:51:dd brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::3880:feff:fe79:51dd/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
8: veth9abddb6f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default qlen 1000
    link/ether 56:36:ed:f4:e5:83 brd ff:ff:ff:ff:ff:ff link-netns cni-cec5c89a-bc9d-932b-3b31-7c5e46a17a18
    inet6 fe80::5436:edff:fef4:e583/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
9: veth9f16af76@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default qlen 1000
    link/ether 2a:42:20:93:b8:c4 brd ff:ff:ff:ff:ff:ff link-netns cni-87d1e3d6-f0b5-0557-6b53-1b4e7f00588e
    inet6 fe80::2842:20ff:fe93:b8c4/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# bridge link
8: veth9abddb6f@enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 2
9: veth9f16af76@enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 2
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# lsns -t net
        NS TYPE NPROCS   PID USER       NETNSID NSFS                                                COMMAND
4026531840 net     152     1 root    unassigned                                                     /usr/lib/systemd/systemd --switched-root --system --deserialize=46 no_timer_check
4026532129 net       1   645 root    unassigned                                                     ├─/usr/sbin/irqbalance
4026532202 net       1   803 polkitd unassigned                                                     └─/usr/lib/polkit-1/polkitd --no-debug --log-level=err
4026532293 net       2 72912 65535            0 /run/netns/cni-cec5c89a-bc9d-932b-3b31-7c5e46a17a18 /pause
4026532371 net       2 72914 65535            1 /run/netns/cni-87d1e3d6-f0b5-0557-6b53-1b4e7f00588e /pause
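The `lsns` output shows that each pod's network namespace is held open by its `/pause` process. A namespace is just an inode exposed as a symlink under `/proc/<pid>/ns/`; processes sharing a namespace resolve to the same `net:[inode]`. A quick local sketch of the mechanism (no cluster needed — on the node you would use the pause PIDs, e.g. 72912 above):

```shell
# Every process exposes its namespaces as symlinks under /proc/<pid>/ns/
readlink /proc/$$/ns/net
# A subshell inherits the parent's network namespace, so the inode is identical
( readlink /proc/$$/ns/net )
```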

 

4-4. [k8s-ctr] Check node info, compare baseline environment snapshots, verify sysctl changes

# Verify kubelet is active: it only starts running properly after kubeadm init
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# systemctl is-active kubelet
active

# Check node info: regular workloads are not scheduled onto the Control Plane
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kc describe node
Name:               k8s-ctr
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=k8s-ctr
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"46:07:ca:bc:29:2a"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.10.100
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 24 Jan 2026 18:56:13 +0900
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
...


# Save a second snapshot of the baseline environment
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/sysconfig/kubelet
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /etc/kubernetes  | tee -a etc_kubernetes-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /var/lib/kubelet | tee -a var_lib_kubelet-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /run/containerd/ -L 3 | tee -a run_containerd-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# pstree -alnp | tee -a pstree-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# systemd-cgls --no-pager | tee -a systemd-cgls-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# lsns | tee -a lsns-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ip addr | tee -a ip_addr-2.txt 
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ss -tnlp | tee -a ss-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# df -hT | tee -a df-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# findmnt | tee -a findmnt-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# sysctl -a | tee -a sysctl-2.txt

# Compare the snapshot files side by side (exit each diff with ':q' -> ':q') => investigate what each changed entry does!
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d etc_kubernetes-1.txt etc_kubernetes-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d var_lib_kubelet-1.txt var_lib_kubelet-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d run_containerd-1.txt run_containerd-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d pstree-1.txt pstree-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d systemd-cgls-1.txt systemd-cgls-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d lsns-1.txt lsns-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d ip_addr-1.txt ip_addr-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d ss-1.txt ss-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d df-1.txt df-2.txt
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d findmnt-1.txt findmnt-2.txt

# kubelet runs with --protect-kernel-defaults=false, so it applies these sysctl kernel parameters itself: see the link below
## If this were set to true instead, kubelet would fail whenever any of these tunables differs from its expected default
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# vi -d sysctl-1.txt sysctl-2.txt
## kernel.panic = 0 -> 10           changed
## kernel.panic_on_oops = 1         unchanged
## vm.overcommit_memory = 0 -> 1    changed
## vm.panic_on_oom = 0              unchanged
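The same four tunables can be read back directly from `/proc/sys` to confirm the post-init values without diffing full `sysctl -a` dumps; a minimal sketch (values will reflect whatever host it runs on):

```shell
# Read the kernel tunables kubelet adjusts (or, with
# --protect-kernel-defaults=true, merely enforces) straight from /proc/sys
for f in kernel/panic kernel/panic_on_oops vm/overcommit_memory vm/panic_on_oom; do
  printf '%s = %s\n' "$(echo "$f" | tr / .)" "$(cat /proc/sys/$f)"
done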

 

4-5. [k8s-ctr] Inspect certificates

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kc describe cm -n kube-system kubeadm-config
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.32.11
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
proxy: {}
scheduler: {}



BinaryData
====

Events:  <none>

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver                  Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver-etcd-client      Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 24, 2027 09:56 UTC   364d            ca                      no
controller-manager.conf    Jan 24, 2027 09:56 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-server                Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 24, 2027 09:56 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 24, 2027 09:56 UTC   364d            ca                      no
super-admin.conf           Jan 24, 2027 09:56 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 22, 2036 09:56 UTC   9y              no
etcd-ca                 Jan 22, 2036 09:56 UTC   9y              no
front-proxy-ca          Jan 22, 2036 09:56 UTC   9y              no
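Under the hood, `kubeadm certs check-expiration` is essentially reading `notAfter` from each PEM file, which can be reproduced with `openssl` alone. A sketch using a throwaway self-signed cert (the hypothetical `demo-ca` CN and `-days 3650` mirror kubeadm's 10-year CA validity):

```shell
tmp=$(mktemp -d)
# Throwaway self-signed CA; on the node you would point at /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=demo-ca" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# Expiry date, as check-expiration reports it
openssl x509 -in "$tmp/ca.crt" -noout -enddate
# Exit 0 if still valid 24h from now (handy for cron-based expiry alerts)
openssl x509 -in "$tmp/ca.crt" -noout -checkend 86400 && echo "valid for >= 1 day"
rm -rf "$tmp"
```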

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/pki/ca.crt | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2535030548539110019 (0x232e3cda4c6ff283)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jan 24 09:51:07 2026 GMT
            Not After : Jan 22 09:56:07 2036 GMT
        Subject: CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d8:64:b9:f1:a7:54:1a:a3:57:1f:f5:e0:3a:55:
                    d8:d5:a6:87:44:cf:3a:77:f7:cc:1d:d5:73:e8:c1:
                    47:01:99:bc:3a:e4:36:15:18:6f:6c:51:ea:95:fc:
                    de:b3:41:10:c5:23:66:cf:c8:44:83:e4:8d:ce:bd:
                    22:a8:4d:d5:79:fe:48:13:46:26:ab:2d:86:9a:ff:
                    c5:c2:68:ed:fd:08:d3:a3:89:fd:bb:64:04:57:92:
                    57:2c:f6:cd:9b:f3:73:52:46:ba:76:6f:a6:da:eb:
                    e1:a7:17:49:82:e5:0c:8e:94:c9:de:4b:34:be:ef:
                    c5:2f:fe:b8:af:3a:00:a4:77:0d:01:20:3f:e7:4e:
                    13:7b:68:6c:9a:e3:f0:93:22:25:41:d8:48:40:c9:
                    e3:9a:54:71:03:1b:12:b6:31:8e:71:57:f8:aa:57:
                    48:53:5f:05:a9:bb:ac:5b:81:4d:6f:41:e9:e2:c6:
                    f0:74:c9:2e:f6:f3:69:d0:8f:5d:ae:25:cb:b5:6d:
                    11:b4:63:e0:a5:4b:7a:19:d3:ed:17:0d:ff:6d:4a:
                    0e:4d:c4:4e:33:8a:1b:fa:d6:20:18:c3:6a:52:1b:
                    72:39:34:16:95:5a:80:a6:9c:15:31:92:70:40:9b:
                    86:5e:ae:8e:38:49:3b:ca:48:93:b5:c3:2a:c3:de:
                    bc:a7
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                1B:C4:2B:E8:D7:F6:3C:42:70:2C:B9:AE:28:67:35:0D:9F:41:68:CB
            X509v3 Subject Alternative Name:
                DNS:kubernetes
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        9b:74:c2:34:3d:aa:db:d2:6e:86:b7:86:78:44:c1:64:a8:6c:
        2f:38:43:2e:f5:77:93:78:c5:0a:8f:35:89:21:04:c4:d4:96:
        42:8f:51:5f:c8:69:a1:02:0b:0f:84:33:ea:3d:99:7a:af:66:
        a8:8a:4f:56:6a:79:af:be:57:77:b4:55:c8:8e:7b:4e:dc:5a:
        d5:90:d2:dd:0f:ee:b4:55:47:75:4a:7f:df:a0:93:33:55:b2:
        bb:f6:3e:62:4d:09:39:89:b4:7a:b6:48:b3:1d:58:0d:2a:82:
        f2:65:bf:7c:23:31:3c:18:3a:93:e9:bb:af:76:84:f7:95:1d:
        67:51:94:8c:4e:b1:24:e8:ab:a9:f3:4d:9c:e1:cc:2e:2c:63:
        82:f8:66:f9:60:7e:a8:b6:0c:32:c3:1d:5c:84:ba:65:c5:50:
        5d:24:f9:89:5a:fe:2a:ab:a9:bf:6a:f7:91:74:ae:a8:72:ea:
        f7:51:c9:21:6d:a6:84:a7:28:69:45:d0:b4:eb:76:2a:e9:be:
        c1:c6:85:98:fe:d7:50:0b:c2:f8:0f:92:54:90:89:73:32:80:
        70:ef:2c:db:91:4d:87:e0:3c:c0:57:7d:29:7a:32:51:b9:44:
        48:fa:a0:1a:a8:70:e8:8c:f5:73:b2:c1:e7:72:9a:f4:69:23:
        54:1d:77:20
        
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 6316045001739227051 (0x57a719771a0c4fab)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jan 24 09:51:07 2026 GMT
            Not After : Jan 24 09:56:07 2027 GMT
        Subject: CN=kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:e6:16:f7:26:f2:dd:2c:58:be:92:31:a9:dc:31:
                    c2:47:db:b0:3d:b0:07:d5:7b:12:71:b2:6d:bb:e6:
                    f5:89:a9:92:7e:e4:e2:67:70:f6:11:33:1a:e2:10:
                    42:39:c9:a9:84:a7:a2:68:3e:dd:01:b9:6b:05:2a:
                    c3:e9:23:c0:72:49:73:5b:51:8f:51:5e:fd:9c:80:
                    6a:e1:e5:4d:ee:c7:e5:7b:b6:f6:e7:84:30:27:17:
                    7f:e9:87:a9:a2:2b:7d:d0:78:7d:da:7f:bb:6e:59:
                    ed:19:3b:8f:08:a9:bd:c1:aa:4b:05:e4:5b:7f:41:
                    59:c2:b9:74:d6:33:38:8b:27:bb:d6:bb:b9:90:9d:
                    bb:36:13:3e:ce:af:32:33:6a:fb:d4:53:de:46:19:
                    c4:27:99:ce:0f:9d:fa:38:b6:9e:17:65:1f:fb:b2:
                    4c:17:c8:14:39:e8:3f:8e:5e:2f:a2:9d:a0:b2:f7:
                    df:32:18:8a:28:81:89:4b:a4:9e:6f:8b:98:45:75:
                    b6:be:d2:86:19:8f:75:86:e3:7c:28:08:13:2d:8d:
                    5f:1d:df:aa:32:5e:56:a1:e2:16:17:ed:f2:bd:82:
                    f7:b7:4e:2a:2f:e0:04:1c:9b:47:84:ba:ab:05:f2:
                    a1:aa:c2:5c:81:51:55:90:55:63:44:97:42:03:01:
                    2a:1d
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                1B:C4:2B:E8:D7:F6:3C:42:70:2C:B9:AE:28:67:35:0D:9F:41:68:CB
            X509v3 Subject Alternative Name:
                DNS:k8s-ctr, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.10.100
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        b1:41:49:5a:07:7e:59:23:3d:b0:fa:1e:17:15:3c:a7:ba:a6:
        ee:bd:93:c5:20:a6:56:83:a5:79:fa:25:91:dc:68:6f:c5:fe:
        96:9b:46:dd:10:65:5e:a9:c6:9f:2b:7b:b2:1d:15:e5:52:09:
        45:5e:13:3c:41:87:fd:d7:33:29:c7:72:2c:5a:6a:7b:4d:0a:
        f0:ca:d6:2f:ba:c1:29:0c:a6:40:00:fd:e0:a1:aa:a8:91:cc:
        29:df:93:9b:12:04:bb:7a:ff:45:5b:d4:cd:f5:11:61:f6:34:
        9c:2c:af:39:10:2c:54:2c:41:f2:d5:ab:44:10:0a:ec:ea:04:
        33:ea:d0:ab:8b:6d:e5:17:e7:7c:7a:4f:72:3f:fb:20:e9:aa:
        e1:70:09:08:70:d2:67:6f:04:56:1e:13:80:ff:fe:3d:75:8e:
        10:ad:ea:9b:25:50:17:d0:aa:f2:01:2c:44:0c:41:19:a8:3e:
        7b:da:56:1c:70:e4:38:2e:e2:88:f3:31:18:d3:ac:b9:c4:ac:
        2d:cb:a0:ba:45:cb:06:e4:00:c1:3a:a0:47:79:e8:c9:1b:b5:
        af:0b:82:bf:0b:e1:68:eb:03:6d:14:ca:33:63:64:42:ce:58:
        ab:71:1a:77:a8:28:44:84:69:e7:9a:e2:e5:52:6d:5e:76:34:
        00:80:ba:70
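The `subjectAltName` list above is what makes the apiserver cert valid for names like `kubernetes.default.svc` and for the service/advertise IPs; kubeadm assembles it from the node name, service CIDR, and advertise address. A local sketch of producing and inspecting such an extension (assumes OpenSSL 1.1.1+ for `-addext`; the names are stand-ins):

```shell
tmp=$(mktemp -d)
# Self-signed stand-in for the apiserver cert with a kubeadm-style SAN list
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default.svc,IP:10.96.0.1" \
  -keyout "$tmp/apiserver.key" -out "$tmp/apiserver.crt" 2>/dev/null
# Print just the SAN extension instead of the full -text dump
openssl x509 -in "$tmp/apiserver.crt" -noout -ext subjectAltName
rm -rf "$tmp"
```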
        
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/pki/apiserver-kubelet-client.crt | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8855908732919257496 (0x7ae682126cee9998)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jan 24 09:51:07 2026 GMT
            Not After : Jan 24 09:56:07 2027 GMT
        Subject: O=kubeadm:cluster-admins, CN=kube-apiserver-kubelet-client
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:e2:ee:cf:50:27:10:3d:20:49:98:76:26:1a:36:
                    b6:06:b6:47:5e:bd:cc:85:7d:96:98:f5:38:42:c9:
                    ea:58:8d:9c:00:1f:e7:84:07:d8:f8:b5:75:22:77:
                    40:c4:bc:7e:d3:89:6d:f2:a9:b3:88:64:f8:67:5f:
                    34:80:5e:4e:31:8b:6b:22:fb:0f:77:0f:08:d4:4f:
                    b0:59:1d:bb:38:80:24:e6:a2:40:22:a0:95:db:21:
                    84:88:f8:d9:be:12:5d:f9:97:f3:78:b2:1f:8d:ef:
                    3d:13:8d:01:37:32:24:84:43:46:4b:76:98:98:e8:
                    9e:cc:e2:77:33:34:0c:59:8f:ea:0b:b3:9a:58:66:
                    03:37:77:f8:66:56:33:12:aa:68:b3:92:c7:a6:d0:
                    02:40:13:ca:3a:e6:37:34:ce:89:97:27:a7:d2:56:
                    d7:c4:9e:ee:ff:ab:29:1b:82:b4:39:c7:3e:aa:47:
                    4c:34:01:71:e7:a5:bb:a7:4b:cb:58:33:79:8f:91:
                    17:38:fd:d8:07:83:ac:24:90:06:fc:a5:b4:3a:7d:
                    43:27:e7:d6:4b:b2:87:0e:0f:85:f1:85:8c:ee:47:
                    a0:5f:d6:07:17:6d:37:52:27:23:97:9d:7a:03:00:
                    39:65:bf:54:0f:0f:9e:a7:02:98:38:1a:96:d1:68:
                    02:7b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                1B:C4:2B:E8:D7:F6:3C:42:70:2C:B9:AE:28:67:35:0D:9F:41:68:CB
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        53:4b:76:fe:25:29:83:35:c7:31:23:e6:8f:16:23:af:65:bf:
        90:56:ba:bb:f3:31:62:cd:34:e1:82:41:5e:bc:e3:46:1d:e2:
        ca:1e:f2:25:34:d3:10:9d:50:9b:11:6b:31:7d:4f:34:8b:b3:
        a3:10:69:d7:a8:e1:63:4f:ba:a3:22:e1:86:54:58:0b:96:9f:
        e5:a9:2c:93:11:ac:20:2f:7b:6d:e6:e8:ea:ba:56:84:67:a2:
        a5:ae:c9:27:47:4a:72:3d:4d:50:35:0a:67:51:e5:98:0c:48:
        06:23:62:03:61:2c:8a:6d:cb:cd:09:1e:29:42:e5:67:97:e3:
        79:e0:0e:a6:ee:33:cd:3f:cb:88:9e:ed:fb:56:e9:f8:54:cd:
        c8:09:c2:17:c9:78:9f:0c:96:90:1e:dd:b3:6f:ad:75:b1:8b:
        a5:e3:f9:ae:95:7b:b4:4e:44:dc:51:fe:6e:39:fd:24:4f:64:
        9d:6c:d9:6c:6d:6e:b1:98:6d:12:16:36:b9:a0:94:6b:66:9c:
        ed:e7:a5:56:b2:90:b2:38:7f:15:84:c2:0e:34:ca:f2:a3:8e:
        d2:95:b2:76:b7:07:d6:2c:3d:85:1b:19:9d:11:48:20:9b:84:
        69:5a:81:3b:35:6b:fc:2f:2e:f3:94:c3:50:dc:88:69:73:7e:
        3c:ee:6a:19

 

4-6. [k8s-ctr] Inspect kubeconfig files

# Admin-use kubeconfig
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJQjk4bTBrUk43ZEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHlOekF4TWpRd09UVTJNRGRhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDb1YzdmkKREdZRlgvdnE1eElwVjM4bGw2MTJ4TzVaT3dwOTBsUjRIbk5BK3AyTmwreHlZSG9kTjAxTlUyVk5TMUxmR2dsRwo3ZU12VVpTbjdQdUhFMXVtNXN0a25BY1R0bWxGaUhNWGdGK2phMVM2MkNDTGFBYnV3VWh5WG95eXhaYWtnN1NUCkJuVjdBaWNZSDc2Rk95RS96RFhncEErWWx2QSsrdTJoOWZUZVpBbUcyTVhXUHM2T3dLNmtvdkd4STQ3V3lPSDUKc1pIVlg3OXA1MUN4QWlET1NyN0FMaGl0WFVKOGFDZVhqTGVvNTc5VUpjZENmS3J0d04ybjN2Vm9DeXdFTHNLSgpMQURoRjY0Nk5WTXFuMEYwNXQ4eWtLQjY0Z0h2T2xac01Qa2JXNm5mWS9hZng1TXo1RjVKR3RVYmtiVjR1SGhPCjAwZitycEkySEYxQ3RiOTFBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkJ2RUsralg5anhDY0N5NQpyaWhuTlEyZlFXakxNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUURRWExaTGY0N0JjM3NyN2xPclIvQUFGcUtWCnZZZGVWa1hHa21SajhPYlV4SjY3U1JHZElkYkFKblQ1WnhYaVhyUkdCZnFDNkVGQjZFVjUxZlNYZVMzcDgyYlkKblRxNmE2eXE5WVY5SU5SalVvQ0RJUEY2WVFYcmNDelJuQWVkaEdLZGtWV2lHbytNWDdEdU5nZW9BUnlBc3hoRwoxMjZVN0ZZRWZLZVhBSVpZMmljSnNUcS94TFd2S0g3VVE1U21JVzlhWHZ3czBXL1A4c25rNm93ODhyMlNzcHZ5CjlOZkpuUEd4K3duUVlrTlM1RkxtV3lncVc1dEZuWC9Wd0psV3N3RFV6QXI0anFQejhMRDNxWUxJSzY2RUhxaEUKaW1EVWFLeG14ZVJSTjB3dDJENitWUnhLMm8yOXJZemRnUHlXYy9PVkpxNWNBVkhpalZ3eURSNGhsN1ltCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcUZkNzRneG1CVi83NnVjU0tWZC9KWmV0ZHNUdVdUc0tmZEpVZUI1elFQcWRqWmZzCmNtQjZIVGROVFZObFRVdFMzeG9KUnUzakwxR1VwK3o3aHhOYnB1YkxaSndIRTdacFJZaHpGNEJmbzJ0VXV0Z2cKaTJnRzdzRkljbDZNc3NXV3BJTzBrd1oxZXdJbkdCKytoVHNoUDh3MTRLUVBtSmJ3UHZydG9mWDAzbVFKaHRqRgoxajdPanNDdXBLTHhzU09PMXNqaCtiR1IxVisvYWVkUXNRSWd6a3Erd0M0WXJWMUNmR2dubDR5M3FPZS9WQ1hIClFueXE3Y0RkcDk3MWFBc3NCQzdDaVN3QTRSZXVPalZUS3A5QmRPYmZNcENnZXVJQjd6cFdiREQ1RzF1cDMyUDIKbjhlVE0rUmVTUnJWRzVHMWVMaDRUdE5IL3E2U05oeGRRclcvZFFJREFRQUJBb0lCQUNobFdlSjhKQzB0QThneQpLOThFMG91RVVzbFo1M0k5SXo3Zkxvcm1qN1Nyand3dnhUc0xJTEtMRno4emdHOGtZSjRONHVVRTU4dnVrVFFjCnY1MEJ6YkFHMlE3ckRCMjBXNTJtYVN2ZUQ5VW94OXZRU2pyNXV4UW5DSW45VzFqNDVqWFRMdzFLOHYwU0hxeUEKelppdUFFUU5ibTVhSUMzM0ptNk9pMkNlbzJTTURDVUlsZnlFK1hFU2dIL2RPR0ZNU09PeU9nZkZ5eTM5SjA1UgpvSmdsK1RIdVZrNVJWSWgwNlRoZm1ESlczMjdCS2FwT3NLYWg5NFpRRjVQWFVwYzNuY0xLeGVvTEVYNGNpdXFtCk10QmtuSEdTMVFINzYvN09meE1BS08yckpLMUFWR0gyRUo5ZE1PT2NWNCswOUk0dThCRFAweUFNZU1HYlRmTnYKcnBsNENSRUNnWUVBeFdOb3IxOUxWU2M3Z2x5V0RNbEJSZ0NWV3pFd3VyQXZMTmRZTm5PZHhZdkNkSGlKa0xDbwpFL2tteW5uYUdTdEIvNkJ3Vi9MQndUQ1lheHBaZ05XMU1ReTJaZ3lkcUE2SGh5TVNzMkRwQkt3cW1lR2dUeUY0CmVCdjUyNlZwRng2NVhpK20vcmIvUUx3cUhjZlJ5dDc3aStTZEhLV3BWNndabVhwV3kyN3ozOWtDZ1lFQTJsUVUKNTU5TUlGSWp2eGJPR01qSUdROXpOQTZmN3NLckZnY3JXK0NjN1F5czNSRUc1UjVXNjJpN29PcUFZVVk4Rm9YRwpHcnVGMStiSnFUVkpZYTRZVmI1dU81dUNhaGMvcGV0V005SW5RcEZxM0tSUEdTY3JaLzJyVUtCeHY3YjJnay9ECkxWd3dzOHh4cUFKdnpXa2o2TSs2VHVCVVNqR05acnJaa1Ivdjl2MENnWUVBclovdW9teFJTRnJWSnFzb05hRUYKc0h5czQrVVY5dkVvM2VtaUoydDFlU0doYjIvam1ZazZueThHcHcyZUFZdWlaeWVLQ21KM2VlYXorMm5YRnROawpxUHVFcWFrcE9IMW5TMEJYbjc5NzJHZFVwYnpvbFJKYzlGR3ZhenhKZjFQQVBBL3dkWmNrV1ozcDhmNGxGSzBsCldQMUVFY0hLZmxyY3ZicjJBOFhaOEtrQ2dZQStzazZlaFR4VE84TlFLTGhlbmFuNHFGc280OXBCc2wxM0llL3QKbm43eUErWFFSZ2Q0M0ZHUm9LM2c4L2FSK0oxZ3ltR3RZNVIzLzZxQmtPL1Z3U3p6MG8vTlJrY1pPRHZxNWI0SAplNTRTbTdmWVRNYjZMaWxrMzQvR3c0eG14Wi9jcEJNa2Y0anMyUlQ2YmxpMDREQ1R0ck9GMngzWmdJbGVxdUczCnJ6ZzE4UUtCZ1FDZC91U1Y0bjJ5N24
vVlV0RzdpTnd0YWp1MjExZnNpUitDMk94Z2wrWm5DelRockJwMnc0elUKTXFCZzJXczFsTXp5eHdNWmlreW5xekdxWnJ2UVlISnpWUU83bEs3MnhSVXBjTmxHaHliU3NkL05UYWhqakRKSAp3Zk4wUUtQYVkvRS84MWJ1QUhSUmhwTUhUWExyWElVQS9HeWxGQzdPWnNZVzBhdlM4U3lFV1E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
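The `certificate-authority-data` and `client-certificate-data` fields are just base64-encoded PEM; decoding them recovers the same certificates stored under `/etc/kubernetes/pki/`. A self-contained sketch of the round trip with a throwaway cert (on the node you would pipe the field out of `admin.conf` instead):

```shell
tmp=$(mktemp -d)
# Stand-in for the CA cert; on the node this is /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=kubernetes" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# Encode the way kubeconfig stores it...
ca_data=$(base64 -w0 < "$tmp/ca.crt")
# ...then decode and inspect; against the real file this would be:
#   grep certificate-authority-data /etc/kubernetes/admin.conf | awk '{print $2}' \
#     | base64 -d | openssl x509 -noout -subject -enddate
echo "$ca_data" | base64 -d | openssl x509 -noout -subject
rm -rf "$tmp"
```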
    
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/super-admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-super-admin
  name: kubernetes-super-admin@kubernetes
current-context: kubernetes-super-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-super-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKekNDQWcrZ0F3SUJBZ0lJRXdtSVo3Q09ZUnd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHlOekF4TWpRd09UVTJNRGRhTURveApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1SOHdIUVlEVlFRREV4WnJkV0psY201bGRHVnpMWE4xCmNHVnlMV0ZrYldsdU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBd2gvdFNJZUUKOU9LTWlQbXU4Vk96UVYzckpTamt4TStDTFNObTdyWUNRYXpqaU1aRm1tUVFnMWRoYk52Ri84M2hpTHkraWkvcAo2cVovaklhNUtwSGZqRjVrN2Z5b3Z3c3JlVjFaL0R0MzBROVZIVWQ2TmV0VlhjcDIzalppeDF2UXI2d0FhNE9SClhBMDV0L1FaRGFhMGVFQXFDSSt5akE2cTM4LzFXVlJnY2lmUzdrRWJiWW4yNE1xN05iRW9IckY3Ym9rTTlIMGYKYVAvdThaU2JoeVJsRjk0UkJwQTJiVENkWUJXd2IzYy92SjJ3aFpSSzRYYW9Xcm01ZXhEZHNFOENUcVIzZlNoeQpySzBhWWN5cDRXSjJuVGdEalNTYXllSHo2YjIzNDJ3NEJrM3BmbndmRXY3bGpqcU9rdTRsQXVtM2JUc2h2STc2ClBnZGtGaTMyd1M0YVpRSURBUUFCbzFZd1ZEQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0V3WURWUjBsQkF3d0NnWUkKS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWZCZ05WSFNNRUdEQVdnQlFieEN2bzEvWThRbkFzdWE0bwpaelVObjBGb3l6QU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUF1Sm5TdHlhTVh4OFhWMDVIVlIxcjlUS29SSDZrClVVZG01NjRVdVN2N21QZlh2WWk3am1Va3pRaU13azZrWXZBWlFXZjFxVmhjcUt2djNtcXVGMGZNLzhHQTNtTEEKeDFhU2JtaUJ2eEZpaE11ZURjVmFObHN2VUNEbkRrQnVKRHFZSFE3dzJDWVhVTG4vZ0hRYndLUXFhbU5GSEdNaApCL3JJMVl3VFZKckJaMk9HbmZYVmNIYlpoL3NMczN6MDExUFh0N2hrMjg3OTZOdUwxL3dMWk1uK3ZNWURTQnBXCnNHeGdVQ3FWYkFobS9IUmZpZWJtYVRXYTJOZUxOaHJvWkpWWmcvYjM4blV4b2xXSTkreGNuRERHemY0YUtsQnIKRmJTdHN2VituQnNYRHRNbjFuc1N4SlpiRk9ueXhXK0ZRZzNVZTdza0RxbHI1dXZrZS9MQVlaM1lxUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBd2gvdFNJZUU5T0tNaVBtdThWT3pRVjNySlNqa3hNK0NMU05tN3JZQ1FhemppTVpGCm1tUVFnMWRoYk52Ri84M2hpTHkraWkvcDZxWi9qSWE1S3BIZmpGNWs3ZnlvdndzcmVWMVovRHQzMFE5VkhVZDYKTmV0VlhjcDIzalppeDF2UXI2d0FhNE9SWEEwNXQvUVpEYWEwZUVBcUNJK3lqQTZxMzgvMVdWUmdjaWZTN2tFYgpiWW4yNE1xN05iRW9IckY3Ym9rTTlIMGZhUC91OFpTYmh5UmxGOTRSQnBBMmJUQ2RZQld3YjNjL3ZKMndoWlJLCjRYYW9Xcm01ZXhEZHNFOENUcVIzZlNoeXJLMGFZY3lwNFdKMm5UZ0RqU1NheWVIejZiMjM0Mnc0QmszcGZud2YKRXY3bGpqcU9rdTRsQXVtM2JUc2h2STc2UGdka0ZpMzJ3UzRhWlFJREFRQUJBb0lCQUVockFhbURpTm1VSkVvNwpId1MvTFVtTzRGaHgrM25pVVpWR05qR0tLNmhWZDJLQVdObmlSM2kvNGNQcTd0L2hiYWdGaFcxbXQzUkduYUdPClpzaFhOOWFWSmtEVDl4Mmg3SnR2ZEZEUTNIOWNvV3QxVFVXTkg4RUg5VFVyZzhrTVd2c1dCdWdVNG1hOU5sR0cKR2N5S2FwdkxrQUsybkt4OEVrbkJPaTJUZVJGTVhEWUVzalRoa1JIdS9qWHZkRDREYmlkZVJOeGJXOEVaWDFOWQphOHJQQzB6cU52dGQ4YXI5UlBCQmt3WURsMkV2VkJEYUZlUWVFb0xNRWg0TFRaQ3Q5bEcrZ0U3UlNOb3pwNDRuCkRRb3NLOVcxbnNsaGFUK1Rjd3BJUGhtc28ybVpYcmQxVHJKbEw4dkFGSk5OZEFUbjJYWmZFdlM2WTVLWFFUM1AKcnRGVXFPY0NnWUVBejN5MngxS0lYRXRNVnJzZjJ2cStVQjMrWmZPNjVTcXNQYk8rRjZvQzFmaTB3SVdDbFVBSQo5dHdsVTRqbW91bUlJd0R1NEFKYWxqZC9MRUxUVXZvK3BLMFRvRGdDOFF6RWQvaDdkRG84OGthZ2dYOE1rcXFxClgyOVRDM2U5RXNPMUdORXdFdjFHWFJSRzhybENjTktqVHRQVDFIVlkrWHNSSW9MUG8yZUZ3RDhDZ1lFQTc0TmsKOUlrMGtSRGFEeXdncnhqTlBIMDQzY1VwSi91RlMxSGtmZzk2TWVJYTJaR0taQjJSeVFCVzRhVk1TbDRyRFdkWQp4R3dyZzZPbmtPWW5abGg2cXREY0ljdEdFR21WelF2S0N2U0NYazI2aGVjejJnOW91WEE0WUpuZjJ3NFRwRXdsCm4wNEtUejRiSDNZY3ZCM1k4NVFRSEpzWWZobDY5TzY3M3o5c1BGc0NnWUJUWE4wbTZqOEZMZSttN1JuWVptUHUKVm82dXNhVkdpOFdXS05CWU82TldDczI4aUNlMkJYdFVpNUNucGxwYjNBNHBXaWVmY3ZLb1pmVy9kNzNtR2NydgphT1o0dWVoY3B1K004QlhSMWRCRTJ5R0R4ZUxzVG91VE9td1lNR3lUekhQSFc4eS81R2pQM3VTK2dyWnlFLzh2CkhhWi9Od0tmZ2RXWmt3c1BzUGtwQ3dLQmdRRHU4MUI1NHBTdUVYanJZQ1B2YkRmOW5CUnF0RE9lTHdISnBoYm0KYVR5SW5jSVp3MmlrL3hjZHlCZmxvSXJmT3RtSzBzc3RrNWxLQ0xDNUQ5VEk5NGJSK2ZOVVI3OUx5bnJvQ1ZYMQozZ0JlWXYrdWJYNCtrOWJ3QW5STWM5ZHdiTGZOMXlaRnE0Ny9oYjk3Z05Pa0hjYi9JMzE3ZklSUDhjM0lwSkNNClpuTHVOd0tCZ1FETHN4T1Jwck45cE5
odWV6akUwa2gvMDdBTU1sdExSUm05Zm9qNDlDU3ZhMTlmVVYyNGdYZ0oKQ0hWcTloZU9lTzI4ZDRrQ0wydUJtUktBSFJKbml3Q0wvdkNVV0IyaXhyS05FUXpnWGhnWUVPSTcyaHYzNndpNAozYjZNTUQ5Mk1IVHlEVEI1dExWbDBZNElqTlp1QXRiMFpHWHdTOVFWL1RmYTBaeTRicWJWSmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
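Each of these kubeconfig files carries its client identity inline in `client-certificate-data`; decoding it shows the Subject (O/CN) the API server will authenticate. A minimal sketch — the demo builds a throwaway cert and a kubeconfig-style line under /tmp, whereas on the node you would point the same pipeline at /etc/kubernetes/*.conf directly:

```shell
# Demo setup: throwaway client cert plus a kubeconfig-style line.
# (On a real node, read /etc/kubernetes/*.conf instead of /tmp/demo.conf.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/O=system:masters/CN=kubernetes-super-admin" 2>/dev/null
printf '    client-certificate-data: %s\n' "$(base64 -w0 /tmp/demo.crt)" > /tmp/demo.conf

# Pull the embedded cert back out and print the identity the API server sees.
awk '/client-certificate-data/{print $2}' /tmp/demo.conf \
  | base64 -d | openssl x509 -noout -subject
```

Run against controller-manager.conf and scheduler.conf below, the same pipeline surfaces CN=system:kube-controller-manager and CN=system:kube-scheduler.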
    
# kcm
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/controller-manager.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lJZEhxUTZHNEtWZ0l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHlOekF4TWpRd09UVTJNRGRhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5Kb2E1aEZ5SEp4cGg4SmxrS0NRaTJVR1FoM2ZPQ3UKdDVCMldINldGTFJlUmhhTlBQekJ2WkttaWxqNzhBNUcwUlBBeWJqNXc5ZS9mNnA0OEhhVU5EZStMcW5EeHcrcgo2R1MzcVJpR0p3RllCUkFUaDQ3QnRnOUdFNG0ySXBudEFvd1RPSE5GSFRZU1Y2aXdIdDlRN1Y2NTk4eCtjNWQwCnhIT1ZmYkd2L1RXdXJhVlFJdkdMMDZ1Y3hWRElYZUt3NWtmbzlPdW9ucHV2NmlPODlJVG5KTGVjZXg3TmF2Q20KYlp2Tmo2SWZ0UXRiM2lQM0lwVDJGZExKT1VORElhTTcxT2x6ZEtSVE5XSUJoWjltYmRyMEVpdnZkT3dqOHduTwphUVRYOHo5VlE2dGgxeE9FeGZheVlRSkEyLzJxbEFtelQrZDdWYmVjdWpyYndSc2F5NUF5N2hFQ0F3RUFBYU5XCk1GUXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIKL3dRQ01BQXdId1lEVlIwakJCZ3dGb0FVRzhRcjZOZjJQRUp3TExtdUtHYzFEWjlCYU1zd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBTk55c095cGQ5WjJEN0NRenRJeVBjVU9XKzlVcmM5QVY2Z0thanU2WDJGaXRhYnhrZnMrClZBWEg4QUh6ZmxpTDk1blF6NlVBaG9RQk8yam9Cb3lPM0JWUTMxd1krVjlTQjhESnR2NXBUQURIV1NBNGZnU1oKdFJvT215RHZQaHdWUFFVcDFUcGcwMHBwL3VFZ29jK0NJWTNjYURuN1dsUGNrTHNYNzBvS3poWXpWbjFINk8yegpiNk5PY0pZL3ZlVUY2bXBmRUo1b0NWVVVxc2MzZ2kyQzNoekR5Q3E1UVJZU0pseURlemRodndqRmdDMFpBbGtLCkl5WENXZkJUZjVFMjk4Z1R2L2VOYkNwZkY2ck0wNWRVS29vcXZCSWp6SExNZ09GemxpOGRDV09ndWM5a0J5ME4KTnFERlZ4UWI1c01WZ1RuV0U4KzcyWW8xRnppQ0ltVlpyaFU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMG1ocm1FWEljbkdtSHdtV1FvSkNMWlFaQ0hkODRLNjNrSFpZZnBZVXRGNUdGbzA4Ci9NRzlrcWFLV1B2d0RrYlJFOERKdVBuRDE3OS9xbmp3ZHBRME43NHVxY1BIRDZ2b1pMZXBHSVluQVZnRkVCT0gKanNHMkQwWVRpYllpbWUwQ2pCTTRjMFVkTmhKWHFMQWUzMUR0WHJuM3pINXpsM1RFYzVWOXNhLzlOYTZ0cFZBaQo4WXZUcTV6RlVNaGQ0ckRtUitqMDY2aWVtNi9xSTd6MGhPY2t0NXg3SHMxcThLWnRtODJQb2grMUMxdmVJL2NpCmxQWVYwc2s1UTBNaG96dlU2WE4wcEZNMVlnR0ZuMlp0MnZRU0srOTA3Q1B6Q2M1cEJOZnpQMVZEcTJIWEU0VEYKOXJKaEFrRGIvYXFVQ2JOUDUzdFZ0NXk2T3R2Qkd4ckxrREx1RVFJREFRQUJBb0lCQUJsbC8yN3ZKWVlqRCtGNQpQOGtoZmltUVVnRkNvekZnQmNxTGJwMUJNcGlmTktpdVBlbG8zYUJoT0J3THRXdVR3dE0ybDJNYnl6YzA1NDFGCmNnbHVWR3RTS3NIVlh5Y1dJa1JlSXl4UnJVMnRPVmM2ZEVlWVBJalZkYkJPNnhoWGt0SWowOUdlU3l0bXhXd1kKYm5HWWpENitCaHFLUFJ2UVBaS1NQZ3ovNkhuZE53eUg3RmlLNFpVK3NYckpGYTh3Qmw4YVR1U2dGZTg2Zm9qWQppSjNWVkNjNmdqazFjYzBEalRFMUNuTlZaNlVaL2VheXAxcmRocEdMNFcxZkxLNnUwbWJmK3JtTXpsVWF4WU9FCjdZQVl0WFo2SmNIMzZJYS9CeXZTVStsU2FzUU1UaHVqSWlqY0g5NFZDS2IzQm01SzJ0M05nY1hpT0xRSmsxVkgKYVRSZUhwRUNnWUVBNFZiWWZWelVmVVpvT3JKSUJkNGNYWVJoanhHYkxyTWFFVksrMjdPbWsvZFpSRDJmR2c3aQo5Q3V3K3JLbkNIWURabDdZN2N2Q2M0K1FJclJFajVacjIzZHZVYktIMEFKMUZReW5mcTF0Sk5kZzNsQjdDOE04CjJvaThpYmJnTmxGS3RkampxNVJSQXhPRHZsZDhHMHN2R09oS2xZb2pXR0dwVHFpak94Nkg0RlVDZ1lFQTd3bDYKQzZFYVNsSXJLbjZUZkZsd0E1a2x6TnNXdE5NL0NGL2tZMXpyZFlTcWx3ZkR5VmZtQnpjRlpBY0FkUm9aRzF3Zgp5a1c2RTF2RkM4WkhONHN4dXpITk04RXNNRVg3RjBoVi9YcnM2RitLSmJ1TGtib2J0RUQ0SGJtUnNCdVJpdXN0CmZNSkpSSDJ0RXVmWnRUOUVobk9rV3Jka2hmeDEzNUhkVWlBMUlzMENnWUVBcGM3QVo2WkoySkJaRzIrWm5XK3MKMFljYVBpckhWQnFIZ04yeEFIcDFoUVVKVXpSQWdPMFpSRzl0djFwN203Y3lrejRSUXhDZVdXZjJ1QUtMUEZpRApycTU0WTlZSkp4N1h4aEJVb3RxN3A5TXZQUVpkTSsrS05JZE9xOHE3dWx3Z3JDUVdpbWNOSVVWWHVGUXBSdkFRCmpMUklSVGFyQVZxRE9SVFBYeTM4N3kwQ2dZQnF5b0lTOWZ1SDNxUFlUVXBZMEtCQmkwY2UrWFp3Zkx2NVl0WG4KS2xrclhJVFdDcXNHcGRWbnZjWVR4U2tJS0F1MWRIZmpaemxWY3JkYXBrK2syZlB5M0xIL2dEcmNxamNlVkx2TwpEZ0FQWkxlVVdmQmx2NDZtL2l1YkpBK1piUWVkMTZtdnhpRHpqMjRtTnh6Rlk2bWFvOGwybWQ0NEdlMFRYOWhQCjI0SEJ0UUtCZ0ZFaEM1V1U1eFM1ZmR
ZVmpPdmI4TnRnTjJMdENSMXNFR01vd0trSmJScFRyNzI2ZGt2dDJ6TG8KYlhyampwcFFuUmZqUUJLNm5MemMzWC8yM3J4ZUUyVUVlZlBHNTNIeDJmcEZwMVh3cGtxWnY1SjBkSngyZGU1ZApuR0g3cEUzL1p5c3J3YVdsNmUrc3J4cUZTd1FaOWVkSUVLMVJjTUNlbTlieVVQRHhodlc4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    
# scheduler
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/scheduler.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREVENDQWZXZ0F3SUJBZ0lJTzk5SUwxNFUzczR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHlOekF4TWpRd09UVTJNRGRhTUNBeApIakFjQmdOVkJBTVRGWE41YzNSbGJUcHJkV0psTFhOamFHVmtkV3hsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9qUko1UzdVN0NxS3p2SHJBTXZGb29MUXZ0YXEyZXBySDZNVkwxZk1xNVAKVXZ5OWQ1NVozZmVuaVkySGp0NjBDSnp0S25WaWgyTkRGbnpHNDd1M1pYNURFaTFJU3A5cm03ZFhlU3MxOElUZApSYlVGY094bWJBT2pucDZJV21yQUxtUTgzVHpQazlEcllYZ0dPRVNQbWdGSW04bzY1RWtOdkVZd2V3dEhMWkdGCkxOYkVGcWVrc2I5dVp2OCszT1dFNEpOa3BFMjAwMDBHL1VxajFVMkw2ZElvZWtmc3ZZTGppWEpnU2dXZXZKL2gKdmhkWWRzOHNLcEZ5eVk0MmZEVkUwOXRFR3BJdmdKcUY2U3V1SkR2MFA1SS9wNFg3ck4yUmJ3T3ZYTlVTQUF5LwpkVkcxSUJsNjRwUk5lbWxyM0d4SjlqUkU5ZUo0cHlYVC9INk9wTGthRzVVQ0F3RUFBYU5XTUZRd0RnWURWUjBQCkFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WUQKVlIwakJCZ3dGb0FVRzhRcjZOZjJQRUp3TExtdUtHYzFEWjlCYU1zd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQgpBRlF5ZDN2Y0NMVlBJUW10WGRudlJjUTlYWkRUMVF4ZndRb2FLd2RYbG90OFFVdm5XMklHUWt4RUdVeWdyYlBsCjk3MGl2cU1zOEQ4NnNvMmV1NzVjcnJCL0J1NHJkS1hORStHc2ZhYittdFhBaGFycmxnNDlaQ0o1aklPY2RsM1UKTHh1c0VIOU9VOXRZVFJETEJ6N0J1TWhDM0dmcGM2TTFGTmlzMm1Zd2grNDlaVkNqNENPRStBVCtDc0I4aWNCYworRENTcENqTEltaDEyc1o0KzdVWTN1Nm5kRHo1NEJCWnVtVGhFSUkvdEtyZmxqM2JDSXh5Z3BMa3RKUUl4NnRoCjhzRFpVZktSSVI0NXFqaC9oVWtlNUVmTjMwOGxJTFdhdXUxZUpCcmg5MHNpdWlMekhkanNUc21FVTBQOHo2bEcKNHZ4b0ZaZitVTVBtZ3ZHUzk1Ny9OajQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K

# kubelet
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k8s-ctr
  name: system:node:k8s-ctr@kubernetes
current-context: system:node:k8s-ctr@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:k8s-ctr
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
    

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -l /var/lib/kubelet/pki
total 12
-rw-------. 1 root root 2822 Jan 24 18:56 kubelet-client-2026-01-24-18-56-11.pem
lrwxrwxrwx. 1 root root   59 Jan 24 18:56 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2026-01-24-18-56-11.pem
-rw-r--r--. 1 root root 2262 Jan 24 18:56 kubelet.crt
-rw-------. 1 root root 1679 Jan 24 18:56 kubelet.

# kubelet server role: check Subject, Key Usage, SAN
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/pki/kubelet.crt | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 7292065219812151886 (0x65329eaeb1e2524e)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=k8s-ctr-ca@1769248570
        Validity
            Not Before: Jan 24 08:56:10 2026 GMT
            Not After : Jan 24 08:56:10 2027 GMT
        Subject: CN=k8s-ctr@1769248571
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:a8:2c:fa:06:90:a1:40:78:f4:b0:a6:e0:ea:1c:
                    ed:9d:ee:66:95:54:19:db:eb:ed:7b:31:3e:41:d4:
                    bb:19:36:a1:52:c6:9c:08:50:6c:55:d5:b7:e8:40:
                    a5:84:e1:43:92:d1:0a:2a:3d:82:bc:9c:99:47:08:
                    69:3a:73:0a:f0:1e:b9:2b:74:79:34:eb:87:41:91:
                    d7:af:e0:0c:0c:5a:42:44:a9:6a:e2:04:3f:74:99:
                    64:3a:66:f0:c2:16:c3:71:37:e8:f7:9e:b9:72:d3:
                    79:fd:90:2b:c9:fb:d4:db:66:86:4b:2e:77:b4:50:
                    f1:08:5e:d1:78:4a:de:fa:b7:da:37:f1:04:91:41:
                    1b:cd:f9:42:b7:d3:78:1c:7a:dc:d1:ef:50:c4:45:
                    de:42:11:3a:66:dd:46:45:10:3f:3d:bb:68:50:d0:
                    bd:02:8f:58:1c:14:c8:79:e6:c4:5e:c0:21:de:75:
                    3d:bc:f8:dc:7b:4c:41:c3:fc:0f:f3:58:81:f7:b6:
                    ad:42:ae:cb:3b:53:99:e3:22:3d:67:6e:9b:db:0a:
                    53:9a:19:d0:9d:7e:ed:27:75:ed:76:cb:ae:28:ce:
                    1e:91:c8:52:1d:75:a0:e5:dd:d8:7c:18:32:55:0a:
                    b2:20:47:cd:37:0d:be:e2:1c:0a:3d:f4:40:0e:de:
                    b1:a5
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                08:C1:8D:45:1A:4C:42:6B:1C:B8:B6:65:E1:8F:81:28:55:94:D6:2B
            X509v3 Subject Alternative Name:
                DNS:k8s-ctr
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        73:3b:6a:c2:e8:0d:68:35:7a:3a:18:b5:b9:34:aa:0b:85:ac:
        73:b3:12:35:f7:4a:2e:98:08:0c:82:29:41:56:06:5b:27:a0:
        08:d0:ab:8d:86:78:f8:fa:b4:e1:af:54:bd:d0:97:fa:b6:9f:
        1f:9e:67:e4:5c:23:22:ec:11:c2:a2:56:4a:45:db:4d:b5:7f:
        49:f0:7e:aa:d6:03:e0:32:a5:f4:ed:e8:d6:fd:f2:85:87:b9:
        25:93:47:d8:33:d9:65:e9:36:61:bf:9a:34:3f:66:e2:42:6d:
        25:0b:99:13:c7:62:c5:97:d9:7c:b4:18:b4:8e:40:ff:6f:cb:
        98:03:8a:78:23:17:83:69:ec:c7:3b:0f:d7:cc:45:6f:f1:05:
        14:3d:f9:57:45:de:21:2f:ac:f7:ee:b2:87:e4:ee:75:f7:16:
        36:42:0a:d0:f5:d5:48:aa:8d:95:99:37:2f:57:48:49:88:f8:
        74:d0:4f:02:cd:a9:4c:23:74:af:dd:c2:93:ea:fc:4e:cc:23:
        3c:c9:f5:37:53:77:62:3e:b3:e9:a5:2b:ee:b3:59:10:7c:9c:
        6c:50:1e:e3:94:37:ae:1d:96:d2:86:7f:d5:a0:c8:24:fc:b8:
        1f:62:6e:38:78:49:9b:13:9f:76:ab:ce:a1:3f:18:18:4d:a3:
        b9:2b:e1:fd
        
# kubelet client role: check Subject, Key Usage
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/pki/kubelet-client-current.pem | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8262525664953040483 (0x72aa6350c712fe63)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jan 24 09:51:07 2026 GMT
            Not After : Jan 24 09:56:07 2027 GMT
        Subject: O=system:nodes, CN=system:node:k8s-ctr
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:cd:48:6b:aa:c0:2b:ab:9f:35:8d:db:81:53:12:
                    b9:7e:b4:db:7b:33:8d:5f:0d:8d:45:14:74:0c:41:
                    37:8b:be:98:3b:f7:84:76:77:3d:8a:34:40:4e:0a:
                    e1:1c:39:cc:1e:fc:4b:1b:3c:5b:d3:d9:ec:27:31:
                    b3:05:c8:2a:23:8e:4d:e1:e5:37:94:9f:15:30:01:
                    73:7c:a0:a6:06:eb:27:3f:09:11:04:b8:3d:48:4b:
                    a9:76:25:f6:52:f5:97:62:25:62:ea:78:3f:d7:8b:
                    07:c3:46:cc:e3:b6:ef:34:63:41:cf:9b:3d:b3:9e:
                    93:26:15:8d:0a:5a:35:27:3a:4f:f1:b3:d6:ef:32:
                    01:6c:ca:3c:e7:7b:4b:56:24:0f:8e:2d:14:f9:99:
                    9b:b3:83:90:80:21:9a:85:9d:4f:97:b1:9e:a4:6e:
                    6b:49:45:3c:f5:3d:5a:fc:e8:b7:5f:25:86:60:ce:
                    a4:e3:a9:0c:48:1a:03:e8:f3:be:00:fe:61:36:9a:
                    75:3f:d9:7d:ba:69:90:b0:e8:38:60:86:89:23:71:
                    ee:ff:ad:0a:60:1f:23:19:0d:06:35:0a:04:be:11:
                    0c:b6:be:ff:78:db:e3:8a:63:58:9e:99:45:66:cd:
                    b2:1a:77:2e:fc:93:4f:6e:ab:83:db:ca:9a:fb:48:
                    26:63
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                1B:C4:2B:E8:D7:F6:3C:42:70:2C:B9:AE:28:67:35:0D:9F:41:68:CB
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        16:5a:81:34:d1:a0:99:70:f6:0f:53:eb:18:f2:fc:81:76:ad:
        7d:58:62:a9:34:00:f3:7e:94:d8:29:03:ca:67:35:2e:0b:1a:
        2e:15:2c:f0:75:30:49:b2:1f:1d:b0:ec:0d:ec:cd:7b:3b:24:
        4d:61:fe:85:6b:38:e8:69:b5:fb:25:32:3e:d9:0a:2c:1f:83:
        3b:e9:4a:8e:d3:46:04:1d:b2:99:67:87:bd:12:8b:e2:0f:ef:
        9e:83:71:f6:3d:c1:82:75:5d:ca:b1:60:51:9e:ff:c0:3d:d4:
        13:2f:3d:c4:91:60:ff:c8:8d:a8:f4:d4:7c:74:d4:1b:8c:69:
        0d:39:ce:d3:f2:14:63:60:47:85:b9:ae:26:b1:16:b9:16:76:
        22:e7:93:02:4c:60:17:50:b9:94:9b:28:2d:44:49:55:1c:a9:
        d5:1d:1d:ab:cc:08:04:62:be:89:a8:a4:75:17:4e:46:43:c2:
        ff:b8:1b:0e:27:17:e6:09:41:37:43:ee:cd:d2:90:90:27:cb:
        15:65:ff:2b:b7:93:fb:f5:12:10:5b:9a:6e:bc:f8:c9:4d:d7:
        d5:32:b8:7d:f0:78:39:de:df:34:50:ed:78:4e:62:8b:47:d7:
        c1:05:50:67:6f:32:e8:30:60:45:a4:e2:11:37:fa:3e:4e:93:
        7c:a6:22:89
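The client certificate above is only valid for one year, and renewal relies on the kubelet's rotation (`rotateCertificates: true` in the kubelet config below), so expiry is worth checking explicitly. A hedged sketch — `cert_expiry` is a hypothetical helper, and the demo runs against a throwaway self-signed cert rather than the node's real files:

```shell
# Hypothetical helper: print Subject and expiry of a PEM certificate in one call.
cert_expiry() {
  openssl x509 -in "$1" -noout -subject -enddate
}

# Demo with a throwaway cert; on the node you would point it at
# /var/lib/kubelet/pki/kubelet.crt and /var/lib/kubelet/pki/kubelet-client-current.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/ce.key -out /tmp/ce.crt -subj "/CN=demo" 2>/dev/null
cert_expiry /tmp/ce.crt
```

For the kubeadm-managed certificates under /etc/kubernetes/pki, `kubeadm certs check-expiration` lists them all in one shot (it does not cover the kubelet serving cert, which is self-signed per node).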

 

4-7. [k8s-ctr] Check static pods: etcd, kube-apiserver, kube-scheduler, kube-controller-manager

# Check the manifest directory for static pods started by the kubelet
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /etc/kubernetes/manifests/
/etc/kubernetes/manifests/
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
└── kube-scheduler.yaml

1 directory, 4 files
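The kubelet watches this `staticPodPath` directly and runs the four manifests without going through the scheduler; the API server only sees read-only "mirror pods" named `<manifest name>-<node name>`. A small sketch of the names to expect from these manifests on this node:

```shell
# Mirror pod names follow <component>-<node name>; on node k8s-ctr the four
# manifests above surface in kube-system under these names.
node=k8s-ctr
for m in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  echo "${m}-${node}"
done
```

These are exactly the names that appear in the `kubectl get pod -n kube-system` output further below.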

# Check the kubelet configuration
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
crashLoopBackOff: {}
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.100 --pod-infra-container-image=registry.k8s.io/pause:3.10"

# Check the static pods
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -owide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
coredns-668d6bf9bc-c2g8k          1/1     Running   0          56m   10.244.0.4       k8s-ctr   <none>           <none>
coredns-668d6bf9bc-qdwrj          1/1     Running   0          56m   10.244.0.5       k8s-ctr   <none>           <none>
etcd-k8s-ctr                      1/1     Running   0          56m   192.168.10.100   k8s-ctr   <none>           <none>
kube-apiserver-k8s-ctr            1/1     Running   1          56m   192.168.10.100   k8s-ctr   <none>           <none>
kube-controller-manager-k8s-ctr   1/1     Running   0          56m   192.168.10.100   k8s-ctr   <none>           <none>
kube-proxy-6gfjf                  1/1     Running   0          56m   192.168.10.100   k8s-ctr   <none>           <none>
kube-scheduler-k8s-ctr            1/1     Running   0          56m   192.168.10.100   k8s-ctr   <none>           <none>

# etcd: clients call https://192.168.10.100:2379; metrics are served on http://127.0.0.1:2381
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /var/lib/etcd/
/var/lib/etcd/
└── member
    ├── snap
    │   └── db
    └── wal
        ├── 0000000000000000-0000000000000000.wal
        └── 0.tmp

4 directories, 3 files

# kube-apiserver
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.10.100:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.10.100
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/16
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ss -tnlp | grep apiserver
LISTEN 0      4096                *:6443             *:*    users:(("kube-apiserver",pid=72505,fd=3))


## Calling the API from inside the cluster: https://10.96.0.1 or https://kubernetes.default.svc.cluster.local
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   61m

NAME                   ENDPOINTS             AGE
endpoints/kubernetes   192.168.10.100:6443   61m
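From inside a pod, that Service is reached with the auto-mounted ServiceAccount credentials. The sketch below only prints the curl invocation (the mount path is the standard one; the call itself can only be executed from inside a pod):

```shell
# Print the in-cluster API call a pod would make. The ServiceAccount token and
# cluster CA are auto-mounted at this path inside every pod; printed rather than
# executed here, since it only works from within the cluster.
SA=/var/run/secrets/kubernetes.io/serviceaccount
cat <<EOF
curl --cacert $SA/ca.crt \\
     -H "Authorization: Bearer \$(cat $SA/token)" \\
     https://kubernetes.default.svc.cluster.local/version
EOF
```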


# scheduler
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: registry.k8s.io/kube-scheduler:v1.32.11
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /livez
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 127.0.0.1
        path: /readyz
        port: 10259
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /livez
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

## Check the TCP 10259 listen port
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ss -tnlp | grep scheduler
LISTEN 0      4096        127.0.0.1:10259      0.0.0.0:*    users:(("kube-scheduler",pid=72499,fd=3))

## When more than one scheduler pod exists, check which pod holds the leader role
## A Lease is a lightweight k8s coordination resource: leader election, node/component heartbeats, low-overhead status updates at high scale
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get leases.coordination.k8s.io -n kube-system kube-scheduler -o yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2026-01-24T09:56:17Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "5354"
  uid: 78d1cc93-33db-4eeb-abe7-b603fd081c2e
spec:
  acquireTime: "2026-01-24T09:56:17.179818Z"
  holderIdentity: k8s-ctr_3825883f-694e-45f3-a5d2-28094842db8e
  leaseDurationSeconds: 15
  leaseTransitions: 0
  renewTime: "2026-01-24T10:59:51.957723Z"
  
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get leases.coordination.k8s.io -n kube-system kube-scheduler
NAME             HOLDER                                         AGE
kube-scheduler   k8s-ctr_3825883f-694e-45f3-a5d2-28094842db8e   63m

## Node heartbeat (node status): the namespace dedicated to node heartbeats
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get lease -n kube-node-lease
NAME      HOLDER    AGE
k8s-ctr   k8s-ctr   64m

# kube-controller-manager
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/16
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.32.11
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki/ca-trust
      name: etc-pki-ca-trust
      readOnly: true
    - mountPath: /etc/pki/tls/certs
      name: etc-pki-tls-certs
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki/ca-trust
      type: DirectoryOrCreate
    name: etc-pki-ca-trust
  - hostPath:
      path: /etc/pki/tls/certs
      type: DirectoryOrCreate
    name: etc-pki-tls-certs
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

## Check the TCP 10257 listen port
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ss -tnlp | grep controller
LISTEN 0      4096        127.0.0.1:10257      0.0.0.0:*    users:(("kube-controller",pid=72492,fd=3))

## Check the pod CIDR assigned to each node
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr	10.244.0.0/24

## When two or more kcm pods are running, check which one holds the leader lease
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get lease -n kube-system kube-controller-manager -o yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2026-01-24T09:56:16Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "5571"
  uid: 01a84622-5150-45bf-b53b-fc748026a6e1
spec:
  acquireTime: "2026-01-24T09:56:16.556309Z"
  holderIdentity: k8s-ctr_89f0fb81-6ebf-4b54-992d-0b61acedafd3
  leaseDurationSeconds: 15
  leaseTransitions: 0
  renewTime: "2026-01-24T11:02:38.943222Z"
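The Lease fields above are enough to reason about leadership by hand: `holderIdentity` names the active kcm instance, and it stays leader as long as `renewTime` keeps being refreshed within `leaseDurationSeconds` of the current time. A quick sketch parsing the timestamps copied from the output above:

```python
from datetime import datetime

# Timestamps copied from the Lease above. The holder acquired the lease once
# (leaseTransitions: 0) and has been renewing it ever since.
acquire = datetime.fromisoformat("2026-01-24T09:56:16.556309+00:00")
renew = datetime.fromisoformat("2026-01-24T11:02:38.943222+00:00")

held = renew - acquire
print(f"lease held for {held} with leaseTransitions: 0")  # 1:06:22.386913
```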
  
## Each controller uses its own individual ServiceAccount + RBAC
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get sa -n kube-system | grep controller
attachdetach-controller                       0         66m
certificate-controller                        0         66m
clusterrole-aggregation-controller            0         66m
cronjob-controller                            0         66m
daemon-set-controller                         0         66m
deployment-controller                         0         66m
disruption-controller                         0         66m
endpoint-controller                           0         66m
endpointslice-controller                      0         66m
endpointslicemirroring-controller             0         66m
ephemeral-volume-controller                   0         66m
expand-controller                             0         66m
job-controller                                0         66m
namespace-controller                          0         66m
node-controller                               0         66m
pv-protection-controller                      0         66m
pvc-protection-controller                     0         66m
replicaset-controller                         0         66m
replication-controller                        0         66m
resourcequota-controller                      0         66m
service-account-controller                    0         66m
statefulset-controller                        0         66m
ttl-after-finished-controller                 0         66m
ttl-controller                                0         66m
validatingadmissionpolicy-status-controller   0         66m

 

 

4-8. Verify the essential add-ons (coredns, kube-proxy)

# Check coredns
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get deploy -n kube-system coredns -owide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                    SELECTOR
coredns   2/2     2            2           67m   coredns      registry.k8s.io/coredns/coredns:v1.11.3   k8s-app=kube-dns


# Check kube-proxy
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get ds -n kube-system -owide
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                                SELECTOR
kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   68m   kube-proxy   registry.k8s.io/kube-proxy:v1.32.11   k8s-app=kube-proxy

 

 

5. [Worker nodes] Join the k8s cluster with kubeadm → verify

5-1. Prerequisites

## Apply the same settings on k8s-w2

# Switch to the root (login) environment
root@k8s-w1:~# echo "sudo su -" >> /home/vagrant/.bashrc
root@k8s-w1:~# sudo su -
Last login: Sat Jan 24 20:05:05 KST 2026 on pts/1

# Time / NTP settings
root@k8s-w1:~# timedatectl set-local-rtc 0

# Set the system timezone to Korea (KST, UTC+9): the system clock stays on UTC; only the display is converted to KST
root@k8s-w1:~# timedatectl set-timezone Asia/Seoul

# SELinux: Permissive mode is recommended for Kubernetes
root@k8s-w1:~# setenforce 0

# Keep Permissive mode across reboots
root@k8s-w1:~# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Disable firewalld
root@k8s-w1:~# systemctl disable --now firewalld

# Disable swap
root@k8s-w1:~# swapoff -a

# Remove the swap line from /etc/fstab so swap stays disabled after reboot
root@k8s-w1:~# sed -i '/swap/d' /etc/fstab

# Load kernel modules
root@k8s-w1:~# modprobe overlay
root@k8s-w1:~# modprobe br_netfilter
root@k8s-w1:~# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
overlay
br_netfilter

# Kernel parameters: network settings that make bridged traffic pass through iptables
root@k8s-w1:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1

# Apply the settings
root@k8s-w1:~# sysctl --system >/dev/null 2>&1

# hosts configuration

 

5-2. Install the CRI: containerd (runc) v2.1.5

# Add the Docker repository
root@k8s-w1:~# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo


# Install containerd
root@k8s-w1:~# dnf install -y containerd.io-2.1.5-1.el10
Docker CE Stable - aarch64                                   396  B/s | 2.0 kB     00:05
Package containerd.io-2.1.5-1.el10.aarch64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!

# Generate the default config and enable SystemdCgroup (very important)
root@k8s-w1:~# containerd config default | tee /etc/containerd/config.toml
version = 3
root = '/var/lib/containerd'
state = '/run/containerd'
temp = ''
disabled_plugins = []
required_plugins = []
oom_score = 0
imports = []
root@k8s-w1:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Reload systemd unit files
root@k8s-w1:~# systemctl daemon-reload

# Start and enable containerd
root@k8s-w1:~# systemctl enable --now containerd

 

 

5-3. Install kubeadm, kubelet, and kubectl v1.32.11

# Add the repo
root@k8s-w1:~# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

# Install
root@k8s-w1:~# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Kubernetes                                                   164  B/s | 1.7 kB     00:10
Package kubelet-1.32.11-150500.1.1.aarch64 is already installed.
Package kubeadm-1.32.11-150500.1.1.aarch64 is already installed.
Package kubectl-1.32.11-150500.1.1.aarch64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!

# Enable kubelet (it does not run successfully until kubeadm join)
root@k8s-w1:~# systemctl enable --now kubelet

# Write the /etc/crictl.yaml file
root@k8s-w1:~# cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF

 

5-4. Join the cluster with kubeadm

# Save a baseline snapshot of the environment
root@k8s-w1:~# crictl images
root@k8s-w1:~# crictl ps
root@k8s-w1:~# cat /etc/sysconfig/kubelet
root@k8s-w1:~# tree /etc/kubernetes  | tee -a etc_kubernetes-1.txt
root@k8s-w1:~# tree /var/lib/kubelet | tee -a var_lib_kubelet-1.txt
root@k8s-w1:~# tree /run/containerd/ -L 3 | tee -a run_containerd-1.txt
root@k8s-w1:~# pstree -alnp | tee -a pstree-1.txt
root@k8s-w1:~# systemd-cgls --no-pager | tee -a systemd-cgls-1.txt
root@k8s-w1:~# lsns | tee -a lsns-1.txt
root@k8s-w1:~# ip addr | tee -a ip_addr-1.txt 
root@k8s-w1:~# ss -tnlp | tee -a ss-1.txt
root@k8s-w1:~# df -hT | tee -a df-1.txt
root@k8s-w1:~# findmnt | tee -a findmnt-1.txt
root@k8s-w1:~# sysctl -a | tee -a sysctl-1.txt

# Write the kubeadm configuration file
root@k8s-w1:~# NODEIP=$(ip -4 addr show enp0s9 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
root@k8s-w1:~# cat << EOF > kubeadm-join.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "123456.1234567890123456"
    apiServerEndpoint: "192.168.10.100:6443"
    unsafeSkipCAVerification: true
nodeRegistration:
  criSocket: "unix:///run/containerd/containerd.sock"
  kubeletExtraArgs:
    - name: node-ip
      value: "$NODEIP"
EOF
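The JoinConfiguration above embeds a placeholder bootstrap token (in practice you would generate one on the control plane, e.g. with `kubeadm token create --print-join-command`). Bootstrap tokens must match the documented `[a-z0-9]{6}.[a-z0-9]{16}` format (token-id, a dot, token-secret); a minimal sketch validating that format (the regex and helper name are illustrative):

```python
import re

# kubeadm bootstrap token format: 6-char token-id "." 16-char token-secret,
# lowercase alphanumerics only. Validating the placeholder used above.
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

def is_valid_bootstrap_token(token: str) -> bool:
    return TOKEN_RE.fullmatch(token) is not None

print(is_valid_bootstrap_token("123456.1234567890123456"))  # True
print(is_valid_bootstrap_token("short.token"))              # False
```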


root@k8s-w1:~# kubeadm join --config="kubeadm-join.yaml"
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.005192609s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check with crictl
root@k8s-w1:~# crictl images
IMAGE                                           TAG                 IMAGE ID            SIZE
ghcr.io/flannel-io/flannel-cni-plugin           v1.7.1-flannel1     127562bd9047f       5.14MB
ghcr.io/flannel-io/flannel                      v0.27.3             d84558c0144bc       33.1MB
registry.k8s.io/kube-proxy                      v1.32.11            dcdb790dc2bfe       27.6MB
registry.k8s.io/metrics-server/metrics-server   v0.8.0              bc6c1e09a843d       20.6MB
registry.k8s.io/pause                           3.10                afb61768ce381       268kB
root@k8s-w1:~# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                     NAMESPACE
5dcdfd48b8591       d84558c0144bc       17 seconds ago      Running             kube-flannel        0                   71bc4d12de4dd       kube-flannel-ds-gwnvf   kube-flannel
2da4cf462a2d7       dcdb790dc2bfe       18 seconds ago      Running             kube-proxy          0                   9e69b76d0cf87       kube-proxy-fz6s9        kube-system

# Verify the cluster-info ConfigMap can be fetched
root@k8s-w1:~# curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info | jq
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "cluster-info",
    "namespace": "kube-public",
    "uid": "5c22db5f-a97c-4a31-a4c3-643b4fd74f74",
    "resourceVersion": "300",
    "creationTimestamp": "2026-01-24T09:56:15Z",
    "managedFields": [
      {
        "manager": "kubeadm",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2026-01-24T09:56:15Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:data": {
            ".": {},
            "f:kubeconfig": {}
          }
        }
      },
     ...

 

5-5. [k8s-ctr] Check k8s-w1/w2 information

# Check the joined worker nodes
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS   ROLES           AGE    VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   106m   v1.32.11   192.168.10.100   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-w1    Ready    <none>          110s   v1.32.11   192.168.10.101   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-w2    Ready    <none>          25s    v1.32.11   192.168.10.102   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5

# Check the pod CIDR assigned to each node
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr	10.244.0.0/24
k8s-w1	10.244.1.0/24
k8s-w2	10.244.2.0/24
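These per-node podCIDR values come from kube-controller-manager's node IPAM, which carves the `--cluster-cidr=10.244.0.0/16` shown in the manifest into /24 subnets (the default node CIDR mask size). A minimal sketch of that carving with the stdlib `ipaddress` module (the node-to-subnet pairing assumes allocation in join order, as it happened to occur above):

```python
import ipaddress

# Split the flannel cluster CIDR into per-node /24 pod CIDRs,
# mirroring the kubectl output above.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

for node, subnet in zip(["k8s-ctr", "k8s-w1", "k8s-w2"], node_subnets):
    print(f"{node}\t{subnet}")
# k8s-ctr  10.244.0.0/24
# k8s-w1   10.244.1.0/24
# k8s-w2   10.244.2.0/24
```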

# Verify that routes to the other nodes' pod CIDRs (per-node pod CIDR) are added to the kernel routing table automatically: routed over VXLAN via flannel.1
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ip -c route | grep flannel
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink

# Verify k8s-ctr can reach 10.244.1.0 (over the VXLAN overlay)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ping -c 1 10.244.1.0
PING 10.244.1.0 (10.244.1.0) 56(84) bytes of data.
64 bytes from 10.244.1.0: icmp_seq=1 ttl=64 time=1.01 ms

--- 10.244.1.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.006/1.006/1.006/0.000 ms

# Check taints on the worker node
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kc describe node k8s-w1
Name:               k8s-w1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=k8s-w1
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"c2:65:5d:ff:bf:f5"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.10.101
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 24 Jan 2026 20:40:42 +0900
Taints:             <none>
Unschedulable:      false

# Check pods scheduled on k8s-w1
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get pod -A -owide | grep k8s-w1
kube-flannel   kube-flannel-ds-gwnvf             1/1     Running   0          3m20s   192.168.10.101   k8s-w1    <none>           <none>
kube-system    kube-proxy-fz6s9                  1/1     Running   0          3m20s   192.168.10.101   k8s-w1    <none>

 

5-6. [k8s-w1/w2] Check node information, compare against the baseline environment snapshot, and verify sysctl changes

# Check that kubelet is active
root@k8s-w1:~# systemctl status kubelet --no-pager
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sat 2026-01-24 20:40:43 KST; 3min 52s ago
 Invocation: fea023e4fa34432aa4685ad662845d69
       Docs: https://kubernetes.io/docs/
   Main PID: 89116 (kubelet)
      Tasks: 12 (limit: 12337)
     Memory: 29.6M (peak: 30.4M)
        CPU: 4.132s
     CGroup: /system.slice/kubelet.service
             └─89116 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubel…


# Save the environment snapshot again
root@k8s-w1:~# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
root@k8s-w1:~# tree /etc/kubernetes  | tee -a etc_kubernetes-2.txt
/etc/kubernetes
├── kubelet.conf
├── manifests
└── pki
    └── ca.crt

3 directories, 2 files
root@k8s-w1:~# cat /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSXk0ODJreHY4b013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qUXdPVFV4TURkYUZ3MHpOakF4TWpJd09UVTJNRGRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURZWkxueHAxUWFvMWNmOWVBNlZkalZwb2RFenpwMzk4d2QxWFBvd1VjQm1idzY1RFlWR0c5c1VlcVYKL042elFSREZJMmJQeUVTRDVJM092U0tvVGRWNS9rZ1RSaWFyTFlhYS84WENhTzM5Q05PamlmMjdaQVJYa2xjcwo5czJiODNOU1JycDJiNmJhNitHbkYwbUM1UXlPbE1uZVN6Uys3OFV2L3Jpdk9nQ2tkdzBCSUQvblRoTjdhR3lhCjQvQ1RJaVZCMkVoQXllT2FWSEVER3hLMk1ZNXhWL2lxVjBoVFh3V3B1NnhiZ1UxdlFlbml4dkIweVM3MjgyblEKajEydUpjdTFiUkcwWStDbFMzb1owKzBYRGY5dFNnNU54RTR6aWh2NjFpQVl3MnBTRzNJNU5CYVZXb0NtbkJVeAprbkJBbTRaZXJvNDRTVHZLU0pPMXd5ckQzcnluQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRYnhDdm8xL1k4UW5Bc3VhNG9aelVObjBGb3l6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2JkTUkwUGFyYgowbTZHdDRaNFJNRmtxR3d2T0VNdTlYZVRlTVVLanpXSklRVEUxSlpDajFGZnlHbWhBZ3NQaERQcVBabDZyMmFvCmlrOVdhbm12dmxkM3RGWElqbnRPM0ZyVmtOTGREKzYwVlVkMVNuL2ZvSk16VmJLNzlqNWlUUWs1aWJSNnRraXoKSFZnTktvTHlaYjk4SXpFOEdEcVQ2YnV2ZG9UM2xSMW5VWlNNVHJFazZLdXA4MDJjNGN3dUxHT0MrR2I1WUg2bwp0Z3d5d3gxY2hMcGx4VkJkSlBtSld2NHFxNm0vYXZlUmRLNm9jdXIzVWNraGJhYUVweWhwUmRDMDYzWXE2YjdCCnhvV1kvdGRRQzhMNEQ1SlVrSWx6TW9Cdzd5emJrVTJINER6QVYzMHBlakpSdVVSSStxQWFxSERvalBWenNzSG4KY3ByMGFTTlVIWGNnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.100:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-curre

 

 

6. Install monitoring tools: prometheus-stack → certificate exporter → check the Grafana dashboard

6-1. Install metrics-server

# metrics-server
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm upgrade --install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system

# Verify
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl top node
NAME      CPU(cores)   CPU(%)   MEMORY(bytes)   MEMORY(%)
k8s-ctr   175m         4%       816Mi           29%
k8s-w1    31m          1%       374Mi           21%
k8s-w2    34m          1%       344Mi           19%

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl top pod -A --sort-by='cpu'
NAMESPACE      NAME                              CPU(cores)   MEMORY(bytes)
kube-system    kube-apiserver-k8s-ctr            57m          202Mi
kube-system    etcd-k8s-ctr                      36m          43Mi
kube-system    kube-controller-manager-k8s-ctr   25m          54Mi
kube-system    kube-scheduler-k8s-ctr            15m          23Mi
kube-flannel   kube-flannel-ds-mqn8t             10m          13Mi
kube-flannel   kube-flannel-ds-gwnvf             10m          13Mi
kube-flannel   kube-flannel-ds-t2tkq             9m           13Mi
kube-system    metrics-server-5dd7b49d79-dqzjl   4m           19Mi
kube-system    coredns-668d6bf9bc-c2g8k          3m           16Mi
kube-system    coredns-668d6bf9bc-qdwrj          3m           16Mi
kube-system    kube-proxy-fz6s9                  2m           16Mi
kube-system    kube-proxy-6gfjf                  2m           16Mi
kube-system    kube-proxy-9cgdt                  1m           16Mi

 

6-2. Install kube-prometheus-stack

# Add the repo
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories

# Create the values file
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: "20s"
    evaluationInterval: "20s"
    externalLabels:
      cluster: "myk8s-cluster"
  service:
    type: NodePort
    nodePort: 30001

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator
  service:
    type: NodePort
    nodePort: 30002

alertmanager:
  enabled: true
defaultRules:
  create: true

kubeProxy:
  enabled: false
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
EOT
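One detail worth noting in the values file: the fixed `nodePort` values must fall inside the apiserver's default NodePort range of 30000-32767 (controlled by the `--service-node-port-range` flag). A trivial check:

```python
# Kubernetes' default NodePort range is 30000-32767 inclusive.
NODE_PORT_RANGE = range(30000, 32768)

for port in (30001, 30002):  # prometheus / grafana nodePorts from the values file
    assert port in NODE_PORT_RANGE, f"{port} outside the default NodePort range"
print("30001/30002 are valid NodePorts")
```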


# Deploy
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 80.13.3 -f monitor-values.yaml --create-namespace --namespace monitoring

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm list -n monitoring
NAME                 	NAMESPACE 	REVISION	UPDATED                                	STATUS	CHART                        	APP VERSION
kube-prometheus-stack	monitoring	1       	2026-01-24 21:05:06.688856739 +0900 KST	failed	kube-prometheus-stack-80.13.3	v0.87.1

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get pod,svc,ingress,pvc -n monitoring
NAME                                                            READY   STATUS              RESTARTS   AGE
pod/alertmanager-kube-prometheus-stack-alertmanager-0           0/2     Init:0/1            0          28s
pod/kube-prometheus-stack-admission-patch-5r4p6                 0/1     ContainerCreating   0          62s
pod/kube-prometheus-stack-grafana-5cb7c586f9-vwsr8              0/3     ContainerCreating   0          63s
pod/kube-prometheus-stack-kube-state-metrics-7846957b5b-rl2ck   1/1     Running             0          63s
pod/kube-prometheus-stack-operator-584f446c98-v2qhc             1/1     Running             0          63s
pod/kube-prometheus-stack-prometheus-node-exporter-854fb        1/1     Running             0          63s
pod/kube-prometheus-stack-prometheus-node-exporter-9wfls        1/1     Running             0          63s
pod/kube-prometheus-stack-prometheus-node-exporter-g7qm6        1/1     Running             0          63s
pod/prometheus-kube-prometheus-stack-prometheus-0               0/2     Init:0/1            0          28s

NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/alertmanager-operated                            ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP      28s
service/kube-prometheus-stack-alertmanager               ClusterIP   10.96.224.200   <none>        9093/TCP,8080/TCP               64s
service/kube-prometheus-stack-grafana                    NodePort    10.96.113.15    <none>        80:30002/TCP                    64s
service/kube-prometheus-stack-kube-state-metrics         ClusterIP   10.96.132.129   <none>        8080/TCP                        64s
service/kube-prometheus-stack-operator                   ClusterIP   10.96.238.79    <none>        443/TCP                         64s
service/kube-prometheus-stack-prometheus                 NodePort    10.96.55.234    <none>        9090:30001/TCP,8080:31632/TCP   64s
service/kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.96.13.3      <none>        9100/TCP                        64s
service/prometheus-operated                              ClusterIP   None            <none>        9090/TCP                        28s

# Open each web UI via its NodePort
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# open http://192.168.10.100:30001 # prometheus
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# open http://192.168.10.100:30002 # grafana : login admin / prom-operator

 

 

6-3. Configure metric scraping for kube-controller-manager, etcd, and kube-scheduler

# Change kube-controller-manager bind-address 127.0.0.1 => 0.0.0.0
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# sed -i 's|--bind-address=127.0.0.1|--bind-address=0.0.0.0|g' /etc/kubernetes/manifests/kube-controller-manager.yaml
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep bind-address
    - --bind-address=0.0.0.0

# Change kube-scheduler bind-address 127.0.0.1 => 0.0.0.0
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# sed -i 's|--bind-address=127.0.0.1|--bind-address=0.0.0.0|g' /etc/kubernetes/manifests/kube-scheduler.yaml
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/kube-scheduler.yaml | grep bind-address
    - --bind-address=0.0.0.0
    
# Add 192.168.10.100 to the etcd metrics URL (http) alongside 127.0.0.1
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# sed -i 's|--listen-metrics-urls=http://127.0.0.1:2381|--listen-metrics-urls=http://127.0.0.1:2381,http://192.168.10.100:2381|g' /etc/kubernetes/manifests/etcd.yaml
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/etcd.yaml | grep listen-metrics-urls
    - --listen-metrics-urls=http://127.0.0.1:2381,http://192.168.10.100:2381

 

6-4. Check k8s certificate locations

# Check certificates expiration for a Kubernetes cluster
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver                  Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver-etcd-client      Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 24, 2027 09:56 UTC   364d            ca                      no
controller-manager.conf    Jan 24, 2027 09:56 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-server                Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 24, 2027 09:56 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 24, 2027 09:56 UTC   364d            ca                      no
super-admin.conf           Jan 24, 2027 09:56 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 22, 2036 09:56 UTC   9y              no
etcd-ca                 Jan 22, 2036 09:56 UTC   9y              no
front-proxy-ca          Jan 22, 2036 09:56 UTC   9y              no
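The residual times above follow kubeadm's defaults: leaf certificates are issued for 365 days and the CAs for 10 × 365 days (not 10 calendar years), which is why the CA expiry lands on Jan 22 rather than Jan 24, 2036 — two leap days (2028, 2032) fall inside the window. A quick sanity check of the dates:

```python
from datetime import datetime, timedelta

# Cluster init time taken from the check-expiration output above.
issued = datetime(2026, 1, 24, 9, 56)
leaf_expiry = issued + timedelta(days=365)       # kubeadm leaf cert lifetime
ca_expiry = issued + timedelta(days=10 * 365)    # kubeadm CA lifetime

print(leaf_expiry.date())  # 2027-01-24
print(ca_expiry.date())    # 2036-01-22
```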

# Certificate/key file locations referenced by kubelet.conf above: same on the worker nodes
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /var/lib/kubelet/pki/
/var/lib/kubelet/pki/
├── kubelet-client-2026-01-24-18-56-11.pem
├── kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2026-01-24-18-56-11.pem
├── kubelet.crt
└── kubelet.key

1 directory, 4 files

 

6-5. Install the x509 certificate exporter

# Set node labels on w1/w2
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl label node k8s-w1 worker="true" --overwrite
node/k8s-w1 labeled
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl label node k8s-w2 worker="true" --overwrite
node/k8s-w2 labeled

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat << EOF > cert-export-values.yaml
# -- hostPaths Exporter
hostPathsExporter:
  hostPathVolumeType: Directory

  daemonSets:
    cp:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      watchFiles:
      - /var/lib/kubelet/pki/kubelet-client-current.pem
      - /var/lib/kubelet/pki/kubelet.crt
      - /etc/kubernetes/pki/apiserver.crt
      - /etc/kubernetes/pki/apiserver-etcd-client.crt
      - /etc/kubernetes/pki/apiserver-kubelet-client.crt
      - /etc/kubernetes/pki/ca.crt
      - /etc/kubernetes/pki/front-proxy-ca.crt
      - /etc/kubernetes/pki/front-proxy-client.crt
      - /etc/kubernetes/pki/etcd/ca.crt
      - /etc/kubernetes/pki/etcd/healthcheck-client.crt
      - /etc/kubernetes/pki/etcd/peer.crt
      - /etc/kubernetes/pki/etcd/server.crt
      watchKubeconfFiles:
      - /etc/kubernetes/admin.conf
      - /etc/kubernetes/controller-manager.conf
      - /etc/kubernetes/scheduler.conf

    nodes:
      nodeSelector:
        worker: "true"
      watchFiles:
      - /var/lib/kubelet/pki/kubelet-client-current.pem
      - /etc/kubernetes/pki/ca.crt

prometheusServiceMonitor:
  create: true
  scrapeInterval: 15s
  scrapeTimeout: 10s
EOF

# Install the helm chart
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm repo add enix https://charts.enix.io
"enix" has been added to your repositories
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm install x509-certificate-exporter enix/x509-certificate-exporter -n monitoring --values cert-export-values.yaml
NAME: x509-certificate-exporter
LAST DEPLOYED: Sat Jan 24 21:18:08 2026
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Verify the installation
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# helm list -n monitoring
NAME                     	NAMESPACE 	REVISION	UPDATED                                	STATUS  	CHART                           	APP VERSION
kube-prometheus-stack    	monitoring	1       	2026-01-24 21:05:06.688856739 +0900 KST	failed  	kube-prometheus-stack-80.13.3   	v0.87.1
x509-certificate-exporter	monitoring	1       	2026-01-24 21:18:08.623833792 +0900 KST	deployed	x509-certificate-exporter-3.19.1	3.19.1

## x509 dashboard added automatically: the grafana sidecar container detects the ConfigMap and imports it
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get cm -n monitoring x509-certificate-exporter-dashboard
NAME                                  DATA   AGE
x509-certificate-exporter-dashboard   1      41s

# Check the DaemonSets: one for cp, one for nodes
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get ds -n monitoring -l app.kubernetes.io/instance=x509-certificate-exporter
NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                            AGE
x509-certificate-exporter-cp      1         1         1       1            1           node-role.kubernetes.io/control-plane=   57s
x509-certificate-exporter-nodes   2         2         2       2            2           worker=true                              57s

# Check pod details, including IPs
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get pod -n monitoring -l app.kubernetes.io/instance=x509-certificate-exporter -owide
NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
x509-certificate-exporter-cp-l7b97      1/1     Running   0          71s   10.244.0.6   k8s-ctr   <none>           <none>
x509-certificate-exporter-nodes-pgdpf   1/1     Running   0          71s   10.244.2.6   k8s-w2    <none>           <none>
x509-certificate-exporter-nodes-rwdrb   1/1     Running   0          71s   10.244.1.7   k8s-w1    <none>           <none>

# Check the Service (ClusterIP) that the Prometheus ServiceMonitor scrapes
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get svc,ep -n monitoring x509-certificate-exporter
NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/x509-certificate-exporter   ClusterIP   10.96.189.118   <none>        9793/TCP   86s

NAME                                  ENDPOINTS                                         AGE
endpoints/x509-certificate-exporter   10.244.0.6:9793,10.244.1.7:9793,10.244.2.6:9793   86s

# Call the metrics endpoint of the x509 exporter pod on the control-plane node
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# curl -s 10.244.0.6:9793/metrics | grep '^x509' | head -n 3
x509_cert_expired{filename="apiserver-etcd-client.crt",filepath="/etc/kubernetes/pki/apiserver-etcd-client.crt",issuer_CN="etcd-ca",serial_number="6752242067453093961",subject_CN="kube-apiserver-etcd-client"} 0
x509_cert_expired{filename="apiserver.crt",filepath="/etc/kubernetes/pki/apiserver.crt",issuer_CN="kubernetes",serial_number="6316045001739227051",subject_CN="kube-apiserver"} 0
x509_cert_expired{filename="ca.crt",filepath="/etc/kubernetes/pki/ca.crt",issuer_CN="kubernetes",serial_number="2535030548539110019",subject_CN="kubernetes"} 0

# Call the metrics endpoint of the x509 exporter pod on a worker node
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# curl -s 10.244.1.7:9793/metrics | grep '^x509' | head -n 3
x509_cert_expired{filename="ca.crt",filepath="/etc/kubernetes/pki/ca.crt",issuer_CN="kubernetes",serial_number="2535030548539110019",subject_CN="kubernetes"} 0
x509_cert_expired{filename="kubelet-client-current.pem",filepath="/var/lib/kubelet/pki/kubelet-client-current.pem",issuer_CN="kubernetes",serial_number="305345650348149891832660344899669548281",subject_CN="system:node:k8s-w1",subject_O="system:nodes"} 0
x509_cert_not_after{filename="ca.crt",filepath="/etc/kubernetes/pki/ca.crt",issuer_CN="kubernetes",serial_number="2535030548539110019",subject_CN="kubernetes"} 2.084608567e+09
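The `x509_cert_not_after` value is a Unix epoch timestamp. A quick way to turn it into a human-readable date and a days-remaining figure, as a sketch assuming GNU `date`, using the CA value `2.084608567e+09` from the scrape above:

```shell
# x509_cert_not_after reports expiry as seconds since the Unix epoch.
# Convert the CA value from the scrape above (2.084608567e+09 -> 2084608567):
date -u -d @2084608567
# Days remaining until that expiry:
echo "$(( (2084608567 - $(date +%s)) / 86400 )) days left"
```

The converted date should land on Jan 22, 2036, matching the `ca` expiry that `kubeadm certs check-expiration` reports later in this post.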

# Inspect the Prometheus CR: check the serviceMonitor and rule selectors
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get prometheuses.monitoring.coreos.com -n monitoring -o yaml

# The release: kube-prometheus-stack label was added at Helm install time so the Prometheus selectors match this ServiceMonitor
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl edit servicemonitors -n monitoring x509-certificate-exporter
...
  labels:
    app.kubernetes.io/instance: x509-certificate-exporter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: x509-certificate-exporter
    app.kubernetes.io/version: 3.19.1
    helm.sh/chart: x509-certificate-exporter-3.19.1
    release: kube-prometheus-stack
...

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get prometheusrules.monitoring.coreos.com -n monitoring x509-certificate-exporter -o yaml | head -n 20

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  ...
  labels:
    app.kubernetes.io/instance: x509-certificate-exporter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: x509-certificate-exporter
    app.kubernetes.io/version: 3.19.1
    helm.sh/chart: x509-certificate-exporter-3.19.1
    release: kube-prometheus-stack
  name: x509-certificate-exporter
...

 

6-6. Verify in Prometheus

Check the targets scraped via the ServiceMonitor.

 

6-7. Verify in Grafana

 

7. Deploy a sample application

7-1. Deploy the sample application

# Deploy the sample application
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod   # must match the pods' app label above, or the anti-affinity is a no-op
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
deployment.apps/webpod created
service/webpod created

 

7-2. Verify the application & call it repeatedly

# Verify the deployment
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   0/2     2            0           24s   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/webpod   ClusterIP   10.96.156.154   <none>        80/TCP    24s   app=webpod

NAME               ENDPOINTS   AGE
endpoints/webpod   <none>      24s

# Store the webpod Service ClusterIP in a variable
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# SVCIP=$(kubectl get svc webpod -o jsonpath='{.spec.clusterIP}')
echo $SVCIP
10.96.156.154

# Verify connectivity
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# curl -s $SVCIP
Hostname: webpod-697b545f57-wb9hd
IP: 127.0.0.1
IP: ::1
IP: 10.244.1.8
IP: fe80::8c06:f6ff:fe5c:f521
RemoteAddr: 10.244.0.0:20714
GET / HTTP/1.1
Host: 10.96.156.154
User-Agent: curl/8.9.1
Accept: */*

 

 

8. Renew kubeadm certificates

8-1. Check the current certificates: creation date → expiration date

# Check certificates expiration for a Kubernetes cluster
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kc describe cm -n kube-system kubeadm-config | grep -i cert
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
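The validity periods in kubeadm-config are expressed in hours. A quick sanity-check conversion (not part of the original output):

```shell
# kubeadm-config expresses validity periods in hours:
#   certificateValidityPeriod   8760h  -> leaf certificates
#   caCertificateValidityPeriod 87600h -> CA certificates
echo "leaf: $(( 8760 / 24 )) days (1 year)"
echo "CA:   $(( 87600 / 24 / 365 )) years"
```

This is why the leaf certificates below expire after one year while the CAs run for ten.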


# The current certificates were created at 09:56 UTC on Jan 24, so with the 365-day (1-year) validity period they expire on Jan 24, 2027 at 09:56 UTC.
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubeadm certs check-expiration -v 6
I0124 21:28:27.266056  117213 loader.go:402] Config loaded from file:  /etc/kubernetes/admin.conf
I0124 21:28:27.266668  117213 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0124 21:28:27.266688  117213 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0124 21:28:27.266693  117213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0124 21:28:27.266697  117213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
I0124 21:28:27.274926  117213 round_trippers.go:560] GET https://192.168.10.100:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 7 milliseconds
I0124 21:28:27.275676  117213 kubeproxy.go:55] attempting to download the KubeProxyConfiguration from ConfigMap "kube-proxy"
I0124 21:28:27.277450  117213 round_trippers.go:560] GET https://192.168.10.100:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 1 milliseconds
I0124 21:28:27.278787  117213 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0124 21:28:27.280968  117213 round_trippers.go:560] GET https://192.168.10.100:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 1 milliseconds
I0124 21:28:27.282732  117213 loader.go:402] Config loaded from file:  /etc/kubernetes/kubelet.conf
I0124 21:28:27.284115  117213 cert_rotation.go:140] Starting client certificate rotation controller
I0124 21:28:27.291268  117213 round_trippers.go:560] POST https://192.168.10.100:6443/apis/authentication.k8s.io/v1/selfsubjectreviews?timeout=10s 201 Created in 7 milliseconds
I0124 21:28:27.293635  117213 round_trippers.go:560] GET https://192.168.10.100:6443/api/v1/nodes/k8s-ctr?timeout=10s 200 OK in 2 milliseconds
I0124 21:28:27.301805  117213 round_trippers.go:560] GET https://192.168.10.100:6443/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dk8s-ctr&labelSelector=component%3Dkube-apiserver%2Ctier%3Dcontrol-plane 200 OK in 6 milliseconds

I0124 21:28:27.303023  117213 certs.go:360] Overriding the cluster certificate directory with the value from command line flag --cert-dir: /etc/kubernetes/pki
I0124 21:28:27.303375  117213 certs.go:473] validating certificate period for CA certificate
I0124 21:28:27.303874  117213 loader.go:402] Config loaded from file:  /etc/kubernetes/admin.conf
I0124 21:28:27.304200  117213 certs.go:473] validating certificate period for etcd CA certificate
I0124 21:28:27.304793  117213 loader.go:402] Config loaded from file:  /etc/kubernetes/controller-manager.conf
I0124 21:28:27.305372  117213 certs.go:473] validating certificate period for front-proxy CA certificate
I0124 21:28:27.306284  117213 loader.go:402] Config loaded from file:  /etc/kubernetes/scheduler.conf
I0124 21:28:27.307746  117213 loader.go:402] Config loaded from file:  /etc/kubernetes/super-admin.conf
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver                  Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver-etcd-client      Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 24, 2027 09:56 UTC   364d            ca                      no
controller-manager.conf    Jan 24, 2027 09:56 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-server                Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 24, 2027 09:56 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 24, 2027 09:56 UTC   364d            ca                      no
super-admin.conf           Jan 24, 2027 09:56 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 22, 2036 09:56 UTC   9y              no
etcd-ca                 Jan 22, 2036 09:56 UTC   9y              no
front-proxy-ca          Jan 22, 2036 09:56 UTC   9y              no
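`kubeadm certs check-expiration` is essentially reading the `notAfter` field of each PEM file, so the same check can be done with plain `openssl` — handy for monitoring scripts. A local sketch using a throwaway self-signed cert (`/tmp/demo.crt` is a placeholder, not the real kubeadm PKI):

```shell
# Throwaway self-signed cert with the same 365-day validity as the kubeadm leaf certs
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 365 2>/dev/null
# Print the expiry date, as check-expiration does
openssl x509 -in /tmp/demo.crt -noout -enddate
# -checkend N exits 0 if the cert is still valid N seconds from now;
# useful as a cron guard, e.g. warn when fewer than 30 days remain
openssl x509 -in /tmp/demo.crt -noout -checkend $(( 30 * 86400 )) \
  && echo "more than 30 days left" || echo "renew soon"
```

On a real control plane the same two `openssl x509` invocations can be pointed at `/etc/kubernetes/pki/apiserver.crt` and friends.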

 

 

8-2. Run a manual certificate renewal

# Repeatedly call the sample application (in a new terminal)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# SVCIP=$(kubectl get svc webpod -o jsonpath='{.spec.clusterIP}')
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# while true; do curl -s $SVCIP | grep Hostname; sleep 1; done
Hostname: webpod-697b545f57-wb9hd
Hostname: webpod-697b545f57-wb9hd
Hostname: webpod-697b545f57-8ld9r

# Back up first: on every control plane node in an HA setup
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cp -r /etc/kubernetes/pki /etc/kubernetes/pki.backup.$(date +%F)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -l /etc/kubernetes/pki.backup.$(date +%F)
total 56
-rw-r--r--. 1 root root 1281 Jan 24 21:31 apiserver.crt
-rw-r--r--. 1 root root 1123 Jan 24 21:31 apiserver-etcd-client.crt
-rw-------. 1 root root 1675 Jan 24 21:31 apiserver-etcd-client.key
-rw-------. 1 root root 1675 Jan 24 21:31 apiserver.key
-rw-r--r--. 1 root root 1176 Jan 24 21:31 apiserver-kubelet-client.crt
-rw-------. 1 root root 1675 Jan 24 21:31 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1107 Jan 24 21:31 ca.crt
-rw-------. 1 root root 1675 Jan 24 21:31 ca.key
drwxr-xr-x. 2 root root  162 Jan 24 21:31 etcd
-rw-r--r--. 1 root root 1123 Jan 24 21:31 front-proxy-ca.crt
-rw-------. 1 root root 1675 Jan 24 21:31 front-proxy-ca.key
-rw-r--r--. 1 root root 1119 Jan 24 21:31 front-proxy-client.crt
-rw-------. 1 root root 1675 Jan 24 21:31 front-proxy-client.key
-rw-------. 1 root root 1679 Jan 24 21:31 sa.key
-rw-------. 1 root root  451 Jan 24 21:31 sa.pub
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~#
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# mkdir /etc/kubernetes/backup-conf.$(date +%F)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cp /etc/kubernetes/*.conf /etc/kubernetes/backup-conf.$(date +%F)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -l /etc/kubernetes/backup-conf.$(date +%F)
total 36
-rw-------. 1 root root 5658 Jan 24 21:31 admin.conf
-rw-------. 1 root root 5678 Jan 24 21:31 controller-manager.conf
-rw-------. 1 root root 1974 Jan 24 21:31 kubelet.conf
-rw-------. 1 root root 5626 Jan 24 21:31 scheduler.conf
-rw-------. 1 root root 5682 Jan 24 21:31 super-admin.conf

# Check certificate expiration status
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver                  Jan 24, 2027 09:56 UTC   364d            ca                      no
apiserver-etcd-client      Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 24, 2027 09:56 UTC   364d            ca                      no
controller-manager.conf    Jan 24, 2027 09:56 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
etcd-server                Jan 24, 2027 09:56 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 24, 2027 09:56 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 24, 2027 09:56 UTC   364d            ca                      no
super-admin.conf           Jan 24, 2027 09:56 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 22, 2036 09:56 UTC   9y              no
etcd-ca                 Jan 22, 2036 09:56 UTC   9y              no
front-proxy-ca          Jan 22, 2036 09:56 UTC   9y              no

# Renew all certificates: the existing certs are replaced with new ones signed by the same CA
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubeadm certs renew all
[renew] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[renew] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
certificate embedded in the kubeconfig file for the super-admin renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.


# Check certificate expiration status again
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 24, 2027 12:31 UTC   364d            ca                      no
apiserver                  Jan 24, 2027 12:31 UTC   364d            ca                      no
apiserver-etcd-client      Jan 24, 2027 12:31 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 24, 2027 12:31 UTC   364d            ca                      no
controller-manager.conf    Jan 24, 2027 12:31 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 24, 2027 12:31 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 24, 2027 12:31 UTC   364d            etcd-ca                 no
etcd-server                Jan 24, 2027 12:31 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 24, 2027 12:31 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 24, 2027 12:31 UTC   364d            ca                      no
super-admin.conf           Jan 24, 2027 12:31 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 22, 2036 09:56 UTC   9y              no
etcd-ca                 Jan 22, 2036 09:56 UTC   9y              no
front-proxy-ca          Jan 22, 2036 09:56 UTC   9y              no

# CA certificates are untouched; all other certificates are newly generated
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -lt /etc/kubernetes/pki/
total 56
-rw-r--r--. 1 root root 1119 Jan 24 21:31 front-proxy-client.crt
-rw-------. 1 root root 1679 Jan 24 21:31 front-proxy-client.key
-rw-r--r--. 1 root root 1176 Jan 24 21:31 apiserver-kubelet-client.crt
-rw-------. 1 root root 1675 Jan 24 21:31 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1123 Jan 24 21:31 apiserver-etcd-client.crt
-rw-------. 1 root root 1675 Jan 24 21:31 apiserver-etcd-client.key
-rw-r--r--. 1 root root 1281 Jan 24 21:31 apiserver.crt
-rw-------. 1 root root 1679 Jan 24 21:31 apiserver.key
-rw-------. 1 root root 1679 Jan 24 18:56 sa.key
-rw-------. 1 root root  451 Jan 24 18:56 sa.pub
drwxr-xr-x. 2 root root  162 Jan 24 18:56 etcd
-rw-r--r--. 1 root root 1123 Jan 24 18:56 front-proxy-ca.crt
-rw-------. 1 root root 1675 Jan 24 18:56 front-proxy-ca.key
-rw-r--r--. 1 root root 1107 Jan 24 18:56 ca.crt
-rw-------. 1 root root 1675 Jan 24 18:56 ca.key

(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -lt /etc/kubernetes/pki/etcd
total 32
-rw-r--r--. 1 root root 1196 Jan 24 21:31 server.crt
-rw-------. 1 root root 1679 Jan 24 21:31 server.key
-rw-r--r--. 1 root root 1196 Jan 24 21:31 peer.crt
-rw-------. 1 root root 1679 Jan 24 21:31 peer.key
-rw-r--r--. 1 root root 1123 Jan 24 21:31 healthcheck-client.crt
-rw-------. 1 root root 1675 Jan 24 21:31 healthcheck-client.key
-rw-r--r--. 1 root root 1094 Jan 24 18:56 ca.crt
-rw-------. 1 root root 1675 Jan 24 18:56 ca.key


# apiserver certificate
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 7287864220267195902 (0x6523b1e5444e2dfe)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jan 24 12:26:46 2026 GMT
            Not After : Jan 24 12:31:46 2027 GMT
            
# Confirm the control components' kubeconfig files were regenerated
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -lt /etc/kubernetes/*.conf
-rw-------. 1 root root 5678 Jan 24 21:31 /etc/kubernetes/super-admin.conf
-rw-------. 1 root root 5626 Jan 24 21:31 /etc/kubernetes/scheduler.conf
-rw-------. 1 root root 5678 Jan 24 21:31 /etc/kubernetes/controller-manager.conf
-rw-------. 1 root root 5654 Jan 24 21:31 /etc/kubernetes/admin.conf
-rw-------. 1 root root 1974 Jan 24 18:56 /etc/kubernetes/kubelet.conf

 

8-3. Restart the control-plane static pods & re-apply the admin.conf kubeconfig

# Back up first: the static pod manifests
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cp -r /etc/kubernetes/manifests /etc/kubernetes/manifests.backup.$(date +%F)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# ls -l /etc/kubernetes/manifests.backup.$(date +%F)
total 16
-rw-------. 1 root root 2576 Jan 24 21:34 etcd.yaml
-rw-------. 1 root root 3603 Jan 24 21:34 kube-apiserver.yaml
-rw-------. 1 root root 3102 Jan 24 21:34 kube-controller-manager.yaml
-rw-------. 1 root root 1655 Jan 24 21:34 kube-scheduler.yaml

# Monitor the static pods (in a new terminal)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# watch -d crictl ps

# Delete the static pod manifests
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# rm -rf /etc/kubernetes/manifests/*.yaml

# Copy the manifests back -> the pods restart
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# cp /etc/kubernetes/manifests.backup.$(date +%F)/*.yaml /etc/kubernetes/manifests
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# tree /etc/kubernetes/manifests
/etc/kubernetes/manifests
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
└── kube-scheduler.yaml

1 directory, 4 files

# Confirm the pods are up: since the CA was not rotated, the old certificates are still trusted >> just don't lose track of any un-renewed certificates' expiry dates — renew them all together!
Every 2.0s: crictl ps                                                                                                                                      k8s-ctr: Sat Jan 24 21:37:03 2026

CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
               NAMESPACE
5089984fc24b4       cfa17ff3d6634	22 seconds ago      Running             kube-scheduler              0                   949a0d03486b9       kube-scheduler-k8s-ctr
               kube-system
05f61bb01a7a6       1211402d28f58	22 seconds ago      Running             etcd                        0                   318a29e5cd40b       etcd-k8s-ctr
               kube-system
e510345a46907       82766e5f2d560	22 seconds ago      Running             kube-controller-manager     0                   e1b0c9dbe8b2b       kube-controller-manager-k8s-ctr
               kube-system
7d42851059591       58951ea1a0b5d	22 seconds ago      Running             kube-apiserver              0                   6ce0a9479754e       kube-apiserver-k8s-ctr
               kube-system
81913ea92e88d       11873b3fefc46	18 minutes ago      Running             x509-certificate-exporter   0                   8b832b28f605e       x509-certificate-exporter-cp-l7b97
               monitoring
               

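The components keep trusting each other across the renewal because only the leaf certificates changed while the CA key stayed the same. A local sketch with throwaway files (the names are placeholders, not the real kubeadm PKI) showing that two leaves issued by the same CA both verify:

```shell
cd "$(mktemp -d)"
# One long-lived CA, standing in for /etc/kubernetes/pki/ca.{crt,key}
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout ca.key -out ca.crt -days 3650 2>/dev/null
# Issue a leaf, then "renew" it by signing a fresh cert from the same CSR
openssl req -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
  -keyout leaf.key -out leaf.csr 2>/dev/null
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out leaf-old.crt -days 365 2>/dev/null
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out leaf-new.crt -days 365 2>/dev/null
# Both the old and the renewed leaf chain to the unchanged CA
openssl verify -CAfile ca.crt leaf-old.crt leaf-new.crt
```

This is also why rotating the CA itself (`kubeadm certs` does not do it; see the manual CA rotation docs) is a far more invasive operation than `kubeadm certs renew all`.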
# Re-apply the admin.conf kubeconfig (its embedded client certificate was renewed)
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# yes | cp  /etc/kubernetes/admin.conf ~/.kube/config ; echo
(⎈|kubernetes-admin@kubernetes:N/A) root@k8s-ctr:~# chown $(id -u):$(id -g) ~/.kube/config
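admin.conf carries the renewed client certificate base64-encoded in its `client-certificate-data` field, which is why the copy into `~/.kube/config` must be refreshed. On a real control plane you could inspect the embedded cert with something like `grep client-certificate-data /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate`; below is the same decode pipeline as a self-contained sketch (throwaway cert, GNU `base64` assumed):

```shell
# Throwaway cert standing in for the kubeconfig's embedded client certificate
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes-admin" \
  -keyout /tmp/admin.key -out /tmp/admin.crt -days 365 2>/dev/null
# A kubeconfig stores the PEM base64-encoded, like client-certificate-data:
b64=$(base64 -w0 /tmp/admin.crt)
# Decode it and read the subject and expiry, as you would for admin.conf
echo "$b64" | base64 -d | openssl x509 -noout -subject -enddate
```

If the expiry printed here still showed the old date after a renewal, the kubeconfig copy step above was missed.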

 
