Create a Kubernetes cluster
Hardware requirements
Three VMs, each with at least 2 GB of RAM, 20 GB of disk space, and 2 CPUs.
I will use the Ubuntu operating system.
Prerequisites
Update package index.
$ sudo apt update
Upgrade installed packages.
$ sudo apt upgrade
Ensure that the hostname is set.
$ sudo hostnamectl --static set-hostname kubernetes-1
Inspect the swap configuration.
$ swapon --summary
Filename    Type    Size        Used    Priority
/swap.img   file    1912828     0       -2
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-nVMhi4rUSehHlUHrizzBzjngFfFNNEyMjHko0eI2JM3Ekxpox3vdwlIJUfntyCxn / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/90771c59-faae-4d65-99e6-43becaa24c21 /boot ext4 defaults 0 1
/swap.img       none    swap    sw      0       0
Disable swap.
$ sudo swapoff /swap.img
$ sudo sed -i -e "/^\/swap.img/ s/./#&/" /etc/fstab
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-nVMhi4rUSehHlUHrizzBzjngFfFNNEyMjHko0eI2JM3Ekxpox3vdwlIJUfntyCxn / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/90771c59-faae-4d65-99e6-43becaa24c21 /boot ext4 defaults 0 1
#/swap.img      none    swap    sw      0       0
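Optionally, confirm the change took effect. A minimal check, assuming the host has no other swap devices, simply extracts the swap total from free:

```shell
# Print the total swap size; with swap fully disabled it should read 0B.
free -h | awk '/^Swap:/ {print $2}'
```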
Load the kernel modules used by containerd and Calico networking.
$ sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
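Before moving on, it is worth verifying that both modules are actually loaded; a quick check against the loaded-module list:

```shell
# Both overlay and br_netfilter should appear in the lsmod output.
lsmod | grep -E '^(overlay|br_netfilter)'
```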
Update the sysctl settings required by Calico networking.
$ sudo tee -a /etc/sysctl.d/10-kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-kubernetes.conf ...
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
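As a sanity check, you can re-read the file written above and confirm both keys carry the value 1; a small sketch assuming the file path used earlier:

```shell
# Count the settings in 10-kubernetes.conf whose value is 1; it should print 2.
awk -F' *= *' '$2 == 1 {n++} END {print n}' /etc/sysctl.d/10-kubernetes.conf
```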
Install containerd and runc
Download a recent containerd release.
$ wget https://github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz
Inspect archive contents.
$ tar tvfz containerd-1.7.3-linux-amd64.tar.gz
drwxr-xr-x root/root         0 2023-07-27 22:59 bin/
-rwxr-xr-x root/root  12058624 2023-07-27 22:59 bin/containerd-shim-runc-v2
-rwxr-xr-x root/root  25629208 2023-07-27 22:59 bin/containerd-stress
-rwxr-xr-x root/root   8302592 2023-07-27 22:59 bin/containerd-shim-runc-v1
-rwxr-xr-x root/root   6594560 2023-07-27 22:59 bin/containerd-shim
-rwxr-xr-x root/root  54761376 2023-07-27 22:59 bin/containerd
-rwxr-xr-x root/root  27769848 2023-07-27 22:59 bin/ctr
Extract binary files.
$ sudo tar xvfz containerd-1.7.3-linux-amd64.tar.gz -C /usr/local/
bin/
bin/containerd-shim-runc-v2
bin/containerd-stress
bin/containerd-shim-runc-v1
bin/containerd-shim
bin/containerd
bin/ctr
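Since the archive was extracted to /usr/local, the binaries land in /usr/local/bin, which is already on the PATH. A quick version check confirms the expected release is in place (the exact commit hash will differ per build):

```shell
# Print only the version field of `containerd --version`;
# it should match the release downloaded above (v1.7.3 here).
containerd --version | awk '{print $3}'
```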
Download CNI plugins.
$ wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
Inspect archive contents.
$ tar tvfz cni-plugins-linux-amd64-v1.3.0.tgz
drwxrwxr-x root/root         0 2023-05-09 19:53 ./
-rwxr-xr-x root/root   3514598 2023-05-09 19:53 ./loopback
-rwxr-xr-x root/root   4016001 2023-05-09 19:53 ./bandwidth
-rwxr-xr-x root/root   4348835 2023-05-09 19:53 ./ptp
-rwxr-xr-x root/root   4187498 2023-05-09 19:53 ./vlan
-rwxr-xr-x root/root   4059321 2023-05-09 19:53 ./host-device
-rwxr-xr-x root/root   3603365 2023-05-09 19:53 ./tuning
-rwxr-xr-x root/root   3754911 2023-05-09 19:53 ./vrf
-rwxr-xr-x root/root   3716095 2023-05-09 19:53 ./sbr
-rwxr-xr-x root/root   4258344 2023-05-09 19:53 ./tap
-rwxr-xr-x root/root  10816051 2023-05-09 19:53 ./dhcp
-rwxr-xr-x root/root   2984504 2023-05-09 19:53 ./static
-rwxr-xr-x root/root   4649749 2023-05-09 19:53 ./firewall
-rwxr-xr-x root/root   4227193 2023-05-09 19:53 ./macvlan
-rwxr-xr-x root/root   4171248 2023-05-09 19:53 ./dummy
-rwxr-xr-x root/root   4531309 2023-05-09 19:53 ./bridge
-rwxr-xr-x root/root   4193323 2023-05-09 19:53 ./ipvlan
-rwxr-xr-x root/root   3955775 2023-05-09 19:53 ./portmap
-rwxr-xr-x root/root   3444776 2023-05-09 19:53 ./host-local
Extract CNI plugins.
$ sudo mkdir -p /opt/cni/bin
$ sudo tar xvfz cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./tap
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local
Download runc.
$ wget https://github.com/opencontainers/runc/releases/download/v1.1.8/runc.amd64
Ensure that it is available in PATH.
$ sudo install -m 755 runc.amd64 /usr/local/sbin/runc
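A version check confirms runc is installed and executable (/usr/local/sbin is on root's PATH; the printed version should match the downloaded release):

```shell
# The first line of the version output confirms runc is installed.
runc --version | head -n 1
```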
Create containerd configuration.
$ sudo mkdir -p /etc/containerd
$ sudo tee /etc/containerd/config.toml <<EOF
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      discard_unpacked_layers = true
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
EOF
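The minimal file above only overrides what kubeadm cares about. If you want to see every available option, containerd can print its full default configuration; here I only grep for the cgroup-driver knob, which defaults to false and is why we set it to true above:

```shell
# Dump the built-in defaults and locate the cgroup-driver setting.
containerd config default | grep SystemdCgroup
```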
Download systemd service file.
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Inspect containerd service definition.
$ cat containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
Install service.
$ sudo install -m 644 containerd.service /etc/systemd/system/
Reload systemd configuration.
$ sudo systemctl daemon-reload
Enable the containerd service.
$ sudo systemctl enable --now containerd
Inspect service status.
$ sudo systemctl status containerd.service
● containerd.service - containerd container runtime
     Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-07-30 14:08:33 UTC; 12s ago
       Docs: https://containerd.io
    Process: 1179 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1180 (containerd)
      Tasks: 7
     Memory: 13.6M
        CPU: 112ms
     CGroup: /system.slice/containerd.service
             └─1180 /usr/local/bin/containerd

Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.824829709Z" level=info msg="Start subscribing containerd event"
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825284981Z" level=info msg="Start recovering state"
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825411651Z" level=info msg="Start event monitor"
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825437058Z" level=info msg="Start snapshots syncer"
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825446656Z" level=info msg="Start cni network conf syncer for default"
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825501588Z" level=info msg="Start streaming server"
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825711079Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.825835770Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 30 14:08:33 kubernetes-1 systemd[1]: Started containerd container runtime.
Jul 30 14:08:33 kubernetes-1 containerd[1180]: time="2023-07-30T14:08:33.827360148Z" level=info msg="containerd successfully booted in 0.050140s"
Play with the containerd CLI.
$ sudo ctr image pull docker.io/library/hello-world:latest
docker.io/library/hello-world:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:926fac19d22aa2d60f1a276b66a20eb765fbeea2db5dbdaafeb456ad8ce81598: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:7e9b6e7ba2842c91cf49f3e214d04a7a496f8214356f41d81a6e6dcad11f11e3: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:9c7a54a9a43cca047013b82af109fe963fde787f63f9e016fdc3384500c2823d: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:719385e32844401d57ecfd3eacab360bf551a1491c05b85806ed8f1b08d792f6: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 3.2 s total: 3.0 Ki (964.0 B/s)
unpacking linux/amd64 sha256:926fac19d22aa2d60f1a276b66a20eb765fbeea2db5dbdaafeb456ad8ce81598...
done: 57.563771ms
$ sudo ctr run --rm docker.io/library/hello-world:latest ctr-test
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
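When you are done experimenting, you can list what was pulled and remove the test image so the node starts clean:

```shell
# List pulled image references, then remove the test image.
sudo ctr image ls -q
sudo ctr image rm docker.io/library/hello-world:latest
```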
Install kubelet, kubeadm, and kubectl utilities
Install the software required to use the Kubernetes package repository.
$ sudo apt install apt-transport-https curl
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Define package repository.
$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update package index.
$ sudo apt-get update
Install required utilities.
$ sudo apt-get install kubelet kubeadm kubectl
Hold these package versions.
$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
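You can confirm the holds at any time; all three package names should be listed:

```shell
# List held packages in sorted order; expect kubeadm, kubectl, kubelet.
apt-mark showhold | sort
```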
Set up the Kubernetes control plane
Pull images beforehand.
$ sudo kubeadm config images pull
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.4
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.4
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.4
[config/images] Pulled registry.k8s.io/kube-proxy:v1.27.4
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.7-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1
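If you only want to see which images kubeadm expects without pulling anything, it can print the list (the versions shown will depend on the installed kubeadm release):

```shell
# Print the control-plane image list for the installed kubeadm version.
sudo kubeadm config images list
```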
Perform a dry run.
$ sudo kubeadm init --pod-network-cidr=10.11.0.0/16 --dry-run
[init] Using Kubernetes version: v1.27.4
[preflight] Running pre-flight checks
[preflight] Would pull the required images (like 'kubeadm config images pull')
[certs] Using certificateDir folder "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.192]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-1 localhost] and IPs [192.168.8.192 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-1 localhost] and IPs [192.168.8.192 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406/config.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Would ensure that "/var/lib/etcd" directory is present [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406" [dryrun] Wrote certificates, kubeconfig files and control plane manifests to the "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406" directory [dryrun] The certificates or kubeconfig files would not be printed due to their sensitive nature [dryrun] Please examine the "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406" directory for details about what would be written [dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content: apiVersion: v1 kind: Pod metadata: annotations: kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.8.192:6443 creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --advertise-address=192.168.8.192 - --allow-privileged=true - --authorization-mode=Node,RBAC - --client-ca-file=/etc/kubernetes/pki/ca.crt - --enable-admission-plugins=NodeRestriction - --enable-bootstrap-token-auth=true - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key - --etcd-servers=https://127.0.0.1:2379 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --requestheader-allowed-names=front-proxy-client - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - 
--requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --secure-port=6443 - --service-account-issuer=https://kubernetes.default.svc.cluster.local - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key - --service-cluster-ip-range=10.96.0.0/12 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key image: registry.k8s.io/kube-apiserver:v1.27.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 192.168.8.192 path: /livez port: 6443 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 name: kube-apiserver readinessProbe: failureThreshold: 3 httpGet: host: 192.168.8.192 path: /readyz port: 6443 scheme: HTTPS periodSeconds: 1 timeoutSeconds: 15 resources: requests: cpu: 250m startupProbe: failureThreshold: 24 httpGet: host: 192.168.8.192 path: /livez port: 6443 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 volumeMounts: - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/ca-certificates name: etc-ca-certificates readOnly: true - mountPath: /etc/pki name: etc-pki readOnly: true - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /usr/local/share/ca-certificates name: usr-local-share-ca-certificates readOnly: true - mountPath: /usr/share/ca-certificates name: usr-share-ca-certificates readOnly: true hostNetwork: true priority: 2000001000 priorityClassName: system-node-critical securityContext: seccompProfile: type: RuntimeDefault volumes: - hostPath: path: /etc/ssl/certs type: DirectoryOrCreate name: ca-certs - hostPath: path: /etc/ca-certificates type: DirectoryOrCreate name: etc-ca-certificates - hostPath: path: /etc/pki type: DirectoryOrCreate name: etc-pki - hostPath: path: /etc/kubernetes/pki type: DirectoryOrCreate name: 
k8s-certs - hostPath: path: /usr/local/share/ca-certificates type: DirectoryOrCreate name: usr-local-share-ca-certificates - hostPath: path: /usr/share/ca-certificates type: DirectoryOrCreate name: usr-share-ca-certificates status: {} [dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content: apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: component: kube-controller-manager tier: control-plane name: kube-controller-manager namespace: kube-system spec: containers: - command: - kube-controller-manager - --allocate-node-cidrs=true - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf - --bind-address=127.0.0.1 - --client-ca-file=/etc/kubernetes/pki/ca.crt - --cluster-cidr=10.11.0.0/16 - --cluster-name=kubernetes - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key - --controllers=*,bootstrapsigner,tokencleaner - --kubeconfig=/etc/kubernetes/controller-manager.conf - --leader-elect=true - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --root-ca-file=/etc/kubernetes/pki/ca.crt - --service-account-private-key-file=/etc/kubernetes/pki/sa.key - --service-cluster-ip-range=10.96.0.0/12 - --use-service-account-credentials=true image: registry.k8s.io/kube-controller-manager:v1.27.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 10257 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 name: kube-controller-manager resources: requests: cpu: 200m startupProbe: failureThreshold: 24 httpGet: host: 127.0.0.1 path: /healthz port: 10257 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 volumeMounts: - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/ca-certificates name: etc-ca-certificates readOnly: true - mountPath: /etc/pki 
name: etc-pki readOnly: true - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec name: flexvolume-dir - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /etc/kubernetes/controller-manager.conf name: kubeconfig readOnly: true - mountPath: /usr/local/share/ca-certificates name: usr-local-share-ca-certificates readOnly: true - mountPath: /usr/share/ca-certificates name: usr-share-ca-certificates readOnly: true hostNetwork: true priority: 2000001000 priorityClassName: system-node-critical securityContext: seccompProfile: type: RuntimeDefault volumes: - hostPath: path: /etc/ssl/certs type: DirectoryOrCreate name: ca-certs - hostPath: path: /etc/ca-certificates type: DirectoryOrCreate name: etc-ca-certificates - hostPath: path: /etc/pki type: DirectoryOrCreate name: etc-pki - hostPath: path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec type: DirectoryOrCreate name: flexvolume-dir - hostPath: path: /etc/kubernetes/pki type: DirectoryOrCreate name: k8s-certs - hostPath: path: /etc/kubernetes/controller-manager.conf type: FileOrCreate name: kubeconfig - hostPath: path: /usr/local/share/ca-certificates type: DirectoryOrCreate name: usr-local-share-ca-certificates - hostPath: path: /usr/share/ca-certificates type: DirectoryOrCreate name: usr-share-ca-certificates status: {} [dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content: apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: component: kube-scheduler tier: control-plane name: kube-scheduler namespace: kube-system spec: containers: - command: - kube-scheduler - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf - --bind-address=127.0.0.1 - --kubeconfig=/etc/kubernetes/scheduler.conf - --leader-elect=true image: registry.k8s.io/kube-scheduler:v1.27.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 10259 
scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 name: kube-scheduler resources: requests: cpu: 100m startupProbe: failureThreshold: 24 httpGet: host: 127.0.0.1 path: /healthz port: 10259 scheme: HTTPS initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 15 volumeMounts: - mountPath: /etc/kubernetes/scheduler.conf name: kubeconfig readOnly: true hostNetwork: true priority: 2000001000 priorityClassName: system-node-critical securityContext: seccompProfile: type: RuntimeDefault volumes: - hostPath: path: /etc/kubernetes/scheduler.conf type: FileOrCreate name: kubeconfig status: {} [dryrun] Would write file "/var/lib/kubelet/config.yaml" with content: apiVersion: kubelet.config.k8s.io/v1beta1 authentication: anonymous: enabled: false webhook: cacheTTL: 0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.crt authorization: mode: Webhook webhook: cacheAuthorizedTTL: 0s cacheUnauthorizedTTL: 0s cgroupDriver: systemd clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerRuntimeEndpoint: "" cpuManagerReconcilePeriod: 0s evictionPressureTransitionPeriod: 0s fileCheckFrequency: 0s healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 0s imageMinimumGCAge: 0s kind: KubeletConfiguration logging: flushFrequency: 0 options: json: infoBufferSize: "0" verbosity: 0 memorySwap: {} nodeStatusReportFrequency: 0s nodeStatusUpdateFrequency: 0s resolvConf: /run/systemd/resolve/resolv.conf rotateCertificates: true runtimeRequestTimeout: 0s shutdownGracePeriod: 0s shutdownGracePeriodCriticalPods: 0s staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 0s syncFrequency: 0s volumeStatsAggPeriod: 0s [dryrun] Would write file "/var/lib/kubelet/kubeadm-flags.env" with content: KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods 
from directory "/etc/kubernetes/tmp/kubeadm-init-dryrun3858364406". This can take up to 4m0s [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: ClusterConfiguration: | apiServer: extraArgs: authorization-mode: Node,RBAC timeoutForControlPlane: 4m0s apiVersion: kubeadm.k8s.io/v1beta3 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controllerManager: {} dns: {} etcd: local: dataDir: /var/lib/etcd imageRepository: registry.k8s.io kind: ClusterConfiguration kubernetesVersion: v1.27.4 networking: dnsDomain: cluster.local podSubnet: 10.11.0.0/16 serviceSubnet: 10.96.0.0/12 scheduler: {} kind: ConfigMap metadata: creationTimestamp: null name: kubeadm-config namespace: kube-system [dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: kubeadm:nodes-kubeadm-config namespace: kube-system rules: - apiGroups: - "" resourceNames: - kubeadm-config resources: - configmaps verbs: - get [dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: creationTimestamp: null name: kubeadm:nodes-kubeadm-config namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubeadm:nodes-kubeadm-config subjects: - kind: Group name: system:bootstrappers:kubeadm:default-node-token - kind: Group name: system:nodes [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: 
kubelet: | apiVersion: kubelet.config.k8s.io/v1beta1 authentication: anonymous: enabled: false webhook: cacheTTL: 0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.crt authorization: mode: Webhook webhook: cacheAuthorizedTTL: 0s cacheUnauthorizedTTL: 0s cgroupDriver: systemd clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerRuntimeEndpoint: "" cpuManagerReconcilePeriod: 0s evictionPressureTransitionPeriod: 0s fileCheckFrequency: 0s healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 0s imageMinimumGCAge: 0s kind: KubeletConfiguration logging: flushFrequency: 0 options: json: infoBufferSize: "0" verbosity: 0 memorySwap: {} nodeStatusReportFrequency: 0s nodeStatusUpdateFrequency: 0s resolvConf: /run/systemd/resolve/resolv.conf rotateCertificates: true runtimeRequestTimeout: 0s shutdownGracePeriod: 0s shutdownGracePeriodCriticalPods: 0s staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 0s syncFrequency: 0s volumeStatsAggPeriod: 0s kind: ConfigMap metadata: annotations: kubeadm.kubernetes.io/component-config.hash: sha256:ff76c96ce6a025e279138fef234cfd93e648e9fdd0e482723f43376929e1784c creationTimestamp: null name: kubelet-config namespace: kube-system [dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: kubeadm:kubelet-config namespace: kube-system rules: - apiGroups: - "" resourceNames: - kubelet-config resources: - configmaps verbs: - get [dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: creationTimestamp: null name: kubeadm:kubelet-config namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubeadm:kubelet-config subjects: - kind: Group name: system:nodes - 
kind: Group name: system:bootstrappers:kubeadm:default-node-token [dryrun] Would perform action GET on resource "nodes" in API group "core/v1" [dryrun] Resource name: "kubernetes-1" [dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1" [dryrun] Resource name: "kubernetes-1" [dryrun] Attached patch: {"metadata":{"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/containerd/containerd.sock"}}} [upload-certs] Skipping phase. Please see --upload-certs [mark-control-plane] Marking the node kubernetes-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] [mark-control-plane] Marking the node kubernetes-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule] [dryrun] Would perform action GET on resource "nodes" in API group "core/v1" [dryrun] Resource name: "kubernetes-1" [dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1" [dryrun] Resource name: "kubernetes-1" [dryrun] Attached patch: {"metadata":{"labels":{"node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]}} [bootstrap-token] Using token: 8tt88w.6a0h6pv1bddnckdv [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [dryrun] Would perform action GET on resource "secrets" in API group "core/v1" [dryrun] Resource name: "bootstrap-token-8tt88w" [dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4= expiration: MjAyMy0wOC0wNlQxMzo0NzozMFo= token-id: OHR0ODh3 token-secret: NmEwaDZwdjFiZGRuY2tkdg== 
usage-bootstrap-authentication: dHJ1ZQ== usage-bootstrap-signing: dHJ1ZQ== kind: Secret metadata: creationTimestamp: null name: bootstrap-token-8tt88w namespace: kube-system type: bootstrap.kubernetes.io/token [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes [dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: creationTimestamp: null name: kubeadm:get-nodes namespace: kube-system rules: - apiGroups: - "" resources: - nodes verbs: - get [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:get-nodes namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kubeadm:get-nodes subjects: - kind: Group name: system:bootstrappers:kubeadm:default-node-token [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:kubelet-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrapper subjects: - kind: Group name: system:bootstrappers:kubeadm:default-node-token [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: 
ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:node-autoapprove-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclient subjects: - kind: Group name: system:bootstrappers:kubeadm:default-node-token [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:node-autoapprove-certificate-rotation roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient subjects: - kind: Group name: system:nodes [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: kubeconfig: | apiVersion: v1 clusters: - cluster: certificate-authority-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJWkVCK1ZxR1ZveGt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1EVXhNelEzTWpWYUZ3MHpNekE0TURJeE16UTNNalZhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNnNVFueWhERTZnMVpYVEU5M2FkSHgwdEN2bC90TVd5NTVFUVM1ZTVkZlN2MU9BcXVCSFhBdlA3eUEKaXdseTAwNmY1N0E4c0o1ZnltR0oyUGxuUmRTRzFlTGFqQTZkWDhGMExBRHE4Smk3aWNkRWVVa2M0K0IvdXAxVwpLYWVJbEtBUFhhYzVzUkx5VHhDSzdhZjFWYVArcXY2UkRYY1RWSmFTeXBFclFhYytxRk90Rlc4WlYvSmV6L3BTCktVM1pteVFpcHk1ZjJqQ1lnTHRzNmtHcit3YWcxelFWWXAvMUFNT3ZuSGsvVXgzMlN2dlZOalRqcUJOajlPNFYKTHRJY1VkSVBkQUYvczh6a0g1VTFWUWpmYlJNaVprQk9BMjFrK1dGOHNUZVR3RERHUzFhWWt3c3BhaEhLdzR1eQpEaStRRlFJVzN5MXBBd2pLUjNUSTBYVDRseldEQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUVDZyL0hueGNsRGhRTFpJb21CcjZXZVZoSUpEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQkd6S05oaU5OSQpSS09CWXRIVXhmVG42M1FRc21nemhGY255QlVhOWlRSkw3QUF2bWtreUU2Q2RHV2V1M3dKNlZ4QWRwcHl0anBICjY4UWdha01KaHo1blhmNHJocHhMY0R3bzNQbG9QNmtWcTVGWGZoeFJucGdiU0JhK0JTM1EySXZhcG9XeGxXZHoKMmRiMm5VL0t6TDlmcVZBK3c0cWRoeWp0ZC9ZWWFyRmRjc3RWNXpMc2dJelcwYXd5S2c4SlZtNHh1K3A5K3JlSgozWGxySlFWVE1FRlZGL1RMZlhxVTRRRGdZUHZ5YUt5Y2NKSXVWOVBXZ1dzY3pEYlV3UlRWSnRvVVFXY3hIU2ZKCnBJdWFIdVdCSUtRNlYrK2x1b3lJeko0blNpbHZCeWZEaUFiekVHTzczbXFYY3UxT0xQUytKRXVJQldJdU1LMTgKNEY3clo1WWZJeVpjCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.8.192:6443 name: "" contexts: null current-context: "" kind: Config preferences: {} users: null kind: ConfigMap metadata: creationTimestamp: null name: cluster-info namespace: kube-public [dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: kubeadm:bootstrap-signer-clusterinfo namespace: kube-public rules: - apiGroups: - "" 
resourceNames: - cluster-info resources: - configmaps verbs: - get [dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: creationTimestamp: null name: kubeadm:bootstrap-signer-clusterinfo namespace: kube-public roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubeadm:bootstrap-signer-clusterinfo subjects: - kind: User name: system:anonymous [dryrun] Would perform action LIST on resource "deployments" in API group "apps/v1" [dryrun] Would perform action GET on resource "configmaps" in API group "core/v1" [dryrun] Resource name: "coredns" [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . 
/etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } kind: ConfigMap metadata: creationTimestamp: null name: coredns namespace: kube-system [dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: creationTimestamp: null name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - "" resources: - nodes verbs: - get - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system [dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: coredns namespace: kube-system [dryrun] Would perform action CREATE on resource "deployments" in API group "apps/v1" [dryrun] Attached object: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: k8s-app: kube-dns name: coredns namespace: kube-system spec: replicas: 2 selector: matchLabels: k8s-app: kube-dns strategy: rollingUpdate: maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: k8s-app: kube-dns spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - -conf - 
/etc/coredns/Corefile image: registry.k8s.io/coredns/coredns:v1.10.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 successThreshold: 1 timeoutSeconds: 5 name: coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true dnsPolicy: Default nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/control-plane volumes: - configMap: items: - key: Corefile path: Corefile name: coredns name: config-volume status: {} [dryrun] Would perform action CREATE on resource "services" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 kind: Service metadata: annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" creationTimestamp: null labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: CoreDNS name: kube-dns namespace: kube-system resourceVersion: "0" spec: clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP targetPort: 53 - name: dns-tcp port: 53 protocol: TCP targetPort: 53 - name: metrics port: 9153 protocol: TCP targetPort: 9153 selector: k8s-app: kube-dns status: loadBalancer: {} [addons] Applied essential addon: CoreDNS [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: config.conf: |- apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 
bindAddressHardFail: false clientConnection: acceptContentTypes: "" burst: 0 contentType: "" kubeconfig: /var/lib/kube-proxy/kubeconfig.conf qps: 0 clusterCIDR: 10.11.0.0/16 configSyncPeriod: 0s conntrack: maxPerCore: null min: null tcpCloseWaitTimeout: null tcpEstablishedTimeout: null detectLocal: bridgeInterface: "" interfaceNamePrefix: "" detectLocalMode: "" enableProfiling: false healthzBindAddress: "" hostnameOverride: "" iptables: localhostNodePorts: null masqueradeAll: false masqueradeBit: null minSyncPeriod: 0s syncPeriod: 0s ipvs: excludeCIDRs: null minSyncPeriod: 0s scheduler: "" strictARP: false syncPeriod: 0s tcpFinTimeout: 0s tcpTimeout: 0s udpTimeout: 0s kind: KubeProxyConfiguration metricsBindAddress: "" mode: "" nodePortAddresses: null oomScoreAdj: null portRange: "" showHiddenMetricsForVersion: "" winkernel: enableDSR: false forwardHealthCheckVip: false networkName: "" rootHnsEndpointName: "" sourceVip: "" kubeconfig.conf: |- apiVersion: v1 kind: Config clusters: - cluster: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt server: https://192.168.8.192:6443 name: default contexts: - context: cluster: default namespace: default user: default name: default current-context: default users: - name: default user: tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token kind: ConfigMap metadata: annotations: kubeadm.kubernetes.io/component-config.hash: sha256:7e42d811a8c357cc1d1cc67b761cfca763bc2c4047e510dd8715bf7c3db1c6cb creationTimestamp: null labels: app: kube-proxy name: kube-proxy namespace: kube-system [dryrun] Would perform action CREATE on resource "daemonsets" in API group "apps/v1" [dryrun] Attached object: apiVersion: apps/v1 kind: DaemonSet metadata: creationTimestamp: null labels: k8s-app: kube-proxy name: kube-proxy namespace: kube-system spec: selector: matchLabels: k8s-app: kube-proxy template: metadata: creationTimestamp: null labels: k8s-app: kube-proxy spec: containers: - command: - 
/usr/local/bin/kube-proxy - --config=/var/lib/kube-proxy/config.conf - --hostname-override=$(NODE_NAME) env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName image: registry.k8s.io/kube-proxy:v1.27.4 imagePullPolicy: IfNotPresent name: kube-proxy resources: {} securityContext: privileged: true volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /run/xtables.lock name: xtables-lock - mountPath: /lib/modules name: lib-modules readOnly: true hostNetwork: true nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical serviceAccountName: kube-proxy tolerations: - operator: Exists volumes: - configMap: name: kube-proxy name: kube-proxy - hostPath: path: /run/xtables.lock type: FileOrCreate name: xtables-lock - hostPath: path: /lib/modules name: lib-modules updateStrategy: type: RollingUpdate status: currentNumberScheduled: 0 desiredNumberScheduled: 0 numberMisscheduled: 0 numberReady: 0 [dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: kube-proxy namespace: kube-system [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:node-proxier roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-proxier subjects: - kind: ServiceAccount name: kube-proxy namespace: kube-system [dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: kube-proxy namespace: kube-system rules: - apiGroups: - "" resourceNames: - kube-proxy resources: - configmaps verbs: - get [dryrun] Would perform action CREATE on 
resource "rolebindings" in API group "rbac.authorization.k8s.io/v1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: creationTimestamp: null name: kube-proxy namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kube-proxy subjects: - kind: Group name: system:bootstrappers:kubeadm:default-node-token [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/tmp/kubeadm-init-dryrun3858364406/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Alternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.conf You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.8.192:6443 --token 8tt88w.6a0h6pv1bddnckdv \ --discovery-token-ca-cert-hash sha256:964e651f82d0417bddbaaeee36fc07da2e2c8d3e9152cee0f0828b4ed0c20614
Perform the actual run.
$ sudo kubeadm init --pod-network-cidr=10.11.0.0/16
[init] Using Kubernetes version: v1.27.4 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' W0805 13:50:21.582325 1187 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image. [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.192] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [kubernetes-1 localhost] and IPs [192.168.8.192 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [kubernetes-1 localhost] and IPs [192.168.8.192 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet 
environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [apiclient] All control plane components are healthy after 9.003333 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. 
Please see --upload-certs [mark-control-plane] Marking the node kubernetes-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] [mark-control-plane] Marking the node kubernetes-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule] [bootstrap-token] Using token: naiq6g.kjdnyqxnkybmj60b [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Alternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.conf You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.8.192:6443 --token naiq6g.kjdnyqxnkybmj60b \ --discovery-token-ca-cert-hash sha256:d20759057731f61a129eca8f6a1d2bd7af68271e58037d449aa49858c251d580
Use kubeconfig with admin privileges.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
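Before going further, it is worth confirming that kubectl can actually reach the API server with this kubeconfig. A minimal check might look like the following (the `/readyz` endpoint is the API server's standard aggregated readiness probe):

```shell
# Confirm the kubeconfig works and the API server responds.
kubectl cluster-info

# Optionally, query the aggregated readiness endpoint directly;
# every check should report "ok".
kubectl get --raw='/readyz?verbose'
```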
Install the Tigera operator for Calico networking.
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
namespace/tigera-operator created customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created 
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created serviceaccount/tigera-operator created clusterrole.rbac.authorization.k8s.io/tigera-operator created clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created deployment.apps/tigera-operator created
Define initial configuration.
$ tee custom-resources.yaml <<EOF
---
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    # The CIDR must match the --pod-network-cidr passed to kubeadm init.
    ipPools:
    - blockSize: 26
      cidr: 10.11.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
Apply initial configuration.
$ kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created apiserver.operator.tigera.io/default created
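The operator rolls Calico out asynchronously, so the cluster is not ready for workloads the moment the resources are created. One way to follow the progress is through the TigeraStatus resources the operator maintains:

```shell
# Watch the operator roll out Calico; every component should report
# AVAILABLE=True before continuing (press Ctrl-C to stop watching).
watch kubectl get tigerastatus

# All pods in the calico-system namespace should reach Running state.
kubectl get pods -n calico-system
```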
Download calicoctl and install it as a kubectl plugin.
$ curl -L https://github.com/projectcalico/calico/releases/latest/download/calicoctl-linux-amd64 -o kubectl-calico
$ sudo install -m 755 kubectl-calico /usr/local/bin/
Verify that it is working as expected.
$ kubectl calico get node -o yaml
apiVersion: projectcalico.org/v3 items: - apiVersion: projectcalico.org/v3 kind: Node metadata: annotations: projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"kubernetes-1","kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""}' creationTimestamp: "2023-08-05T13:50:31Z" labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/os: linux kubernetes.io/arch: amd64 kubernetes.io/hostname: kubernetes-1 kubernetes.io/os: linux node-role.kubernetes.io/control-plane: "" node.kubernetes.io/exclude-from-external-load-balancers: "" name: kubernetes-1 resourceVersion: "820" uid: e3d2d138-1e3a-487d-a7a8-196cf4423413 spec: addresses: - address: 192.168.8.192 type: InternalIP orchRefs: - nodeName: kubernetes-1 orchestrator: k8s status: podCIDRs: - 10.11.0.0/24 kind: NodeList metadata: resourceVersion: "872"
Add additional nodes
List bootstrap tokens.
$ sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS naiq6g.kjdnyqxnkybmj60b 23h 2023-08-06T13:50:34Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
Create a new bootstrap token and print the corresponding join command.
$ sudo kubeadm token create --print-join-command
kubeadm join 192.168.8.192:6443 --token 4m5xvo.pxh77qwn9wkb48lb --discovery-token-ca-cert-hash sha256:d20759057731f61a129eca8f6a1d2bd7af68271e58037d449aa49858c251d580
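The `--discovery-token-ca-cert-hash` value is simply the SHA-256 digest of the cluster CA's public key, so if the join command is lost it can be recomputed on the control plane node. The recipe below follows the kubeadm documentation and assumes the default PKI location and an RSA CA key (kubeadm's default):

```shell
# Recompute the discovery-token-ca-cert-hash from the cluster CA
# certificate (default kubeadm location on the control plane node).
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```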
Join additional nodes to the cluster.
$ sudo kubeadm join 192.168.8.192:6443 --token 4m5xvo.pxh77qwn9wkb48lb --discovery-token-ca-cert-hash sha256:d20759057731f61a129eca8f6a1d2bd7af68271e58037d449aa49858c251d580
[preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Display available nodes.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION kubernetes-1 Ready control-plane 109m v1.27.4 kubernetes-2 Ready <none> 8m17s v1.27.4 kubernetes-3 Ready <none> 8m5s v1.27.4
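Freshly joined workers show `<none>` in the ROLES column because that column is derived from `node-role.kubernetes.io/*` labels, which kubeadm only sets on control-plane nodes. Labeling the workers is purely cosmetic but makes the listing easier to read:

```shell
# Optional: label the workers so 'kubectl get nodes' shows a role.
kubectl label node kubernetes-2 node-role.kubernetes.io/worker=
kubectl label node kubernetes-3 node-role.kubernetes.io/worker=
```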
Display pods in all namespaces.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE calico-apiserver calico-apiserver-5d5c8b58f5-2978v 1/1 Running 0 3m28s calico-apiserver calico-apiserver-5d5c8b58f5-b9dr8 1/1 Running 0 3m28s calico-system calico-kube-controllers-749695769-r5hwb 1/1 Running 0 5m40s calico-system calico-node-2wnxc 1/1 Running 0 5m40s calico-system calico-node-jl42h 1/1 Running 0 3m8s calico-system calico-node-mkx7q 1/1 Running 0 2m40s calico-system calico-typha-7c5457d5b6-ljc7v 1/1 Running 0 59s calico-system calico-typha-7c5457d5b6-vhjkb 1/1 Running 0 5m40s calico-system csi-node-driver-p2nl4 2/2 Running 0 5m40s calico-system csi-node-driver-psxdg 2/2 Running 0 2m40s calico-system csi-node-driver-x2w6d 2/2 Running 0 3m8s kube-system coredns-5d78c9869d-p9b97 1/1 Running 0 7m33s kube-system coredns-5d78c9869d-zdd88 1/1 Running 0 7m33s kube-system etcd-kubernetes-1 1/1 Running 0 7m47s kube-system kube-apiserver-kubernetes-1 1/1 Running 0 7m47s kube-system kube-controller-manager-kubernetes-1 1/1 Running 2 (109s ago) 7m47s kube-system kube-proxy-6nknz 1/1 Running 0 2m40s kube-system kube-proxy-hs4g6 1/1 Running 0 7m33s kube-system kube-proxy-tr9fh 1/1 Running 0 3m8s kube-system kube-scheduler-kubernetes-1 1/1 Running 2 (107s ago) 7m47s tigera-operator tigera-operator-5f4668786-sf7kr 1/1 Running 2 (2m27s ago) 7m9s
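With all system pods running, a quick end-to-end smoke test confirms that scheduling, Calico networking, and CoreDNS service discovery work across the nodes. The deployment and service names below are arbitrary examples:

```shell
# Smoke test: run nginx, expose it as a service, and resolve the
# service name through CoreDNS from a throwaway pod.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never \
  -- nslookup nginx.default.svc.cluster.local

# Clean up the test resources.
kubectl delete service nginx
kubectl delete deployment nginx
```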
Additional notes
Getting started with containerd