Install Kubernetes on LXD. Essential information to get you started.
Initial information
Host operating system.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
LXD version.
$ lxc --version
4.0.2
LXD profile
Create the kubernetes profile.
$ sudo lxc profile create kubernetes
Profile kubernetes created
Apply the profile configuration (this is the ZFS version).
$ curl --silent https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s-zfs.profile | sudo lxc profile edit kubernetes
Display the kubernetes profile.
$ sudo lxc profile show kubernetes
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /dev/zfs
    source: /dev/zfs
    type: disk
  aadisable3:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
name: kubernetes
used_by: []
Initial Kubernetes node setup
Create an instance using the latest Ubuntu LTS image and the kubernetes profile.
$ sudo lxc launch --profile default --profile kubernetes ubuntu:20.04 kubernetes-example-master
Creating kubernetes-example-master
Starting kubernetes-example-master
Verify that the created instance is running.
$ sudo lxc list kubernetes-example-master
+---------------------------+---------+---------------------+------+-----------+-----------+
|           NAME            |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+---------------------------+---------+---------------------+------+-----------+-----------+
| kubernetes-example-master | RUNNING | 172.16.40.38 (eth0) |      | CONTAINER | 0         |
+---------------------------+---------+---------------------+------+-----------+-----------+
Execute bash inside the created instance.
$ sudo lxc exec kubernetes-example-master bash
Update package index.
root@kubernetes-example-master:~# apt update
Install docker.
root@kubernetes-example-master:~# sudo apt install docker.io
Prepare the docker configuration (you need to define a default cgroup driver).
root@kubernetes-example-master:~# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "aufs",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
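A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the JSON syntax before restarting the service. A minimal sketch (shown against a copy in /tmp so it can be run anywhere; on the instance, point it at /etc/docker/daemon.json):

```shell
# Write the configuration to a temporary file and check that it is valid JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "aufs",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```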
Restart service to apply the new configuration.
root@kubernetes-example-master:~# sudo systemctl restart docker
The service is not enabled by default.
root@kubernetes-example-master:~# sudo systemctl is-enabled docker
disabled
Enable it so it starts at boot.
root@kubernetes-example-master:~# sudo systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Verify that it works as expected.
root@kubernetes-example-master:~# sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:49a1c8800c94df04e9658809b006fd8a686cab8028d33cfba2cc049724254202
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
Display docker details.
root@kubernetes-example-master:~# docker version
Client:
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.13.8
 Git commit:        afacb8b7f0
 Built:             Tue Jun 23 22:26:12 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       afacb8b7f0
  Built:            Thu Jun 18 08:26:54 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu2
  GitCommit:
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:
root@kubernetes-example-master:~# docker info
Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 1
 Server Version: 19.03.8
 Storage Driver: aufs
  Root Dir: /var/lib/docker/aufs
  Backing Filesystem: zfs
  Dirs: 3
  Dirperm1 Supported: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-40-generic
 Operating System: Ubuntu 20.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.32GiB
 Name: kubernetes-example-master
 ID: 335D:IA26:SQCB:YXWM:Q5EI:THRT:7WYT:SRTH:OMG7:RPX3:4M6J:XMKF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
WARNING: the aufs storage-driver is deprecated, and will be removed in a future release.
Install the packages required to add and use the Kubernetes repository.
root@kubernetes-example-master:~# sudo apt install -y apt-transport-https curl gnupg-agent software-properties-common
Add the repository signing key.
root@kubernetes-example-master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
Add the Kubernetes repository.
root@kubernetes-example-master:~# sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Hit:1 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu focal-security InRelease
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [38.8 kB]
Fetched 47.8 kB in 1s (34.8 kB/s)
Reading package lists... Done
Install the Kubernetes utilities.
root@kubernetes-example-master:~# sudo apt install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libfreetype6
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
Suggested packages:
  nftables
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 21 not upgraded.
Need to get 70.6 MB of archives.
After this operation, 297 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu focal/main amd64 conntrack amd64 1:1.4.5-2 [30.3 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal/main amd64 ebtables amd64 2.0.11-3build1 [80.3 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal/main amd64 socat amd64 1.7.3.3-2 [323 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-01 [8775 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.8.6-00 [25.0 MB]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.18.6-00 [19.4 MB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.18.6-00 [8826 kB]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.18.6-00 [8167 kB]
Fetched 70.6 MB in 8s (8806 kB/s)
Selecting previously unselected package conntrack.
(Reading database ... 31636 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.5-2_amd64.deb ...
Unpacking conntrack (1:1.4.5-2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.13.0-01_amd64.deb ...
Unpacking cri-tools (1.13.0-01) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-3build1_amd64.deb ...
Unpacking ebtables (2.0.11-3build1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../3-kubernetes-cni_0.8.6-00_amd64.deb ...
Unpacking kubernetes-cni (0.8.6-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../4-socat_1.7.3.3-2_amd64.deb ...
Unpacking socat (1.7.3.3-2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.18.6-00_amd64.deb ...
Unpacking kubelet (1.18.6-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.18.6-00_amd64.deb ...
Unpacking kubectl (1.18.6-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../7-kubeadm_1.18.6-00_amd64.deb ...
Unpacking kubeadm (1.18.6-00) ...
Setting up conntrack (1:1.4.5-2) ...
Setting up kubectl (1.18.6-00) ...
Setting up ebtables (2.0.11-3build1) ...
Setting up socat (1.7.3.3-2) ...
Setting up cri-tools (1.13.0-01) ...
Setting up kubernetes-cni (0.8.6-00) ...
Setting up kubelet (1.18.6-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.18.6-00) ...
Processing triggers for man-db (2.9.1-1) ...
Setup Kubernetes Master
Display images that will be used.
root@kubernetes-example-master:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Pull these images.
root@kubernetes-example-master:~# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.18.6
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.18.6
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.18.6
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.18.6
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.7
Ensure that the kubelet service will not fail when swap is enabled.
root@kubernetes-example-master:~# echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
root@kubernetes-example-master:~# systemctl daemon-reload
The kubelet service will be failing at this moment, as it does not yet have a configuration.
root@kubernetes-example-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Wed 2020-07-29 11:50:51 UTC; 5s ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 1798 (code=exited, status=255/EXCEPTION)
      Tasks: 0 (limit: 38388)
     Memory: 0B
     CGroup: /system.slice/kubelet.service
Bootstrap the Kubernetes control plane.
root@kubernetes-example-master:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification,Swap --apiserver-cert-extra-sans=127.0.0.1
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-40-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: aufs
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-40-generic\n", err: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-example-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.40.38 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-example-master localhost] and IPs [172.16.40.38 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-example-master localhost] and IPs [172.16.40.38 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0729 11:53:17.677580    2455 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0729 11:53:17.679316    2455 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.860263 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-example-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-example-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jks2dj.gtcp7wvvkaytw6au
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.40.38:6443 --token jks2dj.gtcp7wvvkaytw6au \
    --discovery-token-ca-cert-hash sha256:d847c0e41260846548f8707acbc4669ebddd6b527a2b035467f0e80e6794d076
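If the printed join command gets lost, the --discovery-token-ca-cert-hash value can be recomputed on the master from the cluster CA certificate: it is the SHA-256 digest of the CA's public key in DER form. A sketch of the pipeline (a throwaway self-signed certificate is generated in /tmp here so it can be demonstrated anywhere; on the master you would point it at /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA certificate purely for demonstration;
# on the master, use /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# The hash is the hex-encoded SHA-256 digest of the DER-encoded public key.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```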
The node will not be ready yet, as no pod network is installed.
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME                        STATUS     ROLES    AGE     VERSION
kubernetes-example-master   NotReady   master   2m26s   v1.18.6
The coredns pods will not run yet.
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-6l6sk                            0/1     Pending   0          4m48s
kube-system   coredns-66bff467f8-dtw9s                            0/1     Pending   0          4m48s
kube-system   etcd-kubernetes-example-master                      1/1     Running   0          4m50s
kube-system   kube-apiserver-kubernetes-example-master            1/1     Running   0          4m50s
kube-system   kube-controller-manager-kubernetes-example-master   1/1     Running   0          4m50s
kube-system   kube-proxy-c5hlk                                    1/1     Running   0          4m47s
kube-system   kube-scheduler-kubernetes-example-master            1/1     Running   0          4m50s
Install flannel for pod networking.
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
At this moment everything should be working as expected.
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME                        STATUS   ROLES    AGE   VERSION
kubernetes-example-master   Ready    master   21m   v1.18.6
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4n5c4                            1/1     Running   0          19m
kube-system   coredns-66bff467f8-xdtmn                            1/1     Running   0          19m
kube-system   etcd-kubernetes-example-master                      1/1     Running   0          19m
kube-system   kube-apiserver-kubernetes-example-master            1/1     Running   0          19m
kube-system   kube-controller-manager-kubernetes-example-master   1/1     Running   0          19m
kube-system   kube-flannel-ds-amd64-xzrh9                         1/1     Running   0          12m
kube-system   kube-proxy-g7c25                                    1/1     Running   0          19m
kube-system   kube-scheduler-kubernetes-example-master            1/1     Running   0          19m
Setup additional Kubernetes Node
Prepare the kubernetes-example-node instance in the same way as the master (profile, Docker, Kubernetes utilities), then use the token generated earlier to join the Kubernetes cluster.
root@kubernetes-example-node:~# kubeadm join 172.16.40.38:6443 \
    --token jks2dj.gtcp7wvvkaytw6au \
    --ignore-preflight-errors=SystemVerification,Swap \
    --discovery-token-ca-cert-hash sha256:d847c0e41260846548f8707acbc4669ebddd6b527a2b035467f0e80e6794d076
W0729 12:28:46.626856   57314 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-40-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: aufs
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-40-generic\n", err: exit status 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
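Bootstrap tokens expire after 24 hours by default, so the join command above only works for a limited time. A new one can be generated on the master at any point; this is a sketch only, since it requires the running cluster:

```shell
# Run on the master: create a fresh bootstrap token and print the
# complete 'kubeadm join' command for it.
kubeadm token create --print-join-command

# List existing tokens and their expiry times.
kubeadm token list
```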
Inspect nodes and pods from the master server.
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME                        STATUS   ROLES    AGE     VERSION
kubernetes-example-master   Ready    master   39m     v1.18.6
kubernetes-example-node     Ready    <none>   4m42s   v1.18.6
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4n5c4                            1/1     Running   1          39m
kube-system   coredns-66bff467f8-xdtmn                            1/1     Running   1          39m
kube-system   etcd-kubernetes-example-master                      1/1     Running   1          39m
kube-system   kube-apiserver-kubernetes-example-master            1/1     Running   1          39m
kube-system   kube-controller-manager-kubernetes-example-master   1/1     Running   1          39m
kube-system   kube-flannel-ds-amd64-rrv4p                         1/1     Running   0          4m46s
kube-system   kube-flannel-ds-amd64-xzrh9                         1/1     Running   1          32m
kube-system   kube-proxy-ftrfh                                    1/1     Running   0          4m46s
kube-system   kube-proxy-g7c25                                    1/1     Running   1          39m
kube-system   kube-scheduler-kubernetes-example-master            1/1     Running   1          39m
Create the first service
Create a simple Nginx service that will run on one or more worker nodes.
cat << EOF | kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-test-namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-deployment
  namespace: nginx-test-namespace
  labels:
    app: nginx-test-label
spec:
  selector:
    matchLabels:
      app: nginx-test-label
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-test-label
    spec:
      containers:
      - name: ngnix-test-container
        image: nginx:latest
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-service
  namespace: nginx-test-namespace
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30090
  selector:
    app: nginx-test-label
EOF
namespace/nginx-test-namespace created
deployment.apps/nginx-test-deployment created
service/nginx-test-service created
Inspect this service.
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf get all -n nginx-test-namespace -o wide --show-labels
NAME                                        READY   STATUS    RESTARTS   AGE   IP            NODE                      NOMINATED NODE   READINESS GATES   LABELS
pod/nginx-test-deployment-ff486d57b-6444j   1/1     Running   0          5m    10.244.1.10   kubernetes-example-node   <none>           <none>            app=nginx-test-label,pod-template-hash=ff486d57b
pod/nginx-test-deployment-ff486d57b-fwnbv   1/1     Running   0          5m    10.244.1.11   kubernetes-example-node   <none>           <none>            app=nginx-test-label,pod-template-hash=ff486d57b

NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR               LABELS
service/nginx-test-service   NodePort   10.105.22.161   <none>        80:30090/TCP   5m    app=nginx-test-label   <none>

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES         SELECTOR               LABELS
deployment.apps/nginx-test-deployment   2/2     2            2           5m    ngnix-test-container   nginx:latest   app=nginx-test-label   app=nginx-test-label

NAME                                              DESIRED   CURRENT   READY   AGE   CONTAINERS             IMAGES         SELECTOR                                           LABELS
replicaset.apps/nginx-test-deployment-ff486d57b   2         2         2       5m    ngnix-test-container   nginx:latest   app=nginx-test-label,pod-template-hash=ff486d57b   app=nginx-test-label,pod-template-hash=ff486d57b
root@kubernetes-example-master:~# kubectl --kubeconfig /etc/kubernetes/admin.conf describe deployment/nginx-test-deployment -n nginx-test-namespace
Name:                   nginx-test-deployment
Namespace:              nginx-test-namespace
CreationTimestamp:      Wed, 29 Jul 2020 14:00:15 +0000
Labels:                 app=nginx-test-label
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx-test-label
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-test-label
  Containers:
   ngnix-test-container:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-test-deployment-ff486d57b (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  8m7s  deployment-controller  Scaled up replica set nginx-test-deployment-ff486d57b to 2
Use curl to test it on any master or worker node.
root@kubernetes-example-master:~# curl -I http://kubernetes-example-master:30090
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Wed, 29 Jul 2020 14:01:40 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"
Accept-Ranges: bytes
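When you are done with the test service, deleting its namespace removes everything created inside it. This is a sketch only, since it requires the running cluster:

```shell
# Deleting the namespace removes the deployment, replica set, pods,
# and service created above in one step.
kubectl --kubeconfig /etc/kubernetes/admin.conf delete namespace nginx-test-namespace
```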
Additional notes
LXD ZFS microk8s profile.
$ curl --silent https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s-zfs.profile
name: microk8s
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /dev/zfs
    source: /dev/zfs
    type: disk
  aadisable3:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
LXD EXT4 microk8s profile.
$ curl --silent https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile
name: microk8s
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
Inspect the following settings in rare cases where you cannot access services on other nodes.
$ sysctl --all --pattern "net.bridge.bridge-nf-call*"
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
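If these values are 0, packets crossing the Linux bridge bypass iptables and NodePort traffic from other nodes can fail. A sketch for enabling them persistently (the file names are arbitrary; the br_netfilter module must be loaded for these keys to exist, and the commands need root):

```shell
# Ensure the br_netfilter module is loaded now and on every boot.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Persist the bridge netfilter settings and apply them immediately.
cat > /etc/sysctl.d/99-kubernetes-bridge.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```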