kubernetes
[!TIP]
kubernetes.io
append `/_print` as a suffix to the URL to render all sub-pages in a single page, e.g.:


/etc/kubernetes/manifests
[!TIP]
`/etc/kubernetes/manifests` is the path where the kubelet looks for static Pod manifests. The names of the static Pod manifests are:
etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
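A static Pod manifest is an ordinary Pod spec dropped into that directory. A minimal sketch of one (written to `/tmp` here rather than `/etc/kubernetes/manifests` so nothing is actually scheduled; the name `static-web` and the image are illustrative):

```shell
# write a minimal static Pod manifest; on a real node this file would go
# into /etc/kubernetes/manifests and the kubelet would start the Pod itself
cat > /tmp/static-web.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  namespace: default
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
EOF
grep -c 'kind: Pod' /tmp/static-web.yaml   # → 1
```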
/etc/kubernetes
[!TIP]
important Kubernetes cluster configuration files
`/etc/kubernetes/` is the path where kubeconfig files with identities for control plane components are stored. The names of the kubeconfig files are:
- `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)
- `controller-manager.conf`
- `scheduler.conf`
- `admin.conf` for the cluster admin and kubeadm itself
names of certificates and key files
[!TIP]
- `ca.crt`, `ca.key` for the Kubernetes certificate authority
- `apiserver.crt`, `apiserver.key` for the API server certificate
- `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used by the API server to connect to the kubelets securely
- `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount tokens
- `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
- `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client certificate
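Any of these certificates can be inspected with openssl (e.g. to check subject or expiry). The sketch below generates a throwaway self-signed certificate as a stand-in for the real `/etc/kubernetes/pki/apiserver.crt`, so the commands run anywhere:

```shell
# generate a disposable cert (stand-in for /etc/kubernetes/pki/apiserver.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
        -out /tmp/demo.crt -days 1 -subj '/CN=kube-apiserver' 2>/dev/null

# print subject and expiry, the same way you would inspect the real cert
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```

On a real control plane, point `-in` at the files under `/etc/kubernetes/pki/` instead.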
[!TIP]
- `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, these default to the IP address of the default network interface on the machine and port `6443`
- `service-cluster-ip-range` to use for services
- if an external etcd server is specified, the `etcd-servers` address and related TLS settings (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`); if an external etcd server is not provided, a local etcd will be used (via host network)
- if a cloud provider is specified, the corresponding `--cloud-provider` is configured, together with the `--cloud-config` path if such a file exists (this is experimental, alpha and will be removed in a future version)
other API server flags
- `--insecure-port=0` to avoid insecure connections to the API server
- `--enable-bootstrap-token-auth=true` to enable the BootstrapTokenAuthenticator authentication module; see TLS Bootstrapping for more details
- `--allow-privileged` to `true` (required e.g. by kube-proxy)
- `--requestheader-client-ca-file` to `front-proxy-ca.crt`
- `--enable-admission-plugins` to:
  - `NamespaceLifecycle` e.g. to avoid deletion of system reserved namespaces
  - `LimitRanger` and `ResourceQuota` to enforce limits on namespaces
  - `ServiceAccount` to enforce service account automation
  - `PersistentVolumeLabel` attaches region or zone labels to PersistentVolumes as defined by the cloud provider (this admission controller is deprecated and will be removed in a future version; it is not deployed by kubeadm by default from v1.9 onwards when not explicitly opting into using gce or aws as cloud providers)
  - `DefaultStorageClass` to enforce a default storage class on PersistentVolumeClaim objects
  - `DefaultTolerationSeconds`
  - `NodeRestriction` to limit what a kubelet can modify (e.g. only pods on this node)
- `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname`; this makes `kubectl logs` and other API server-to-kubelet communication work in environments where the hostnames of the nodes aren't resolvable

Flags for using certificates generated in previous steps:
- `--client-ca-file` to `ca.crt`
- `--tls-cert-file` to `apiserver.crt`
- `--tls-private-key-file` to `apiserver.key`
- `--kubelet-client-certificate` to `apiserver-kubelet-client.crt`
- `--kubelet-client-key` to `apiserver-kubelet-client.key`
- `--service-account-key-file` to `sa.pub`
- `--requestheader-client-ca-file` to `front-proxy-ca.crt`
- `--proxy-client-cert-file` to `front-proxy-client.crt`
- `--proxy-client-key-file` to `front-proxy-client.key`
Other flags for securing the front proxy (API Aggregation) communications:
- `--requestheader-username-headers=X-Remote-User`
- `--requestheader-group-headers=X-Remote-Group`
- `--requestheader-extra-headers-prefix=X-Remote-Extra-`
- `--requestheader-allowed-names=front-proxy-client`
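Since all of these flags end up in the static Pod manifest, the quickest way to verify what a cluster actually runs with is to grep `/etc/kubernetes/manifests/kube-apiserver.yaml`. A trimmed stand-in manifest is written to `/tmp` below so the pipeline is runnable anywhere:

```shell
# trimmed stand-in for /etc/kubernetes/manifests/kube-apiserver.yaml
cat > /tmp/kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
EOF

# same grep you would run against the real manifest on a control plane node
grep -- '--enable-admission-plugins' /tmp/kube-apiserver.yaml
```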
[!TIP]
If kubeadm is invoked specifying a `--pod-network-cidr`, the subnet manager feature required for some CNI network plugins is enabled by setting:
- `--allocate-node-cidrs=true`
- `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR

If a cloud provider is specified, the corresponding `--cloud-provider` is specified, together with the `--cloud-config` path if such a configuration file exists (this is experimental, alpha and will be removed in a future version)
other flags
- `--controllers` enabling all the default controllers plus the `BootstrapSigner` and `TokenCleaner` controllers for TLS bootstrap; see TLS Bootstrapping for more details
- `--use-service-account-credentials` to `true`

Flags for using certificates generated in previous steps:
- `--root-ca-file` to `ca.crt`
- `--cluster-signing-cert-file` to `ca.crt` if External CA mode is disabled, otherwise to `""`
- `--cluster-signing-key-file` to `ca.key` if External CA mode is disabled, otherwise to `""`
- `--service-account-private-key-file` to `sa.key`
flow
pod creation
ingress traffic

control plane

| Protocol | Direction | Port Range | Purpose                 | Used By              |
|----------|-----------|------------|-------------------------|----------------------|
| TCP      | Inbound   | 6443       | Kubernetes API server   | All                  |
| TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd |
| TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane  |
| TCP      | Inbound   | 10259      | kube-scheduler          | Self                 |
| TCP      | Inbound   | 10257      | kube-controller-manager | Self                 |

worker node(s)

| Protocol | Direction | Port Range | Purpose     | Used By             |
|----------|-----------|------------|-------------|---------------------|
| TCP      | Inbound   | 10250      | Kubelet API | Self, Control plane |
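A quick way to check whether these ports are reachable is a plain TCP probe; bash's built-in `/dev/tcp` avoids a dependency on `nc`. The host `127.0.0.1` below is a placeholder for your node's address:

```shell
# probe the control-plane ports listed above; replace 127.0.0.1 with the node IP
for port in 6443 2379 10250 10257 10259; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
done
```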

control plane
kube-apiserver

etcd

kube-scheduler

controller manager

ccm : cloud controller manager

worker node
[!NOTE]
linux

| Runtime                           | Path to Unix domain socket                   |
|-----------------------------------|----------------------------------------------|
| containerd                        | `unix:///var/run/containerd/containerd.sock` |
| CRI-O                             | `unix:///var/run/crio/crio.sock`             |
| Docker Engine (using cri-dockerd) | `unix:///var/run/cri-dockerd.sock`           |

windows

| Runtime                           | Path to Windows named pipe            |
|-----------------------------------|---------------------------------------|
| containerd                        | `npipe:////./pipe/containerd-containerd` |
| Docker Engine (using cri-dockerd) | `npipe:////./pipe/cri-dockerd`           |
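These endpoints are what tools such as `crictl` are pointed at. A minimal `crictl` configuration using the containerd socket might look like the sketch below (written to `/tmp` for illustration; the real default config path is `/etc/crictl.yaml`):

```shell
# minimal crictl config pointing at the containerd CRI socket from the table above
cat > /tmp/crictl.yaml <<'EOF'
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
EOF
grep 'runtime-endpoint' /tmp/crictl.yaml
```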
kubelet

kube proxy

cri-o : container runtime

jsonpath
[!NOTE|label:references:]
options
explain
$ kubectl explain hpa
KIND: HorizontalPodAutoscaler
VERSION: autoscaling/v1
DESCRIPTION:
configuration of a horizontal pod autoscaler.
FIELDS:
apiVersion <string>
...
or
$ kubectl explain configmap
KIND: ConfigMap
VERSION: v1
DESCRIPTION:
ConfigMap holds configuration data for pods to consume.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
...
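A few common `-o jsonpath` queries, to complement `kubectl explain` above; these need a live cluster, so they are illustrative only:

```shell
# names of all nodes, space separated
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'

# pod name and container images, one pod per line
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'

# InternalIP of each node, via a filter expression
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
```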
__start_kubectl
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
$ cat >> ~/.bashrc <<'EOF'
alias k='kubectl'
alias kc='kubectl -n kube-system'
alias ki='kubectl -n ingress-nginx'
alias kk='kubectl -n kubernetes-dashboard'
for _i in k kc ki kk; do complete -F __start_kubectl "${_i}"; done
EOF
$ source ~/.bashrc
_complete_alias
$ sudo dnf install -y bash-completion
# download bash_completion.sh for kubectl
$ curl -fsSL https://github.com/cykerway/complete-alias/raw/master/complete_alias -o ~/.bash_completion.sh
# or rhel/centos
$ sudo curl -fsSL https://github.com/marslo/dotfiles/raw/main/.marslo/.completion/complete_alias -o /etc/profile.d/complete_alias.sh
$ sudo chmod +x !$
$ cat >> ~/.bashrc <<'EOF'
command -v kubectl >/dev/null && source <(kubectl completion bash)
test -f ~/.bash_completion.sh && source ~/.bash_completion.sh
# or
# test -f /etc/profile.d/complete_alias.sh && source /etc/profile.d/complete_alias.sh
alias k='kubectl'
alias kc='kubectl -n kube-system'
alias ki='kubectl -n ingress-nginx'
alias kk='kubectl -n kubernetes-dashboard'
alias km='kubectl -n monitoring'
complete -o default -F __start_kubectl kubecolor
complete -o nosort -o bashdefault -o default -F _complete_alias $(alias | sed -rn 's/^alias ([^=]+)=.+kubec.+$/\1/p' | xargs)
EOF
$ source ~/.bashrc
kubecolor
$ mkdir -p /tmp/kubecolor
$ curl -fsSL https://github.com/hidetatz/kubecolor/releases/download/v0.0.25/kubecolor_0.0.25_Linux_x86_64.tar.gz | tar xzf - -C /tmp/kubecolor
$ sudo mv /tmp/kubecolor/kubecolor /usr/local/bin/
$ sudo chmod +x /usr/local/bin/kubecolor
token
check token
$ sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
bop765.brol9nsrw820gmbi <forever> <never> authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
khhfwa.jvkvrpiknx4o6ffy 19h 2018-07-13T11:37:43+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
generate token
[!NOTE|label:see also:]
$ sudo kubeadm token create --print-join-command
kubeadm join 192.168.1.100:6443 --token lhb1ln.oj0fqwgd1yl7l9xp --discovery-token-ca-cert-hash sha256:cba8df87dcb70c83c19af72c02e4886fcc7b0cf05319084751e6ece688443bde
$ sudo kubeadm token create --print-join-command --ttl=0
kubeadm join 192.168.1.100:6443 --token bop765.brol9nsrw820gmbi --discovery-token-ca-cert-hash sha256:c8650c56faf72b8bf71c576f0d13f44c93bea2d21d4329c64bb97cba439af5c3
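The `--discovery-token-ca-cert-hash` value is just the SHA-256 digest of the CA certificate's public key in DER form, so it can be recomputed from `/etc/kubernetes/pki/ca.crt` at any time. The sketch below generates a throwaway CA as a stand-in so the pipeline is runnable anywhere:

```shell
# stand-in for the real /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
        -out /tmp/ca.crt -days 1 -subj '/CN=kubernetes' 2>/dev/null

# sha256 of the DER-encoded public key -- the hash kubeadm prints
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
         | openssl pkey -pubin -outform der 2>/dev/null \
         | openssl dgst -sha256 \
         | awk '{print $NF}')
echo "sha256:${hash}"
```

Run the same pipeline against the real `ca.crt` to verify the value printed by `kubeadm token create --print-join-command`.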
[!TIP]
ubuntu
$ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
$ kubectl delete node <node name>
$ sudo kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
$ systemctl stop kubelet
$ docker system prune -a -f
$ systemctl stop docker
$ sudo rm -rf /etc/kubernetes/
$ sudo rm -rf /var/lib/cni/
$ sudo rm -rf /var/lib/kubelet/*
$ sudo rm -rf /etc/cni/
$ sudo ifconfig cni0 down
$ sudo ifconfig flannel.1 down
$ rm -rf ~/.kube/
$ sudo apt purge kubeadm kubectl kubelet kubernetes-cni kube*
$ sudo apt autoremove
CentOS/RHEL
$ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
$ kubectl delete node <node name>
$ sudo kubeadm reset -f --v=5
$ docker system prune -a -f

# stop and disable services
$ systemctl stop kubelet
$ systemctl disable kubelet
$ systemctl stop docker
$ systemctl disable docker
$ systemctl stop crio
# or
$ systemctl disable crio
$ sudo rm -rf /etc/systemd/system/multi-user.target.wants/kubelet.service
$ sudo rm -rf /etc/systemd/system/multi-user.target.wants/docker.service
$ sudo rm -rf /usr/lib/systemd/system/docker.service
$ sudo rm -rf /usr/lib/systemd/system/kubelet.service.d/

# network interface
$ sudo ifconfig cni0 down
$ sudo ip link delete cni0
$ sudo ifconfig flannel.1 down
$ sudo ip link delete flannel.1
$ sudo ifconfig docker0 down
$ sudo ip link delete docker0
$ sudo ifconfig vxlan.calico down
$ sudo ip link delete vxlan.calico

$ sudo yum versionlock delete docker-ce
$ sudo yum versionlock delete docker-ce-cli
$ sudo yum versionlock delete kubeadm
$ sudo yum versionlock delete kubelet
$ sudo yum versionlock delete kubectl
$ sudo yum versionlock delete kubernetes-cni
# or
$ sudo yum versionlock clear

$ sudo yum remove -y docker-ce docker-ce-cli containerd.io kubectl kubeadm kubelet kubernetes-cni
$ sudo yum autoremove
$ sudo rm -rf /etc/cni /etc/kubernetes /etc/docker $HOME/.kube
$ sudo rm -rf /usr/libexec/docker /usr/libexec/kubernetes
$ sudo rm -rf /var/lib/etcd/

# optional
$ sudo rm -rf /var/lib/kubelet/ /var/lib/dockershim /var/lib/yum/repos/x86_64/7/kubernetes /var/log/pods /var/log/containers
$ sudo rm -rf /var/run/docker.sock
$ sudo rm -rf /var/cache/yum/x86_64/7/kubernetes
$ sudo yum clean all
$ sudo rm -rf /var/cache/yum
$ sudo yum makecache
$ sudo yum check-update
references