etcd
[!TIP] You need:

- Three or more machines that meet kubeadm's minimum requirements for the control-plane nodes, including a container runtime already set up and working. Having an odd number of control plane nodes can help with leader selection in the case of machine or zone failure.
- Three or more machines that meet kubeadm's minimum requirements for the workers, including a container runtime already set up and working.
- Full network connectivity between all machines in the cluster (public or private network).
- Superuser privileges on all machines using sudo. You can use a different tool; this guide uses sudo in the examples.
- SSH access from one device to all nodes in the system.
- kubeadm and kubelet already installed on all machines.

And you also need:

- Three or more additional machines that will become etcd cluster members. Having an odd number of members in the etcd cluster is a requirement for achieving optimal voting quorum.
  - These machines again need to have kubeadm and kubelet installed.
  - These machines also require a container runtime that is already set up and working.

See External etcd topology for context.
[!TIP] You need:

- Three or more machines that meet kubeadm's minimum requirements for the control-plane nodes, including a container runtime already set up and working. Having an odd number of control plane nodes can help with leader selection in the case of machine or zone failure.
- Three or more machines that meet kubeadm's minimum requirements for the workers, including a container runtime already set up and working.
- Full network connectivity between all machines in the cluster (public or private network).
- Superuser privileges on all machines using sudo. You can use a different tool; this guide uses sudo in the examples.
- SSH access from one device to all nodes in the system.
- kubeadm and kubelet already installed on all machines.

See Stacked etcd topology for context.
ca
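A minimal sketch of generating a self-signed CA for etcd with openssl; the file names (ca.pem, ca-key.pem) and the 10-year lifetime are assumptions, and /etc/etcd/ssl matches the certificate location noted later on this page.

```sh
mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
openssl genrsa -out ca-key.pem 4096
openssl req -x509 -new -nodes -key ca-key.pem \
  -subj "/CN=etcd-ca" -days 3650 -out ca.pem
```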
client
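A hedged sketch of issuing a client certificate signed by the CA above; the CN and file names are assumptions.

```sh
cd /etc/etcd/ssl
openssl genrsa -out client-key.pem 4096
openssl req -new -key client-key.pem -subj "/CN=etcd-client" -out client.csr
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 3650 -extfile <(printf "extendedKeyUsage=clientAuth") -out client.pem
```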
result
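Assuming the sketches above, /etc/etcd/ssl would now contain roughly:

```sh
ls /etc/etcd/ssl
# ca-key.pem  ca.pem  ca.srl  client.csr  client-key.pem  client.pem
```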
sync to the other masters
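One way to copy the generated material to the remaining members, so every member uses the same CA; the hostnames master2 and master3 are hypothetical.

```sh
for host in master2 master3; do
  rsync -av /etc/etcd/ssl/ "${host}:/etc/etcd/ssl/"
done
```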
etcd.service
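A sketch of a systemd unit for etcd that loads its settings from the etcd.conf environment file below; the binary path /usr/local/bin/etcd is an assumption.

```ini
[Unit]
Description=etcd key-value store
Documentation=https://etcd.io/docs
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```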
etcd.conf
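A sketch of the environment file for member1, reusing the member names and 10.0.0.x addresses from the tip below; each ETCD_* variable mirrors the corresponding etcd flag, and the data directory and cluster token are assumptions.

```ini
# Values for member1 (10.0.0.1); adjust name and URLs per member
ETCD_NAME=member1
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_PEER_URLS=http://10.0.0.1:2380
ETCD_LISTEN_CLIENT_URLS=http://10.0.0.1:2379,http://127.0.0.1:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.0.0.1:2380
ETCD_ADVERTISE_CLIENT_URLS=http://10.0.0.1:2379
ETCD_INITIAL_CLUSTER=member1=http://10.0.0.1:2380,member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
```

With both files in place on a member: `systemctl daemon-reload && systemctl enable --now etcd`.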
tips
[!TIP] Consider a three-member etcd cluster. Let the URLs be:

- member1=http://10.0.0.1
- member2=http://10.0.0.2
- member3=http://10.0.0.3

When member1 fails, replace it with member4=http://10.0.0.4.
get member id of failed member
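For example, querying a surviving member (plain HTTP here matches the example URLs above; add the usual TLS flags if your cluster uses them):

```sh
etcdctl --endpoints=http://10.0.0.2:2379 member list
```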
remove failed member
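Using the ID reported for member1 by `member list`:

```sh
etcdctl --endpoints=http://10.0.0.2:2379 member remove <member-id>
```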
add new member
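Announce the replacement member to the cluster before starting it:

```sh
etcdctl --endpoints=http://10.0.0.2:2379 member add member4 \
  --peer-urls=http://10.0.0.4:2380
```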
start new member with IP
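A sketch of starting member4 on the new machine; `--initial-cluster-state existing` joins the running cluster instead of bootstrapping a new one.

```sh
# Run on the new machine (10.0.0.4)
etcd --name member4 \
  --listen-peer-urls http://10.0.0.4:2380 \
  --initial-advertise-peer-urls http://10.0.0.4:2380 \
  --listen-client-urls http://10.0.0.4:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.0.0.4:2379 \
  --initial-cluster member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380 \
  --initial-cluster-state existing
```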
additional options
[!TIP] Update the --etcd-servers flag for the Kubernetes API servers to make Kubernetes aware of the configuration changes, then restart the Kubernetes API servers. Update the load balancer configuration if a load balancer is used in the deployment.
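On a kubeadm cluster the flag lives in the kube-apiserver static Pod manifest; a sketch of the edit, reusing the example member IPs (the kubelet restarts the Pod when the manifest changes):

```sh
sed -i 's#--etcd-servers=.*#--etcd-servers=http://10.0.0.2:2379,http://10.0.0.3:2379,http://10.0.0.4:2379#' \
  /etc/kubernetes/manifests/kube-apiserver.yaml
```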
Certificates are located in /etc/etcd/ssl (the manual etcd setup above).
Certificates are located in /etc/kubernetes/pki/etcd (etcd managed by kubeadm).
It is recommended to back up this directory to an off-cluster location before removing its contents. You can remove the backup after a successful restore.
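A hedged sketch of taking an etcd snapshot with etcdctl before touching anything; the certificate paths are kubeadm's defaults from above, and the snapshot destination is hypothetical.

```sh
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-snapshot.db   # /backup is hypothetical
```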
add to /etc/kubernetes/manifests/etcd.yaml
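A hedged sketch: after restoring a snapshot into a fresh data directory (e.g. `etcdctl snapshot restore --data-dir /var/lib/etcd-restored`), one common edit is pointing the static Pod's data volume at that directory; the restored path is hypothetical.

```yaml
# In /etc/kubernetes/manifests/etcd.yaml
volumes:
- hostPath:
    path: /var/lib/etcd-restored    # hypothetical; was /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data
```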