Exploring and Verifying the Output

In this lesson, we will explore the sequence of events triggered by the rolling update and verify that the Kubernetes version changed.

Exploring the Sequence of Events#

The rolling update has finished. The output starts with the same information we got when we asked for a preview, so there is not much to comment on there.

Instead of destroying the first node outright, kops picked one of the masters and drained it. That way, the applications running on it could shut down gracefully. We can see that it drained the following (a rough kubectl equivalent is sketched after the list):

  • etcd-server-events
  • etcd-server-ip
  • kube-apiserver
  • kube-controller-manager
  • kube-proxy
  • kube-scheduler
  • Pods running on the server ip-172-20-40-167
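kops performs the draining internally, but the manual equivalent of that step is the standard kubectl drain command. The sketch below uses the node name from the output above; the exact name (and any DNS suffix) will differ in your cluster, so use the name exactly as it appears in `kubectl get nodes`.

```bash
# Cordon the node and evict the Pods that can be rescheduled
# elsewhere, ignoring DaemonSet-managed Pods and removing Pods
# that use local emptyDir storage.
kubectl drain ip-172-20-40-167.ec2.internal \
    --ignore-daemonsets \
    --delete-local-data \
    --force
```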

As a result, Kubernetes rescheduled the Pods to one of the healthy nodes. That applies only to the Pods that can be rescheduled, not necessarily to all of them.

We can see that, after draining finished, the master node was stopped. Since each master is associated with an auto-scaling group, AWS detects that the node is gone and starts a new one in its place.
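If you're curious, you can watch the replacement happen from the AWS side. A minimal sketch, assuming the aws CLI is configured for the cluster's account and region:

```bash
# Show each auto-scaling group's name, desired capacity, and the
# number of instances currently attached, so we can watch AWS
# replace the terminated master.
aws autoscaling describe-auto-scaling-groups \
    --query 'AutoScalingGroups[].[AutoScalingGroupName,DesiredCapacity,length(Instances)]' \
    --output table
```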

Once the new server is initialized, nodeup executes and installs Docker, Kubelet, and Protokube. The latter creates the manifests Kubelet uses to run the Pods required for a master node. Kubelet also registers the new node with one of the healthy masters.

That part of the process is the same as the one executed when creating a new cluster or when adding new servers. It is also the part that takes the longest to complete (around five minutes).

We can see that, after waiting for everything to settle, kops validated the cluster, thus confirming that the upgrade of the first master node finished successfully.
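You can run the same check yourself, since the validation is available as a standalone kops command. A minimal sketch, assuming the cluster name is in the NAME environment variable and KOPS_STATE_STORE points to the state store:

```bash
# Confirm that all masters and nodes are ready and that the
# system Pods are running; kops runs an equivalent check after
# replacing each node.
kops validate cluster --name $NAME
```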

Rolling upgrade of one of the master nodes

As soon as it validated the upgrade of the first master, kops proceeded with the next node. Over the next ten to fifteen minutes, the same process is repeated with the other two masters.

Once all three are upgraded, kops will execute the same process with the worker nodes, and we’ll have to wait for another ten to fifteen minutes.

Finally, once all the servers were upgraded, we can see that the rolling update was completed.

Verification#

Let’s go back to our cluster and verify that Kubernetes was indeed upgraded.

Listing the nodes shows the Kubernetes version each one reports.
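A minimal sketch of that check, assuming kubectl is already pointed at the upgraded cluster:

```bash
# List the nodes together with the kubelet version each one
# reports. After the rolling update, every entry should show
# v1.9.2 in the VERSION column.
kubectl get nodes
```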

Judging by the versions of the nodes, all were upgraded to v1.9.2. The process worked.

Try to upgrade often. As a rule of thumb, you should upgrade one minor release at a time.

Even if you are a couple of minor releases behind the stable kops-recommended release, it is better to execute multiple rolling upgrades (one for each minor release) than to jump straight to the latest. By upgrading to the next minor release, you'll minimize potential problems and simplify rollback if required.
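As a sketch, one step of that incremental approach could look like the following. It assumes the same NAME and KOPS_STATE_STORE variables as before, and that you set the version to the next minor release relative to what the cluster currently runs rather than to the latest one:

```bash
# Open the cluster spec and set spec.kubernetesVersion to the
# next minor release (not the latest one).
kops edit cluster --name $NAME

# Push the new desired state to the state store, then replace
# the nodes one by one, validating the cluster in between.
kops update cluster --name $NAME --yes
kops rolling-update cluster --name $NAME --yes
```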

Even though kops is fairly reliable, you should not trust it blindly. It’s relatively easy to create a small testing cluster running the same release as production, execute the upgrade process, and validate that everything works as expected. Once finished, you can destroy the test cluster and avoid unnecessary expenses.
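A rough sketch of that practice follows. The names, zone, counts, and version are placeholders; adjust them to mirror your production setup on a smaller scale, and make sure KOPS_STATE_STORE is exported first.

```bash
# Create a small throwaway cluster running the same Kubernetes
# release as production (all values here are placeholders).
kops create cluster \
    --name test.example.com \
    --zones us-east-2a \
    --master-count 1 \
    --node-count 1 \
    --kubernetes-version 1.8.6 \
    --yes

# ...execute the upgrade process and validate the result...

# Destroy the test cluster to avoid unnecessary expenses.
kops delete cluster --name test.example.com --yes
```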

Don’t trust anyone. Test upgrades in a separate cluster.


In the next lesson, we will explore how to upgrade the cluster automatically.
