Allocating More Resources than the Actual Usage
Explore what happens when we allocate more resources than an application actually uses.
Allocating excessive memory#
Let’s explore another possible situation through yet another updated definition, go-demo-2-insuf-node. Just as before, the change is only in the resources of the go-demo-2-db Deployment.
This time, we specified that the requested memory is twice as much as the total memory of the node (6GB). The memory limit is even higher.
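As an illustration, the resources section of the go-demo-2-db container could look something like the sketch below. The exact values in go-demo-2-insuf-node.yml may differ; the point is that the request exceeds what any node in the cluster can offer.

```yaml
resources:
  limits:
    # Hypothetical limit, set even higher than the request
    memory: "12Gi"
    cpu: 0.5
  requests:
    # Roughly twice the node's total memory (assuming a ~3GB node)
    memory: "6Gi"
    cpu: 0.3
```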
Applying the definition#
Let’s apply the change and observe what happens.
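The two commands could look as follows, with kubectl apply updating the Deployment and kubectl get pods checking the result (a sketch; the lesson may use slightly different flags):

```shell
kubectl apply -f go-demo-2-insuf-node.yml
kubectl get pods
```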
The output of the latter command is as follows.
This time, the status of the Pod is Pending. Kubernetes could not place it anywhere in the cluster and is waiting for the situation to change.
Even though memory requests are associated with containers, it often makes sense to translate them into Pod requirements. We can say that the requested memory of a Pod is the sum of the requests of all the containers that form it. In our case, the Pod has only one container, so the requested memory for the Pod and the container are equal. The same can be said for limits.
During the scheduling process, Kubernetes sums the requests of a Pod and looks for a node with enough available memory and CPU. If a Pod’s requests cannot be satisfied, the Pod is placed in the pending state in the hope that resources will be freed on one of the nodes, or that a new server will be added to the cluster. Since neither will happen in our case, the Pod created through the go-demo-2-db Deployment will remain pending forever, unless we change the memory request again.
ℹ️ When Kubernetes cannot find enough free resources to satisfy the resource requests of all the containers that form a Pod, it changes the Pod’s state to Pending. Such Pods will remain in this state until the requested resources become available.
Looking into the deployment’s description#
Let’s describe the go-demo-2-db Deployment and see whether it holds some additional useful information.
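A sketch of the command is below. If the scheduling events do not appear on the Deployment itself, they can also be found on the Pod (kubectl describe pod) or with kubectl get events, since FailedScheduling events are recorded against the Pod.

```shell
kubectl describe deployment go-demo-2-db
```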
The output, limited to the events section, is as follows.
We can see that scheduling has already failed seven times (FailedScheduling) and that the message clearly indicates that there is Insufficient memory.
Re-applying the initial definition#
We’ll revert to the initial definition. Even though we know that its resource specifications are inaccurate, we also know that it satisfies all the requirements and that all the Pods will be scheduled successfully.
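Reverting might look like the following, assuming the initial definition is stored in a file named go-demo-2.yml (a hypothetical name; use whichever file the lesson started with):

```shell
kubectl apply -f go-demo-2.yml
kubectl rollout status deployment go-demo-2-db
```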
Now that all the Pods are running, we should try to write a better definition. For that, we need to observe memory and CPU usage and use that information to decide the requests and the limits.
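If the Metrics Server is installed in the cluster, actual memory and CPU usage can be observed with kubectl top (a sketch):

```shell
kubectl top pods
kubectl top nodes
```

Comparing these figures against the current requests and limits is the basis for choosing better values in the next iteration of the definition.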
Try it yourself#
A list of all the commands used in the lesson is given below.
You can practice the commands in the following code playground by pressing the Run button and waiting for the cluster to set up.
- go-demo-2-insuf-node.yml