Exploring the Effects by Violating Quotas
Explore how to violate some quotas and analyze the consequences.
Exploring the effects#
Now let’s create the objects and explore the effects of the resource quotas we defined in the previous lesson.
Creating the dev Namespace#
Let’s get started by creating the dev Namespace as per our plan.
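Assuming, as in the previous lesson, that both the Namespace and its ResourceQuota are defined in dev.yml, the creation might be sketched as follows (the file path may differ in your setup):

```shell
# Create the dev Namespace together with its ResourceQuota
# (assumes both objects are defined in dev.yml).
kubectl create -f dev.yml
```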
We can see from the output that the namespace "dev" was created, as well as the resourcequota "dev". To be on the safe side, we’ll describe the newly created dev quota.
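The verification could look like this (the quota and Namespace names are taken from the lesson):

```shell
# Describe the dev quota to see its hard limits and current usage.
kubectl --namespace dev describe quota dev
```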
The output is as follows.
We can see that the hard limits are set and that there’s currently no usage. That was to be expected since we’re not running any objects in the dev Namespace.
Creating resources#
Let’s spice it up a bit by creating the already too familiar go-demo-2 objects.
We created the objects from the go-demo-2.yml file and waited until the go-demo-2-api Deployment rolled out.
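A sketch of those two steps, assuming go-demo-2.yml is in the current directory:

```shell
# Create the go-demo-2 objects inside the dev Namespace and wait
# until the api Deployment finishes rolling out.
kubectl --namespace dev create -f go-demo-2.yml
kubectl --namespace dev rollout status deployment go-demo-2-api
```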
Looking into the description#
Now we can revisit the values of the dev quota.
The output is as follows.
Judging from the Used column, we can see that we are, for example, currently running 4 Pods and that we are still below the limit of 10. One of those Pods was created through the go-demo-2-db Deployment, and the other three through the go-demo-2-api Deployment. If you sum the resources we specified for the containers that form those Pods, you’ll see that the values match the used limits and requests.
Violating the number of pods#
So far, we have not reached any of the quotas. Let’s try to break at least one of them with go-demo-2-scaled.yml.
The definition in go-demo-2-scaled.yml is almost the same as the one in go-demo-2.yml. The only difference is that the number of replicas of the go-demo-2-api Deployment is increased to fifteen. As you already know, that should result in fifteen Pods created through that Deployment.
Applying the definition#
I’m sure you can guess what will happen if we apply the new definition. We’ll do it anyway.
We applied the new definition. We’ll give Kubernetes a few moments to do the work before we take a look at the events it’ll generate. So, take a deep breath and count from one to the number of processors in your machine.
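Those two steps might be sketched as follows (the file path is an assumption):

```shell
# Apply the scaled definition, then inspect the events generated
# inside the dev Namespace.
kubectl --namespace dev apply -f go-demo-2-scaled.yml
kubectl --namespace dev get events
```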
The output of a few of the events generated inside the dev Namespace is as follows.
We can see that we reached two of the limits imposed by the Namespace quota. We reached the maximum amount of CPU (1) and the maximum number of Pods (10). As a result, the ReplicaSet controller was forbidden from creating new Pods.
Analyzing the error#
We should be able to confirm which hard limits were reached by describing the dev Namespace.
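That confirmation could look like this; the Resource Quotas section of the output compares the Used and Hard values:

```shell
# Describe the Namespace; the output includes its Resource Quotas.
kubectl describe namespace dev
```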
The output, limited to the Resource Quotas section, is as follows.
As the events showed us, the values of the limits.cpu and pods resources are the same in both the Used and Hard columns. As a result, we won’t be able to create any more Pods, nor will we be allowed to increase the CPU limits of those that are already running.
Finally, let’s take a look at the Pods inside the dev Namespace.
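A minimal sketch of that check:

```shell
# List the Pods that fit within the quota of the dev Namespace.
kubectl --namespace dev get pods
```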
The go-demo-2-api Deployment managed to create nine Pods. Together with the Pod created through go-demo-2-db, we reached the limit of ten.
Reverting to the previous definition#
We confirmed that the CPU limit and the Pod quotas work. We’ll revert to the previous definition (the one that does not reach any of the quotas) before we move on to the next verification.
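The revert might be sketched as follows (the file path is an assumption):

```shell
# Re-apply the original definition and wait for the rollout.
kubectl --namespace dev apply -f go-demo-2.yml
kubectl --namespace dev rollout status deployment go-demo-2-api
```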
The output of the latter command should indicate that the deployment "go-demo-2-api" was successfully rolled out.
Violating the memory quota#
Let’s take a look at yet another slightly modified definition of the go-demo-2 objects, go-demo-2-mem.yml.
Both the memory request and the limit of the api container of the go-demo-2-api Deployment are set to 200Mi, while the database keeps its memory request of 50Mi. Knowing that the requests.memory quota of the dev Namespace is 500Mi, simple math leads to the conclusion that we won’t be able to run all three replicas of the go-demo-2-api Deployment.
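That math can be sanity-checked with plain shell arithmetic; the 50Mi and 200Mi values come from the lesson:

```shell
# Total memory requests if all three api replicas (200Mi each) and the
# database (50Mi) were running: 650Mi, well over the 500Mi quota.
total=$(( 50 + 3 * 200 ))
echo "requested: ${total}Mi, quota: 500Mi"
```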
Applying the definition#
Just as before, we should wait for a while before taking a look at the events of the dev Namespace.
The output, limited to one of the entries, is as follows.
We reached the requests.memory quota. As a result, the creation of at least one of the Pods was forbidden. We can see that we requested the creation of a Pod that requests 200Mi of memory. Since the current sum of the memory requests is 455Mi, creating that Pod would exceed the allocated 500Mi.
Analyzing the error#
Let’s take a closer look at the Namespace.
The output, limited to the Resource Quotas section, is as follows.
Indeed, the amount of used memory requests is 455Mi, meaning that we could create additional Pods that request up to 45Mi in total, not 200Mi.
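The headroom can be verified with the same kind of arithmetic; 455Mi and 500Mi are the values reported above:

```shell
# With 455Mi already requested out of a 500Mi quota, only 45Mi of
# headroom remains, so a 200Mi Pod cannot be created.
used=455; quota=500; pod=200
echo "headroom: $(( quota - used ))Mi"
echo "with the new Pod: $(( used + pod ))Mi"
```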
Reverting to the previous definition#
We’ll revert to go-demo-2.yml one more time before we explore the last quota we defined.
Violating the services quota#
The only quota we have not yet verified is services.nodeports. We set it to 0 and, as a result, we should not be allowed to expose any node ports. Let’s confirm that is indeed true.
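One way to attempt the violation is to expose a Deployment through a NodePort Service; the port number here is an assumption about the go-demo-2 api:

```shell
# Attempt to create a NodePort Service; the services.nodeports quota
# of 0 should cause the request to be rejected.
kubectl --namespace dev expose deployment go-demo-2-api \
    --type=NodePort --port=8080
```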
The output is as follows.
All our quotas work as expected. However, there are others. We won’t have time to explore examples of all the quotas we can use. Instead, we’ll list them all for future reference.
Destroying Everything#
We are about to delete the cluster for the last time.
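As a minimal cleanup sketch, deleting the Namespace removes the quota and every object inside it; how you destroy the cluster itself depends on how it was created:

```shell
# Remove the dev Namespace along with its quota and all its objects.
kubectl delete namespace dev
```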
Try it yourself#
A list of all the commands used in the lesson is given below.
You can practice the commands in the following code playground by pressing the Run button and waiting for the cluster to set up.