Sequential Breakdown of the Process
In this lesson, we will first go through the sequential breakdown of the Ingress resource creation process and then create the second Ingress resource.
Let’s see, through a sequence diagram, what happened when we created the Ingress resource.
- The Kubernetes client (`kubectl`) sent a request to the API server requesting the creation of the Ingress resource defined in the `go-demo-2.yml` file.
- The Ingress Controller is watching the API server for new events. It detected that there is a new Ingress resource.
- The Ingress Controller configured the load balancer. In this case, it is nginx, which modified `nginx.conf` with the values of all `go-demo-2-api` endpoints.
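If we want to see the effect of that last step, we can peek at the configuration the controller generated. The sketch below assumes the NGINX Ingress Controller runs as a Pod in the `ingress-nginx` Namespace; the Namespace and the Pod name will differ depending on how the controller was installed (e.g., Minikube's addon may place it in `kube-system`).

```bash
# Find the NGINX Ingress Controller Pod. The Namespace and Pod name below
# are assumptions; adjust them to match your cluster.
kubectl --namespace ingress-nginx get pods

# Print the generated NGINX configuration and look for the go-demo-2 entries.
kubectl --namespace ingress-nginx exec <ingress-controller-pod> -- \
    cat /etc/nginx/nginx.conf | grep "go-demo-2"
```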
Now that one of the applications is accessible through Ingress, we should apply the same principles to the other.
The `devops-toolkit` Ingress resource is very similar to `go-demo-2`. The only significant difference is that the `path` is set to `/`, so it will serve all requests. It would be a much better solution if we changed it to a unique base path (e.g., `/devops-toolkit`), since that would provide a unique identifier. However, this application does not have an option to define a base path, so an attempt to use one in Ingress would result in a failure to retrieve resources. We'd need to write rewrite rules instead. We could, for example, create a rule that rewrites the base path `/devops-toolkit` to `/`. That way, if someone sends a request to `/devops-toolkit/something`, Ingress would rewrite it to `/something` before sending it to the destination Service. While such an action is often useful, we'll ignore it for now; `/` as the base path should do.
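As an illustration, a minimal sketch of such an Ingress follows. The resource and Service names, the port, and the use of the `networking.k8s.io/v1` API are assumptions; the actual definition in `devops-toolkit.yml` may differ.

```yaml
# A minimal Ingress sketch that forwards all requests (path /) to a Service.
# Names, the port, and the API version are assumptions; the real
# devops-toolkit.yml may differ.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-toolkit
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: devops-toolkit
            port:
              number: 80
```

If we did opt for a unique base path instead, the NGINX Ingress Controller can strip it before forwarding requests through its `nginx.ingress.kubernetes.io/rewrite-target` annotation, though the exact annotation depends on the controller and its version.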
Apart from adding Ingress to the mix, the definition removed `type: NodePort` from the Service. This is the same change we made previously to the `go-demo-2` Service. We do not need external access to the Service.
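For context, a Service without an explicit `type` defaults to `ClusterIP`, which is reachable only from inside the cluster; Ingress provides the external entry point. A minimal sketch follows (the name, ports, and selector are assumptions):

```yaml
# Without a `type` field, the Service defaults to ClusterIP, so it is not
# exposed outside the cluster. The name, ports, and selector are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: devops-toolkit
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: devops-toolkit
```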
Deleting and Recreating the Objects#
Let's create the objects defined in the `devops-toolkit.yml` file.
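The commands likely resemble the sketch below; the cleanup step is an assumption and is only needed if objects from a previous definition of the application are still running.

```bash
# Remove objects from a previous definition, if any are still running.
kubectl delete -f devops-toolkit.yml --ignore-not-found

# Create the objects (Deployment, Service, and Ingress) defined in the file.
kubectl create -f devops-toolkit.yml
```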
Let’s take a look at the Ingresses running inside the cluster.
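The listing likely resembles the sketch below (`ing` is the short name for `ingresses`):

```bash
# List all Ingress resources in the current Namespace.
kubectl get ing
```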
The output is as follows.
We can see that now we have multiple Ingress resources. The Ingress Controller (in this case NGINX) configured itself taking both of those resources into account.
We can define multiple Ingress resources that will configure a single Ingress Controller.
Let's confirm that both applications are accessible through HTTP (port `80`).
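The checks likely resemble the following sketch. The `$IP` variable standing for the cluster's address and the `/demo/hello` path of the go-demo-2 API are assumptions carried over from the earlier examples.

```bash
# Open the devops-toolkit UI in a browser (replace `open` with `xdg-open`
# or your browser of choice on Linux).
open "http://$IP"

# Send a request to the go-demo-2 API through the same port.
curl -i "http://$IP/demo/hello"
```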
We're able to view the application, whereas the `curl` command returned the already familiar `hello, world!` message.
Ingress is a (kind of) Service that runs on all nodes of a cluster. A user can send requests to any of them and, as long as they match one of the rules, they will be forwarded to the appropriate Service.
Even though we can send requests to both applications using the same port (`80`), that is often a sub-optimal solution. Our users would probably be happier if they could access those applications through different domains.
Try it yourself#
A list of all the commands used in the lesson is given below.
You can practice the commands in the following code playground by pressing the Run button and waiting for the cluster to set up:
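Based on the steps above, the list likely resembles this sketch (exact flags and the `$IP` variable depend on your setup):

```bash
# Create the second Ingress resource together with its Deployment and Service.
kubectl create -f devops-toolkit.yml

# List all Ingress resources in the current Namespace.
kubectl get ing

# Confirm that both applications respond on port 80.
open "http://$IP"
curl -i "http://$IP/demo/hello"
```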