Creating the Split API Pods
Learn to create API Pods using a ReplicaSet and establish communication by creating a Service.
Looking into the definition#
Let’s see the definition of the backend API, go-demo-2-api-rs.yml.
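The YAML is not rendered in this excerpt, so here is a sketch of what go-demo-2-api-rs.yml contains. The probe path /demo/hello and port 8080 are assumptions based on the vfarcic/go-demo-2 image; the layout is arranged so that the line numbers referenced in this lesson line up.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: go-demo-2-api
spec:
  replicas: 3
  selector:
    matchLabels:
      type: api
      service: go-demo-2
  template:
    metadata:
      labels:
        type: api
        service: go-demo-2
        language: go
    spec:
      containers:
      - name: api
        image: vfarcic/go-demo-2
        env:
        - name: DB
          value: go-demo-2-db
        readinessProbe:
          httpGet:
            path: /demo/hello   # assumed endpoint of the go-demo-2 API
            port: 8080          # assumed container port
          periodSeconds: 1
        livenessProbe:
          httpGet:
            path: /demo/hello
            port: 8080
```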
Just as with the database, this ReplicaSet should be familiar since it’s very similar to the one we used before. We’ll comment only on the differences.
- Line 6: The number of replicas is set to 3. That solves one of the main problems we had with the previous ReplicaSets that defined Pods with both containers. Now the number of replicas can differ, and we have one Pod for the database and three for the backend API.
- Line 14: In the labels section, the type label is set to api so that both the ReplicaSet and the (soon to come) Service can distinguish these Pods from those created for the database.
- Lines 22-23: The environment variable DB is set to go-demo-2-db. The code behind the vfarcic/go-demo-2 image establishes the connection to the database by reading that variable. In this case, it will try to connect to the database running at the address go-demo-2-db. If you go back to the database Service definition, you’ll notice that its name is go-demo-2-db as well. If everything works correctly, the DNS entry was created together with the Service, and it will forward requests to the database.
The readinessProbe#
The readinessProbe should be used as an indication that the service is ready to serve requests. When combined with the Service construct, only the containers whose readinessProbe state is Success will receive requests.
In earlier Kubernetes versions, kube-proxy used the userspace proxy mode. Its advantage was that the proxy would retry failed requests against another Pod. With the shift to the iptables mode, that feature is lost. However, iptables is much faster and more reliable, so the loss of the retry mechanism is well compensated. That does not mean that requests are sent to Pods “blindly”. The lack of the retry mechanism is mitigated by the readinessProbe, which we added to the ReplicaSet.
The readinessProbe has the same fields as the livenessProbe. We used the same values for both, except for periodSeconds, where instead of relying on the default value of 10, we set it to 1.
While the livenessProbe is used to determine whether a Pod is alive or should be replaced by a new one, the readinessProbe determines whether a Pod is included in the iptables rules. A Pod that does not pass the readinessProbe is excluded and will not receive requests. In theory, requests might still be sent to a faulty Pod between two probe iterations. Still, such requests will be few in number, since the iptables rules change as soon as a probe responds with an HTTP code lower than 200, or equal to or greater than 400.
Creating the ReplicaSet#
Now let’s create the ReplicaSet defined in go-demo-2-api-rs.yml.
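Assuming the file is in the current directory and kubectl is pointed at your cluster, a minimal sequence might be:

```shell
# Create the ReplicaSet with the three API Pods
kubectl create -f go-demo-2-api-rs.yml

# List the Pods it created, using the type=api label from the definition
kubectl get pods -l type=api
```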
Creating the Service#
Only one object is missing: the Service go-demo-2-api-svc.yml. Its definition is given below.
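Since the YAML is not rendered in this excerpt, here is a sketch of what go-demo-2-api-svc.yml likely contains; the port 8080 and the service: go-demo-2 label are assumptions based on the image and the naming pattern used so far.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-api
spec:
  type: NodePort       # expose the API outside the cluster
  ports:
  - port: 8080         # assumed port served by vfarcic/go-demo-2
  selector:
    type: api          # matches the label on the API Pods
    service: go-demo-2
```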
There’s nothing truly new in this definition. The type is set to NodePort since the API should be accessible from outside the cluster. The selector label type is set to api so that it matches the labels defined for the Pods.
That is the last object we’ll create (in this section), so let’s move on and do it.
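Assuming the definition is saved as go-demo-2-api-svc.yml in the current directory, creating it is a single command:

```shell
# Create the Service that exposes the API Pods
kubectl create -f go-demo-2-api-svc.yml
```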
We’ll take a look at what we have in the cluster.
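One way to do that is to list every object in the default namespace in one go:

```shell
# Show ReplicaSets, Pods, and Services together
kubectl get all
```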
The output is as follows.
Both ReplicaSets, db and api, are there, followed by the three replicas of the go-demo-2-api Pod and the one replica of the go-demo-2-db Pod. Finally, the two Services are running as well, together with the kubernetes Service created by Kubernetes itself.
Accessing the API#
Before we proceed, it might be worth mentioning that the code behind the vfarcic/go-demo-2
image is designed to fail if it cannot connect to the database. The fact that the three replicas of the go-demo-2-api
Pod are running means that the communication is established. The only verification left is to check whether we can access the API from outside the cluster.
Let’s try that out.
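One way to do that, assuming a minikube cluster and that the API serves the /demo/hello endpoint (an assumption based on the vfarcic/go-demo-2 image), is:

```shell
# Retrieve the randomly assigned NodePort of the Service
PORT=$(kubectl get svc go-demo-2-api \
    -o jsonpath="{.spec.ports[0].nodePort}")

# Send a request through the cluster node's IP
curl -i "http://$(minikube ip):$PORT/demo/hello"
```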
The output of the last command is as follows. You can also open the link beside the Run button to see the page.
We got the response 200 and a friendly hello, world! message, indicating that the API is indeed accessible from outside the cluster.
Destroying services#
Before we move further, we’ll delete the objects we created.
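Assuming all four definitions from this and the previous lesson sit in the current directory (the db file names are a guess based on the naming pattern used so far), the cleanup might look like this:

```shell
# Remove the database and API objects created in these lessons
kubectl delete -f go-demo-2-db-rs.yml
kubectl delete -f go-demo-2-db-svc.yml
kubectl delete -f go-demo-2-api-rs.yml
kubectl delete -f go-demo-2-api-svc.yml
```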
Everything we created is gone, and we can start over. At this point, you might be wondering whether it is overkill to have four YAML files for a single application. Can’t we simplify the definitions? Not really. Can we define everything in a single file? Read the next lesson.
Try it yourself#
A list of all the commands used in this lesson is given below.
You can practice the commands in the following code playground by pressing the Run button and waiting for the cluster to set up.
Troubleshooting tips for minikube#
You won’t always need to bind the ports using the port-forward command to interact with the services. If you are using minikube, you can use the following commands to interact with the service:
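A sketch of the standard minikube commands for this, assuming the Service name used in this lesson:

```shell
# Open the Service in the default browser
minikube service go-demo-2-api

# Or just print the Service URL without opening a browser
minikube service go-demo-2-api --url
```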