Running a Server Container

In this lesson, you will be introduced to server containers and learn how to run long-lived containers.

We just saw how to run short-lived containers. They usually do some processing and display some output. However, there’s a very common use for long-lived containers: server containers. Whether you want to host a web application, an API, or a database, you want a container that listens for incoming network connections and is potentially long-lived.

A word of warning: it’s best not to think of containers as long-lived, even when they are. Don’t store information inside your containers. In other words, make sure your containers are stateless, not stateful: a stateless container doesn’t persist any data between runs, while a stateful container keeps some of its state from one run to the next. We’ll see later on how and where to store your containers’ state. Docker containers are very stable, but the reason for keeping them stateless is that it allows for easy scaling and recovery. More about that later.

In short, a server container

  • is long-lived
  • listens for incoming network connections

How can we manage this? Read on.

Running a long-lived container

Up until now, we remained connected to the container from our command line while the docker run command was executing, which is impractical for long-lived containers.

To disconnect while allowing the long-lived container to continue running in the background, we use the -d or --detach switch on the docker run command.

Running a container as detached means that you immediately get your command line back, and the container’s standard output is no longer redirected to it.

Suppose you want to run a ping command. You can use the Linux alpine image for this by running the following command in your terminal:
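
A likely form of that command is shown below; the exact ping target is an assumption (the lesson pings the Docker server, so www.docker.com is used here):

    docker run alpine ping www.docker.com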


The ping command doesn’t end since it keeps pinging the Docker server. That’s a long-lived container. I can detach from it using the Ctrl-C shortcut, and it keeps running in the background. However, it’s best to run it as detached from the beginning:
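
The detached version only adds the -d switch (same assumed ping target as above):

    docker run -d alpine ping www.docker.com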

Note the addition of the -d switch. When doing so, the container starts, but we don’t see its output. Instead, the docker run command prints the ID of the container that was just created.


The container ID is quite long. However, you don’t need to write it out entirely in your commands. As long as there is no ambiguity, you can use just the beginning of the container ID in commands that require it, like docker logs or docker stop. Using a short prefix of the container ID comes in handy when you’re managing containers manually.

The container is still running. I can see it using a docker ps command that outputs something like:

Container ID    Image     Status
789b08ce24b1    alpine    Up 2 minutes

The status is telling us that the container has been running for 2 minutes and is still alive.

I can interact with the running container using the commands we saw above: docker logs to see its output, docker inspect to get detailed information and even docker stop in order to kill it.

Let’s look at the standard output of the container using the following command (note I only use the beginning of the container ID):
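
Using the container ID prefix from the docker ps output above, the command would look something like this:

    docker logs 789b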

The above command prints the whole standard output of the container from its beginning, which may be lengthy. But we can get just a portion of the output using the --since, --until, or --tail switches. Let’s see the most recent 10 seconds of logs for our running container:
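
For example, assuming the same container ID prefix as above:

    docker logs --since 10s 789b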

In real-world applications with multiple running containers, you would typically redirect your containers’ output to log management services. That being said, it can still be useful to get the last output of a container for debugging purposes.

A long-running container is bound to run for quite some time, but for now, I’m going to stop and clean up that container. So, I use the following commands:
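
The exact commands aren’t shown here, but they would presumably be along these lines, using the same assumed container ID prefix:

    docker stop 789b    # stop the running container
    docker rm 789b      # remove it entirely
    docker ps -a        # confirm it is gone

Since docker ps -a lists stopped containers as well, an empty result confirms that the container was removed and not merely stopped.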

The last command is there so that I can confirm the container is completely gone. Run all of the above commands in your terminal to verify the results, and remember to use your own generated container ID rather than the one shown above.


Listening for Incoming Network Connections

By default, a container runs in isolation, and as such, it doesn’t listen for incoming connections on the machine where it is running. You must explicitly open a port on the host machine and map it to a port on the container.

Suppose I want to run the NGINX web server. It listens for incoming HTTP requests on port 80 by default. If I simply run the server, my machine does not route incoming requests to it unless I use the -p switch on the docker run command.

The -p switch takes two parameters: the incoming port you want to open on the host machine, and the container port it should be mapped to. For instance, here is how I state that I want my machine to listen for incoming connections on port 8085 and route them to port 80 inside a container running NGINX:
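
A sketch of that command, assuming the official nginx image from Docker Hub:

    docker run -d -p 8085:80 nginx    # host port 8085 -> container port 80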


You can view the web page locally by pointing your browser to http://localhost:8085.

Note the -d switch. It’s not mandatory, but since I’m running a server container, it’s a good idea to keep it running in the background. After running the above command, an NGINX server container starts and its ID is printed.

Since the container is running in the background, its output isn’t displayed on the terminal. However, you can still get it with the following docker logs command:
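
Presumably something along these lines, where <container-id> stands for the ID (or a unique prefix of it) printed by the docker run command above:

    docker logs <container-id>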

After running the docker logs command, you can see a trace of the HTTP requests that NGINX received from your browser.


The NGINX container continues to run and serve incoming requests on port 8085. You can see it using the docker ps command. Let’s kill it with the following commands so that you keep your machine free of unused containers:
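
Presumably the usual cleanup pair, again with <container-id> standing in for your container’s ID:

    docker stop <container-id>    # stop the NGINX container
    docker rm <container-id>      # remove it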

You can test all of the above commands in your terminal, using the container ID generated on your machine.

Wrapping It Up

Did you notice we now have essentially the equivalent of a brand-new server? This means we can install whatever we want on it and trash it whenever we like.

One thing I particularly like about containers is that they allow me to use any software without polluting my machine. Usually, you would hesitate before trying a new piece of software on your machine, since installing it means installing several dependencies that may interfere with existing software and be left over should you change your mind and uninstall the main software. Thanks to containers, I can even try big pieces of server software without polluting my machine.

Let me run a Jenkins server to illustrate that point. Jenkins is a full continuous integration server written in Java. Thanks to Docker, I don’t need to install Java or any other dependency on my machine in order to run a Jenkins server. Jenkins listens on port 8080 by default, so I can simply type:
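
The exact command isn’t shown here, but given that the browser is pointed at port 8088 below, it is presumably something like the following; the jenkins/jenkins image name is an assumption (it is the official Jenkins image on Docker Hub):

    docker run -p 8088:8080 jenkins/jenkins    # host port 8088 -> container port 8080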

Note that I could add a -d switch since this is a long-running process. However, I am not using it here because I want to directly see the verbose output. For a real deployment, I would use the -d switch and inspect the output with the docker logs command when needed.

You can view this locally by pointing your browser to http://localhost:8088 and finishing the setup.


And I get a full-blown Jenkins server.


Should I decide not to continue with Jenkins and try another continuous integration server, I can simply run the docker stop and docker rm commands. I could also run two separate Jenkins servers by just executing the docker run command again using another port.
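
For instance, a second instance could be mapped to another host port (the port number here is just an illustration):

    docker run -p 8089:8080 jenkins/jenkins    # a second Jenkins on another host port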

Such isolation and ease of use at a very low resource cost is an enormous advantage of containers. Now that you’ve seen how easy containers make it to manage server software on a single developer machine, imagine how powerful this will be on server machines. Thanks to containers, the Ops part of DevOps becomes smooth.

When using such images, you may wonder where the data is stored. Docker uses volumes for this, and we’ll cover volumes later in this chapter. Databases may also be needed for storing data, and those may be run in containers as well. For now, don’t worry about that; we need to learn other things first.


Before we move on to volumes, try the exercise in the next lesson.
