Simulating Denial of Service Attacks
In this lesson, we will be using a tool called Siege to simulate Denial of Service attacks on our application.
Running Siege to simulate a DoS attack
Another scenario that could happen is that we might be under attack. Somebody, intentionally or unintentionally, might be creating a Denial of Service (DoS) attack. What that really means is that our applications, or even the whole cluster, might be under extreme load, receiving such a vast amount of traffic that our infrastructure cannot handle it. It is not unheard of for a whole system to collapse under a DoS attack, and that outcome becomes likely if we don’t undertake some precautionary measures.
Before we simulate such an attack, let’s confirm that our application works more or less correctly when serving an increased number of requests. We’re not going to simulate millions of requests; that would be out of the scope of this course. Instead, we’re going to do a “poor man’s” equivalent of a Denial of Service attack.
First, we’ll put our application under test and see what happens if we keep sending fifty concurrent requests for, let’s say, 20 seconds.
We’ll use a tool called Siege that will be running as a container in a Pod.
Let’s see what happens.
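A minimal sketch of the command, assuming the application’s Service is reachable at http://go-demo-8 from inside the cluster and that your version of kubectl run creates a standalone Pod, would look something like this.

# Run Siege in a throwaway Pod and hit the go-demo-8 Service for 20 seconds
kubectl --namespace go-demo-8 \
    run siege \
    --image yokogawa/siege \
    -it --rm \
    --restart=Never \
    -- --concurrent=50 --time 20S \
    "http://go-demo-8"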
We created a Pod called siege in the go-demo-8 Namespace. It is based on the image yokogawa/siege. We used the -it arguments (interactive, terminal), and we used --rm so that the Pod is deleted as soon as the process in its only container terminates. All of that is uneventful. The interesting part of the command is the set of arguments we passed to the main process in the container.

The --concurrent=50 and --time 20S arguments tell Siege to run fifty concurrent requests for 20 seconds. The last argument is the address to which Siege should send the requests. This time we’re skipping the repeater and sending them directly to go-demo-8.
Think of this command, and Siege in general, as a very simple way to do performance testing. Actually, it’s not even testing. We send a stream of concurrent requests, fifty in this case, for 20 seconds. I wanted us to confirm that the application can handle a high number of requests before we jump into testing the behavior of the system when faced with a simulation of a Denial of Service Attack.
The output, limited to the relevant parts, is as follows.
...
Transactions: 1676 hits
Availability: 91.94 %
Elapsed time: 19.21 secs
Data transferred: 0.05 MB
Response time: 0.01 secs
Transaction rate: 87.25 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 1.07
Successful transactions: 1676
Failed transactions: 147
Longest transaction: 0.08
Shortest transaction: 0.00
...
After twenty seconds, plus whatever time was needed to pull the image and run it, the siege ended. In my case (yours will be different), 1676 hits were made, and most of them were successful, resulting in an availability of 91.94 %. There were some failed transactions because this is not a robust solution; it’s a demo application, so some requests were bound to fail. Ignore that. What matters is that most of the requests were successful. In my case, there were 1676 successful and 147 failed transactions.
A quick look at the code of the application
Before we continue, let’s take a quick look at the code of the application.
Don’t worry if you’re not proficient in Go. We’re just going to observe something that could be useful to simulate Denial of Service Attacks.
The code, limited to the relevant parts, is as follows.
package main

import (
    ...
    "golang.org/x/time/rate"
    ...
)

...

// Allow five requests per second, with bursts of up to ten
var limiter = rate.NewLimiter(5, 10)
var limitReachedTime = time.Now().Add(time.Second * (-60))
var limitReached = false

...

func RunServer() {
    ...
    mux.HandleFunc("/limiter", LimiterServer)
    ...
}

...

func LimiterServer(w http.ResponseWriter, req *http.Request) {
    logPrintf("%s request to %s\n", req.Method, req.RequestURI)
    if limiter.Allow() == false {
        // The limit was exceeded; reject the request and record when that happened
        logPrintf("Limiter in action")
        http.Error(w, http.StatusText(500), http.StatusTooManyRequests)
        limitReached = true
        limitReachedTime = time.Now()
        return
    } else if time.Since(limitReachedTime).Seconds() < 15 {
        // Keep rejecting requests for 15 seconds after the limit was reached
        logPrintf("Cooling down after the limiter")
        http.Error(w, http.StatusText(500), http.StatusTooManyRequests)
        return
    }
    msg := fmt.Sprintf("Everything is OK\n")
    io.WriteString(w, msg)
}

...
We’re going to simulate Denial of Service attacks. For that, the application uses a Go library called rate. Further on, we have the limiter variable set to NewLimiter(5, 10). That means the application is limited to five requests per second, with a burst of ten. No “real” application would ever be configured like that. However, we’re using it to simulate what happens when the number of requests rises above the limit of what the application can handle. There are always limits to any application; we’re just forcing this one’s threshold to be very low.
Our applications should scale up and down automatically. But if we have a sudden drastic increase in the number of requests, that might produce some adverse effects. The application might not be able to scale up that fast.
Then, we have the LimiterServer function, which handles requests coming to the /limiter endpoint. It checks whether we have reached the limit. If so, it responds with a 429 (Too Many Requests) error. There is also additional logic that keeps rejecting all requests for fifteen seconds after the limit is reached.

All in all, we’re simulating the situation in which our application has reached the limit of what it can handle. Once it reaches that limit, it becomes unavailable for 15 seconds. It’s simple code that simulates what happens when an application starts receiving significantly more traffic than it can handle. If a replica of this app receives more requests than its limiter allows, it becomes unavailable for fifteen seconds. That’s (roughly) what would happen to requests when the application is under a Denial of Service attack.
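If you’d like to see the limiter in action before unleashing the full siege, one way (not part of the course setup, just a sketch using a generic curl image and a shell loop) is to fire a quick burst of requests at /limiter and watch the status codes flip from 200 to 429 once a replica’s limit is exceeded. Whether it trips, and how soon, depends on how many replicas there are and how fast the loop runs.

# Hypothetical manual check: send 50 quick requests to /limiter and print only the status codes
kubectl --namespace go-demo-8 \
    run curl-test \
    --image curlimages/curl \
    -it --rm \
    --restart=Never \
    --command \
    -- sh -c 'for i in $(seq 1 50); do curl -s -o /dev/null -w "%{http_code}\n" http://go-demo-8/limiter; done'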
Running Siege on the endpoint /limiter
We’ll execute another siege, but this time against the /limiter endpoint.
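A sketch of the command, under the same assumptions as the previous one, only differs in the address.

# Same throwaway Siege Pod, this time targeting the /limiter endpoint
kubectl --namespace go-demo-8 \
    run siege \
    --image yokogawa/siege \
    -it --rm \
    --restart=Never \
    -- --concurrent=50 --time 20S \
    "http://go-demo-8/limiter"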
The output, limited to the relevant parts, is as follows.
...
Transactions: 1845 hits
Availability: 92.02 %
Elapsed time: 19.70 secs
Data transferred: 0.04 MB
Response time: 0.01 secs
Transaction rate: 93.65 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 1.12
Successful transactions: 20
Failed transactions: 160
Longest transaction: 0.09
Shortest transaction: 0.00
...
We can see that this time, the number of successful transactions is 20. It’s not only the five successful transactions you might expect because we have multiple replicas of this application, each with its own limiter. However, the exact number doesn’t matter. What is important is that the number of successful transactions is much lower than before; in my case, only 20. As a comparison, the first execution of the siege produced, in my case, 1676 successful transactions.
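If you’re curious how many replicas contributed to that number, listing the Pods in the Namespace should show them; you should see several go-demo-8 Pods, each running its own copy of the limiter.

kubectl --namespace go-demo-8 get pods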
In the next lesson, we will run a chaos experiment that simulates a Denial of Service attack.