WEBVTT

00:00:00.040 --> 00:00:04.530
Heating, ventilation, and air conditioning as an element of design

00:00:04.540 --> 00:00:09.240
resilience. Research out of Toronto University found that the

00:00:09.240 --> 00:00:13.610
world's data centers are estimated to consume power equivalent to about

00:00:13.610 --> 00:00:20.350
seventeen 1,000‑megawatt power plants, equaling more than 1% of the total

00:00:20.350 --> 00:00:22.090
world's electricity consumption.

00:00:22.100 --> 00:00:25.290
Now, that study was done some four years ago,

00:00:25.300 --> 00:00:28.110
so it wouldn't be a shock to discover that that

00:00:28.110 --> 00:00:30.750
footprint has become heavier and heavier.

00:00:30.820 --> 00:00:34.880
Most of the cost of running a data center is actually for cooling.

00:00:34.880 --> 00:00:37.780
That same study showed that the yearly cost of running the

00:00:37.780 --> 00:00:41.940
cooling infrastructure for a typical data center can range

00:00:41.950 --> 00:00:45.080
anywhere from $4 million to $18 million.

00:00:45.090 --> 00:00:48.630
Google states that its data centers use much less

00:00:48.640 --> 00:00:50.450
energy than the typical data center.

00:00:50.460 --> 00:00:56.070
They raise the temperature to 80 degrees Fahrenheit, use outside air for

00:00:56.070 --> 00:01:02.020
cooling, and build custom servers, and they have detailed performance data to

00:01:02.020 --> 00:01:07.050
help move the entire industry forward, showing that they get better efficiency

00:01:07.060 --> 00:01:11.210
from running at those temperatures anyway. The standard recommendations come from

00:01:11.220 --> 00:01:14.550
ASHRAE, the American Society of Heating, Refrigerating and Air‑Conditioning

00:01:14.550 --> 00:01:19.750
Engineers, which issued its first thermal guide for data centers way back in 2004. The

00:01:19.750 --> 00:01:26.660
ASHRAE‑recommended envelope specifies a temperature range of 18‑27 degrees Celsius, which

00:01:26.660 --> 00:01:35.680
roughly translates to 64‑81 degrees Fahrenheit, along with a 5‑15 degree Celsius dew

00:01:35.680 --> 00:01:36.090
point.

00:01:36.100 --> 00:01:38.740
Now, the dew point, if it is too low,

00:01:38.750 --> 00:01:43.230
obviously, can create an environment prone to static electricity, and if

00:01:43.230 --> 00:01:47.860
it's too high, can promote corrosion. Sixty percent should be the upper limit for relative

00:01:47.860 --> 00:01:52.190
humidity, which is the broader measure of moisture.
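
NOTE
Editor's illustration, not part of the lecture: a minimal Python sketch that checks a sensor reading against the recommended envelope quoted above (18-27 C dry bulb, 5-15 C dew point, up to 60% relative humidity). The function and variable names are hypothetical.

NOTE
    def within_ashrae_recommended(temp_c, dew_point_c, rh_percent):
        # True if the reading sits inside the recommended envelope described above.
        temp_ok = 18.0 <= temp_c <= 27.0
        dew_ok = 5.0 <= dew_point_c <= 15.0
        rh_ok = rh_percent <= 60.0
        return temp_ok and dew_ok and rh_ok
    # Example: 22 C inlet air, 10 C dew point, 45% RH is inside the envelope.
    print(within_ashrae_recommended(22.0, 10.0, 45.0))               # True
    # 80 F, the figure quoted for Google, is about 26.7 C and still inside.
    print(within_ashrae_recommended((80 - 32) * 5 / 9, 12.0, 50.0))  # True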

00:01:52.340 --> 00:01:56.550
There are two types of cooling: latent cooling, which is the ability of

00:01:56.550 --> 00:02:00.920
the air‑conditioning system to remove moisture, and sensible cooling,

00:02:00.920 --> 00:02:04.000
which is the ability of the air‑conditioning system to remove heat that

00:02:04.000 --> 00:02:06.170
can be measured by a thermometer.

00:02:06.180 --> 00:02:08.960
Those are the two types of cooling that take place.
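
NOTE
Editor's illustration, not part of the lecture: a small Python sketch of the two cooling loads just described, using the commonly quoted rule-of-thumb formulas for standard sea-level air. The constants 1.08 and 0.68 and the example figures are assumptions, not numbers from the narration.

NOTE
    def sensible_btu_per_hr(cfm, delta_t_f):
        # Sensible load: heat you can read on a thermometer.
        # Rule of thumb for standard air: Q = 1.08 * CFM * dT(F).
        return 1.08 * cfm * delta_t_f
    def latent_btu_per_hr(cfm, delta_grains):
        # Latent load: heat bound up in moisture the system must remove.
        # Rule of thumb: Q = 0.68 * CFM * moisture difference in grains per lb of dry air.
        return 0.68 * cfm * delta_grains
    # Example: 10,000 CFM of airflow, 20 F temperature rise, 15 grains/lb moisture difference.
    print(sensible_btu_per_hr(10_000, 20))   # ~216,000 BTU/hr of sensible cooling
    print(latent_btu_per_hr(10_000, 15))     # ~102,000 BTU/hr of latent cooling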

00:02:08.970 --> 00:02:13.580
Imagine living in a very humid place, and then in a very dry, hot

00:02:13.580 --> 00:02:17.630
place. Both of those would need some type of cooling to bring

00:02:17.630 --> 00:02:22.150
the environment down so that it is livable. Inside the data center,

00:02:22.150 --> 00:02:26.490
imagine that you have a bird's-eye view and you're above the server

00:02:26.490 --> 00:02:27.660
racks looking down.

00:02:27.670 --> 00:02:32.760
Imagine then that you are arranging your servers in rows. So here we have a row

00:02:32.770 --> 00:02:37.770
of three servers. F stands for front, B stands for back. The next row of

00:02:37.770 --> 00:02:43.100
racks, we would actually turn around so that its backs are

00:02:43.100 --> 00:02:48.660
back‑to‑back with the row before, and the front of that second row

00:02:48.670 --> 00:02:50.880
then faces the front of the third row, and so on.

00:02:50.890 --> 00:02:55.690
Now, this arrangement would allow us to manage server inlet temperatures

00:02:55.700 --> 00:02:59.370
and server outlet temperatures in a more controlled fashion.
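
NOTE
Editor's illustration, not part of the lecture: a short Python sketch of the bird's-eye layout just described, where every other row of racks is turned around so backs meet backs (hot aisles) and fronts meet fronts (cold aisles). The row count and labels are hypothetical.

NOTE
    def print_aisle_layout(num_rows, racks_per_row=3):
        # F = rack front (cold air intake), B = rack back (hot exhaust).
        for row in range(1, num_rows + 1):
            flipped = (row % 2 == 0)                   # even rows are turned around
            top, bottom = ("B", "F") if flipped else ("F", "B")
            print(f"Row {row}:  " + " ".join(top for _ in range(racks_per_row)))
            print("        " + " ".join(bottom for _ in range(racks_per_row)))
            if row < num_rows:
                # Backs facing backs form a hot aisle; fronts facing fronts form a cold aisle.
                aisle = "hot aisle (back-to-back)" if bottom == "B" else "cold aisle (front-to-front)"
                print(f"        ---- {aisle} ----")
    print_aisle_layout(4)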

00:02:59.380 --> 00:03:03.990
Imagine if we actually created the environment with the

00:03:04.000 --> 00:03:08.600
goal being: I'm going to make it so that I contain all the

00:03:08.600 --> 00:03:12.380
hot air. Containing all the hot air by means of some type of

00:03:12.380 --> 00:03:16.790
structure to prevent the hot air from leaving that aisle would be

00:03:16.790 --> 00:03:18.360
what's called hot aisle containment.

00:03:18.370 --> 00:03:20.960
Now imagine how the rest of the data center would feel.

00:03:20.970 --> 00:03:23.180
It would most likely feel pretty cool.

00:03:23.190 --> 00:03:26.740
But if you did something called cold aisle containment,

00:03:26.750 --> 00:03:31.340
where you now contained the cold air so it couldn't flow freely out of the

00:03:31.340 --> 00:03:33.230
front‑to‑front configuration,

00:03:33.240 --> 00:03:38.710
then the rest of the data center may actually feel inhospitable to humans.

00:03:38.720 --> 00:03:42.020
It would really depend on what it is that you're

00:03:42.020 --> 00:03:44.760
trying to derive from the environment.

00:03:44.770 --> 00:03:48.700
What efficiencies are you bringing about? As stated before, many of

00:03:48.700 --> 00:03:52.600
the major cloud service providers have actually moved beyond what

00:03:52.610 --> 00:03:54.970
they would consider to be rudimentary approaches,

00:03:54.980 --> 00:03:58.930
but these have been the standards for a number of years.

00:03:58.940 --> 00:04:03.790
Next, let's get into design resilience from an availability class perspective.
