Lastly, we have Hyper‑V network load balancing. The idea here is that this is a Windows Server service where we're implementing load balancing in software, in the operating system: you have two or more identically configured Windows Servers with the NLB role installed, and you form a cluster from them. Again, you're advertising connectivity to that server farm on a virtual IP address, and NLB simply allows traffic coming in on that virtual IP to be round-robined from one backend node to the next. It's not a particularly robust or intelligent type of load balancing, but it's been part of Windows Server for a long time, and you can't beat the price. If you're familiar with Azure, there are also plenty of software‑defined networking load balancing options there as well. They all have the same idea, though: if one or even two of these web VMs were to go offline, as long as we had at least one healthy node left in the server farm, the service would still be available and accessible on that virtual IP address.
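To make that round-robin-with-failover behavior concrete, here is a minimal Python sketch of the idea. It is only a conceptual model of what the virtual IP is doing for clients, not how the NLB service is actually implemented, and the node names and the pick_backend helper are purely illustrative.

```python
from itertools import cycle

class RoundRobinFarm:
    """Conceptual model of a virtual IP fronting identical backend nodes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)          # e.g. ["web1", "web2", "web3"]
        self.healthy = set(self.nodes)    # nodes currently online
        self._ring = cycle(self.nodes)

    def mark_offline(self, node):
        self.healthy.discard(node)

    def mark_online(self, node):
        self.healthy.add(node)

    def pick_backend(self):
        """Return the next healthy node in rotation, skipping offline ones."""
        for _ in range(len(self.nodes)):
            node = next(self._ring)
            if node in self.healthy:
                return node
        raise RuntimeError("no healthy nodes left to answer the virtual IP")

# Hypothetical three-node web farm behind one virtual IP.
farm = RoundRobinFarm(["web1", "web2", "web3"])
farm.mark_offline("web2")
# Requests arriving on the virtual IP are still answered by the remaining nodes.
print([farm.pick_backend() for _ in range(4)])   # ['web1', 'web3', 'web1', 'web3']
```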