Speaking of S2D deployment, the two main patterns here, infrastructure-wise, are hyperconverged and disaggregated, also called converged. Hyperconverged is where your cluster is doing everything. This would be for a smaller business, as I said, that's concerned principally about cost and secondarily about simplified administration. So, in other words, your cluster nodes are using the failover clustering software in Windows to provide high availability, and onboard storage with Storage Spaces Direct for the storage layer.

Now, you're not going to get the same kind of performance and availability as you will in a converged, or disaggregated, environment. That's where you've got separate clusters for storage and compute, and this allows independent scaling, independent maintenance, and frankly, independent high availability. And in this case you're also using a separate storage tier, functioning like an external SAN, so that you can give your Windows Server compute nodes back some CPU and other resources, because they're not concerned with maintaining the shared storage the way they are in a hyperconverged infrastructure.

Now, how do we add servers to a failover cluster in which we're using Storage Spaces Direct? The reason I'm covering this as a separate topic is that if you're using Storage Spaces Direct, then typically, and I'm going to assume this for the exam and for this lesson, you're in a hyperconverged environment where we want to enlist each cluster node's local storage into the Storage Spaces Direct pool. Okay? So how does this work? I'm showing you this example in Windows PowerShell. First, of course, we want to re-run cluster validation, bringing in the existing nodes as well as the prospective new host, and we want to make sure to include the Storage Spaces Direct test among the cluster validation tests, for obvious reasons. We can then do an Add-ClusterNode to bring the new node in, and that's really it. The next time you refresh your storage, you should find that your storage pool has enlisted the storage on your new server.
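To make that concrete, here's a minimal PowerShell sketch of the workflow just described. The node and cluster names are hypothetical placeholders; the cmdlets themselves (Test-Cluster, Add-ClusterNode, Get-StoragePool, Optimize-StoragePool) are the standard failover clustering and storage cmdlets.

```powershell
# Validate the existing nodes plus the prospective new host.
# Including the "Storage Spaces Direct" test category is what checks
# that the new node's local disks are eligible for the S2D pool.
Test-Cluster -Node 'Node01','Node02','Node03' `
    -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

# Join the new node to the cluster. With S2D enabled, its eligible
# local disks are claimed into the storage pool automatically.
Add-ClusterNode -Cluster 'S2DCluster' -Name 'Node03'

# Confirm the pool has picked up the new node's physical disks.
Get-StoragePool -FriendlyName 'S2D*' | Get-PhysicalDisk

# Optionally rebalance existing data across the enlarged pool.
Get-StoragePool -FriendlyName 'S2D*' | Optimize-StoragePool
```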
And what happens is, depending upon how many servers you have and how many disks you have, it can unlock various RAID-like configurations. For example, you might have three servers with two disks each. The reason I'm hesitating a bit in my speech here is that the exact numbers aren't what's important. But you can do traditional disk mirroring, in which you've got a virtual disk in your storage pool, and behind the scenes all of its content is being mirrored, so that if one of those physical disks on one of the cluster nodes were to fail and you replaced it, you could reestablish the data. Depending upon how many disks you've enlisted into your Storage Spaces Direct pool, you can actually do three-way mirroring, which would tolerate the failure of two physical disks and still let you get your data back.

Now, this is all in terms of fault tolerance internal to Storage Spaces Direct. I would hope and trust that you're also conducting backups of your storage pools anyway, but this is protection at the hardware level. And I want you to see that these various RAID-like configurations in Storage Spaces and Storage Spaces Direct are not, and I repeat, not, industry-standard RAID. This is because we're doing it all in software; it's a logical volume management system.

There's also, as you can see in the last example, single and dual parity involved, and this again would be analogous to RAID. The traditional two-way mirror would be RAID 1. Traditional striping would be RAID 0; that's no parity. And when you get into things like RAID 5 and RAID 6, that's where we're talking about striping data across multiple physical disks, which gives you increased input/output, and then single or dual parity would mean that the Storage Spaces Direct pool could survive the loss of one disk or two disks in that array, respectively.
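As a rough illustration of those resiliency choices, here's a hedged PowerShell sketch using New-Volume, a standard way to carve volumes out of an S2D pool. The pool wildcard, volume names, and sizes are hypothetical; the point is how ResiliencySettingName and PhysicalDiskRedundancy map onto two-way mirror, three-way mirror, and dual parity.

```powershell
# Two-way mirror (analogous to RAID 1): survives one failed disk.
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Mirror2Way' `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror `
    -PhysicalDiskRedundancy 1 -Size 500GB

# Three-way mirror: survives two simultaneous disk failures.
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Mirror3Way' `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror `
    -PhysicalDiskRedundancy 2 -Size 500GB

# Dual parity (analogous to RAID 6): capacity-efficient,
# and also survives two failed disks.
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Parity' `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Parity `
    -PhysicalDiskRedundancy 2 -Size 500GB
```

The design trade-off is the usual one: mirrors give better write performance, while parity gives better capacity efficiency at a CPU and write-latency cost.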