Hello. In this session, we are going to learn what exactly a random forest is and what type of use cases you can use this machine learning algorithm for.

Random forest is a type of algorithm that is used for classification as well as for regression. You might have heard that random forest is a classification algorithm. Yes, in most cases it is used for classification, but it is not correct to call it a classification algorithm, because inside scikit-learn, which is a machine learning library that is especially popular for modeling purposes, you have a class known as RandomForestRegressor that is responsible for regression use cases, and we have a class known as RandomForestClassifier, which is exactly for classification use cases.

So what exactly is this random forest? Random forest is nothing special: it just follows the ensemble learning approach. Random forest just follows the ensemble learning approach. So what is this ensemble learning approach?
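To make this concrete, here is a minimal sketch of the two scikit-learn classes mentioned above. The tiny datasets are invented purely for illustration, not taken from the session.

```python
# Minimal sketch: scikit-learn provides both a classifier and a regressor
# flavor of random forest. The toy data below is made up for demonstration.
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Classification: predict a 0/1 label from two numeric features.
X_cls = [[1, 0], [2, 1], [8, 7], [9, 8]]
y_cls = [0, 0, 1, 1]
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_cls, y_cls)
print(clf.predict([[1, 1], [8, 8]]))

# Regression: predict a continuous target from one feature.
X_reg = [[1], [2], [3], [4]]
y_reg = [1.0, 2.0, 3.0, 4.0]
reg = RandomForestRegressor(n_estimators=10, random_state=0).fit(X_reg, y_reg)
print(reg.predict([[2.5]]))
```

Both classes share the same fit/predict interface; only the kind of target they expect differs.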
Ensemble learning is all about learning from multiple models: it learns from multiple models, and at the end it combines all of that learning. That is what the ensemble approach does, and random forest basically follows this type of approach. Random forest is nothing special: it is just a collection of multiple decision trees.

So before understanding random forest, you have to understand what exactly a decision tree is, how you can build decision trees, and what parameters you use to build your decision trees. You have to understand that. So let me open a new page, and here I'm going to give you a quick overview of what a decision tree is, how you can build it, what the background of decision trees is, and what parameters you are going to play with when you try to build the trees.

So what exactly is a decision tree? A decision tree is basically a machine learning algorithm.
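The combine-multiple-models idea can be sketched in a few lines of plain Python. The three rule "models" below are hypothetical stand-ins for illustration only; in an actual random forest the individual models are decision trees, not hand-written rules.

```python
from collections import Counter

# Three deliberately simple "models" (hypothetical rules, for illustration).
def model_a(x):
    return 1 if x > 3 else 0

def model_b(x):
    return 1 if x > 5 else 0

def model_c(x):
    return 1 if x > 4 else 0

def ensemble_predict(x, models=(model_a, model_b, model_c)):
    """Combine the learning of multiple models by majority vote."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

print(ensemble_predict(6))  # all three vote 1
print(ensemble_predict(4))  # votes are 1, 0, 0 -> majority says 0
```

The point is only the shape of the idea: each model answers independently, and the ensemble's answer is the combination of those answers.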
It is used for classification use cases as well as for regression use cases. It is an algorithm that is highly used in classification as well as in regression. This decision tree is basically a base algorithm; I can say it is the base algorithm used in many ensemble techniques, let's say in the case of random forest and in the case of other ensemble algorithms. These are all algorithms that follow ensemble learning, and in all these types of algorithms the decision tree is heavily used, because it is the base algorithm used in ensemble learning.

So what exactly is this decision tree? It is basically a rule-based kind of thing: according to a condition, you decide how you have to proceed. That's it; that's all a decision tree is about. It is the type of algorithm that tries to take some decision, or I can say it tries to divide the data.

Let me consider a very basic example over here. Let me open a new page. Yeah. Let's say you have to go to a party; let's consider that example.
Consider this: if you have to go to a party, then you have to make some decision. So on what basis can we make this decision? Let's say I'm going to put a very basic condition: is the time greater than 10 p.m.? So I have to put a condition: what if the time is not greater than 10 p.m., and what if the time is greater than 10 p.m.? If yes, it is greater, then I'm going to say I'm not going; basically, don't go. And if it is not, then I have another condition: do we have money? Do we have sufficient money, I can say? So then we have a second condition: if yes, if we have sufficient money, then definitely go, go and enjoy. Similarly, I have another condition: what if I don't have sufficient money? In such a case, again, I don't go.

So the major thing you will notice over here is that we are trying to divide the data. We are basically splitting the data on the basis of two factors: the very first one is exactly this time, and the second one is on the basis of money. So let me go down here.
So basically, on the basis of time and on the basis of money, I have to take some decision: yes or no. Let's say I'm going to put some random entries over here. What if my time is 12? What if my time is 10? What if it is 9? What if it is 8? What if I have money? What if I don't have money?

So if I have this condition, time greater than 10, then my decision will be no, because here you will see it is greater than 10. Similarly, over here I have no, and here I have no money, so again no. And over here my decision will be yes, because over here my time is less than 10 and I also have money, so in such a case my decision will be yes.

So you will see, if I have to interpret this, I can say: whenever my time is, let's say, 8 o'clock and I have money, it means I come into this branch, and since I have money, I come into this one. So it means I can go, I can go to the party and enjoy.
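The same hand-drawn tree can be written directly as a function. The name `party_decision` and the 24-hour encoding of "greater than 10 p.m." are my own choices for this sketch.

```python
def party_decision(hour_24, has_money):
    """Walk the hand-drawn tree: check the time first, then the money."""
    if hour_24 > 22:      # time greater than 10 p.m. -> don't go
        return "don't go"
    if has_money:         # time is fine and we have money -> go
        return "go and enjoy"
    return "don't go"     # time is fine but no money -> don't go

# The kinds of entries used in the lecture:
print(party_decision(23, True))   # after 10 p.m. -> don't go
print(party_decision(9, True))    # 9 o'clock with money -> go and enjoy
print(party_decision(9, False))   # 9 o'clock without money -> don't go
```

Each `if` is one node of the tree, and the returned string is the leaf you end up in.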
So basically, what are we doing in this? We can have multiple features. In this case, we have these two features for the decision tree; I have basically two features, and on the basis of these two features I have to take some decisions.

Now, what if we have a larger number of features? Let me open a new page. What if we have more features, let's say 50 or 100, or any number of features? In such a case, we have to draw a hierarchical tree like this one. So if you come over here, you will see my parent node: here, my parent node is time, and this child node is exactly money. And this is my leaf node, where I have the decision. Similarly, over here, this is my parent node, this will exactly be my parent node, and these are its children. And this is my grandchild node, which is my child's child, I can say. So these are all nodes that contain some kind of information, that contain some kind of data, at some level. Over here I can have this one, and I can have this one.
I can have this one here; here I have this one condition, and here I have this other condition. So let me clear all of these now.

So what will we do? It will be like a hierarchy; it will be hierarchical like this. On the basis of some condition, I have to take some decision. That's what my decision tree will do. Whatever pattern we learn from our training data, whatever pattern we learn from our training data on which our relationship is going to be established, the similar pattern, or I can say the similar hierarchy, we are going to follow for our test data; the similar hierarchy we are going to follow for our prediction data.

So let's say, in the previous use case, which is exactly this one, let's say you have to do a prediction: what if my time is nine o'clock and I don't have sufficient money? You have to do a prediction for this one. Let's say this is my unseen data, let's say this is my testing data, and in this case, this is exactly my training data. This is exactly my training data.
So on the basis of this training data, you have this decision tree, and by traversing this decision tree, you basically have to do some kind of prediction. So let's say you have to do a prediction for this data; let's say this data. So what do we have? We have time less than 10 p.m., which means I come into this branch, and now I'm going to say I don't have sufficient money to go to the party. So after that, I come over here, and here my decision is: I don't have to go. So my decision is exactly: I don't have to go.

Let me open a new page. Yeah, you will see, this is my data; let's say I have this data. So let's say I have to create a hierarchy out of this data, or I can say I have to build such a tree so that in the future, whenever I have some data, I can do some kind of prediction. So let's say I have to build such a tree for this.

But a major factor over here is which feature I have to select as my parent node: whether I have to select this time feature as my parent node, with the rest of the features as subsequent nodes, or whether money will be my parent node, with all the other features as subsequent nodes.
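The train-then-predict flow described here can be sketched end to end with scikit-learn's DecisionTreeClassifier. The numeric encoding of the time/money rows below is invented for illustration; the point is that the hierarchy learned from the training rows is then applied to an unseen row.

```python
from sklearn.tree import DecisionTreeClassifier

# Columns: [hour_24, has_money]; label 1 = go, 0 = don't go.
# A hand-made encoding of the lecture's rules, invented for this sketch.
X_train = [[23, 1], [23, 0], [9, 1], [8, 1], [9, 0], [8, 0]]
y_train = [0, 0, 1, 1, 0, 0]

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# The unseen case from the lecture: nine o'clock, no sufficient money.
print(tree.predict([[9, 0]]))   # -> 0, i.e. don't go
```

The fitted tree has learned the same time-then-money hierarchy we drew by hand, and the prediction for the test row follows that hierarchy.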
So the question is how I have to select it: whether I have to select time as my parent node, or whether I have to select this money feature as my parent node. This is a major concern in decision trees; at the time of selecting the parent node, how you have to select the parent node is a major concern. So basically, we need an approach to select our parent node, because once we have the parent node, we can easily build our decision tree.

Let's say we have ten features. Once we have our parent node, then let's say we have another condition, and another condition, and so on. Then the same problem statement, the same problem statement, I can again solve using the same approach by which I found the parent node. So let's say I have ten features over here; assume I have ten features over here. So basically, using the concept of entropy, which I will discuss later (I will give you a quick overview of this), so basically, using the concepts of entropy and information gain, you can easily find out which feature can be the parent node. Whichever feature has the highest information gain can be, or I can say will be, my parent node.
Let's say in this case, this time feature has the highest information gain, so I will make it the parent node. Then, let's say I have multiple features over here; I have another feature, let's say F3. So this problem statement gets split into this money feature and F3. Similarly, over here, I have some other attribute as well. So you will see, again you have this kind of problem statement that you can solve using the concepts of entropy and information gain; then again you have some kind of subtree, and at the end you will end up having some kind of decision tree. So that's what my entropy and information gain are for.

Let me open a new page. So there are basically two approaches; there are basically two approaches. The very first one is exactly using the concepts of entropy and information gain: using this entropy and information gain, you will end up building some decision tree. And similarly, using the Gini index, which you can also call Gini impurity, you can also end up building some kind of decision tree. There are some algorithms that use the Gini index as the criterion to build decision trees, whereas there are some algorithms that use entropy.
So it's all up to you, it's all up to you what exactly you want to use. Now, what exactly are this entropy, this information gain, and all this stuff? I'm going to give you a quick overview, because if I started explaining all these things in depth, it would take hours of classes to explain all this stuff.

So what exactly is this information gain and all these kinds of things? It is all about how random your data is: what is the amount of randomness in your data? So basically, what entropy does is measure the impurity present inside our data, using a very basic formula, which is exactly this: E = -sum over i of p_i * log2(p_i). Using this formula, you can easily compute, you can easily compute the entropy of a particular feature, where i runs over the number of classes in that particular feature.

So once you have the entropy, you have to compute the information gain. So what exactly is this information gain? It is basically based upon entropy: based upon entropy, it tells you which feature is going to give the highest gain. Let's say, among the F1, F2, and F3 features, if the F2 feature has the highest information gain, then we will select that feature as the parent node.
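The entropy formula above can be computed directly. Here is a small sketch where `labels` stands for the list of class labels falling into one node; the helper name is my own.

```python
import math
from collections import Counter

def entropy(labels):
    """E = -sum_i p_i * log2(p_i), where i runs over the classes."""
    n = len(labels)
    counts = Counter(labels)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(entropy([0, 0, 1, 1]))   # maximally impure 50/50 split -> 1.0
print(entropy([0, 0, 0, 0]))   # pure node, a single class -> 0.0
```

A perfectly mixed node has the highest entropy, and a pure node has entropy zero, which is exactly the sense in which entropy measures impurity.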
So at the end, whichever feature has the highest information gain, that feature will be selected as our parent node. So what exactly is the basic formula behind this? It is nothing but this: whatever entropy you have computed for a particular split, you now compute IG = E(S) - sum over v of (|S_v| / |S|) * E(S_v), where S is the total set of data points, and |S_v| is how many data points we have, out of the total data points, with respect to each value v. Let's say we have three classes in feature one; let's say we have three classes, 0, 1, and 2. So how many data points do we have with respect to 0, how many with respect to 1, and how many with respect to 2? That's what the |S_v| / |S| term gives us. Similarly, what is this E? E is nothing but the entropy: what is the entropy of the 0 subset, and similarly, what is the entropy of the 1 subset? After that, I just have to do the summation of this, and it will return us some value.

So let's say feature two has the highest information gain; then feature two will be my parent node, and let's say feature one will come below it, exactly like this.
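Putting the two pieces together, information gain is the parent's entropy minus the weighted entropy of the subsets a feature creates. This is a sketch with a small entropy helper and invented toy data; the feature and label values are not from the session.

```python
import math
from collections import Counter

def entropy(labels):
    """E = -sum_i p_i * log2(p_i) over the classes in `labels`."""
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG = E(S) - sum_v (|S_v| / |S|) * E(S_v)."""
    n = len(labels)
    parent = entropy(labels)
    weighted = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        weighted += (len(subset) / n) * entropy(subset)
    return parent - weighted

# Toy data: the first feature predicts the label perfectly, the second does not.
labels  = ["no", "no", "go", "go"]
time_ok = [0, 0, 1, 1]   # splits the labels perfectly -> gain 1.0
money   = [1, 0, 1, 0]   # tells us nothing here -> gain 0.0
print(information_gain(time_ok, labels))
print(information_gain(money, labels))
```

The feature with the highest gain (here `time_ok`) is the one you would pick as the parent node.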
And now you have some kind of decision tree. So similarly, whichever feature has the highest information gain, that particular feature will get selected as the parent node. That's what my information gain will do.

And I have another factor, which is exactly the Gini index, and using that we can also build a decision tree. So it's all up to you what you want to use: whether you want to go with entropy or whether you want to go with the Gini index. So what exactly is the Gini index? Let me write down the very basic formula that is used inside it when building the decision tree. It is basically Gini = 1 - sum from i = 1 to C of p_i^2, where C is exactly the number of classes in a feature. Let's say in this feature one you have three classes, say 0, 1, and 2. So it is nothing but p_i squared, and what it will do basically is a summation of all these things: it will do the summation of p_0 squared, which is the probability of the 0 class, then the probability of having the 1 class, then the probability of having the 2 class, and whatever the index comes out to be. So whichever feature has the lowest Gini index, whichever feature has that lowest Gini index, that feature will get selected as the parent node.
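The Gini formula can be sketched the same way; `labels` is again the hypothetical list of class labels in one node.

```python
from collections import Counter

def gini(labels):
    """Gini = 1 - sum_i p_i**2, summed over the classes i."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini([0, 0, 1, 1]))   # 50/50 two-class split -> 0.5
print(gini([0, 0, 0, 0]))   # pure node -> 0.0
print(gini([0, 1, 2]))      # three equally likely classes, close to 2/3
```

Note the direction is flipped compared with information gain: with Gini you prefer the feature whose split gives the lowest impurity.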
So that's all about the session. Hopefully you loved the session very much and how I have summarized all these decision tree concepts. In the next session, I'm going to show you how exactly random forest works, how exactly it uses these decision trees inside, and how random forest is able to do predictions, whether it's a regression use case or whether it's a classification use case. So I hope you will love this session very much. Thank you. Till then, keep learning, keep growing, keep practicing.