So first, we are creating a random forest model.

From sklearn.ensemble we import two very important classes: RandomForestClassifier and GradientBoostingClassifier.

So what is a random forest classifier? A random forest classifier is a supervised learning algorithm which is mainly used for classification problems. As we know, a forest is made up of trees, and more trees mean a more robust forest. In the same way, random forest creates decision trees on data samples, gets a prediction from each of them, and finally selects the best solution by means of voting. It creates many trees, different trees give different answers, and using voting it picks the best possible answer.

So first we are initializing the random forest classifier and storing it in model. Let's run the cell.

Then we are fitting the random forest model to X_train and y_train. Let's run the cell.

To check the parameters, you can press Shift+Tab, and it will show you all the different parameters which are used in random forest. Once you have initialized the model, you can check it here: you will find all the different parameters, and you can modify any parameter if you want. Let's close this.

So we have fitted our model. Next, let's predict on our test set. Since model is our random forest, we call its predict function to get the predictions, and let's run this.
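A minimal sketch of the random forest steps just described. In the lesson, X_train, y_train and the test split come from the bank-customer dataset prepared earlier in the course; here we use stand-in synthetic data so the sketch runs on its own.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; in the lesson these splits come from the bank dataset.
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)  # initialise the classifier
model.fit(X_train, y_train)                     # fit on the training data
y_predicted = model.predict(X_test)             # predict on the test set
acc = accuracy_score(y_test, y_predicted)       # compare with the true labels
print(acc)
```

In a notebook, placing the cursor inside `RandomForestClassifier(` and pressing Shift+Tab shows the full parameter list, as mentioned above.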
After that, we are checking the accuracy, passing the parameters y_test and y_predicted. Let's run this.

So here we got an accuracy of fifty eight point seven percent with the random forest classifier.

Let's proceed further. Now let's create a gradient boosting model.

Gradient boosting is a machine learning technique for classification and regression problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.

The logic behind gradient boosting is simple. It works on a basic assumption of linear regression: that the sum of the residuals is zero, and that the residuals should be spread randomly around zero. Now think of these residuals as mistakes committed by our predictor model. Although tree-based models are not based on such assumptions, if we think logically about this assumption, we might argue that if we are able to see some pattern in the residuals around zero, we can leverage that pattern to fit the model.

So the intuition behind the boosting algorithm is to repeatedly leverage the patterns in the residuals to strengthen a model with weak predictions and make it better. Once we reach a stage where the residuals do not have any pattern that could be modeled, we can stop modeling the residuals; otherwise it may lead to overfitting.
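The residual-fitting intuition can be illustrated with a small sketch. This is not the classifier we build next, just a toy regression with squared loss, using shallow trees as the weak learners; all the data and settings here are made up for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)

# Start from a constant prediction (the mean), then repeatedly fit a
# shallow tree to the current residuals and add a damped version of its
# output -- the core idea of gradient boosting with squared loss.
baseline_mse = np.mean((y - y.mean()) ** 2)
lr = 0.3
pred = np.full_like(y, y.mean())
for _ in range(20):
    residuals = y - pred                  # the "mistakes" of the current model
    stump = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    pred += lr * stump.predict(X)         # leverage the residual pattern

final_mse = np.mean((y - pred) ** 2)
print(baseline_mse, final_mse)  # the error shrinks as residual patterns are modelled
```

Once the residuals look like pure noise, further rounds stop helping, which is exactly why we stop modeling them.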
Algorithmically, we are minimizing our loss function such that the test loss reaches its minimum.

Let's proceed further. So here we are creating a list of learning rates and storing it.

Then we are creating a for loop that iterates over the learning rates in this list.

So here we are trying a different learning rate in our classifier each time: the classifier is a GradientBoostingClassifier with the number of estimators, the learning rate (set to the current value from the loop), max_features, max_depth, and random_state equal to zero.

Then we are fitting the classifier to X_train and y_train. So first we initialize the classifier, and then we fit it on X_train and y_train with the different learning rates. Let's run the cell.

So here first we are printing the learning rate, and then we are printing the training accuracy and the validation accuracy. For that we use gb_clf.score on X_train, and the score function gives us the accuracy. Let's run this.

So during training, we got an accuracy of about 0.62, which is 62 percent, and on validation we got an accuracy of 68 percent.

Now, let's create an XGBoost model.
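The learning-rate loop just described might look like the following sketch. The list of learning rates, the estimator settings, and the synthetic data are illustrative stand-ins, not the exact values from the notebook.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in data; in the lesson the splits come from the bank dataset.
X, y = make_classification(n_samples=400, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

results = {}
for lr in [0.05, 0.1, 0.25, 0.5, 1.0]:      # try several learning rates
    gb_clf = GradientBoostingClassifier(
        n_estimators=20, learning_rate=lr,
        max_features=2, max_depth=2, random_state=0)
    gb_clf.fit(X_train, y_train)
    train_acc = gb_clf.score(X_train, y_train)   # score() returns accuracy
    val_acc = gb_clf.score(X_val, y_val)
    results[lr] = (train_acc, val_acc)
    print(f"Learning rate: {lr}")
    print(f"  Training accuracy:   {train_acc:.3f}")
    print(f"  Validation accuracy: {val_acc:.3f}")
```

Comparing training and validation accuracy across learning rates is what lets us spot a value that fits well without overfitting.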
XGBoost is an ensemble learning method. Sometimes it may not be sufficient to rely upon the results of just one machine learning model, so ensemble learning offers a systematic solution to combine the predictive power of multiple learners. The models that form the ensemble, also known as base learners, could be either from the same learning algorithm or from different learning algorithms. Bagging and boosting are two widely used ensemble techniques, and here we will be using the boosting technique.

Let's import XGBClassifier from xgboost. Let's run the cell.

Then we initialize the XGBoost classifier, and then we fit it to X_train and y_train. Let's run the cell.

Now, the score: we call the XGBoost classifier's score function on X_test and y_test. This will give us the accuracy score. Let's run this.

So here we got an accuracy of sixty two point seven percent.

So comparing with the other algorithms, we got the best accuracy with XGBoost. Now let's try a support vector machine.

A support vector machine is a supervised machine learning model that uses classification algorithms for classification problems. A support vector machine creates a separating line: the points above the line are classified as yes, and the points below the line as no.
So let's proceed further. If you want to read more about this, you can click on this link.

So first, the import: from sklearn.svm we import SVC. To classify, we are initializing SVC with random_state equal to zero and kernel equal to 'rbf', and storing it in classifier.

Then we are fitting it on X_train and y_train, so classifier.fit(X_train, y_train). Let's run the cell.

To see the parameters, just press Shift+Tab. We had not run the cell yet, and that's why it was not showing; just run the cell first. So it has run successfully and we have created our SVC model, so press Shift+Tab and it shows all the different parameters of SVC. You can also read about the default parameters: for kernel, the default here is 'rbf', and you can also use 'linear', 'poly', 'sigmoid', etc. Now, let's close this.

Now, let's predict on our test set. So the prediction is basically the classifier's predict function applied to the test data. Let's run this.

Now, let's check the accuracy of our model. Let's run this.

So here we got an accuracy of 58 percent.
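A minimal version of the SVC workflow just described, again on stand-in synthetic data rather than the course's bank dataset:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data; in the lesson the splits come from the bank dataset.
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifier = SVC(kernel="rbf", random_state=0)  # rbf is also SVC's default kernel
classifier.fit(X_train, y_train)                # fit on the training data
y_predicted = classifier.predict(X_test)        # predict on the test set
acc = accuracy_score(y_test, y_predicted)
print(acc)
```

Swapping `kernel="rbf"` for `"linear"`, `"poly"`, or `"sigmoid"` is all it takes to try the other kernels mentioned above.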
So we have implemented artificial neural networks, the random forest classifier, gradient boosting, XGBoost, and the support vector machine.

Out of these, we got the best accuracy with XGBoost, and we also got a very good accuracy with the artificial neural networks, because with the artificial neural networks we had not performed any feature engineering and still they worked really well.

Even though we did not get a very high accuracy, this can still help the banks in knowing whether a customer is risky or not.

That's it for this project. See you again in the next project.