1
00:00:00,960 --> 00:00:06,560
In the last video, we saw how KNN works for a given value of k.

2
00:00:07,980 --> 00:00:12,470
We trained our model for k equal to one and k equal to three.

3
00:00:14,000 --> 00:00:19,150
We created two separate models for these two given values of k.

4
00:00:20,810 --> 00:00:26,380
In this lecture, we will learn how to create a single model for multiple values of k.

5
00:00:28,280 --> 00:00:32,570
We will use the grid search feature of sklearn to do this.

6
00:00:34,650 --> 00:00:40,890
First, we need to import GridSearchCV from the model_selection module of sklearn.

7
00:00:47,050 --> 00:00:54,910
Now, if you notice here, when we were running KNN for k equal to three and k equal to one,

8
00:00:55,450 --> 00:01:00,640
we were changing this n_neighbors parameter of KNN.

9
00:01:04,640 --> 00:01:12,750
To give multiple values of k using grid search, we first need to create a dictionary of this parameter.

10
00:01:17,330 --> 00:01:25,520
So I'm calling my variable params, and I'm creating a dictionary with my n_neighbors parameter

11
00:01:25,730 --> 00:01:27,740
taking values from one to 10.

12
00:01:28,370 --> 00:01:31,540
This is just a list of all the numbers from one to 10.

13
00:01:31,970 --> 00:01:38,570
And this list is in our dictionary with n_neighbors as the key.

14
00:01:43,430 --> 00:01:52,510
Now, similar to our previous model training process, we will first need to create an object of our classifier

15
00:01:53,090 --> 00:01:57,350
and then we train this object using our X and y variables.

16
00:01:59,600 --> 00:02:04,510
So here my object name is grid_search_cv,

17
00:02:05,360 --> 00:02:10,430
and I am using this GridSearchCV function to create this object.

18
00:02:11,810 --> 00:02:16,100
The arguments here are: first, you need to give the classifier.
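The steps spoken so far (import GridSearchCV, build a parameter dictionary for n_neighbors, create the grid search object) can be sketched as follows. The variable names `params` and `grid_search_cv` follow the lecture's narration; the notebook itself is not shown, so this is a minimal sketch rather than the exact lecture code:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Dictionary of parameters to search over:
# n_neighbors (the "k" of KNN) takes every value from 1 to 10.
params = {"n_neighbors": list(range(1, 11))}

# First argument: the classifier; second: the parameter dictionary.
grid_search_cv = GridSearchCV(KNeighborsClassifier(), params)
```

GridSearchCV will later train one KNN model per value in the list, so adding a value to the dictionary is all it takes to try another k.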
19
00:02:16,760 --> 00:02:21,920
And second, you need to give the dictionary of all the parameters you want to change.

20
00:02:24,760 --> 00:02:28,150
So my classifier is KNeighborsClassifier.

21
00:02:28,330 --> 00:02:37,000
Remember, we used KNeighborsClassifier while creating KNN, and my second argument here is params.

22
00:02:37,300 --> 00:02:45,190
We created params, which is a dictionary containing n_neighbors as the key and a list of values from

23
00:02:45,190 --> 00:02:47,950
one to 10 as its values.

24
00:02:49,120 --> 00:02:57,710
So this GridSearchCV object contains a KNN classifier for all the 10 values of k that I am

25
00:02:57,730 --> 00:02:59,170
giving in params.

26
00:03:00,310 --> 00:03:01,390
If I run this,

27
00:03:04,800 --> 00:03:07,040
my grid search object is ready now.

28
00:03:07,350 --> 00:03:13,700
I will fit it using my X_train scaled version and the y_train variable.

29
00:03:19,950 --> 00:03:21,820
Now, I have created my model.

30
00:03:24,100 --> 00:03:31,490
To find out the best parameter out of these 10 given parameters, there are different attributes of the

31
00:03:31,490 --> 00:03:32,770
GridSearchCV function.

32
00:03:33,550 --> 00:03:34,900
The first attribute is

33
00:03:36,060 --> 00:03:38,500
best_params_.

34
00:03:39,480 --> 00:03:47,390
This will give me the value of the best parameter out of the given parameters. Run this.

35
00:03:48,210 --> 00:03:55,530
You can see the best parameter out of these 10 parameters is when k is equal to seven.

36
00:03:58,950 --> 00:04:05,050
Now, I know that the best value of k is seven out of these ten values.

37
00:04:06,330 --> 00:04:10,400
The second attribute is best_estimator_.

38
00:04:11,820 --> 00:04:15,590
So remember, we trained our model for ten values of k.

39
00:04:16,800 --> 00:04:20,990
So we have ten different KNN models inside this GridSearchCV.
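Fitting the grid search and reading best_params_, as described above, can be sketched like this. The lecture's own dataset is not available here, so the iris dataset stands in for it, and the scaling step mirrors the "X_train scaled version" mentioned in the narration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Stand-in data: the lecture uses its own train/test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, as the lecture does before fitting.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)

params = {"n_neighbors": list(range(1, 11))}
grid_search_cv = GridSearchCV(KNeighborsClassifier(), params)

# Fitting trains one KNN model per value of k in the dictionary.
grid_search_cv.fit(X_train_scaled, y_train)

# best_params_ reports which k won, e.g. {'n_neighbors': 7}.
print(grid_search_cv.best_params_)
```

The winning k depends on the data, so on a different dataset best_params_ may not be seven as in the lecture.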
40
00:04:22,650 --> 00:04:28,650
This best_estimator_ will pick up the model which is the best.

41
00:04:29,430 --> 00:04:35,610
So here it contains the model with k equal to seven.

42
00:04:37,540 --> 00:04:40,030
Now, I will directly copy this

43
00:04:40,330 --> 00:04:45,400
best_estimator_ into another model, which I'm calling

44
00:04:47,150 --> 00:04:50,270
optimised_knn.

45
00:04:52,330 --> 00:04:59,350
So now my model with k equal to seven is optimised_knn. If you want,

46
00:04:59,440 --> 00:05:07,120
you can create another classifier on your own with k equal to seven, as we did with k equal to one

47
00:05:07,210 --> 00:05:08,530
and k equal to three.

48
00:05:10,140 --> 00:05:15,630
Now, if we want to predict the y values using the X_test values, we can directly

49
00:05:15,630 --> 00:05:18,690
use the predict function with this optimised_knn.

50
00:05:20,220 --> 00:05:23,350
So I am saving my predicted values in a variable

51
00:05:23,470 --> 00:05:33,370
called y_test_pred, and I am using the predict function of this optimised_knn. Let's run

52
00:05:33,980 --> 00:05:34,450
this.

53
00:05:38,180 --> 00:05:39,660
Similar to the last time,

54
00:05:39,740 --> 00:05:44,640
if we want to create a confusion matrix, we just have to use confusion_matrix

55
00:05:44,810 --> 00:05:47,860
and then the y_test, that is the actual values, and the predicted

56
00:05:47,990 --> 00:05:48,610
values.

57
00:05:50,690 --> 00:05:52,520
You can see this is the confusion matrix.

58
00:05:52,760 --> 00:05:59,510
You know all the row labels and the column labels; the columns are predicted values and the

59
00:05:59,510 --> 00:06:01,100
rows are actual values.
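Extracting best_estimator_, predicting on the test set, and building the confusion matrix, as walked through above, can be sketched as follows. The names `optimised_knn` and `y_test_pred` follow the narration, while the iris dataset again stands in for the lecture's data:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Stand-in data and scaling, mirroring the lecture's setup.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

grid_search_cv = GridSearchCV(
    KNeighborsClassifier(), {"n_neighbors": list(range(1, 11))}
)
grid_search_cv.fit(scaler.transform(X_train), y_train)

# best_estimator_ is the already-refit KNN model with the winning k,
# so it can be used directly without creating a new classifier.
optimised_knn = grid_search_cv.best_estimator_

# Predict on the (scaled) test set, then build the confusion matrix:
# rows are actual labels, columns are predicted labels.
y_test_pred = optimised_knn.predict(scaler.transform(X_test))
cm = confusion_matrix(y_test, y_test_pred)
print(cm)
```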
60
00:06:01,460 --> 00:06:08,000
If we want to find out the accuracy score, which is 37 plus 22 divided by the 102 total number of

61
00:06:08,030 --> 00:06:11,990
observations, it will come out to be point five seven.

62
00:06:12,740 --> 00:06:15,320
So remember, for k equal to one,

63
00:06:15,790 --> 00:06:18,590
our accuracy was 0.53. For

64
00:06:18,620 --> 00:06:19,730
k equal to three,

65
00:06:20,030 --> 00:06:22,610
again, our accuracy was 0.53.

66
00:06:23,030 --> 00:06:27,920
But for k equal to seven, our accuracy is 0.57.

67
00:06:29,770 --> 00:06:34,100
If you want, you can give other values for k here.

68
00:06:34,630 --> 00:06:43,240
So if you want twenty or thirty, you can mention all these values in your dictionary.

69
00:06:43,450 --> 00:06:48,700
And then you can train your model on all these values using this GridSearchCV option.

70
00:06:51,180 --> 00:06:53,320
That's how you run KNN

71
00:06:53,410 --> 00:07:00,130
for multiple values of k without duplicating your code multiple times.
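The closing points above — accuracy as correct predictions (the confusion-matrix diagonal) divided by total observations, and widening the grid by just adding values to the dictionary — can be sketched like this, again on iris stand-in data rather than the lecture's dataset:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A wider grid: to also try k up to 30, just extend the list.
params = {"n_neighbors": list(range(1, 31))}
grid = GridSearchCV(KNeighborsClassifier(), params)
grid.fit(X_train, y_train)

y_pred = grid.best_estimator_.predict(X_test)
cm = confusion_matrix(y_test, y_pred)

# Accuracy = sum of the diagonal (correct predictions) / total
# observations; this matches sklearn's accuracy_score.
acc = np.trace(cm) / cm.sum()
sklearn_acc = accuracy_score(y_test, y_pred)
print(acc, sklearn_acc)
```

The two numbers agree by construction; the lecture's 0.57 comes from its own confusion matrix (37 + 22 correct out of 102).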