In this video, we are going to see how to tune the hyperparameter, that is, the cost parameter. We will try to find the value of this cost parameter that can probably get us the best test-set performance.

By the way, I hope that you are typing all these commands on your own and not just watching these videos or copy-pasting from the file that I've shared with you. I always recommend that you type with your own hand so that it stays in your memory for a longer time.

To tune the hyperparameter, we have the tune() function, which is also in the e1071 library. In this tune function, we tell it the model to be used and the formula to be used. So the first parameter is the model; we want to use an SVM model here. The second parameter is the formula: the dependent variable, then a tilde sign, and then a dot, indicating that we want to use all other variables as independent variables.

Then we have the data parameter; we want to use the training data. The kernel is still linear; we haven't discussed other types of kernels yet. And then we provide the range of the hyperparameter.

So I'm going to run this same analysis with a list of values of C; the value of C will be changed.
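The call described above can be sketched roughly as follows. This is a minimal sketch, not the exact code from the course: the dependent variable `y` and the data frame `train` are hypothetical placeholders for whatever names the course files actually use.

```r
# Tuning the SVM cost hyperparameter with e1071's tune() function.
# 'y' and 'train' are hypothetical placeholders for the actual
# dependent variable and training data frame.
library(e1071)

tune.out <- tune(
  svm,                 # the model to be tuned
  y ~ .,               # dependent variable ~ all other variables
  data   = train,
  kernel = "linear",   # still the linear kernel
  ranges = list(cost = c(0.001, 0.01, 0.1, 1, 10, 100))
)
```

tune() refits the SVM once for each value in `ranges` and records the cross-validation error for each fit.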
The same model will be run again and again, and this tune.out variable will contain the information of how the error rate changes as I change the value of C. The error rate that we are considering here is the cross-validation error rate; that is, within this training data set, our model will create some parts. By default it will create ten parts: it will train the model on nine parts and test its performance on the tenth part. It will then train the same model on another nine parts and test the performance on the remaining tenth part, and it will do this ten times, each time holding out a different part of the training set. So by default, the number of parts it will create is ten.

If you want to change the number of parts that it creates, you can add one parameter called cross, which stands for cross-validation, and specify into how many parts you want to divide the training set. Well, for now, we'll use the default value of ten only, and I'll run this command.

Now you can see that a tune.out variable has been created, which contains the information of this tuning exercise that we did. Now, to find the best model out of all these different models that we created, we use the tune.out$best.model command.
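As a sketch of how the number of cross-validation parts can be changed: in the e1071 library, tune() takes its fold count through the tune.control() helper (its argument is named `cross`). Again, `y` and `train` are hypothetical placeholders.

```r
# Changing the number of cross-validation parts from the default of 10
# to, say, 5, via tune.control(). 'y' and 'train' are placeholders.
library(e1071)

tune.out <- tune(
  svm, y ~ ., data = train, kernel = "linear",
  ranges      = list(cost = c(0.001, 0.01, 0.1, 1, 10, 100)),
  tunecontrol = tune.control(cross = 5)   # 5-fold instead of 10-fold
)
```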
So the best model is stored in this variable, and we will store it in a newly created variable, which we will call bestmodel. So we store the information of the best model in this bestmodel variable, and then we run this command.

You can see that now I have another variable which contains the information of the best model. This variable is the same as the svmfit variable; both of these have the model information of the SVM model.

So, same as with svmfit, you can use this model to predict on the test set, use those predicted values to compare against the actual values in the data set, and then find out the prediction accuracy of this model.

If you want to look at this best model, you'll run the summary() command.

If I scroll a little bit, you can see that out of all these cost values of 0.001, 0.01, 0.1, 1, 10, and 100, the best value is coming at a cost of 1. This is the same model that is stored in svmfit also.

So if I predict the values on the test set, these will be the same values. I store these values in a new variable called ypred, and I'll create this confusion matrix again, which is the same confusion matrix that we got earlier.

Now, what if you want to check out other values of cost?
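The extraction and evaluation steps above can be sketched like this. The test data frame `test` and the outcome column `test$y` are hypothetical placeholders for the course's actual names.

```r
# Extract the best model found by tune() and evaluate it on the test set.
# 'test' and the outcome column 'y' are hypothetical placeholders.
bestmodel <- tune.out$best.model
summary(bestmodel)                 # inspect the chosen model

ypred <- predict(bestmodel, test)  # predict on the test set
table(predicted = ypred, actual = test$y)   # confusion matrix
```

Because the best cost here equals the cost used for svmfit, these predictions match the earlier ones.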
So we know that the best is coming at 1. If you want to look at the cost values near 1, instead of looking at this huge range of 0.001 to 100, we can give a range of 0.1, 0.5, 0.8, 1, 2, and so on. So you can change this values range, run this command again, and find out if there is any other cost value at which we can get a better result than this.

So this is how, using this tune() function, we tune the only hyperparameter, which is the cost parameter, in a linear kernel. In the coming videos, we will move our discussion from the linear kernel to other, nonlinear kernels.
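The refined search around cost = 1 might look like the sketch below; the specific grid values are illustrative, and `y` and `train` remain hypothetical placeholders.

```r
# Re-running the search on a finer grid of cost values around 1,
# instead of the wide 0.001-to-100 grid used earlier.
library(e1071)

tune.out2 <- tune(
  svm, y ~ ., data = train, kernel = "linear",
  ranges = list(cost = c(0.1, 0.5, 0.8, 1, 2, 5, 10))
)
summary(tune.out2)   # check whether any nearby cost beats cost = 1
```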