Now, let's start building an SVM regression model in Python.

I have provided you the link to the official scikit-learn documentation for SVM regression. You can click on this link to open the documentation. Here you can find the syntax and the parameters of this function.

The first parameter here is kernel. You have to specify which kernel you want to use. By default, this is rbf; rbf stands for radial basis function. But you can also choose a linear, polynomial, or sigmoid kernel. For our example, we will use the linear kernel.

The second parameter here is degree; degree stands for the degree of the polynomial kernel. Then we have gamma. Gamma is only relevant for the radial, polynomial, or sigmoid kernels. Again, we have another parameter which is specific to the polynomial kernel: that is coef0.

And then we have C. This is the penalty parameter C, which we have already discussed during our earlier lectures. There are some other parameters as well, which we will not use while creating our model.

Now, these are the attributes. The parameters are the variables you need to provide while training your model, and the attributes contain the information of the model that you have already trained. So, for example, if you write the object name followed by .support_,
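As a quick sketch of the parameters just described, you can construct an SVR object with no arguments and inspect its defaults with `get_params()` (the toy check below assumes a recent scikit-learn version):

```python
from sklearn.svm import SVR

# Create an SVR with all defaults and inspect its hyperparameters
model = SVR()
params = model.get_params()

print(params["kernel"])  # "rbf" — the default kernel (radial basis function)
print(params["degree"])  # 3 — degree of the polynomial kernel
print(params["gamma"])   # "scale" — used by rbf, poly and sigmoid kernels
print(params["coef0"])   # 0.0 — used by poly and sigmoid kernels
print(params["C"])       # 1.0 — the penalty parameter
```

This is the same information you would see in the documentation or by pressing Shift+Tab inside the function.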
it will give you the indices of the support vectors. Similarly, if you want to view all the support vectors, you can just write your model object name followed by .support_vectors_, and it will give you all the support vectors.

So for each function, we have parameters and attributes in scikit-learn.

Now let's go back to our notebook. For every scikit-learn model, you have to follow the same steps. First, you have to import the function, which we are doing here. Then we have to create an object. This can be your regression object or classification object, created using the function that you have imported.

So here I am creating an object by calling this function, and we are providing two hyperparameters. We want a linear kernel; that's why we are providing the first parameter as kernel equal to 'linear'. And we want our C value, the penalty, to be a thousand; that's why we are providing C equal to 1000. All the other hyperparameters will use their default values. If you want to know the default values, you can open the documentation, or you can just press Shift+Tab inside your function.

First, we will run this. Now, if I press Shift+Tab inside my function, I will get all the parameters and their default values.
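A minimal sketch of the attributes just mentioned, using a tiny made-up dataset (the data values here are assumptions for illustration only): after fitting, `.support_` holds the indices of the support vectors and `.support_vectors_` holds the vectors themselves.

```python
import numpy as np
from sklearn.svm import SVR

# Tiny synthetic dataset, purely for illustration
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.1, 1.2, 1.9, 3.2, 4.1])

# Linear kernel and C=1000, as in the lecture
svr = SVR(kernel="linear", C=1000)
svr.fit(X, y)

print(svr.support_)          # indices of the support vectors
print(svr.support_vectors_)  # the support vectors themselves
```

Attributes with a trailing underscore are only available after the model has been fitted.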
In the next step, we will use our object to fit the training data. Since we have created our standardized X_train variable, we will use that variable, and we will use our y_train variable as it is. So we will write the object variable that we created earlier, and we will use the .fit() method to fit our model. If I run this, now I have fitted my model, and I can use this model to predict the values of y.

Now I can predict the y values by using the .predict() method of our object. We want to assign our predicted values to y_test_pred and y_train_pred. The first will contain my predicted values on the test data, and the second will contain the predicted values on our training data. We can directly use the .predict() method of our object, and in the brackets we have to mention the X variable. So for our test data I am providing the standardized X_test, and for the training data we are providing the standardized X_train.

Let's run this. Now I have my predicted values in these two variables. To view the predicted values, you can just type either of these two variables and execute it. You can see these are our predicted values.
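The fit-and-predict steps above can be sketched end to end as follows. The dataset here is synthetic and the variable names (`X_train_standard`, `y_test_pred`, and so on) are assumptions standing in for the ones used in the course notebook:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic data standing in for the course dataset
rng = np.random.RandomState(0)
X = rng.rand(100, 2) * 10
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.randn(100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize X, as done earlier in the course
scaler = StandardScaler().fit(X_train)
X_train_standard = scaler.transform(X_train)
X_test_standard = scaler.transform(X_test)

# Fit the SVR on the standardized training data
svr = SVR(kernel="linear", C=1000)
svr.fit(X_train_standard, y_train)

# Predict on both test and train data
y_test_pred = svr.predict(X_test_standard)
y_train_pred = svr.predict(X_train_standard)
```

Note that the scaler is fitted only on the training data and then applied to both sets, so no information from the test set leaks into training.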
Now we want to know how our model is performing. We have already predicted our test and train y values. We will use these values to find the mean squared error and the R-squared score. With scikit-learn, we get both of these scores. We will import these two scores and use our predicted values to compute them.

Now, if you are not aware what mean squared error means: it is the mean of the squares of the differences between the predicted values and the actual values.

If you want to know what parameters we need to execute this, you can just press Shift+Tab here. You can see the first parameter is the true values. Since our actual values are in y_test, we will write y_test. And the second parameter here is the predicted values. Since we have our predicted values in y_test_pred, we will use this variable, and run this.

The MSE value on our test dataset is 160 million. Now, the thing with MSE is that you can only compare model performances on the same data. This is an absolute number, so you can only compare it with the MSE value of some other model on this data. You do not get any information by just looking at this number.
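The MSE definition above can be checked with a tiny worked example (the values are made up for illustration): the differences are 0.5, -0.5, and 1.0, their squares are 0.25, 0.25, and 1.0, and the mean of those squares is 0.5.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical actual and predicted values
y_test = np.array([3.0, 5.0, 7.0])
y_test_pred = np.array([2.5, 5.5, 6.0])

# mean_squared_error(y_true, y_pred): the first argument is the
# true values, the second the predictions
mse = mean_squared_error(y_test, y_test_pred)
print(mse)  # 0.5

# Same result computed by hand: mean of squared differences
print(np.mean((y_test - y_test_pred) ** 2))  # 0.5
```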
So the better option here is to use the R-squared value to evaluate the performance of the model.

Now, again, to look at the parameters that we need for our r2_score function, just press Shift+Tab. Here also we need to provide the true values and the predicted values as our parameters.

We'll run this. The R-squared value on our training set is 0.71. This is quite good.

Now, if you don't know about R-squared: the R-squared value goes up to a maximum of 1, where 1 means a perfect fit and 0 means no fit. In other words, 1 means your model is able to explain all the variation in your y data, and 0 means your model is not able to explain any of the variation in the y variable; for a very poor model, R-squared can even be negative. Ideally, your R-squared value should be between 0.5 and 0.9 for a good model.

For our test dataset, the R-squared value is 0.5. Again, this is good. We can vary our C value to try to increase our R-squared value. So if you want, you can just provide another C value, suppose 3000, and rerun this whole code. You can see that now the R-squared values on our training data and on our test data have changed.

So I recommend that you try the value of C as 500.
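A small worked example of the r2_score call just described, with made-up values: the predictions below track the true values closely, so R-squared comes out near 1.

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical actual and predicted values
y_test = np.array([1.0, 2.0, 3.0, 4.0])
y_test_pred = np.array([1.1, 1.9, 3.2, 3.9])

# r2_score(y_true, y_pred): 1 is a perfect fit, 0 means the model
# explains none of the variance in y
r2 = r2_score(y_test, y_test_pred)
print(r2)
```

As with mean_squared_error, the true values come first and the predictions second.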
And try to find out the R-squared value for C equal to 500.

From the next lecture, we'll move on to classification, and there we will see advanced techniques to optimize this value of C to get the best results.
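To try the suggested C value of 500 alongside the ones used in the lecture, you can simply refit the model in a loop and compare the R-squared scores. The data here is a synthetic stand-in (an assumption); in the course notebook you would use your own X and y variables:

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data; replace with the course dataset
rng = np.random.RandomState(0)
X = rng.rand(200, 2) * 10
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.randn(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)

# Refit with several C values, including the suggested 500,
# and record the R-squared score on the test data for each
scores = {}
for C in (500, 1000, 3000):
    svr = SVR(kernel="linear", C=C).fit(X_train_std, y_train)
    scores[C] = r2_score(y_test, svr.predict(X_test_std))
    print(C, scores[C])
```

This manual loop is only a preview; the upcoming lectures cover more systematic ways to tune C.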