Welcome back.

In the last lecture, we saw how to compile and train our model. In this lecture, we will learn how to evaluate the performance of our model and how to predict classes on a new dataset using our trained model.

First, to evaluate the performance of our model, we have to use the evaluate method. So just write your model object name, which in our case is model, and then we are calling the evaluate method. Inside evaluate, we have to give our test datasets, so we are writing X_test for the independent test dataset and then y_test for the dependent test dataset.

So let's just run this.

The output here is first the loss and then the accuracy. This is accuracy because this is the metric which we mentioned when compiling our model, if you remember. While compiling, we mentioned metrics equal to accuracy, and that's why the second value which we are getting here is the accuracy.

So on our test set, the accuracy which we are getting is 86 percent, which is really good compared to logistic regression or decision trees on the same dataset. With those models, you would get around 70 to 80 percent accuracy, but here with an ANN we are getting 86 percent accuracy.

So once again: to check the accuracy score, or just to check the performance of your model, you have to use the evaluate method.
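To make the two numbers that evaluate returns concrete, here is a minimal numpy sketch of what they represent. The probability rows and labels below are made up for illustration, not the lecture's actual output; a real call would just be model.evaluate(X_test, y_test).

```python
import numpy as np

# Hypothetical predicted probabilities for 4 test records over 3 classes,
# plus their true labels -- stand-ins for a real model's test-set output.
probs = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.2, 0.2],
])
y_true = np.array([0, 1, 2, 1])

# Loss: mean sparse categorical cross-entropy -- the first value
# evaluate() prints. It averages -log(probability of the true class).
loss = -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))

# Accuracy: fraction of records whose highest-probability class
# matches the true label -- the second value evaluate() prints.
accuracy = np.mean(np.argmax(probs, axis=1) == y_true)

print(round(loss, 4), accuracy)  # loss ~0.7764, accuracy 0.75
```

The lower the first number and the higher the second, the better the model is doing on the test set.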
Now let's learn how to predict the probabilities and how to predict the classes on new, unseen data.

Since right now we do not have any new data, we are just taking the first three samples from our test dataset and considering them as our new, unseen data. So we have taken the first three records from our test dataset and we are treating them as our new records.

There are two things which you can predict: the first one is the probability of each class, and the second one is the class itself.

Let's first calculate the probability score assigned to each class. To do that, we can use the dot predict method. So we'll write our object name, model, then dot predict, and as an argument we are passing our new, unseen data, which is X_new.

Let's just run this.

You can see our X_new contains three records; that's why we are getting output for three records. And there are ten values in each of these elements, representing the probability values of the corresponding classes.

So for the first record, the maximum probability is for the label at position nine, and the probability value is about 0.9.
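The shape of what predict returns can be sketched without a trained model. In the snippet below, X_test and the probabilities are faked with random numbers just to show the mechanics; in the actual notebook you would call probs = model.predict(X_new) instead of building the softmax by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the test set: 10 records with 28x28 = 784 features each.
X_test = rng.random((10, 784))

# Treat the first three test records as our "new, unseen" data.
X_new = X_test[:3]

# Fake the model output with random logits pushed through a softmax,
# so each row behaves like a Keras probability vector over 10 classes.
logits = rng.normal(size=(len(X_new), 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

print(probs.shape)         # (3, 10): one row per record, one column per class
print(np.round(probs, 2))  # round to two decimals, as in the lecture
```

Each row sums to 1, which is why the largest entry in a row can be read as the model's most confident class.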
For the second record, the probability is highest for the label at the third position, and for the third record, the probability is highest for the label at the second position.

We are also using the round method just to round off these probability values to two decimals. Let's run this.

Now, with dot predict, we are getting probabilities. But if we want to directly predict the class and not the probability scores, we can use the dot predict_classes method.

The syntax is almost the same: first, you have to write your model object name and then call the predict_classes method, and inside it you have to mention your new, unseen data.

Just run this.

You can see here we are getting an array of the labels. So for the first record, the class label is nine. You can also confirm it from here: the maximum probability was assigned to this position, and here also we are getting the position as nine. Remember, the numbering starts from zero, so this one is zero, this one is one, and so on, and this one is nine.

For the second record, we are getting the predicted class as two, and you can also see that the probability was maximum for this record at position two: zero, one, two.
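Under the hood, predict_classes just picks the column with the largest probability in each row. Note that newer Keras versions have removed predict_classes, and taking the argmax of predict's output does the same job. A small sketch with made-up probability rows shaped like the lecture's example:

```python
import numpy as np

# Rounded probability rows like the lecture's output: the largest value
# sits at index 9, then 2, then 1 (numbers invented for illustration).
probs = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.01, 0.98],
    [0.0, 0.0, 0.97, 0.01, 0.01, 0.0, 0.01, 0.0, 0.0, 0.0],
    [0.01, 0.96, 0.01, 0.01, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0],
])

# predict_classes is equivalent to "which column holds the maximum" per row:
y_pred = np.argmax(probs, axis=1)
print(y_pred)  # [9 2 1]
```

So in current TensorFlow you would write np.argmax(model.predict(X_new), axis=1) to get the same labels.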
So that is the second output. For the third record, we are getting the predicted class as one, and in the probabilities also, you can see that the maximum probability corresponds to position number one.

Now, again, it's a little bit difficult to interpret results like this: nine, two, one. So let's just get the class descriptions instead of these numerical labels. We will do the same thing we did earlier: we have already created a list called class_names, and we are just passing our predictions into that list. So if you run this, instead of these labels you will get the description of each record.

Now let's just confirm this by plotting the image of our first new record. So the position is zero, and we are using imshow.

You can see this photo looks like an ankle boot, and we also predicted ankle boot. For the second record, where the position is one, you can see that this is a pullover, and we also predicted pullover. For the third record, the position is two; you can see that this is a trouser, and we also predicted trouser.

So that's how we predict with our model. We can use evaluate to check the performance of the model on the test set, and to predict values for new records we can either use the dot predict method or the dot predict_classes method.
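Turning the numeric labels into descriptions is plain list indexing. A short sketch, assuming the list is called class_names (as earlier in the course) and holds the standard Fashion MNIST category names:

```python
import numpy as np

# The standard Fashion MNIST label descriptions, indexed 0..9.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# Predicted labels for the three "new" records, as in the lecture.
y_pred = np.array([9, 2, 1])

# Index the list with each prediction to get a readable description.
descriptions = [class_names[i] for i in y_pred]
print(descriptions)  # ['Ankle boot', 'Pullover', 'Trouser']
```

This is exactly the mapping being confirmed when the images are plotted with imshow.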
The predict method will give us the probability values assigned to each class for each record, while the predict_classes method will directly give us the most probable class for each record.

So that's how we create, train, and predict using an ANN model. Now, just for a quick summary:

We started with importing and installing TensorFlow. Then we imported the Fashion MNIST dataset with ten categories. Then we divided that dataset into training and validation sets, and before that we also normalized our dataset.

After that, we created the architecture for our model; this is where you mention the activation functions and the number of neurons you want in your layers. After that, we compiled our model to give the loss function, the optimizer, and the metrics we want to calculate. And after that, we trained our model using the dot fit method.

Once our model is trained, we can use the history attribute to get all the loss and accuracy values for each epoch. After that, we can use the evaluate method to check the accuracy score on our test dataset, and we can also predict values for new records using both the dot predict and the dot predict_classes methods.