Welcome back. In the last lecture we saw how to compile and train our model. In this lecture we will learn how to evaluate the performance of our model and how to predict classes on a new dataset using our trained model.

First, to evaluate the performance of our model we have to use the evaluate method. So just write your model object name, which in our case is model, and then call the evaluate method. Inside evaluate we have to give our test datasets, so we are writing X_test for the independent test data and then y_test for the dependent test data. Let's run this.

The output here is first the loss and then the accuracy. The second value is the accuracy because that is the metric we mentioned while compiling our model, if you remember: while compiling we set metrics equal to accuracy, and that is why the second value we are getting here is the accuracy. On our test set the accuracy we are getting is 86 percent, which is really good compared to logistic regression or decision trees on the same dataset. With those models you will get around 70 to 80 percent accuracy, but here with the neural network we are getting 86 percent. So once again, to check the accuracy score, or simply the performance of your model, you have to use the evaluate method.

Now let's learn how to predict the probabilities and how to predict the classes on new unseen data. Since right now we do not have any new data, we are just taking the first three samples from our test dataset and treating them as our new unseen data. Let's run this. So we have taken the first three records from our test dataset and we are considering them as our new records.

There are two things you can predict: the first is the probability of each class, and the second is the class itself. Let's first calculate the probability score assigned to each class. To do that we can use the predict method. So we will write the object name, model, then call the predict method, and as an argument we are passing our new unseen data. Let's run this. You can see our new data contains three records, which is why we are getting output for three records, and there are ten values in each of these rows, representing the probability values of the corresponding classes.
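For reference, here is a minimal sketch of these two steps in Keras, assuming a compiled model object named model and test arrays X_test and y_test (the exact variable names are my assumption, not taken verbatim from the lecture notebook):

```python
# Evaluate the trained model on the held-out test set.
# Returns [loss, accuracy] because the model was compiled with metrics=["accuracy"].
test_loss, test_acc = model.evaluate(X_test, y_test)
print("Test accuracy:", test_acc)

# Treat the first three test samples as "new" unseen records.
X_new = X_test[:3]

# .predict returns one row per record with ten class probabilities.
probabilities = model.predict(X_new)
print(probabilities.round(2))   # rounded to two decimals for readability
```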
For the first record, the maximum probability is for the tenth label (position 9) and the probability value is around 0.9. For the second record the probability is 1 for the label in the third position, and for the third record the highest probability is for the second position. We are also rounding these probability values off to two decimal places so they are easier to read.

So with predict we are getting probabilities. But if we want to directly predict the class, and not the probability scores, we can use the predict_classes method. The syntax is almost the same: first you write your model object name, then call the predict_classes method, and inside it you pass your new dataset. Let's run this.

You can see here we are getting an array of the labels. For the first record the class label is 9, and you can confirm it from the probabilities as well: the maximum probability was assigned to this position, and here we are also getting the position as 9. Remember that the numbering starts from zero, so this one is 0, this one is 1, and so on, and this one is 9. For the second record we are getting the predicted class as 2, and you can also see that the maximum probability was at that position. For the third record we are getting the predicted class as 1, and in the probabilities you can again see that the maximum probability corresponds to position number 1.

Now, it is a little bit difficult to interpret results like this: 9, 2, 1. So let's get the class description instead of these numerical labels. We will do the same thing we did earlier: we have already created a list called class_names, and we are just passing our predicted labels to that list. If you run this, instead of these numeric labels you will get the description of each record.

Now let's confirm this by plotting the image of our first new record. The position is zero and we are using imshow. Let's run this. You can see this image looks like an ankle boot, and we also predicted ankle boot. For the second record, where the position is one, you can see that this is a pullover.
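As a rough illustration of this class-prediction and plotting step: predict_classes was a method on Keras Sequential models that has since been removed in newer TensorFlow releases, where taking the argmax of model.predict is the usual replacement, so the sketch below shows both. The class_names list is the standard Fashion MNIST label set, and X_new is assumed from the previous step:

```python
import numpy as np
import matplotlib.pyplot as plt

# Predicted class index for each new record.
# On older Keras versions this was: y_pred = model.predict_classes(X_new)
y_pred = np.argmax(model.predict(X_new), axis=-1)
print(y_pred)                      # e.g. [9 2 1]

# Map the numeric labels to their Fashion MNIST descriptions.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print([class_names[i] for i in y_pred])

# Confirm visually: plot the first new record.
plt.imshow(X_new[0], cmap="binary")
plt.show()
```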
And we also predicted pullover. For the third record, let's run this. You can see that this is a trouser, and we also predicted trouser.

So, to sum up: once we have a trained model, we can use the evaluate method to evaluate the performance of the model on the test set, and to predict values for a few records we can use either the predict method or the predict_classes method. The predict method gives us the probability values assigned to each class for each record, while predict_classes gives us the most probable class for each record.

So that is how we create, train and predict with an MLP model. Now, just for a quick summary: we started by installing and importing TensorFlow, then we imported the Fashion MNIST dataset, which has 10 categories. Then we divided that dataset into train, validation and test sets, and we also normalized it. After that we created the architecture for our model; this is where you specify the activation functions and the number of neurons you want in your layers. Then we compiled our model, giving the loss function, the optimizer and the metrics we want to calculate, and after that we trained the model using the fit method. Once the model is trained, we can use the history object to get all the loss and accuracy values for each epoch. After that we can use the evaluate method to check the accuracy score on the test dataset, and we can also predict values for new records using both the predict and predict_classes methods.
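To tie this recap together, here is a compact end-to-end sketch of the workflow in Keras; the layer sizes, optimizer, epoch count and validation split are illustrative assumptions rather than the exact values used in the lecture notebook:

```python
import tensorflow as tf
from tensorflow import keras

# 1. Load the Fashion MNIST dataset (10 categories).
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()

# 2. Normalize pixel values to [0, 1] and carve out a validation set.
X_train_full, X_test = X_train_full / 255.0, X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]

# 3. Architecture: layers, number of neurons, activation functions.
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# 4. Compile: loss function, optimizer, metrics to track.
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])

# 5. Train, keeping the per-epoch loss and accuracy history.
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))
print(history.history.keys())   # per-epoch loss/accuracy curves

# 6. Evaluate on the test set, then predict for a few new records.
print(model.evaluate(X_test, y_test))
print(model.predict(X_test[:3]).round(2))
```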