Now our model is trained. You can see that it has run for all 30 epochs.

The loss value we are getting on the training data is 0.2, and the accuracy is nearly 92 percent, which is very impressive. That is for the training data; on the validation set, the loss is 0.25 and the accuracy is nearly 90 percent.

Using the callbacks, we would have stored the best model out of all 30 epochs. But you can see that this last epoch is already sufficiently good: it is giving us 90.9 percent accuracy. So we are going to take the performance of this particular epoch.

Another thing to notice is that the validation loss and validation accuracy have not stopped improving. There are some dips here, but you can see that overall the accuracy curve is still going up for the validation set as well. For the training set it will definitely keep going up, but even for the validation set the loss is still going down a little. So it would make more sense to run this for more epochs and, at the same time, use an early stopping criterion, so that we can train up to the point where we get the best accuracy.

Now, let us check the performance of this last trained model, the final values at 30 epochs. How does it perform on our test set? We will use the evaluate function and store the result in cnn_score.
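The checkpoint-plus-early-stopping setup mentioned here could be sketched as follows. The filename, the monitored metric, and the patience value are illustrative assumptions, not the lecture's exact settings, and the fit call is commented out because it assumes the model and data from earlier in the course:

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Save the best model seen across all epochs (filename is illustrative).
checkpoint = ModelCheckpoint("best_model.h5",
                             monitor="val_loss",
                             save_best_only=True)

# Stop training once validation loss stops improving for `patience`
# consecutive epochs (patience=5 is an assumed value).
early_stop = EarlyStopping(monitor="val_loss",
                           patience=5,
                           restore_best_weights=True)

# model.fit(train_images, train_labels, epochs=100,
#           validation_split=0.2, callbacks=[checkpoint, early_stop])
```

With `restore_best_weights=True`, the model ends up with the weights from its best validation epoch, which is exactly the "go up to the point where we get the best accuracy" behavior described above.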
So let us evaluate on the test images and test labels. Now we can print out the loss value and the accuracy value on our test set, and the accuracy is coming out to be 90 percent.

So you can see that there is a small improvement just by adding one convolutional and one pooling layer to our network. Earlier, without these two layers, when we ran a plain artificial neural network, we had an accuracy ranging from 85 percent to 87 percent on the test set. Now we are getting about a 4 percent increase in test accuracy, and it is at 90 percent. By adding more convolutional layers, extracting more features, and applying more filters, there is a chance that we can further increase this accuracy, maybe even to 95 percent.

You can also use this trained model to predict on new data. Here I am going to predict on the test images. The predicted classes will be stored in class_pred, and we use the predict_classes function for that.

I am printing out the first 20 values in this class_pred variable. The first image is classified as class nine by our model. To give names to these predicted values, if you remember, we created an array of class names earlier; this is where we are going to use it. We are going to use class_names to give names to these predicted values.
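A minimal sketch of this evaluate-then-predict flow is below. The variable names `cnn_score` and `class_pred` follow the lecture; the model calls are shown as comments because they assume the trained model, and the predicted indices used here are simulated purely to show the class-name lookup:

```python
import numpy as np

# The Fashion MNIST class names created earlier in the course.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# In the lecture:
#   cnn_score = model.evaluate(test_images, test_labels)
#   print("Test loss:", cnn_score[0], "Test accuracy:", cnn_score[1])
#   class_pred = model.predict_classes(test_images)
# Note: in newer Keras versions predict_classes was removed; the
# equivalent is np.argmax(model.predict(test_images), axis=1).

# Simulated predicted class indices, just to demonstrate the lookup:
class_pred = np.array([9, 2, 1, 1, 6, 1, 4, 6, 5, 7])
pred_names = [class_names[i] for i in class_pred]
print(pred_names[0])  # -> Ankle boot (class nine)
```

Indexing the `class_names` array with each predicted integer is all that "giving names to the predicted values" amounts to.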
So these are the first 20 predictions by our model. What are the actual values for these first 20 images? These are the actual values, and you can see that most of the observations match exactly. These are the predicted values, these are the actual values, and most of them are the same.

I can identify this one: it is predicted as a sandal, but it is actually a sneaker. And there is no other observation that is predicted wrongly.

If you want to plot this particular image, the one that is getting predicted wrongly, we can use this plot function here. I will have to mention which image I want to plot. This is image 13, so I put 13 here and run it. And this is the image: our model is predicting sandal, and it is actually a sneaker. I think this could happen because of the similar characteristics of a sandal and a sneaker. So for most of the predictions, our model is predicting very well.

This is how we add convolutional layers and pooling layers to make a convolutional neural network. A convolutional neural network also includes part of the multi-layer perceptron that we built earlier. So the initial few layers are convolutional layers.
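The plotting step could look like the sketch below. `plot_prediction` is a hypothetical helper name (the lecture's actual function is not shown here), and index 13 is the sandal/sneaker mix-up discussed above:

```python
import matplotlib.pyplot as plt

def plot_prediction(images, actual, predicted, class_names, index):
    """Show one test image with its predicted and actual class names."""
    title = (f"Predicted: {class_names[predicted[index]]}, "
             f"Actual: {class_names[actual[index]]}")
    plt.imshow(images[index], cmap="binary")
    plt.title(title)
    plt.axis("off")
    return title

# In the lecture, image 13 is the one misclassified as a sandal:
# plot_prediction(test_images, test_labels, class_pred, class_names, 13)
```

Passing a different index plots any other test image alongside its predicted and actual labels.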
After those, we have a normal neural network which gives us the final predictions.

This was a simple example. Fashion MNIST is a standard dataset that everyone works on when they start learning deep learning. In the coming lectures, we will take up one project, try to build a convolutional network on it, and improve its accuracy, as we have to do in real-life problems.
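The overall structure described in this lecture, a convolutional front end feeding the multi-layer perceptron built earlier, could be sketched like this. The filter count, kernel size, and dense-layer width are illustrative choices, not necessarily the lecture's exact values:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolutional front end: extracts local features from 28x28 images.
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    # The "normal neural network" part: the earlier multi-layer perceptron.
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per clothing class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Adding more Conv2D/MaxPooling2D pairs before the Flatten layer is the "more convolutional layers, more filters" direction suggested above for pushing the accuracy higher.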