In the last video, we trained our model. This is what happened at each epoch: we got the accuracy and the loss. This was for the training, and the validation values were also plotted, on the right-hand side.

You can see in this graph that the top plot is for loss. It has a blue line for the loss on the training set, and this green line is for the validation loss. The plot at the bottom is for accuracy: this blue line is for the training accuracy and the green line is for the validation accuracy.

You can see that after the first few epochs there is not much improvement in the accuracy. However, we have still run this training for 30 epochs.

Now, as I said in the beginning, the real performance of a model is gauged on previously unseen data. This is the reason we kept a set of 10,000 observations separately. So now we can predict the class for these test images and compare the predictions against the actual classes of those images.

Just a reminder: the training accuracy that we have achieved is nearly 87 percent. Now we are going to see what accuracy is achieved on the test set.

Now, when we are looking at this performance, instead of just giving the test images and getting the predicted values, we can give both the images and the actual labels, using the evaluate function.
Our model can then straightaway evaluate and show us the loss and the accuracy of our model on the test set.

So I store the result of this evaluate function in the score variable. This evaluate takes two parameters: the test images and the test labels. It uses the model to predict on the test images and compares the predicted labels against these test labels.

Now, if you want to see the test loss and test accuracy, you can run these two lines of code. You can see that the loss on the test set is 0.41, whereas on the training set it was 0.37. And the accuracy on the test set is nearly 85 percent, whereas it was 87 percent on the training set. This means that there is a little bit of overfitting, but we will not talk about that in detail here.

Now, if you are interested in the actual predictions of the classes on the test set, we can use the predict function and just give it the test images. So this is the predict function, and it just takes the test images as input. We do not need to give it the test labels; when we give it the test images, it applies the model on them and gives us the predictions.

You can see that we have predictions for all ten thousand test images, and there are ten values corresponding to each prediction. Let's look at those ten values.
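A sketch of this evaluation and prediction step in the R keras interface might look like the following; `model`, `test_images`, and `test_labels` are assumed to be the objects prepared earlier in the course:

```r
library(keras)

# Evaluate on the held-out test set: returns the loss and the
# metric we compiled the model with (here, accuracy).
score <- model %>% evaluate(test_images, test_labels)
score  # prints the test loss and the test accuracy

# Predict class probabilities for every test image.
predictions <- model %>% predict(test_images)

dim(predictions)  # 10000 x 10: one row per image, one column per class
```

The key point is that `evaluate` needs both the images and the true labels, while `predict` needs only the images.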
For the first image: now, since the output layer we used had the softmax function, if you look at the first set of predictions, it is a set of ten probabilities. The first is the probability of the first test image belonging to the first class, the next is the probability for the second class, and so on.

To find out which of these probabilities is the largest, so that we can assign that class to the image, we use the which.max function. You can see here which.max, and we input this predictions variable.

When I have done this, it tells me that the tenth value, which is this 7.66e-01, is the largest probability. So we want to assign the tenth class to this image.

To find out what class is associated with this tenth position: if you remember, we created a class_names vector which stored the names of the classes. Here it is. So we want to find out the tenth element of this vector. We just pass the output of the which.max function into this class_names vector. So basically, which.max returns the position, and corresponding to that position we find out the class name from this vector.

So when we have done this, it tells us that the class is ankle boot. So basically, our prediction is that the first image in the test set is of an ankle boot.
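The which.max lookup described above can be sketched like this, assuming `predictions` from the predict step and the standard Fashion-MNIST class names in label order (the exact `class_names` contents come from earlier in the course):

```r
# The ten Fashion-MNIST class names, in label order (labels 0-9).
class_names <- c("T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
                 "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot")

# The ten probabilities for the first test image.
first_probs <- predictions[1, ]

# which.max returns the position (1-10) of the largest probability...
pos <- which.max(first_probs)

# ...and indexing class_names with that position gives the class name.
class_names[pos]  # "Ankle boot" for the first test image in the lecture
```

Note the off-by-one convention here: R positions run from 1 to 10, while the dataset's labels run from 0 to 9, which is exactly the point made about label 9 below.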
If you want to check, let's look at that image. We will again use the plot function, and we plot this first image of the test set. So when we plot this image, you can have a look at it: this looks like an ankle boot, which is also our prediction. So this looks correct.

Lastly, instead of predicting probabilities, you can straightaway predict the classes, using the predict_classes function. Here is how we do that: we create a new variable called class_pred, and this gets all the predicted classes on the test images. This function also takes only one parameter, that is the input test images.

Once we have done this, we look at the first twenty predictions. You can see that for the first image we have predicted class 9. So you may think: earlier we predicted the tenth position, so why is this number 9? Well, ten is the position of the class label, and the class is actually labeled 9, because the class labels start from zero. So both of these represent the same thing: the class labeled 9 is ankle boot only.

So these are all the class predictions. So we have both methods. The first is to predict the probabilities.
Then use those predicted probabilities, find out the maximum of those, and the class having the maximum probability will be the predicted class. Or you can straightaway use the predict_classes function to get the predicted classes on the new test set.

That's all we need to do. So we have created a complete neural network classification model.

So here's a summary of whatever we did to build this classification model. We installed the keras package and activated it using library(); using install_keras(), it also got us TensorFlow.

We then imported the data, and we pre-processed and normalized it. The data was in two parts: it had a train part and a test part. We used the train part to train our model.

When we were defining our model, we had three parts. One was to give the structure of the model. The second was configuring the learning process. And the third was training the model using the fit function.

Once the model was trained, we checked its performance on the test set, and we found that our model is giving an overall performance of 85 percent, with which we were satisfied.

So, following this entire process, we have created a classification model using neural networks.
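The whole workflow summarized above can be sketched end to end. This is a condensed outline under stated assumptions, not the exact course code: the layer sizes, optimizer, and loss are illustrative choices for Fashion-MNIST, and predict_classes is the function from the keras version used in the lecture:

```r
library(keras)
# install_keras()  # one-time setup; also installs TensorFlow

# 1. Import and pre-process: scale pixel values into [0, 1].
fashion <- dataset_fashion_mnist()
train_images <- fashion$train$x / 255
train_labels <- fashion$train$y
test_images  <- fashion$test$x / 255
test_labels  <- fashion$test$y

# 2. Give the structure of the model.
model <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28)) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")

# 3. Configure the learning process.
model %>% compile(
  optimizer = "adam",
  loss = "sparse_categorical_crossentropy",
  metrics = "accuracy"
)

# 4. Train, then gauge performance on the unseen test set.
model %>% fit(train_images, train_labels, epochs = 30,
              validation_split = 0.2)
model %>% evaluate(test_images, test_labels)

# Direct class predictions (older keras versions; newer ones drop
# predict_classes in favour of applying which.max to predict() output).
# class_pred <- model %>% predict_classes(test_images)
```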