1 00:00:00,720 --> 00:00:06,120 To train our model, we are creating another object, model_history. 2 00:00:06,270 --> 00:00:11,100 And we are going to fit our model using the X_train and y_train data. 3 00:00:11,850 --> 00:00:16,610 We want the number of epochs to be twenty and the batch size to be 64. 4 00:00:17,520 --> 00:00:24,510 And we also have a validation set, in which we have the X_valid and y_valid datasets. 5 00:00:26,040 --> 00:00:27,330 So let's run this. 6 00:00:33,200 --> 00:00:34,730 This may take around ten minutes. 7 00:00:35,020 --> 00:00:35,840 Ten minutes. 8 00:00:45,470 --> 00:00:49,580 Now, as you can see, this will take around ten to fifteen minutes. 9 00:00:50,000 --> 00:00:57,350 So if you have a system with a low configuration, I recommend you to use five epochs only. 10 00:01:00,390 --> 00:01:07,410 Here you can also see we are getting training loss, training accuracy, validation loss and validation 11 00:01:07,410 --> 00:01:08,040 accuracy. 12 00:01:09,690 --> 00:01:17,670 And you can notice that I have not used any callbacks, but it is recommended to use callbacks to save 13 00:01:17,730 --> 00:01:19,980 the model after each epoch. 14 00:01:21,300 --> 00:01:25,140 It is better to use an early stopping callback before running this model. 15 00:01:43,750 --> 00:01:46,420 So our training is about to complete. 16 00:01:47,980 --> 00:01:54,550 You can see on our train set, we have an accuracy score of zero point nine two. 17 00:01:55,330 --> 00:01:58,330 So around 92 percent accuracy in the last epoch. 18 00:01:58,870 --> 00:02:03,640 And the validation accuracy is around 88 percent. 19 00:02:06,980 --> 00:02:13,490 We can also plot the epoch-wise training and validation accuracy on a plot. 20 00:02:15,560 --> 00:02:21,620 So all this accuracy information, along with the epoch values, is stored in model_history.history. 21 00:02:23,210 --> 00:02:29,960 So we are going to plot model_history.history on a graph and view the plot.
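[Editor's sketch] The fit call and the recommended callbacks described above can be sketched as follows. The names model_history, X_train, y_train, X_valid and y_valid follow the lecture; the synthetic data, the tiny network, and epochs=2 (the lecture uses 20) are placeholder assumptions so the sketch runs quickly:

```python
# Sketch of the training step from the lecture. The synthetic arrays stand in
# for the real Fashion MNIST data; epochs is reduced from 20 to 2 for speed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_train = rng.random((64, 28, 28, 1), dtype=np.float32)  # stand-in images
y_train = rng.integers(0, 10, size=64)                   # 10 class labels
X_valid = rng.random((16, 28, 28, 1), dtype=np.float32)
y_valid = rng.integers(0, 10, size=16)

# Single conv layer + pooling before the dense part, as in the lecture
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Callbacks the lecture recommends: save the model after each epoch,
# and stop early once validation stops improving.
callbacks = [
    keras.callbacks.ModelCheckpoint("model_checkpoint.keras"),
    keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
]

model_history = model.fit(X_train, y_train,
                          epochs=2, batch_size=64,   # lecture: epochs=20
                          validation_data=(X_valid, y_valid),
                          callbacks=callbacks, verbose=0)
print(sorted(model_history.history))
```

model_history.history then holds the per-epoch loss and accuracy for both the training and validation sets, which is what the lecture plots next.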
22 00:02:33,050 --> 00:02:41,510 You can see the training loss is decreasing and the validation loss is also decreasing, and both validation 23 00:02:41,540 --> 00:02:45,950 accuracy and training accuracy are increasing. 24 00:02:49,100 --> 00:02:53,900 You can also identify that the model has still not converged. 25 00:02:54,230 --> 00:03:02,810 And if we run it for a few more epochs, both training and validation accuracy are going to rise 26 00:03:03,020 --> 00:03:03,950 to a certain point. 27 00:03:06,260 --> 00:03:14,470 I think it will be better to run it for around 70 or 80 epochs with an early stopping callback. 28 00:03:17,520 --> 00:03:22,940 But for now, we will evaluate our model on our test data. 29 00:03:24,620 --> 00:03:31,310 If you remember, for our ANN model, without any conv layer, we were getting an accuracy of around 30 00:03:31,340 --> 00:03:33,410 86 percent on our test data. 31 00:03:34,730 --> 00:03:40,800 So now let's check how our convolutional neural network is performing on our test data. 32 00:03:45,790 --> 00:03:50,180 We will use the evaluate method, and we are receiving this value 33 00:03:50,410 --> 00:03:53,250 in another variable, that is ev. 34 00:03:53,770 --> 00:03:55,960 So let's look at the value. 35 00:03:57,640 --> 00:04:00,580 You can see the loss value is around zero point three. 36 00:04:00,940 --> 00:04:04,590 And we are getting an accuracy of around 88 percent. 37 00:04:07,880 --> 00:04:16,220 So as compared to the neural network, we are getting around a two percent increase in accuracy on our 38 00:04:16,220 --> 00:04:18,970 test dataset using the CNN model. 39 00:04:20,240 --> 00:04:25,430 And another thing to notice is that we have just used a single convolutional layer. 40 00:04:26,780 --> 00:04:34,520 If we use multiple convolutional layers and if we run it for a few more epochs, we will definitely 41 00:04:34,520 --> 00:04:37,460 get better accuracy on our test dataset.
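[Editor's sketch] The plotting and evaluation steps above can be sketched as follows. The names ev, X_test and y_test follow the lecture; the synthetic data and tiny model are placeholder assumptions, and matplotlib's Agg backend is used so the sketch runs without a display:

```python
# Sketch of plotting model_history.history and evaluating on test data.
# Synthetic data and a tiny model stand in for the lecture's real setup.
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")            # headless backend: write the figure to a file
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_train = rng.random((64, 28, 28, 1), dtype=np.float32)
y_train = rng.integers(0, 10, size=64)
X_test = rng.random((16, 28, 28, 1), dtype=np.float32)
y_test = rng.integers(0, 10, size=16)

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model_history = model.fit(X_train, y_train, epochs=2, batch_size=32,
                          validation_split=0.25, verbose=0)

# Epoch-wise training/validation loss and accuracy curves
pd.DataFrame(model_history.history).plot()
plt.xlabel("epoch")
plt.savefig("history.png")

# evaluate returns [loss, accuracy]; the lecture stores it in ev
ev = model.evaluate(X_test, y_test, verbose=0)
print(ev)
```

With the real trained model, ev comes out to roughly [0.3, 0.88], matching the loss and accuracy values quoted in the lecture.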
42 00:04:41,120 --> 00:04:45,750 Now, we have already seen how to predict the classes for any new data. 43 00:04:48,380 --> 00:04:54,290 The method we are going to use is predict_classes, and then we have to provide a new 44 00:04:54,290 --> 00:04:54,820 dataset. 45 00:04:56,000 --> 00:05:03,820 As of now, I don't have any new dataset, so I'm just taking the first three objects from my test dataset. 46 00:05:05,150 --> 00:05:12,250 And then we are going to store the predicted classes in the y_pred variable, and we are returning 47 00:05:12,250 --> 00:05:12,530 it. 48 00:05:14,510 --> 00:05:24,950 So for the first object, the predicted class label is nine, which means it is an ankle boot, and for the 49 00:05:24,950 --> 00:05:29,000 second object, the predicted class is two, and for the third object, 50 00:05:29,030 --> 00:05:30,530 the predicted class is one. 51 00:05:31,700 --> 00:05:37,640 Now let's see the actual values. We have the actual values in y_test. 52 00:05:38,420 --> 00:05:45,110 So if we run this, you can see the predicted values are the same as the actual values. 53 00:05:46,220 --> 00:05:50,060 You can also look at the image by using plt 54 00:05:50,120 --> 00:05:50,810 dot imshow. 55 00:05:52,130 --> 00:05:55,960 Just remember to reshape it again from 3D to 2D. 56 00:05:56,870 --> 00:05:59,740 And after that, you can use the plt dot 57 00:05:59,960 --> 00:06:04,640 imshow method to plot the object image. 58 00:06:06,260 --> 00:06:11,150 And for the first object, the class label was nine, which is an ankle boot. 59 00:06:12,020 --> 00:06:17,030 And in the image also, we can see that the image looks like that of an ankle boot. 60 00:06:18,800 --> 00:06:20,560 So all these steps are similar to 61 00:06:20,720 --> 00:06:28,820 what we did in ANN classification models. In this model, we just changed our input X and y shapes.
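[Editor's sketch] The prediction and plotting steps above can be sketched as follows. Note that model.predict_classes, which the lecture uses, was removed in newer TensorFlow releases; np.argmax over model.predict is the equivalent. The images and the (untrained) model here are placeholder assumptions:

```python
# Sketch of predicting class labels for the first three test objects and
# plotting one of them. With an untrained placeholder model the predicted
# labels are arbitrary; the lecture's trained model returned 9, 2, 1.
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_test = rng.random((3, 28, 28, 1), dtype=np.float32)  # placeholder images

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Equivalent of the lecture's: y_pred = model.predict_classes(X_test[:3])
y_pred = np.argmax(model.predict(X_test[:3], verbose=0), axis=1)
print(y_pred)

# Reshape from 3D (28, 28, 1) back to 2D (28, 28) before imshow
plt.imshow(X_test[0].reshape(28, 28), cmap="gray")
plt.savefig("first_object.png")
```

Comparing y_pred against the first three entries of y_test is how the lecture confirms the predictions match the actual labels.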
62 00:06:29,630 --> 00:06:35,840 And then we added a convolutional layer and a pooling layer before our Flatten 63 00:06:35,990 --> 00:06:36,680 and Dense model. 64 00:06:40,200 --> 00:06:43,320 That's all for this lecture. In the next lecture, 65 00:06:43,530 --> 00:06:47,880 we will see a comparison of the model with and without a pooling layer. 66 00:06:49,190 --> 00:06:49,620 Thank you.
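[Editor's sketch] The change summarized above, the same pipeline as the earlier ANN lectures but with a Conv2D + pooling layer inserted before the Flatten/Dense part and a (28, 28, 1) input instead of a flat vector, can be sketched side by side. The specific layer widths (128 units, 32 filters) are illustrative assumptions, not the course's exact values:

```python
# Illustrative comparison of the two architectures described in the lecture.
# Layer widths (128 units, 32 filters) are assumptions made for this sketch.
from tensorflow import keras
from tensorflow.keras import layers

# ANN from the earlier lectures: flat 784-vector input, Dense layers only
ann = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# CNN from this lecture: (28, 28, 1) image input, with Conv2D + pooling
# added before the same Flatten/Dense tail
cnn = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

print(ann.output_shape, cnn.output_shape)  # both end in 10 class scores
```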