1
00:00:00,960 --> 00:00:10,500
So the model is now fit. When I paused the video, I had earlier run this for 20 epochs, but I

2
00:00:10,500 --> 00:00:13,110
saw that the solution had not converged.

3
00:00:13,650 --> 00:00:15,930
So I ran it for another 30.

4
00:00:16,020 --> 00:00:17,540
Still, the solution had not converged.

5
00:00:17,610 --> 00:00:19,890
So I have now run it for another 50 epochs.

6
00:00:21,540 --> 00:00:24,300
So overall, I have run this model

7
00:00:24,370 --> 00:00:32,790
for 100 epochs, and after 100 epochs, I can see that in some epochs we have

8
00:00:32,820 --> 00:00:38,500
achieved a model giving a validation accuracy of nearly 84 percent.

9
00:00:39,870 --> 00:00:47,910
So if you are using callbacks, you can store the best model using a callback and use that model

10
00:00:48,390 --> 00:00:50,850
to test your performance on your test dataset.

11
00:00:51,450 --> 00:00:54,090
For now, we have this model,

12
00:00:54,570 --> 00:01:01,800
this last model, which is giving us a validation accuracy of eighty-three point three six percent, which

13
00:01:01,800 --> 00:01:02,520
is also good enough.

14
00:01:02,640 --> 00:01:11,940
You can see that we have achieved an eight to nine percent accuracy improvement just by doing data augmentation.

15
00:01:13,200 --> 00:01:22,200
So even if you have fewer data observations to train a CNN, you have the option of creating artificial

16
00:01:22,200 --> 00:01:30,120
data by using data augmentation, for which we used the ImageDataGenerator function.

17
00:01:32,310 --> 00:01:34,860
So this was the second part of our project.

18
00:01:36,090 --> 00:01:41,400
We have seen considerable improvement in accuracy after using data augmentation.

19
00:01:42,780 --> 00:01:48,950
In the last part, we are going to use a pre-trained CNN model.
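[Editor's note] The two ideas above — augmenting a small image dataset with ImageDataGenerator and using a callback to keep the best model from a long run — can be sketched as below. This is a minimal illustration, not the course's exact code: the folder names "train/" and "val/", the augmentation parameters, and the filename "best_model.h5" are all placeholder assumptions, and the model itself is assumed to be an already-compiled Keras CNN.

```python
# Sketch: data augmentation plus a best-model checkpoint in Keras.
# Assumes a compiled Keras CNN named `model` and image folders
# "train/" and "val/" -- placeholders, not from the video.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint

# Augmentation: create artificial training images on the fly,
# so even a small dataset yields varied inputs each epoch.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)
val_gen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation for validation

# Callback: save only the epoch with the best validation accuracy,
# so a 100-epoch run does not leave you with a worse "last" model.
checkpoint = ModelCheckpoint(
    "best_model.h5",
    monitor="val_accuracy",
    save_best_only=True,
)

# Training would then look like (commented out; needs the actual data):
# model.fit(
#     train_gen.flow_from_directory("train/", target_size=(150, 150)),
#     validation_data=val_gen.flow_from_directory("val/", target_size=(150, 150)),
#     epochs=100,
#     callbacks=[checkpoint],
# )
```

After training, the stored weights can be reloaded with `keras.models.load_model("best_model.h5")` and evaluated on the test set.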
20
00:01:50,310 --> 00:01:56,610
There are certain architectures which have been trained on large datasets. These networks are huge,

21
00:01:56,760 --> 00:02:00,840
and they have performed really well in certain competitions.

22
00:02:01,530 --> 00:02:05,480
Some of these architectures are part of the Keras library.

23
00:02:05,820 --> 00:02:09,940
So we can straightaway use the weights of these pre-trained models.

24
00:02:10,800 --> 00:02:19,370
So in the next video, we are going to use such a pre-trained model and apply it to our Cats vs. Dogs dataset.

25
00:02:19,870 --> 00:02:22,660
And let us see if we can improve the accuracy further.
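[Editor's note] Loading one of these pre-trained Keras architectures and reusing its weights typically looks like the sketch below. VGG16 is my choice for illustration only — the next video may use a different network — and the input size, dense-layer width, and optimizer are assumptions. Note that `weights="imagenet"` downloads the pre-trained weights on first use.

```python
# Sketch: transfer learning with a pre-trained network from keras.applications.
# VGG16 and all hyperparameters here are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load ImageNet-trained convolutional base, dropping the original classifier.
base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
base.trainable = False  # freeze the pre-trained features; train only the new head

# New classifier head for a binary problem such as cats vs. dogs.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # single output: cat vs. dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Freezing the base means only the small new head is fitted, which is why transfer learning can improve accuracy even on a modest dataset.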