Now, in this video, we are going to create artificial image data, which we are going to use as training data to improve the performance of our model.

The first step is the same: we need to create the directory variables. If you are continuing with the last project, these variables will already be created for you. Since I have created a new project, I need to run these lines of code again to create the directory variables.

Now, this is the point where the last model and this model differ. This time, in the ImageDataGenerator, instead of just the rescale parameter, we are going to give a few more parameters. The idea is that the ImageDataGenerator is going to generate new images by rotating, shifting, shearing, or zooming the images that we already have.

So imagine that we have an image of a cat's face. The ImageDataGenerator will create a new image from that face by rotating it at some particular angle; we have given a range for that angle, from 0 to 40 degrees. Or by shifting that face horizontally or vertically by up to 20 percent.

Then there is shearing. Shearing means translating different rows of pixels by different amounts: the bottom-most row of pixels will remain at its position, and the top-most will move horizontally by some percentage.
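The augmented generator described above can be sketched like this; the parameter values are the ones mentioned in the lecture (0-40 degree rotation, 20 percent shifts, shear, and zoom):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmenting generator for the training data
train_datagen = ImageDataGenerator(
    rescale=1./255,          # scale pixel values to [0, 1]
    rotation_range=40,       # random rotation between 0 and 40 degrees
    width_shift_range=0.2,   # horizontal shift by up to 20% of the width
    height_shift_range=0.2,  # vertical shift by up to 20% of the height
    shear_range=0.2,         # shear transformation up to 20%
    zoom_range=0.2,          # zoom in/out by up to 20%
    horizontal_flip=True,    # random horizontal flips are allowed
    fill_mode='nearest',     # fill new pixels with the nearest pixel value
)
```

Every batch drawn from this generator applies a fresh random combination of these transformations.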
So we have given a shear range of, again, 20 percent, and a zoom range of 20 percent.

Horizontal flip is also allowed, so the image can be horizontally flipped. And fill_mode says what is to be done with the newly created pixels. For example, when we shear the image, there will be some newly created pixels; when we zoom out, there will be some newly created pixels. What should be done with those pixels? When we set fill_mode to nearest, it means that whatever the nearest pixel value is in the actual image, that value will be filled into the newly created pixels.

So I'll go through this again. Rotation range can take a value between 0 and 180, and it tells what is the range within which the image will be randomly rotated. Height shift and width shift are the ranges within which the picture will be horizontally or vertically translated. Shear range is for shearing transformations, zoom range is for zooming in or zooming out, horizontal flip is flipping the image horizontally, and fill_mode is used for filling the newly created pixels, which will appear when we do a rotation, a zoom, or a shift.

So when we give all these parameters, random values of all these parameters will be taken, and a new image will be created from the old image. That's our datagen.
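To see this random generation in action, you can draw several augmented copies of a single image through the generator's `flow` method. A random array stands in for the cat image here, purely for illustration:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=40, width_shift_range=0.2,
                             height_shift_range=0.2, shear_range=0.2,
                             zoom_range=0.2, horizontal_flip=True,
                             fill_mode='nearest')

# One fake 150x150 RGB "image" in a batch of size 1
image = np.random.rand(1, 150, 150, 3).astype('float32')

# Each draw from flow() yields a freshly transformed copy of the same image,
# with random values chosen for each of the parameters above
augmented = [next(datagen.flow(image, batch_size=1))[0] for _ in range(4)]
```

Each of the four arrays in `augmented` is a different random variation of the original image.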
Now, this datagen is what will be used for generating data for training and validation. For testing, we do not need to do any such thing. We already have our test set; we just need to rescale the values in it using an ImageDataGenerator, but specifying only the rescaling by 1/255.

So the train generator will be exactly the same as last time. The only difference will be the second parameter: instead of using the train_gen that we created last time, we are going to use this datagen. The data will also include additional randomly generated images.

So when this train generator gives images in batches of 32, all of the images will have something changed from the original dataset. The original image will have random mutations in all of these parameters, and those 32 newly generated images will be sent in. class_mode is still binary. Exactly the same thing is done for the validation generator. We run both of these lines of code.

Our model architecture, again, stays the same. We have four convolutional layers, with a max-pooling layer in combination with each convolutional layer.
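The train and validation generators described above would look roughly like this. In the lecture, `train_dir` and `validation_dir` were created earlier as directory variables; to keep this sketch self-contained, a tiny fake directory tree stands in for them here:

```python
import os
import tempfile

import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def make_fake_dir():
    """Build a stand-in data directory with one image per class."""
    root = tempfile.mkdtemp()
    for cls in ('cats', 'dogs'):
        os.makedirs(os.path.join(root, cls))
        arr = (np.random.rand(150, 150, 3) * 255).astype('uint8')
        Image.fromarray(arr).save(os.path.join(root, cls, 'img.jpg'))
    return root

train_dir = make_fake_dir()       # in the lecture: the real train directory
validation_dir = make_fake_dir()  # in the lecture: the real validation directory

# Augmenting generator is used for training data only
datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                             width_shift_range=0.2, height_shift_range=0.2,
                             shear_range=0.2, zoom_range=0.2,
                             horizontal_flip=True, fill_mode='nearest')
# Test/validation data is only rescaled, never augmented
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=32, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir, target_size=(150, 150), batch_size=32, class_mode='binary')
```

The key point is that only the training generator augments; the validation data must stay untouched so that it measures real generalization.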
After that, we have a flatten layer, which gives its output as input to our densely connected neural network. So we create this architecture. You can view the model summary again if you want to.

Then we compile this model, again with the same loss function, which is binary cross-entropy. The optimizer is RMSprop with a learning rate of 10 to the minus 4, and the metric we are going to monitor is accuracy.

The fit function is also the same. So, if you have noticed, the only difference between this model and the previous model is here: the train_gen has been replaced by datagen, and datagen has more parameters instead of just the one for rescaling. With all the random values of these parameters, we will have newly created images. Those images will go in batches of 32, and steps per epoch is still 100.

So the thing is, last time, in each epoch there were 2,000 images using which the training was being done. This time, since we are generating 32-image batches, we will have 3,200 images per epoch.

Now when we run this, it is again going to take a long time, but hopefully this will give us better validation accuracy.

Let's wait and watch.
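The architecture and compile step described above can be sketched as follows. The lecture only says four convolutional layers with max pooling, a flatten layer, and a dense network, so the exact filter counts and the 512-unit dense layer are assumptions, taken from the common version of this cats-vs-dogs model:

```python
from tensorflow.keras import layers, models, optimizers

# Four Conv2D layers, each paired with a MaxPooling2D layer
# (filter counts below are assumed, not stated in the lecture)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                         # feeds the dense network
    layers.Dense(512, activation='relu'),
    layers.Dense(1, activation='sigmoid'),    # binary cat/dog output
])

# Same loss, optimizer, and metric as in the lecture
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])

# Training would then use the generators from earlier, e.g.:
# history = model.fit(train_generator, steps_per_epoch=100, epochs=100,
#                     validation_data=validation_generator)
```

With `steps_per_epoch=100` and a batch size of 32, each epoch sees 100 x 32 = 3,200 augmented images, which is the arithmetic behind the count mentioned above.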