1 00:00:00,820 --> 00:00:05,400 OK, so now let's start building our first CNN model. 2 00:00:07,170 --> 00:00:15,270 We will be using the same data that we used for the classification problem here. 3 00:00:15,390 --> 00:00:20,970 The task is to identify the fashion article name from its image. 4 00:00:23,730 --> 00:00:26,700 We have 10 categories of different objects. 5 00:00:27,450 --> 00:00:30,060 We have T-shirts, trousers, pullovers, and so on. 6 00:00:31,020 --> 00:00:38,730 And for all of these articles, we have 28 by 28 pixel grayscale images. 7 00:00:41,190 --> 00:00:45,870 We have already created a neural network model for this classification problem. 8 00:00:47,520 --> 00:00:52,470 Now we are going to add convolutional layers to our ANN model. 9 00:00:54,580 --> 00:00:59,400 So let's just start by importing some of the important libraries. 10 00:00:59,640 --> 00:01:01,480 We are importing NumPy and pandas. 11 00:01:01,650 --> 00:01:02,490 And Matplotlib. 12 00:01:05,100 --> 00:01:08,580 Then we are also importing TensorFlow and Keras. 13 00:01:11,260 --> 00:01:16,120 Then, as I have told you earlier, we will be using the Fashion MNIST data. 14 00:01:17,800 --> 00:01:21,370 This data is available in Keras datasets. 15 00:01:22,540 --> 00:01:25,120 For more information, you can click on this link. 16 00:01:26,170 --> 00:01:34,600 Here we have around 60,000 images as our training data and another 10,000 images as our test data. 17 00:01:35,590 --> 00:01:37,480 We have 10 different categories. 18 00:01:38,020 --> 00:01:41,260 All these categories are labeled from zero to nine. 19 00:01:44,320 --> 00:01:45,950 And this is the index 20 00:01:46,300 --> 00:01:49,880 we are going to use to import the Fashion MNIST data. 21 00:01:50,200 --> 00:01:53,770 We have already done this in our ANN tutorial. 22 00:01:54,010 --> 00:01:57,910 So I'm not going to spend our time on this.
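The imports and dataset load narrated above can be sketched roughly as follows. The `fashion_mnist.load_data` loader is part of Keras as described; the variable names are my own assumption about the tutorial's code.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

# Fashion MNIST ships with Keras: 60,000 training images and 10,000 test
# images, each a 28x28 grayscale array with a label from 0 to 9.
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()

print(X_train_full.shape)  # (60000, 28, 28)
print(X_test.shape)        # (10000, 28, 28)
```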
23 00:02:01,320 --> 00:02:09,360 So we're importing our data into the X train full, y train full, X test, and y test variables. 24 00:02:10,350 --> 00:02:16,260 Let's just run this. Now, since we have labels for all these articles, 25 00:02:16,830 --> 00:02:23,650 we are going to create a list with their descriptions so that we can refer to this list whenever we get 26 00:02:23,650 --> 00:02:24,750 a class label. 27 00:02:25,930 --> 00:02:28,990 So let's create a class names list as well. 28 00:02:31,680 --> 00:02:39,870 Now we have to do data reshaping. This is the only change we are going to do in the preprocessing of a convolutional 29 00:02:39,930 --> 00:02:42,270 neural network as compared to 30 00:02:42,420 --> 00:02:42,840 an ANN. 31 00:02:45,540 --> 00:02:52,860 If you remember, for the ANN we converted each of these images into a single one-dimensional array 32 00:02:53,100 --> 00:02:54,420 using the Flatten function. 33 00:02:55,470 --> 00:02:59,700 But for a CNN, we need a three-dimensional array as input. 34 00:03:00,900 --> 00:03:02,850 We need height, width, 35 00:03:03,810 --> 00:03:05,750 and also another dimension 36 00:03:05,780 --> 00:03:06,720 for channels. 37 00:03:08,400 --> 00:03:12,720 Currently, we have our X data in the form of these 2D images. 38 00:03:12,900 --> 00:03:19,230 There is no extra dimension for channels, since these are simple grayscale images. 39 00:03:20,760 --> 00:03:25,440 But by default, for CNN layers, we need three-dimensional images. 40 00:03:26,070 --> 00:03:33,630 So we are going to reshape our X train data, and we are going to add another dimension to our data. 41 00:03:34,410 --> 00:03:39,030 So earlier we were using 28 by 28 pixel images. 42 00:03:39,720 --> 00:03:43,940 Now we are reshaping them into 28 by 28 43 00:03:43,980 --> 00:03:46,280 by 1, where the 1 stands for the channel. 44 00:03:47,850 --> 00:03:55,170 And again, we have 60,000 images in our training dataset and 10,000 images in our test dataset.
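The class-name list and the reshape step described above might look like this minimal sketch. The names follow the standard Fashion MNIST label order; the zero-filled placeholder arrays stand in for the real loaded dataset so the snippet is self-contained.

```python
import numpy as np

# Standard Fashion MNIST class names, in label order 0-9.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# Placeholder arrays standing in for the loaded Fashion MNIST images.
X_train_full = np.zeros((60000, 28, 28), dtype=np.uint8)
X_test = np.zeros((10000, 28, 28), dtype=np.uint8)

# For the ANN we flattened each image to 1D; a Conv2D layer instead
# expects (height, width, channels), so append a channel axis of size 1.
X_train_full = X_train_full.reshape(60000, 28, 28, 1)
X_test = X_test.reshape(10000, 28, 28, 1)

print(X_train_full.shape)  # (60000, 28, 28, 1)
```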
45 00:03:56,850 --> 00:04:02,400 So before doing this, the shape of our X train full 46 00:04:02,400 --> 00:04:09,930 dataset was 60,000 by 28 by 28. Now, for the convolutional neural network, 47 00:04:11,130 --> 00:04:14,040 we are adding another dimension for the channel as well. 48 00:04:14,430 --> 00:04:20,070 So we are just adding another dimension to make it four-dimensional. 49 00:04:21,420 --> 00:04:26,070 Let's just run this. Now we have reshaped our data. 50 00:04:26,700 --> 00:04:29,400 The next step is to normalize the data. 51 00:04:31,650 --> 00:04:36,690 So all our pixel values are between zero and 255. 52 00:04:37,590 --> 00:04:42,090 So we are just going to divide our entire dataset by 255. 53 00:04:42,450 --> 00:04:45,980 That way, all our values will lie between zero and one. 54 00:04:48,090 --> 00:04:52,560 We already did a similar thing for our ANN model as well. 55 00:04:52,950 --> 00:04:56,190 So I'm not going to spend much time here. 56 00:04:57,900 --> 00:05:03,240 Similarly, we are going to split our dataset into train and validation sets. 57 00:05:04,590 --> 00:05:13,200 We are keeping 55,000 images for our training dataset and the rest of the 5,000 images for our validation 58 00:05:13,200 --> 00:05:13,700 dataset. 59 00:05:15,240 --> 00:05:17,880 So let's just run this as well. 60 00:05:19,080 --> 00:05:27,750 So the first 5,000 go into validation, and those from 5,000 to 60,000 go into the training data. 61 00:05:30,060 --> 00:05:39,540 Now, let's set the seed for our TensorFlow and NumPy so that we get the same result every time 62 00:05:39,540 --> 00:05:40,410 we run this code.
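The normalize, split, and seed steps above can be sketched as follows. Random placeholder arrays stand in for the reshaped Fashion MNIST data, and the seed value 42 is an arbitrary choice of mine, not necessarily the one used in the video.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the reshaped Fashion MNIST arrays.
X_train_full = np.random.randint(0, 256, size=(60000, 28, 28, 1)).astype("float32")
y_train_full = np.random.randint(0, 10, size=(60000,))

# Scale pixel values from [0, 255] down to [0, 1].
X_train_full = X_train_full / 255.0

# First 5,000 examples become the validation set; the remaining 55,000 train.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]

# Fix the seeds so repeated runs give the same result (42 is an assumption).
np.random.seed(42)
tf.random.set_seed(42)

print(X_train.shape, X_valid.shape)  # (55000, 28, 28, 1) (5000, 28, 28, 1)
```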