1
00:00:01,620 --> 00:00:05,070
Now, let us see the importance of a pooling layer in our network.

2
00:00:06,600 --> 00:00:13,590
In the last video, we created a network in which there was one convolutional layer and one pooling

3
00:00:13,590 --> 00:00:16,200
layer in the model. In the summary,

4
00:00:16,470 --> 00:00:22,500
we saw that the number of total parameters to be trained is nearly one point six million.

5
00:00:25,130 --> 00:00:28,770
Now, if I remove this pooling layer.

6
00:00:32,130 --> 00:00:35,170
I'll comment out this part where there is a pooling layer.

7
00:00:43,630 --> 00:00:44,760
Run this part also.

8
00:00:46,550 --> 00:00:57,690
And now if I look at the summary, you can see that by just removing one pooling layer, the number

9
00:00:57,690 --> 00:01:01,350
of parameters has now reached six point five million.

10
00:01:02,800 --> 00:01:04,570
Earlier, it was one point six million.

11
00:01:05,760 --> 00:01:11,730
But after removing this max pooling layer, the number of parameters is six point five million.

12
00:01:12,660 --> 00:01:17,100
You can imagine the computational load this will have on our system.

13
00:01:19,020 --> 00:01:25,150
If you want to go through with this, let us update the model to model two and verify it.

14
00:01:35,100 --> 00:01:36,590
Let's configure model two.

15
00:01:41,360 --> 00:01:46,010
And reduce the number of epochs before we run it.

16
00:01:53,100 --> 00:02:00,150
So here we will see the time taken by our network to train for one epoch.

17
00:02:00,900 --> 00:02:06,960
So when we had that pooling layer, the time taken was nearly 32 seconds per epoch.

18
00:02:07,320 --> 00:02:09,160
So it took nearly 15 minutes.

19
00:02:09,330 --> 00:02:12,540
Fifteen minutes to train for 30 epochs.

20
00:02:13,800 --> 00:02:21,080
Now, I have run it for only three epochs because I knew the time taken for each epoch will be very long.

21
00:02:22,850 --> 00:02:24,900
Let us see when it completes one epoch.
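To see where the jump from roughly one point six million to six point five million parameters comes from, here is a back-of-envelope sketch. The exact architecture from the previous video is not shown in this transcript, so the shapes below (a 28x28x1 input, 32 filters of 3x3, a hypothetical 300-unit dense layer, and 10 output classes) are assumptions chosen only to illustrate the effect; the real model may differ.

```python
# Hypothetical parameter count: why removing a 2x2 pooling layer roughly
# quadruples the parameters. All layer sizes here are assumptions, not the
# actual model from the course.

def dense_params(n_in, n_out):
    # weights + biases of a fully connected layer
    return n_in * n_out + n_out

conv_params = 32 * (3 * 3 * 1 + 1)   # 32 filters, 3x3, 1 input channel -> 320

flat_no_pool = 26 * 26 * 32          # conv output 26x26x32 flattened -> 21632
flat_pool = 13 * 13 * 32             # after 2x2 max pooling -> 5408

with_pool = conv_params + dense_params(flat_pool, 300) + dense_params(300, 10)
without_pool = conv_params + dense_params(flat_no_pool, 300) + dense_params(300, 10)

print(with_pool)      # ~1.6 million
print(without_pool)   # ~6.5 million
```

The conv layer itself contributes almost nothing; nearly all the parameters sit in the first dense layer, whose input is four times larger without the 2x2 pooling step, which is why the total grows by roughly a factor of four.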
22
00:02:30,960 --> 00:02:32,010
There you go.

23
00:02:32,610 --> 00:02:35,070
For each epoch.

24
00:02:35,280 --> 00:02:43,140
Now it is going to take one minute, so the computational time has doubled because we have removed this

25
00:02:43,390 --> 00:02:44,020
pooling layer.

26
00:02:46,500 --> 00:02:55,080
Now, imagine having a network with multiple convolutional layers, because from basic features, we wanted

27
00:02:55,080 --> 00:02:57,030
to extract higher-level features also.

28
00:02:57,540 --> 00:03:05,040
So that is why I told you that we have a set of convolutional layers which bring out higher-level features

29
00:03:05,230 --> 00:03:06,960
from these input images.

30
00:03:08,910 --> 00:03:17,200
Now, if you have to use multiple convolutional layers, you need to have pooling layers in between.

31
00:03:17,550 --> 00:03:21,270
So that the computational load stays under control.

32
00:03:22,770 --> 00:03:31,620
So if you add another convolutional layer without adding a max pooling layer, it will increase the

33
00:03:31,920 --> 00:03:36,780
processing time even further, beyond this one minute per epoch.

34
00:03:39,860 --> 00:03:42,590
And this is just the time taken by our system.

35
00:03:43,010 --> 00:03:49,040
It is also taking up the resources of our system, like RAM or GPU memory.

36
00:03:51,140 --> 00:03:57,680
So if you have a lot of convolutional layers in your network and you do not add pooling layers, you

37
00:03:57,680 --> 00:04:04,700
may reach a point where your network cannot train because of the limitation on the processing power of

38
00:04:04,700 --> 00:04:05,780
your CPU or GPU.

39
00:04:07,250 --> 00:04:13,630
So it is very important that we keep the computational load under control by using max pooling.

40
00:04:15,500 --> 00:04:16,920
So that is all in this video.
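The training-time figures quoted above can be sanity-checked with simple arithmetic. The per-epoch times are the approximate values reported in the video, so treat the totals as rough estimates:

```python
# Rough training-time arithmetic using the approximate figures from the video.
epochs = 30

secs_per_epoch_with_pool = 32            # reported ~32 s/epoch with pooling
total_with_pool = secs_per_epoch_with_pool * epochs / 60
print(total_with_pool)                    # 16.0 minutes -> "nearly 15 minutes"

secs_per_epoch_no_pool = 60              # ~1 minute/epoch after removing pooling
total_no_pool = secs_per_epoch_no_pool * epochs / 60
print(total_no_pool)                      # 30.0 minutes for the same 30 epochs
```

This is why the demo runs only three epochs: a full 30-epoch run without the pooling layer would take roughly twice as long as the original model.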
41
00:04:17,090 --> 00:04:25,370
I just wanted to highlight that when we have a large convolutional neural network model, pooling layers

42
00:04:25,550 --> 00:04:31,310
are used along with convolutional layers to keep the computational load in check.
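To make the mechanism concrete, here is a minimal sketch of 2x2 max pooling (stride 2) on a tiny single-channel feature map. The feature map values are made up for illustration; the point is that each 2x2 block collapses to its maximum, cutting the number of values passed to later layers by a factor of four:

```python
# Minimal 2x2 max pooling with stride 2 on a single-channel feature map
# (plain lists, no framework). Illustrative only.
def max_pool_2x2(fmap):
    rows, cols = len(fmap), len(fmap[0])
    return [
        [max(fmap[r][c], fmap[r][c + 1],
             fmap[r + 1][c], fmap[r + 1][c + 1])
         for c in range(0, cols - 1, 2)]
        for r in range(0, rows - 1, 2)
    ]

fmap = [
    [1, 3, 2, 4],
    [5, 6, 1, 0],
    [7, 2, 9, 8],
    [0, 1, 3, 2],
]
pooled = max_pool_2x2(fmap)
print(pooled)   # [[6, 4], [7, 9]] -- 16 values reduced to 4
```

In Keras this is what a `MaxPooling2D(pool_size=(2, 2))` layer does per channel, which is why placing one after each convolutional layer keeps the flattened size, and hence the dense-layer parameter count, under control.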