1
00:00:02,860 --> 00:00:06,390
Now let us look at another CNN architecture.

2
00:00:06,460 --> 00:00:14,200
GoogLeNet was the winner of the 2014 ImageNet challenge. GoogLeNet had one new concept, and that concept was of an

3
00:00:14,350 --> 00:00:25,600
Inception module. An Inception module looks something like this. The input to the Inception module is given

4
00:00:26,050 --> 00:00:33,520
to four different layers. Three of these layers are convolution layers and the fourth one is a max pooling layer.

5
00:00:35,740 --> 00:00:42,640
If you look at these convolution layers, these have a one by one kernel, that is, the window is of the

6
00:00:42,640 --> 00:00:44,860
size of a single pixel.

7
00:00:45,100 --> 00:00:50,230
Usually we have been using convolution layers with two by two or three by three windows.

8
00:00:50,980 --> 00:00:56,140
But in the Inception module you can see that the window size is one by one.

9
00:00:56,230 --> 00:01:03,880
The one used here represents the stride, so it has a stride of one. The output of these two convolution

10
00:01:03,900 --> 00:01:09,910
layers and this max pooling layer then went into three different convolution layers.

11
00:01:10,090 --> 00:01:18,210
The output of these four was then put into a depth concatenation layer. We will not be discussing depth concatenation

12
00:01:18,220 --> 00:01:19,850
here.

13
00:01:19,870 --> 00:01:28,510
This whole thing is called an Inception module, and the actual architecture of GoogLeNet was something

14
00:01:28,510 --> 00:01:32,830
like this. The input images went through here.

15
00:01:32,830 --> 00:01:39,640
Then there was a convolution layer, a max pooling layer, a local response normalization, two convolution layers, and

16
00:01:39,640 --> 00:01:40,700
so on.

17
00:01:40,870 --> 00:01:44,220
All of these are stacked Inception modules.

18
00:01:44,320 --> 00:01:46,120
So this is one Inception layer,

19
00:01:46,120 --> 00:01:50,680
this is a separate Inception layer, and so on, so many of these Inception layers.

20
00:01:51,070 --> 00:01:57,550
The output of these goes on to another set of Inception layers, and then we finally have a fully connected

21
00:01:58,060 --> 00:01:58,840
neural network.

22
00:02:01,740 --> 00:02:10,800
So if we look at it, it is a very complex network, although it had relatively few training parameters. For most

23
00:02:10,860 --> 00:02:16,590
working professionals or students working in the field of data science and machine learning,

24
00:02:16,590 --> 00:02:22,650
training such architectures from scratch on their machines is not possible.

25
00:02:22,650 --> 00:02:32,030
So one of the good things that comes with the Keras library is that we are able to use these pre-

26
00:02:32,040 --> 00:02:35,460
trained models for our problem.

27
00:02:38,730 --> 00:02:47,250
These architectures were created to solve one particular problem, but Keras allows us to use these architectures

28
00:02:48,240 --> 00:02:49,380
for other problems

29
00:02:49,380 --> 00:02:51,660
also. Let us see how.
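To make the module structure described above concrete, here is a minimal sketch of one Inception module written with the Keras functional API: three one by one convolutions and a max pooling layer in parallel, two of the convolutions and the pooled output feeding further convolution layers, and the four results depth-concatenated. The filter counts and the input shape are illustrative assumptions, not the exact GoogLeNet values.

    # A minimal sketch of one Inception module (illustrative filter counts)
    from tensorflow.keras import Input, Model, layers

    inputs = Input(shape=(28, 28, 192))  # assumed example feature-map shape

    # Branch 1: a single 1x1 convolution with stride 1
    b1 = layers.Conv2D(64, (1, 1), strides=1, padding="same", activation="relu")(inputs)

    # Branch 2: 1x1 convolution, then a 3x3 convolution
    b2 = layers.Conv2D(96, (1, 1), strides=1, padding="same", activation="relu")(inputs)
    b2 = layers.Conv2D(128, (3, 3), strides=1, padding="same", activation="relu")(b2)

    # Branch 3: 1x1 convolution, then a 5x5 convolution
    b3 = layers.Conv2D(16, (1, 1), strides=1, padding="same", activation="relu")(inputs)
    b3 = layers.Conv2D(32, (5, 5), strides=1, padding="same", activation="relu")(b3)

    # Branch 4: 3x3 max pooling, then a 1x1 convolution
    b4 = layers.MaxPooling2D((3, 3), strides=1, padding="same")(inputs)
    b4 = layers.Conv2D(32, (1, 1), strides=1, padding="same", activation="relu")(b4)

    # Depth concatenation: stack the four branch outputs along the channel axis
    outputs = layers.Concatenate(axis=-1)([b1, b2, b3, b4])

    model = Model(inputs, outputs)
    model.summary()

Because every branch uses "same" padding and stride one, all four outputs keep the input's spatial size, which is what makes the depth concatenation possible.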
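And as a preview of the pretrained-model idea mentioned at the end, here is a minimal sketch that loads the pretrained InceptionV3 network from keras.applications and attaches a new fully connected head for a different problem. Freezing the base and using ten output classes are assumptions chosen just for illustration.

    # A minimal sketch of reusing a pretrained Inception network in Keras
    from tensorflow.keras import Model, layers
    from tensorflow.keras.applications import InceptionV3

    # Load ImageNet weights, dropping the original classifier head
    base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # freeze the pretrained convolutional layers

    # Attach a new fully connected head for our own task
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(10, activation="softmax")(x)  # assumed: 10 target classes

    model = Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(...) can now train only the new head on our own data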