1
00:00:02,040 --> 00:00:04,300
LeNet, it is a very simple architecture.

2
00:00:05,920 --> 00:00:07,550
It has three convolutional layers.

3
00:00:08,620 --> 00:00:12,700
Two of the convolutional layers also have an average pooling layer.

4
00:00:13,510 --> 00:00:15,520
Notice that it is not max pooling.

5
00:00:16,060 --> 00:00:18,970
It was average pooling, as I told you earlier.

6
00:00:18,970 --> 00:00:22,600
Also, average pooling was more popular in the earlier days.

7
00:00:23,330 --> 00:00:25,510
And later on, max pooling became more popular.

8
00:00:26,290 --> 00:00:32,640
So LeNet, being one of the earliest convolutional neural network architectures, had average pooling.

9
00:00:34,030 --> 00:00:41,630
So the first convolutional layer and the second convolutional layer have average pooling; the third convolutional layer, straight

10
00:00:41,690 --> 00:00:41,900
away,

11
00:00:41,960 --> 00:00:45,640
gives its output to a fully connected neural network.

12
00:00:47,290 --> 00:00:53,410
If you look at the input image, it took in an image of size 32 by 32.

13
00:00:55,180 --> 00:01:02,350
And after the convolutional layers, we had 120 such features of one by one size.

14
00:01:04,120 --> 00:01:07,480
These would feed into a fully connected neural network.

15
00:01:09,160 --> 00:01:15,350
This model was trained on MNIST only, which is handwriting recognition data.

16
00:01:16,090 --> 00:01:18,400
And it was able to achieve very good accuracies.

17
00:01:20,050 --> 00:01:21,700
This architecture is very simple.

18
00:01:21,880 --> 00:01:25,780
In fact, I would encourage you to build this architecture on your system.

19
00:01:26,200 --> 00:01:29,450
You know everything that you need to know to create this architecture.

20
00:01:29,710 --> 00:01:30,730
And you can run it.
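[Editor's note] The layer-size walk-through above (a 32×32 input ending in 120 features of size 1×1) can be checked with a short sketch. This is not the lecturer's code; it assumes the classic LeNet-5 configuration of 5×5 valid convolutions and 2×2 average pooling with stride 2.

```python
def conv_out(size, kernel, stride=1):
    """Spatial output size of a valid (no-padding) convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a 2x2, stride-2 average-pooling layer."""
    return (size - kernel) // stride + 1

size = 32                 # 32x32 input image
size = conv_out(size, 5)  # conv1, 5x5 kernel        -> 28
size = pool_out(size)     # average pooling          -> 14
size = conv_out(size, 5)  # conv2, 5x5 kernel        -> 10
size = pool_out(size)     # average pooling          -> 5
size = conv_out(size, 5)  # conv3, 5x5, 120 channels -> 1

# 120 feature maps of size 1x1 then feed the fully connected network.
print(size)
```

Only the first two convolutional layers are followed by pooling; the third produces the 1×1 outputs directly, which is why its result goes straight to the fully connected part.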