You might wonder what makes neural networks so powerful. We cannot use a single neuron to perform complex tasks; this is why our brain has billions of neurons stacked in layers, forming a network. Similarly, artificial neurons are arranged in layers. The layers are connected in such a way that information is passed from one layer to the next.

A typical ANN consists of the following layers: an input layer, a hidden layer, and an output layer. Each layer has a collection of neurons, and the neurons in one layer interact with all the neurons in the adjacent layers. However, neurons in the same layer do not interact with one another, simply because only neurons in adjacent layers have connections between them; neurons within the same layer have no connections. The nodes represent the neurons in the artificial neural network, and a typical ANN is shown here.

As you can see, it has input, hidden, and output layers. The input layer is where we feed input to the network; the number of neurons in it is the number of inputs we feed to the network. Each input will have some influence on predicting the output. However, no computation is performed in the input layer; it is used only for passing information from the outside world into the network.
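The layered structure described so far can be sketched in a few lines of NumPy. The layer sizes, weights, and input values below are made-up placeholders for illustration, not anything from this lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected network: 3 inputs -> 4 hidden neurons -> 2 outputs.
# The weight matrices connect neurons in adjacent layers; neurons within
# the same layer have no connections to each other.
W1 = rng.standard_normal((3, 4))  # input -> hidden weights
W2 = rng.standard_normal((4, 2))  # hidden -> output weights

x = np.array([0.5, -1.2, 3.0])    # the input layer just holds these values

# No computation happens in the input layer; the hidden layer transforms
# the input, and the output layer receives the hidden layer's result.
hidden = np.tanh(x @ W1)
output = hidden @ W2

print(output.shape)  # (2,)
```

Note that the input vector itself is not transformed at the input layer; the first matrix multiplication already belongs to the hidden layer.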
Hidden layer: any layer between the input layer and the output layer is called a hidden layer. It processes the input received from the input layer. The hidden layer is responsible for deriving the complex relationships between input and output; that is, the hidden layer identifies patterns in the dataset. It is largely responsible for learning the data representation and for extracting the features.

There can be any number of hidden layers; however, we have to choose the number of hidden layers according to our use case. For a very simple problem, we can use just one hidden layer, but while performing complex tasks such as image recognition, we use many hidden layers, where each layer is responsible for extracting important features. The network is called a deep neural network when it has many hidden layers.

Output layer: after processing the input, the hidden layers send their result to the output layer. As the name suggests, the output layer emits the output. The number of neurons in the output layer is based on the type of problem we want our network to solve. If it is a binary classification, then the number of neurons in the output layer is one, and it tells us which class the input belongs to.
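Stacking several hidden layers is all it takes to make the network "deep". A minimal sketch, again with arbitrary placeholder sizes and random weights rather than anything from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Input layer of 3, three hidden layers of 8 neurons each, output layer of 1.
layer_sizes = [3, 8, 8, 8, 1]
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each hidden layer processes the previous layer's result;
    # the final weight matrix produces the output layer's value.
    h = x
    for W in weights[:-1]:
        h = np.tanh(h @ W)
    return h @ weights[-1]

y = forward(np.array([0.5, -1.2, 3.0]))
print(y.shape)  # (1,)
```

Adding a hidden layer is just one more entry in `layer_sizes`; the forward pass loop does not change.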
If it is a multiclass classification, say with five classes, and we want to get the probability of each class as an output, then the number of neurons in the output layer is five, each emitting the probability of one class. If it is a regression problem, then we have one neuron in the output layer, and it emits the output. I hope you enjoyed this, and I will see you in the next video.
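To make the output-layer sizing concrete, here is a rough sketch of the three cases just described. The logit values are arbitrary examples; sigmoid and softmax are the usual activations for these tasks, though the lecture has not introduced them yet:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

# Binary classification: one output neuron; its sigmoid is the
# probability of the positive class.
binary_logit = np.array([0.8])
print(sigmoid(binary_logit))

# Five-class classification: five output neurons; softmax turns them
# into one probability per class, and the probabilities sum to 1.
multi_logits = np.array([2.0, 1.0, 0.1, -0.5, 0.3])
probs = softmax(multi_logits)
print(probs.sum())

# Regression: one output neuron that directly emits the predicted value.
regression_output = np.array([3.7])
print(regression_output[0])
```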