All right. Now we've got this fancy function, create_model. What's happening here when we call model.summary() after we've created an instance of our model?

We've got a layer of type keras_layer (KerasLayer). So if we go up here... OK, hub.KerasLayer, that makes sense. We've got an output shape of "multiple" and a Param # of 5,432,713. That's a fairly large number. And when we come down here to the totals, we have trainable params and non-trainable params ("params" is just short for parameters), and it seems that the non-trainable params line up with that Param # figure. We've also got another layer here called dense, with a Param # of 120,240, which corresponds to our dense layer here.

Well, this is one of the amazing things about, and benefits of, transfer learning. The reason it says non-trainable params is because we're using MobileNetV2. If we come up here, these are the patterns that MobileNetV2 has learned throughout these layers by repeatedly going through, if we come back up to the top of this article, by repeatedly going through the images that MobileNetV2 was originally trained on (that's the correct terminology there). So it's gone through all of these images.

And if you're wondering which images MobileNetV2 went through, well, if we read through here... there we go: ImageNet. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification, in other words, ImageNet. OK, let's look up this ImageNet they mention. ImageNet is an image database organized according to the WordNet hierarchy, with currently an average of over 500 images per node. So essentially, ImageNet is a large database of over 14 million images, and these patterns, these parameters, are what MobileNetV2 has learned by training on ImageNet.

And so this is the beautiful thing about transfer learning: all these patterns that MobileNetV2 has learned through these layers here. And by the way, this is why they call it deep learning, because essentially each one of these layers is a layer in our neural network.
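Before we dig further into why those parameters are non-trainable, here's a minimal sketch of what a create_model function along these lines could look like. The TensorFlow Hub handle, input shape, and class count below are assumptions for illustration (the exact values live in the course notebook), but with this particular handle the summary should report numbers like the ones discussed above: 5,432,713 non-trainable params for the hub layer and 120,240 trainable params for the dense layer.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed values for illustration, not necessarily the notebook's exact ones.
MODEL_URL = "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4"
INPUT_SHAPE = [None, 224, 224, 3]  # batch, height, width, colour channels
NUM_BREEDS = 120                   # output classes (dog breeds)

def create_model(model_url=MODEL_URL, num_classes=NUM_BREEDS):
    model = tf.keras.Sequential([
        # Base layer: MobileNetV2 pretrained on ImageNet. hub.KerasLayer
        # is frozen (non-trainable) by default, which is why its
        # parameters appear under "Non-trainable params" in the summary.
        hub.KerasLayer(model_url),
        # Our own output layer: the only part that gets trained.
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        loss=tf.keras.losses.CategoricalCrossentropy(),
        optimizer=tf.keras.optimizers.Adam(),
        metrics=["accuracy"],
    )
    model.build(INPUT_SHAPE)  # build so model.summary() can report parameter counts
    return model

model = create_model()
model.summary()
```

Because the hub layer is loaded from a saved model, Keras often can't infer a single concrete output shape for it, which is likely why the summary shows "multiple" in the Output Shape column.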
So if you imagine this expanded out, each one of those is a layer, and the reason they call it deep is because the diagrams usually have an input layer and then just keep going down and down and down. Inherently, each of these layers is a small model in itself; these are just multiple pattern-finding models layered on top of each other. Hence the term deep learning.

So if we come back, this is what our base layer is: those five and a half million patterns (parameters) that MobileNetV2 has found within ImageNet. And now we get to utilize the patterns it's found on our own problem. So rather than trying to find them from scratch on our own, which could take hours and lots of compute power that we don't really have, we get to utilize them and train only our own 120,240 parameters. That's our dense layer here; that's what we're going to be training. So we're going to pass our images to MobileNetV2, it's going to use its base patterns to figure out what's in our images, and then finally we stick this dense layer on top and go: hey, we've got our own data, data that isn't in ImageNet, but let's use what you figured out on ImageNet for classifying which dog breed is in a certain photo. (We'll sanity-check that 120,240 figure in a small sketch below.)

Now that we have a model, we probably need to make some callbacks, and you might be wondering: what are callbacks? Well, let's save that for the next video, where we'll create some callbacks, and after that we'll create a function that's going to train our model. That sounds like a good idea to me.
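As promised, a quick sanity check of where the dense layer's 120,240 trainable parameters come from. The 1,001-wide input is an assumption based on MobileNetV2's ImageNet classification head (1,000 classes plus one background class), not something stated in the video:

```python
# Back-of-the-envelope check of the dense layer's parameter count.
# The 1,001 figure is an assumption based on MobileNetV2's
# ImageNet classification head (1,000 classes + 1 background class).
features_in = 1001                  # values coming out of the hub base layer
num_breeds = 120                    # our dog-breed output classes
weights = features_in * num_breeds  # 120,120 connection weights
biases = num_breeds                 # one bias per output unit
print(weights + biases)             # -> 120240 trainable parameters
```

A fully connected layer always has (inputs × outputs) weights plus one bias per output, so this kind of arithmetic is a handy way to verify what a model.summary() readout is telling you.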