Now we've trained our first model. Congratulations, by the way. A good idea now is to evaluate how it's going. And of course, we can look here and see the accuracy, and it's getting pretty high: about 100 percent on the training dataset, but only about 63 percent on the validation dataset. It's overfitting. But overfitting to begin with is a good thing; it means our model is learning something.

And since we've set up a TensorBoard callback, we can now utilize our logs folder. So if we go into Dog Vision and then into logs, we've got two files there. One is from when I trained our first model yesterday, so those are yesterday's logs, and the other is from today's model, the one I just trained after I had to reset this notebook. What we're going to do now is load up TensorBoard so we can check out these log files and see how our models are performing.

Now, this is especially helpful if you're training more than one model. Of course, we could just refer to this cell's output, as I said. And if we ran another model down here, say, model_2 = train_model(), we'd get this same verbose output (that's what it's referred to as) and we could compare the two cells. But that's not really going to work over the longer term, and that's what we're setting ourselves up for here.

So let's write a little heading. Actually, we might do it in this cell. We'll go: "Checking the TensorBoard logs". And the beautiful thing about setting up a TensorBoard callback is that whenever we train a new model, it's going to automatically log its performance. So let's do it.

All right, here we go. The TensorBoard magic function, a percentage sign and then tensorboard, will access the logs directory we created earlier (we have to pass it the directory) and visualize its contents. We'll turn that cell into Markdown with Command+M; Colab's cell shortcuts are slightly different to Jupyter's. Now we go %tensorboard. This is the magic function I was talking about; we have to pass it the path to our log directory. And if you remember, we just saw where we saved our logs, so we just have to pass that file path here. We could copy it.
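For reference, here's roughly what that cell ends up as: a minimal sketch, assuming the logs folder lives at drive/My Drive/Dog Vision/logs in Google Drive, as set up earlier in the notebook.

    # Minimal sketch of launching TensorBoard inside Colab.
    # The magic runs like a shell command, so the spaces in
    # "My Drive" and "Dog Vision" must be escaped with backslashes.
    %load_ext tensorboard
    %tensorboard --logdir drive/My\ Drive/Dog\ Vision/logs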
But remember, we're in the habit of typing it out. And because this magic function works as if we're running it on the command line, we're not going to pass the file path as a string; we pass it bare, just as if we were writing it in a terminal. So if I show you a terminal: if we type cd Desktop, I'll zoom in there, see how Desktop isn't in string format? It's exactly the same here. So, drive, then My Drive, and the trick here is that because there's a space in "My Drive", we need to escape it using a backslash. So My\ Drive, then a forward slash, then we need Dog Vision, which has another space to escape, Dog\ Vision, then slash logs.

So hopefully this should load in the Dog Vision logs folder... launching TensorBoard... yes, that's what we're after. This is a great tool to help us visualize how our experiments are going. It might take a little while to load the first time, because as you're going to see, it's a fairly graphical interface. Check that out. There we go.

So, see here on the left, let's have a look at this interface. We've got a few options. TensorBoard shows as inactive here because it will actually update live if we're training a model, and we can refresh it if we want. If we come down here, these are the experiments we've run so far. So there's today, the 5th of February 2020 where I live, the runs I just did, and these others are from yesterday. If we turn yesterday's off, we're only going to get two graphs here: one is the validation set's performance and the other is the training set's.

So, the epoch loss: these are epochs along here, and this is the loss value. Let's go back up. Remember, the loss value is what we're trying to minimize, while the accuracy is what we're trying to increase. So for this epoch accuracy, if it's going up, that's what we want. But you see here how the training dataset's curve, the one in red, is going up way further, almost two to one compared to the validation data? That's where we can tell that our machine learning model is overfitting: it's learning the patterns in the training data far too well. So a later experiment would be trying to stop our machine learning model from overfitting, and one of the ways we can potentially do that is by using more data.
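As a side note, the same train versus validation gap that TensorBoard plots as epoch_accuracy can also be read straight off the History object that model.fit() returns. This is an illustrative sketch only, assuming a history variable from the earlier training run; it isn't the course's exact code.

    # Illustrative: compare final training vs validation accuracy.
    # Assumes `history` is the History object returned by model.fit();
    # on older Keras versions the keys may be "acc"/"val_acc" instead.
    final_train_acc = history.history["accuracy"][-1]
    final_val_acc = history.history["val_accuracy"][-1]
    print(f"train: {final_train_acc:.2f} | val: {final_val_acc:.2f}")
    # A big gap (e.g. ~1.00 train vs ~0.63 val, like ours) is the same
    # overfitting signal we just saw in the epoch_accuracy curves.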
So we'll try using more data later on. But as we run more experiments, as we keep using our create_tensorboard_callback, it's going to continually store logs in the logs folder. And that way, if we turn these back on, you can find out whether one type of model is really outperforming another. So let's have a look at both of them, just on the training dataset. If we look here, they're basically following the same path, going down and down. That's the loss, and the loss going down is good; it means our model is learning something. And if we follow this, they're also following the same path here, which makes sense, because we're using very similar data. Remember, we set a random state way back up when we split our data, when we created our validation set; see how we used random_state there? So both of these experiments are using the same sort of data, and it makes sense that they get very similar results.

So again, you can call this whenever you like in your notebook, whenever you've run an experiment. That way you can check: all right, the model we ran with these particular values in our create_model function got a certain result, and then if we altered this, we could see how that changes the results. So you can start to see how, over time, this gets more and more valuable for tracking your experiments and seeing how they do.

All righty. So now that we've seen TensorBoard and how we can track our experiments over time, I believe our next step is to use our trained model to make some predictions. So let's do that in the next video.
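As a quick preview of that next step, here's a minimal sketch of what making predictions might look like, assuming the model and the validation data batches from earlier in the notebook; the names are illustrative, not the course's exact code.

    # Hedged preview (assumed names): predict on the validation batches
    # with the model we just trained.
    predictions = model.predict(val_data, verbose=1)
    print(predictions.shape)  # (num_images, num_breeds): one probability per breed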