We've got a model up here ready to go, and we've checked out what summarizing it means. But what we'll do first is create some callbacks, so let's make a little heading here. You might be wondering, well, what are callbacks? That's what we're going to dive into in this lesson. So right down here: callbacks are helper functions a model can use during training to do such things as save its progress, check its progress, or stop training early if the model stops improving, plus a few more. These are just a few things, and these are the ones that we're going to implement.

Now, you might be wondering why we should implement callbacks. If you want to find out a bit more about model callbacks, we can search "tensorflow keras model callbacks", and this brings up the documentation: Keras custom callbacks. Excellent, so there are a few more callbacks we could use in there. Oh, there's one, TensorBoard; we're going to be setting that up. While the model is training (because it can be training for a long time) you might want to figure out what it's actually doing. Once you scale up your models (ours won't take this long), you could be training a model for up to a week at a time, especially some of the biggest deep learning models. If your model is training for a week, you're probably going to want to know how it's going and what it's doing. And if it's training for too long, that's the thing: you probably want to stop it early before it begins to overfit. But we're getting ahead of ourselves here.

Let's create a couple of callbacks so we can see them in action. That's going to help solidify what's going on, same with what's going on up here; this function has a fair bit to take in, but once we start to use it and train a model, you'll start to see what's happening. Now, what two callbacks do we want to make? We'll create two callbacks: one for TensorBoard, which helps track our model's progress, and another for early stopping, which prevents our model from training for too long. These are a couple of handy callbacks that you can create with basically any model. TensorBoard is like a dashboard for seeing how well your models are doing: seeing their progress, seeing that the loss is getting reduced and that the accuracy is going up.
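To give a rough picture of where this is heading, here's a minimal sketch of those two callbacks and how they get handed to a model. The model and dataset names are placeholders, and the monitored metric and patience value are illustrative only; we'll build a more complete TensorBoard callback shortly.

import tensorflow as tf

# TensorBoard callback: writes training logs to a directory so progress can be
# visualized in the TensorBoard dashboard
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")

# EarlyStopping callback: stops training once a monitored metric has stopped
# improving for `patience` epochs (the metric and patience here are illustrative)
early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy",
                                                           patience=3)

# Callbacks get passed to the model when fit() is called; `model`, `train_data`
# and `val_data` below are placeholders for your own objects
# model.fit(train_data,
#           validation_data=val_data,
#           epochs=100,
#           callbacks=[tensorboard_callback, early_stopping_callback])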
The early stopping callback, the other of the two main ones we'll be looking at, is something that stops our model training if it has been training for too long and has stopped improving, before it overfits. So let's go. Here we go: TensorFlow callback... oh no, we want TensorBoard, TensorBoard, so let's see. I might actually go in here, callbacks, TensorBoard, and see what it says. There we go, there's the documentation for the TensorBoard callback, but let's code it up first. To load TensorBoard into the notebook, you can use a magic function, the TensorBoard notebook extension: %load_ext tensorboard.

Wonderful. That's going to load the TensorBoard extension into our notebook, so we can have a look at TensorBoard inside here, inside the Colab notebook. We're going to see this in a moment, so just bear with me. To set up a TensorBoard callback we need to do three things (maybe four... actually three), and that was the first one. Let's write that here: to set up a TensorBoard callback we need to do three things. Number one (if I could type, this would be amazing): load the TensorBoard notebook extension; we're going to put a little green checkmark there. Number two: create a TensorBoard callback which is able to save logs to a directory (basically, these logs are how our model is doing during training) and pass it to our model's fit() function. Now, we haven't seen the fit function just yet, but it's very similar to the one you've seen before in scikit-learn. And number three: visualize our model's training logs with the %tensorboard magic function; we'll do this after model training. These three steps are summarized in the sketch just below.

Wonderful. So let's now create a function for step two here. The first thing we have to do is import datetime, because with our TensorBoard callback we want to be able to track each experiment, and one of the best ways to track an experiment is by the time that you ran it. That's why we need datetime, to access the current date and time; you'll see this in a minute. So we go here: create a function to build a TensorBoard callback, make it nice and simple, create_tensorboard_callback. And then what we're going to do is create a log directory for storing TensorBoard logs. What we might set up is just a logs folder in our Dog Vision project in Google Drive, so let's open Dog Vision, yes.
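Here's that three-step outline as it might look in a Colab cell. The %load_ext tensorboard and %tensorboard notebook magics are standard; the log path in step three is the Drive folder described in this lesson, so adjust it to wherever your own logs live.

# 1. Load the TensorBoard notebook extension (a Colab/Jupyter magic)
%load_ext tensorboard

# 2. Create a TensorBoard callback which saves logs to a directory
#    (the create_tensorboard_callback() function built in this lesson)
#    and pass it to our model's fit() function.

# 3. After model training, visualize the training logs, for example:
# %tensorboard --logdir "drive/My Drive/Dog Vision/logs"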
And then a new folder, which I'm going to call logs. This is where we're going to store all of the logs for our TensorBoard callback, so we'll X out of that. Now we're going to create a log_dir variable with os.path.join. This is just for accessing path names: we need access to the path of this logs folder, and that's what we're doing here. We'll use "drive/My Drive/Dog Vision/logs", and then we're going to join it with something that makes the logs get tracked whenever we run an experiment. Every time we fit our model, we want to create some training logs for the current date and time, and this is where we access datetime: datetime.datetime.now().strftime(). This line is just saying, hey, get the exact current time; in my case it would be Tuesday, 5:25 p.m. We could use almost any format we want here, but we'll go for a pretty standard one of year, month, day, hour, minute, second. So every time we run this function, it's going to create a directory in logs with the current date and time appended to its name. Does that make sense? We'll see it when we run it anyway. (A full sketch of the function follows at the end of this lesson.)

And then we want to return tf.keras.callbacks.TensorBoard and pass it log_dir. So see here: "Enable visualizations for TensorBoard. TensorBoard is a visualization tool provided with TensorFlow." Beautiful. "This callback logs events for TensorBoard, including metric summary plots, training graph visualizations." Beautiful, that's what we want. And we need to pass it log_dir, the path of the directory where to save the log files to be parsed by TensorBoard. That's exactly what we've created, so a little green tick here. Come on now... there are my emojis. Wonderful, and we'll run that.

So we've got our first callback, create_tensorboard_callback, and in the next video we're going to create an early stopping callback, and then we might be able to start getting our model ready to train. Actually, we're probably ready to train now, but we're setting up these callbacks because we want to be able to run experiments back to back to back. That's why we're creating callbacks ready to go. So I'll see you in the next video, where we'll create an early stopping callback.
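Putting that walkthrough together, here's a minimal sketch of the function under the assumptions above. The Drive path matches the logs folder we just created, and the strftime format string is one common way to express the year, month, day, hour, minute, second stamp described; your exact format may differ.

import datetime
import os

import tensorflow as tf

def create_tensorboard_callback():
    """Creates a TensorBoard callback which logs to a time-stamped directory."""
    # Build a log directory path inside our Dog Vision logs folder; appending the
    # current date and time means every experiment gets its own subfolder
    log_dir = os.path.join("drive/My Drive/Dog Vision/logs",
                           datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
    # Return a TensorBoard callback pointed at that directory
    return tf.keras.callbacks.TensorBoard(log_dir=log_dir)

Each call to create_tensorboard_callback() then returns a fresh callback writing to a new, time-stamped log directory, ready to drop into the callbacks list of model.fit().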