All righty. So we've got a TensorBoard callback, which is going to help us monitor our model's performance as it's training, so figuring out whether it's learning correctly and doing well or not. Now let's create an early stopping callback. Early stopping helps prevent overfitting, which is our model learning the training data too well, by stopping the model when a certain evaluation metric stops improving. If a model trains for too long, it can actually get so good at finding patterns in a certain dataset that it's not able to use those patterns on another dataset. In other words, it doesn't generalize well. Coming back to our exam analogy: if you were to learn the course materials far too well, like you learned them off by heart, a.k.a. if you're in university and you've studied the course materials so hard that you've memorized them, then when you come to a practice exam or a final exam with questions you haven't seen before, you're not going to do very well, because you memorized everything instead of learning the inherent problem-solving patterns of the course materials. So you're not going to be able to generalize very well. Early stopping helps prevent our model from overfitting the course materials, a.k.a. the training set.
That way, when it comes to a dataset that our model hasn't seen before, it's able to perform well. Let's see it in action and create an early stopping callback. We'll go here, write "Early stopping callback", and change that cell to Markdown. Wonderful. And if we go up here, we could search for "tensorflow keras early stopping callback", and this should take us to the documentation. There we go, EarlyStopping, so that's what you can check out there: "Stop training when a monitored quantity has stopped improving." Let's do that. I might just link this in here, why not. Beautiful. And we'll write a little note here: early stopping helps stop our model from overfitting by stopping training if a certain evaluation metric stops improving. Setting up early stopping is pretty simple. We'll go: create an early stopping callback, early_stopping equals tf.keras.callbacks.EarlyStopping. Wonderful. And we want it to monitor something, so these are the parameters we pass to it. By default, it monitors the validation loss. We're going to see this in a second when we run a model, but we might choose validation accuracy instead. Now, you could choose validation loss if you want, but we're going to choose validation accuracy, and we're going to give it a patience value of three.
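The callback being built above can be sketched like this. `tf.keras.callbacks.EarlyStopping` with `monitor` and `patience` is the real Keras API, but note that `"val_accuracy"` assumes the model is compiled with `metrics=["accuracy"]` and fitted with validation data:

```python
import tensorflow as tf

# Create an early stopping callback that watches validation accuracy
# and halts training once it stops improving for 3 epochs in a row.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",  # the default is "val_loss"
    patience=3,              # epochs with no improvement before stopping
)
```

When we fit the model later, this gets passed alongside the TensorBoard callback, e.g. `model.fit(..., callbacks=[tensorboard, early_stopping])`.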
What is patience? The number of epochs with no improvement after which training will be stopped. What's an epoch? Well, let's figure that out in the next video, I think. So we just run this. These callbacks, as well as this model function, are going to make a lot more sense once we start training a model. So that's our early stopping callback ready: we're going to monitor the validation accuracy, and as soon as it stops improving for three epochs, we're going to stop training our model. So in the next couple of videos, I think we are definitely well and truly ready to start training a model. If we come back to our keynote and what we're focused on: we've picked a model from TensorFlow Hub to suit our problem, and we've turned our data into tensors. Now it's time to fit a model to the training data and make a prediction. So how about we make a function for training a model?
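To make the patience mechanic concrete, here's a minimal pure-Python sketch of the stopping rule (not Keras code; the function name and accuracy values are illustrative): track the best metric seen so far, count epochs without improvement, and stop once that count reaches the patience threshold.

```python
def epochs_trained(val_accuracy_per_epoch, patience=3):
    """Return how many epochs run before the patience rule stops training."""
    best = float("-inf")
    epochs_without_improvement = 0
    for epoch, acc in enumerate(val_accuracy_per_epoch, start=1):
        if acc > best:
            best = acc
            epochs_without_improvement = 0  # any improvement resets the counter
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch  # no improvement for `patience` epochs in a row
    return len(val_accuracy_per_epoch)  # patience never triggered

# Accuracy improves until epoch 3, then plateaus, so with patience=3
# training stops after epoch 6.
print(epochs_trained([0.60, 0.70, 0.75, 0.74, 0.75, 0.73]))  # -> 6
```

Note that an improvement in any later epoch resets the counter, which is why a metric that wobbles around a plateau can still keep training alive.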