All right. So in this lesson we will load our pre-trained model. This is going to be the same model that we saved in the previous lesson; we'll now load it into a fresh notebook and use it to make predictions on the MNIST test dataset.

If you head over to the lesson resources you should see a downloadable zip file, and if you extract the contents of this zip file you should see this Jupyter notebook right here, which is going to be a stub project, and you should also see two CSV files with some test data. So after you've downloaded and extracted these files, take these two test CSV files and drag them over into your MNIST folder, just add them right here. Now, this MNIST folder is a folder that you would have created for the previous module, but if you don't have it, just create a new folder and call it MNIST, in all caps. Once you've done that, drag the stub project over into your projects folder. Now you can go back into Jupyter Notebook and you should see the stub project that I've given you right here.

So just click on it and it will open in a new window. What you should see are a couple of markdown cells that delineate the different sections of the stub project, and you'll find a little bit of code here already to help us load the test data. We've used NumPy extensively already to load text files, so I've just saved you the work and added this code here. The important thing to note about the stub project is that it expects to find the load_xtest.csv and load_ytest.csv files in the MNIST folder, and the folder has to be spelled exactly like this. The file names also have to match exactly.

Now that we're all set up, how shall we get started? Well, to do anything useful in TensorFlow we need two things: first, we need a graph, and second, we need a session. The graph is what holds onto all our tensors and all the operations, and it represents how the data flows from the inputs to the outputs. So let's create a blank graph in our Jupyter notebook. Right here we're just going to create a variable called graph (all lowercase in our case) and set it equal to tf.Graph with some parentheses at the end. What this will do is create a placeholder for us, a Graph object placeholder.
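Just as a rough sketch, here is what the stub's data-loading code plus that blank graph might look like. The exact file names, folder path, and delimiter are assumptions based on the stub project described above, so match them to whatever is actually in your notebook:

```python
import numpy as np
import tensorflow as tf

# Load the test data from the CSV files in the MNIST folder.
# (File names, path, and delimiter are assumed here; adjust to your stub project.)
x_test = np.loadtxt('MNIST/load_xtest.csv', delimiter=',')
y_test = np.loadtxt('MNIST/load_ytest.csv', delimiter=',')

# Create an empty Graph object; it will soon hold the model we load.
graph = tf.Graph()
```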
At the moment this graph is empty, but soon it's going to hold onto the graph that we will load from our model.

The next thing that we need is a session object, and this session, if you recall, is what actually runs our operations in TensorFlow. We can only really do calculations in TensorFlow when we run a session. So let's create a session. I'm going to call it sess, in all lowercase, and once again it's tf.Session with parentheses at the end. Now, this will create a session object, but we actually should link this session to our graph. How do we do that? Well, let's look at the documentation. If you search for "tensorflow Session" then you should be able to go right here to this link, and of course I'll put all of this into the lesson resources. When you scroll down you'll see the section labelled __init__. This is the code that actually creates the session object, and we see that as the second argument we can supply a graph. By default this graph has the value None, so it is indeed an optional parameter, but this is how we can link the graph object that we've created to the session object. So let's modify this line of code right here so that it reads graph=graph between the parentheses. This is the name of the parameter, and here is the actual value, the object right here that we're passing in. Brilliant. So now we have a session and a graph, and the two are linked.
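As a quick sketch, continuing from the cell above, the session setup looks something like this. Note that I'm writing it with the tf.compat.v1 alias as a precaution so the same line also runs under TensorFlow 2.x; the lesson's own code calls tf.Session directly, which works fine under 1.x:

```python
# Create the session and link it to the (still empty) graph we made above,
# using the optional `graph` parameter from the documentation.
# Under TensorFlow 1.x, tf.Session(graph=graph) is equivalent.
sess = tf.compat.v1.Session(graph=graph)
```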
So the next step is actually loading our saved model. How do we load a model? Well, let's pull up the documentation again for TensorFlow. If you google something like "load model tensorflow" then you should be taken to this page right here. If we look at the section "Loading a SavedModel in Python", we see a little excerpt with a bit of code. There we see that there's something called a load function, and it seems to require three arguments: a session, some sort of tag, and an export directory. Let's click on this link right here and pull up a more detailed explanation. When I click on it and navigate to this load function, what I see is the full name of the load function. But just like in the last lesson with simple_save, there are some aliases that make it compatible with future versions of TensorFlow. So in order to get our code to work with the upcoming TensorFlow version 2.0, we're going to use this full name right here: tf.compat.v1.saved_model.load.

So let's import this load function in our Jupyter notebook. I'm going to head back in here and, at the very top, import the load function. That means importing it from tensorflow.compat.v1.saved_model, and from there we can import load. Now I can run all the cells in my notebook, and after you've run all the cells you can call your load function. So I've made a little section note here, "Load the model", and I've pasted in a little bit of the documentation. As we can see in this example right here, we need three things.

The first is that session object, so let's add that right here: we'll write load, open parentheses, and for the session we'll supply our session object, the one we created right here in this cell.

The second thing we need to provide is some sort of tag. In the example in the documentation they've provided tag_constants.TRAINING. Now, we know that we're not looking to train our model; we're looking to serve it instead. So what's the tag for serving? Well, the really neat thing about having used the simple_save function in the last lesson is that it automatically adds a tag to the model, and we can see here that it uses the SERVING tag, which is exactly what we want. The easiest way to get hold of this serving tag in our notebook is to import tag_constants. So let me scroll back up to where we're importing load and tack tag_constants onto the import. I'll hit Shift+Enter on that cell, and then I can come down to where I'm calling my load function and set the tags equal to [tag_constants.SERVING], wrapped in square brackets, following exactly the same format as the documentation.

The last thing we need to provide is the export directory. If we go to the detailed documentation for this load function, we can see in the description that the export directory is simply the directory in which the SavedModel protocol buffer and the variables to be loaded are located. All that means is that the load function needs to know where to find the .pb file and the variables files. We created a subfolder here called saved_model where all these files were saved, so for the third argument we're simply going to write export_dir='saved_model' in single quotes, and all we have to do is spell that folder name exactly the same way as it appears in Windows Explorer or in the Mac Finder.
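Putting those three arguments together, and continuing with the sess and graph we set up above, a minimal sketch of the import and the load call could look like this. Treat the 'saved_model' folder name as a placeholder for whatever subfolder you created when saving the model in the previous lesson:

```python
from tensorflow.compat.v1.saved_model import load, tag_constants

# Load the SavedModel (the .pb file plus the variables folder) into our
# session and its linked graph.
# 'saved_model' is assumed to be the export folder from the previous lesson;
# spell it exactly as it appears on disk.
load(sess, tags=[tag_constants.SERVING], export_dir='saved_model')
```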
Now let's hit Shift+Enter on the cell and see if it works. Brilliant. So now that we've successfully loaded our model, we can use it to make predictions. But how do we do that? How do we make a prediction using this model that we've just loaded? For that we need to remember: what are the inputs we actually need to make a prediction, and what are the outputs of a prediction? We talked a little bit about this in the previous lesson. The inputs were our X's. I've got the graph for our model here, and we can see that the features, namely the pixel values of the image we're going to predict, are stored in this tensor right here. We called this placeholder capital X. So let's retrieve this tensor from the graph. We can do that by writing capital X equals (creating a new Python variable here) and then storing a tensor that I'm going to grab from the graph. I take our graph object and use the function get_tensor_by_name, and within the parentheses I supply, in single quotes, the name of the tensor. In this case that's capital X, because that's what we've got right here, but then I have to add a colon and a zero. That's the full name of the tensor that I need to get from the graph.
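In code, retrieving the input placeholder is a one-liner. The name 'X' comes from how the placeholder was named when the model was built in the previous lesson, so if your placeholder has a different name, use that instead:

```python
# Grab the input placeholder from the loaded graph.
# 'X' is the placeholder's name from the previous lesson; ':0' selects the
# first output of that operation (more on this in a moment).
X = graph.get_tensor_by_name('X:0')
```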
At this point you might ask: why is there this colon and a zero at the end? Why is it tacked on, when we don't see it in TensorBoard? Well, the answer is that this has to do with how TensorFlow works. If we look at TensorBoard, what we actually see are a whole bunch of operations. So "labels" is an operation, and if I expand my first layer, then clicking on any of these things shows that they are all operations too. All of these operations are linked with arrows, and the arrows are the actual tensors. The thing to understand about TensorFlow is that all of these operations have inputs and outputs.

Take this add operation here in our first layer: we're simply adding the biases to the weighted inputs, and what we get from this addition is a single output, the sum. Now say I want to access that sum in the first layer. How would I do that? I would take the name of the operation, layer_1/add, and then, to get hold of the sum, in this line of code here I would write graph.get_tensor_by_name and, in the single quotes, layer_1/add. There's only one little twist: again I have to add a colon and a zero. This gives me access to the actual sum of the addition, the values that result from adding the two things together. If I leave out the ':0', I'm referring to the add operation itself; with the ':0' suffix, on the other hand, I get the sum.

So this is a little bit of a quirk of TensorFlow: by adding ':0' to an operation's name we get access to an output of that operation, in this case the add operation. The reason it's ':0' is that there might be more than one output. In the case of addition, of course, if you add two things you only have one output, but if you had some other operation with more than one output, then the outputs are indexed: the first one is at index 0, the second at index 1, the third at index 2, and so on. So, changing this back to 'X:0', this is how we're going to get hold of our inputs to make our predictions.
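To make the operation-versus-tensor distinction we just walked through concrete, here's a small sketch. The name 'layer_1/add' is taken from the lesson's graph as shown in TensorBoard, so treat it as an example; note that to grab the operation itself the API call is get_operation_by_name rather than simply leaving off the ':0':

```python
# Without ':0' we are naming an operation; the call for that is
# get_operation_by_name.
add_op = graph.get_operation_by_name('layer_1/add')

# With ':0' we name the first output tensor of that operation,
# i.e. the actual sum produced by the addition.
add_sum = graph.get_tensor_by_name('layer_1/add:0')

print(type(add_op))   # a tf.Operation
print(type(add_sum))  # a tf.Tensor
```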
So the next question is: how do we actually access the predictions from our model? I actually want to put this over to you as a challenge. Can you get hold of the tensor that will hold the model's predictions from the graph? Store these predictions in a variable called y_pred. I'll give you a few seconds to pause the video before I show you the solution.

Ready? Here's the solution. If you recall, the output of our model was this operation right here; it had the name accuracy_calc/prediction. So I'm going to copy this name, create the variable y_pred, and use the same code as above: graph.get_tensor_by_name, and then as an argument, in single quotes, the name of the operation followed by a colon and a zero. This line of code stores the tensor that holds the actual predictions in this variable right here.

So now we have everything we need, the inputs and the outputs, to make predictions using our pre-trained model. The next step is making predictions on our test dataset. How do we do that? Well, any time we want TensorFlow to make a calculation or give us an output of some sort, we have to run a session, and we do that by writing sess.run: we take our session object and call its run function, which will usually give us some sort of output. I'm going to store this output in a Python variable called prediction, and now I have to provide some arguments to this run function. Let's take a look at the documentation again for the Session. Here we've got the run function, and it can take a number of arguments. I'm not interested in the options or the metadata; I'm only interested in these two: the fetches and the feed dictionary. So what are the fetches and the feed dictionary? Well, the fetches are the values that we want to fetch from the graph; here we specify the kind of output we want from the session, and in our case we want the predictions, right? And the feed dictionary is the data that the prediction is based on.

So once again I'd like to pose this to you as a challenge: can you figure out how to call this run function and supply values for the fetches and the feed dictionary that will give us the predictions from our model? Remember, the goal is to predict the handwritten digits that are stored inside x_test. I'll give you a few seconds to pause the video before I show you the solution.

Ready? Here's the solution. For the fetches, I'm going to set those equal to the output we're after: y_pred, the output right here. And for the feed dictionary we're going to have to provide x_test. This has to be supplied in the form of a Python dictionary with a name and a value: the name is going to be capital X, and the value is going to be x_test. And that's it.
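Here is that run call as a compact sketch, using the X, y_pred, and x_test objects we set up above:

```python
# Fetch the prediction tensor, feeding the test images into the input
# placeholder X.
prediction = sess.run(fetches=y_pred, feed_dict={X: x_test})

print(prediction[:5])  # predicted digits for the first five test images
```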
So let's execute these cells and check if it worked. First I'll execute this cell right here, and second I'll execute this cell right here where I'm running my session. Let's see what the actual values are for the first couple of labels in the y_test dataset: y_test, then square brackets, colon five. We see that the first digit is a seven, the second is a two, the third is a one, and so on. Now let's take a look at the predictions. The predictions that result from running our session we can pull up like so, just like the cell above, and we can see that the output seems to match.

And that's really encouraging, right? It suggests that we've successfully loaded our model and successfully made predictions on x_test that match the correct values. But just because the first five results match doesn't mean they all match, right? So as a challenge: can you calculate what percentage of the test dataset we predicted correctly in this notebook? There are a number of ways you can do this, but I'll give you a few seconds to pause the video before I show you a sample solution.

All right, here's how I would do it. I would store the number of correct predictions in a variable, something like nr_correct, and set it equal to a check for equality using NumPy's equal function, checking whether two things are equal, namely prediction and y_test. Let's have a look at what this looks like: I get an array of booleans, so each entry that's equal to True counts as a one and each entry that's equal to False counts as a zero. So what I can do then is simply sum them all up; this gives us the number of correct predictions. And to calculate the percentage, I need to divide the total number of correct predictions by the number of entries. In our case we've got 10,000 entries in this dataset, so I could either divide by ten thousand or just divide by the length of this array, which is len, parentheses, prediction. What I get in this case is 97.86 percent.
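A minimal sketch of that accuracy check with NumPy (any equivalent approach works just as well):

```python
import numpy as np

# Boolean array: True wherever the predicted digit matches the true label.
nr_correct = np.equal(prediction, y_test)

# Summing booleans counts the Trues; dividing by the number of test
# examples gives the fraction predicted correctly.
accuracy = nr_correct.sum() / len(prediction)
print(f'Test set accuracy: {accuracy:.2%}')
```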
And that's it; there's just one last thing to do, and that's a little bit of cleanup, because at the moment our TensorFlow session is actually still running. It's still open. So we can take our session object and close it, and we can reset the default graph. And that's it, all done.

At this point we've successfully saved and loaded a SavedModel using TensorFlow, and we've also verified that our saved model still works and can make accurate predictions. The next step is converting our TensorFlow model into something that is able to run in the browser, and for that we need to convert our TensorFlow model into TensorFlow.js. That's exactly what we're going to do in the next lesson. I'll see you there. Take care.