In this lesson we're going to talk about name scopes, and the reason we're going to talk about name scopes is so that we can start grouping different types of operations together in our TensorFlow graph. While we're doing that, we'll also get a chance to talk about a Python programming concept in particular: we're going to make use of something called a context manager.

Wouldn't it be nice to group certain operations together in our graph? For example, this set of operations here all happens in the first hidden layer. We've got our weights that are multiplied with our inputs, then we've got our biases that are added to the result of that multiplication, and then we have all of that feed into our activation function. All of this could be represented as a single unit. If we do that for every single layer, we can see how the different layers are connected on our TensorFlow graph.

The way we can accomplish this in TensorFlow is with something called a name scope. The name scope will help us organize our calculations, and it does this using something called a context manager. If you think about it, all the different calculations that are part of, say, the first hidden layer should really belong to the same context.

So I'll write with tf.name_scope('output_layer'): with the open parentheses, single quotes, output_layer, and a colon at the end, then select all of these lines of code and hit Tab on my keyboard to indent them. What this does is make all of these lines part of the same block. All of these lines of code now have the same context, and that context has a name, and that name is going to be output_layer. So all of these operations that I've got here, the ones that belong to my output layer, should now be grouped in TensorBoard.

Let's see if this works. I'll come up here to "Set Up TensorFlow Graph", go to Cell > Run All Below, and update... my graph looks the same as before. Well, that's because it is the same as before: I ran my model at 15:06 and again at 15:25, so I have to switch to my most recent run. And now you can see all the operations that have the same name scope grouped under my output layer. If I want to peek inside and see what operations are actually in there, all I need to do is double-click, and then I can see all the operations, all the calculations, that I've grouped under the same name scope.
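For reference, here is a minimal sketch of what that output-layer cell looks like once it's wrapped in the name scope. This is TensorFlow 1.x style, and the specific names (w3, b3, layer2_out, n_hidden2, n_classes) and the truncated-normal and constant initializers are assumptions standing in for whatever the notebook defined earlier:

```python
import tensorflow as tf  # TensorFlow 1.x style API

# Everything created inside this block gets the 'output_layer/' prefix,
# which is what TensorBoard uses to collapse it into a single node.
with tf.name_scope('output_layer'):
    initial_w3 = tf.truncated_normal(shape=[n_hidden2, n_classes], stddev=0.1)
    w3 = tf.Variable(initial_value=initial_w3)

    initial_b3 = tf.constant(value=0.0, shape=[n_classes])
    b3 = tf.Variable(initial_value=initial_b3)

    # weights times inputs, plus biases, fed into the activation function
    layer3_in = tf.matmul(layer2_out, w3) + b3
    output = tf.nn.softmax(layer3_in)
```

tf.name_scope is itself a context manager: entering the with block pushes the prefix onto every operation created inside it, and leaving the block pops it off again.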
So what I'd like you to do is pause the video real quick and group all the other operations that belong to the second hidden layer and the first hidden layer, to clean up our graph. I'll give you a few seconds to pause the video and give this a go.

Ready? Here's the solution. We have to do a little bit of reorganization in our code to add a name scope to our second hidden layer. It's pretty straightforward, because all we need to do is write with tf.name_scope, and we can give it any name we want, say hidden_2, with a colon at the end, and then indent the code below it. So that's our second hidden layer done.

For our first hidden layer we're going to have to move some of our code to the same cell to make our life easier. So I'll take this line here, and this bit of code here, and move it below; same with this one, move it here. Then I'll take these three cells here and delete them. Now I can add the same kind of line, with tf.name_scope('hidden_1'): with the open parentheses, single quotes, hidden_1, and a colon at the end, and indent the code.

Then all I do is go to this markdown cell here and hit Cell > Run All Below. Back in TensorBoard I refresh my page and go to my most recent model, this one right here, and we can already see that our layers are grouped very, very nicely. We've got our inputs, our Xs, going into our first hidden layer; the output of our first hidden layer goes into our second hidden layer; and the output of the second hidden layer goes into our third layer, our output layer.

But I tell you what, looking back at our code, it's starting to look a little bit repetitive, right? We've got this block right here, this block right here, and this block right here, and I think we can all see that most of this code is actually the same. So why not create a single function that will set up all our layers for us?

I'm going to define a function and call it setup_layer, and it's going to take four inputs. The first input I'll just call input; the second I'll call weight_dim, the weight dimensions; the third I'll call bias_dim, the bias dimensions; and the fourth I'll call name. Then I'm going to have a with block here that reads with tf.name_scope(name):, and inside that block I'm basically going to take everything I normally do to set up a layer and do that inside my
function. I'll have to tab this over a little bit, and actually fix the indentation here by going back a level; Shift+Tab will go back one. Now I can start replacing the things that will vary with my parameters. So this shape will read weight_dim. The weights I'm actually just going to call w, and the biases I'm just going to call b. The shape for my biases is going to be my bias dimension, bias_dim. Then I'll make my variable names a little bit more generic: this is going to be initial_w and this is going to be w, this is going to be initial_b and this is going to be initial_b as well, and this will just be b.

Here's where it gets interesting again. The input to this layer, layer_in, is going to be equal to this parameter here, my input parameter, multiplied by the weights, plus the biases. And then the output has to vary, right? The output has to be different for the output layer versus the first or second hidden layer. So I'm going to do this: if the name is equal to (double equals) 'out', for output, then my output will be the softmax, tf.nn.softmax(layer_in). However, if the name is different from 'out', then we simply say the output is going to be tf.nn.relu, which is what we have for all the hidden layers, applied to layer_in. My function will then return my output.

So that should be fairly generic, right? This will allow us to insert a cell below and set up all of our layers in a couple of lines of code. layer_1 is going to be equal to setup_layer(, and then what's going to be the input to the first hidden layer? It's going to be our placeholder, capital X. The weight_dim is going to be square brackets with the total number of inputs, and the other dimension for this one is going to be n_hidden1, so 512. The bias_dim is going to be square brackets n_hidden1, the number of neurons, again 512. And then we're going to set name equal to 'layer_1'. I'll just split this over a couple of lines, two lines. Here we go.
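Here's a sketch of the finished function together with the layer-1 call. The 0.1 and 0.0 initializer constants are assumptions carried over from the earlier per-layer code, and TOTAL_INPUTS and n_hidden1 are assumed to be defined higher up in the notebook (784 and 512 respectively). The parameter is called input to match the video, even though it shadows Python's built-in:

```python
def setup_layer(input, weight_dim, bias_dim, name):
    """Build one fully connected layer inside its own name scope."""
    with tf.name_scope(name):
        initial_w = tf.truncated_normal(shape=weight_dim, stddev=0.1)
        w = tf.Variable(initial_value=initial_w, name='W')

        initial_b = tf.constant(value=0.0, shape=bias_dim)
        b = tf.Variable(initial_value=initial_b, name='B')

        layer_in = tf.matmul(input, w) + b

        # The output layer gets softmax; every hidden layer gets ReLU.
        if name == 'out':
            layer_out = tf.nn.softmax(layer_in)
        else:
            layer_out = tf.nn.relu(layer_in)

        return layer_out

# First hidden layer: 784 inputs in, 512 neurons out.
layer_1 = setup_layer(X, weight_dim=[TOTAL_INPUTS, n_hidden1],
                      bias_dim=[n_hidden1], name='layer_1')
```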
That's the setup for layer 1, done. I can do the same thing for layer 2 and for our output layer. So, layer_2 and output. I'm actually going to go back to my setup_layer function here and call this thing layer_out, that way we avoid confusion; I'm just going to rename these variables so they don't clash with this one, because I want to keep the output of my model with the name output.

Now let me update the weight and the bias dimensions. Our second hidden layer is of course going to have 512 inputs, but it's only going to have 64 neurons, so n_hidden2. The output layer is going to have 64 inputs, n_hidden2 again, and 10 outputs, NR_CLASSES. For both of these layers the number of neurons also has to be reflected in the bias dimension: for the second hidden layer that's n_hidden2, and for the output layer it's again the number of classes.

The names for layer 1 and layer 2 we're going to keep as 'layer_1' and 'layer_2', and then for the output layer, in order to trigger that if statement, I'm just going to name it lowercase 'out', so that this string here matches this one. That way my output layer gets the softmax activation function.

That leaves us with one last thing to do: the outputs of the first layer are going to be the inputs for the second layer, and the inputs for the third layer are going to be the outputs of the second layer. So this placeholder, this capital X, is only going to be the input for the first layer; the input for the second layer is actually going to be layer_1, and the input for the output layer is going to be layer_2.

Now what I'm going to do is take these old lines of code and move them to the very bottom of our notebook. I'm going to keep hold of them, not delete them, but I'm moving them all the way down, and I'll add a markdown cell here that's going to read "Code for First Part of Module". I think what I'll do is comment them out, so we're keeping them around as a reference, but we're not using them anymore; we're not executing them anymore.
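With the function in place, the remaining two calls might look like the sketch below. n_hidden2 and NR_CLASSES are assumed constants (64 and 10 here), and the lowercase 'out' is what triggers the softmax branch inside setup_layer:

```python
# Second hidden layer: 512 inputs in, 64 neurons out.
layer_2 = setup_layer(layer_1, weight_dim=[n_hidden1, n_hidden2],
                      bias_dim=[n_hidden2], name='layer_2')

# Output layer: 64 inputs in, 10 class scores out. Naming it 'out'
# makes setup_layer apply softmax instead of ReLU.
output = setup_layer(layer_2, weight_dim=[n_hidden2, NR_CLASSES],
                     bias_dim=[NR_CLASSES], name='out')
```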
Now that we've done all of that, we can come back up here to our "Set Up TensorFlow Graph" markdown cell and run all below, and what we get is a crash. Why? Because we've still got some old code floating around: this line here will now error out, because there is no longer a tensor called b. So I'll move this line of code down to the very bottom as well and comment it out. Before I rerun everything, I'm going to execute this cell here, which closes our session and resets our graph. Then I come back up and try again, Run All Below, and now our model starts training again, and we can come back into TensorBoard.

Now that we've successfully created some name scopes for our layers, we can also create name scopes for the other calculations that we're doing. In particular I'm thinking of our optimization, because our optimizer is actually connected to all of our layers, and I'm thinking of our loss function and our accuracy calculation. There's a lot going on here, a lot of nodes and a lot of edges, and we can consolidate those. Let's tackle them one at a time.

I'm going to hit the accuracy calculation first. For the accuracy calculation I'm going to provide a name scope that reads with tf.name_scope('accuracy_calc'): and tab the code over so that it's inside the body of this scope. I'm also going to come down here, where we've got our two summaries, and provide a name scope for these two as well: with tf.name_scope('performance'):, which is going to be the performance name scope. Next I'll tackle the optimizer. The optimizer I'm just going to give the name scope with tf.name_scope('optimizer'):, and I think that's pretty straightforward; I'll add the indentation as well. So there we go, that's the name scope for the optimizer done. And while we're at it, let's add a name scope for our loss function too: with tf.name_scope('loss_calc'):, colon at the end, and tab everything over.
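Here's a sketch of the four scopes. Only the with tf.name_scope(...) wrappers are the point of this step; the loss, optimizer, and accuracy expressions below (and the names Y and the Adam learning rate) are plausible stand-ins for whatever the notebook already contains:

```python
with tf.name_scope('loss_calc'):
    # Stand-in for the notebook's existing loss expression.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=output))

with tf.name_scope('optimizer'):
    # Whichever optimizer the notebook already uses; Adam shown as an example.
    train_step = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)

with tf.name_scope('accuracy_calc'):
    correct_pred = tf.equal(tf.argmax(output, axis=1), tf.argmax(Y, axis=1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.name_scope('performance'):
    tf.summary.scalar('accuracy', accuracy)
    tf.summary.scalar('cost', loss)
```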
Now let's go to the very top and run all the cells again, so Cell > Run All Below, then go into TensorBoard, refresh the page, and take a look at our latest graph. Remember to change the run dropdown here to your most recent run after you've made all the changes, and you should see something like this: everything is a lot more tidy. We've got our inputs here, X; those go into layer 1, but they're also connected to our optimizer. We've got our first hidden layer feeding through to our second hidden layer, which feeds through to our output layer, and our outputs go into the accuracy calculations as well as the loss calculations. The loss calculations of course also feed through to the optimizer, because the optimizer is connected to all the weights in all the layers, and of course it's also connected to the loss calculations. It's not connected to the accuracy calculations; it doesn't have anything to do with those. So looking at the graph this way, with everything grouped together, it's a lot clearer.

One more thing we can do is right-click on this node and remove it from the main graph, because it's a little bit distracting; it's connected to everything, right? If we move it to the side, we can see it's still there, and you can still see it connected to the optimizer, but now the main graph represents our neural network in the clearest way yet. We have our inputs, our features, our Xs, feeding through from the bottom, going through all the layers, and at the very top of our graph we have the two outputs we care about: the loss and the accuracy.

That almost covers everything I want to talk about with the graph, but before I end this lesson I want to show you one other cool thing TensorBoard can do: it can also show us some images, namely the images of the numbers we have in our dataset. To show you how, we just have to add a single operation, and we add it just above the code where we run our session. So I'll add a quick markdown cell here that's going to read "Check Input Images in TensorBoard", and the way we're going to do this is with a summary.

There's a TensorBoard summary called image: tf.summary.image. I open the parentheses and provide a name for the summary, which I'm going to call 'image_input', and then I have to provide a value. Now, this is the tricky part: I have to get an image back out of our data. So I'm going to leave this for now, and I'm going to have x_image as the variable I want to feed into my image summary. That's going to be equal to some data from our placeholder tensor.
All the data in the placeholder tensor is in a single row, right? Seven hundred and eighty-four values. In order to display the image, which is 28 by 28 pixels, we have to reshape it. So x_image = tf.reshape(, and I grab my placeholder tensor, and then I have to say how I'm going to reshape it. I add some square brackets here and reshape it into the following dimensions: -1, 28, 28, 1. The 1 is the colour channel, 28 is the width, and 28 is the height. The -1 is there because I've only specified three dimensions, and this remaining dimension stands for all the samples; in other words, I'll have more than one image that's 28 by 28 with one colour channel.

Now that I've got this, I can come down here and supply x_image to my summary, and I'm going to supply one more argument, namely the number of images I actually want to show up in TensorBoard. max_outputs is the name of the parameter, and I'm going to say 4, because I don't want to see all sixty thousand of them, right?

Since the reshape and the summary are operations in TensorFlow, we can also give them a name scope. So I'm going to say with tf.name_scope('show_image'): as the name scope for this cell, and tab this over so that it's in the body of the with block. Now both the reshape calculation and my summary have the same context.

Let's go back up to where we set up TensorBoard and run everything below. Our model is already training, and we refresh TensorBoard. What we should see is a new type of summary: previously we only had the scalars and the graph, but now we've got another type here, namely our image summary, because we added an image summary. If I change my graph to my latest run, you'll also see this additional node: our show_image name scope covering two calculations, namely our summary and our reshape calculation. These in turn are connected to our features tensor, our X.
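Put together, that cell's code looks roughly like this, assuming (as above) that X is the 784-wide features placeholder:

```python
# Check input images in TensorBoard
with tf.name_scope('show_image'):
    # Each row of X holds 784 flattened pixel values; reshape every row
    # into a 28x28 image with 1 colour channel. The -1 lets TensorFlow
    # infer the batch dimension from however many samples are fed in.
    x_image = tf.reshape(X, [-1, 28, 28, 1])

    # Log at most 4 images per run rather than all 60,000 samples.
    tf.summary.image('image_input', x_image, max_outputs=4)
```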
Now let's check out our Images tab. Here in my Images tab I get an image summary for every run that I've done, and what I'm looking at are four images each from my training run and my validation run. In my training run I've got the number six; in my validation run I've got the number five showing up. If I just wanted to see one of them, I could hit this radio button here, so we only see that one; now I've just got my validation dataset, for which I'm looking at these four images. Why four? Because that's our max_outputs, right?

So that's pretty cool, right? Not only did we learn about name scoping and grouping operations in our TensorFlow graph, we also got to use a different kind of summary: we've used the scalar type of summary many, many times before, and now we got to see the image summary as well.

In the next lesson we'll try out a few different models, and that will let us look at a new kind of summary too: the histogram. The histogram is incredibly powerful, because it will allow us to see how our weights are changing over time as we run the model and train it over the course of several epochs. So stay tuned for all of that and more in the next lesson. I'll see you in a bit.