Statistics, and troubleshooting with them. Basically, Wireshark takes a whole bunch of information and provides it to you in these summaries, or gives you tools so that you can do a baseline. So, on this slide, you can get general information about everything that's going on, so that you can make some decisions about what you're seeing and how that is going to affect your troubleshooting process.

Some of the most helpful ones, which we've already touched on as we went through the other modules, and which we will continue to touch on in future modules (and hopefully you get into the habit of using them, if you do not do so already), are these summaries and conversation summaries. They show you specifically what's on your network: what's speaking and what it's speaking to, what's communicating with what, in what direction, and other important pieces of information such as what protocols are running. Knowing what was captured helps you build out your filters and helps you come to some conclusions about what may be running on your network.

There is a whole subset of service response time graphs which allows you to plot specific things, such as, if you're using an application, how it's actually being seen as a round trip or something of that nature, plotted on a graph.
You can find these under Statistics. You can use your I/O graph, where you can add more filters, and you can use flow graphs so that you can see traffic flow patterns. This is extremely helpful when looking at TCP, specifically the TCP handshake.

We briefly touched on this menu before, as we were talking about filters, because I thought it might be relevant to helping you build filters, but this is really where we're going to go into this menu and talk about what you can find here. On this menu you can select from about a dozen tools which will allow you to do some statistical analysis with Wireshark. One of the key things here is that some of the tools we already talked about are found on this menu, as well as some others which we will get into in future modules as we troubleshoot some specific problems. For example, if we're looking at wireless LAN problems, we will use the Wireless LAN Traffic option to see specific SSIDs. Regardless, it should be something that you get in the habit of using, something that you're comfortable with, and something that you test with so that you can see the specific options that are available.

So what I'll do is show you some of the things we can do here. Let me clear out my filter from before.
The first thing I want to do is look at a summary. This is my capture summary; it's everything that's in the capture. As you can see here, what's helpful is the name and location of the file, or the capture itself; how big the capture is; and the file format, which is pcapng. We will talk about file formats in the last module, but what's very important about that is that there are different formats you can save in, and if you save in an older format such as pcap, these capture file comments or packet comments will not translate over, so you may miss some information.

It will tell you the date and time. As we talked about in a past module, it is critically important to make sure that the timing on your system is accurate, perhaps by using NTP; otherwise you're going to have incorrect time. It also shows how long the capture was, where it was captured, and so on. So there's a lot of information here that you can glean, specifically the number of packets, and this is a good one right here: the elapsed time between the first and last packet, in seconds.
Where does this come in handy? If you were capturing a simple conversation that you were running a pre-capture filter on, and you knew you were isolating that conversation, you might be able to see right there that the entire conversation took X amount of seconds. So there are some things you can do to make the information here more accurate, and this can be very helpful in determining some high-level causes of an issue.

There's also a comments summary, so if you made comments on the packets you will be able to pull them up there, and it's nice because you can either copy it or save it as a file, so you can paste the information into a report.

You can show address resolution. The protocol hierarchy, which we already covered in a previous module, will show you your top-talker information, the data on your network, and what protocols were in use. You can see your conversations; this was critically important for us in figuring out where to apply a conversation filter, which, as we mentioned before, we can filter directly on. Other important statistics include the endpoint list: you can find what endpoints are in the capture, and you can search and find what your packet lengths are.
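The conversation and endpoint summaries mentioned above essentially boil down to tallying bytes per address and per address pair. As a minimal stdlib sketch of the idea (the packet tuples here are made up for illustration, not taken from a real capture):

```python
from collections import Counter

# Hypothetical parsed packets: (source IP, destination IP, bytes on the wire).
packets = [
    ("10.0.0.5", "10.0.0.9", 1500),
    ("10.0.0.9", "10.0.0.5", 60),
    ("10.0.0.5", "10.0.0.9", 1500),
    ("10.0.0.7", "10.0.0.9", 400),
]

# Top talkers: total bytes sent per endpoint, like the Endpoints window.
talkers = Counter()
for src, dst, size in packets:
    talkers[src] += size

# Conversations: bytes per unordered address pair, like the Conversations window.
conversations = Counter()
for src, dst, size in packets:
    conversations[frozenset((src, dst))] += size

print(talkers.most_common(1))  # the loudest endpoint on the segment
```

Sorting the conversation counter the same way is exactly the "who is talking the most, and to whom" question the instructor keeps coming back to.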
One of the helpful things in here (sorry, I had the wrong screen open) is packet lengths. Why this can be so helpful to you is that we essentially do not want a ton of tiny packets on our network; it just makes everything work harder and inundates buffers. What we would rather have is traffic at the correct MTU size, or even jumbo frames if you have them enabled across your network, so it all comes in one conversation and the network isn't inundated with packets that increase I/O. As you can see in this summary, there are actually a lot of little packets, so that might be something you want to look at as you're troubleshooting. You may want to say, well, maybe poor performance is because there are a lot of small packets.

You can build an I/O graph. This will show you some specific things. We have a module on this, so I don't want to go too deeply into it here, but you can filter specifically on some traffic. I believe this is HTTP in here, and I can see some spiking. It allows me to work with the data so that I can really see what's going on, and I can change the way I see it. You can get these very quickly from your Statistics menu. You can do a compare.
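The packet-length statistic is just a histogram over size buckets. A small sketch of that bucketing, with edges loosely modeled on the ranges Wireshark shows (the sizes fed in at the end are invented sample values):

```python
from bisect import bisect_right
from collections import Counter

# Bucket edges loosely modeled on Wireshark's Packet Lengths statistic.
EDGES = [40, 80, 160, 320, 640, 1280, 2560]
LABELS = ["0-39", "40-79", "80-159", "160-319",
          "320-639", "640-1279", "1280-2559", "2560+"]

def length_histogram(sizes):
    """Count frames per size bucket. A pile-up in the small buckets is
    the tiny-packet symptom described above; healthy bulk transfers
    cluster near the MTU instead."""
    hist = Counter()
    for size in sizes:
        hist[LABELS[bisect_right(EDGES, size)]] += 1
    return hist

hist = length_histogram([60, 60, 64, 1514, 1514, 72])
```

Here four of six frames land in the 40-79 byte bucket, which is the kind of result that should prompt the "why so many small packets?" question.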
You can pull up specific HTTP information, the requests as an example. You can do your TCP or UDP stream graphs from here. One thing to mention: if the packet you have highlighted in your packet list pane is not a TCP packet, the TCP stream graph options will be grayed out and you will not be able to see them, so that may be something you want to take heed of. If the selected packet matches (TCP for the TCP graphs, UDP for the UDP options), it will allow you to pull up a graph of the actual traffic, and you can do some statistical analysis on it.

The flow graph: this is very helpful. We'll just look at all packets, as a TCP flow. We also have another module on this, so we won't get very deep into it, but here's where it's helpful. It will show you, from one particular IP to another, packet by packet, specifically showing you the time deltas and exactly what's taking place. So you can see, for example, a large number of resets, if that's something that was problematic. Or you may see duplicate ACKs, if you believe it shouldn't be retransmitting as much. So there's a lot of information that you can glean from in here, and so on and so forth.
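The duplicate-ACK pattern the flow graph exposes is easy to state in code: the receiver keeps acknowledging the same sequence number because a segment never arrived. A minimal sketch, using an invented one-direction ACK stream:

```python
# Hypothetical stream of acknowledgement numbers sent by the receiver.
acks = [1000, 2000, 2000, 2000, 2000, 3000]

def duplicate_acks(ack_numbers):
    """Count duplicate ACKs the way a TCP sender perceives them: every
    repeat of the previous acknowledgement number is one duplicate."""
    dups = 0
    prev = None
    for ack in ack_numbers:
        if ack == prev:
            dups += 1
        prev = ack
    return dups

count = duplicate_acks(acks)  # three duplicates of ACK 2000
```

Three duplicate ACKs in a row is the classic fast-retransmit trigger, which is why a run of identical acknowledgements in the flow graph is such a strong hint that a segment was lost.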
We don't want to get too deeply into each one of those tools, because we have a separate module on them, but we wanted to show you that you can in fact pull up some key data and statistics from here, and you can look at your capture as a whole. That's essentially what we really want out of this menu: the tools. We want to ask, as a whole, what does this capture look like in the realm of protocols, in the realm of errors, in the realm of objects, in the realm of whatever it is that you want to see as a menu option? What does it look like as a whole? Then we can drill down from there into key areas, which is extremely helpful.

This is very helpful when you run a capture for the first time. A lot of times the suggestion would be that you open this menu up and really take a deep dive into the overall picture of what's going on, so that you can then decide how you want to drill down. This is very helpful for large captures where you may not know exactly what the issue is. You may not even know what's running on the network, because you may not be familiar with it, or you may just not know that it's there.

So, Wireshark will allow you to do statistical analysis on the data.
We find it through the Statistics menu, and some of the very helpful things we can glean from this menu, its options, and its tools are: what protocols are running on your network that Wireshark has captured from that segment; who the top talkers are, meaning who is talking the most, from whom to whom, and whether it is a one-to-many or many-to-one type of conversation; is it unicast; what type of conversation is it? We have some tools in there that we will get into in more detail, specific tools that will allow us to gather more information about the capture we just took.

Two notes. First, remember things may be grayed out. They'll be grayed out if they're not an option. So, if you do not have any wireless traffic, the Wireless LAN (WLAN) Traffic tool will not be available to you. Be aware that if it's grayed out, it's for a reason: it's not relevant to that capture. Second, make sure that as you're going through your capture you're making notes. You can paste a lot of this material into a report, or export it into one, to give you an overall baseline of your network operating under good conditions. Then, if it's operating with degraded performance, you can look at both reports, or both captures, and figure out statistically what the differences are.
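That baseline comparison can be made concrete: normalize each capture's protocol-hierarchy counts to percentages, then diff them. A sketch of the idea, with made-up counts for a "good" and a "degraded" capture:

```python
def protocol_shares(counts):
    """Convert raw per-protocol packet counts into percentages."""
    total = sum(counts.values())
    return {proto: round(100 * n / total, 1) for proto, n in counts.items()}

def baseline_diff(good, bad):
    """Percentage-point shift per protocol between two captures."""
    protos = set(good) | set(bad)
    return {p: round(bad.get(p, 0.0) - good.get(p, 0.0), 1) for p in protos}

# Hypothetical protocol-hierarchy counts from the baseline capture and
# from a capture taken while performance was degraded.
good = protocol_shares({"tcp": 800, "udp": 150, "arp": 50})
bad = protocol_shares({"tcp": 500, "udp": 100, "arp": 400})

shift = baseline_diff(good, bad)  # ARP's share jumped sharply: worth a look
```

Comparing shares rather than raw counts matters because the two captures will almost never contain the same number of packets.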
And lastly, just remember one of the key aspects of using this tool: let's find out what's running on our network, what we're capturing, so that we can then further drill down into it. If we're not seeing, for example, SNMP, as you do not see in this particular statistics capture, you may not want to start worrying about how to build filters for it, because it's likely that it's not there in the capture to search for. So, hopefully this module, these tools, and learning about them has made it easier and more efficient for you to use Wireshark.

Alright, so one of the questions in the chat is: do we have scenarios of issues to go through, and how we use Wireshark to come to the conclusion of a problem? If you look at the syllabus, tomorrow is pretty much all that. We're going to go through voice, HTTP, FTP, wireless; each module is a problem, and we'll look at Wireshark and figure out how to solve that problem. So yes, today I'm prepping for that as we talk about the tools themselves.
So as an example, we did bring up a DNS issue where the client could not communicate. It was in a large capture full of data, so we filtered out all the data that we did not need to see, isolated the communication, and showed the actual failure of the client being able to resolve DNS. So yes, we do have scenarios. Today's scenarios are built into the modules, whereas tomorrow each module is itself a scenario. I hope that helps answer your question.

Packet loss is a little tricky to capture; Wireshark will give you clues. One of the things that we're going to talk about in the next module is using the flow graph, and actually the timing of that question is perfect, because the flow graph, when you pick the TCP flow, is going to show you specifically how well your application is performing. So for example, if you're trying to send a request to pull a webpage, and you see in the capture that the data keeps retransmitting, or you're getting duplicate ACKs, or you're seeing a lot of retransmissions, it's likely that something may be getting dropped somewhere.
It could be something else, but there's a way to isolate that, and the clues that you're going to get may come from Wireshark's flow graph. So we're going to get into that in the next section, but just real quick: when you go to the Statistics menu and pull up the flow graph, you can take a look at either all or displayed packets, the TCP flow, and it will show you a very detailed view of exactly what's going on from source to destination. There are actually multiple IPs up here, source to destination; that's what the arrow is showing you. And it will show all the TCP communication. Here I have a bunch of resets, which could be an issue. If you see a ton of resets coming back, there's obviously something wrong there. If I see constant duplications of an ACK, that may mean something's getting dropped and has to be resent. So there are some granular filters you can put in, which we covered in another Q&A section, or you can use something such as the flow graph to try to figure out what's going on from one IP to another IP, and see if that gives you a hint.
You can also go to the Analyze menu and look at the Expert Information, where it may tell you, as an example, that you have duplicate acknowledgements, and that you have many of them. They may be coming from the same IP, source to destination. We may have tons of suspected retransmissions; that may be an issue. We can have windowing problems, where buffers are overloaded. So, all of these things can relate to packet loss. And the more you dig into the tool and look specifically from one IP to another where you may think there's a performance issue, isolating retransmissions, duplicate acknowledgements, and windowing problems: if you see all of this from something that's performing poorly, it's likely that you may have some packet loss.
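The Expert Information findings just described also map onto display filters you can type straight into the filter bar to isolate the offending packets. A few standard Wireshark filter fields that correspond to the symptoms mentioned:

```
tcp.analysis.duplicate_ack
tcp.analysis.retransmission
tcp.analysis.zero_window
tcp.flags.reset == 1
```

The first three come from Wireshark's TCP analysis engine, the same findings that Expert Information aggregates; the last matches any segment with the RST bit set.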