Speaking of which, let's take a look at Log Analytics and the Log Analytics agent deployment process next. Azure Log Analytics is a cloud-based log aggregation platform. On the left side of this diagram I drew, we have logs emitted within Azure; certainly all of your Azure resources, the Microsoft-provided ones, emit log streams. There's multi-cloud as well: I show Google Cloud here, and I don't have AWS on the diagram, but it's certainly supported. And in your on-premises environment, those machines are kicking out logs, too.

You can also create custom log sources; that's why I left a gap here, just to show you that. You may have line-of-business, homegrown applications that generate logs you want to centrally manage and centrally report on. That is the idea of Log Analytics: you point those on- and off-cloud machines to your Log Analytics workspace. The workspace is the Azure resource that houses all of the logs. As long as Log Analytics understands what the log is and what its definition is in terms of schema, it will convert those logs into what look like relational database tables, with columns and rows. And as I said, there is support for custom logs as well, so you're not limited only to the logs that Log Analytics knows about out of the box.
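To make that table model concrete, here is a minimal sketch of a query you could run in a workspace. It assumes the standard Heartbeat table, which agent-connected machines typically populate; your workspace's tables will depend on which sources you have connected.

```kusto
// Sketch: count agent heartbeats per computer over the last day.
// Heartbeat is queried like a relational table with columns and rows.
Heartbeat
| where TimeGenerated > ago(1d)
| summarize HeartbeatCount = count() by Computer
| order by HeartbeatCount desc
```

Each connected machine shows up as rows in the table, and you filter and aggregate over the columns just as you would in a database.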
The Log Analytics workspace is a big data exploration and analytics engine. If you're familiar at all with Hadoop and Apache Spark, it's kind of like that, closer to Apache Spark than Hadoop, actually, because in Log Analytics you don't necessarily have to submit your reporting queries and then sit there and wait, or come back the next day to check on them. You can run the queries and see the results within seconds; there's a massively scalable cluster running under the hood.

Lastly, how do we query our Log Analytics workspace? Well, we use a unified query language Microsoft developed called Kusto, K-u-s-t-o. That's a play on Jacques Cousteau, the late, great oceanographer and explorer, because the idea here is that KQL, the Kusto Query Language, lets us explore the vast log data we've aggregated in our hybrid cloud environment. The syntax will feel familiar if you've used the Splunk search language or AWS log search. There are some elements of Structured Query Language, or SQL, and some elements of PowerShell and Bash in there as well. It's a pretty cool language, and very powerful indeed. We'll see more of it in the upcoming demo.
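Ahead of the demo, here is a small sketch showing the blended syntax just described: SQL-like operators such as where and top, chained together with pipes the way PowerShell or Bash pipelines are. It assumes the standard Event table, which Windows machines reporting to a workspace typically populate.

```kusto
// Sketch: find the top sources of error events in the last hour.
// Each pipe stage transforms the result of the previous one,
// in the style of a PowerShell or Bash pipeline.
Event
| where TimeGenerated > ago(1h)
| where EventLevelName == "Error"
| summarize ErrorCount = count() by Source
| top 10 by ErrorCount
```

Reading a query top to bottom like this, one stage at a time, is usually the easiest way to pick up KQL quickly.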