Azure Log Analytics is an essential tool in the Azure ecosystem, designed to help you gather, analyze, and act on data from both cloud and on-premises environments. At the heart of Azure Log Analytics are tables—where all collected data is stored. Knowing how these tables work, their structure, and how to use them effectively can give you deep insights into your infrastructure and applications.
In this blog, we’ll dive into what Azure Log Analytics tables are, the different types, how they’re structured, and how you can make the most of them.
1. What Is Azure Log Analytics?
Azure Log Analytics is a part of Azure Monitor that allows you to collect and analyze telemetry data from various sources. Whether the data comes from Azure resources, your own applications, or even on-premises servers, Log Analytics gathers it all in one place. This data is organized into tables within a Log Analytics workspace, making it easier to analyze and act on.
2. What Are Log Analytics Tables?
Think of a Log Analytics table like a database table, where data is stored in rows and columns. Each table in Log Analytics is dedicated to a specific type of data, like logs, metrics, or traces. For instance, the `Perf` table is where performance data is kept, and the `Heartbeat` table tracks the health status of virtual machines.
Each table has a schema, which is just a fancy way of saying that it has a predefined structure. This structure makes it easy to run queries and analyze the data.
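If you want to see this in action, run a quick query against one of these tables in your workspace. The sketch below assumes you have virtual machines reporting in, so the `Heartbeat` table has data:

```kusto
// Peek at the five most recent heartbeat records
Heartbeat
| order by TimeGenerated desc
| take 5
```

Each row that comes back is one record, and each field is a column defined by the table’s schema.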
3. Different Types of Log Analytics Tables
Azure Log Analytics tables come in various types, each suited for different kinds of data:
- Basic Logs: These tables use a lower-cost plan meant for high-volume, verbose logs that you only query occasionally; in exchange, they support a reduced set of query features. Tables such as `Syslog` and `Event` can be switched to this plan (the `Usage` query after this list helps you spot which tables ingest enough data to make that worthwhile).
- Analytics Logs: This is the default plan for data you want to query with the full power of KQL, including alerting and advanced analytics. Most platform tables, such as `Perf` and `AppRequests`, use it.
- Custom Logs: If you have data that doesn’t fit into any predefined table, you can create custom tables (conventionally suffixed with `_CL`) to store it. This is especially useful for applications with unique logging needs.
- Application Insights Tables: These are created by Application Insights and contain data like request rates, response times, and errors in your applications. Examples include the `AppRequests` and `AppExceptions` tables.
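Because the plan you pick affects cost, it helps to know which tables are ingesting the most data. The built-in `Usage` table tracks billable ingestion per table, so a sketch like this ranks your tables by volume over the last 30 days:

```kusto
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
// Quantity is reported in MB; convert to GB for readability
| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
| order by IngestedGB desc
```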
4. The Structure of Log Analytics Tables
Each table in Log Analytics has a schema—essentially, a set of rules about what kind of data it can store and how that data is organized.
- Columns: Each column in a table holds a specific type of data, such as `string`, `datetime`, or `int`. For example, the `TimeGenerated` column stores the timestamp of when the data was collected.
- Records: A record is just a row in the table, with each row containing values for all the columns in the schema.
- Metadata: This includes information like the table’s name, size, and creation date, which helps you manage and organize your data.
Example Table Schema
Here’s what a typical `Perf` table might look like:
| Column Name | Data Type | Description |
|---|---|---|
| TimeGenerated | datetime | When the data was collected |
| Computer | string | The machine that generated the data |
| CounterName | string | The name of the performance counter |
| CounterValue | real | The value of the performance counter |
| InstanceName | string | The instance of the performance counter |
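You don’t have to take this on faith: KQL’s `getschema` operator lists every column and its type for any table in your workspace, so you can check the real `Perf` schema yourself:

```kusto
// List the column names and types of the Perf table
Perf
| getschema
| project ColumnName, ColumnType
```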
5. How to Query Log Analytics Tables
To get the data you need from these tables, you use Kusto Query Language (KQL). KQL is a powerful query language designed to handle large datasets efficiently.
Basic KQL Query
Here’s a simple query to fetch recent CPU data from the `Perf` table. Note that in the `Perf` table the counter path is split across columns: `ObjectName` holds `Processor`, `InstanceName` holds `_Total`, and `CounterName` holds just `% Processor Time`:

```kusto
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| order by TimeGenerated desc
| take 10
```

This query pulls the 10 most recent CPU-usage records, showing the newest first.
Advanced KQL Query
For more complex needs, KQL allows you to join tables and perform calculations:
```kusto
// CPU counter: rename the value column before joining
let cpuData = Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| project TimeGenerated, Computer, CpuPercent = CounterValue;
// Available-memory counter, renamed the same way
let memoryData = Perf
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| project TimeGenerated, Computer, AvailableMBytes = CounterValue;
cpuData
| join kind=inner (memoryData) on Computer, TimeGenerated
| project TimeGenerated, Computer, CpuPercent, AvailableMBytes
| order by TimeGenerated desc
```

This query combines CPU and memory data from the `Perf` table, matching records by `Computer` and `TimeGenerated`. Renaming `CounterValue` in each branch keeps the joined output unambiguous; otherwise the right-hand copy would come back as `CounterValue1`.
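One caveat: an inner join on raw `TimeGenerated` only matches rows where both counters were sampled at exactly the same instant, which is rare in practice. A common workaround, sketched below rather than prescribed, is to bucket timestamps with `bin()` and average within each bucket before joining:

```kusto
let cpuData = Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
// Average CPU per computer over 5-minute buckets
| summarize CpuPercent = avg(CounterValue) by bin(TimeGenerated, 5m), Computer;
let memoryData = Perf
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| summarize AvailableMBytes = avg(CounterValue) by bin(TimeGenerated, 5m), Computer;
cpuData
| join kind=inner (memoryData) on Computer, TimeGenerated
| order by TimeGenerated desc
```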
6. Real-World Uses for Log Analytics Tables
Log Analytics tables can be used in a variety of scenarios:
- Monitoring Performance: Use tables like `Perf` and `Heartbeat` to keep tabs on the performance and health of your virtual machines.
- Security Auditing: Analyze logs in tables like `SecurityEvent` to detect security threats or unauthorized access (see the failed-logon sketch after this list).
- Application Monitoring: Use Application Insights tables to monitor application performance, spot failures, and understand user behavior.
- Compliance Reporting: Generate reports for compliance audits by querying the relevant logs from your tables.
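To make the security-auditing scenario concrete, here’s a minimal sketch that surfaces accounts with repeated failed sign-ins. It assumes Windows security events are being collected into the `SecurityEvent` table (event ID 4625 is a failed logon), and the threshold of 5 is an arbitrary starting point:

```kusto
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625  // 4625 = an account failed to log on
| summarize FailedLogons = count() by TargetAccount, Computer
| where FailedLogons > 5  // arbitrary threshold; tune for your environment
| order by FailedLogons desc
```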
7. Best Practices for Managing Your Tables
To make the most out of Azure Log Analytics, keep these tips in mind:
- Optimize Your Queries: Filter as early as possible, ideally on `TimeGenerated` first, so the engine scans less data. Avoid pulling more rows than you need by using `where` clauses effectively (see the sketch after this list).
- Set Retention Policies: Manage storage costs by setting appropriate data retention periods. You can keep your data for as long as needed, but be mindful of the costs.
- Leverage Custom Logs: If you have unique logging requirements, create custom tables to ensure you’re capturing all necessary data.
- Review and Clean Up Regularly: Over time, your tables can accumulate a lot of data. Regularly review what you’re ingesting and adjust retention accordingly; in Log Analytics, data lifetime is managed through retention settings rather than by deleting individual rows.
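Here’s what the first tip looks like in practice: a time filter up front prunes most of the data, and aggregating before returning results keeps the output small:

```kusto
// Time filter first: everything outside the window is pruned early
Perf
| where TimeGenerated > ago(1h)
| where ObjectName == "Processor" and CounterName == "% Processor Time"
// Aggregate before returning so only summary rows come back
| summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| order by TimeGenerated desc
```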
Azure Log Analytics tables are the backbone of data collection and analysis in Azure Monitor. By understanding how these tables work, you can gain valuable insights into your infrastructure and applications. Whether you’re monitoring performance, enhancing security, or ensuring compliance, these tables provide the tools you need to make informed decisions and keep your systems running efficiently.
By following best practices and mastering KQL, you can turn raw data into actionable insights, helping your organization stay resilient and efficient.