
Analyzing Linux Logs

There’s a great deal of information waiting for you within your logs, although it’s not always as easy to extract as you’d like. In this section, we will cover some examples of basic analysis you can do with your logs right away (just search what’s there). We’ll also cover more advanced analysis that may take some upfront effort to set up properly but will save you time on the back end. Examples of advanced analysis you can do on parsed data include generating summary counts, filtering on field values, and more.

First, we’ll show you how to do this yourself on the command line using several different tools. Then we’ll show you how a log management tool can automate much of the grunt work and make this much more streamlined.

Searching with Grep

Searching for text is the most basic way to find what you’re looking for. The most common tool for searching text is grep. This command line tool, available on most Linux distributions, allows you to search your logs using regular expressions. A regular expression is a pattern written in a special language that can identify matching text. The simplest pattern is the string you’re searching for, surrounded by quotes.

Regular Expressions

Here’s an example to find authentication logs for “user hoover” on an Ubuntu system:
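(A minimal sketch; /var/log/auth.log is the standard Ubuntu authentication log, and the matching lines will depend on your system.)

    $ grep “user hoover” /var/log/auth.log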

It can be hard to construct regular expressions that are accurate. For example, if we searched for a number like the port “4792”, it could also match timestamps, URLs, and other undesired data. In the example below for Ubuntu, it matched an Apache log that we didn’t want.
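A sketch of what that can look like (these log lines are illustrative, not from a real system): the first match is the SSH port we wanted, while the second is an Apache-style request line that happens to contain the same digits.

    $ grep "4792" /var/log/auth.log
    Mar 28 00:26:44 ip-172-31-11-241 sshd[28631]: Accepted publickey for hoover from 216.19.2.169 port 4792 ssh2
    Mar 28 00:27:14 ip-172-31-11-241 GET /scripts/samples/search?q=4792 HTTP/1.0" 404 545 "-" "-"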

Surround Search

Another useful tip is that you can do surround search with grep. This will show you what happened a few lines before or after a match, which can help you debug what led up to a particular error or problem. The -B flag gives you lines before the match, and -A gives you lines after. For example, we can see that when someone failed to log in as an admin, they also failed the reverse mapping, which means they might not have a valid domain name. This is very suspicious!
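A sketch of such a search (the pattern and line counts are only examples):

    # show 3 lines of context before and 2 lines after each match
    $ grep -B 3 -A 2 'Invalid user' /var/log/auth.log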

Tail

You can also pair grep with tail to get the last few lines of a file, or to follow the logs and print them in real time. This is useful if you are making interactive changes like starting a server or testing a code change.
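For example (assuming the standard Ubuntu auth log path):

    # follow the auth log and print new "Invalid user" events as they arrive
    $ tail -f /var/log/auth.log | grep 'Invalid user'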

A full introduction to grep and regular expressions is outside the scope of this guide, but Ryan’s Tutorials include more in-depth information.

Log management systems have higher performance and more powerful searching abilities. They often index their data and parallelize queries so you can quickly search gigabytes or terabytes of logs in seconds. In contrast, this would take minutes or in extreme cases hours with grep. Log management systems also use query languages like Lucene which offer an easier syntax for searching on numbers, fields, and more.

Parsing with Cut, AWK, and Grok

Command Line Tools

Linux offers several command line tools for text parsing and analysis. They are great if you want to quickly parse a small amount of data but can take a long time to process large volumes of data.

Cut

The cut command allows you to parse fields from delimited logs. Delimiters are characters like equal signs or commas that break up fields or key-value pairs.

Let’s say we want to parse the user from this log:
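(An illustrative line with hypothetical host and user names, in the usual pam_unix format.)

    Apr  8 21:54:05 ip-172-31-11-241 su[1024]: pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost=  user=root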

We can use the cut command like this to extract the eighth field when the line is split on equal signs. This example is on an Ubuntu system:
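(A sketch; the grep narrows the input to authentication-failure lines first, and the output shown corresponds to the illustrative line above.)

    $ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8
    root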

AWK

Alternatively, you can use awk, which offers more powerful features to parse out fields. It offers a scripting language so you can filter out nearly everything that’s not relevant.

For example, let’s say we have the following log line on an Ubuntu system, and we want to extract the username that failed to log in:
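(An illustrative line with a hypothetical host and username, in the format sshd writes to the Ubuntu auth log.)

    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user guest [preauth]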

Here’s how you can use the awk command. First, use the regular expression /sshd.*invalid user/ to match the sshd invalid user lines. Then print the ninth field (using the default space delimiter) with { print $9 }. This outputs the usernames.
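Putting that together (the output corresponds to the illustrative line above; a real log would typically yield a longer list of usernames):

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
    guest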

You can read more about how to use regular expressions and print fields in the Awk User’s Guide.

Log Management Systems

Log management systems make parsing easier and enable users to quickly analyze large collections of log files. They can automatically parse standard log formats like common Linux logs or web server logs. This saves a lot of time because you don’t have to think about writing your own parsing logic when troubleshooting a system problem.

Here you can see an example log message from sshd with the remoteHost and user fields parsed out. This is a screenshot from Loggly, a cloud-based log management service.

[Screenshot: Parsed SSH Log]

You can also do custom parsing for non-standard formats. A common tool to use is Grok which uses a library of common regular expressions to parse raw text into structured JSON. Here is an example configuration for Grok to parse kernel log files inside Logstash:
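(The original configuration isn’t reproduced here; this is a minimal sketch using standard grok patterns for a syslog-style kernel line, with example field names such as kernel_message.)

    filter {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} kernel: %{GREEDYDATA:kernel_message}" }
      }
    }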

And here is what the parsed output looks like from Grok:

[Screenshot: Parsed Kernel Log]

Filtering with Rsyslog and AWK

Filtering allows you to search on a specific field value instead of doing a full text search. This makes your log analysis more accurate because it will ignore undesired matches from other parts of the log message. In order to search on a field value, you need to parse your logs first or at least have a way of searching based on the event structure.

How to Filter on One App

Often, you want to see the logs from just one application. This is easy if your application always logs to a single file. It’s more complicated if you need to filter one application among many in an aggregated or centralized log. Here are several ways to do this:

1. Use the rsyslog daemon to parse and filter logs. This example writes logs from the sshd application to a file named sshd-messages, then discards the event so it’s not repeated elsewhere (see the snippet below this list). You can try it by adding it to your rsyslog.conf file.

2. Use command line tools like awk to extract the values of a particular field, such as the sshd username (see the one-liner below this list). This example is from an Ubuntu system.

3. Use a log management system that automatically parses your logs, then click to filter on the desired application name. Here is a screenshot showing the syslog fields in a log management service called Loggly. We are filtering on the appName “sshd” as indicated by the Venn diagram icon.

[Screenshot: Filter ssh logs]
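Here are sketches for the first two approaches. For rsyslog, a property-based filter writes sshd events to their own file and then discards them (the file path is just an example; older configurations use &~ instead of & stop):

    :programname, isequal, "sshd"    /var/log/sshd-messages
    # discard the message so it is not also written elsewhere
    & stop

The awk one-liner for the second approach is the same pattern shown in the parsing section above:

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log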

How to Filter on Errors

One of the most common things people want to see in their logs is errors. Unfortunately, the default syslog configuration doesn’t output the severity of errors directly, making it difficult to filter on them.

There are two ways you can solve this problem. First, you can modify your rsyslog configuration to output the severity in the log file to make it easier to read and search. In your rsyslog configuration you can add a template with pri-text such as the following:
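(A sketch in rsyslog’s legacy template syntax; the template name is arbitrary, the selector and file should be adapted to your setup, and the exact pri-text formatting can vary between rsyslog versions.)

    $template TextWithPri,"<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%\n"
    auth,authpriv.*    /var/log/auth.log;TextWithPri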

This example gives you output in the following format. You can see that the severity in this message is err.
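(An illustrative line with a hypothetical host and process.)

    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure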

You can use awk or grep to search for just the error messages. In this example for Ubuntu, we’re including some surrounding syntax like the “.” and the “>”, which match only this field.
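For example:

    $ grep '.err>' /var/log/auth.log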

Your second option is to use a log management system. Good log management systems automatically parse syslog messages and extract the severity field. They also allow you to filter on log messages of a certain severity with a single click.

Here is a screenshot from Loggly showing the syslog fields with the error severity highlighted to show we are filtering for errors:

[Screenshot: Filter ssh errors]
