Centralizing Java Logs

In an enterprise environment, you’ll likely have logs stored on different systems, requiring you to log in to multiple servers. Those logs could also have been created using different Appenders and Layouts. This fragmented approach to logging can make it difficult to retrieve data to monitor, troubleshoot, or maintain your application.

Centralized logging resolves these issues by unifying the format and storage of your log data. With centralization, log data is sent from your application to a service where it’s automatically parsed and imported into a database. Centralized logging systems aren’t typically limited to specific log formats or applications. They can also support logs created by scripts, services, and system commands. Not only can centralized logging systems consolidate log data from various sources, but they can also process and present data through an accessible interface.

This section covers ways you can implement centralized logging in your Java applications. We’ll focus on logging from a framework such as Log4j.

Benefits of Centralizing Java Logs

There are three key benefits to managing logs through a centralized logging system.

  1. Log data is stored in a single location. Instead of having to retrieve logs from multiple systems, you can access your logs from a single interface.
  2. Log data is automatically parsed. Logs come in hundreds of formats, including plain text, JSON, and XML. Logging systems can automatically detect the type of log being read and convert it into a standard format.
  3. Log data is searchable, indexable, and exportable. Log entries are broken down into individual fields, which can be searched and filtered. Logs can also be archived, exported, or redistributed without touching the original log file.

With a centralized logging system, you can have multiple applications on multiple systems storing logs in a single location, saving you time and effort.

Popular Tools to Centralize Logs

There are many popular tools to centralize logs, and your choice of which to use depends on your needs.

Java Logging Frameworks

If you can modify your Java code, you can use a logging framework such as Log4j or Logback. You can send logs to a logging service located on the server itself (syslog) or to a log management service (see below).
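For example, a class instrumented with Log4j 2 only needs a Logger; where the events end up (a local file, syslog, or a remote log management service) is determined entirely by the Appenders in your configuration. This is a minimal sketch, and the class and field names are placeholders:

package com.example;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class PaymentService {
  // One Logger per class is the usual convention
  private static final Logger logger = LogManager.getLogger(PaymentService.class);

  public void charge(String accountId, long amountCents) {
    logger.info("Charging account {} for {} cents", accountId, amountCents);
    try {
      // ... business logic ...
    } catch (Exception e) {
      // The Throwable is passed last so the Appender can capture the stack trace
      logger.error("Charge failed for account {}", accountId, e);
    }
  }
}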

Server Logging Daemons

Most servers run a local logging daemon to collect logs generated by the server. You can also write application logs to these services. These daemons commonly write logs to a file on the server, but they can also be configured to forward logs to a log management solution. Popular daemons include rsyslog, syslog-ng, and NXLog.

On-Premises Log Management Software

Some centralized logging software can be self-hosted and deployed onto your infrastructure. These can range from simple log collectors to complete log management solutions. Examples include Splunk®, Elastic Stack®, and Graylog®.

Cloud-Based Log Management Services

Cloud-based logging services provide the benefits of a log management solution without the burden of running or maintaining your services. Log events are sent to a third-party server, where they are processed, indexed, and stored. You can then access your logs using a web browser or app. Examples include Splunk Cloud®, Loggly®, Sumo Logic®, Papertrail®, and Logentries®.

Output Methods

Once you’ve chosen a logging system, the next step is to ensure your logs are being delivered. Different Appenders provide different methods for transmitting logs from your application to your logging system.

File

In addition to the console, another common log destination is a file. With file output, applications write events to a location on a local disk. The main advantage of files compared to the console is that log events persist after the application exits or the console closes. Assuming the log format is plain text, log files can be easily opened in a text editor or console for troubleshooting application issues. Some drawbacks include disk I/O latency, file permission issues, and disk space consumption. To prevent exhausting available disk space, utilities such as logrotate can split a single log file into multiple files once the file reaches a certain size or according to a regular schedule.
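As a minimal sketch of file output, the example below uses the JDK's built-in java.util.logging (rather than Log4j or Logback) to append events to a plain text file; the file name and logger name are assumptions:

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class FileLoggingExample {
  public static void main(String[] args) throws IOException {
    Logger logger = Logger.getLogger("com.example.app");

    // Append to app.log instead of overwriting it on each run
    FileHandler fileHandler = new FileHandler("app.log", true);
    fileHandler.setFormatter(new SimpleFormatter()); // plain-text, human-readable entries
    logger.addHandler(fileHandler);

    logger.info("Application started");
    logger.warning("Disk space running low");
  }
}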

One major disadvantage of file logging is that it can be difficult to maintain log events that span multiple lines, such as stack traces. They are easy to read in a text editor, but a program reading or analyzing the logs will often consider each line a separate event. One alternative is to use a structured Layout designed to store events on a single line. For example, with Log4j and Logback, you can store JSON and XML events on a single line by setting the compact attribute to true. Another solution is to monitor the log file using a program that can recognize indented stack traces, which is explained in more detail in the Centralizing Multiline Stack Traces section.

Syslog

Many logging frameworks can also send log events to a syslog daemon. The syslog daemon is widely used in Unix-based operating systems to store logs from applications, system processes, and devices. It can also read logs from files and remote systems. A main advantage of using a syslog daemon is that it’s a separate process from your application, so it can transmit logs asynchronously without affecting your application’s performance. When transmitting over a network, syslog daemons use internal queues to buffer log events in case of network interruptions. When sending logs from your local syslog daemon to another server, it’s important to pick the right transport layer protocol: either UDP or TCP. For message reliability, TCP is the recommended transport protocol, since UDP can drop network packets or deliver them out of order.

One disadvantage of syslog is that it doesn’t support multiline events. You’ll need to structure your logs so each event is stored on a single line, or import them using a module that automatically converts them.
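To make the transport concrete, here’s a simplified, hand-rolled sketch of sending a single syslog-style message to a local daemon over UDP. The port, hostname, and message format are assumptions; in practice you’d use your framework’s SyslogAppender, and TCP where reliability matters:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SyslogUdpSketch {
  public static void main(String[] args) throws Exception {
    // Priority 14 = facility USER (1) * 8 + severity INFO (6)
    String message = "<14>myhost myapp: user login succeeded";
    byte[] payload = message.getBytes(StandardCharsets.UTF_8);

    try (DatagramSocket socket = new DatagramSocket()) {
      DatagramPacket packet = new DatagramPacket(
          payload, payload.length, InetAddress.getByName("localhost"), 514);
      socket.send(packet); // fire-and-forget: UDP offers no delivery guarantee
    }
  }
}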

HTTP/S

Some logging frameworks support the transmission of log data using HTTP. HTTP and HTTPS are the application layer protocols that drive the web. HTTP transmits data in an unencrypted form, while HTTPS adds encryption and validation between the sender and receiver. A benefit of HTTP/S transmission is that it allows for multiline events and provides built-in security when using HTTPS.

Log4j provides an HTTP Appender, and there are HTTP appenders available for Logback. Some log management services like SolarWinds® Loggly® provide HTTP/S endpoints designed to ingest events from these Appenders.
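As a rough illustration of what an HTTP/S Appender does under the hood, this sketch posts one JSON-formatted event to a hypothetical HTTPS endpoint. The URL and payload fields are placeholders, not a real ingestion API:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HttpLogShipper {
  public static void main(String[] args) throws Exception {
    // Placeholder endpoint; a real setup would use the URL supplied by your log management service
    URL endpoint = new URL("https://logs.example.com/ingest");
    String event = "{\"level\":\"ERROR\",\"message\":\"Payment failed\",\"thread\":\"main\"}";

    HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);

    try (OutputStream out = conn.getOutputStream()) {
      out.write(event.getBytes(StandardCharsets.UTF_8)); // the body can contain new lines, unlike syslog
    }

    int status = conn.getResponseCode(); // blocks until the server acknowledges the request
    System.out.println("Log endpoint responded with HTTP " + status);
    conn.disconnect();
  }
}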

Asynchronous Versus Synchronous

Logging calls are typically synchronous, so the program won’t continue executing until the log event is recorded and acknowledged. This can result in noticeable overhead, especially when the application logs frequently or when individual log calls are expensive. For example, a network outage could cause an HTTP Appender to hang the application until the Appender times out. In some environments, such as Android, calls like this are strictly prohibited, and the operating system will throw an exception if an application attempts to perform any network operations on the main thread.

Alternatively, applications can use asynchronous logging calls. In asynchronous logging, logging calls are queued on a separate thread, allowing the main thread to continue running. This mitigates latency and offers a potentially higher throughput of logging events, especially during bursts. One disadvantage of asynchronous logging is the increased complexity of error handling. Instead of checking the result of the logging call in the next line, the caller must handle the error another way, for example, by providing a callback that retries the request. Another disadvantage is events might be lost before the program records them, which could be a deal breaker for auditing applications. For example, your application could crash before an asynchronous call finishes recording data about the error, resulting in the loss of valuable troubleshooting data.

One technique to alleviate this problem is to use synchronous Loggers for ERROR-level events and asynchronous Loggers for everything else. Additionally, you might want to think about how your application will recover when a network link goes down. Will your threads time out eventually, or will you insert logs into a queue and send them when the link resumes?
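The sketch below illustrates the general idea behind asynchronous Appenders: callers push events onto a bounded in-memory queue and a background thread drains it, so slow I/O never blocks the calling thread. This is a conceptual example, not how Logback’s AsyncAppender is implemented internally:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncLogQueue {
  private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

  public AsyncLogQueue() {
    Thread worker = new Thread(() -> {
      try {
        while (true) {
          String event = queue.take(); // wait for the next event
          deliver(event);              // slow I/O happens off the caller's thread
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // worker shutdown
      }
    }, "log-shipper");
    worker.setDaemon(true);
    worker.start();
  }

  public void log(String event) {
    // offer() never blocks; if the queue is full the event is dropped,
    // which is the classic trade-off of asynchronous logging
    if (!queue.offer(event)) {
      System.err.println("Log queue full, dropping event");
    }
  }

  private void deliver(String event) {
    // Placeholder for writing to a file, syslog daemon, or HTTP/S endpoint
    System.out.println(event);
  }
}

A full queue forces exactly the choice described above: drop the event, block the caller, or buffer it elsewhere until the destination recovers.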

In this Logback example, all log events with INFO severity or higher are logged asynchronously to a file, and all ERROR events from the com.example Logger are synchronously sent to syslog.

<configuration>
  <!-- Writes events to a plain text file -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/path/to/foo.txt</file>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Wraps the file Appender so events are written asynchronously -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
  </appender>

  <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>myhost</syslogHost>
    <facility>USER</facility>
    <suffixPattern>[%thread] %logger %msg</suffixPattern>
  </appender>

  <!-- ERROR events from com.example are sent synchronously to syslog -->
  <logger name="com.example" level="ERROR">
    <appender-ref ref="SYSLOG" />
  </logger>

  <root level="INFO">
    <appender-ref ref="ASYNC" />
  </root>
</configuration>

Centralizing Multiline Stack Traces

Multiline stack traces are often more complicated to process than regular log entries. Stack traces vary in length and have multiple sections, which can make them difficult to parse through pattern matching. The parser needs to know where the stack trace begins, where it ends, and how each line relates to the event.

In some cases, each line of the stack trace is treated as a separate event. This is a common problem in plain text logs, which can have a wide variety of layouts and patterns. For more information, see the Parsing Multiline Stack Traces section.
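As a simplified illustration of the problem (and of one way parsers work around it), the sketch below groups indented continuation lines, such as "at ..." stack frames, with the preceding log line so each stack trace is read as a single event. Treating leading whitespace as a continuation marker is an assumption that happens to match the PatternLayout output shown later in this section:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class MultilineGrouper {
  public static List<String> readEvents(String path) throws IOException {
    List<String> events = new ArrayList<>();
    StringBuilder current = new StringBuilder();

    try (BufferedReader reader = Files.newBufferedReader(Paths.get(path))) {
      String line;
      while ((line = reader.readLine()) != null) {
        boolean continuation = line.startsWith(" ") || line.startsWith("\t");
        if (!continuation && current.length() > 0) {
          events.add(current.toString()); // the previous event is complete
          current.setLength(0);
        }
        if (current.length() > 0) {
          current.append('\n');
        }
        current.append(line);
      }
    }
    if (current.length() > 0) {
      events.add(current.toString());
    }
    return events;
  }
}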

HTTP/S

HTTP/S natively supports new lines, so your stack traces can be sent without modification. You’ll have to choose an Appender that supports HTTP/S as a transport protocol. One disadvantage is that the HTTP/S protocol is heavier than syslog because it includes more headers and requires acknowledgment. Also, you’ll need an HTTP/S endpoint or collector ready to receive the logs. Some cloud-based services like SolarWinds Loggly provide this service.

Syslog Protocol

Unfortunately, the syslog protocol was written to support a single line per event. By default, Logback and Log4j’s SyslogAppenders write each line of a stack trace on its own line, so each line is interpreted as a separate event.

One way to overcome this is by using an Appender that writes a UDP or TCP packet with the new lines still in it. Rsyslog will recognize the message and convert it to a single-line event by replacing the newline and tab characters with the octal codes #012 and #011. You can ignore these octal codes or convert them back to new lines in your log management solution.

If you are using file monitoring, you can configure your syslog daemon to recognize multiline events. You can configure rsyslog’s imfile with paragraph read mode. This treats an empty line between each stack trace as an event separator.

For example, the output below comes from a program that generates exceptions while trying to load two missing files. Log4j’s PatternLayout includes the %xEx conversion pattern. Adding it to the pattern lets us separate each log entry with an empty line.

<PatternLayout pattern="%d{HH:mm:ss.SSSS} [%t] %-5level %logger{36} - %m%xEx%n"/>

09:53:14.0505 [main] ERROR DemoClass - An exception occurred: java.io.FileNotFoundException: myFile (No such file or directory)
  at java.io.FileInputStream.open(Native Method) ~[?:1.7.0_79]
  at java.io.FileInputStream.<init>(FileInputStream.java:146) ~[?:1.7.0_79]
  at java.io.FileInputStream.<init>(FileInputStream.java:101) ~[?:1.7.0_79]
  at java.io.FileReader.<init>(FileReader.java:58) ~[?:1.7.0_79]
  at DemoClass.openFile(DemoClass.java:37) [my-class-1.0-SNAPSHOT-jar-with-dependencies.jar:?]
  at DemoClass.main(DemoClass.java:31) [my-class-1.0-SNAPSHOT-jar-with-dependencies.jar:?]

09:53:14.0518 [main] ERROR DemoClass - An exception occurred: java.io.FileNotFoundException: tmpFile (No such file or directory)
  at java.io.FileInputStream.open(Native Method) ~[?:1.7.0_79]
  at java.io.FileInputStream.<init>(FileInputStream.java:146) ~[?:1.7.0_79]
  at java.io.FileInputStream.<init>(FileInputStream.java:101) ~[?:1.7.0_79]
  at java.io.FileReader.<init>(FileReader.java:58) ~[?:1.7.0_79]
  at DemoClass.openFile(DemoClass.java:37) [my-class-1.0-SNAPSHOT-jar-with-dependencies.jar:?]
  at DemoClass.main(DemoClass.java:32) [my-class-1.0-SNAPSHOT-jar-with-dependencies.jar:?]

These log entries are stored in the /var/log/myLog.log file. The following imfile configuration scans myLog.log every 10 seconds for changes and forwards each event to rsyslog. Note that this example requires rsyslog version 8 or later.

module(load="imfile" PollingInterval="10")

# File 1
input(type="imfile"
     File="/var/log/myLog.log"
     Tag="debuggingProduction"
     readMode=1)

module(load="imfile" PollingInterval="10") loads the imfile module and sets a polling period to 10 seconds. input() declares a file to monitor using imfile. The File parameter specifies the name of the file, while Tag applies a unique tag to each event originating from this file. Setting readMode to 1 tells imfile to read in paragraph mode, which treats blank lines as event separators.

After adding this to your rsyslog configuration file (/etc/rsyslog.conf for most Linux distributions), restart the rsyslog service. Logs from the /var/log/myLog.log file will start to appear in the syslog stream as complete events with octal characters.

Logging from Android

Android provides a built-in logging API through the android.util.Log class. Android also supports several logging frameworks, including Logger, Timber, and Logback.

Android’s built-in logging API prints log data to LogCat, which provides a buffer that stores log data for access over ADB and Android Profiler. android.util.Log is useful for development and debugging but is often less suited for collecting and centralizing log data when the app is used in a live environment. Other logging frameworks provide ways to store and submit log data without having to use a separate debugging tool or remote shell.
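For reference, a basic android.util.Log call looks like this. The activity, tag, and helper method are placeholders, and these messages go only to LogCat:

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class CheckoutActivity extends Activity {
  private static final String TAG = "CheckoutActivity"; // conventional per-class tag

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    Log.d(TAG, "Activity created"); // debug output, visible in LogCat during development
    try {
      loadCheckoutData();
    } catch (Exception e) {
      Log.e(TAG, "Failed to load checkout data", e); // error with attached stack trace
    }
  }

  private void loadCheckoutData() {
    // Placeholder for work that might throw
  }
}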

Most logging frameworks on Android will use HTTP/S as a transport protocol because there is no local syslog daemon to forward logs. App developers can use cloud-based logging solutions to collect and store logs.

Logging with Logback

There are two ways to use Logback on Android: the slf4j-android library and logback-android. slf4j-android is an SLF4J binding that routes all SLF4J log requests to Android’s Log class. It’s meant to simplify the use of existing libraries that use SLF4J on Android. logback-android also uses the SLF4J API but leverages Logback’s Appenders and configuration as well, letting you send log events to your destination of choice.

For example, we can add a Logback Appender to our app by importing from org.slf4j and creating a new Logger:

package com.example;
 
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
 
import android.os.Bundle;
import android.app.Activity;
 
public class MainActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
     super.onCreate(savedInstanceState);
     setContentView(R.layout.activity_main);
 
     // SLF4J
     Logger log = LoggerFactory.getLogger(MainActivity.class);
     log.info("hello world");
  }
}

The logback-android library checks for a configuration file stored in assets/logback.xml.

<configuration>
  <appender name="file" class="ch.qos.logback.core.FileAppender">
    <file>/data/data/com.example/files/log/foo.log</file>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="file" />
  </root>
</configuration>

You can also configure logback-android in your app’s AndroidManifest.xml file. For more details, see the logback-android project page.

Logging with Timber

Timber is a logging framework that extends Android’s Log class, adding a lightweight and extensible API. Timber uses static methods to log data, and logging behavior is configured through instances of Tree objects. Trees are initialized at the start of the program and contain the code necessary to forward log entries to their proper destination. In many ways, Trees are similar to Appenders.

This example shows how to log a few simple actions using Timber. We’re using a LogglyTree provided by the timber-loggly library to send our logs to SolarWinds Loggly. The resulting logs are automatically formatted as JSON.

// ExampleApp.java
import android.app.Application;
import com.github.tony19.timber.loggly.LogglyTree;
import timber.log.Timber;
 
public class ExampleApp extends Application {
  @Override
  public void onCreate() {
    super.onCreate();
 
    final String LOGGLY_TOKEN = "<your-loggly-token>"; // replace with your Loggly customer token
    Timber.plant(new LogglyTree(LOGGLY_TOKEN));
  }
}
 
// MainActivity.java
import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.Toast;
import timber.log.Timber;
 
public class MainActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
 
    Timber.i("Activity created!");
    Toast.makeText(this, "logged message", Toast.LENGTH_SHORT).show();
  }
 
  @Override
  public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.menu_main, menu);
    return true;
  }
 
  @Override
  public boolean onOptionsItemSelected(MenuItem item) {
    int id = item.getItemId();
 
    Timber.d("option selected: %d", id);
    Toast.makeText(this, "logged message", Toast.LENGTH_SHORT).show();
 
    if (id == R.id.action_settings) {
      return true;
    }
 
    return super.onOptionsItemSelected(item);
  }
}

A “logged message” toast appears at startup and again when the “Settings” menu option is selected. The toast indicates that a message was sent to Loggly. Note that the log event is sent as JSON in the following form.

{
  "level": "<i>[LEVEL]</i>",
  "message": "<i>[BODY]</i>"
}

This means the log event triggered before the toast (Timber.i("Activity created!");) will appear in our logs like this.

{
  "level": "INFO",
  "message": "Activity created!"
}

Logging in Android via Timber. © 2022 Google LLC. All rights reserved.

The image below shows Loggly received the messages from the example app.

Viewing Android logs in Loggly.

Removing Logs from Release Builds

Android developers often prefer to remove some or all logging from release builds to improve overall performance and reduce package size. This can be done with the R8 compiler (the successor to ProGuard) by setting up rules that strip out the logging calls. For example, you would add the following rules to your application’s proguard-rules.pro file to remove DEBUG, VERBOSE, and INFO level logs:

-assumenosideeffects class android.util.Log {
  public static *** d(...);
  public static *** v(...);
  public static *** i(...);
}

In some cases, you may want to keep logging in release builds, in which case you’ll need to prevent R8/ProGuard from removing those calls. When using SLF4J and Logback, be sure to include these rules to retain your logging calls:

-keep class ch.qos.** { *; }
-keep class org.slf4j.** { *; }
-keepattributes *Annotation*

Logging from Tomcat

Apache® Tomcat is a popular open-source web server for hosting Java Servlets, JavaServer Pages (JSP), and other Java-based web technologies. Tomcat comes with a robust logging system named JULI (the Java Utility Logging Implementation) that allows multiple logging frameworks to work independently. JULI is a fork of the Apache Commons Logging framework and provides new features and flexibility.

JULI’s strength lies in its ability to separate your application’s logging framework from Tomcat’s logging framework. This makes it possible to use your framework of choice in your web applications—even java.util.logging—without interfering with other applications or with the Tomcat server. For a complete overview of JULI’s features, see the Tomcat logging documentation.

Some distributions of Tomcat are simplified to use a hard-coded java.util.logging, although they can typically be reconfigured to allow for different frameworks, including Log4j, Logback, and SLF4J using JULI.

Tomcat Logging Setup and Configuration

Throughout this section, you’ll see many references to “Catalina.” Catalina is the Tomcat servlet container, and it handles several tasks, including starting and stopping servlets, redirecting requests to servlets, and managing access rights. By default, Catalina’s configuration files are stored in the $CATALINA_HOME directory. If you’re running multiple instances of Tomcat, the files for each instance are stored in its $CATALINA_BASE directory.

JULI’s logging behavior is configured through a logging.properties file, which can be set on a global or per-application level. The global configuration is available at $CATALINA_BASE/conf/logging.properties. If the global configuration file is missing or unreadable, JULI defaults to the logging.properties file used by the system’s Java installation. For application-specific configurations, the configuration file is stored in the application’s WEB-INF/classes/ folder. You can also configure logging behavior in the code of your application. JULI uses the same configuration syntax as java.util.logging, with a few minor exceptions.
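As a brief example, here’s a hedged sketch of configuring java.util.logging programmatically for a single application Logger. The logger name is hypothetical, and the same settings could be expressed in a logging.properties file instead:

import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class WebAppLoggingSetup {
  public static void configure() {
    Logger appLogger = Logger.getLogger("com.example.webapp");

    // Log FINE and above for this application only
    appLogger.setLevel(Level.FINE);

    ConsoleHandler handler = new ConsoleHandler();
    handler.setLevel(Level.FINE);
    appLogger.addHandler(handler);

    // Don't also pass events up to the root logger's handlers
    appLogger.setUseParentHandlers(false);
  }
}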

For more information on configuring java.util.logging, see the Java Logging Basics section.

Default Output for Tomcat Logs

By default, Tomcat logs to $CATALINA_HOME/logs/catalina.out. All System.out and System.err logs are redirected to this file, as well as any uncaught exceptions. While Tomcat itself doesn’t perform log rotation, Ubuntu and CentOS rotate catalina.out on a weekly basis using logrotate.

The following example shows the output of an ArithmeticException logged by Tomcat. The servlet logs the exception through java.util.logging, and Tomcat’s global logging.properties file controls where the output is written.

The servlet:

package TestApplication;
 
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.IOException;
import java.util.logging.*;
 
public class Test extends HttpServlet {
  final static Logger logger = Logger.getLogger(Test.class.getName());
 
  @Override
  public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
    try {
      int i = 1 / 0;
    }
    catch (Exception ex) {
      logger.log(Level.SEVERE, "Exception: ", ex);
    }
  }
}

Navigating to the servlet’s page generates the following log entry.

May 16, 2019 11:08:05 AM TestApplication.Test doGet
SEVERE: Exception:
java.lang.ArithmeticException: / by zero
  at TestApplication.Test.doGet(Test.java:16)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
  ...
  at java.lang.Thread.run(Thread.java:745)

The output is similar to what you would expect from a standard Java application. The first few lines show the date, Logger name, method name, and log message. The rest of the entry shows the complete stack trace, including frames from Tomcat’s own components.

Using Different Frameworks with Tomcat

To use a different logging framework with Tomcat, your framework will need to be capable of redirecting logging calls to java.util.logging. It must also be able to run in an environment where different logging frameworks are present.

Note that this is only necessary for Tomcat’s own internal logging. For your web applications, you can use whichever logging framework you prefer.

Log4j

Tomcat automatically detects and uses Log4j if the log4j-api, log4j-core, and log4j-appserver jars are in the classpath during boot. You will also need a configuration file named log4j2-tomcat.{xml, json, yaml, yml, or properties} in the boot classpath. The method recommended by Log4j is to create a $CATALINA_HOME/log4j2/lib directory for your jar files and a $CATALINA_HOME/log4j2/conf directory for your configuration file. Then, add the following line to the setenv.sh file in Tomcat’s bin directory:

CLASSPATH=$CATALINA_HOME/log4j2/lib/*:$CATALINA_HOME/log4j2/conf

For details, see the Log4j Tomcat documentation.

Logback and SLF4J

The tomcat-slf4j-logback project bundles the Tomcat server, SLF4J, and Logback into a single unified package. It allows Tomcat to use SLF4J and Logback for its internal logging while enabling web applications to use their own SLF4J implementations. To install it, download the release build and copy the files into your $CATALINA_HOME directory. Note that this will overwrite your server.xml file.

For details, see the project’s GitHub page.
