
Python Logging Libraries and Frameworks

Python has many different frameworks, and each one implements logging in slightly different ways. This section explains how logging is handled by some of the more popular Python frameworks, including Django, Flask, and Twisted.

Logging Django

Django is the most popular web application framework for Python. It uses the standard Python logging module and provides a hierarchy of predefined loggers, including:

  • django, the parent logger for all Django messages. The other loggers below derive from it.
  • django.server for server logs.
  • django.request for web requests.
  • django.db for database queries.

If Django is running in debug mode, all info-level and higher messages that aren’t from django.server are sent to the console. Otherwise, all error-level and higher messages that aren’t from django.server are emailed to the administrators via the AdminEmailHandler. You can disable this behavior by overriding Django’s default logging configuration, but you should only do so if you’ve already implemented another method of monitoring logs.
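For example, here is a minimal sketch of the LOGGING setting (described in the next section) that replaces the django logger’s default handlers, and with them the AdminEmailHandler, with a console handler. This assumes you’re already monitoring your logs another way:

LOGGING = {
  'version': 1,
  'disable_existing_loggers': False,
  'handlers': {
    'console': {
      'class': 'logging.StreamHandler',
    },
  },
  'loggers': {
    # Redefining the django logger replaces its default handlers,
    # so AdminEmailHandler no longer receives error messages
    'django': {
      'handlers': ['console'],
      'level': 'INFO',
    },
  },
}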

Configuring Django’s Logging Behavior

Django stores its logging configuration in the core LOGGING setting, which uses the dictConfig() format. You can change this setting in your project’s settings.py file, and Django will apply your changes on the next start. Since Django’s internal loggers all derive from the root django logger, configuring this logger affects all logs originating from Django.

For example, the following configuration writes all output to a file named debug.log:

LOGGING = {
  'version': 1,
  'handlers': {
    'file': {
      'level': 'DEBUG',
      'class': 'logging.FileHandler',
      'filename': 'debug.log',
    },
  },
  'loggers': {
    'django': {
      'handlers': ['file'],
      'level': 'DEBUG'
    },
  },
}
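Note this configuration only captures loggers in the django hierarchy. Your own modules log under their own names, so to route them to the same file handler you would add a matching entry (for example, 'myapp') under 'loggers'. As a sketch, application code typically writes to a module-level logger like this (myapp/views.py is a hypothetical module):

# myapp/views.py -- hypothetical module; its logger is named 'myapp.views'
import logging

from django.http import HttpResponse

logger = logging.getLogger(__name__)

def index(request):
    logger.debug("Rendering index view")
    return HttpResponse("OK")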

To learn more about logging in Django, see the Django documentation.

Logging in JSON

Converting logs to JSON makes it easier for log parsing tools to extract individual data fields from your logs. However, this requires a third-party library, such as python-json-logger. This library adds a new formatter class that formats events as a JSON string. You can attach this formatter to any handler.

For example, this file handler writes each log event as a single-line JSON string:

'formatters': {
  'json': {
    'class': 'pythonjsonlogger.jsonlogger.JsonFormatter'
  }
},
'handlers': {
  'file': {
    'level': 'DEBUG',
    'class': 'logging.FileHandler',
    'filename': 'debug.log',
    'formatter': 'json'
  },
}
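By default, JsonFormatter only emits the message field. Assuming the python-json-logger version you’re using follows its documented behavior of mapping %-style format fields to JSON keys, you can add more context to each event like this (one possible sketch):

'formatters': {
  'json': {
    'class': 'pythonjsonlogger.jsonlogger.JsonFormatter',
    # Each field named in the format string becomes a key in the JSON output
    'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
  }
},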

Centralizing Django Logs

Centralization consolidates your logs into a single location, making them easier to manage and access. Centralization is detailed in the Centralizing Python Logs section of this guide. We’ll briefly cover two methods here: standalone logging and Docker logging.

Centralizing Standalone Django Application Logs

With a standalone application, you can ship logs directly from your application’s logging framework to a centralization service. For example, you can log to syslog using the SysLogHandler built into the Python logging framework. Here, we send all debug-level and higher messages to a syslog server located at syslog.example.com:

'handlers': {
  'syslog': {
    'level': 'DEBUG',
    'class': 'logging.handlers.SysLogHandler',
    'facility': 'local7',
    'address': ('syslog.example.com', 514),
  },
}
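SysLogHandler sends messages over UDP by default. If your log server only accepts TCP, one option (assuming it listens for TCP syslog on port 514) is to pass a socktype. Because settings.py is ordinary Python, you can reference the socket constant directly:

# At the top of settings.py
import socket

# ...then inside LOGGING:
'handlers': {
  'syslog': {
    'level': 'DEBUG',
    'class': 'logging.handlers.SysLogHandler',
    'facility': 'local7',
    'address': ('syslog.example.com', 514),
    # Extra keys are passed to SysLogHandler() as keyword arguments
    'socktype': socket.SOCK_STREAM,
  },
}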

Many log management services support syslog messages and some also provide custom handlers. For example, SolarWinds® Loggly® provides the Loggly Python handler, which sends logs directly to your Loggly account over HTTPS in JSON format. Here is an example of how to use this handler (make sure to replace <TOKEN> with your Loggly customer token):

'handlers': {
  'loggly': {
    'level': 'DEBUG',
    'class': 'loggly.handlers.HTTPSHandler',
    'url': 'https://logs-01.loggly.com/inputs/<TOKEN>/tag/django'
  },
},
'loggers': {
  'django': {
    'handlers': ['loggly'],
    'level': 'DEBUG'
  },
}
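The loggly.handlers.HTTPSHandler class comes from Loggly’s Python handler package, which must be installed first (we’re assuming the package is published on PyPI as loggly-python-handler):

$ pip install loggly-python-handler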

This handler configuration also applies a "django" tag to each log event. In Loggly, we can search for tag:"django" to retrieve all of our Django logs:

Searching for Django logs in SolarWinds Loggly. © 2019 SolarWinds, Inc. All rights reserved.

Docker Applications

Docker automatically collects logs printed by containers to STDOUT or STDERR and routes them to a dedicated logging driver. Other services can also access this stream. For example, the Logspout container automatically forwards messages from the logging driver to a destination of your choice, including Loggly. With Logspout, start by configuring your Django application to log to the console:

'formatters': {
  'json': {
    '()': 'pythonjsonlogger.jsonlogger.JsonFormatter'
  },
},
'handlers': {
  'console': {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'json'
  },
},
'loggers': {
  'django': {
    'handlers': ['console'],
    'level': 'DEBUG'
  },
}

Then, run the Logspout container with Loggly as your destination:

$ sudo docker run -d -e 'LOGGLY_TOKEN=<token>' --volume /var/run/docker.sock:/tmp/docker.sock iamatypeofwalrus/logspout-loggly

Logging Flask

Flask is the second most popular Python web framework. Like Django, it uses Python’s standard logging framework. While Django provides multiple loggers, Flask exposes a single logger through app.logger.

By default, app.logger logs to the stream specified by wsgi.errors in the WSGI environment (typically STDERR). You can apply your own configuration using dictConfig() before you initialize the app object.

For example, here we send all logs to the console in JSON format. This is particularly useful when running Flask in Docker, since the logging driver (or Logspout) will automatically detect and route each event:

from flask import Flask
from logging.config import dictConfig

dictConfig({
  'version': 1,
  'formatters': {
    'json': {
      '()': 'pythonjsonlogger.jsonlogger.JsonFormatter'
    }
  },
  'handlers': {
    'console': {
      'level': 'DEBUG',
      'class': 'logging.StreamHandler',
      'formatter': 'json'
    },
  },
  'root': {
    'level': 'INFO',
    'handlers': ['console']
  }
})

app = Flask(__name__)
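Once the app is created, anything written through app.logger propagates to the root logger and is emitted as JSON on the console. Here is a minimal sketch of a route that logs (the route and message are only illustrative):

@app.route('/')
def index():
    # Emitted as a single-line JSON event by the console handler above
    app.logger.info('Handling request for the index page')
    return 'OK'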

For more information, see the Flask logging documentation.

Logging Twisted

Twisted is an asynchronous event-driven networking framework. In addition to web traffic, it also supports SSH, DNS, instant messaging, and other protocols.

On the surface, Twisted’s logging framework is similar to other frameworks: loggers generate events and send them to observers, which receive and handle them. The main difference is that Twisted generates log events from dicts rather than strings. This adds a great deal of structure to logs, and Twisted even recommends using JSON to preserve that structure. However, it also makes Twisted harder to integrate with other logging tools. Although Twisted is compatible with the standard logging library, using it isn’t recommended. Instead, you can use a file or syslog observer to centralize your logs.
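For example, a Twisted log call can carry structured fields as keyword arguments, and observers receive them as part of the event dict (the field name below is illustrative):

from twisted.logger import Logger

log = Logger()

# 'username' is stored as a structured field on the event, not just interpolated text
log.info("User {username} logged in", username="alice")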

As an example, let’s look at the echoserv example provided by Twisted Matrix Labs. We’ll add a logging call to the main() method. We’ll declare a global logger and attach it to a JSON file observer, which writes logs to a file in JSON format. Twisted uses objects called publishers to attach loggers to observers. However, in this example, we’ll use a global publisher to route all log messages to the file observer.

import io

from twisted.logger import Logger, globalLogPublisher, jsonFileLogObserver

# Declare our global logger
log = Logger()

# Attach a JSON file observer to the global publisher
globalLogPublisher.addObserver(jsonFileLogObserver(io.open("debug.log", "a")))

...

def main():
    log.info("Starting application")
    f = Factory()
    ...

The resulting log entry looks like the following in debug.log and in Loggly:

{"log_namespace": "__main__," "log_time": 1569620747.600693, "log_source": null, "log_format": "Starting application," "log_logger": {"unpersistable": true}, "log_level": {"name": "info," "__class_uuid__": "02e59486-f24d-46ad-8224-3acdf2a5732a"}}

Viewing a log from Twisted in SolarWinds Loggly. © 2019 SolarWinds, Inc. All rights reserved.

 


Last updated: 2022