
PostgreSQL Logs: Logging Setup and Troubleshooting


Last updated: October 2024

As with other relational database management systems (RDBMS), PostgreSQL provides a log of activity and error messages. Logs can be daunting: they contain many lines of information, and it’s easy to get lost in them.

However, logs exist to help you solve errors efficiently. If something goes wrong with your app, it’s vital your logs are correctly set up and managed; confusion will only delay the fix.

The key with logs is to have access to enough lines of activity and history to fix an issue, but not so many that sifting through them slows you down. This is why a log management strategy is important.

In this article, we first look at why logging is important. Next, we explain the parameters used to customize logs in PostgreSQL. We then describe how to log queries in PostgreSQL and finally recommend a tool for managing PostgreSQL logs.

Why Is Logging Necessary?

Primarily, logging enables database administrators (DBAs) to identify and resolve problems. In simple terms, the log is the place one visits when more detail is necessary to resolve an issue or to monitor a new change. It shows activity along with a timestamp, which is useful in maintaining your application’s database.

From a security perspective, logs act as a red flag when an issue occurs. Reviewing logs may help identify malicious attacks on your system. However, given the large amount of information in the logs, it’s impractical to sift through every line. Therefore, it’s advisable to use a log management system to better organize and set up your logs.

Postgres Log Parameters

As mentioned, it’s vital to have enough logs to solve an issue, but not so many they slow your investigation down. You can achieve this balance by fully understanding Postgres log parameters. These parameters help you customize the logs so they best apply to your scenario.

The parameters can be set in the postgresql.conf file or on the server command line. Some of the parameters used in log management are listed below.

Parameter application_name

The application name can be any string of fewer than 64 characters. It’s displayed in the pg_stat_activity view and included in the CSV log, and it can be added to other log entries via the log_line_prefix parameter. Note the application name should only contain printable ASCII characters; any other characters will be replaced by question marks.
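For illustration, you can set the application name for a session through libpq’s PGAPPNAME environment variable and then see it in pg_stat_activity. A minimal sketch (billing_job is just an example name):

$ PGAPPNAME=billing_job psql -U postgres -c "SELECT pid, application_name FROM pg_stat_activity"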

Parameter log_checkpoints

When this parameter is set to on, checkpoints and restart points are also recorded in the server log.

Parameter logging_collector

PostgreSQL writes log files only when this parameter is set to on and the log collector process is running. The log collector captures log messages sent to stderr (the standard error stream) and redirects them into log files. This method is often better than Syslog because some types of messages can’t appear in Syslog output. For example, an error raised by a script such as archive_command would otherwise never reach the log.

Parameter log_directory

If the log collector is enabled, this parameter determines the directory in which log files are created. By default, the value is pg_log, and it can be an absolute path or a path relative to the cluster’s data directory.

Parameter log_destination

The PostgreSQL log management system allows users to store logs in several ways, such as stderr, csvlog, event log (Windows only), and Syslog. For example, if we set this parameter to csvlog, the logs will be saved in comma-separated format. This parameter indicates where the logs are stored, and you can provide several destinations separated by commas. The default value is stderr.

Parameter log_filename

This parameter determines the name of the file created for logging purposes. Because the name changes over time, it should include strftime %-escapes (for example, %Y for the year) so each file gets a unique, timestamped name.
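To check how these logging parameters are currently set on a running server, you can query the pg_settings view; a quick sketch:

$ psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'log%'"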

How to Log Queries

It’s easy to temporarily enable logging in PostgreSQL by changing some configuration settings and restarting the server. However, we’ll show how to configure Postgres to create permanent, rotating log files for all sessions and connections.

Step 1: Find Configuration Files

If you’re not sure of the postgresql.conf configuration file’s location, the easiest way to find it is to connect with the Postgres client (psql) and run the SHOW config_file command:

$ psql -U postgres -c 'SHOW config_file'

Step 2: Find the Data Directory Path

The next step is to verify the data directory path of the Postgres installation using another SHOW command:

$ psql -U postgres -c 'SHOW data_directory'

In some installations, the configuration file and the data directory are on the same path, while in others (as in this example), they’re different. Either way, copy this data directory path for later use.

Step 3: Configure PostgreSQL to Generate Log Output

Open the postgresql.conf file you found in step one and scroll down to the ERROR REPORTING AND LOGGING section. The most important settings here are log_destination and logging_collector. The following are the recommended settings, though you can adjust them as needed at any time:

#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------

# - Where to Log -

log_destination = 'csvlog'      # Valid values are combinations of
                                # stderr, csvlog, syslog, and eventlog,
                                # depending on platform. csvlog
                                # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on          # Enable capturing of stderr and csvlog
                                # into log files. Required to be on for
                                # csvlogs.
                                # (change requires restart)

# These are only used if logging_collector is on:
log_directory = 'pg_log'        # directory where log files are written,
                                # can be absolute or relative to PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'  # log file name pattern
In this case, we instruct PostgreSQL to generate logs in CSV format and output them to the pg_log directory (within the data directory). We also set log_filename to generate suitably named files that include the log file’s timestamp.

Step 4: Restart the PostgreSQL Service

Now restart the PostgreSQL service for these settings to take effect (logging_collector in particular requires a full restart). Restarting Postgres differs from system to system, but the command on UNIX systems typically looks like this:

$ service postgresql restart

 * Restarting PostgreSQL 9.3 database server. [OK]
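On modern distributions that use systemd, the equivalent command is typically the following (assuming the service is registered as postgresql):

$ sudo systemctl restart postgresql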

Step 5: Check the Log Generation

After the restart, logging should start immediately. To verify this, navigate to the pg_log directory inside the data directory of your Postgres installation. Remember, we already copied the data directory path earlier; just append /pg_log to it to get the log directory, then change into it:

$ cd /var/lib/postgresql/9.3/main/pg_log

List the files. You should see that a log file was created after the service restarted:

$ ls -l
-rw------- 1 postgres postgres 935 Apr 13 20:30 postgresql-2016-04-13_203022.csv

Postgres Logging Levels

Log levels are tags that show the severity of each log. These levels help you prioritize logs when troubleshooting. Here’s a list of Postgres logging levels from least to most severe:

  • DEBUG (levels 1-5): Provides detailed information about the database’s operation. Higher levels (e.g., DEBUG5) produce more verbose output.
  • INFO: Informational messages highlighting the application’s progress, such as connection attempts and configuration changes.
  • NOTICE: Informs about conditions that aren’t errors but may require attention, such as implicit type conversions.
  • WARNING: Alerts about potential issues that aren’t critical but may need attention, such as deprecated features.
  • ERROR: Reports an error that causes the current command to fail but keeps the database operational.
  • LOG: Informational messages that are always logged regardless of the logging level settings, such as server startup and shutdown messages.
  • FATAL: Reports severe errors that terminate the current database session.
  • PANIC: Highlights an error that causes all database sessions to abort and the database server to restart.
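Which of these levels actually reach the server log is controlled by the log_min_messages parameter (and log_min_error_statement, which logs the SQL statement that triggered an error). A minimal postgresql.conf sketch; the values shown are just examples:

log_min_messages = warning          # log WARNING and more severe messages
log_min_error_statement = error     # also log the statement causing ERROR and above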

Logging PostgreSQL Queries

When logging in PostgreSQL, you’ll want to capture SQL statements and their durations to help with monitoring, debugging, and performance analysis. To do this, configure the log_statement and log_duration parameters.

The log_statement parameter lets you specify which SQL statements to include in the logs. Here are the four available options:

  • none: The default; no SQL statements are logged.
  • ddl: Logs all data definition statements, such as CREATE, DROP, and ALTER.
  • mod: Besides DDL statements, logs data modification statements such as INSERT, UPDATE, and DELETE.
  • all: Logs all statements.

The log_duration parameter is a boolean that, when set to on, logs the duration of each completed SQL statement.
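On PostgreSQL 9.4 and later, you can set both parameters without editing postgresql.conf by hand using ALTER SYSTEM, then reload the configuration. A sketch (mod and on are example values):

$ psql -U postgres -c "ALTER SYSTEM SET log_statement = 'mod'"
$ psql -U postgres -c "ALTER SYSTEM SET log_duration = on"
$ psql -U postgres -c "SELECT pg_reload_conf()"

Neither parameter requires a restart, so the new values take effect as soon as the configuration is reloaded.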

Postgres Log File Management

You’ll need to configure log rotation and retention policies to manage log files effectively. This configuration keeps your log files under control and prevents them from using too much disk space.

Log Rotation

Log rotation involves creating new log files periodically and archiving the old ones to prevent individual log files from becoming too large. The parameters to configure include:

  • log_rotation_age: Specifies a log file’s maximum age (in minutes, hours, or days) before rotating to a new log file.
  • log_rotation_size: Specifies the maximum size (in kilobytes, megabytes, or gigabytes) a log file can reach before rotation.
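For example, to start a new log file daily or once the current file reaches 100MB, whichever comes first, you could set the following in postgresql.conf (the values are illustrative):

log_rotation_age = 1d
log_rotation_size = 100MB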

Log Retention

PostgreSQL doesn’t have a built-in method to delete old log files automatically. However, you can manage retention through:

  • Turning on the log_truncate_on_rotation parameter to truncate (overwrite) the existing log file of the same name during log rotation.
  • Filesystem-level rotation tools like logrotate on Linux systems.
  • Custom scripts to delete old logs based on your retention policy.
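As an illustration, a minimal logrotate policy for the log directory used earlier might look like the sketch below (the path and retention values are assumptions to adapt to your installation):

/var/lib/postgresql/9.3/main/pg_log/*.log {
    # keep eight compressed weekly archives
    weekly
    rotate 8
    compress
    missingok
    notifempty
}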

Analyzing Postgres Logs

Analyzing PostgreSQL logs is essential for performance tuning, debugging, and monitoring. Popular tools include pgBadger and pgFouine, and you can also use custom scripts.

pgBadger

pgBadger is a Postgres log analyzer written in Perl. It generates detailed HTML reports with JavaScript®-powered graphs, making database activity and performance easy to understand.

Key features include:

  • Detailed performance analysis.
  • HTML reports with graphs and charts.
  • Supports CSV output.
  • Incremental log analysis.
  • Parallel log processing for improved performance.
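As a quick illustration, running pgBadger against the CSV logs generated earlier could look like this (the paths are assumptions based on the directory layout used in this article):

$ pgbadger /var/lib/postgresql/9.3/main/pg_log/*.csv -o report.html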

pgFouine

pgFouine is a PHP log analyzer for PostgreSQL. It provides detailed reports on query performance, including slow queries and their frequency.

Key features are:

  • Identifies slow queries and their frequency.
  • Provides recommendations for query optimization.
  • HTML reports with detailed query statistics.

Custom Scripts

You can create custom scripts for specific needs or more control over log analysis. You can write these scripts in languages like Python®, Perl®, or Bash® and customize them to meet your specific requirements.
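For example, here’s a quick shell sketch that counts ERROR entries per CSV log file (the path is an assumption, and the pattern relies on the error severity appearing as its own CSV field, so verify it against your log layout):

$ grep -c ',ERROR,' /var/lib/postgresql/9.3/main/pg_log/*.csv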

Loggly: Log Analysis and Log Monitoring Solution

As mentioned, logging enables us to identify problems and resolve them efficiently. Postgres allows you to use parameters to customize your logs. A good log setup is important to quickly and correctly resolve your issues. To help you do this, I suggest trying SolarWinds® Loggly®, which provides a level of control to manage access to any server and database type (including PostgreSQL).

Loggly works well for collecting all-encompassing information that doesn’t require a high level of sorting/analysis, and it works nicely for weblogs, server logs, and other types of information. You can try Loggly for free.


This post was written by Daniel de Oliveira. Daniel is a full stack developer and analyst, experienced with Java, Angular, and Flutter.