Centralizing Node Logs

Ultimate Guide to Logging - Your open-source resource for understanding, analyzing, and troubleshooting system logs


When you’re logging from an application, a good practice is to centralize your logs, which means storing them in a single, central location. This location could be a shared file system; a database server such as MySQL/PostgreSQL, Riak, or Cassandra; or a remote logging service, such as SolarWinds® Loggly®.

Benefits of Centralizing Logs

Keeping all the log data in a central location makes it easy to extract and query later, as the need arises. Depending on the logging library you choose, there will be an assortment of built-in transports, as well as third-party extension libraries, allowing your application code to log to a variety of locations.


Logs can be sent to several locations, including files in the local filesystem, a syslog server, an HTTP/S endpoint, and more. Depending on your logging framework (for example, Winston or Pino), you may have a much wider array of options, such as CouchDB, MongoDB, Redis, and Loggly.


Sending logs over HTTP/S is a common way to ship them to third-party services such as HipChat, Slack, Loggly, Airbrake, and other notification services. In the example below, you can see how to configure a simple HTTP endpoint using Winston’s Http transport.

const winston = require('winston');

const logger = winston.createLogger();
// `options` is described in the list below
logger.add(new winston.transports.Http(options));

The HTTP transport is a generic way to log, query, and stream logs from an arbitrary HTTP endpoint.

It takes options that are passed to the node.js http or https request:

  • host: (Default: localhost) Remote host of the HTTP logging endpoint
  • port: (Default: 80 or 443) Remote port of the HTTP logging endpoint
  • path: (Default: /) Remote URI of the HTTP logging endpoint
  • auth: (Default: None) An object representing the username and password for HTTP Basic Auth
  • ssl: (Default: false) Value indicating if we should use HTTPS
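To make the mapping concrete, here is a sketch of how those transport options translate into the options object Node’s http/https request expects. The helper `buildRequestOptions` is hypothetical, written for illustration; only the option names in the list above come from the transport itself:

```javascript
// Hypothetical helper: translate the Http transport options listed
// above into an options object for Node's http/https request.
function buildRequestOptions(opts = {}) {
  const ssl = opts.ssl || false;
  return {
    protocol: ssl ? 'https:' : 'http:',
    host: opts.host || 'localhost',
    port: opts.port || (ssl ? 443 : 80),
    path: opts.path || '/',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Node's http module expects Basic Auth as a "user:pass" string
    auth: opts.auth ? `${opts.auth.username}:${opts.auth.password}` : undefined
  };
}

console.log(buildRequestOptions({ host: 'logs.example.com', ssl: true }));
```

Note how the defaults from the list above (localhost, port 80/443, path `/`) fall out naturally when an option is omitted.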


Now let’s see how to connect to a remote syslog server. To do this, I’m using a third-party transport for Winston, suitably called winston-syslog. To use the Syslog transport in Winston, simply require the module, then either pass an instance to a new Winston logger or add it to an existing one.

const winston = require('winston');

// Requiring `winston-syslog` will expose
// `winston.transports.Syslog`
require('winston-syslog').Syslog;

winston.add(new winston.transports.Syslog(options));

Syslog Log Levels

Syslog allows only a subset of the levels available in Winston, and levels that don’t match will be ignored. To use winston-syslog effectively, instruct Winston to use the syslog levels:

const winston = require('winston');
require('winston-syslog');

const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  transports: [
    new winston.transports.Syslog()
  ]
});

The Syslog transport will only log messages at the levels available in the syslog protocol.
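For reference, the syslog protocol defines eight severities, numbered from 0 (most severe) to 7. The sketch below mirrors the mapping `winston.config.syslog.levels` provides; the `isSyslogLevel` helper is hypothetical, added here to show how a non-syslog level name would fail to match:

```javascript
// The eight syslog severities, most to least severe.
// This mirrors the mapping winston.config.syslog.levels provides.
const syslogLevels = {
  emerg: 0,
  alert: 1,
  crit: 2,
  error: 3,
  warning: 4,
  notice: 5,
  info: 6,
  debug: 7
};

// Hypothetical helper: check whether a level name from another
// scheme (e.g. Winston's default 'verbose' or 'silly') would be
// understood by a syslog transport.
function isSyslogLevel(level) {
  return Object.prototype.hasOwnProperty.call(syslogLevels, level);
}

console.log(isSyslogLevel('warning')); // true
console.log(isSyslogLevel('silly'));   // false
```

This is why levels such as `silly` from Winston’s default scheme are dropped unless you switch the logger to the syslog levels as shown above.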

MongoDB Transport

We can store logs in MongoDB using the winston-mongodb module (as of winston@3.0.0).

For example:

const winston = require('winston');

// Requiring `winston-mongodb` will expose
// `winston.transports.MongoDB`
require('winston-mongodb');

const logger = winston.createLogger();
logger.add(new winston.transports.MongoDB(options));

FastFileRotate Transport

fast-file-rotate is a performant file transport providing daily log rotation. It outperforms the winston-daily-rotate-file module, although it has fewer configurable options.

const FileRotateTransport = require('fast-file-rotate');
const winston = require('winston');

const logger = winston.createLogger({
  transports: [
    new FileRotateTransport({
      fileName: __dirname + '/console%DATE%.log'
    })
  ]
});

Pino Transport

Like Winston, Pino also supports multiple transports. The difference is that Pino transports are supplementary tools that run as a separate process. You pipe logs from your Node.js application to the transport over standard input, as in the example below.

const split = require('split2')
const pump = require('pump')
const through = require('through2')

const myTransport = through.obj(function (chunk, enc, cb) {
  // Do whatever you want with the parsed log line here
  console.log(chunk)
  cb()
})

pump(process.stdin, split(JSON.parse), myTransport)

Let’s save this transport as the file my-transport-process.js. Logs can now be consumed using shell piping:

$ node my-app-which-logs-stuff-to-stdout.js | node my-transport-process.js

Ideally, the transport should consume logs in a separate process from the application. Using transports in the same process causes unnecessary load and slows down Node’s single-threaded event loop.
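At the core of any such transport is parsing newline-delimited JSON, the format Pino writes to stdout. A dependency-free sketch of that parsing step, using a hypothetical `parseLogLines` helper:

```javascript
// Hypothetical helper: split a chunk of newline-delimited JSON
// (the format Pino writes to stdout) into parsed log objects,
// skipping blank lines.
function parseLogLines(text) {
  return text
    .split('\n')
    .filter((line) => line.trim() !== '')
    .map((line) => JSON.parse(line));
}

// In a real transport process you would feed process.stdin through
// this, which is exactly what split2 does in the example above.
const logs = parseLogLines('{"level":30,"msg":"hello"}\n{"level":50,"msg":"boom"}\n');
console.log(logs.length); // 2
```

This is the job `split(JSON.parse)` performs in the through2-based example: each line of stdin becomes one parsed log object flowing through the pipeline.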

Log Rotation

If you’re centralizing logs to another destination, you should make sure logs aren’t filling up your local server. If you’re familiar with Unix/Linux systems, you’ll recognize the concept of log rotation. Log rotation cycles through log files, creating archives and cleaning up old files over time. The process starts with one log file. When certain conditions are met, be that time, file size, etc., the file is archived and a new file is then logged to. Log rotation is commonly performed by a cron job on the host, but it can also be performed directly by a logging framework like Winston. To do so in Winston, use the following code example.

const { createLogger, format, transports } = require('winston');

// The DailyRotateFile transport is provided by `winston-daily-rotate-file`
require('winston-daily-rotate-file');

const logger = createLogger({
    transports: [
        new transports.DailyRotateFile({
            dirname: 'logs',
            filename: 'somefile.log',
            maxSize: '5m',
            level: 'silly',
            format: format.combine(
                format.timestamp({ format: 'YYYY-MM-DDTHH:mm:ss.SSS' }),
                format.json()
            )
        })
    ]
});

This will log to somefile.log. When the file grows larger than 5 megabytes, it will back up the file, appending the current date in DD-MM-YYYY format, and create a new log file.
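As a sketch of the archiving step just described, here is a hypothetical helper that builds the backup filename by appending the current date in DD-MM-YYYY format:

```javascript
// Hypothetical helper: given a log filename and a date, produce the
// archive name with the date appended in DD-MM-YYYY format, e.g.
// somefile.log -> somefile-25-12-2024.log
function archiveName(filename, date = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const stamp = `${pad(date.getDate())}-${pad(date.getMonth() + 1)}-${date.getFullYear()}`;
  const dot = filename.lastIndexOf('.');
  if (dot === -1) return `${filename}-${stamp}`;
  return `${filename.slice(0, dot)}-${stamp}${filename.slice(dot)}`;
}

console.log(archiveName('somefile.log', new Date(2024, 11, 25)));
// somefile-25-12-2024.log
```

The transport performs this renaming for you; the helper only illustrates the naming scheme so you know what to expect when browsing the logs directory.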