A Benchmark of Five Node.js Logging Libraries


Logs are a valuable resource for analyzing application behavior and traffic, but it’s important to choose a logging framework that is both efficient and reliable. Which Node.js logging library is the best choice for your application? This blog post compares the performance and reliability of three popular debug logging libraries, and the performance of two popular Express request logging libraries. The goal was to measure the time required to log a large number of messages and to compare reliability by recording message drop rates.

Libraries Tested

The following libraries were tested for debug-level logging.

  1. Bunyan
  2. Log4js
  3. Winston

The following libraries were tested for HTTP request logging with Express.

  1. Express-winston
  2. Morgan

Setup and Configuration

Test Structure

Here’s how the test was performed:

                   Syslog (UDP and TCP)                      File
DEBUG Logging      100,000 DEBUG logs over 10 iterations     10,000 DEBUG logs over 100 iterations
Request Logging    100,000 requests over 10 iterations,      —
                   maximum 500 concurrent connections

All of the libraries were tested and compared using syslog UDP and TCP (rsyslog was used in the tests), as well as file transports, for the debug-level logging tests. The file-transport benchmark used more iterations (with fewer log calls per iteration) to avoid overloading the process. For each specific test, a total of 1 million log events were logged. Each test was run three times, and the results were averaged. Log data was sent to a local server, so the tests were not affected by network lag.
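To make that structure concrete, a single file-transport run might look roughly like the sketch below. This is an illustrative reconstruction, not the benchmark’s actual code: make-logger is a hypothetical factory standing in for whichever library is under test, and the parameters (100 iterations of 10,000 logs with a one-second pause) follow the table above and the methodology described in the comments below.

    // Illustrative harness: 100 iterations of 10,000 debug logs each
    // against a file transport, pausing between iterations.
    const logger = require('./make-logger')(); // hypothetical factory returning
                                               // a Bunyan, Log4js, or Winston logger

    function runIteration(iter) {
      if (iter >= 100) return;
      for (let i = 0; i < 10000; i++) {
        logger.debug('benchmark message ' + i);
      }
      setTimeout(() => runIteration(iter + 1), 1000); // 1-second pause between iterations
    }

    runIteration(0);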

For the debug-level logging tests, a list of prime numbers was generated in the background to simulate a workload.
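A rough illustration of how that workload could be produced with a forked child process follows; the file name and structure are hypothetical, not the benchmark’s exact code:

    // primes-worker.js (hypothetical file): keep the CPU busy with naive prime checks.
    function isPrime(n) {
      for (let d = 2; d * d <= n; d++) {
        if (n % d === 0) return false;
      }
      return n > 1;
    }

    let count = 0;
    for (let n = 2; ; n++) {      // runs until the parent kills the process
      if (isPrime(n)) count++;
    }

    // In the main benchmark script:
    const { fork } = require('child_process');
    const worker = fork('./primes-worker.js');
    // ... run the logging iterations ...
    worker.kill();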

For the request logging tests, an Express app was created that returned the plain text “Hello World” when requested. The benchmark results were gathered using ApacheBench.
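The app itself takes only a few lines of Express. The snippet below is a plausible reconstruction, not the repository’s exact code (morgan is shown with its ‘combined’ format, and the port is an assumption), followed by an ApacheBench invocation matching the parameters above:

    // Hypothetical reconstruction of the benchmarked app: a “Hello World”
    // Express server with morgan logging each request.
    const express = require('express');
    const morgan = require('morgan');

    const app = express();
    app.use(morgan('combined')); // Apache-style request log lines

    app.get('/', (req, res) => {
      res.send('Hello World');
    });

    app.listen(3000);

Driving it with ab then looks like:

    ab -n 100000 -c 500 http://localhost:3000/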

Server Hardware

All of the tests were performed on a server with burstable, high-frequency Intel Xeon processors and 4 GiB of RAM. The Node.js processes, rsyslog, and ApacheBench all ran on this server, so no network latency affected the tests.

Logging Format and Library Configuration

The standard logging format for each library was used for the tests, and the configuration was kept as close to the default as possible. Keep in mind that tuning the configuration for your specific application may improve each library’s performance.
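For reference, near-default setups for the three debug loggers look roughly like the following. The details are illustrative, since option names vary across library versions; in particular, the Winston snippet uses the winston 3 createLogger API, which may differ from the version benchmarked here.

    const bunyan = require('bunyan');
    const log4js = require('log4js');
    const winston = require('winston');

    // Bunyan: JSON lines to a file
    const bunyanLogger = bunyan.createLogger({
      name: 'bench',
      level: 'debug',
      streams: [{ path: 'bunyan.log' }],
    });

    // Log4js: configure a file appender, then get the default logger
    log4js.configure({
      appenders: { file: { type: 'file', filename: 'log4js.log' } },
      categories: { default: { appenders: ['file'], level: 'debug' } },
    });
    const log4jsLogger = log4js.getLogger();

    // Winston: a fresh logger with a file transport
    const winstonLogger = winston.createLogger({
      level: 'debug',
      transports: [new winston.transports.File({ filename: 'winston.log' })],
    });

    bunyanLogger.debug('hello');
    log4jsLogger.debug('hello');
    winstonLogger.debug('hello');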

Many of the libraries require an additional module in order to log to syslog, so the appropriate syslog transport was added for each library.
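As one example of that extra wiring, Winston gains a syslog transport from the separate winston-syslog package. The sketch below follows that package’s documented options, but treat the details as version-dependent:

    const winston = require('winston');
    const { Syslog } = require('winston-syslog'); // separate transport package

    const logger = winston.createLogger({
      level: 'debug',
      transports: [
        new Syslog({
          host: 'localhost',
          port: 514,        // standard syslog port
          protocol: 'udp4', // 'tcp4' for the TCP runs
        }),
      ],
    });

    logger.debug('hello via syslog');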

A Note on Drop Rates

Since the system was being pushed to its limits for testing purposes, the syslog drop rates seen below are greater than you should ever experience in a real-world application. Requests sent to a real-world application are normally spread out over time, which allows the system to catch up.

Debug Logging: Test Results

File Transport

All three libraries produced similar results with a file transport, completing each iteration in less than 500 milliseconds. Bunyan, the fastest of the three, ran 26% faster than Log4js and 34% faster than Winston. As expected with a file transport, the drop rate was 0% for all three libraries.

[Chart: file transport results]

Framework   Time (ms)   Drop Rate
Bunyan      282         0%
Log4js      381         0%
Winston     426         0%

Syslog Transport

TCP

Log4js was the fastest library for the TCP protocol, just as it was for UDP (below). The tradeoff is a slight drop rate of less than 1%. Bunyan was the faster of the two libraries that achieved a 0% drop rate. In addition to having the highest average time, Winston showed the largest fluctuation between test runs.

Framework   Time (ms)   Drop Rate
Bunyan      11,392      0%
Log4js      9,022       < 1%
Winston     12,210      0%

UDP

For the UDP tests, the libraries showed striking differences in both efficiency and reliability. Log4js was not only the fastest library but also produced the smallest drop rate: it ran 26% faster than Bunyan and 66% faster than Winston, and its drop rate was 16 percentage points lower than Bunyan’s and 25 points lower than Winston’s.

[Chart: syslog UDP results]

Framework   Time (ms)   Drop Rate
Bunyan      4,769       43%
Log4js      3,537       27%
Winston     10,537      52%

HTTP Request Logging: Test Results

Syslog Transport

TCP

As expected with TCP, both libraries had a 0% drop rate. Morgan processed each request 56% faster than Winston, suggesting it would be the better request logging choice in a real-world application. The following chart shows the average time required to process each individual request.

[Chart: average time per request, syslog TCP]

Framework   Time (ms)
Morgan      108
Winston     243

The following chart shows the average total request time per iteration of 100,000 requests.

[Chart: average total time per iteration, syslog TCP]

Framework   Time (ms)   Drop Rate
Morgan      47,360      0%
Winston     91,628      0%

UDP

Both libraries showed a negligible drop rate of less than 1%. As with TCP, Morgan ran more efficiently than Winston. The following chart shows the average time required to process each individual request.

[Chart: average time per request, syslog UDP]

Framework   Time (ms)
Morgan      129
Winston     163

The following chart shows the average total request time per iteration of 100,000 requests.

[Chart: average total time per iteration, syslog UDP]

Framework   Time (ms)   Drop Rate
Morgan      50,002      < 1%
Winston     62,375      < 1%

Conclusion

When outputting debug logs to a file, Bunyan was the most efficient library. Over syslog, Log4js was the fastest of the three libraries for both TCP and UDP, though you may prefer Bunyan over Log4js for TCP, since Bunyan had a 0% drop rate. For request logging, Morgan consistently outperformed Winston. With Node.js logging libraries, the tradeoff between efficiency and reliability is not always significant: as these results show, the most efficient library can also be the most reliable one.


About Lukas

Lukas Rossi is an experienced front- and back-end web designer and developer who has developed numerous mobile applications and websites. Lukas discovered an interest in web development during high school, and since then he hasn’t stopped learning the latest technologies. He is the manager of App Dimensions, a company that creates mobile apps to improve everyday life. In his free time, Lukas enjoys bike riding and various other outdoor activities.

Lukas is also a contributor to the Ultimate Guide to Logging. Check out his contributions here.


3 comments

  • Leander

    7 months ago

    Since I just finished making this, I’m going to add this to the list: https://github.com/diamondio/better-logs

    I find when developing larger libraries with lots and lots of modules, it’s good to leave the log statements in the code so it’d be easy to pick up again. The problem is that this creates so much spam in the terminal, and this library gives you finer control over what logs to show/hide.

    It’s very similar to winston, except that it has a much cleaner interface and is very flexible to use with streams. It also includes morgan for HTTP logging!

  • Eric

    12 months ago

    Can you share the code for your tests?

    • Karen Sowa

      11 months ago

      Eric,
      Here is the reply from Lukas:
      Here is the URL to the repository: https://github.com/appdimensions/nodejs-logging-benchmarks

      For the syslog debug logs, I called the logging function 100,000 times per iteration, and I iterated 10 times, for a total of 1,000,000 logs. There is a 5 second pause between each iteration.

      For the file debug logs, I called the logging function 10,000 times per iteration, and I iterated 100 times, for a total of 1,000,000 logs. There is a 1 second pause between each iteration.

      While the syslog and file debug logging programs are running, a child process is also running that generates a list of random numbers.

      At the start of each iteration (for syslog and file debug logs), I store the current “process.hrtime()” value in a variable. At the end of each iteration, the time it took is found by taking the current “process.hrtime()” and subtracting from it the value that was stored before the iteration started. The difference represents how long the iteration took.
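      In code, the timing pattern Lukas describes is just a few lines (a sketch, not the repository’s exact code):

          const start = process.hrtime();
          // ... one iteration of log calls runs here ...
          const [sec, ns] = process.hrtime(start); // elapsed [seconds, nanoseconds]
          const elapsedMs = sec * 1e3 + ns / 1e6;
          console.log('iteration took ' + elapsedMs.toFixed(1) + ' ms');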

      The request log scripts are a simple Express application that outputs “Hello World”. The libraries used take care of the request logging (one script required some extra configuration code to log properly).

      I then used the Apache benchmarking tool (ab) and a loop to request the web page 100,000 times each iteration, with 10 iterations, and a maximum of 500 concurrent connections. ab provides the information about average request time.
