
Best Practices for AWS Lambda Logging


Logging serves a few valuable purposes, from debugging to serving as a source for alerts about problems in production. But for logs to be useful, they need to be accessible. When it comes to AWS Lambda logging, there can be a world of difference between accessing logs for debugging and for monitoring. Developers might be running AWS Lambda services in a local environment that logs to their file system while the production logs get fed into several different systems for monitoring and support. We’ll cover these concerns and more in this post on best practices for AWS Lambda logging.



Source Code Should Not Change per Environment

As a general rule, you never want to change source code when deploying to different environments, and that applies to logging as well. Though it may be tempting to log everything during development and comment the logging lines out before deployment, this habit leads to trouble. There are better ways to accomplish the same thing using configuration and log levels.
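As a sketch of what this can look like, assuming a Python Lambda function (Python is just one possible runtime, and the `LOG_LEVEL` variable name is an illustrative choice), the active level can come from an environment variable set per environment rather than from the code itself:

```python
import logging
import os

# Read the desired level from an environment variable. The name
# LOG_LEVEL is an example; set it per environment in the Lambda
# function's configuration instead of editing the source.
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

logger = logging.getLogger()
logger.setLevel(LOG_LEVEL)

logger.debug("Only emitted when LOG_LEVEL=DEBUG")
logger.info("Emitted in development and production alike")
```

The same code ships everywhere; only the environment variable changes between development and production.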

Log at Different Levels

The industry has more or less standardized around a few basic levels. Though there's some variation across languages and frameworks, these levels are generally applicable in some form or another:

1. Verbose: All the messages

2. Debug: Messages including stack traces

3. Info: Informational messages

4. Warning: Nothing too serious, but you might want to take some action on this

5. Error: Only the errors, please

6. Critical: Well, it just bombed… This must be addressed immediately
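In Python's standard `logging` module, for instance, most of these levels map directly to built-in constants (Python has no separate verbose level; `DEBUG` is its lowest standard level, and the logger name below is just an example):

```python
import logging

logger = logging.getLogger("orders")
logger.setLevel(logging.DEBUG)  # emit everything from DEBUG up

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

logger.debug("cart contents: %s", ["sku-1", "sku-2"])  # Verbose/Debug
logger.info("order received")                          # Info
logger.warning("retrying payment gateway")             # Warning
logger.error("payment failed")                         # Error
logger.critical("order pipeline down")                 # Critical
```

Raising the level to `logging.WARNING` would silently drop the debug and info calls without touching any call sites.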

What do we want to accomplish with different log levels? First, they give us a convenient way to filter logs. Open your browser's developer tools (F12 or Ctrl+Shift+I) and check the console. There, you can filter the logs by level. This is one use for logging levels. Ideally, you'd be able to set your AWS Lambda services to different logging levels to avoid flooding your logs with verbose messages.

Provide Enough Context

You need to provide enough context to make the logs useful. No one likes poking around in logs and looking at 900,000 entries of “toString is not a function of ‘undefined’” or “object reference not an instance of an object” with absolutely zero context around the errors. These log entries are useful only to the extent we can use them to resolve the issue. Context can be anything from the class or function name to the stack trace. However, even stack traces can fail us when an error comes from a threaded context.
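A minimal sketch of what context-rich logging might look like in a Python handler (the `charge` function and field names are placeholders; `function_name` and `aws_request_id` are real attributes of the Lambda context object):

```python
import json
import logging

logger = logging.getLogger("billing")

def charge(customer_id, amount):
    # Placeholder for the real payment call.
    pass

def handler(event, context):
    try:
        charge(event["customer_id"], event["amount"])
    except KeyError as err:
        # Log enough context to find and reproduce the failure:
        # which function, which invocation, which field was missing.
        logger.error(json.dumps({
            "function": context.function_name,
            "request_id": context.aws_request_id,
            "missing_field": str(err),
        }))
        raise
```

With the request ID attached, a single bad invocation can be traced through the logs instead of being one anonymous error among thousands.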


So what about the context needed when you have branching events or a chain of events? Many AWS Lambda applications are built using orchestration, and we need to account for these patterns as well. AWS X-Ray will grab these chains of events for you and link them in one trace record.

Use a Logging Framework

When you use a logging framework, you get the benefits of many of the above best practices. They’re usually pretty good about capturing some context (or at least giving guidance through methods or functions designed to take in context). Logging frameworks also provide a way to log at different levels and configure which level to log at. They’ll typically even have a way to nest context for more complex use cases.
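As one example of nested context, Python's standard library offers `logging.LoggerAdapter`, which attaches context once instead of at every call site (the `request_id` value here is a placeholder):

```python
import logging

class ContextAdapter(logging.LoggerAdapter):
    # Prepend the stored context to every message so call sites
    # don't have to repeat it.
    def process(self, msg, kwargs):
        return f"[{self.extra['request_id']}] {msg}", kwargs

base = logging.getLogger("orders")
base.setLevel(logging.INFO)

log = ContextAdapter(base, {"request_id": "req-42"})
log.info("order accepted")  # logged as "[req-42] order accepted"
```

Adapters can wrap other adapters, which is one simple way to build up nested context for more complex flows.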


AWS Lambda has its own mechanism to log errors occurring in execution. It’s fine to use this error message as is unless you want to provide additional details or log errors in a particular way. For example, you may want to provide a more meaningful error message or stick with a certain format.
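For instance, a Python handler might raise a domain-specific exception whose message follows a fixed, parseable format while still letting Lambda's own error reporting surface it (`OrderError` and the code values are hypothetical):

```python
import json

class OrderError(Exception):
    """Domain error with a stable, machine-parseable message format."""
    def __init__(self, code, detail):
        super().__init__(json.dumps({"code": code, "detail": detail}))
        self.code = code

def handler(event, context):
    # Let Lambda's built-in error reporting surface the exception,
    # but control the message shape so downstream parsers stay simple.
    if "order_id" not in event:
        raise OrderError("MISSING_ORDER_ID", "event had no order_id field")
    return {"status": "ok"}
```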

Centralize Log Aggregation

Generally, you want to centralize log aggregation across applications, especially in more complex environments. When you have a bunch of AWS Lambda services running async off message queues, it can be easy to fall into the trap of keeping your view too narrow. If you’re looking at only the few Lambda services you’re running, you may be missing problems impacting the entire system. This may be beyond the scope of using more specific log levels and messages for AWS Lambda services, but it’s getting at how we actually use the logs.

Separate Producers From Consumers

Producers and consumers are inherently separate when it comes to AWS Lambda services, but let’s set this up conceptually, since the pattern shows us how the log data should flow. On the producer side, we have the AWS Lambda service logging through a logging framework at different levels. It’s logging informational messages, debug messages, error messages, and perhaps even critical failures.

Some of these messages can just be stashed away until they’re needed. Others are symptoms of a problem and should be surfaced to someone who can take action. These are different consumers of the events. You may have support engineers as one set of consumers and developers as another set, and who knows? Maybe the COO would like a dashboard to monitor the entire operation. These consumers all have different visibility and information needs from events produced from the same event sources. Some consumers need a broader set of events, and others a narrower but deeper set. This is where separating producers and consumers matters.

Send Logs to S3

To support the producer–consumer pattern, we want to get the logs into an easily accessible storage location. S3 is a great way to store log data for several reasons. It can be accessed using several tools, including Loggly. S3 data can be queried directly using Athena. The storage service has a plethora of storage options, including security, retention policies, storage tiers, and replication, and it’s dirt cheap to use.
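A minimal sketch, assuming Python and boto3, of batching log records into newline-delimited JSON and writing them under a date-partitioned key (the bucket layout and helper names are illustrative; date partitions also help keep Athena queries cheap):

```python
import datetime
import json

def log_key(prefix, now):
    # Date-partitioned key layout (year/month/day) lets Athena
    # prune partitions instead of scanning everything.
    return f"{prefix}/{now:%Y/%m/%d}/{now:%H%M%S}.json"

def to_ndjson(records):
    # Newline-delimited JSON is a format Athena can query directly.
    return "\n".join(json.dumps(r) for r in records)

def upload_logs(bucket, prefix, records):
    import boto3  # imported here so the helpers above work without the SDK
    s3 = boto3.client("s3")
    now = datetime.datetime.now(datetime.timezone.utc)
    s3.put_object(
        Bucket=bucket,
        Key=log_key(prefix, now),
        Body=to_ndjson(records).encode("utf-8"),
    )
```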

AWS provides a great sample for sending logs—including X-Ray traces—to S3. Earlier, we noted X-Ray traces will link together logs from several calls in an orchestrated set of Lambda services, so you'll probably want to bring those along to S3 as well. You can find the source code for the sample in the developer guide.


Finally, make sure your logs are in a consumable form. You can use Loggly to monitor all sorts of applications, including AWS Lambda. It’s a great tool to try for free if you’re looking for a way to get all your logs into one place.



This post was written by Phil Vuollet. Phil leads software engineers on the path to high levels of productivity. He writes about topics relevant to technology and business, occasionally gives talks on the same topics, and is a family man who enjoys playing soccer and board games with his children.