How to Implement Logging in Docker with a Sidecar Approach

 


As a consultant building highly automated systems for clients using Docker, I have seen how important it is to be able to get application logs out of your containers and into a place where the developers can view and search through them easily. Sending your logs to Loggly accomplishes this and gives you some very nice features such as an easy interface to search logs, create alerts, and build dashboards off of the data.

Centralized Log Management Is a Must-Have in the World of Docker

Logging is very important because it gives a complete view into your application environment. If you’re beyond the days of having only a few machines, most of your machines are dynamic. The cluster controls where your application runs and takes care of healing itself when a machine goes down and a new one replaces it. This makes it impossible to solve problems manually by ssh’ing into a machine to look at logs: You first have to figure out where exactly your app is running and get the logs from the potentially numerous machines they are on to get the full picture. In a container world, centralized log management is a must.

Since Loggly is the central place into which all the logs flow, you can find items for a certain application in an environment no matter where it is running. In most cases, application developers don’t care where an application ran; they just need to know what happened.

Solution: Pair Application and Logging Containers

My approach is to pair each application container running on a CoreOS cluster with a Loggly logging container. This makes it very easy to see which container the logs came from, specify which files need to be shipped, set tags for each application/container, and start and stop the container and its logger in unison.
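On CoreOS, one way to express this pairing is with fleet unit files, which is consistent with the ".service" names in the process output below. The following is only a sketch under that assumption: the unit names, the BindsTo/MachineOf wiring, and the cleanup commands are illustrative, and the Loggly environment variables described later in the article are omitted here for brevity.

```ini
# app_x_logger.service -- hypothetical fleet/systemd unit for the logging sidekick.
[Unit]
Description=Loggly logging container paired with app_x
# Start and stop in unison with the application unit.
BindsTo=app_x.service
After=app_x.service

[Service]
# Remove any stale container from a previous run (the leading "-" ignores errors).
ExecStartPre=-/usr/bin/docker rm -f app_x_logger
ExecStart=/usr/bin/docker run --name app_x_logger \
    --volumes-from app_x \
    garland/loggly
ExecStop=/usr/bin/docker stop app_x_logger

[X-Fleet]
# Schedule on the same machine as the application container.
MachineOf=app_x.service
```

With a unit like this, starting or stopping app_x.service drags the logger along with it, which is what makes the pairing feel like a single deployable unit.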

The following is the Docker process output for one application in a container:

# docker ps
CONTAINER ID    IMAGE                    COMMAND       NAMES
20245e614d36    app_x                    "./run.sh"    app_x.service
66d3faccb0fa    garland/loggly:latest    "./run.sh"    app_x_logger.service

When we launch the “app_x” container, we also launch an “app_x_logger” container that pairs up with it.  

The best practice on a CoreOS cluster is to avoid installing anything natively on the OS, because most of it is not persisted across upgrades. I like this idea because it keeps the OS clean. To support file-based logging, I created a Docker container that reads log files through Docker's shared volumes. You can find and use the container (garland/loggly) on Docker Hub.

How to Use the Loggly Docker Container

First, we start the application container, named "app_x". We expose one or more locations inside the container where logs are written. We will later use the Loggly logging container to grab any files that show up in these locations and send them to Loggly.

docker run \
  -d \
  --name app_x \
  -v /opt/app/logs \
  app_x

Notice the "-v /opt/app/logs" parameter. This tells Docker to expose this directory as a data volume so that another container can access it.

Then we start up the Loggly logging container to use the “Data Volumes” from the app_x container.

docker run -d \
  --volumes-from app_x \
  --env LOGGLY_TOKEN=<TOKEN> \
  --env LOGGLY_ACCOUNT=<ACCOUNT_NAME> \
  --env USERNAME=<USERNAME> \
  --env PASSWORD=<PASSWORD> \
  --env DIRECTORIES_TO_MONITOR="/opt/tomcat/logs,/var/log" \
  --env TAGS="app_x" \
  garland/loggly

Notice the "--volumes-from app_x" option. It takes all the data volumes exposed by the container named "app_x" and binds them into this container, so files that "app_x" writes into those directories are also accessible here. (The Docker documentation on data volumes describes this mechanism in detail.)
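If you want to see the data-volume mechanism in isolation, here is a minimal sketch using two throwaway busybox containers. It is not part of the Loggly setup itself; it assumes a local Docker daemon and skips itself quietly if "docker" is not available.

```shell
#!/usr/bin/env bash
# Minimal demonstration of --volumes-from with two throwaway containers.
# Skips quietly when no Docker daemon is available.
demo_volumes_from() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker not available; skipping"
    return 0
  fi
  # Remove any leftover container from a previous run.
  docker rm -f writer >/dev/null 2>&1 || true

  # "writer" exposes /shared as a data volume and writes a file into it.
  docker run -d --name writer -v /shared busybox \
    sh -c 'echo "hello from writer" > /shared/example.log; sleep 300' >/dev/null

  # A second container mounts all of writer's volumes and reads the same file,
  # just as the Loggly container reads the app container's log directories.
  docker run --rm --volumes-from writer busybox cat /shared/example.log

  docker rm -f writer >/dev/null
}

demo_volumes_from
```

The second container never declared any volume of its own; everything it sees under /shared comes from "writer", which is exactly the relationship between app_x and its logging sidekick.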

Once started, the Loggly logging container essentially runs Loggly's configure-file-monitoring.sh shell script for the directories you passed in. It's best to reuse as many of Loggly's default setup scripts as possible so you don't have to maintain your own: if Loggly updates the scripts to fix bugs or add functionality, this container picks up the updates without requiring you to do anything. A simple bash loop was added so that every file in the directories passed into the container ("DIRECTORIES_TO_MONITOR") is registered with Loggly.
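The loop itself is not shown in the article, but its effect can be sketched in plain bash. The list_monitored_files function below is a hypothetical stand-in, not Loggly's actual code; it only illustrates how a comma-separated DIRECTORIES_TO_MONITOR value expands into the individual files that would be handed to the setup script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the sidecar's monitoring loop: split the
# comma-separated DIRECTORIES_TO_MONITOR value and enumerate the files
# that would each be registered with Loggly's setup script.
list_monitored_files() {
  local dirs="$1" dir file
  # Split the value on commas into an array of directories.
  IFS=',' read -ra dir_array <<< "$dirs"
  for dir in "${dir_array[@]}"; do
    # Enumerate the regular files currently present in each directory.
    for file in "$dir"/*; do
      [ -f "$file" ] && echo "$file"
    done
  done
}

# Example with a fake log layout:
tmp=$(mktemp -d)
mkdir -p "$tmp/tomcat/logs" "$tmp/applog"
touch "$tmp/tomcat/logs/catalina.out" "$tmp/applog/app.log"
list_monitored_files "$tmp/tomcat/logs,$tmp/applog"
```

Running the example prints the path of each file found under the two directories, one per line, which is the list the real container would feed into configure-file-monitoring.sh.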

Analyzing Your Docker Logs in Loggly

I recommend using Loggly tags with all of your log events to make it very easy to find the logs of interest. Just prefix all of your searches with:

tag:app_x*

This uses the tag to pull up the logs that came from the "app_x" container. If you pair each application this way, each application has its own tag, making it very easy to search for specific items.

Conclusion

The hardest part of this setup is building the initial pairing of an application container with a logging container, a pattern also referred to as a sidecar or sidekick. (See this article for other examples.) Once that is done, you can simply reuse it over and over again, and at that point you can pretty much forget the logging container is there. This approach to logging with Docker and Loggly is a DevOps dream: You build something once that lets you troubleshoot any application from its logs, and it can be reused on every application without any maintenance.


2 comments

  • Justin

    1 year ago

    To keep the example consistent, should the DIRECTORIES_TO_MONITOR be the volume that was mounted from the APP_X, /opt/app/logs?

    • Karen Sowa

      11 months ago

      Justin,
      Here’s the reply from Garland:
      Yes. All APP_X has to do is expose one or more volumes like:

      -v /opt/tomcat/logs
      -v /var/log

Then when you reference it by name in the garland/loggly container with "--volumes-from app_x", it will grab all of those volumes and mount them into the "garland/loggly" container.
