
How to send Kubernetes logs to Loggly

By Garland Kan 06 Apr 2017

Kubernetes is an open-source cluster system for running containers. Without going into much detail: it runs your containers across a set of nodes and reschedules them if a node becomes unhealthy for any reason. It has many more features than that; you can visit the website to see them all. In short, it is a great system to run your containers on. Kubernetes ships with a predefined in-cluster logging setup that sends the logs from all of the pods you are running to Elasticsearch. This blog will show you how to use the same mechanism to send the logs over to Loggly instead, with all of the pod/namespace/container name/k8s host tags intact. We will take the existing work and make a few modifications. For experienced users who wish to skip right to the good stuff, here are the pod files to make that happen.

Why aggregate Kubernetes logs?

There are times when you want an easy way to get the logs from all of the applications running on your Kubernetes cluster. The “kubectl” command-line tool can tail the logs for you, but that is cumbersome and can create a security issue: it requires direct access to the cluster. It also gives you no filtering or searching options. Getting your application logs into Loggly gives your team an easy web-based interface to see how your applications are behaving on the cluster, which allows you to lock down direct cluster access to those who need it. It also offers easier searching, alerting, and more. With Loggly search, it’s easy to dig into details about an application; for example, the screenshot below shows all the “GET” requests from an HTTP server application. With our Fluentd plugin, we can include all of the relevant Kubernetes tags so we can slice and dice the data in many ways.

[Screenshot: Loggly search filtered to “GET” requests from an HTTP server application]

Here’s what to do

In a nutshell, here’s what we’re going to do: replace the stock Fluentd image, which is configured to send logs to a local Elasticsearch cluster running on Kubernetes, with our own Fluentd image configured to send them over to Loggly. This is the stock Fluentd Kubernetes pod configuration: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml

Instead, we are going to run this Fluentd Kubernetes pod: https://github.com/sekka1/kubernetes-fluentd-loggly/blob/master/k8s-fluentd-daemonset.yaml

The difference is that we are using another container: one with the Loggly Fluentd plugin installed (details of what is installed below). We swapped out the configuration so that logs are sent to Loggly instead of Elasticsearch. To send your own Kubernetes logs, the only thing you have to do is replace the Loggly token with your own customer token.
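For orientation, the DaemonSet in that file has roughly the following shape. This is an illustrative sketch, not a copy of the repo’s manifest; the image name, env var layout, and API version here are assumptions and will differ from the real file:

```yaml
# Illustrative sketch only -- image name, env var name, and apiVersion are
# assumptions, not taken from the linked repo.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-loggly
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - name: fluentd-loggly
        image: fluentd-loggly:latest      # placeholder image name
        env:
        - name: LOGGLY_URL                # assumed env var carrying the token
          value: "https://logs-01.loggly.com/bulk/YOUR_LOGGLY_TOKEN_HERE/tag/k8s"
        volumeMounts:
        - name: varlog                    # Fluentd tails container logs from the host
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because it is a DaemonSet, Kubernetes schedules one copy of this pod on every node, which is what lets it pick up logs cluster-wide.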

In the “k8s-fluentd-daemonset.yaml” file, look for “YOUR_LOGGLY_TOKEN_HERE” and replace it with your token. If you are managing a Kubernetes cluster, you should already be familiar with the “kubectl” CLI utility, the tool that allows you to control the cluster from the command line. The following command will create the Fluentd Loggly container in your Kubernetes cluster.
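If you prefer to script the replacement, a one-liner like the sketch below works; it assumes you have a local copy of the file, and the token value shown is a made-up example, not a real Loggly token:

```shell
# Swap the placeholder for your Loggly customer token (the token below is fake).
TOKEN="0000aaaa-1111-2222-3333-bbbbccccdddd"
sed -i "s/YOUR_LOGGLY_TOKEN_HERE/${TOKEN}/" k8s-fluentd-daemonset.yaml
```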

kubectl --namespace kube-system create -f ./kubernetes-fluentd-loggly/k8s-fluentd-daemonset.yaml

That is it. This will run on every node in your Kubernetes cluster and start sending logs over to your Loggly account. If you go into your dashboard, you can search for these logs.

[Screenshot: a zookeeper pod log event in Loggly showing its Kubernetes tags]

This is a sample log for a zookeeper pod. The important thing to note here is that it has all of the proper tags associated with it:

  • Labels – apps
  • Host
  • Pod name
  • Container name
  • Pod ID
  • Namespace
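For reference, an event enriched this way looks roughly like the following. This is a hand-written illustration: the field names follow common Fluentd Kubernetes metadata conventions, and every value is invented:

```json
{
  "log": "2017-04-06 12:00:00,000 [myid:1] - INFO  ...",
  "kubernetes": {
    "namespace_name": "default",
    "pod_name": "zookeeper-0",
    "pod_id": "1a2b3c4d-aaaa-bbbb-cccc-000000000000",
    "container_name": "zookeeper",
    "host": "ip-10-0-0-1.us-west-2.compute.internal",
    "labels": {
      "app": "zookeeper"
    }
  }
}
```

Because each tag is a structured field, Loggly can filter and facet on any of them (for example, all events from one namespace, or one pod).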

Step-by-step details

For those of you who want to know the details, the following section will go through what was changed.

We installed the Fluentd Loggly plugin: https://github.com/sekka1/kubernetes-fluentd-loggly/blob/master/build.sh#L37

Then we used the plugin to send the logs to Loggly: https://github.com/sekka1/kubernetes-fluentd-loggly/blob/master/td-agent.conf#L290

You will notice that the Loggly URL is parameterized. We pass the URL in when the Kubernetes pod starts up, here: https://github.com/sekka1/kubernetes-fluentd-loggly/blob/master/k8s-fluentd-daemonset.yaml#L27

And this is the line you have to change with your own token information (highlighted when you click the preceding link).
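Put together, the output stanza in td-agent.conf looks roughly like this. This is a sketch based on the fluent-plugin-loggly documentation, not a copy of the repo’s file; the match pattern is illustrative:

```
<match **>
  type loggly
  loggly_url https://logs-01.loggly.com/inputs/YOUR_LOGGLY_TOKEN_HERE
</match>
```

The `loggly_url` value is what gets parameterized; the startup path fills in your token so the plugin posts each event to your Loggly account.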

Summary

With a few changes, we were able to swap out the entire backend of the Kubernetes cluster logging pipeline and send the logs over to Loggly instead. This shows that Kubernetes has done a very good job of building a modular system.

Garland Kan

Garland Kan helps customers run large-scale, reliable applications on Amazon Web Services (AWS) by working with engineers and architects to design, build, optimize, and operate infrastructure in the cloud. His specialties are Docker, Kubernetes, systems automation, security, and migrating workloads to container-based infrastructure. In addition to helping customers build and deploy applications, he writes various blogs to help the community use Docker-based infrastructures.
