How to Start Monitoring Kubernetes

Kubernetes adoption has grown rapidly in recent years. Many companies adopt Kubernetes because its features are designed to help applications become resilient and fault tolerant. On top of this, you can configure autoscaling and self-healing for Kubernetes and your applications with a minimal set of configurations.

However, many of the applications running in Kubernetes are complex, so troubleshooting can be challenging. These challenges are directly proportional to the complexity and size of the applications. Therefore, logs become your eyes when something goes wrong in a live environment. Luckily, Kubernetes offers different types of logs you can enable with a few clicks or flags. And if these logs aren’t enough, there are other ways you can get more logs from your applications.

But how and where do you start? Well, today’s post sheds some light on how you can get started with monitoring in Kubernetes.

Types of Metrics and Logs in Kubernetes

Kubernetes comes with different types of built-in logs. Some logs need to be enabled, and others are simply the logs the applications send to stdout and stderr. You can get logs from the control plane, worker nodes, applications, and even from every call to the Kubernetes API for auditing purposes. How you enable and find these logs depends on where Kubernetes is running or how you created the cluster. For instance, in a cloud provider where you don’t have access to the control plane, you can enable logs with a few clicks.

Typically, you can find all the logs we’ve mentioned at the following locations:

  • /var/log/containers on the host for all the applications running in the cluster
  • /var/log/dmesg, /var/log/secure, and /var/log/messages on every node
  • /var/log/journal for everything the control plane components emit
  • /var/log/audit.log, or any other custom location you define, for audit logs

Always check with your cloud provider or the tool you’re using to install Kubernetes, such as kubeadm, to confirm the exact locations where you can find all the available logs.
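To make the first location concrete: on standard kubelet setups, the files in /var/log/containers are named after the pod, namespace, and container they belong to. Here's a hedged Python sketch of a parser for that naming convention; the exact layout can vary across container runtimes and Kubernetes versions.

```python
import re
from typing import Optional

# Hedged assumption: kubelet symlinks container logs into /var/log/containers
# using the convention <pod>_<namespace>_<container>-<container-id>.log,
# where the container ID is 64 hex characters.
FILENAME_RE = re.compile(
    r"^(?P<pod>[^_]+)_"
    r"(?P<namespace>[^_]+)_"
    r"(?P<container>.+)-(?P<container_id>[0-9a-f]{64})\.log$"
)

def parse_container_log_name(filename: str) -> Optional[dict]:
    """Map a /var/log/containers filename back to its pod, namespace, and container."""
    match = FILENAME_RE.match(filename)
    return match.groupdict() if match else None
```

Log collectors typically rely on exactly this mapping to attach pod metadata to each log line before shipping it off the node.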

Enable Logging in the Cluster

To get started with logging and monitoring Kubernetes, you must ensure your cluster is emitting logs from the control plane. It doesn't matter where your cluster is running or how you installed it: control plane logs are available, and it's simply a matter of finding out how to enable them. In AWS, for instance, when you create a cluster using EKS (or even after creating it), you simply enable all the types of logs we mentioned before, and that's it. If your cluster is running on-premises and you can configure the cluster's components, the story changes slightly, but it's not a state secret.
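For EKS specifically, control plane logging is toggled through an API call. Below is a hedged Python sketch of the payload shape boto3's `update_cluster_config` expects; the cluster name is a placeholder, and the commented-out call requires real AWS credentials.

```python
# The five control plane log types EKS can send to CloudWatch Logs.
CONTROL_PLANE_LOG_TYPES = [
    "api", "audit", "authenticator", "controllerManager", "scheduler",
]

def eks_logging_config(enabled: bool = True) -> dict:
    """Build the `logging` argument for boto3's eks.update_cluster_config."""
    return {
        "clusterLogging": [
            {"types": CONTROL_PLANE_LOG_TYPES, "enabled": enabled}
        ]
    }

# Hypothetical usage (requires AWS credentials; cluster name is an example):
# import boto3
# eks = boto3.client("eks")
# eks.update_cluster_config(name="my-cluster", logging=eks_logging_config())
```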

For instance, a typical cluster configuration uses static pods to launch the cluster components (both in the control plane and on worker nodes). In the documentation, you can find all the available log-related flags for each component in the cluster, like the kube-apiserver.
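As an illustration, on kubeadm-style installs the audit logging flags live in the kube-apiserver static pod manifest. The excerpt below is a hedged sketch: the paths and retention values are examples, and the available flags vary by Kubernetes version.

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm clusters).
spec:
  containers:
  - command:
    - kube-apiserver
    - --v=2                                           # component log verbosity
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit.log
    - --audit-log-maxage=30                           # days of audit logs to keep
```

Keep in mind the audit log path must also be reachable from the host (for instance, through a hostPath volume mount) if you want the file to survive on the node.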

Lastly, it’s a good idea to familiarize yourself with the log format Kubernetes uses for all its logs.
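That format is the klog text format emitted by Kubernetes components, with a header carrying severity, date, time, thread ID, and source location. Here's a minimal, hedged Python sketch of a parser for it:

```python
import re
from typing import Optional

# klog header layout: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] message
# e.g. "I0410 12:00:00.123456      1 controller.go:123] starting controller"
KLOG_RE = re.compile(
    r"^(?P<severity>[IWEF])"              # I=info, W=warning, E=error, F=fatal
    r"(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
    r"\s+(?P<thread>\d+) "
    r"(?P<file>\S+):(?P<line>\d+)\] "
    r"(?P<message>.*)$"
)

def parse_klog_line(line: str) -> Optional[dict]:
    """Return the parsed fields of a klog line, or None if it doesn't match."""
    match = KLOG_RE.match(line)
    return match.groupdict() if match else None
```

Knowing this layout pays off later, when you configure a log collector to split severity and source location into searchable fields.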

Instrument Your Applications

Once you’ve enabled all the logs the Kubernetes cluster itself can produce, there’s another log source you need to turn on: your applications. You need a way to see and understand what’s happening when your applications run in a live environment with real users. Unfortunately, application logs aren’t something you can switch on with a flag. To get them, you need to instrument your applications.

Instrumenting your application means adding the ability to emit logs for all the events happening when your application is running. For instance, you could emit a log saying a connection to the database was successful, but it took one minute to return some data. Or you could emit an error log when something goes wrong with your application and include some context like the input from a user to make troubleshooting easier.
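The slow-query and error scenarios above can be sketched with Python's standard logging module. This is a hedged illustration, not a prescribed setup: the `JsonFormatter` class and the context field names (`user_input`, `duration_ms`) are hypothetical names chosen for the example.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render log records as JSON, carrying any context fields along."""

    CONTEXT_FIELDS = ("user_input", "duration_ms")  # example context keys

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Fields passed via extra={} at the call site land on the record.
        for key in self.CONTEXT_FIELDS:
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A connection succeeded but took a minute; the duration travels with the log.
logger.warning("database query succeeded slowly", extra={"duration_ms": 60000})
```

Attaching context like this is what turns a vague "something failed" entry into a log you can actually troubleshoot from.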

The good news is instrumenting your applications to emit logs isn’t new. All programming languages have a way to emit logs. Moreover, you can use some libraries and frameworks to enable default logs within your applications. For instance, a good framework for instrumenting your applications is OpenTelemetry. It’s easy to get started, and it supports several languages, like Go, .NET, Java, Python, and many more.

Once you have instrumentation in place, you must ensure you’re sending the application logs to stderr or stdout to collect all these logs more easily.

Configure Logging Collection With or Without an Agent

When you look at the documentation from Kubernetes, you’ll find different patterns for collecting logs in a cluster. For instance, you can have a sidecar container collecting logs from an application container and sending them to a centralized location. However, to get started quickly, we’d recommend you take one of two approaches.

The first one is to configure a logging agent in charge of processing all types of logs in the cluster and sending them to a centralized logging solution like SolarWinds® Loggly®. We have a guide you can follow with all the steps you need to take to collect logs using Fluentd running as a DaemonSet (agent) in Kubernetes.

The second approach is to use an open-source project called rKubelog, which can collect logs without deploying a DaemonSet or a sidecar container. This approach is an excellent option when you want to get started quickly. Additionally, it’s a perfect approach to collect logs in nodeless clusters like AWS Fargate. rKubelog implements the controller pattern in Kubernetes, which means what you’re installing is a custom controller in charge of streaming all the logs from the cluster to Loggly without you having to configure anything else. You can get more information about installing rKubelog and configuring it with Loggly on our documentation site or the GitHub project site.

Takeaways

As you can see, getting started with monitoring and logging in Kubernetes isn’t too difficult. You just need a solid understanding of how logging works. Additionally, you can get started quickly when you use the proper tools for the job. For instance, rKubelog is a good approach when you don’t want to mess with multiple complex configurations. However, if you’d like to have more control, you can configure a DaemonSet to deploy Fluentd and stream the logs to a centralized location.

You must also spend time instrumenting your applications. Kubernetes has a broad offering regarding the logs available in the cluster. Nonetheless, these aren’t enough to understand what’s going on inside your applications. When you use projects like OpenTelemetry that understand the importance of logs in your application, your job of emitting logs gets easier.

Lastly, this is a continuous improvement process. You’re always going to find new log sources or new ways to emit logs from your applications. Keep in mind it’s an ongoing effort, not a race with a finish line.

This post was written by Christian Meléndez. Christian is a technologist who started as a software developer and has more recently become a cloud architect focused on implementing continuous delivery pipelines with applications in several flavors, including .NET, Node.js, and Java, often using Docker containers.