
No B*llsh*t Benchmarking Case Study Using Elasticsearch

Benchmarking Indexing and Search Performance of Elasticsearch

Every development team should benchmark its software to understand how it performs and what its performance envelope looks like — that is, where things start to break down. Scalability, reliability, and good performance are P1 features for virtually every software application because few users today have the patience to tolerate a poor experience.

In this white paper, we’ll walk you through our benchmarking process and show you how to do the same types of analyses using Loggly. Rather than try to explain how we benchmark our entire system, we’ll focus on a single component: Elasticsearch (ES). Indexing and search performance are critically important to us. In many ways, the performance of ES guides our overall architecture because once we know how hard we can push ES, we can then design the rest of the system to stay within those boundaries.

If you are a developer looking to improve your benchmarking tools, a Loggly user looking for new ways to use Loggly, or maybe just someone who is interested in Elasticsearch, we think you’ll find some useful and interesting information here. Our primary focus is on a general approach to benchmarking, but we’ll also dive deep into some specific techniques and ES failure modes.

We’ll start with the simplest possible benchmark: indexing performance. To make this a bit more fun, we’re going to turn on the time machine and look at how ES performance has changed since version 0.90.13. From there, we’ll build up to more complex test cases that are closer to what we actually run in production. Along the way, we’ll identify a few of the traps that you’re likely to fall into as you go from “Hey, we got ES up and running!” to “Our business really depends on ES!” As we do this, we’ll also walk you through how the test bed evolves to deal with these changes.
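To make the idea of a “simplest possible benchmark” concrete, here is a minimal sketch of an indexing-throughput measurement using the official Python Elasticsearch client. It is not the test bed described in the white paper; the cluster URL, index name, document shape, and batch size are illustrative assumptions you would replace with your own.

```python
# Minimal indexing-throughput sketch (illustrative, not the Loggly test bed).
# Assumes a reachable Elasticsearch node and the official `elasticsearch`
# Python client; the URL, index name, and document shape are made up here.
import time

from elasticsearch import Elasticsearch, helpers

ES_URL = "http://localhost:9200"   # assumed single test node
INDEX = "bench-logs"               # hypothetical benchmark index
TOTAL_DOCS = 100_000
BATCH_SIZE = 5_000

es = Elasticsearch(ES_URL)


def doc_stream(n):
    """Yield synthetic log-like documents for bulk indexing."""
    for i in range(n):
        yield {
            "_index": INDEX,
            "_source": {
                "message": f"synthetic log line {i}",
                "level": "ERROR" if i % 10 == 0 else "INFO",
                "host": f"app-{i % 20:02d}",
            },
        }


start = time.perf_counter()
indexed = 0
for ok, _ in helpers.streaming_bulk(es, doc_stream(TOTAL_DOCS), chunk_size=BATCH_SIZE):
    if ok:
        indexed += 1
# Refresh so the timed run includes making the documents searchable.
es.indices.refresh(index=INDEX)
elapsed = time.perf_counter() - start

print(f"indexed {indexed} docs in {elapsed:.1f}s ({indexed / elapsed:,.0f} docs/sec)")
```

A real test bed would go further: controlling refresh interval, replica count, and document size, and running long enough to measure steady-state throughput rather than warm-up behavior.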

Complete the form on this page to download our 97-page Benchmarking Case Study using Elasticsearch.

Additional Resources: 
What we learned, Configuring Elasticsearch for Performance