The ElasticSearch meetup last night was the first time we’ve had a chance to talk in detail about how we use ElasticSearch in our brand spanking new product. The agenda had me talking for about 5 minutes, then doing Q&A for another 5, but I ended up in the limelight for about 40 minutes—due entirely to a seemingly never-ending stream of questions. People love big data, and tackling the big challenges it generates – who knew? Thanks to ElasticSearch for allowing me to bring this to life.
The beauty of meetups is that you never really know who is going to be there, and what you’re going to get to talk about. In my initial Q&A session, and in one-on-one chats at the end of the evening, it was fascinating to see how many people have run across the same issues that we did in the last few months. Everything from split-brain masters to use of replicas to dynamic mapping was asked and (hopefully) answered.
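For those who asked about the split-brain issue specifically: the usual mitigation in this generation of ElasticSearch is to require a quorum of master-eligible nodes before a master can be elected. The fragment below is an illustrative `elasticsearch.yml` sketch, not our actual production config — the node counts and replica settings are assumptions for the example.

```yaml
# elasticsearch.yml — illustrative settings only, not Loggly's production config.

# Avoid split-brain: require a quorum of master-eligible nodes
# before electing a master. Rule of thumb: (master_eligible_nodes / 2) + 1,
# so 2 for a hypothetical 3-node cluster.
discovery.zen.minimum_master_nodes: 2

# One replica per shard gives you a copy to search against
# (and survive a node loss) without doubling storage twice over.
index.number_of_replicas: 1
```

The quorum setting is the key one: with it in place, a minority partition refuses to elect its own master instead of forking the cluster state.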
ElasticSearch is a very complex system, but it is very easy to control thanks to the awesomely clean and simple API. Like all distributed systems, though, things pop up that bring to mind those medieval maps marked “Here Be Dragons”. As I mentioned last night, we’ve stumbled into a couple of areas like that, but we had awesome support from the team at ES and have found ways to avoid those scary places. With that experience comes knowledge, and sharing the insight in person in a one-to-100+ setting is a great way to keep the learning cycle progressing.
I rarely believe people when they say “we’ve had no problems” when it comes to running distributed systems, because if you are doing big, new, scary things there will always be things that can trip you up. The challenge is to find ways to identify and avoid those trip-wires… which, not coincidentally, sounds a lot like what Loggly does with log data.
Back when we started Loggly, ElasticSearch was but a glimmer in Shay’s eye, and SolrCloud was a branch off the Solr trunk. We had no choice but to implement a TON of custom code to deal with streaming data, sort-of-real-time indexing, index management, distributed search, and a smorgasbord of management tools. Don’t get me wrong – this was a huge amount of fun, and at the time it was a competitive advantage.
But the market moves fast… these days we get almost all of this for free, and can focus on USING the data rather than managing it or trying to recreate round spherical rolling objects, which is a perpetual challenge with engineers.
At Loggly, our goal is to make complex data analysis problems simple. The marketing team would have me say we do it for “cloud-centric companies who have a choice of log management solutions”, but it’s our day-in-day-out focus on log management that makes your job of using that data easier.
That focus extends to our UI, which is designed to let our users do “advanced search” without even knowing that’s what they’re doing. We’ve made log management our business, and the final measure of success is helping you understand your data. The best way to do that is to make it as easy as possible for you to explore, visualize and consume it quickly.
I’ve dedicated the past decade and a half to making search faster, easier and more powerful… so people can do big things with their data. That’s my story; what can I say? I’m addicted to search.