Docker Datacenter (DDC) and What It Means to DevOps and the Industry
Or: Why Cloud Providers Should Be Nervous
Docker’s latest offering, Docker Datacenter (DDC), was announced today. It is essentially a portable, virtual datacenter designed to give businesses large and small control over creating, managing, and shipping containers. Docker calls this Containers-as-a-Service (CaaS).
DDC is composed of open-source and proprietary commercial components: Docker Universal Control Plane (which is also generally available today), Docker Trusted Registry, and embedded support for Docker Engine. It promises centralized, more efficient management and enterprise-grade support for customers who package the components of their application stacks into Docker containers. This concept, often referred to as microservices, has a number of advantages: the containers serve as independent building blocks that can easily be reused, saving both time and money.
Also, because these containers can run on any operating system that supports Docker, they remove the headaches (and costs) caused by differences, small and large, between Linux distributions, OS versions, and patch levels. For IT organizations and developers, that is huge.
Cloud Migration Made Easy… finally!
Now Docker Datacenter takes this a step further, and this might actually be the biggest news in today’s release. Think of DDC as a virtual datacenter on your hard drive, containing your entire application stack. Docker describes it as an “on-premise” solution, which might be a bit confusing, because you could easily (i.e., with no big changes to your applications) move it from your local server to a hosted cloud like AWS, Google Cloud, potentially Microsoft Azure, or others, as long as they support running Docker containers. As a result, customers have an easier way to switch between cloud hosting providers or to move from a public to a private or hybrid cloud. Maybe Docker sees “on-premise” as a first step; maybe they just don’t want to step on other vendors’ toes, at least for now.
Cloud migration and supporting multiple clouds are typically extremely costly and painful, and the business model of many cloud providers is based on locking customers into their specific technologies. DDC removes this vendor lock-in and opens the floor to more competition. Customers, rejoice; cloud providers, be nervous!
Hold on, not so fast! There’s a catch here. DDC is not entirely open source; it is a commercial product, after all, and it introduces a new vendor lock-in of its own: to Docker (the company). However, that’s not necessarily a bad thing. Docker wants, and needs, to earn money with DDC. Beyond that, Docker (the technology) removes a lot of painful OS and OS-vendor dependencies, and now Docker (the company) is setting sail to do the same for cloud provider lock-in. For many customers the trade-off will still be a very good deal, and for cloud providers it is still a solid reason to be nervous. At the end of the day, offering a cloud-agnostic solution like DDC would be detrimental to many cloud providers’ business models, and customers know this. Docker brings a lot more credibility to the table on this front.
Docker Datacenter and Logging
By the way: what does DDC mean for log management and Loggly customers? On a general level, DDC adds more components that generate log data and need to be monitored. Docker containers and microservices bring their own logging challenges, which we have written about here and here.
Docker Datacenter and Loggly
As for Loggly customers, Loggly is a Docker Ecosystem Technology Partner (ETP), and our multiple integrations with Docker containers work just fine with DDC. Stay tuned for more posts specifically on how to log from DDC.
If you want to know more about how Loggly integrates with Docker, see the reading list below:
- Using syslog and the Loggly Docker Container: The Loggly Docker Container uses syslog to listen for syslog events from your application containers and then forwards them to Loggly. You can use it to implement a sidecar approach, where each application container is paired with a logging container.
- Docker Logging Driver: This approach uses Docker’s logging driver to send stdout/stderr to the host’s syslog daemon, which then forwards to Loggly.
- Logspout: You can also use the logspout container to send stdout/stderr to Loggly.
- Docker Mounted Volumes for file or Unix socket logging: This container sends logs to Loggly from a list of directories that you pass in; every file in those directories is monitored and forwarded as syslog events.
- Logging directly via the application: The application running inside the container handles its own logging using a logging framework. You can then send these logs to Loggly using the HTTP/S libraries in the Loggly Libraries Catalog.
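To give a concrete sense of the logging-driver option above, here is a minimal sketch. It assumes a Docker host whose local syslog daemon (e.g., rsyslog) has already been configured to forward events to Loggly; the image name and tag value are illustrative placeholders.

```shell
# Route a container's stdout/stderr to the host's syslog daemon via
# Docker's built-in syslog logging driver.
# "my-app:latest" and the tag value are placeholders; the host's
# syslog daemon is assumed to forward its events on to Loggly.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  --log-opt tag=my-app \
  my-app:latest
```

Because the driver captures stdout/stderr at the daemon level, the application itself needs no Loggly-specific code.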
If you want to explore Docker methods in more depth, you might want to read this blog post discussing the pros and cons of different approaches.
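As a minimal illustration of the application-level option from the list above, any HTTP client can post an event directly to Loggly’s HTTP/S event endpoint. The sketch below uses curl; YOUR-CUSTOMER-TOKEN is a placeholder for your own Loggly customer token.

```shell
# Send a single plain-text event to Loggly over HTTPS.
# YOUR-CUSTOMER-TOKEN is a placeholder; replace it with your real token.
# The trailing "tag/docker/" segment tags the event for later searching.
curl -s -X POST \
  -H "Content-Type: text/plain" \
  -d "hello from inside a container" \
  "https://logs-01.loggly.com/inputs/YOUR-CUSTOMER-TOKEN/tag/docker/"
```

A logging framework running inside the application would typically wrap this same request and batch events instead of sending them one at a time.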