Everything you wanted to know about Kubernetes but were afraid to ask
Friday, January 30, 2015
In the previous weeks, Miles Ward, Google Cloud Platform’s Global Head of Solutions, kicked off the Kubernetes blog series with a post about the overarching concepts around containers, Docker, and Kubernetes, and Joe Beda, Senior Staff Engineer and Kubernetes co-founder, articulated the key components of a container cluster management tool based on Google’s ten years of experience running its entire business on containers. This week, Martin Buhr, Product Manager for the Kubernetes open source project, answers many of your burning questions about Kubernetes and our support for containers on Google Cloud Platform.
When we announced the Kubernetes open source project in June of 2014, we were thrilled with the large community of customers and partners it quickly created. Red Hat, VMware, CoreOS, and others are helping to grow and mature Kubernetes at a remarkable pace. There is also a growing community of users who not only rely on Kubernetes to manage their container clusters, but in many cases also contribute to the project itself.

I’ve been fortunate to be able to engage with many in our community, and we consistently hear many of the same questions:
- Given that Google already has its own mature, robust cluster management systems (which handle around two billion new containers a week), why did you create Kubernetes?
- How does Kubernetes relate to Docker? How does it differ from Docker Swarm?
- What ensures that Google is committed to the Kubernetes open source project over the long run?
- How does Kubernetes fit in with and augment your overarching strategy for Google Cloud Platform?
- What incentive does Google have to make Kubernetes great outside of Google Cloud Platform for deployment on premise or on other public clouds?
- What is the relationship between Kubernetes and Google Container Engine, now and in the future?
This post will answer these questions, and we’d love to field others we may have missed via the Kubernetes G+ page.
Why Kubernetes?
Given that Google already has its own mature, robust cluster management systems, many wonder why we created Kubernetes. There are actually two reasons for this.

First, there is the altruistic motive. We have enjoyed amazing benefits by moving to the model embodied by Kubernetes over the past ten years. It enabled us to dramatically scale developer productivity and the number of services we were able to offer without investing in a corresponding increase in operational overhead. It also gave us fantastic workload portability, enabling us to quickly “drain” applications from one resource pool and move to another. As with many other technologies and concepts that we’ve shared with the community over the years, we think Kubernetes will help make the world a better place and help others enjoy similar benefits. Other examples include Android, Chromium, and many of the technologies that underpin the rising popularity of Linux containers (including memcg, the Go programming language in which Docker is written, cgroups, and cadvisor).
Second, there is the practical reason grounded in our desire to make Google Cloud Platform the best platform on the web for customers to build and host their applications. As Urs Hölzle, Senior Vice President for Technical Infrastructure at Google noted last March, we’re unifying Google’s core infrastructure and Google Cloud Platform and see a significant business opportunity for Google in Google Cloud Platform. By enabling customers to start using the same patterns and best practices Google has developed for its own container based workloads, we make it easy for customers to move those workloads around to where they make the most sense based on factors like latency, cost, and adjacent services. We think over time that our deep, comprehensive support for containers on Google Cloud Platform will create a gravity well in the market for container based apps and that a significant percentage of them will end up with us.
How does Kubernetes relate to Docker? How does it differ from Docker Swarm?
When referring to “Docker,” we’re specifically talking about using the Docker container image format and Docker Engine to run Docker images (as opposed to Docker Inc., the company that has popularized these concepts). These Docker containers are then managed by Kubernetes.

Imagine individual Docker containers as packing boxes. The boxes that need to stay together because they need to go to the same location or have an affinity to each other are loaded into shipping containers. In this analogy, the packing boxes are Docker containers, and the shipping containers are Kubernetes pods.
Ultimately, all these pods make up your application.
You don’t want this ship adrift on the stormy seas of the Internet. Kubernetes acts as ship captain – adeptly steering the ship along a smooth path, and ensuring that the applications under its supervision are effectively managed and stay healthy.
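To make the analogy concrete, a pod is simply declared as a single unit whose containers are scheduled together on the same host and share the pod’s network. A minimal sketch of a pod manifest might look like the following (the names and images here are illustrative, not taken from the post):

```yaml
# A hypothetical pod grouping two containers that need to travel together:
# a web server and a sidecar that handles its logs.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # name chosen for illustration
spec:
  containers:
  - name: web              # the main application "packing box"
    image: nginx
    ports:
    - containerPort: 80
  - name: log-collector    # a helper that shares the pod's fate and network
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
```

Because both containers live in one pod, Kubernetes always places them on the same machine, and they can reach each other over localhost.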
Once you move beyond working with a handful of containers, and especially when your application grows beyond one physical host, we strongly advise that you use Kubernetes (for reasons we’ve highlighted recently).
In terms of how Kubernetes differs from other container management systems out there, such as Swarm, Kubernetes is the third iteration of cluster managers that Google has developed. It incorporates the cumulative learnings of over a decade of experience in production container management. It embodies the cluster centric model, which we’ve found works best for developing, deploying, and managing container based applications. Swarm and similar systems embody the single node model and may work well for some use cases, but there are several critical architectural patterns missing that customers will ultimately need as they move to production use cases (these were highlighted in Joe’s post last week).
Is Google committed to Kubernetes?
Both customers and partners are asking variations of the following question: “Given that I’m considering betting the future of my project/app/business on the long term viability of Kubernetes, what assurance do I have that Google will not lose interest over the long term, causing the project to wither?”

First, as outlined above, we view Kubernetes as core to our cloud strategy, and we’re internally committed to making Google Cloud Platform a significant part of Google’s overall business. Our deep experience in running containerized workloads is a big competitive advantage for Google Cloud Platform, so it makes sense for us to continue to invest in making Kubernetes robust and mature. As an expression of this, we have some of our most experienced engineering talent working on the project, including Googlers with years of experience developing and refining our internal cluster management systems and processes.
Second, we’ve been very fortunate to have a vibrant, experienced community of contributors form around Kubernetes. Many of them have incorporated Kubernetes into their own products, resulting in a vested interest in the health and sustainability of Kubernetes. For example, Red Hat made Kubernetes an integral part of OpenShift version 3, and as of the time of this post, two of the top ten contributors are from the growing team Red Hat has working on Kubernetes. Thus, even if Google were to get taken out by a meteorite, a significant community of contributors would remain to carry it forward.
How does Kubernetes fit into Google’s cloud strategy?
As we mentioned, Google Cloud Platform is a key business for Google, and we are confident (based on ten years of experience using containers to run our business and the significant technical and operational depth we’ve acquired in doing so) that we can make Google Cloud Platform the best place on the web for containers. Kubernetes embodies the best practices and patterns based on this hard won experience for creating and running container based workloads.

We think that Kubernetes will help developers create better container based applications that require less operational overhead to run, thereby accelerating the trend toward container adoption. Given the inherent portability of container based applications managed by Kubernetes, every new one created is another candidate to run on Google Cloud Platform.
Our hope is that container based apps will be made even more awesome through the use of Kubernetes (regardless of where they reside), and our goal is to ensure that Kubernetes based apps will be exceptionally awesome on Google Cloud Platform. How much of the market moves to containers and how much of this load we’re able to attract to Google Cloud Platform remains to be seen, but we’ve placed our bets on wide-scale adoption.
Kubernetes on other clouds? On-premise?
For our strategy to be successful, we need Kubernetes to be awesome everywhere, even for customers who will run their apps on other clouds or in their own datacenters. Thus, our goal for Kubernetes is ubiquity. Wherever you run your container based app, our hope is that you do so using Kubernetes so that you can benefit from all the things Google has gotten right over the years (as well as the numerous lessons we’ve learned from the things we got wrong). Even if you never plan on moving beyond your own datacenters, or plan on sticking with your current cloud provider exclusively into the foreseeable future1, we would still love to talk to you about why Kubernetes makes sense as a foundational piece of your container strategy.

Kubernetes and Google Container Engine?
This brings us to Google Container Engine, our managed container hosting offering and the embodiment of Kubernetes on Google Cloud Platform. We want everyone to use Kubernetes based on its own merits and develop container based apps based on proven patterns battle tested at Google. In parallel, we’re making Google Cloud Platform a fantastic place to develop and run container based applications, giving customers the benefits of not only Google’s experience in operating and maintaining container clusters, but also of all the adjacent services on Google Cloud Platform. At present, Google Container Engine is simply hosted Kubernetes, but look for us to start introducing features and linkages to other Google Cloud Platform services to further enhance its utility.

We're Stoked!
It’s an exciting time to be an application developer! As you’ve seen above, Google is deeply committed to Kubernetes, and we and our ecosystem of contributors are working hard to make sure it’s the best tool for creating and managing container clusters regardless of where these clusters run. From our perspective, the first and best option is that you run your container based apps on Google Container Engine, second best is that you run them on Google Compute Engine using Kubernetes, and third best is that you run them someplace else using Kubernetes.

The thing that most excites me about Kubernetes is the frequency with which I see customers rolling up their sleeves and contributing to the project itself. While I’m very proud of what our extended team has created in Kubernetes, I think Joe Beda said it best in his most recent blog post:
While we have a lot of experience in this space, Google doesn't have all the answers. There are requirements and considerations that we don't see internally. With that in mind, please check out what we are building and get involved!
Try it out, file bug reports, ask for help or send a pull request (PR).
-Posted by Martin Buhr, Product Manager, Kubernetes
1 The theories of supply chain diversification and vendor risk management both recommend against relying on a single supplier for any critical component of one’s business or infrastructure. This has been borne out by the experience of numerous customers over the years with large vendors of proprietary IT systems and software. Part of the appeal of Docker and Kubernetes is the degree to which they significantly lower the friction involved in moving applications between various resource pools (laptop to server, server to server, data center to data center, cloud to cloud, etc.).