Two months ago, we announced Kubernetes, an open source cluster manager for Docker containers. Since then we’ve seen an impressive community develop around Kubernetes, and today we’re thrilled to welcome VMware to the Kubernetes community.

We’ve spent a lot of time talking about how we’re building Kubernetes to provide a unique infrastructure for easily building scalable, reliable systems like we do at Google. With the addition of VMware in the community, we thought we’d take the time to discuss the infrastructure side of cluster management and how VMware’s deep technical expertise in this area will make Kubernetes a more capable, powerful and secure platform beyond Google Cloud Platform.

One of the fundamental tenets of Kubernetes is the decoupling of application containers from the details of the systems on which they run. Google Cloud Platform provides a homogeneous set of raw resources via virtual machines (VMs) to Kubernetes, and in turn, Kubernetes schedules containers to use those resources. This decoupling simplifies application development, since users only ask for abstract resources like cores and memory, and it also simplifies data center operations, since every machine is identical and isolated from the details of the applications that run on it.
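As a sketch of what this abstraction looks like in practice, here is an illustrative pod manifest in the style of the current Kubernetes API (the exact schema has evolved over time, and the names used here are hypothetical):

```yaml
# Illustrative only: the application asks for abstract resources
# (CPU and memory); Kubernetes chooses which machine it runs on.
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx       # any container image
    resources:
      requests:
        cpu: "500m"    # half a core
        memory: 256Mi
```

Nothing in this manifest names a machine; the scheduler is free to place the container on any node with spare capacity.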

VMware will provide enhanced capabilities for running a reliable Kubernetes cluster, much like Google Cloud Platform. The core resources here are:

  • Machines: virtual machines on which containers run
  • Network: the physical or virtualized connectivity between containers in the cluster
  • Storage: reliable, cluster level distributed storage outside of a container’s lifecycle

Providing machines for Kubernetes is not only necessary as a pool of raw cycles and bytes but can also provide a critical extra layer of security. Security is a continuum on which you pick solutions based on threats and risk tolerance. While container security is an evolving area, VMs have a longer track record and present a smaller attack surface. Fundamentally, even in Kubernetes, the machine is a strong security domain. Linux containers can provide strong resource isolation, ensuring, for example, that one container has dedicated access to a specific core in the processor. For semi-trusted workloads, containers may be sufficient. However, because containers share the same kernel, there’s an expanded surface area that may make them insufficient as your only line of defense. For untrusted workloads or users, we highly suggest defense in depth with virtual machine technology as a second layer of security. Indeed, this is how two different users’ Kubernetes clusters can safely co-exist on the same physical infrastructure in a Google data center. VMware will help Kubernetes implement this same pattern of using virtualization to secure physical machines, when those machines are outside of Google’s data centers.

While running individual containers is sufficient for some use cases, the real power of containers comes from implementing distributed systems, and to do this you need a network. However, you don’t just need any network. Containers provide end users with an abstraction that makes each container a self-contained unit of computation. Traditionally, one place where this has broken down is networking, where containers are exposed on the network via the shared host machine’s address. In Kubernetes, we’ve taken an alternative approach: each group of containers (called a Pod) deserves its own, unique IP address that’s reachable from any other Pod in the cluster, whether they’re co-located on the same physical machine or not. To achieve this in the Google data center, we’ve taken advantage of the advanced routing features that are available via Google Compute Engine’s Andromeda network virtualization. VMware, with their deep knowledge in network virtualization, specifically Open Virtual Switch (OVS), will simplify network configuration in Kubernetes clusters running outside of Google’s data centers.
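To illustrate the Pod model described above, here is a hypothetical manifest (image names are made up) in which two containers are grouped into one Pod, and therefore share one IP address:

```yaml
# Illustrative only: both containers below live in one Pod and share
# a single, cluster-unique IP. They can reach each other over
# localhost, and any other Pod in the cluster can reach them at the
# Pod's IP, regardless of which machine the Pod lands on.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar   # hypothetical pod name
spec:
  containers:
  - name: app
    image: example/my-app          # hypothetical image
    ports:
    - containerPort: 8080
  - name: log-shipper
    image: example/my-log-shipper  # hypothetical image
```

Because the Pod, not the host, owns the address, no port remapping is needed: the application binds to port 8080 and is reachable at the Pod’s IP on that same port.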

Finally, nearly every application that you run needs some sort of storage, but storing that data on specific machines in your datacenter makes it difficult to schedule containers in the cluster to maximize efficiency and reliability, since pods are forced to co-locate with their data. When Kubernetes runs on Google Cloud Platform, you’ll soon be able to pair your container up with a Persistent Disk (PD) volume, so that regardless of where your container is scheduled in the cluster, its storage follows it to the physical machine. VMware will work with Kubernetes to include integration points to distributed storage systems such as their Virtual SAN scalable virtual storage solution to enable similar capabilities for users not running on Google Cloud Platform, in addition to simpler, less robust shared storage solutions for users that don’t have access to a reliable network storage system.
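A minimal sketch of that pairing, assuming a pre-existing Compute Engine disk (the disk and pod names here are hypothetical, and the schema reflects the later Kubernetes API rather than what shipped at the time):

```yaml
# Illustrative only: a Pod that mounts a Google Compute Engine
# Persistent Disk. Wherever the scheduler places the Pod, the disk is
# attached to that machine and mounted into the container, so the
# data follows the workload.
apiVersion: v1
kind: Pod
metadata:
  name: db                   # hypothetical pod name
spec:
  containers:
  - name: db
    image: mysql
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    gcePersistentDisk:
      pdName: my-data-disk   # hypothetical; the disk must already exist
      fsType: ext4
```

If the Pod is rescheduled onto a different machine, the same disk is reattached there, so the database keeps its data without being pinned to any one host.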

We developed and open sourced Kubernetes to provide application developers and operations teams with the ability to build and scale their applications like Google. The addition of VMware’s technical expertise in cluster infrastructure will enable people to compute like Google, regardless of where they physically do that computation.

-Posted by Craig McLuckie, Product Manager