Kubernetes Explained

Kubernetes
Original author: Google
Developer: Cloud Native Computing Foundation
Initial release: v0.2[1]
Programming language: Go
Genre: Cluster management software
License: Apache License 2.0

Kubernetes (K8s)[2] is an open-source container orchestration system for automating software deployment, scaling, and management.[3] [4] Originally designed by Google, the project is now maintained by a worldwide community of contributors, and the trademark is held by the Cloud Native Computing Foundation.

The name Kubernetes originates from the Greek κυβερνήτης (kubernḗtēs), meaning governor, 'helmsman' or 'pilot'. Kubernetes is often abbreviated as K8s, counting the eight letters between the K and the s (a numeronym).[5]

Kubernetes assembles one or more computers, either virtual machines or bare metal, into a cluster which can run workloads in containers. It works with various container runtimes, such as containerd and CRI-O.[6] Its suitability for running and managing workloads of all sizes and styles has led to its widespread adoption in clouds and data centers. There are multiple distributions of this platform – from independent software vendors (ISVs) as well as hosted-on-cloud offerings from all the major public cloud vendors.[7]

Kubernetes is one of the most widely deployed software systems in the world,[8] used by companies including Google, Microsoft, Amazon, Apple, Meta, Nvidia, Reddit, and Pinterest.

History

Kubernetes (from the Ancient Greek κυβερνήτης, kubernḗtēs, 'helmsman', also the etymological root of cybernetics) was announced by Google on June 6, 2014.[9] The project was conceived and created by Google employees Joe Beda, Brendan Burns, and Craig McLuckie. Others at Google soon joined to help build the project, including Ville Aikas, Dawn Chen, Brian Grant, Tim Hockin, and Daniel Smith.[10] Other companies such as Red Hat and CoreOS joined the effort soon after, with notable contributors such as Clayton Coleman and Kelsey Hightower.

The design and development of Kubernetes was inspired by Google's Borg cluster manager and based on Promise Theory.[11] Many of its top contributors had previously worked on Borg;[12] [13] they codenamed Kubernetes "Project 7" after the Star Trek ex-Borg character Seven of Nine[14] and gave its logo a seven-spoked ship's wheel (designed by Tim Hockin). Unlike Borg, which was written in C++, Kubernetes is written in the Go language.

Kubernetes was announced in June, 2014 and version 1.0 was released on July 21, 2015.[15] Google worked with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF)[16] and offered Kubernetes as the seed technology.

Google was already offering a managed Kubernetes service, GKE, and Red Hat had been supporting Kubernetes as part of OpenShift since the inception of the Kubernetes project in 2014.[17] In 2017, the principal competitors rallied around Kubernetes and announced adding native support for it.

On March 6, 2018, the Kubernetes project reached ninth place among GitHub projects by number of commits, and second place in authors and issues, after the Linux kernel.[23]

Until version 1.18, Kubernetes followed an N-2 support policy, meaning that the three most recent minor versions receive security updates and bug fixes.[24] Starting with version 1.19, Kubernetes follows an N-3 support policy.[25]

Concepts

Kubernetes defines a set of building blocks ("primitives") that collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory[26] or custom metrics.[27] Kubernetes is loosely coupled and extensible to meet the needs of different workloads. The internal components as well as extensions and containers that run on Kubernetes rely on the Kubernetes API.[28] The platform exerts its control over compute and storage resources by defining resources as objects, which can then be managed as such.

Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.[29]

Control plane

The Kubernetes master node handles the Kubernetes control plane of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each running as its own process, that can run either on a single master node or on multiple masters supporting high-availability clusters; control plane traffic is typically secured using TLS encryption, strong authentication, RBAC, and network separation. The various components of the Kubernetes control plane are as follows.[30]

etcd

etcd[31] is a persistent, lightweight, distributed key-value data store (originally developed for Container Linux). It reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point of time. etcd favors consistency over availability in the event of a network partition (see CAP theorem). This consistency is crucial for correctly scheduling and operating services.

API server

The API server serves the Kubernetes API using JSON over HTTP, providing both the internal and external interface to Kubernetes.[32] The API server processes and validates REST requests and updates the state of the API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes.[33] The API server uses etcd's watch API to monitor the cluster, roll out critical configuration changes, and restore any divergence of the cluster's state back to the desired state declared in etcd.

As an example, a human operator may specify that three instances of a particular "pod" (see below) need to be running, and etcd stores this fact. If the Deployment controller finds that only two instances are running (conflicting with the etcd declaration),[34] it schedules the creation of an additional instance of that pod.
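Such a declaration might look like the following Deployment manifest, a minimal sketch in which all names and the image are illustrative:

```yaml
# Hypothetical Deployment declaring that three replicas of a pod
# should always be running; the desired state is stored in etcd.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3              # the operator's declared desired state
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
```

If a pod matching the selector disappears, the Deployment's underlying ReplicaSet controller observes the divergence and schedules a replacement.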

Scheduler

The scheduler is an extensible component that selects the node on which an unscheduled pod (the basic unit of work to be scheduled) runs, based on resource availability and other constraints. The scheduler tracks resource allocation on each node to ensure that workloads are not scheduled in excess of available resources. For this purpose, the scheduler must know the resource requirements, resource availability, and other user-provided constraints or policy directives such as quality-of-service, affinity/anti-affinity requirements, and data locality. The scheduler's role is to match resource "supply" to workload "demand".[35]

Kubernetes allows running multiple schedulers within a single cluster. As such, scheduler plug-ins may be developed as in-process extensions to the default scheduler, or an additional scheduler may be installed and run alongside it, as long as it conforms to the Kubernetes scheduling framework.[36] This allows cluster administrators to extend or modify the behavior of the default Kubernetes scheduler according to their needs.

Controllers

A controller is a reconciliation loop that drives the actual cluster state toward the desired state, communicating with the API server to create, update, and delete the resources it manages (e.g., pods or service endpoints).[37]

An example controller is a ReplicaSet controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster. The controller also creates replacement pods if the underlying node fails. Other controllers that are part of the core Kubernetes system include a DaemonSet controller for running exactly one pod on every machine (or some subset of machines), and a Job controller for running pods that run to completion (e.g. as part of a batch job).[38] Label selectors, which specify the set of pods a controller manages, often form part of the controller's definition.

The controller manager is a single process that runs several core Kubernetes controllers (including the examples described above). It is distributed as part of the standard Kubernetes installation, and its controllers respond to events such as the loss of nodes.[30]

Custom controllers may also be installed in the cluster, further allowing the behavior and API of Kubernetes to be extended when used in conjunction with custom resources (see custom resources, controllers and operators below).

Nodes

A node, also known as a worker or a minion, is a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime, as well as the components described below, which communicate with the control plane and handle the network configuration of these containers.

kubelet

kubelet is responsible for the running state of each node, ensuring that all containers on the node are healthy. It takes care of starting, stopping, and maintaining application containers organized into pods as directed by the control plane.[39] kubelet monitors the state of a pod, and if it is not in the desired state, the pod is re-deployed to the same node. Node status is relayed to the API server every few seconds via heartbeat messages. Once the control plane detects a node failure, a higher-level controller is expected to observe this state change and launch pods on another healthy node.[40]

Container runtime

A container runtime is responsible for the lifecycle of containers, including launching, reconciling and killing of containers. kubelet interacts with container runtimes via the Container Runtime Interface (CRI),[41] [42] which decouples the maintenance of core Kubernetes from the actual CRI implementation.

Originally, kubelet interfaced exclusively with the Docker runtime[43] through a "dockershim". However, between November 2020[44] and April 2022, Kubernetes deprecated the shim in favor of interfacing directly with containerd, or of replacing Docker with a runtime compliant with the Container Runtime Interface (CRI).[45] [46] With the release of v1.24 in May 2022, the dockershim was removed entirely.[47]

Examples of popular container runtimes that are compatible with kubelet include containerd (initially supported via Docker), rkt[48] and CRI-O.

kube-proxy

kube-proxy is an implementation of a network proxy and a load balancer, and it supports the service abstraction along with the other networking operations. It is responsible for routing traffic to the appropriate container based on IP and port number of the incoming request.

Namespaces

In Kubernetes, namespaces are utilized to segregate the resources it handles into distinct and non-intersecting collections.[49] They are intended for use in environments with many users spread across multiple teams or projects, or even for separating environments like development, test, and production.
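A namespace is itself a Kubernetes object. A minimal sketch, with an illustrative name:

```yaml
# Hypothetical Namespace isolating one team's resources from the rest
# of the cluster; the name is illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Objects created with `kubectl apply --namespace=team-a` are then visible only within that namespace's collection, e.g. `kubectl get pods --namespace=team-a`.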

Pods

The basic scheduling unit in Kubernetes is a pod,[50] which consists of one or more containers that are guaranteed to be co-located on the same node. Each pod in Kubernetes is assigned a unique IP address within the cluster, allowing applications to use ports without the risk of conflict.[51] Within the pod, all containers can reference each other.
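A sketch of a two-container pod illustrating this co-location, with illustrative names and images; because the containers share one network namespace, the second container can reach the first on localhost:

```yaml
# Hypothetical pod with two co-located containers.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # Containers in the same pod share a network namespace, so the
      # sidecar can poll the app container via localhost.
      command: ["sh", "-c",
                "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```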

A container resides inside a pod. The container is the lowest level of a microservice, holding the running application, its libraries, and their dependencies.

Workloads

Kubernetes supports several workload abstractions at a higher level than simple pods. This allows users to declaratively define and manage these high-level abstractions, instead of having to manage individual pods by themselves. Several of these abstractions, supported by a standard installation of Kubernetes, are described below.

ReplicaSets, ReplicationControllers and Deployments

A ReplicaSet's purpose is to maintain a stable set of replica pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.[52] The ReplicaSet can also be said to be a grouping mechanism that lets Kubernetes maintain the number of instances that have been declared for a given pod. The definition of a ReplicaSet uses a selector, whose evaluation will result in identifying all pods that are associated with it.
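A minimal ReplicaSet sketch showing such a selector (names and image are illustrative); the ReplicaSet owns whichever pods the selector's evaluation identifies:

```yaml
# Hypothetical ReplicaSet maintaining two identical pods.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-rs
spec:
  replicas: 2
  selector:              # evaluated to identify the pods this ReplicaSet owns
    matchLabels:
      app: example
  template:              # pod template used when replacements must be created
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx:1.25
```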

A ReplicationController serves the same purpose and behaves similarly to a ReplicaSet: it ensures that a specified number of pod replicas will always be running as desired. The ReplicationController workload was the predecessor of the ReplicaSet, but was eventually deprecated in favor of ReplicaSets to make use of set-based label selectors.

Deployments are a higher-level management mechanism for ReplicaSets. While the ReplicaSet controller manages the scale of the ReplicaSet, the Deployment controller manages what happens to the ReplicaSet: whether an update has to be rolled out, rolled back, and so on. When a Deployment is scaled up or down, the declaration of the ReplicaSet changes, and this change in declared state is managed by the ReplicaSet controller.

StatefulSets

StatefulSets are controllers that enforce the properties of uniqueness and ordering amongst instances of a pod, and can be used to run stateful applications.[53] While scaling stateless applications is only a matter of adding more running pods, doing so for stateful workloads is harder, because the state needs to be preserved if a pod is restarted. If the application is scaled up or down, the state may need to be redistributed.

Databases are an example of stateful workloads. When run in high-availability mode, many databases come with the notion of a primary instance and secondary instances. In this case, the notion of ordering of instances is important. Other applications like Apache Kafka distribute the data amongst their brokers; hence, one broker is not the same as another. In this case, the notion of instance uniqueness is important.
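A StatefulSet sketch for such a database workload (names, image, and sizes are illustrative). Replicas receive stable, ordered identities (example-db-0, example-db-1, …), and each gets its own persistent volume:

```yaml
# Hypothetical StatefulSet: ordered, uniquely named replicas, each with
# its own persistent volume claim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db     # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: "example"        # illustrative only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```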

DaemonSets

DaemonSets are responsible for ensuring that a pod is created on every single node in the cluster.[54] Generally, most workloads scale in response to a desired replica count, depending on the availability and performance requirements as needed by the application. However, in other scenarios it may be necessary to deploy a pod to every single node in the cluster, scaling up the number of total pods as nodes are added and garbage collecting them as they are removed. This is particularly helpful for use cases where the workload has some dependency on the actual node or host machine, such as log collection, ingress controllers, and storage services.
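A log-collection DaemonSet is a typical sketch of this node dependency (names and the collector image are illustrative); note there is no replica count, since one pod runs per node:

```yaml
# Hypothetical DaemonSet running one log-collector pod on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
        - name: collector
          image: fluent/fluentd:v1.16-1   # illustrative log-collection image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:                       # the node dependency: the host's own logs
            path: /var/log
```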

Services

A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitute a service are defined by a label selector. Kubernetes provides two modes of service discovery, using environment variables or using Kubernetes DNS.[55] Service discovery assigns a stable IP address and DNS name to the service, and load balances traffic in a round-robin manner to network connections of that IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine). By default a service is exposed inside a cluster (e.g., back end pods might be grouped into a service, with requests from the front-end pods load-balanced among them), but a service can also be exposed outside a cluster (e.g., for clients to reach front-end pods).[56]
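A Service sketch tying these ideas together (names and ports are illustrative): the label selector picks the pods, and the Service's stable virtual IP load-balances across them.

```yaml
# Hypothetical Service fronting the back-end tier of an application.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:              # pods carrying this label receive the traffic
    tier: backend
  ports:
    - port: 80           # port on the Service's stable cluster IP
      targetPort: 8080   # container port on the selected pods
  type: ClusterIP        # default: reachable only inside the cluster
```

Changing `type` to NodePort or LoadBalancer is the usual way to expose the Service outside the cluster.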

Volumes

Filesystems in Kubernetes containers provide ephemeral storage by default: a restart of the pod wipes out any data in such containers, so this form of storage is quite limiting for anything but trivial applications. A Kubernetes volume[57] provides storage that persists for the lifetime of the pod itself. This storage can also be used as shared disk space for containers within the pod. Volumes are mounted at specific mount points within the container, which are defined by the pod configuration, and cannot mount onto other volumes or link to other volumes. The same volume can be mounted at different points in the file system tree by different containers.
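A sketch of a shared volume mounted at different paths by two containers (names and images are illustrative); an emptyDir volume lives exactly as long as the pod:

```yaml
# Hypothetical pod sharing one emptyDir volume between two containers,
# mounted at a different path in each.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}           # exists for the lifetime of the pod
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/msg; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data       # one mount point...
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 5; cat /pod-data/msg; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data   # ...same volume, different path
```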

ConfigMaps and Secrets

A common application challenge is deciding where to store and manage configuration information, some of which may contain sensitive data. Configuration data can be anything as fine-grained as individual properties, or coarse-grained information like entire configuration files such as JSON or XML documents. Kubernetes provides two closely related mechanisms to deal with this need, known as ConfigMaps and Secrets, both of which allow for configuration changes to be made without requiring an application rebuild.

The data from ConfigMaps and Secrets is made available to every instance of the application to which these objects have been bound via the Deployment. A Secret or ConfigMap is sent to a node only if a pod on that node requires it, and it is stored only in memory on the node. Once the pod that depends on the Secret or ConfigMap is deleted, the in-memory copies of all bound Secrets and ConfigMaps are deleted as well.

The data from a ConfigMap or Secret is accessible to the pod through one of the following ways:[58]

  1. As environment variables, which are consumed by kubelet from the ConfigMap when the container is launched;
  2. As files mounted within a volume accessible from the container's filesystem, which supports automatic reloading without restarting the container.

The biggest difference between a Secret and a ConfigMap is that Secrets are specifically designed for holding secure and confidential data, although they are not encrypted at rest by default; additional setup is required to fully secure the use of Secrets within the cluster.[59] Secrets are often used to store confidential or sensitive data like certificates, credentials for image registries, passwords, and SSH keys.
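A sketch of both mechanisms consumed as environment variables (all names and values are illustrative):

```yaml
# Hypothetical ConfigMap holding non-sensitive configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
# Hypothetical Secret; stringData is stored base64-encoded, not encrypted,
# unless encryption at rest is configured separately.
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "example-password"
---
# Pod consuming both as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD
```

Either object can be updated with `kubectl apply` without rebuilding the application image.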

Labels and selectors

Kubernetes enables clients (users or internal components) to attach keys called labels to any API object in the system, such as pods and nodes. Correspondingly, label selectors are queries against labels that resolve to matching objects. When a service is defined, one can define the label selectors that will be used by the service router/load balancer to select the pod instances that the traffic will be routed to. Thus, simply changing the labels of the pods or changing the label selectors on the service can be used to control which pods get traffic and which don't, which can be used to support various deployment patterns like blue–green deployments or A/B testing. This capability to dynamically control how services utilize implementing resources provides a loose coupling within the infrastructure.

For example, if an application's pods have labels for a system tier (with values such as frontend or backend) and a release_track (with values such as canary or production), then an operation on all backend canary pods can use a label selector such as:[60]

tier=backend AND release_track=canary

Just like labels, field selectors also let one select Kubernetes resources. Unlike labels, the selection is based on the attribute values inherent to the resource being selected, rather than user-defined categorization. metadata.name and metadata.namespace are field selectors that will be present on all Kubernetes objects. Other selectors that can be used depend on the object/resource type.
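In a workload spec, that query could be written in equality-based and set-based forms as follows (a fragment; labels are illustrative):

```yaml
# Hypothetical selector combining equality-based and set-based matching.
selector:
  matchLabels:             # equality-based: tier must equal backend
    tier: backend
  matchExpressions:        # set-based: release_track must be in the given set
    - key: release_track
      operator: In
      values: ["canary"]
```

On the command line, the equivalent label query can be expressed as `kubectl get pods -l 'tier=backend,release_track=canary'`, and a field selector as, for example, `kubectl get pods --field-selector=metadata.namespace=default`.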

Add-ons

Add-ons are additional features of the Kubernetes cluster implemented as applications running within it. The pods may be managed by Deployments, ReplicationControllers, and so on. There are many add-ons; some of the more important are:

  • DNS: Cluster DNS is a DNS server, in addition to the other DNS server(s) in the environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Web UI: a general-purpose, web-based UI for Kubernetes clusters. It allows administrators to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
  • Resource monitoring: Container Resource Monitoring records metrics about containers in a central database, and provides a UI for browsing that data.
  • Cost monitoring: Kubernetes cost monitoring applications allow breakdown of costs by pods, nodes, namespaces, and labels.
  • Cluster-level logging: to prevent the loss of event data in the event of node or pod failures, container logs can be saved to a central log store with a search/browsing interface. Kubernetes provides no native storage for log data, but one can integrate many existing logging solutions into the Kubernetes cluster.

Storage

Containers emerged as a way to make software portable. A container includes all the packages needed to run a service. The provided file system makes containers extremely portable and easy to use in development: a container can be moved from development to test or production with no or relatively few configuration changes.

Historically, Kubernetes was suitable only for stateless services. However, many applications have a database, which requires persistence, leading to the creation of persistent storage for Kubernetes. Implementing persistent storage for containers is one of the top challenges for Kubernetes administrators, DevOps and cloud engineers. Containers may be ephemeral, but more and more of their data is not, so one needs to ensure the data's survival in case of container termination or hardware failure. When deploying containers with Kubernetes or containerized applications, companies often realize that they need persistent storage: fast and reliable storage for databases, root images and other data used by the containers.

In addition to the landscape, the Cloud Native Computing Foundation (CNCF) has published other information about Kubernetes persistent storage, including a blog helping to define the container-attached storage pattern. This pattern can be thought of as one that uses Kubernetes itself as a component of the storage system or service.[61]

More information about the relative popularity of these and other approaches can be found on the CNCF's landscape survey as well, which showed that OpenEBS, a stateful persistent storage platform from DataCore Software,[62] and Rook, a storage orchestration project, were the two projects most likely to be in evaluation as of the fall of 2019.[63]

Container-attached storage is a type of data storage that emerged as Kubernetes gained prominence. The container-attached storage approach or pattern relies on Kubernetes itself for certain capabilities while delivering primarily block, file, and object interfaces to workloads running on Kubernetes.[64]

Common attributes of container-attached storage include the use of extensions to Kubernetes, such as custom resource definitions, and the use of Kubernetes itself for functions that would otherwise be separately developed and deployed for storage or data management. Examples of functionality delivered by custom resource definitions or by Kubernetes itself include retry logic, delivered by Kubernetes itself, and the creation and maintenance of an inventory of available storage media and volumes, typically delivered via a custom resource definition.[65] [66]

Container Storage Interface (CSI)

In Kubernetes version 1.9, the initial alpha release of the Container Storage Interface (CSI) was introduced.[67] Previously, storage volume plug-ins were included in the Kubernetes distribution. By creating a standardized CSI, the code required to interface with external storage systems was separated from the core Kubernetes code base. Just one year later, the CSI feature was made generally available (GA) in Kubernetes.[68]

API

A key component of the Kubernetes control plane is the API server, which exposes an HTTP API that can be invoked by other parts of the cluster as well as by end users and external components. This API is a REST API and is declarative in nature, and it is the same API exposed to the control plane. The API server is backed by etcd to store all records persistently.[69]

API objects

In Kubernetes, all objects serve as the "record of intent" of the cluster's state, defining the desired state that the writer of the object wishes for the cluster to be in.[70] Most Kubernetes objects therefore share the same set of nested fields, such as apiVersion, kind, metadata, and spec.

All objects in Kubernetes are subject to the same API conventions.

Custom resources, controllers and operators

The Kubernetes API can be extended using custom resources, which represent objects that are not part of the standard Kubernetes installation. These custom resources are declared using Custom Resource Definitions (CRDs), a kind of resource that can be dynamically registered and unregistered without shutting down or restarting a running cluster.[74]
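A CRD sketch registering a hypothetical CronTab resource type (the group, names, and fields are illustrative):

```yaml
# Hypothetical CustomResourceDefinition registering a new CronTab type;
# once applied, the API server serves example.com/v1 CronTab objects.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com    # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:        # validation schema for instances of the type
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
```

After the CRD is applied, clients can create instances with `apiVersion: example.com/v1` and `kind: CronTab`, just like built-in resources.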

Custom controllers are another extension mechanism that interacts with the Kubernetes API, similar to the default controllers in the standard pre-installed Kubernetes controller manager. These controllers may interact with custom resources to allow for a declarative API: users declare the desired state of the world via custom resources, and it is the responsibility of the custom controller to observe the change and reconcile it.

The combination of custom resources and custom controllers is often referred to as a Kubernetes operator. The key use case for operators is to capture the aim of a human operator who is managing a service or set of services and to implement it using automation, with a declarative API supporting this automation. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.

Examples of problems solved by operators include taking and restoring backups of an application's state, and handling upgrades of the application code alongside related changes such as database schemas or extra configuration settings. Several notable projects under the Cloud Native Computing Foundation's incubation program follow the operator pattern to extend Kubernetes, including Argo, Open Policy Agent and Istio.[75]

API security

Kubernetes defines the following strategies for controlling access to its API.[76]

Transport security

The Kubernetes API server listens on a TCP port serving HTTPS traffic, in order to enforce transport layer security (TLS) using CA certificates.[30]

In older versions of Kubernetes, the API server supported listening on both HTTP and HTTPS ports (with the HTTP port having no transport security whatsoever). This was deprecated in v1.10, and support for it was eventually dropped in v1.20 of Kubernetes.[77]

Authentication

All requests made to the Kubernetes API server are expected to be authenticated. The server supports several authentication strategies, some of which are listed below:[78]

1. X.509 client certificates
2. Bearer tokens
3. Service account tokens, intended for programmatic API access

Users are typically expected to define the cluster URL details along with the necessary credentials in a kubeconfig file, which is natively supported by other Kubernetes tools like kubectl and the official Kubernetes client libraries.[79]
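A kubeconfig sketch combining a cluster, a user credential, and a context (the server address, paths, and names are illustrative):

```yaml
# Hypothetical kubeconfig file; all addresses, paths and names are
# illustrative placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: example-cluster
    cluster:
      server: https://203.0.113.10:6443        # API server URL
      certificate-authority: /path/to/ca.crt   # CA used to verify the server's TLS cert
users:
  - name: example-user
    user:
      client-certificate: /path/to/client.crt  # X.509 client certificate
      client-key: /path/to/client.key
contexts:
  - name: example-context
    context:
      cluster: example-cluster
      user: example-user
current-context: example-context
```

Tools such as kubectl read this file (by default from ~/.kube/config) to decide which cluster to talk to and which credentials to present.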

Authorization

The Kubernetes API supports the following authorization modes:[80]

1. Node authorization mode: grants a fixed list of API request operations that kubelets are allowed to perform, in order to function properly.
2. Attribute-based access control (ABAC) mode: grants access rights to users through access control policies that combine attributes.
3. Role-based access control (RBAC) mode: grants access rights to users based on the roles granted to them, where each role defines a list of allowed actions.
4. Webhook mode: queries a REST API service to determine whether a user is authorized to perform a given action.[30]
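An RBAC sketch illustrating mode 3: a Role listing allowed actions, bound to a user (the namespace and user name are illustrative):

```yaml
# Hypothetical Role allowing read-only access to pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]              # "" denotes the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
# RoleBinding granting that role to a user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                   # illustrative user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding follow the same shape for cluster-wide, non-namespaced grants.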

API clients

Kubernetes supports several official API clients, including libraries for languages such as Go, Python, Java, JavaScript and C#.

Cluster API

The same API design principles have been used to define an API for harnessing a program to create, configure, and manage Kubernetes clusters. This is called the Cluster API.[83] A key concept embodied in the API is the notion of Infrastructure as Software: that the Kubernetes cluster infrastructure is itself a resource/object that can be managed just like any other Kubernetes resource. Similarly, the machines that make up the cluster are also treated as Kubernetes resources. The API has two pieces: the core API and a provider implementation. The provider implementation consists of cloud-provider-specific functions that let Kubernetes provide the Cluster API in a fashion that is well integrated with the cloud provider's services and resources.[30]

Uses

Kubernetes is commonly used to host microservice-based implementations, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture.

Distributions

Various vendors offer Kubernetes-based platforms or infrastructure as a service (IaaS) offerings that deploy Kubernetes.[84] [85]

These are typically categorized as open-source, commercial or managed distributions. Several notable distributions are listed below:[86]

Open-source distributions

Commercial distributions

Managed distributions

Release timeline

Version | Release date | End of Life date[87] | Notes
1.0 | 10 July 2015 | | Original release
1.1 | 9 November 2015[88] | |
1.2 | 16 March 2016 | 23 October 2016[89] |
1.3 | 1 July 2016 | 1 November 2016[90] |
1.4 | 26 September 2016 | 21 April 2017[91] |
1.5 | 12 December 2016 | 1 October 2017[92] |
1.6 | 28 March 2017 | 23 November 2017[93] |
1.7 | 30 June 2017 | 4 April 2018[94] |
1.8 | 28 August 2017 | 12 July 2018[95] |
1.9 | 15 December 2017 | 29 September 2018[96] |
1.10 | 28 March 2018 | 13 February 2019[97] |
1.11 | 3 July 2018 | 1 May 2019[98] |
1.12 | 27 September 2018 | 8 July 2019[99] |
1.13 | 3 December 2018 | 15 October 2019[100] |
1.14 | 25 March 2019 | 11 December 2019[101] |
1.15 | 20 June 2019 | 6 May 2020[102] |
1.16 | 22 October 2019 | 2 September 2020[103] |
1.17 | 9 December 2019 | 13 January 2021[104] |
1.18 | 25 March 2020 | 18 June 2021[105] |
1.19 | 26 August 2020[106] | 28 October 2021 | From Kubernetes version 1.19 on, the support window has been extended to one year of full support plus a two-month maintenance period.[107]
1.20 | 8 December 2020 | 28 February 2022[108] |
1.21 | 8 April 2021 | 28 June 2022[109] |
1.22 | 4 August 2021 | 28 October 2022[110] |
1.23 | 7 December 2021 | 28 February 2023[111] |
1.24 | 3 May 2022 | 28 July 2023[112] |
1.25 | 23 August 2022 | 27 October 2023[113] |
1.26 | 9 December 2022 | 24 February 2024[114] |
1.27 | 11 April 2023 | 28 June 2024[115] |
1.28 | 15 August 2023 | 28 October 2024[116] |
1.29 | 13 December 2023 | 28 February 2025[117] |
1.30 | 17 April 2024 | 28 June 2025[118] |

Support windows

The chart below visualizes the period for which each release is/was supported.

[Timeline chart: support periods for Kubernetes releases 1.13.x through 1.30.x, from December 2018 to June 2025, distinguishing out-of-support, in-support, latest stable, and preview versions.]

    Notes and References

    1. Web site: v0.2 . github.com . 2014-09-09 .
    2. Web site: Kubernetes GitHub Repository. January 22, 2021. GitHub.
    3. Web site: kubernetes/kubernetes . live . https://web.archive.org/web/20170421035413/https://github.com/kubernetes/kubernetes . 2017-04-21 . 2017-03-28 . GitHub . en-US.
    4. Web site: What is Kubernetes? . 2017-03-31 . Kubernetes.
    5. Web site: Overview Kubernetes. 2022-01-04. https://archive.today/20230108084516/https://kubernetes.io/docs/concepts/overview/. 2023-01-08. Kubernetes. en.
    6. Web site: Container runtimes. 2021-11-14. Kubernetes. en.
    7. Web site: Turnkey Cloud Solutions . July 25, 2023 . Kubernetes Documentation.
    8. Web site: CNCF Annual Survey 2022. January 31, 2023. CNCF.
    9. Google Open Sources Its Secret Weapon in Cloud Computing. live. Wired. https://web.archive.org/web/20150910171929/http://www.wired.com/2014/06/google-kubernetes. 10 September 2015. 24 September 2015. Metz. Cade.
    10. Google Made Its Secret Blueprint Public to Boost Its Cloud. Wired. en-US. 2016-06-27. live. https://web.archive.org/web/20160701040235/http://www.wired.com/2015/06/google-kubernetes-says-future-cloud-computing/. 2016-07-01. Metz. Cade.
    11. Web site: https://twitter.com/kelseyhightower/status/1527333243845873664 . 2023-12-14 . X (formerly Twitter) . en.
    12. Abhishek Verma. Luis Pedrosa. Madhukar R. Korupolu. David Oppenheimer. Eric Tune. John Wilkes. Large-scale cluster management at Google with Borg. Proceedings of the European Conference on Computer Systems (EuroSys). April 21–24, 2015. live. https://web.archive.org/web/20170727090712/https://research.google.com/pubs/pub43438.html. 2017-07-27.
    13. Web site: Borg, Omega, and Kubernetes - ACM Queue. queue.acm.org. 2016-06-27. live. https://web.archive.org/web/20160709023750/http://queue.acm.org/detail.cfm?id=2898444. 2016-07-09.
    14. News: Early Stage Startup Heptio Aims to Make Kubernetes Friendly. 2016-12-06.
    15. Web site: As Kubernetes Hits 1.0, Google Donates Technology To Newly Formed Cloud Native Computing Foundation. TechCrunch. 21 July 2015 . 24 September 2015. live. https://web.archive.org/web/20150923060338/http://techcrunch.com/2015/07/21/as-kubernetes-hits-1-0-google-donates-technology-to-newly-formed-cloud-native-computing-foundation-with-ibm-intel-twitter-and-others/. 23 September 2015.
    16. Web site: Cloud Native Computing Foundation. live. https://web.archive.org/web/20170703085502/https://www.cncf.io/. 2017-07-03.
    17. Web site: Red Hat and Google collaborate on Kubernetes to manage Docker containers at scale. 2014-07-10. 2022-08-06.
    18. Web site: VMware and Pivotal Launch Pivotal Container Service (PKS) and Collaborate with Google Cloud to Bring Kubernetes to Enterprise Customers. 2017-08-29. 2022-08-06.
    19. Web site: Mesosphere adds Kubernetes support to its data center operating system. Frederic. Lardinois. 2017-09-06. 2022-08-06.
    20. Web site: Docker Announces Enhancements to the Docker Platform to Simplify and Advance the Management of Kubernetes for Enterprise IT. https://web.archive.org/web/20200926222104/https://www.docker.com/docker-news-and-press/docker-announces-enhancements-docker-platform-simplify-and-advance-management. 2017-10-17. 2020-09-26. dead.
    21. Web site: Introducing AKS (managed Kubernetes) and Azure Container Registry improvements. Gabe. Monroy. 2017-10-24. 2022-08-06.
    22. Web site: Introducing Amazon Elastic Container Service for Kubernetes (Preview). 2017-11-29. 2022-08-06.
    23. Web site: Kubernetes Is First CNCF Project To Graduate. 3 December 2018. Conway. Sarah. Cloud Native Computing Foundation. 6 March 2018. en. Compared to the 1.5 million projects on GitHub, Kubernetes is No. 9 for commits and No. 2 for authors/issues, second only to Linux.. live. https://web.archive.org/web/20181029081848/https://www.cncf.io/blog/2018/03/06/kubernetes-first-cncf-project-graduate/. 29 October 2018.
    24. Web site: Kubernetes version and version skew support policy. 2020-03-03. Kubernetes.
    25. Web site: 26 August 2020. Kubernetes 1.19 Release Announcement > Increase Kubernetes support window to one year. 2020-08-28. Kubernetes.
    26. Web site: Sharma . Priyanka . Autoscaling based on CPU/Memory in Kubernetes—Part II . Powerupcloud Tech Blog . Medium . 27 December 2018 . 13 April 2017 . 17 October 2019 . https://web.archive.org/web/20191017165844/https://blog.powerupcloud.com/autoscaling-based-on-cpu-memory-in-kubernetes-part-ii-fe2e495bddd4 . dead .
    27. Web site: Configure Kubernetes Autoscaling With Custom Metrics . Bitnami . BitRock . 27 December 2018 . 15 November 2018 . 27 March 2019 . https://web.archive.org/web/20190327101233/https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/ . dead .
    28. Web site: An Introduction to Kubernetes. DigitalOcean. 24 September 2015. live. https://web.archive.org/web/20151001183617/https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes. 1 October 2015.
    29. Web site: Kubernetes Infrastructure. OpenShift Community Documentation. OpenShift. 24 September 2015. live. https://web.archive.org/web/20150706121423/https://docs.openshift.org/latest/architecture/infrastructure_components/kubernetes_infrastructure.html. 6 July 2015.
    30. Web site: Kubernetes Hardening Guide. January 26, 2024.
    31. Container Linux by CoreOS: Cluster infrastructure.
    32. Web site: Kubernetes from the ground up: API server. Marhubi. Kamal. 2015-09-26. kamalmarhubi.com. 2015-11-02. live. https://web.archive.org/web/20151029131948/http://kamalmarhubi.com/blog/2015/09/06/kubernetes-from-the-ground-up-the-api-server/. 2015-10-29.
    33. Web site: An Introduction to Kubernetes . 20 July 2018 . Ellingwood . Justin . 2 May 2018 . . en . One of the most important primary services is an API server. This is the main management point of the entire cluster as it allows a user to configure Kubernetes' workloads and organizational units. It is also responsible for making sure that the etcd store and the service details of deployed containers are in agreement. It acts as the bridge between various components to maintain cluster health and disseminate information and commands. . https://web.archive.org/web/20180705232851/https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes . 5 July 2018.
    34. Web site: Deployments . 2022-02-27 . Kubernetes . en.
    35. Web site: The Three Pillars of Kubernetes Container Orchestration - Rancher Labs. 18 May 2017. rancher.com. 22 May 2017. live. https://web.archive.org/web/20170624022635/http://rancher.com/three-pillars-kubernetes-container-orchestration/. 24 June 2017.
    36. Web site: Scheduling Framework . July 24, 2023 . Kubernetes Documentation.
    37. Web site: Overview of a Replication Controller . Documentation . . 2015-11-02 . live . https://web.archive.org/web/20150922010433/https://coreos.com/kubernetes/docs/latest/replication-controller.html . 2015-09-22 .
    38. Web site: Kubernetes: Exciting Experimental Features . Sanders . Jake . Livewyer . 2015-10-02 . 2015-11-02 . live . https://web.archive.org/web/20151020071601/http://www.livewyer.com/blog/2015/10/02/kubernetes-exciting-experimental-features . 2015-10-20 .
    39. Web site: What [..] is a Kubelet?. Marhubi. Kamal. 2015-08-27. kamalmarhubi.com. 2015-11-02. live. https://web.archive.org/web/20151113064016/http://kamalmarhubi.com/blog/2015/08/27/what-even-is-a-kubelet/. 2015-11-13.
    40. Web site: Kubernetes Security Issues and Best Practices Snyk. 2021-05-16. snyk.io. 26 July 2020. en-US.
    41. Web site: 2016-12-19 . Introducing Container Runtime Interface (CRI) in Kubernetes . 2021-05-16 . Kubernetes . en.
    42. Web site: Container Runtime Interface (CRI) . July 24, 2023 . Kubernetes Documentation .
    43. Web site: 10 October 2018 . Kubernetes v1.12: Introducing RuntimeClass . kubernetes.io.
    44. Web site: Deprecate Dockershim - Kubernetes Github repository - PR 94624 . Github.com.
    45. Web site: 2 December 2020 . Don't Panic: Kubernetes and Docker . 2020-12-22 . Kubernetes Blog . en.
    46. Web site: kubernetes/community . 2021-05-16 . GitHub . en.
    47. Web site: 17 February 2022 . Updated: Dockershim Removal FAQ . Kubernetes Blog.
    48. Web site: 11 July 2016 . rktnetes brings rkt container engine to Kubernetes . kubernetes.io.
    49. Web site: Namespaces. kubernetes.io.
    50. Web site: Pods. kubernetes.io.
    51. Web site: Kubernetes 101 – Networking . Langemak . Jon . Das Blinken Lichten . 2015-02-11 . 2015-11-02 . live . https://web.archive.org/web/20151025035709/http://www.dasblinkenlichten.com/kubernetes-101-networking/ . 2015-10-25 .
    52. Web site: ReplicaSet. kubernetes.io. en. 2020-03-03.
    53. Web site: StatefulSets . kubernetes.io.
    54. Web site: DaemonSet . kubernetes.io.
    55. Web site: Service. kubernetes.io.
    56. Web site: Kubernetes 101 – External Access Into The Cluster . Langemak . Jon . Das Blinken Lichten . 2015-02-15 . 2015-11-02 . live . https://web.archive.org/web/20151026035212/http://www.dasblinkenlichten.com/kubernetes-101-external-access-into-the-cluster/ . 2015-10-26 .
    57. Web site: Volumes. kubernetes.io.
    58. Web site: ConfigMaps . July 24, 2023 . Kubernetes Documentation.
    59. Web site: Secrets . 2023-07-23 . Kubernetes . en.
    60. Web site: Intro: Docker and Kubernetes training - Day 2 . . 2015-10-20 . 2015-11-02 . dead . https://web.archive.org/web/20151029210659/http://christianposta.com/slides/docker/generated/day2.html#/label-examples . 2015-10-29 .
    61. Web site: 2018-04-19. Container Attached Storage: A primer. 2021-05-16. Cloud Native Computing Foundation. en-US.
    62. Web site: DataCore Acquired MayaData, original developer of OpenEBS. datacore.com. 18 November 2021.
    63. Web site: CNCF SURVEY 2019. cncf.io.
    64. Web site: 2018-04-19. Container Attached Storage: A primer. 2020-10-09. Cloud Native Computing Foundation. en-US.
    65. Web site: Container Attached Storage SNIA. 2020-10-09. www.snia.org.
    66. Web site: Cloud Native Application Checklist: Cloud Native Storage. 2020-10-09. www.replex.io. en.
    67. Web site: Introducing Container Storage Interface (CSI) Alpha for Kubernetes. kubernetes.io. 10 January 2018 .
    68. Web site: Container Storage Interface (CSI) for Kubernetes GA. kubernetes.io. 15 January 2019 .
    69. Web site: Operating etcd clusters for Kubernetes . July 24, 2023 . Kubernetes Documentation.
    70. Web site: Objects In Kubernetes . July 24, 2023 . Kubernetes Documentation.
    71. Web site: API Conventions . July 24, 2023 . . Kubernetes.
    72. Web site: Owners and Dependents . July 24, 2023 . Kubernetes Documentation.
    73. Web site: Garbage Collection . July 24, 2023 . Kubernetes Documentation.
    74. Web site: Custom Resources . July 24, 2023 . Kubernetes Documentation.
    75. Web site: Cloud Native Landscape . July 24, 2023 . Cloud Native Computing Foundation.
    76. Web site: Controlling Access to the Kubernetes API . July 24, 2023 . Kubernetes Documentation.
    77. Web site: Remove the apiserver insecure port · Issue #91506 · kubernetes/kubernetes . GitHub.
    78. Web site: Authorization . July 24, 2023 . Kubernetes Documentation.
    79. Web site: Organizing Cluster Access Using kubeconfig Files . July 24, 2023 . Kubernetes Documentation.
    80. Web site: Authorization Overview . July 24, 2023 . Kubernetes Documentation.
    81. Web site: Command line tool (kubectl) . July 24, 2023 . Kubernetes Documentation.
    82. Web site: Client Libraries . July 24, 2023 . Kubernetes Documentation.
    83. Web site: Introduction - The Cluster API Book. cluster-api.sigs.k8s.io.
    84. Web site: The 7 Most Popular Kubernetes Distributions. 2021-12-28. en.
    85. Web site: MSV. Janakiram. Why Kubernetes Developer Ecosystem Needs A PaaS. 2021-05-16. Forbes. en.
    86. Web site: 2022-01-03. 5 Cloud Native Trends to Watch out for in 2022. 2022-02-03. The New Stack. en-US.
    87. Web site: 4 May 2022. Kubernetes Patch Releases. GitHub. 2022-05-09.
    88. Web site: Kubernetes 1.1 Performance upgrades, improved tooling and a growing community. November 9, 2015. kubernetes.io.
    89. Web site: Kubernetes 1.2: Even more performance upgrades, plus easier application deployment and management. March 17, 2016. kubernetes.io.
    90. Web site: Kubernetes 1.3: Bridging Cloud Native and Enterprise Workloads. July 6, 2016. kubernetes.io.
    91. Web site: Kubernetes 1.4: Making it easy to run on Kubernetes anywhere. September 26, 2016. kubernetes.io.
    92. Web site: Kubernetes 1.5: Supporting Production Workloads. December 13, 2016. kubernetes.io.
    93. Web site: Kubernetes 1.6: Multi-user, Multi-workloads at Scale. March 28, 2017. kubernetes.io.
    94. Web site: Kubernetes 1.7: Security Hardening, Stateful Application Updates and Extensibility. June 30, 2017. kubernetes.io.
    95. Web site: Kubernetes 1.8: Security, Workloads and Feature Depth. September 29, 2017. kubernetes.io.
    96. Web site: Kubernetes 1.9: Apps Workloads GA and Expanded Ecosystem. December 15, 2017. kubernetes.io.
    97. Web site: Kubernetes 1.10: Stabilizing Storage, Security, and Networking. March 26, 2018. kubernetes.io.
    98. Web site: Kubernetes 1.11: In-Cluster Load Balancing and CoreDNS Plugin Graduate to General Availability. June 27, 2018. kubernetes.io.
    99. Web site: Kubernetes 1.12: Kubelet TLS Bootstrap and Azure Virtual Machine Scale Sets (VMSS) Move to General Availability. September 27, 2018. kubernetes.io.
    100. Web site: Kubernetes 1.13: Simplified Cluster Management with Kubeadm, Container Storage Interface (CSI), and CoreDNS as Default DNS are Now Generally Available. December 3, 2018. kubernetes.io.
    101. Web site: Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA. March 25, 2019. kubernetes.io.
    102. Web site: Kubernetes 1.15: Extensibility and Continuous Improvement. June 19, 2019. kubernetes.io.
    103. Web site: Kubernetes 1.16: Custom Resources, Overhauled Metrics, and Volume Extensions. September 18, 2019. kubernetes.io.
    104. Web site: Kubernetes 1.17: Stability. December 9, 2019. kubernetes.io.
    105. Web site: Kubernetes 1.18: Fit & Finish. March 25, 2020. kubernetes.io.
    106. Web site: 26 August 2020. Kubernetes 1.19 Release Announcement. 2020-08-28. Kubernetes.
    107. Web site: Kubernetes 1.19: Accentuate the Paw-sitive . Kubernetes . 2020-08-26 . 2024-01-13.
    108. Web site: Kubernetes 1.20: The Raddest Release. December 8, 2020. kubernetes.io.
    109. Web site: Kubernetes 1.21: Power to the Community. April 8, 2021. kubernetes.io.
    110. Web site: Kubernetes 1.22: Reaching New Peaks. August 4, 2021. kubernetes.io.
    111. Web site: Kubernetes 1.23: The Next Frontier. December 7, 2021. kubernetes.io.
    112. Web site: Kubernetes 1.24: Stargazer. May 3, 2022. kubernetes.io.
    113. Web site: Kubernetes v1.25: Combiner. August 23, 2022. kubernetes.io.
    114. Web site: Kubernetes v1.26: Electrifying. December 9, 2022. kubernetes.io.
    115. Web site: Kubernetes v1.27: Chill Vibes. April 11, 2023. kubernetes.io.
    116. Web site: Kubernetes v1.28: Planternetes. August 15, 2023. kubernetes.io.
    117. Web site: Kubernetes v1.29: Mandala. December 13, 2023. kubernetes.io.
    118. Web site: Kubernetes v1.30: Uwubernetes. April 17, 2024. kubernetes.io.