Key concepts
Learn more about Kubernetes access control.
Access Token allows API Endpoint user authorization. It is contained in the Kubeconfig and in the API Server settings. Read more on Authenticating users in Kubernetes.
API Endpoint is the Control Plane URL address for cluster management access (both read and write). See an in-depth description of Kubernetes API.
API Server — the Control Plane component that orchestrates the API Endpoint. It is responsible for authorization and for routing requests between Kubernetes components. Read more on the Kubernetes API.
Autoscaler is our custom modification of the native Kubernetes Cluster Autoscaler component. It automatically scales the workload when your system's demand increases. Proceed to the dedicated Autoscaler page for a detailed description.
You can find a detailed description of the native Kubernetes Cluster Autoscaler here.
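As a rough sketch of what triggers scaling: the autoscaler adds Workers when pods cannot be scheduled on the existing ones, for example when a Deployment's replicas request more resources than the current Workers can accommodate (the name, image and values below are illustrative only):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # illustrative name
spec:
  replicas: 10                    # raising replicas may exceed current Worker capacity
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25       # placeholder image
          resources:
            requests:
              cpu: "500m"         # unschedulable requests prompt the autoscaler to add Workers
              memory: 512Mi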
Cloud Controller Manager (CCM) is a Kubernetes Control Plane component that embeds cloud-specific control logic and is responsible for interaction between Kubernetes controllers and services (Load Balancer, CSI, Autoscaler) and the cloud provider API.
CCM separates cloud-specific components from the core infrastructure and handles cloud platform interactions. This enables independent feature release tracks and integration with multiple cloud providers. CCM synchronizes Websa Cloud and Kubernetes states and configurations. Follow the link for extended CCM information.
Cloud Controller Manager configures endpoints and implements the Load Balancer service integrated with the cloud infrastructure. Websa Load Balancer instances can be ordered via our Helpdesk.
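As a minimal sketch (the Service name, selector and ports are illustrative), creating a Service of type LoadBalancer is what prompts CCM to provision a cloud Load Balancer:
apiVersion: v1
kind: Service
metadata:
  name: web-lb                # illustrative name
spec:
  type: LoadBalancer          # CCM provisions a cloud Load Balancer for this Service
  selector:
    app: web                  # illustrative pod label
  ports:
    - port: 80                # externally exposed port
      targetPort: 8080        # container port behind the balancer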
Two types of traffic constitute incoming cluster traffic: maintenance traffic is routed via the API Endpoint; workload traffic can be handled with an Ingress Controller or with a Load Balancer.
Kubernetes Cluster is a service instance that incorporates a Control Plane and some of our cloud components (Cloud Servers, Load Balancers, Volumes, Private Networks, etc.) in the form of containers and acts as a cloud solutions orchestrator. Read up on Kubernetes Clusters.
A CNI plugin for cluster networking is required to implement the Kubernetes network model. We currently use the third-party solution Calico CNI.
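Because Calico enforces the standard Kubernetes NetworkPolicy API, pod-to-pod traffic can be restricted with a policy like the following sketch (all names and labels are placeholders):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web     # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web                    # placeholder label of the protected pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend      # placeholder label of the allowed clients
      ports:
        - protocol: TCP
          port: 80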
CSI is a Kubernetes component responsible for volume management; it interacts with Websa Volumes via the WCS API. You can find out more on CSI here. A StorageClass indicates the volume tariff plan.
Resource example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: volume-abc-tariff           # StorageClass names must be lowercase DNS-1123 names
provisioner: kubernetes.io/aws-ebs  # illustrative provisioner; use the one matching your CSI driver
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
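A PersistentVolumeClaim can then reference this StorageClass by name to request a volume on the corresponding tariff plan (the claim name and size below are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: volume-abc-tariff     # must match the StorageClass name above
  resources:
    requests:
      storage: 10Gi                       # illustrative size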
Control Plane is the service designed as a set of controllers that organize Kubernetes Cluster operation. It is physically located on the cloud provider side. The client accesses the Control Plane via the specific API Endpoint.
Control Plane incorporates the following components:
Including controllers maintained within our platform:
Compatible user-maintained third-party solutions, such as (but not limited to):
Ingress Controller implements a Kubernetes Ingress, acting as a reverse proxy and performing L7 traffic distribution between platform pods. You'll need to implement one of the suitable third-party IC solutions, such as Nginx Ingress Controller or HAProxy Ingress Controller.
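For illustration, once such a controller is installed, it serves Ingress resources like the sketch below (host, class name, Service name and port are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                 # illustrative name
spec:
  ingressClassName: nginx           # assumes the Nginx Ingress Controller is installed
  rules:
    - host: app.example.com         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web           # placeholder Service name
                port:
                  number: 80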
Kube-proxy is a native K8s component running on each worker. Kube-proxy applies network changes as Network Address Translation (NAT) rules within the worker. Learn more about kube-proxy.
Kubeconfig is a Kubernetes configuration file that grants a client access to a specific Kubernetes Cluster instance. See more on Kubeconfig.
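As a minimal sketch of how the pieces above fit together (cluster name, server URL, user and token are placeholders), a kubeconfig ties the API Endpoint and the Access Token to a context:
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster                        # placeholder cluster name
    cluster:
      server: https://203.0.113.10:6443     # placeholder API Endpoint URL
      certificate-authority-data: <base64-encoded CA>
users:
  - name: cluster-admin                     # placeholder user
    user:
      token: <access-token>                 # Access Token issued for the cluster
contexts:
  - name: my-cluster-admin
    context:
      cluster: my-cluster
      user: cluster-admin
current-context: my-cluster-admin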
Kubelet is another native K8s component running on every worker. Kubelet creates, deletes, applies and updates worker containers. Explore the details.
Pod is a group of one or more application containers (Docker containers, for example) with shared storage, an IP address, and information such as (but not limited to) the specific ports and specifications for running the containers. Pods are deployed on Workers.
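A minimal Pod manifest illustrates this shared specification (name, image and port are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                 # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25         # placeholder container image
      ports:
        - containerPort: 80     # port exposed by the container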
Worker is a host that runs the containers and performs any tasks assigned. Kubelet and kube-proxy run on each Worker. Find out more about Workers.
Worker Pool is a group of Workers with the same specifications and located within a single Cluster on a public or private cloud.