Day 30 Task: Kubernetes Architecture

Kubernetes Overview

With the widespread adoption of containers among organizations, Kubernetes, the container-centric management software, has become a standard to deploy and operate containerized applications and is one of the most important parts of DevOps.

Kubernetes was originally developed at Google and released as open source in 2014. It builds on 15 years of Google's experience running containerized workloads, along with valuable contributions from the open-source community, and was inspired by Google's internal cluster management system, Borg.

Tasks

  • What is Kubernetes? Write in your own words and why do we call it k8s?

Kubernetes is an open-source container orchestration platform, frequently referred to as K8s. Simply put, it automates the deployment, scaling, and management of containerized applications. Applications and their dependencies are packaged and run in containers, which makes it easier to maintain consistency across many environments.

The abbreviation "K8s" comes from replacing the eight letters between the "K" and the "s" in "Kubernetes" with the number 8.

  • What are the benefits of using k8s?

For the management of containerized applications, Kubernetes has various advantages:

  1. Container Orchestration: Kubernetes reduces manual effort and human error by automating container deployment, scaling, and administration.

  2. Scalability: It makes it simple to scale applications by adding or removing containers in line with workload demands.

  3. High Availability: Kubernetes keeps applications available by distributing containers across several nodes and automatically replacing failed containers.

  4. Resource Efficiency: By effectively arranging containers onto nodes, it maximizes resource utilization.

  5. Declarative Configuration: You describe the desired state of your applications in configuration files, and Kubernetes continuously works to make the cluster match it (a minimal sketch follows this list).

  6. Self-Healing: Kubernetes can automatically detect and replace unhealthy containers or nodes.

  7. Service Discovery and Load Balancing: It provides built-in techniques for load balancing across containers and service discovery.

  8. Rolling Updates: Applications can be updated and rolled back without causing any downtime.

  9. Ecosystem: Kubernetes has a rich ecosystem of tools and extensions for monitoring, logging, and more.
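
For instance, here is a minimal sketch of declarative configuration using the official Kubernetes Python client (`pip install kubernetes`); the Deployment name `demo-nginx` and the image tag are purely illustrative assumptions:

```python
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, the same credentials kubectl uses

# Declare the desired state: a Deployment that should always run 3 nginx Pods.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-nginx"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state, not an imperative command
        selector=client.V1LabelSelector(match_labels={"app": "demo-nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-nginx"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")]
            ),
        ),
    ),
)

# We only declare *what* we want; the control plane works out *how* to get there.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Once applied, Kubernetes keeps three replicas running even if a Pod or node fails, which is the self-healing behaviour described above.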

  • Explain the architecture of Kubernetes

Kubernetes has a master-worker architecture (a small inspection sketch follows this list):

  • Master Node: The control plane components are hosted on the master node. These components include:

    • API Server: Acts as the front-end for the Kubernetes control plane and is responsible for processing API requests.

    • Scheduler: Assigns work to worker nodes, deciding where to run containers based on resource requirements and policies.

    • Controller Manager: Ensures the desired state of the cluster and handles tasks like replication, scaling, and node management.

    • etcd: A distributed key-value store that stores the cluster's configuration data.

  • Worker Node: The worker nodes are responsible for running containers. They have the following key components:

    • Kubelet: Communicates with the API server and ensures containers are running in a Pod.

    • Container Runtime: The software responsible for running containers (e.g., Docker, containerd).

    • Kube Proxy: Maintains network rules on nodes and enables communication between Pods.
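
As a rough illustration, these components can be inspected from code. The sketch below assumes a kubeadm-style cluster where the control-plane components run as Pods in the `kube-system` namespace, and uses the official Kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Nodes registered with the cluster (control plane and workers).
for node in v1.list_node().items:
    roles = [label for label in node.metadata.labels if "node-role" in label]
    print(node.metadata.name, roles)

# On kubeadm-style clusters, kube-apiserver, kube-scheduler,
# kube-controller-manager, etcd and kube-proxy usually show up here as Pods.
for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    print(pod.metadata.name, pod.status.phase)
```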

  • What is a Control Plane?

The control plane, also known as the master node, is the brain of the Kubernetes cluster. It manages and controls the overall state and operation of the cluster. The control plane consists of several components, including the API server, scheduler, controller manager, and etcd, as mentioned earlier.

  • API Server: Serves as the front end of the control plane and the entry point for all API calls. It validates and processes requests, then interacts with the etcd store to read or modify the cluster's state.

  • Scheduler: Determines where and how to run containers based on resource requirements, affinity/anti-affinity rules, and other constraints.

  • Controller Manager: Maintains the desired state of various resources through its controllers. It ensures that the cluster stays in the specified configuration and handles tasks such as scaling and node management (see the sketch after this list).

  • etcd: A distributed and consistent key-value store that stores the entire configuration and state of the Kubernetes cluster. It serves as the source of truth for all data in the cluster.
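
A small sketch of that reconciliation idea, reading the desired state (`spec`) against the observed state (`status`) of the hypothetical `demo-nginx` Deployment created earlier:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="demo-nginx", namespace="default")
print("desired replicas:", dep.spec.replicas)          # what we declared
print("ready replicas:  ", dep.status.ready_replicas)  # what the controllers have achieved

# Scaling is just editing the desired state; the controller manager and
# scheduler then converge the cluster to match it.
dep.spec.replicas = 5
apps.patch_namespaced_deployment(name="demo-nginx", namespace="default", body=dep)
```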

  • Write the difference between kubectl and kubelets.

  • kubectl: kubectl is the command-line tool that developers and administrators use to interact with a Kubernetes cluster. From the command line, users can create, modify, delete, and inspect Kubernetes resources (such as Pods, Services, and Deployments). To carry out these actions, kubectl talks to the Kubernetes API server.

  • kubelet: The kubelet is an agent that runs on every worker node in the cluster. Its main duty is to make sure the containers described in each Pod are running and healthy. To manage those containers, it takes the Pod specifications provided by the API server and communicates with the container runtime (such as containerd or Docker). The kubelet also reports the state of its node and containers back to the control plane (see the sketch after this list).
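
One way to see the split: a client such as kubectl (or the Python client below) only ever talks to the API server, while the node conditions and versions it reads back were reported by the kubelet running on each node. A minimal sketch:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Conditions like Ready, MemoryPressure and DiskPressure are posted to the
    # API server by the kubelet on that node.
    ready = next(c for c in node.status.conditions if c.type == "Ready")
    print(node.metadata.name,
          "Ready =", ready.status,
          "kubelet =", node.status.node_info.kubelet_version)
```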

  • Explain the role of the API server.

  • In Kubernetes, the API server is the central management point and the main interface for communicating with the cluster (a small sketch follows this list). Its primary functions include:

    • Authentication and Authorization: The API server verifies users' identities and makes sure they have the rights necessary to carry out requested tasks.

    • Validation and Admission Control: It validates incoming requests to make sure they conform to the cluster's policies and constraints. Admission controllers can be configured to perform additional checks, and to mutate or reject requests, before changes to the cluster state are accepted.

    • Serving the Kubernetes API: The API server exposes the Kubernetes API, enabling administrators, developers, and outside services to communicate programmatically with the cluster.

    • Communication with etcd: To read or modify the cluster's configuration and state, the API server talks to the etcd store. It is the only component that communicates with etcd directly, acting as an intermediary that keeps the stored state consistent and protected.

    • Endpoint for kubectl: kubectl and other Kubernetes clients interact with the API server to manage and query the cluster. The API server processes their requests, translates them into actions, and works with the other components to carry those actions out.
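
A minimal sketch of a client talking to the API server directly: the kubeconfig file supplies the credentials (authentication), and each call below is an authenticated, validated request that the API server answers, reading state from etcd as needed:

```python
from kubernetes import client, config

config.load_kube_config()

# The /version endpoint is served by the API server itself.
print(client.VersionApi().get_code().git_version)

# A typical read: the API server authenticates and authorizes the request,
# then returns the namespaces stored in etcd.
for ns in client.CoreV1Api().list_namespace().items:
    print(ns.metadata.name)
```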


Happy Learning

Thanks For Reading! :)

-Sriparthu💝💥