Kubernetes: Resistance is Futile


Aditya Kumar
10 min read · Dec 29, 2020

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Why is Kubernetes getting so popular?

At the time of this article, Kubernetes is about six years old, and over the last two years it has risen in popularity to consistently rank among the most loved platforms in Stack Overflow's annual Developer Survey. This year, it comes in as the number three most loved platform. If you haven't heard about Kubernetes yet, it's a platform that allows you to run and orchestrate container workloads.

Containers grew out of Linux kernel process-isolation features: namespaces, dating from 2002, and cgroups, added in 2007. Containers became more of a thing when LXC became available in 2008, and Google developed its own internal "run everything in containers" mechanism called Borg. Fast forward to 2013, and Docker was released, popularizing containers for the masses. At the time, Mesos was the primary tool for orchestrating containers; however, it wasn't widely adopted. Kubernetes reached version 1.0 in 2015 and quickly became the de facto container orchestration standard.

To try to understand the popularity of Kubernetes, let’s consider some questions. When was the last time developers could agree on the way to deploy production applications? How many developers do you know who run tools as is out of the box? How many cloud operations engineers today don’t understand how applications work? We’ll explore the answers in this article.

→ Infrastructure as YAML

For those coming from the world of Puppet and Chef, one of the big shifts with Kubernetes has been the move from infrastructure as code towards infrastructure as data, specifically as YAML. All Kubernetes resources, including Pods, ConfigMaps, Deployments, Volumes, and so on, can be expressed in a YAML file. For example:

apiVersion: v1
kind: Pod
metadata:
  name: site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80

This representation makes it easier for DevOps or site reliability engineers to fully express their workloads without the need to write code in a programming language like Python, Ruby, or JavaScript.
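Because the manifest is declarative, applying it is a single step. Assuming the manifest above is saved as a file such as site-pod.yaml (the file name is arbitrary), running kubectl apply -f site-pod.yaml creates the Pod, re-running the same command after editing the file moves the cluster towards the new desired state, and kubectl delete -f site-pod.yaml removes the Pod again.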

→ Innovation

Over the last few years, Kubernetes has had a major release every three or four months, which works out to three or four major releases a year. The number of new features being introduced hasn't slowed either, evidenced by over 30 different additions and changes in its most recent release. Furthermore, contributions show no sign of slowing down even during these difficult times, as the activity on the Kubernetes GitHub project indicates.

The new features give cluster operators more flexibility when running a variety of different workloads. Software engineers also love having more control over deploying their applications directly to production environments.

→ Community

Another big aspect of Kubernetes' popularity is its strong community. For starters, Kubernetes was donated to a vendor-neutral home, the Cloud Native Computing Foundation, in 2015 as it hit version 1.0.

There is also a wide range of community SIGs (special interest groups) that target different areas of Kubernetes as the project moves forward. They continuously add new features and make it even more user-friendly.

The Cloud Native Computing Foundation also organizes CloudNativeCon/KubeCon, which, as of this writing, is the largest open-source event in the world. The event, normally held up to three times a year, gathers thousands of technologists and professionals who want to improve Kubernetes and its ecosystem, as well as make use of the new features released every three months.

Furthermore, the Cloud Native Computing Foundation has a Technical Oversight Committee that, together with the SIGs, looks at the foundation's new and existing projects in the cloud-native ecosystem. Most of these projects enhance the value proposition of Kubernetes.

Finally, I believe that Kubernetes would not have the success it does without the community's conscious effort to be inclusive of one another and welcoming to newcomers.

→ Future

One of the main challenges developers will face is how to focus more on the details of their code than on the infrastructure that code runs on. Serverless is emerging as one of the leading architectural paradigms to address that challenge. There are already advanced frameworks, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure away from the developer.

This article has offered only a brief peek at Kubernetes; it is just the tip of the iceberg. There are many more resources, features, and configurations users can leverage. We will continue to see new open-source projects and technologies that enhance or evolve Kubernetes, and, as mentioned, the contributions and the community aren't going anywhere.

Why you need Kubernetes and what it can do

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
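As a rough illustration of that canary pattern, the sketch below (the names, labels, and image tags are hypothetical, not taken from any particular system) runs a stable Deployment and a small canary Deployment behind a single Service. Because the Service selects only the shared app: web label, traffic is split roughly in proportion to the replica counts:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # matches Pods from both Deployments below
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9                 # roughly 90% of traffic lands on the stable version
  selector:
    matchLabels:
      app: web
      track: stable
  template:
    metadata:
      labels:
        app: web
        track: stable
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # hypothetical current release
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                 # roughly 10% of traffic lands on the canary
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.1   # hypothetical candidate release
          ports:
            - containerPort: 80

Promoting the canary then amounts to updating the stable Deployment's image and scaling the canary back down to zero.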

Kubernetes provides you with:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment stays stable. (A combined example of several of these capabilities follows this list.)
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You can describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
  • Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks and tell it how much CPU and memory (RAM) each container needs. Kubernetes then fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
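
To make several of these bullets concrete, here is a minimal, illustrative Deployment and Service; the image, Secret name, and probe path are assumptions made for the sketch, not requirements:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate              # automated rollouts: old Pods are replaced gradually
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: nginx
          ports:
            - containerPort: 80
          resources:                 # automatic bin packing: the scheduler places Pods by these numbers
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          readinessProbe:            # traffic is only routed once this check passes
            httpGet:
              path: /
              port: 80
          livenessProbe:             # self-healing: the container is restarted if this check fails
            httpGet:
              path: /
              port: 80
          env:
            - name: API_TOKEN        # secret management: value comes from a Secret, not the image
              valueFrom:
                secretKeyRef:
                  name: frontend-secrets   # hypothetical Secret created separately
                  key: api-token
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend                    # service discovery and load balancing across the Pods above
  ports:
    - port: 80

The resources block drives bin packing, the probes drive self-healing and readiness-based traffic routing, the RollingUpdate strategy drives automated rollouts, the secretKeyRef keeps the credential out of the image, and the Service provides discovery and load balancing across the Pods.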

What Kubernetes is not

Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, and lets users integrate their logging, monitoring, and alerting solutions. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.

Kubernetes:

  • Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
  • Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
  • Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
  • Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
  • Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
  • Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
  • Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
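
As a small illustration of that last point: if a Pod belonging to a Deployment that declares replicas: 3 is deleted, the ReplicaSet controller notices the gap and creates a replacement on its own. Likewise, a command such as kubectl scale deployment frontend --replicas=5 (using the hypothetical frontend Deployment sketched earlier) only records a new desired state, and the controllers converge on it without anyone specifying the intermediate steps.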

Here are some case studies showing how big companies actually use Kubernetes, and why.

CASE STUDY: NOKIA

Challenge

Nokia’s core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. “As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure,” says Gergely Csatari, Senior Open Source Engineer. “There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on VMware Cloud and OpenStack Cloud. We want to run the same product on all of these different infrastructures without changing the product itself.”

Solution

The company decided that moving to cloud native technologies would allow teams to have infrastructure-agnostic behavior in their products. Teams at Nokia began experimenting with Kubernetes in pre-1.0 versions. “The simplicity of the label-based scheduling of Kubernetes was a sign that showed us this architecture will scale, will be stable, and will be good for our purposes,” says Csatari. The first Kubernetes-based product, the Nokia Telephony Application Server, went live in early 2018. “Now, all the products are doing some kind of re-architecture work, and they’re moving to Kubernetes.”

Impact

Kubernetes has enabled Nokia’s foray into 5G. “When you develop something that is part of the operator’s infrastructure, you have to develop it for the future, and Kubernetes and containers are the forward-looking technologies,” says Csatari. The teams using Kubernetes are already seeing clear benefits. “By separating the infrastructure and the application layer, we have less dependencies in the system, which means that it’s easier to implement features in the application layer,” says Csatari. And because teams can test the exact same binary artifact independently of the target execution environment, “we find more errors in early phases of the testing, and we do not need to run the same tests on different target environments, like VMware, OpenStack, or bare metal,” he adds. As a result, “we save several hundred hours in every release.”

CASE STUDY: adidas

Challenge

In recent years, the adidas team was happy with its software choices from a technology perspective, but accessing all of the tools was a problem. For instance, "just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who's responsible, give the internal cost center a call so that they can do recharges," says Daniel Eichten, Senior Director of Platform Engineering. "The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week."

Solution

To improve the process, “we started from the developer point of view,” and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago. They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus.

Impact

Just six months after the project began, 100% of the adidas e-commerce site was running on Kubernetes. Load time for the e-commerce site was reduced by half. Releases went from every 4–6 weeks to 3–4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, adidas is now running 40% of its most critical, impactful systems on its cloud native platform.

Conclusion:

Kubernetes has not only helped with the vertical and horizontal scaling of containers, it has also raised the bar for what engineering teams expect from their platforms. It has succeeded in simplifying deployments that once required careful up-front estimates of server capacity, and it has gained enormous popularity in a short span of time. It is no surprise that many industry engineers have inspiring stories about how it helped get their businesses on track.

Thank you, everyone, for reading!
