Kubernetes Real Industry Use Cases

Vinodha kumara
9 min read · Dec 26, 2020

Container-based microservices architectures have profoundly changed the way development and operations teams test and deploy modern software. Containers help companies modernize by making it easier to scale and deploy applications, but containers have also introduced new challenges and more complexity by creating an entirely new infrastructure ecosystem.

Large and small software companies alike now deploy thousands of container instances daily, and that is the scale of complexity they have to manage. So how do they do it?

What is Kubernetes?


Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Containers have become increasingly popular since the Docker containerization project launched in 2013, but large, distributed containerized applications can become increasingly difficult to coordinate. By making containerized applications dramatically easier to manage at scale, Kubernetes has become a key part of the container revolution.

Going back in time

Let’s take a look at why Kubernetes is so useful by going back in time.

Container Evolution

Traditional deployment era: Early on, organizations ran applications on physical servers, with no way to define resource boundaries between applications. For example, if multiple applications ran on a physical server, one application could take up most of the resources and the other applications would underperform. A solution was to run each application on a different physical server, but this did not scale: resources were underutilized, and it was expensive for organizations to maintain many physical servers.

Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server’s CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization, you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide extra benefits, such as:

🔷 Increased ease and efficiency of container image creation compared to VM image use.

🔷 Continuous development, integration, and deployment.

🔷 Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.

🔷 Resource isolation: predictable application performance.

🔷 Resource utilization: high efficiency and density.

Why you need Kubernetes and what it can do

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

Kubernetes provides you with:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new container.
  • Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve (a minimal manifest illustrating this is sketched after this list).
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
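
To make a few of these features concrete, here is a minimal, hypothetical Deployment manifest: resource requests and limits feed the bin-packing scheduler, a liveness probe drives self-healing, and a database password is injected from a Secret instead of being baked into the image. The names (web, web-image:1.0, db-secret) are placeholders for illustration, not taken from any of the case studies below.

```yaml
# Minimal sketch of a Kubernetes Deployment (hypothetical names and values)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: web-image:1.0     # placeholder image
          resources:
            requests:              # used by the scheduler for bin packing onto nodes
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:           # user-defined health check; failing containers are restarted
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          env:
            - name: DB_PASSWORD    # pulled from a Secret, not rebuilt into the image
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
```

Putting a Service in front of these pods then gives them a stable DNS name and load balancing, and a bad release can be rolled back with `kubectl rollout undo deployment/web`.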

Is Kubernetes getting adopted in enterprises?

In two words: Hell Yeah!

Several data points show rapid Kubernetes adoption. Sumo Logic’s fourth annual Continuous Intelligence Report on “The State of Modern Applications and DevSecOps in the Cloud” highlights some cool adoption data on Kubernetes within enterprises. The report states that K8s is seeing increased adoption in on-premise as well as cloud-based environments. In fact, 1 in 3 enterprises in the AWS cloud today use Kubernetes as their key orchestration solution.

Kubernetes in Real-World Use Cases

Tinder’s Move to Kubernetes


Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. What did they do?

The answer is, of course, Kubernetes.

Tinder’s engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale totaling 1,000 nodes, 15,000 pods, and 48,000 running containers.

Was that easy? No way. However, they had to do it to keep business operations running smoothly going forward. One of their engineering leaders said, “As we onboarded more and more services to Kubernetes, we found ourselves running a DNS service that was answering 250,000 requests per second.” Tinder’s entire engineering organization now has knowledge and experience on how to containerize and deploy their applications on Kubernetes.

Reddit’s Kubernetes Story


Reddit is one of the busiest sites in the world. Kubernetes forms the core of Reddit’s internal infrastructure.

For many years, the Reddit infrastructure team followed traditional ways of provisioning and configuring. That only went so far: doing things the old way, they started to see major drawbacks and failures, so they moved to Kubernetes.

The New York Times’s Journey to Kubernetes


Today the majority of the NYT’s customer-facing applications are running on Kubernetes. What an amazing story. The biggest impact has been an increase in the speed of deployment and productivity. Legacy deployments that took up to 45 minutes are now pushed in just a few minutes. It’s also given developers more freedom and fewer bottlenecks. The New York Times has gone from a ticket-based system for requesting resources and weekly deploy schedules to allowing developers to push updates independently.

Spotify Using Kubernetes

Spotify is an early K8s adopter and has seen significant cost savings by adopting K8s, as described in this note. Leveraging K8s, Spotify has seen 2–3x better CPU utilization through its orchestration capabilities, resulting in better IT spend optimization.


Airbnb’s Kubernetes Story

Airbnb’s transition from a monolithic to a microservices architecture is pretty amazing. They needed to scale continuous delivery horizontally, and the goal was to make continuous delivery available to the company’s 1,000 or so engineers so they could add new services. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services to Kubernetes. The net result is that Airbnb can now do over 500 deploys per day on average.

I also recommend watching this excellent presentation from Melanie Cebula, an infrastructure engineer at Airbnb.

Pinterest’s Kubernetes Story

With over 250 million monthly active users and over 10 billion recommendations served every single day, the engineers at Pinterest knew these numbers would only keep growing, and they began to feel the pain of scalability and performance issues.

Their initial strategy was to move their workload from EC2 instances to Docker containers; they first moved their services to Docker to free up engineering time spent on Puppet and to have an immutable infrastructure.

The next step was to move to Kubernetes. Now they can take ideas from ideation to production in a matter of minutes, whereas it used to take hours or even days. They have cut a great deal of overhead cost by utilizing Kubernetes and removed a lot of manual work, and engineers no longer have to worry about the underlying infrastructure.

Pokemon Go’s Kubernetes Story


How was Pokemon Go able to scale so efficiently and become so successful? The answer is Kubernetes. Pokemon Go was developed and published by Niantic Inc. and grew to 500+ million downloads and 20+ million daily active users.

Pokemon Go’s engineers never expected their user base to grow exponentially and surpass expectations within such a short time. They were not ready for it, and their servers couldn’t handle that much traffic.

Pokemon Go also faced a severe challenge with vertical and horizontal scaling because of the real-time activity of millions of users worldwide, something Niantic had not prepared for.

The solution was in the magic of containers. The application logic for the game ran on Google Container Engine (GKE), powered by the open-source Kubernetes project. Niantic chose GKE for its ability to orchestrate its container cluster at planetary scale, freeing the team to focus on deploying live changes for players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players that could be continuously adapted and improved. This gave them more time to concentrate on building the game’s application logic and new features rather than worrying about scaling.

LendingTree Using Kubernetes

LendingTree has many microservices that make up its business apps. LendingTree uses Kubernetes and its horizontal scaling capability to deploy and run these services, and to ensure that its customers have access to those services even during peak load. To get visibility into these containerized and virtual services and to monitor its Kubernetes deployment, LendingTree uses Sumo Logic.

We have data on 18,653 companies that use Kubernetes. These companies are most often found in the United States and in the Computer Software industry. Kubernetes is most often used by companies with 50–200 employees and $1M–$10M in revenue.

Conclusion: Kubernetes is a great tool for orchestrating containerized applications. It automates the very complex task of dynamically scaling an application in real time. The world is moving toward automation, and nowadays more and more companies are adopting Kubernetes.


Vinodha kumara

DevSecOps, MLOps, Cloud Computing, DE, ML, DL, Programmer, Blogger, Volunteer