Do you really need Kubernetes?

Emrah Şamdan
6 min read · Jun 29, 2020

When it comes to container orchestration, everyone thinks “Kubernetes!” After all, this is what the cool kids do. But as you guessed from the title of this article, you might want to pause for a moment before jumping on the bandwagon. Indeed, Kubernetes has a (very) steep learning curve, a bewilderingly vast ecosystem, and a seemingly infinite landscape of third-party providers attempting to sell you some kind of “easy Kubernetes” (which in itself is a telltale sign of its complexity). At the end of the day, dealing with Kubernetes can be mind-numbing.

In this article, we’ll take a look at various use cases where Kubernetes is one solution, but where there are also alternatives that actually might be better.

Example of a Three-Tier Website

Let’s consider first the archetypal use case of a three-tier website: load balancing, app, database. And let’s assume the app tier can automatically scale out and in, based on traffic. Now, let’s take a look at how different options handle this setup.

With Kubernetes

Kubernetes can handle all three tiers and provides solutions for each of them. It can also provide autoscaling via the HorizontalPodAutoscaler although, interestingly, even this built-in mechanism requires an add-on: the autoscaler needs a metrics source, such as metrics-server, which does not ship enabled by default.

The configuration, however, will be quite complicated, including persistent, stateful storage for the database tier. Autoscaling on anything beyond basic CPU or memory metrics would typically be based on Prometheus alarms as well, meaning you would have one more piece of software to understand, install, and configure…
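To make the scale of the configuration concrete, here is a minimal sketch of what autoscaling just the app tier might look like. All names and images are placeholders, and it assumes a recent cluster (the `autoscaling/v2` API) with the metrics-server add-on already installed:

```yaml
# Hypothetical app-tier Deployment plus a CPU-based autoscaler.
# Requires the metrics-server add-on; image and names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: example/webapp:1.0   # placeholder image
          resources:
            requests:
              cpu: 250m               # the HPA needs requests to compute utilization
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

And this covers only one tier — the load balancer (Service/Ingress) and the stateful database tier each need their own manifests on top of this.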

It is interesting to note that at the end of the day, Kubernetes will interface with your cloud vendor to provision VMs, storage, load balancers, etc. So whether you use Kubernetes or not, you’ll most likely end up using the same cloud resources either way.

Importantly, Kubernetes’ worker nodes and control-plane nodes still require regular maintenance and patching. So the onus is essentially on you to perform such maintenance activities, and the disruption to the Kubernetes cluster is quite significant. To be blunt, this is actually quite a significant pain point and is technically outside of Kubernetes’ scope since Kubernetes only manages the containers, not the VMs running the containers.

Alternative on AWS: Use Managed Services

With AWS, all three tiers can be handled by AWS services. The load-balancing tier can be handled with Elastic Load Balancing, and such a solution has multiple advantages: high availability, built-in autoscaling, and zero maintenance.

The app tier can be handled by ECS, a container-orchestration system like Kubernetes. Although it’s simpler and offers fewer features, ECS is much easier to configure and integrates well with other AWS services, such as Parameter Store and IAM. If you’re using Fargate as the backend (and you probably should, unless you have very specific requirements), you’re also freed from any maintenance operations. Plus, autoscaling uses CloudWatch alarms, so you don’t need to install and configure extra software. Furthermore, AWS Fargate is a step change in container orchestration and relieves you from having to think about worker nodes; e.g., you no longer have to manage two layers of autoscaling (containers and worker nodes).
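As an illustration of how little configuration the ECS route needs, here is a hedged CloudFormation sketch of target-tracking autoscaling for an ECS service. The cluster name (`web`) and service name (`app`) are placeholders, and the service itself is assumed to exist already:

```yaml
# Hypothetical CloudFormation snippet: CPU-based target tracking for an
# existing ECS service. No Prometheus, no metrics add-ons — CloudWatch
# metrics are used automatically.
Resources:
  AppScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ScalableDimension: ecs:service:DesiredCount
      ResourceId: service/web/app        # placeholder cluster/service names
      MinCapacity: 2
      MaxCapacity: 10
      RoleARN: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-service-role/ecs.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ECSService
  AppScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: app-cpu-target-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref AppScalableTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        TargetValue: 70.0
```

Compare this with the Kubernetes setup: there is no cluster to patch and no metrics pipeline to operate.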

You should definitely consider ECS backed by Fargate unless there are specific reasons why you can’t use it; for instance, if you need GPU-equipped instances, which Fargate doesn’t yet support.

The database tier will typically be handled by RDS. In just a few tick boxes, you can configure daily backups, a standby instance in a different availability zone with automated failover, read replicas, etc. Again, no maintenance is required from you, as this is managed automatically by RDS, and obviously, you don’t have to handle the configuration of stateful storage, etc.

Another Alternative on AWS: A Serverless Website

If your website is “modern” (i.e., it consists of a set of static files with JavaScript that performs API calls), you can use API Gateway backed by Lambda functions, with the static files served from S3 (optionally behind CloudFront). This allows both your web server and app tiers to be easily handled using serverless services.

Additionally, the database tier can be handled by Aurora Serverless (watch out for the costs, though!) for an entirely serverless solution. With such an architecture, autoscaling and high availability are built in, and you’ll have no hassle with maintenance and patching.
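A minimal AWS SAM template gives a feel for this architecture. This is a sketch only: the handler, code path, and route are placeholders, and the static-file bucket and database are omitted:

```yaml
# Hypothetical SAM template: one Lambda function behind API Gateway.
# Handler, CodeUri, and the /items route are placeholder values.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler        # placeholder module.function
      CodeUri: src/
      MemorySize: 256
      Timeout: 10
      Events:
        GetItems:
          Type: Api               # implicitly creates the API Gateway REST API
          Properties:
            Path: /items
            Method: get
```

Deploying this with `sam deploy` provisions the function and the API together; scaling and availability come for free.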

Another Use Case: Batch Jobs

Let’s take another example: long-running, compute-intensive jobs. Typically, a job queue would enqueue job requests, which would be served by a compute cluster. Let’s again assume that the cluster can automatically scale out and back in, based on the size of the job queue (i.e., how many jobs are pending).

With Kubernetes

This use case requires a job queue to store and manage job requests. Unfortunately, Kubernetes doesn’t offer any built-in solution. So once again, you would have to rely on additional software such as RabbitMQ or Redis.

The autoscaling part would again be tricky, since you would have to extract the required information (the number of pending job requests) from that third-party software and feed it into the autoscaler, typically through a custom or external metrics adapter.
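Even before autoscaling enters the picture, the worker side alone might look like the following sketch. The image, queue address, and names are all placeholders, and RabbitMQ itself would have to be deployed and operated separately:

```yaml
# Hypothetical worker Deployment consuming jobs from a RabbitMQ queue.
# Scaling `replicas` on queue depth is NOT built in — it would require
# exposing the queue length to the HPA via an external metrics adapter.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      containers:
        - name: worker
          image: example/worker:1.0   # placeholder image
          env:
            - name: QUEUE_URL
              value: amqp://rabbitmq.default.svc.cluster.local:5672
```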

With AWS

Such an architecture would typically be based on AWS Batch, which manages your batch jobs and offers a built-in job queue. It runs your jobs as containers and can use EC2 instances (including Spot Instances, to save money) or Fargate as its compute backend.
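A hedged CloudFormation sketch shows the shape of a Batch setup: a Spot-backed compute environment, a queue, and a job definition. Subnet, role, and image values are placeholders, and several required fields (service role, security groups) are omitted for brevity:

```yaml
# Hypothetical AWS Batch sketch — placeholders throughout, some required
# fields elided. The queue is built in; no RabbitMQ/Redis to operate.
Resources:
  SpotComputeEnv:
    Type: AWS::Batch::ComputeEnvironment
    Properties:
      Type: MANAGED
      ComputeResources:
        Type: SPOT
        MinvCpus: 0                    # scales to zero when the queue is empty
        MaxvCpus: 64
        InstanceTypes: [optimal]
        Subnets: [subnet-PLACEHOLDER]
        InstanceRole: ecsInstanceRole  # placeholder instance profile
  JobQueue:
    Type: AWS::Batch::JobQueue
    Properties:
      Priority: 1
      ComputeEnvironmentOrder:
        - Order: 1
          ComputeEnvironment: !Ref SpotComputeEnv
  CrunchJobDefinition:
    Type: AWS::Batch::JobDefinition
    Properties:
      Type: container
      ContainerProperties:
        Image: example/cruncher:1.0    # placeholder image
        Vcpus: 2
        Memory: 4096
        Command: [python, crunch.py]   # placeholder command
```

Queueing, placement, and scaling of the underlying instances are all handled by the service.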

Again, the simplicity and efficiency of a managed solution shines through.

With Docker Swarm

Docker now offers a “Swarm mode.” The main advantage of Docker Swarm is that it’s very easy to set up; your Docker Swarm cluster can be created, configured, and go live in just a couple of hours.

Unfortunately, Docker Swarm is quite limited and offers no built-in queue, so you need to bring your own solution for that; plus, there’s no easy way to manage autoscaling…
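The simplicity — and the fixed-replica limitation — both show in a Compose stack file, which is all Swarm needs (deployed with `docker stack deploy -c stack.yml workers`). Images and service names here are placeholders:

```yaml
# Hypothetical Swarm stack file. Swarm honors the `deploy` section, but
# `replicas` is static — there is no built-in autoscaler to adjust it
# based on queue depth, and the queue itself is bring-your-own.
version: "3.8"
services:
  worker:
    image: example/worker:1.0   # placeholder image
    deploy:
      replicas: 4
      restart_policy:
        condition: on-failure
  rabbitmq:
    image: rabbitmq:3           # the queue you must supply yourself
```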

The Main Argument: Avoid Vendor Lock-In

Many companies choose Kubernetes in order to avoid vendor lock-in. The argument here is that since Kubernetes itself is independent of any cloud vendor, it should be easy to switch vendors at any time. However, this reasoning is usually flawed. The rational decision would be to compare the cost of changing cloud vendors to the cost of creating and maintaining a truly cloud-agnostic Kubernetes cluster.

Given there are only three major cloud providers (Amazon AWS, Google GCP, and Microsoft Azure), you don’t have many choices when it comes to switching anyway. As for smaller providers, they are notoriously less reliable; you might as well jump off a cliff before deciding on one of them. So the odds of you changing cloud providers for a given workload are quite low, and if you do change, you’d probably do it only once.

Kubernetes is difficult, and you would typically require a full-time DevOps engineer (maybe part-time after the initial setup phase) for a not-so-complex Kubernetes cluster. You need to take this cost into account when deciding on whether to adopt Kubernetes or not.

All in all, it can certainly be argued that fears surrounding vendor lock-in are often exaggerated and that the cost of a single move from one cloud provider to another may be less than the cost of setting up and maintaining a truly cloud-agnostic Kubernetes cluster. Remember, if you use a managed solution, such as Amazon Elastic Kubernetes Service or Google Kubernetes Engine, you are not truly cloud-agnostic.

Conclusion

The main takeaway here is that you should think twice before following the crowd and going down a path that could be both perilous and costly. Make sure to take into account the costs associated with both you and your team getting familiar with Kubernetes, as this is often overlooked and can actually be quite large.

If you do decide to go with Kubernetes, you should generally use a managed solution, such as GKE (Google Cloud) or EKS (AWS). These will make your life a lot easier than doing Kubernetes the hard way, and you would only be trading a bit of vendor lock-in for a much simpler life.

At the end of the day, container orchestration is meant to solve a problem. So first, make sure you are clear on what you want to solve, and then consider other options besides Kubernetes before making a decision.

Also keep in mind that serverless is growing in popularity and can be used to solve many problems that used to be addressed by either traditional architectures or container-based architectures. Running your code without having to worry about the maintenance, availability, and scaling of the underlying infrastructure is definitely a step change in cloud computing.

One final note about security: Using Kubernetes (and the additional services typically required to run a Kubernetes cluster, such as Helm and Prometheus) increases the attack surface of your architecture. The infamous CVE-2018-1002105 was, after all, a high-profile and very dangerous vulnerability in Kubernetes…

So, keep your eyes open and your mind sharp. The cool kids’ toys might simply not be the best for you. And if you’re having issues with debugging, tracing, or securing your cloud architecture, Thundra is here to help.

Originally published at https://blog.thundra.io.
