
Our Journey From Heroku To Kubernetes [Part-1]

Tech 2019 / 09 / 30

Our client was running their infrastructure on Heroku. As a PaaS, Heroku is awesome: you can sleep well while your application is taken care of by some of the best-trained infra professionals.

But this happiness comes with some caveats:

  1. You have to pay for it.
  2. You don't have much freedom to play with it. You have to follow their book, and in return they guarantee it will just work.
  3. It doesn't let you implement complex deployment processes.
  4. Managing multiple environments can sometimes be cumbersome and tricky.
  5. Whenever you switch infrastructure, you have to start from scratch.
  6. Microservice-based applications are not well suited to this kind of environment.
  7. Monitoring tools are (sometimes) limited.
  8. It's hard to achieve true infrastructure as code.

The first reason may not seem like an issue when you start small, but as you grow, the cost can hurt you significantly in the long run. For us, the second reason was the biggest issue; and even when we found a way around it, the first reason came back to haunt us.

Anyway, we decided on a change: one that would take us a long, long way from here without breaking the bank, while giving us just enough room to expand for an unknown future.

Moreover, our application had to pass some very strict checks before any new version was deployed. The checks were:

  1. We took backups of the target environment's databases.
  2. For testing convenience, we restored the production database (excluding user and other sensitive information) onto the staging database, so we could conveniently check whether anything broke on real-world data. It gave us confidence that our newly added code worked as expected.
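A sketch of how such a pre-deploy data refresh might be scripted with standard Postgres tooling. The connection URLs and the excluded table name are hypothetical, and the commands are only echoed here (a dry run), not executed:

```shell
# Hypothetical pre-deploy data refresh (dry run: commands are echoed,
# not executed). URLs and the excluded table name are assumptions.
PROD_URL="postgres://prod.example.com/app"
STAGING_URL="postgres://staging.example.com/app"

# 1. Back up the production database, skipping sensitive row data.
echo "pg_dump --format=custom --exclude-table-data=users --file=prod.dump $PROD_URL"

# 2. Restore the dump onto the staging database.
echo "pg_restore --clean --no-owner --dbname=$STAGING_URL prod.dump"
```

`--exclude-table-data` keeps the table's schema in the dump while skipping its rows, which is one way to leave sensitive data behind.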

Our application was a Ruby on Rails application: a medium-sized monolith with some moving parts (which we will see as we go forward).

So when we decided to go for IaaS, there were a bunch of choices we could have made, but we settled on DigitalOcean. Although AWS may seem like the obvious choice, we went with DO because:

  1. It provides some of the cheapest hardware specs per dollar.
  2. It provides managed Kubernetes.
  3. It has very easy, developer-friendly documentation, tutorials, and a great community.
  4. Although it has some shortcomings today, it looks promising for the days ahead.
  5. It also offers cheap managed Postgres databases, load balancers, and S3-compatible storage, which are must-haves for our project.

In terms of cost optimisation, on Heroku we had the following setup.

Staging:

  1. Application: Standard-1X = $25
  2. Background Processing: Standard-1X = $25
  3. Postgres: Standard-0 = $50

Production:

  1. Application: 3 Standard-2X = $150
  2. Background Processing: Standard-1X = $25
  3. Scheduler: Standard-1X = $25
  4. Postgres: Standard-0 = $50

In total, our cost was around $350–$400 every month.

Now let's see what we achieved as we moved to k8s on DO.

Kubernetes On DigitalOcean:

Node Pool 1 (5 instances): 1 vCPU + 2 GB RAM => $10 × 5 = $50

Node Pool 2 (3 instances): 2 vCPU + 4 GB RAM => $20 × 3 = $60
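Putting the numbers together, a quick sanity check of the savings, using only the figures quoted above:

```python
# Monthly cost figures quoted above (USD).
heroku_staging = 25 + 25 + 50            # app + background jobs + Postgres
heroku_production = 150 + 25 + 25 + 50   # 3 app dynos + jobs + scheduler + Postgres
heroku_total = heroku_staging + heroku_production

do_nodes = 5 * 10 + 3 * 20               # node pool 1 + node pool 2

print(f"Heroku: ${heroku_total}/mo, DO nodes: ${do_nodes}/mo")
print(f"DO nodes cost roughly {do_nodes / heroku_total:.0%} of the Heroku bill")
```

That is $110 versus $350 per month on compute alone, which is where the "about one-third" figure later in the post comes from.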

Across these two node pools we separated our environment-specific nodes by tags (don't worry, I will explain these terms later). Now see what we were able to run on these two node pools. Some of the terms below may be unfamiliar to you; for now, just skip them:

1 Cert Manager (for provisioning and renewing free Let's Encrypt SSL certificates for all applications)

1 Cert Manager cainjector

1 Cert Manager webhook

1 Docker registry (Backed by DigitalOcean Space)

1 CI/CD instance

1 Staging Postgres database (using KubeDB) with HA.

1 Tiller instance (for using the Helm package manager)

1 Docker registry UI

1 nginx ingress pod

1 auto job cleaner (for all namespaces)

1 DB auto backup task runner for staging

1 DB auto backup task runner for production

Staging application(2x)

Staging Background Job Runner(1x)

Production database with HA: 1 primary, 1 replica, and 1 standby

pgAdmin UI.

Linkerd Service Mesh for monitoring.
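The tag-based separation mentioned above comes down to node labels plus a `nodeSelector`. On DigitalOcean's managed Kubernetes, each node carries a `doks.digitalocean.com/node-pool` label with its pool's name, so pinning the staging app to the cheaper pool might look roughly like this (the pool, image, and app names here are assumptions, not our actual configuration):

```yaml
# Hypothetical Deployment fragment: pin staging workloads to a
# specific node pool via the DOKS node-pool label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-staging
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-staging
  template:
    metadata:
      labels:
        app: app-staging
    spec:
      nodeSelector:
        doks.digitalocean.com/node-pool: pool-staging  # assumed pool name
      containers:
        - name: app
          image: registry.example.com/app:latest       # assumed image
```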

As you can see, it's a pretty long list. Additionally, we now have the freedom to add any custom task runner at any time. Anyway, the primary goal was achieved: we cut our infra cost down to roughly one-third. We had some additional costs apart from the k8s cluster, like:

Load balancer: since we used an ingress controller, we only needed a single load balancer for everything.

S3 storage: some objects were already hosted in AWS S3, and we decided to keep them as they were. As new elements entered the equation (hourly database backups, new Docker images, etc.), we planned to move to DO's S3-compatible solution, Spaces.

NewRelic APM.

Mail solution cost.

But these costs existed on Heroku too and are platform- and infrastructure-independent, so I left them out of the equation at first.
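The single-load-balancer setup works because one nginx ingress controller receives all traffic and routes it by hostname, while cert-manager takes care of the certificates. A minimal sketch of one such Ingress might look like this (the hostname, issuer, and service names are assumptions, and the exact annotation keys depend on your cert-manager version):

```yaml
# Hypothetical Ingress: one nginx ingress controller plus cert-manager
# issuing a free Let's Encrypt certificate. Names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed issuer name
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app    # assumed Service name
                port:
                  number: 80
```

Every additional application just adds another host rule behind the same load balancer, which is why one DO load balancer was enough for all of them.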

Additionally, we used Helm to package our application for the cluster and Drone to automate the CI/CD pipeline, which is hosted inside the Kubernetes cluster itself. A CI/CD pipeline is a natural fit for running in your cluster: whenever a new deployment starts it creates new pods and jobs, and whenever a deployment completes it frees those resources. Helm is built to handle complex deployment processes and integrates with CI/CD very easily. If your deployments take a lot of juice, you can autoscale your cluster and dedicate some nodes to build and deployment work only.
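To give a feel for what such an in-cluster pipeline can look like, here is a hypothetical Drone pipeline sketch (the plugin images, registry URL, and chart path are assumptions, not our actual configuration): build and push an image on every commit, then let Helm roll out the release.

```yaml
# Hypothetical .drone.yml for an in-cluster Drone 1.x pipeline.
kind: pipeline
type: kubernetes
name: deploy

steps:
  - name: build-and-push
    image: plugins/docker            # official Drone Docker plugin
    settings:
      repo: registry.example.com/app # assumed in-cluster registry
      tags: ${DRONE_COMMIT_SHA}

  - name: deploy
    image: alpine/helm
    commands:
      - helm upgrade --install app ./chart --set image.tag=${DRONE_COMMIT_SHA}
    when:
      branch:
        - master
```

Because the build steps themselves run as pods, a heavy deployment simply spawns pods on whichever nodes have room (or on dedicated build nodes) and releases them when it finishes.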


I plan to cover the following topics in my upcoming stories:

Concepts behind Kubernetes components.

Setting up a Kubernetes cluster.

Setting up a private registry inside your Kubernetes cluster.

Deploying your first application inside the cluster.

Setting up ingress and a load balancer for your application.

Adding free SSL certificates to your application using Let's Encrypt.

While there are thousands of tutorials and videos on Kubernetes basics, I will try to focus on writing for first-timers so that they can have the same experience we had. On top of that, I will also cover the following topics, which I think will be tremendously helpful (because we haven't found good articles on these items yet):

Setting up CI/CD inside your Kubernetes cluster.

Managing multiple environments.

Database inside or outside of your cluster? Things to consider.

A Helm chart for your application.

Managing node pools and segregating your workloads.

Managing roles inside your cluster.

A better deployment process to test your app more efficiently.

When you are building your app, your infrastructure, or anything else, it's impossible to follow every best practice on the first try; we all know it just doesn't work like that. So I am going to update my articles from time to time and let you see how things evolve from the early stage onwards. Stay tuned 😉
