DevOps.dev

Devops.dev is a community of DevOps enthusiasts sharing insight, stories, and the latest development in the field.


How to scale down Kubernetes cluster workloads during off-hours


Photo by Visual Stories || Micheile on Unsplash

You heard it right: everyone needs to rest once in a while, even our little Kubernetes cluster.

Use case

  • One of the most important aspects of running workloads in the cloud is keeping cost under control, or tuning it so you can save even more.
  • You may be hosting workloads in Kubernetes that receive no traffic outside business hours.
  • Or on weekends, you may simply want to scale down, since no traffic flows to your apps during that time.
  • The cost of keeping those worker nodes running during off-hours is pretty high if you calculate it over a quarter or a year.

Solution

Though there isn’t any one-click solution, Kubernetes finds a way; or rather, a Kubernetes admin always does!

Strangely, there isn’t any out-of-the-box tool from the AWS side, nor even a blog post on how customers can achieve this. GCP, ahem, aced this scenario.

Kubernetes Downscaler

Kube-downscaler is a FOSS project by Henning Jacobs, the creator of the well-known kube-ops-view project.

This project fits our requirement exactly, as it can scale down the following resources outside a specified timeframe:

  1. StatefulSets
  2. Deployments (and their HPAs)
  3. CronJobs

Before we begin, here are the prerequisites:

  • Kubernetes cluster
  • Cluster autoscaler configured
  • Bit of patience

Installation:

  • Clone the repository:
git clone https://codeberg.org/hjacobs/kube-downscaler
  • Update the ConfigMap file deploy/config.yaml with your cluster's timezone and desired uptime. Here’s an example of mine:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-downscaler
data:
  # timeframe in which your resources should be up
  DEFAULT_UPTIME: "Mon-Fri 09:30-06:30 Asia/Kolkata"

Apply the manifest files:

kubectl apply -f deploy/
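To confirm the downscaler came up (assuming it was deployed into your current namespace), you can check the Deployment and peek at its logs:

```shell
# Check that the downscaler deployment is ready
kubectl get deployment kube-downscaler

# Tail its logs to see what it plans to scale down
kubectl logs deployment/kube-downscaler --tail=20
```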

Working and configuration:

  1. As soon as the downscaler pod runs, check its logs; you should see lines like Scale down deployment/myapp to replicas 0 (dry-run)
  2. As a safety plug, no scaling operations will happen when the pod first starts, because the --dry-run argument is enabled by default. Remove it by patching the deployment to start the scaling activity:
kubectl patch deployment kube-downscaler --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
"--interval=60"
]}]'
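To double-check that the patch took effect, you can inspect the container args on the deployment:

```shell
# Print the downscaler container's arguments; --dry-run should no longer appear
kubectl get deployment kube-downscaler \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```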

3. Once the --dry-run argument is removed, all resources (Deployments, StatefulSets, and CronJobs) will be scaled down to 0 replicas (the default downtime replica count) whenever the current time falls outside the DEFAULT_UPTIME window defined in the above-mentioned ConfigMap.

4. In case you need to exclude an app from being scaled down, you can annotate that Deployment/StatefulSet/CronJob with:

kubectl annotate deploy myapp 'downscaler/exclude=true'

If you want to keep a minimum of 1 replica during the downtime period, you can annotate the resource:

kubectl annotate deploy myapp 'downscaler/downtime-replicas=1'

Note: there’s no need to annotate an uptime value on each Deployment or StatefulSet, since by default all workloads are scaled down outside the configured uptime.
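That said, if one particular workload needs its own schedule, kube-downscaler also supports a per-resource uptime annotation (the app name and time window here are illustrative):

```shell
# Give myapp its own uptime window, overriding DEFAULT_UPTIME
kubectl annotate deploy myapp 'downscaler/uptime=Mon-Fri 07:00-21:00 Asia/Kolkata'
```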

Additional tuning, such as namespace-based annotations, is described in the project README.
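For example, instead of annotating every workload, the same exclude annotation can be applied at the namespace level (kube-system is used here as an illustration, to keep system components untouched):

```shell
# Exclude an entire namespace from downscaling
kubectl annotate namespace kube-system 'downscaler/exclude=true'
```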

Achieving node scale-down:

Once the pods are scaled down, and assuming you have the cluster autoscaler configured, it should automatically remove unused or empty nodes from your node group.

Note: the cluster autoscaler is mandatory, since at the end of the day it’s what removes worker nodes and saves you money.
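A simple way to watch the nodes drain away once the workloads scale down:

```shell
# Watch the node count shrink as the cluster autoscaler removes empty nodes
kubectl get nodes --watch
```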

Originally published at https://tanmay-bhat.github.io.



