Rightsizing Kubernetes requests/limits usage

tjtharrison
7 min read · Jan 13, 2024

At the end of last year, I started seeing this error pop up occasionally when deploying a new Deployment into my cluster, so I thought it was a good opportunity to do some housekeeping.

Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  4m33s                 default-scheduler  0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
  Warning  FailedScheduling  3m52s (x2 over 4m9s)  default-scheduler  0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
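Worth noting here: the scheduler isn't looking at actual CPU usage at all. It sums the CPU requests of every pod already placed on each node, and if the incoming pod's request doesn't fit into what's left of the node's allocatable CPU, scheduling fails, however idle the node really is. As a quick sketch (assuming you have kubectl access to the cluster), you can see how much of each node's allocatable capacity is already reserved by requests:

kubectl describe nodes | grep -A 8 "Allocated resources"

The "Allocated resources" section shows CPU and memory requests and limits as a percentage of each node's allocatable capacity, which is the number the scheduler actually compares against.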

It was also good timing, as Datadog had just released their container report, which included some interesting findings about resource utilisation across container deployments.

The full report is an interesting read; I've included the link below:


Investigation

Using kubectl top nodes, I can see that actual resource usage across my cluster isn't that high, which confirms the issue is with the requests/limits I've set on my deployments rather than any real capacity problem with the cluster.

kubectl top nodes
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
hk8s-master   813m         20%    11055Mi         70%
hk8s-node1    633m         15%    7221Mi          45%
hk8s-node2    640m         16%    8323Mi          52%
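To work out which workloads are over-requesting, the next step is to compare what pods are actually using against what they're asking for. A rough sketch of how to pull both sides of that comparison (the custom-columns expression is just one way to surface the request fields; adjust to taste):

kubectl top pods -A --sort-by=cpu
kubectl get pods -A -o custom-columns="NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory"

Anywhere the requested CPU is a large multiple of what kubectl top reports under steady load is a candidate for rightsizing.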

I’ve been pretty heavy-handed with the resource values I’ve been setting on the workloads I deploy to my Kubernetes clusters…
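For reference, the requests and limits live in each container spec. A right-sized container reserves a little above its observed steady-state usage and caps bursts with a limit; the numbers below are placeholder values for illustration only, not recommendations:

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi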
