Remove defaults for container resources.
long-term
Sapphire Mastodon
We would like the default container resource limits (Limit Memory: 500Mi and Limit CPU: 400m) removed so that no limit is set at all. Either removing them on your side works, or, even preferable, give this control to us to set or remove the default limits, especially when we're relying on self-hosted build infra, i.e. in our case our on-prem Kubernetes cluster.
This would help us address resource issues across the account, instead of customizing resources for individual pipelines/steps, which would be an overhead and also not reliable.
We understand there is a feature flag (FF) to increase the defaults, but we're not looking at that option; we're looking to have the defaults removed or the control given to us. When we're leveraging our self-hosted build infra, we expect to have that control.
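For reference, the per-step override we would otherwise have to maintain on every one of those pipelines looks roughly like the following. This is a sketch assuming the Harness CI Run step YAML schema; the step name, command, and resource values are illustrative only:

```yaml
# Hypothetical Harness CI step that overrides the default 500Mi / 400m limits.
# Identifier, command, and resource values are placeholders.
- step:
    type: Run
    name: Build
    identifier: build
    spec:
      shell: Sh
      command: make build
      resources:
        limits:
          memory: 2Gi
          cpu: "1"
```

Repeating and tuning this block across thousands of steps is exactly the overhead we want to avoid.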
Nofar Bluestein
long-term
Lemon glacier Duck
Hi,
I am also looking at this... It causes us pain with our self-hosted build servers.
Aside from the issue highlighted in the original comment, we have also seen a huge amount of CPU throttling. There is very little that can be done about it; our fix has been to use a Kyverno policy to remove the CPU limit (but keep the requests). We would prefer to manage this via Harness rather than through an out-of-band system.
For reference, this article describes the problem well -> https://home.robusta.dev/blog/stop-using-cpu-limits
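For anyone hitting the same thing, our workaround is roughly the shape below. This is a simplified sketch rather than our exact policy; the namespace is a placeholder, and the precondition is there so the remove op only fires for containers that actually declare a CPU limit:

```yaml
# Sketch of a Kyverno mutate policy that strips CPU limits from build pods
# while leaving requests (and the memory limit) untouched.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: remove-cpu-limits
spec:
  rules:
    - name: strip-cpu-limit
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - harness-builds   # placeholder: namespace our build pods run in
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            # only patch containers that actually set a CPU limit
            preconditions:
              all:
                - key: "{{ element.resources.limits.cpu || '' }}"
                  operator: NotEquals
                  value: ""
            patchesJson6902: |-
              - op: remove
                path: /spec/containers/{{elementIndex}}/resources/limits/cpu
```

It works, but it is exactly the kind of out-of-band machinery we would rather not have to run.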
Nofar Bluestein
under review
Sapphire Mastodon
Nofar Bluestein Have you had a chance to take a look at my feedback?
Sapphire Mastodon
Hi Nofar,
I do see it as a big problem when we're talking about creating/migrating thousands of pipelines from different platforms into Harness. Obviously we would standardize using templates, but these parameters vary widely from app to app and pipeline to pipeline, and it would be extremely difficult to figure out the needs of each pipeline individually at the scale of thousands of pipelines. Realistically, it is not possible to know these requirements beforehand, so the resources cannot be tuned up front to match each pipeline's needs. I hope I'm making sense.
To be more specific, assume we have 2,000 pipelines and trigger them in batches, say 100 pipelines every 10 minutes, with the default resource limits. After all 2,000 pipelines have executed, we see at least 1,000 failed due to resource constraints; we would then have to tweak the resources for those 1,000 pipelines according to their individual needs, which is not a workable way to handle it.
Also, when we're relying on self-hosted build infra, it doesn't make sense for Harness to have control over the limits; it should be up to us how our build infrastructure is consumed, and whether we run out of resources or underutilize them is on us. We would be open to any best practices, suggestions, or improvements, but in the end we should be the ones deciding how we manage the self-hosted infra, whether we cap it with defaults or have no limit at all, which is ideally what we're looking to have.
Currently, in our GitHub Actions ecosystem, where we have a self-hosted Kubernetes build infrastructure, we do not limit any resources and let the builds consume what they need. We know this has limitations: builds might consume more memory than necessary and we might miss opportunities to optimize, but we're fine with that.
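To make that concrete, the resource shape our GitHub Actions build pods run with (and what we would like Harness-spawned build pods to be able to use) is roughly the following; the names and request values are placeholders:

```yaml
# Sketch of a build container with requests for scheduling but no limits,
# so a build can burst up to whatever the node has available.
apiVersion: v1
kind: Pod
metadata:
  name: example-build-pod            # placeholder
spec:
  containers:
    - name: build
      image: example-builder:latest  # placeholder image
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        # no limits block: the container is not CPU-throttled at a fixed
        # value and is only bounded by node capacity.
```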
I hope I was able to convey our intent; if anything is unclear, I would be happy to answer any follow-up questions you may have.
Thanks,
Vamsi
Nofar Bluestein
pending feedback
Nofar Bluestein
Hey, thank you for your feedback.
Can you please elaborate on why you need to remove the limits? What is the impact of using the current limits?
Limits generally help us control the memory consumption of a step and throw an OOM exception if usage exceeds the limit.
Pranav Rastogi
long-term
Nofar Bluestein
under review