Implemented batch processing for check capacity provisioning class #7283
base: master
Conversation
Hi @Duke0404. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: Duke0404. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
(force-pushed from 5dddce4 to bca8b93)
(force-pushed from bca8b93 to bbc758d)
/ok-to-test
(force-pushed from bbc758d to 11703e9)
Please fix the tests.
(force-pushed from 11703e9 to 28f4c57)
I think the tests were failing due to an issue with GitHub Actions; they are passing now.
FYI @MaciekPytel @mwielgus: this is the change we discussed a couple of weeks ago.
I don't like the way that checkcapacity is implemented, and I'm not super happy about doubling down on it. Checking capacity is not really similar to scale-up, but it is conceptually pretty much the same as fitting existing pods on upcoming nodes in FilterOutSchedulable. Both cases involve just binpacking on existing nodes and don't require using Estimator, Expander, etc. to make scale-up decisions, or any of the logic related to actuating such decisions.

The PodListProcessor interface has been pretty much designed exactly for this use case, while the scale-up orchestrator is intended for more complex operations that you don't actually need here. The architectural/maintenance downside is inconsistency with the rest of the codebase and the related maintenance problems: anyone debugging CA must be aware that provreq works differently from other, similar extensions to CA. Our steep learning curve is likely the sum total of small gotchas like that.

I'm not going to block this PR, but I'd really like to look into aligning the provisioning request implementation with the expected CA architecture, and migrating the logic to PLP would be an obvious first step here.
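To illustrate the suggestion above, here is a minimal sketch of what moving check-capacity logic into a PodListProcessor-style hook could look like. All types here (`Pod`, `Node`, the interface shape, and `checkCapacityProcessor`) are simplified stand-ins for illustration only, not the real Cluster Autoscaler APIs:

```go
package main

import "fmt"

// Pod and Node are simplified stand-ins for the real Kubernetes objects.
type Pod struct {
	Name string
	CPU  int64 // requested millicores
}

type Node struct {
	Name    string
	FreeCPU int64 // available millicores
}

// PodListProcessor mirrors the shape of CA's extension point: it receives
// the unschedulable pods each loop and may filter the list before any
// scale-up logic runs.
type PodListProcessor interface {
	Process(unschedulable []Pod, upcoming []Node) []Pod
}

// checkCapacityProcessor "books" capacity by binpacking pods onto the
// upcoming nodes and filtering out the ones that fit, analogous to
// FilterOutSchedulable. No Estimator or Expander is involved.
type checkCapacityProcessor struct{}

func (p *checkCapacityProcessor) Process(unschedulable []Pod, upcoming []Node) []Pod {
	var stillPending []Pod
	for _, pod := range unschedulable {
		placed := false
		for i := range upcoming {
			if upcoming[i].FreeCPU >= pod.CPU {
				upcoming[i].FreeCPU -= pod.CPU // book the capacity
				placed = true
				break
			}
		}
		if !placed {
			stillPending = append(stillPending, pod)
		}
	}
	return stillPending
}

func main() {
	proc := &checkCapacityProcessor{}
	pods := []Pod{{"a", 500}, {"b", 800}, {"c", 2000}}
	nodes := []Node{{"n1", 1000}}
	for _, p := range proc.Process(pods, nodes) {
		fmt.Println(p.Name) // pods that still lack capacity
	}
}
```

The point of the sketch is the shape, not the binpacking itself: the capacity check is a pure filter over the pending pod list, which is exactly the contract PodListProcessor already expresses.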
I agree with the general point about CA architecture. I would go even further and say we should probably explicitly group "capacity booking" (CRs, PRs, and possible future overprovisioning API) to make the implementation more consistent - while it'll be slightly different for each case (ex. all-or-nothing for PRs) they're logically very similar and if there's ever a substantial change to how the capacity needs to be booked for CA to recognize it, it'll have to be re-implemented everywhere.
I think this argument essentially boils down to "this could be optimized further". With improvements to the frequent-loops logic (meaning we'll skip the scan interval), this change will still significantly improve performance compared to what's there now, even in large clusters. Yes, it's possible to improve it even more by skipping the cluster state refresh, but that isn't necessarily an argument against doing a partial optimization now.
I expect batch processing to remain an experimental feature for 1.31 (meaning it's turned off by default). I agree we may want to solve this before it becomes enabled by default.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Implements batch processing so that users can configure CA to process multiple CheckCapacity ProvisioningRequests in a single autoscaling iteration.
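As a rough sketch of the batching behavior described above: instead of handling one check-capacity request per autoscaling loop, up to a configured number are handled per iteration. The type, function, and `maxBatchSize` knob below are hypothetical illustrations, not the PR's actual identifiers or flag names:

```go
package main

import "fmt"

// ProvisioningRequest is a simplified stand-in for the CheckCapacity
// ProvisioningRequest object.
type ProvisioningRequest struct {
	Name      string
	Processed bool
}

// processBatch handles up to maxBatchSize pending requests in one
// autoscaling iteration. With maxBatchSize == 1 this degenerates to the
// old one-request-per-loop behavior.
func processBatch(reqs []*ProvisioningRequest, maxBatchSize int) int {
	processed := 0
	for _, r := range reqs {
		if processed >= maxBatchSize {
			break
		}
		if r.Processed {
			continue
		}
		// In CA this is where capacity for the request would be
		// checked and booked; here we just mark it handled.
		r.Processed = true
		processed++
	}
	return processed
}

func main() {
	reqs := []*ProvisioningRequest{{Name: "pr-1"}, {Name: "pr-2"}, {Name: "pr-3"}}
	n := processBatch(reqs, 2)
	fmt.Println(n) // prints 2: pr-3 waits for the next iteration
}
```

The performance win comes from amortizing the per-loop overhead (scan interval, cluster state refresh) across several requests rather than paying it once per request.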
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
cc: @yaroslava-serdiuk @aleksandra-malinowska @kawych