Make HighNodeUtilization select 1 node if all nodes are underutilized #1616
base: master
Conversation
This is the first time I've tried to use the descheduler. How do the *NodeUtilization plugins prevent pods from being rescheduled onto the node they've just been evicted from?
I've added PreferNoSchedule tainting to nodes now, to avoid scheduling onto the nodes we're trying to remove.
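A minimal sketch of what that tainting could look like with client-go; this is not the PR's actual code, and the package name, function name, and taint key (`descheduler.example.com/prefer-no-schedule`) are placeholders:

```go
package nodeutil

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// taintNodePreferNoSchedule adds a PreferNoSchedule taint so the scheduler
// avoids (but is not strictly forbidden from) placing new pods on the node
// while the descheduler evicts its workloads.
func taintNodePreferNoSchedule(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	taint := v1.Taint{
		Key:    "descheduler.example.com/prefer-no-schedule", // hypothetical key
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// No-op if the taint is already present.
	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key && t.Effect == taint.Effect {
			return nil
		}
	}

	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```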
Just realised the tainting has an unintended effect on LowNodeUtilization; I'll give this a rethink.
Eventually reconcile the issue in #725 by tainting and removing 1 node at a time on each run.
There's definitely a more efficient way, by working out how many nodes could be removed to achieve a certain resource utilisation density on the remaining nodes; however, this is a quick fix that eventually reconciles the cluster into the desired state.
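A hedged sketch of the "one node per run" idea described above, not the PR's actual implementation; the `nodeUsage` type, function name, and threshold semantics are illustrative assumptions. If every node is below the utilisation threshold, only the least-utilised node is returned as a candidate, so the cluster converges over repeated descheduler runs:

```go
package nodeutil

import "sort"

// nodeUsage pairs a node name with its utilisation as a fraction of
// allocatable resources (0.0 to 1.0).
type nodeUsage struct {
	Name        string
	Utilisation float64
}

// selectSingleUnderutilizedNode returns at most one node to drain per run.
// It returns ("", false) unless all nodes are under the threshold.
func selectSingleUnderutilizedNode(nodes []nodeUsage, threshold float64) (string, bool) {
	if len(nodes) == 0 {
		return "", false
	}
	for _, n := range nodes {
		if n.Utilisation >= threshold {
			// At least one node is adequately utilised; nothing to do here.
			return "", false
		}
	}
	// All nodes are underutilised: pick the least utilised one, since its
	// pods are the cheapest to reschedule onto the remaining nodes.
	sort.Slice(nodes, func(i, j int) bool {
		return nodes[i].Utilisation < nodes[j].Utilisation
	})
	return nodes[0].Name, true
}
```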