Hi @owenchenxy
This is because the descheduler determines usage based only on pod requests/limits, not on actual real-time resource consumption. It does this in order to be consistent with the scheduler, which takes the same approach.
For this reason, the actual usage reported by commands like kubectl top (which uses kubelet metrics) may differ from the scheduler's and descheduler's view. There has been some discussion about adding metrics-based, real-time usage descheduling, but that effort has not made much progress beyond initial proposals.
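For illustration, here is a minimal client-go sketch of that requests-based view (this is not descheduler code; the kubeconfig handling, the node-name argument, and the omission of init containers are simplifications of my own). It sums the CPU requests of the pods bound to a node and compares them with the node's allocatable CPU, which is the kind of number the descheduler reasons about rather than live usage:

```go
// requests_view.go: approximate the requests-based CPU "utilization" of a node,
// i.e. the view the scheduler/descheduler work from, as opposed to kubectl top.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	nodeName := os.Args[1] // e.g. testkuber01n03

	// Load credentials from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// All pods bound to this node, across namespaces.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}

	var requestedMilli int64
	for _, pod := range pods.Items {
		// Terminated pods no longer consume requested capacity.
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			continue
		}
		for _, c := range pod.Spec.Containers {
			requestedMilli += c.Resources.Requests.Cpu().MilliValue()
		}
	}

	allocatableMilli := node.Status.Allocatable.Cpu().MilliValue()
	fmt.Printf("cpu requested: %dm / allocatable: %dm (%.0f%%)\n",
		requestedMilli, allocatableMilli,
		100*float64(requestedMilli)/float64(allocatableMilli))
}
```

By this measure a node can sit well below a utilization threshold even while kubectl top (or top on the node itself) reports it as nearly saturated, which is why no evictions are triggered.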
For more information, please see the pinned issue at the top of this repo: #225
(We also have a PR open to make this clearer in the documentation #708)
I'm going to close this issue as a duplicate; please feel free to continue the discussion on this topic in that main issue if you would like. Thanks!
/close
What version of descheduler are you using?
descheduler version: v0.23.0
Does this issue reproduce with the latest release?
yes
Which descheduler CLI options are you using?
Please provide a copy of your descheduler policy config file
What k8s version are you using (kubectl version)?
(kubectl version output omitted)
What did you do?
I applied the descheduler in my Kubernetes cluster, but no pods were evicted as I expected.
I ran kubectl top nodes to inspect the nodes' utilization: it showed high CPU utilization for testkuber01n03 and testkuber01n04.
However, when I inspected the descheduler log, it reported a different CPU utilization than the kubectl top nodes command.
I then logged into testkuber01n03, which had shown 99% CPU utilization, and ran top to see the details: it showed a high st (steal time) percentage. Indeed, my cluster runs on a batch of VMs hosted on a private cloud platform that is short on resources.
I suspect the descheduler does not count the steal percentage toward CPU utilization. However, Kubernetes is a cloud-native tool that is commonly deployed on VM nodes, and for VM nodes the steal percentage should be taken into account when computing the CPU that is really available.
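As a rough illustration of the st figure mentioned above (this is my own sketch, not anything the descheduler does; the 5-second sampling window and the field handling are assumptions), steal time can be computed from the aggregate cpu line of /proc/stat, which is where top gets it:

```go
// steal.go: sample /proc/stat twice and report the CPU steal-time share,
// i.e. the "st" column shown by top on a VM.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// cpuCounters returns the cumulative jiffy counters from the first ("cpu") line
// of /proc/stat: user, nice, system, idle, iowait, irq, softirq, steal, ...
func cpuCounters() ([]uint64, error) {
	data, err := os.ReadFile("/proc/stat")
	if err != nil {
		return nil, err
	}
	firstLine := strings.SplitN(string(data), "\n", 2)[0]
	fields := strings.Fields(firstLine)[1:] // drop the leading "cpu" label
	vals := make([]uint64, len(fields))
	for i, f := range fields {
		if vals[i], err = strconv.ParseUint(f, 10, 64); err != nil {
			return nil, err
		}
	}
	return vals, nil
}

func main() {
	before, err := cpuCounters()
	if err != nil {
		panic(err)
	}
	time.Sleep(5 * time.Second)
	after, err := cpuCounters()
	if err != nil {
		panic(err)
	}

	// Index 7 is steal (user=0, nice=1, system=2, idle=3, iowait=4, irq=5, softirq=6).
	const stealIdx = 7
	if len(after) <= stealIdx {
		panic("kernel does not expose steal time in /proc/stat")
	}

	// Sum only the first 8 counters (user..steal) for the total, since guest time
	// is already accounted for inside user time.
	var total uint64
	for i := 0; i < 8; i++ {
		total += after[i] - before[i]
	}
	steal := after[stealIdx] - before[stealIdx]
	fmt.Printf("steal time over the last 5s: %.1f%%\n", 100*float64(steal)/float64(total))
}
```

On a VM like the one described above, a high value here means the hypervisor is withholding CPU from the guest, which the requests-based view used by the descheduler never sees.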
What did you expect to see?
What did you see instead?