Updates node_pool to use kubeconfig on control plane node to delete n… #90

Open · wants to merge 6 commits into `main`
2 changes: 2 additions & 0 deletions README.md
@@ -67,6 +67,8 @@ terraform apply -target=module.node_pool_green

At which point, you can either destroy the old pool, or taint/evict pods, etc. once this new pool connects.

When destroying node pools, your local `kubectl` must have `KUBECONFIG` pointed at this cluster's kubeconfig so that Terraform can [cordon, drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/), and delete each node from the cluster (otherwise, the node remains in a `NotReady` state after the machine itself has been terminated).
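If the destroy-time drain fails (for example, `KUBECONFIG` was not set when `terraform destroy` ran), the node can be removed by hand with the same three steps the provisioners attempt. A sketch, assuming the cluster's kubeconfig has been copied to `./kubeconfig` and the node is named `pool-blue-x86-0` (both names are hypothetical, for illustration only):

```sh
export KUBECONFIG=$PWD/kubeconfig    # point kubectl at this cluster
kubectl cordon pool-blue-x86-0       # stop new pods scheduling onto the node
kubectl drain pool-blue-x86-0 --ignore-daemonsets --delete-emptydir-data
kubectl delete node pool-blue-x86-0  # remove the Node object from the cluster
```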

## GPU Node Pools

The `gpu_node_pool` module provisions and configures GPU nodes for use with your Kubernetes cluster. The module definition requires `count_gpu` (defaults to `0`) and `plan_gpu` (defaults to `g2.large`). See [`examples/gpu_node_pool.tf`](examples/gpu_node_pool.tf) for usage.
15 changes: 15 additions & 0 deletions modules/gpu_node_pool/main.tf
@@ -20,5 +20,20 @@ resource "metal_device" "gpu_node" {

billing_cycle = "hourly"
project_id = var.project_id

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl cordon ${self.hostname} || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and cordon ${self.hostname} manually.\""
  }

  provisioner "local-exec" {
    when    = destroy
    # --delete-local-data was deprecated and later removed from kubectl; --delete-emptydir-data is its replacement.
    command = "kubectl drain ${self.hostname} --delete-emptydir-data --ignore-daemonsets || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and drain ${self.hostname} manually.\""
  }

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl delete node ${self.hostname} || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and delete node ${self.hostname} manually.\""
  }
}

30 changes: 30 additions & 0 deletions modules/node_pool/main.tf
@@ -20,6 +20,21 @@ resource "metal_device" "x86_node" {

billing_cycle = "hourly"
project_id = var.project_id

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl cordon ${self.hostname} || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and cordon ${self.hostname} manually.\""
  }

  provisioner "local-exec" {
    when    = destroy
    # --delete-local-data was deprecated and later removed from kubectl; --delete-emptydir-data is its replacement.
    command = "kubectl drain ${self.hostname} --delete-emptydir-data --ignore-daemonsets || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and drain ${self.hostname} manually.\""
  }

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl delete node ${self.hostname} || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and delete node ${self.hostname} manually.\""
  }
}

resource "metal_device" "arm_node" {
@@ -33,4 +48,19 @@ resource "metal_device" "arm_node" {

billing_cycle = "hourly"
project_id = var.project_id

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl cordon ${self.hostname} || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and cordon ${self.hostname} manually.\""
  }

  provisioner "local-exec" {
    when    = destroy
    # --delete-local-data was deprecated and later removed from kubectl; --delete-emptydir-data is its replacement.
    command = "kubectl drain ${self.hostname} --delete-emptydir-data --ignore-daemonsets || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and drain ${self.hostname} manually.\""
  }

  provisioner "local-exec" {
    when    = destroy
    command = "kubectl delete node ${self.hostname} || echo \"If unsuccessful, point KUBECONFIG at this cluster's kubeconfig and delete node ${self.hostname} manually.\""
  }
}
1 change: 1 addition & 0 deletions modules/node_pool/variables.tf
@@ -58,3 +58,4 @@ variable "storage" {
type = string
description = "Configure Storage ('ceph' or 'openebs') Operator"
}