Hi,
We are using the open-source version of Nomad, v1.9.3.
There is an issue where we are unable to scale task groups in a job independently: when one task group is scaled down or up, it causes the allocation belonging to another group of the same job to restart.
Reproduction steps

In the below job specification, we have 2 task groups, nginx-group1 and nginx-group2.

```hcl
job "nginx" {
  namespace = "platforms"
  node_pool = "platforms"

  group "nginx-group1" {
    count = 1

    network {
      port "http" {
        to = "80"
      }
    }

    task "nginx-task" {
      driver = "docker"

      config {
        image = "nginx:latest"
        ports = ["http"]
      }

      service {
        name     = "platforms-nginx-service"
        port     = "http"
        provider = "consul"

        check {
          type     = "http"
          port     = "http"
          path     = "/"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }

  group "nginx-group2" {
    count = 1

    network {
      port "http" {
        to = "80"
      }
    }

    task "nginx-task" {
      driver = "docker"

      config {
        image = "nginx:latest"
        ports = ["http"]
      }

      service {
        name     = "platforms-nginx-service"
        port     = "http"
        provider = "consul"

        check {
          type     = "http"
          port     = "http"
          path     = "/"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```
Scaling down nginx-group1 to 0:

```
sijo.george@macblr0263 nomad_job % nomad job scale nginx nginx-group1 0
==> 2025-01-25T14:10:59+05:30: Monitoring evaluation "648b5bb9"
    2025-01-25T14:10:59+05:30: Evaluation triggered by job "nginx"
    2025-01-25T14:10:59+05:30: Allocation "fb80a82b" modified: node "86c82a4a", group "nginx-group2"
    2025-01-25T14:11:00+05:30: Evaluation within deployment: "2538df26"
    2025-01-25T14:11:00+05:30: Evaluation status changed: "pending" -> "complete"
==> 2025-01-25T14:11:00+05:30: Evaluation "648b5bb9" finished with status "complete"
==> 2025-01-25T14:11:00+05:30: Monitoring deployment "2538df26"
  ✓ Deployment "2538df26" successful 2025-01-25T14:11:12+05:30

    ID          = 2538df26
    Job ID      = nginx
    Job Version = 3
    Status      = successful
    Description = Deployment completed successfully

    Deployed
    Task Group    Desired  Placed  Healthy  Unhealthy  Progress Deadline
    nginx-group2  1        1       1        0          2025-01-25T14:21:09+05:30
```
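For anyone reproducing this without the CLI: the `nomad job scale` command above corresponds to Nomad's HTTP scale endpoint (`POST /v1/job/<job_id>/scale`). A minimal sketch of the request body, assuming the payload shape documented in the Nomad API (the helper function name is ours, not part of Nomad):

```python
import json

def scale_payload(group: str, count: int, message: str = "") -> str:
    """Build the JSON body for POST /v1/job/<job_id>/scale.

    The namespace ("platforms" in the job spec above) is passed as the
    ?namespace= query parameter, not in the body.
    """
    body = {
        "Count": count,                 # desired count for the target group
        "Target": {"Group": group},     # the task group to scale
    }
    if message:
        body["Message"] = message       # optional free-form audit message
    return json.dumps(body)

# Scaling nginx-group1 down to 0, as in the reproduction above:
print(scale_payload("nginx-group1", 0))
```

In principle only `nginx-group1` is addressed here, which is why the modification to the `nginx-group2` allocation in the evaluation output is surprising.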
This results in restarting the allocation belonging to nginx-group2:

```
sijo.george@macblr0263 nomad_job % nomad job status nginx
ID            = nginx
Name          = nginx
Submit Date   = 2025-01-25T14:10:59+05:30
Type          = service
Priority      = 50
Datacenters   = *
Namespace     = platforms
Node Pool     = platforms
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group    Queued  Starting  Running  Failed  Complete  Lost  Unknown
nginx-group1  0       0         0        0       2         0     0
nginx-group2  0       0         1        0       0         0     0

Latest Deployment
ID          = 2538df26
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group    Desired  Placed  Healthy  Unhealthy  Progress Deadline
nginx-group2  1        1       1        0          2025-01-25T14:21:09+05:30

Allocations
ID        Node ID   Task Group    Version  Desired  Status    Created     Modified
9b31b80a  86c82a4a  nginx-group1  2        stop     complete  14m13s ago  2m13s ago
e8687ec7  86c82a4a  nginx-group1  0        stop     complete  15m18s ago  14m36s ago
fb80a82b  86c82a4a  nginx-group2  3        run      running   15m18s ago  2m2s ago
```
Expected Result

Nomad task groups should scale up and down independently.