
Read create_infra from containers.conf when kube play .yaml #24584

Closed

PiotrBzdrega opened this issue Nov 17, 2024 · 5 comments
Labels: kind/feature, kube, stale-issue

Comments

PiotrBzdrega commented Nov 17, 2024
When creating a pod with podman pod create --infra=false, we have the option to skip creating the infra container.
As far as I know, there is no counterpart for this in a .yaml file.
For such situations I would suggest adding a setting to containers.conf:

# Used to override the "CreateInfra" property during the podman kube play .yaml command
create_infra = true
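
For context, a minimal pod spec of the kind podman kube play consumes might look like this (name and image are purely illustrative); the Kubernetes Pod schema itself has no field that disables the infra container:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: docker.io/library/alpine:latest
    command: ["sleep", "infinity"]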
Luap99 (Member) commented Nov 18, 2024

AFAIK k8s always uses an infra container, and the kube command tries to match k8s behavior wherever possible, as long as it makes sense for Podman. And I think there is important functionality that requires an infra container to be created for the pod; i.e., we need a container running to keep open the namespaces that are shared by all containers in a pod.

What exactly is your use case?
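
To make the namespace sharing concrete, a minimal sketch (pod name, container names, and images are illustrative): the infra container holds the pod's network namespace open, so the second container reaches the first over localhost.

$ podman pod create --name demo
$ podman run -d --pod demo --name web docker.io/library/nginx:latest
$ podman run --rm --pod demo docker.io/library/alpine:latest wget -qO- http://127.0.0.1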

Luap99 added the kind/feature and kube labels on Nov 18, 2024
PiotrBzdrega (Author) commented Nov 18, 2024

@Luap99 Thank you Paul for the reaction,
I thought the infra container was not so inevitable, given that there is a --infra=false CLI option 😀
I read in "Podman in Action" by Daniel Walsh that:

The infra container (pause container) is similar to the rootless pause container; its only purpose is to hold open the namespaces and cgroups, while containers come and go. However, each pod will have a different infra container.

In my pod the number of running containers won't change (I don't need to add or remove any at runtime).
I assumed that this was the main task of the infra container: to let new containers join the namespace/cgroup.

But the main (dumb) reason ... is that I would like to filter out the infra container when checking the state of containers, and keep the output short so there is less to process 😅.

An additional side effect to consider: what would happen if we used this configuration variable during podman kube play?
With the current version (5.3.0), if the .yaml file is broken, the pod is still created, containing only the infra container, with pod status = Created.
How would podman behave if there were no dedicated infra container 🤔? Would the pod be created with 0 containers, or would nothing happen at all, like with a broken compose file in docker compose?
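
A sketch of the current behavior described above (the file name is illustrative):

$ podman kube play broken.yaml   # fails partway through the YAML
$ podman pod ps                  # the pod is left behind, status Created
$ podman ps -a --pod             # only the infra container exists in it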

Luap99 (Member) commented Nov 18, 2024

Well, without the infra container the containers will not share any namespaces, which is not what most people want. In that case they are just a group of containers where podman pod start/stop starts and stops the containers at the same time.

> But the main (dumb) reason ... is that I would like to filter out the infra container when checking the state of containers, and keep the output short so there is less to process 😅.

I don't know what you are doing, but there is an IsInfra field in the inspect and list container output that can be used to filter.

$ bin/podman ps -a
CONTAINER ID  IMAGE                                        COMMAND     CREATED        STATUS                    PORTS       NAMES
2b6ad4da01c0  localhost/podman-pause:5.4.0-dev-1731674600              6 minutes ago  Up 3 minutes                          a7f302810edb-infra
a940f402b017  docker.io/library/debian:bookworm            bash        3 minutes ago  Exited (0) 3 minutes ago              magical_hamilton
$ bin/podman ps -a --format '{{if not .IsInfra}}{{.State }}{{end}}'

exited
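
Note that the blank line above is the infra container: the Go template still emits an (empty) line for the row it filters out. A variant of the same sketch that also prints the container name:

$ bin/podman ps -a --format '{{if not .IsInfra}}{{.Names}} {{.State}}{{end}}'

magical_hamilton exited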

> An additional side effect to consider: what would happen if we used this configuration variable during podman kube play?
> With the current version (5.3.0), if the .yaml file is broken, the pod is still created, containing only the infra container, with pod status = Created.

Yeah, I think this is pretty much undefined behavior. I guess on errors we should clean the pod up again, which the code seems to say as well:

if playErr := kubeplay(reader); playErr != nil {
    // FIXME: The cleanup logic below must be fixed to only remove
    // resources that were created before a failure. Otherwise,
    // rerunning the same YAML file will cause an error and remove
    // the previously created workload.
    //
    // teardown any containers, pods, and volumes that might have been created before we hit the error
    // reader, err := readerFromArg(args[0])
    // if err != nil {
    //     return err
    // }
    // if err := teardown(reader, entities.PlayKubeDownOptions{Force: true}, true); err != nil && !errorhandling.Contains(err, define.ErrNoSuchPod) {
    //     return fmt.Errorf("error tearing down workloads %q after kube play error %q", err, playErr)
    // }
    return playErr
}

So I am not sure why this isn't done today or what the problems with that are.
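
Until then, the leftover pod from a failed play can be cleaned up by hand; a minimal sketch (the file name is illustrative):

$ podman kube down broken.yaml   # tears down the workload created from that YAML
$ podman pod rm -f <podname>     # or remove the half-created pod directly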


A friendly reminder that this issue had no activity for 30 days.

Luap99 (Member) commented Jan 7, 2025

I'm closing this as won't fix; the k8s behavior requires an infra container.

Luap99 closed this as not planned on Jan 7, 2025