
podman-remote: create command is wrong #25026

Closed
axel7083 opened this issue Jan 16, 2025 · 11 comments · Fixed by #25092
Labels
jira, kind/bug (Categorizes issue or PR as related to a bug)

@axel7083

Issue Description

When checking the CreateCommand of a container created through the API, we get some unexpected results.

Steps to reproduce the issue

  1. Create a container using the API (e.g. using Podman Desktop; a minimal reproduction sketch follows this list)
  2. Inspect the container's Config.CreateCommand
  3. Observe that the recorded command is wrong
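A minimal reproduction sketch without Podman Desktop, assuming the rootless API socket from the podman info output below; the image and the v1.41 compat-API prefix are only examples:

$: podman pull quay.io/podman/hello
$: curl --unix-socket /run/user/1000/podman/podman.sock \
     -H "Content-Type: application/json" \
     -d '{"Image": "quay.io/podman/hello"}' \
     "http://d/v1.41/containers/create?name=hello-container"
$: podman inspect hello-container | jq .[0].Config.CreateCommand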

Describe the results you received

$: podman inspect hello-container | jq .[0].Config.CreateCommand
[
  "podman",
  "system",
  "service",
  "--time=0"
]

Describe the results you expected

Empty array or nil.

podman info output

host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 94.31
    systemPercent: 0.82
    userPercent: 4.87
  cpus: 16
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "41"
  eventLogger: journald
  freeLocks: 2031
  hostname: fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.12.8-200.fc41.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 19032514560
  memTotal: 67107631104
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20241211.g09478d5-1.fc41.x86_64
    version: |
      pasta 0^20241211.g09478d5-1.fc41.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 52h 2m 21.00s (Approximately 2.17 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/axel7083/.config/containers/storage.conf
  containerStore:
    number: 10
    paused: 0
    running: 0
    stopped: 10
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/axel7083/.local/share/containers/storage
  graphRootAllocated: 1022505254912
  graphRootUsed: 396294459392
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 670
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/axel7083/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1732147200
  BuiltTime: Thu Nov 21 01:00:00 2024
  GitCommit: ""
  GoVersion: go1.23.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

axel7083 added the kind/bug label on Jan 16, 2025
@afbjorklund
Contributor

Seems like it is broken when using the Docker client (like Podman Desktop), but not when using podman-remote.
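For illustration, a rough sketch of the two paths (container names are hypothetical, the Docker CLI is pointed at the Podman socket from the report above, and the inspect comments reflect the behaviour described in this issue):

$: podman-remote create --name via-remote quay.io/podman/hello
$: docker --host unix:///run/user/1000/podman/podman.sock create --name via-docker quay.io/podman/hello
$: podman inspect via-remote | jq .[0].Config.CreateCommand   # records the podman-remote create invocation
$: podman inspect via-docker | jq .[0].Config.CreateCommand   # records the "podman system service" arguments instead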

@axel7083
Author

Thanks @Luap99

@deversmann

Am I correct in assuming that I should see this fixed in podman version 5.4.0-rc2? Because I just installed that and I still see the following for a container started by Podman Desktop. Or am I misunderstanding this issue entirely?

               "CreateCommand": [
                    "/usr/bin/podman",
                    "--log-level=info",
                    "system",
                    "service"
               ],

@Luap99
Member

Luap99 commented Jan 31, 2025

Did you create a new container? Once it is in the container config we cannot really change it.
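Since the CreateCommand is only written at creation time, an existing container keeps the stale value even after upgrading. A sketch of how to re-check with a fresh container (container name taken from the original report):

$: podman rm -f hello-container
$: # recreate the container through the API (e.g. from Podman Desktop), then:
$: podman inspect hello-container | jq .[0].Config.CreateCommand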

@Luap99
Member

Luap99 commented Jan 31, 2025

Also, the version inside the machine VM matters, so make sure to check that as well. The client version does not matter here, as this is a server-side fix.

@deversmann

deversmann commented Jan 31, 2025

Yup... new machine... though now that I'm looking closer, the new machine VM still has Podman 5.3.1 in it. Do I just update that manually or is there a different image I need to tell my local Podman to download?

@Luap99
Member

Luap99 commented Jan 31, 2025

If you remove the machine and create a new one, it will update the image.

The other way to update the image inside the existing machine is podman machine os apply quay.io/podman/machine-os:5.4 --restart; currently the 5.4 image should ship with rc2.
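For reference, recreating the machine from scratch would look roughly like this (assuming the default machine name):

$: podman machine stop
$: podman machine rm podman-machine-default
$: podman machine init
$: podman machine start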

@deversmann

deversmann commented Jan 31, 2025

deversma@deversma-mac [11:26:09] [~]
-> % podman machine os apply quay.io/podman/machine-os:5.4 --restart
Pulling manifest: ostree-unverified-registry:quay.io/podman/machine-os:5.4
Staging deployment...done
Changes queued for next boot. Run "systemctl reboot" to start a reboot
Machine "podman-machine-default" restarted successfully
deversma@deversma-mac [11:26:29] [~]
-> % podman machine ssh
Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Fedora CoreOS 41.20250105.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

Last login: Fri Jan 31 10:27:34 2025 from 192.168.127.1
core@localhost:~$ podman --version
podman version 5.3.1
core@localhost:~$

@Luap99
Member

Luap99 commented Jan 31, 2025

Oh, that looks quite bad; something is wrong in the image build pipeline.

@deversmann

Is this a bug report I should file somewhere, or have you got what you need from here?

@Luap99
Member

Luap99 commented Jan 31, 2025

I will take care of it next week, containers/podman-machine-os#75
