Pulling an image with zstd:chunked compression returns "manifest too big" error #24885

Open
miabbott opened this issue Dec 20, 2024 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

miabbott (Contributor) commented Dec 20, 2024

Issue Description

I regularly build a "pet" container via GitHub Actions and push it to quay.io as a zstd:chunked compressed image. See the GHA workflow definition here - https://github.com/miabbott/pet/blob/main/.github/workflows/build.yml

When I tried pulling the latest image today, I got the following error:

$ podman pull quay.io/miabbott/pet                                                                                                                                        
Trying to pull quay.io/miabbott/pet:latest...                                                                                                                                                  
Getting image source signatures                                                                                                                                                                
Copying blob 4ff7216921d7 done  34.5KiB / 34.5KiB (skipped: 240.0b = 0.68%)                                                                                                                    
Copying blob 1753f747eed5 done  339.1MiB / 339.1MiB (skipped: 6.6KiB = 0.00%)                                                                                                                  
Error: partial pull of blob sha256:133f7c600c0bfb06bb109c35d10339ee36e223727510624b0c538d6dd5aba990: read zstd:chunked manifest: manifest too big 

I suspected this might be a problem with pushing the image using the older version of podman in the Ubuntu runner for GHA, but I was able to reproduce the same error when building/pushing on Fedora 40 with podman 5.3.1.

Steps to reproduce the issue

  1. Build the image from https://github.com/miabbott/pet/tree/main
  2. Push it as a zstd:chunked compressed image to quay.io
  3. Pull the compressed image back from quay.io (a command sketch follows this list)
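
Concretely, the cycle can be sketched as follows (the tag is illustrative, and --compression-format zstd:chunked is the podman push flag that selects chunked compression):

$ git clone https://github.com/miabbott/pet && cd pet
$ podman build -t quay.io/miabbott/pet:latest .
$ podman push --compression-format zstd:chunked quay.io/miabbott/pet:latest
$ podman rmi quay.io/miabbott/pet:latest   # drop the local copy to force a full pull
$ podman pull quay.io/miabbott/pet:latest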

Describe the results you received

$ podman pull quay.io/miabbott/pet                                                                                                                                        
Trying to pull quay.io/miabbott/pet:latest...                                                                                                                                                  
Getting image source signatures                                                                                                                                                                
Copying blob 4ff7216921d7 done  34.5KiB / 34.5KiB (skipped: 240.0b = 0.68%)                                                                                                                    
Copying blob 1753f747eed5 done  339.1MiB / 339.1MiB (skipped: 6.6KiB = 0.00%)                                                                                                                  
Error: partial pull of blob sha256:133f7c600c0bfb06bb109c35d10339ee36e223727510624b0c538d6dd5aba990: read zstd:chunked manifest: manifest too big 

Describe the results you expected

Successful retrieval of the image.

podman info output

From the client pulling the image:

$ podman info
host:                   
  arch: amd64           
  buildahVersion: 1.38.0
  cgroupControllers:                
  - cpu                                     
  - io         
  - memory                     
  - pids                                              
  cgroupManager: systemd
  cgroupVersion: v2              
  conmon:              
    package: conmon-2.1.12-2.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '  
  cpuUtilization:
    idlePercent: 95.34
    systemPercent: 0.96
    userPercent: 3.7
  cpus: 8   
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: silverblue
    version: "40"
  eventLogger: journald
  freeLocks: 2048
  hostname: meatwad
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000           
      size: 1                 
    - container_id: 1
      host_id: 524288
      size: 65536                                               
    uidmap:      
    - container_id: 0
      host_id: 1000
      size: 1 
    - container_id: 1
      host_id: 524288     
      size: 65536 
  kernel: 6.12.4-100.fc40.x86_64                               
  linkmode: dynamic               
  logDriver: journald        
  memFree: 11606368256
  memTotal: 33336758272    
  networkBackend: netavark     
  networkBackendInfo:      
    backend: netavark         
    dns:                     
      package: aardvark-dns-1.13.1-1.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.0-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.0
  ociRuntime:                                                           
    name: crun
    package: crun-1.18.2-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-                      
      crun version 1.18.2
      commit: 00ab38af875ddd0d1a8226addda52e1de18339b5
      rundir: /run/user/1000/crun
      spec: 1.0.0    
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:                                                         
    executable: /usr/bin/pasta
    package: passt-0^20241127.gc0fbc7e-1.fc40.x86_64
    version: |
      pasta 0^20241127.gc0fbc7e-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc40.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 4707753984
  swapTotal: 8589930496
  uptime: 23h 0m 1.00s (Approximately 0.96 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /var/home/miabbott/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/miabbott/.local/share/containers/storage
  graphRootAllocated: 433944612864
  graphRootUsed: 355216216064
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 72
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/miabbott/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1732147200
  BuiltTime: Wed Nov 20 19:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.7
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

@miabbott miabbott added the kind/bug Categorizes issue or PR as related to a bug. label Dec 20, 2024
miabbott (Contributor, Author) commented Jan 6, 2025

I tried experimenting with an image manifest list that had both gzip and zstd:chunked compressed variants pushed to the registry. Somewhat predictably, I got the same error, but this time with --log-level=debug turned on:

$ podman manifest create pet:manifest
d347b982027187cdfdc2371dabd91c73b5253ff6bbd79375a448754093d2165c

$ podman manifest add pet:manifest pet:amd64
d347b982027187cdfdc2371dabd91c73b5253ff6bbd79375a448754093d2165c

$ podman manifest push --add-compression=zstd:chunked pet:manifest quay.io/miabbott/pet:manifest
Getting image list signatures
Copying 2 images generated from 1 images in list
Copying image sha256:9ea4031a5d7713bcb82205f426c68a3864c47b832f20f421444fd8218dcf3b0a (1/2)
Getting image source signatures
Copying blob 1cf1de6333b6 skipped: already exists
Copying blob c53d1896e92f skipped: already exists
Copying blob 47dad5a05ef4 skipped: already exists
Copying config 73172a6dc6 done   |
Writing manifest to image destination
Replicating image sha256:9ea4031a5d7713bcb82205f426c68a3864c47b832f20f421444fd8218dcf3b0a (2/2)
Getting image source signatures
Copying blob 1817ea1450a3 done   |
Copying blob f0ec3a240662 done   |
Copying blob 50f293228a5d done   |
Copying config 73172a6dc6 done   |
Writing manifest to image destination
Writing manifest list to image destination
Storing list signatures

$ podman manifest inspect pet:manifest
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 926,
            "digest": "sha256:9ea4031a5d7713bcb82205f426c68a3864c47b832f20f421444fd8218dcf3b0a",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        }
    ]
}

$ podman manifest inspect quay.io/miabbott/pet:manifest
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 939,
            "digest": "sha256:5ec163cef7976cee29554cdfa60db3080b5bf236e88eaf93de027d68dfd935f9",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 1867,
            "digest": "sha256:ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            },
            "annotations": {
                "io.github.containers.compression.zstd": "true"
            }
        }
    ]
}

$ podman --log-level=debug pull quay.io/miabbott/pet:manifest
INFO[0000] podman filtering at log level debug
DEBU[0000] Called pull.PersistentPreRunE(podman --log-level=debug pull quay.io/miabbott/pet:manifest)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/home/miabbott/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /var/home/miabbott/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /var/home/miabbott/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 25
DEBU[0000] Pulling image quay.io/miabbott/pet:manifest (policy: always)
DEBU[0000] Looking up image "quay.io/miabbott/pet:manifest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "quay.io/miabbott/pet:manifest" ...
DEBU[0000] reference "[overlay@/var/home/miabbott/.local/share/containers/storage+/run/user/1000/containers]quay.io/miabbott/pet:manifest" does not resolve to an image ID
DEBU[0000] Trying "quay.io/miabbott/pet:manifest" ...
DEBU[0000] reference "[overlay@/var/home/miabbott/.local/share/containers/storage+/run/user/1000/containers]quay.io/miabbott/pet:manifest" does not resolve to an image ID
DEBU[0000] Trying "quay.io/miabbott/pet:manifest" ...
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Attempting to pull candidate quay.io/miabbott/pet:manifest for quay.io/miabbott/pet:manifest
DEBU[0000] parsed reference into "[overlay@/var/home/miabbott/.local/share/containers/storage+/run/user/1000/containers]quay.io/miabbott/pet:manifest"
Trying to pull quay.io/miabbott/pet:manifest...
DEBU[0000] Copying source image //quay.io/miabbott/pet:manifest to destination image [overlay@/var/home/miabbott/.local/share/containers/storage+/run/user/1000/containers]quay.io/miabbott/pet:manifest
DEBU[0000] Using registries.d directory /etc/containers/registries.d
DEBU[0000] Trying to access "quay.io/miabbott/pet:manifest"
DEBU[0000] No credentials matching quay.io/miabbott/pet found in /run/user/1000/containers/auth.json
DEBU[0000] No credentials matching quay.io/miabbott/pet found in /var/home/miabbott/.config/containers/auth.json
DEBU[0000] Found credentials for quay.io/miabbott/pet in credential helper containers-auth.json in file /var/home/miabbott/.docker/config.json
DEBU[0000]  No signature storage configuration found for quay.io/miabbott/pet:manifest, using built-in default file:///var/home/miabbott/.local/share/containers/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io
DEBU[0000] GET https://quay.io/v2/
DEBU[0000] Ping https://quay.io/v2/ status 401
DEBU[0000] GET https://quay.io/v2/auth?account=miabbott&scope=repository%3Amiabbott%2Fpet%3Apull&service=quay.io
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] GET https://quay.io/v2/miabbott/pet/manifests/manifest
DEBU[0000] Content-Type from manifest GET is "application/vnd.oci.image.index.v1+json"
DEBU[0000] Using SQLite blob info cache at /var/home/miabbott/.local/share/containers/cache/blob-info-cache-v1.sqlite
DEBU[0000] Source is a manifest list; copying (only) instance sha256:ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e for current system
DEBU[0000] GET https://quay.io/v2/miabbott/pet/manifests/sha256:ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e
DEBU[0000] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json"
DEBU[0000] IsRunningImageAllowed for image docker:quay.io/miabbott/pet:manifest
DEBU[0000]  Using transport "docker" policy section ""
DEBU[0000]  Requirement 0: allowed
DEBU[0000] Overall: allowed
DEBU[0000] Downloading /v2/miabbott/pet/blobs/sha256:73172a6dc6b83e0a59d139ec71efec544a93be428bf258c887c35eb6013257d5
DEBU[0000] GET https://quay.io/v2/miabbott/pet/blobs/sha256:73172a6dc6b83e0a59d139ec71efec544a93be428bf258c887c35eb6013257d5
Getting image source signatures
DEBU[0001] Reading /var/home/miabbott/.local/share/containers/sigstore/miabbott/pet@sha256=ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e/signature-1
DEBU[0001] Not looking for sigstore attachments: disabled by configuration
DEBU[0001] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0001] ... will first try using the original manifest unmodified
DEBU[0001] Checking if we can reuse blob sha256:9bf8f280dd477dfb0f961f44e53a58d223cb5730d4135e83a11045fdd8ba0c11: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+zstd" = true
DEBU[0001] [graphdriver] trying provided driver "overlay"
DEBU[0001] Checking if we can reuse blob sha256:eca3c53f5b0869378a4134f840c7f86cbcd1f87b03cdb24a369dd5f7f7d56b38: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+zstd" = true
DEBU[0001] Checking if we can reuse blob sha256:b530f9061530023f6bf6d4bc89c5156c4fc190af455e4b130eb70828adc80d25: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+zstd" = true
DEBU[0001] Cached value indicated that overlay is supported
DEBU[0001] Cached value indicated that overlay is supported
DEBU[0001] Cached value indicated that metacopy is not being used
DEBU[0001] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0001] Could not create zstd:chunked differ for blob "sha256:9bf8f280dd477dfb0f961f44e53a58d223cb5730d4135e83a11045fdd8ba0c11": read zstd:chunked manifest: manifest too big
Copying blob 9bf8f280dd47 [--------------------------------------] 0.0b / 1.4GiB
DEBU[0001] Downloading /v2/miabbott/pet/blobs/sha256:eca3c53f5b0869378a4134f840c7f86cbcd1f87b03cdb24a369dd5f7f7d56b38
DEBU[0001] GET https://quay.io/v2/miabbott/pet/blobs/sha256:eca3c53f5b0869378a4134f840c7f86cbcd1f87b03cdb24a369dd5f7f7d56b38
Copying blob eca3c53f5b08 [=========>----------------------------] 132.4KiB / 526.3KiB
Copying blob b530f9061530 skipped: already exists
DEBU[0003] Written lookaside cache for layer "2f7b6d216a3789349a676b1a41c31217db16834e2d2a1adc069b7277178d81e0" with length 1152049
Copying blob eca3c53f5b08 [=========>----------------------------] 132.4KiB / 526.3KiB
Copying blob b530f9061530 skipped: already exists
DEBU[0004] [graphdriver] trying provided driver "overlay"
DEBU[0004] Cached value indicated that overlay is supported
DEBU[0004] Cached value indicated that overlay is supported
DEBU[0004] Cached value indicated that metacopy is not being used
Copying blob eca3c53f5b08 [=========>----------------------------] 132.4KiB / 526.3KiB
Copying blob b530f9061530 skipped: already exists
DEBU[0004] Written lookaside cache for layer "3cb9a67c0ea6107e3866c495514260e69db4a05fd13c8e48b4f3bd08b8bd4110" with length 118318
DEBU[0004] Written lookaside cache for layer "86196b44e2821c805e15f751f5173e08e57ac2ff58ff465f60a3d85533326157" with length 370667
DEBU[0004] Written lookaside cache for layer "f9fcc70cfda3211930453a13e6cff516d4973f35b9b8c10df63bf8970ef03219" with length 68
DEBU[0004] Written lookaside cache for layer "2c9e5896e267d3545409e79a3ff190c41a6482421e4452a9dbe9a0efa8815520" with length 134649
DEBU[0004] Written lookaside cache for layer "8fe595ef6634f7f3e1e3d4fa31b6daa892a7421d4162cdd3b9b2961e723b8b13" with length 201
DEBU[0004] Created zstd:chunked differ for blob "sha256:eca3c53f5b0869378a4134f840c7f86cbcd1f87b03cdb24a369dd5f7f7d56b38"
DEBU[0004] Applying differ in /var/home/miabbott/.local/share/containers/storage/overlay/staging/1221715906/dir
Copying blob eca3c53f5b08 [=========>----------------------------] 132.4KiB / 526.3KiB
Copying blob b530f9061530 skipped: already exists
Copying blob eca3c53f5b08 done  526.3KiB / 526.3KiB (skipped: 244.0b = 0.05%)
Copying blob b530f9061530 skipped: already exists
DEBU[0004] Error pulling candidate quay.io/miabbott/pet:manifest: copying system image from manifest list: partial pull of blob sha256:9bf8f280dd477dfb0f961f44e53a58d223cb5730d4135e83a11045fdd8ba0c11: read zstd:chunked manifest: manifest too big
Error: copying system image from manifest list: partial pull of blob sha256:9bf8f280dd477dfb0f961f44e53a58d223cb5730d4135e83a11045fdd8ba0c11: read zstd:chunked manifest: manifest too big
DEBU[0004] Shutting down engines
INFO[0004] Received shutdown.Stop(), terminating!        PID=389924

One thing I have lamented in this scenario is that I can't seem to instruct podman pull to fall back to the gzip compressed version of the image; I have to explicitly pull by the sha256 digest of the gzip version.
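
In the meantime, two hedged workarounds: pull the gzip instance by digest (the digest is the entry without the zstd annotation in the podman manifest inspect output above), or disable partial pulls via the containers/storage pull_options setting so the zstd layer is downloaded and decompressed the ordinary way (assuming storage.conf does not already contain a [storage.options] section):

$ podman pull quay.io/miabbott/pet@sha256:5ec163cef7976cee29554cdfa60db3080b5bf236e88eaf93de027d68dfd935f9

$ cat >> ~/.config/containers/storage.conf <<'EOF'
[storage.options]
pull_options = {enable_partial_images = "false"}
EOF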

Luap99 (Member) commented Jan 21, 2025

@mtrmac Is this something you are aware of? Maybe it is already taken care of by your recent fixes?

mtrmac (Collaborator) commented Jan 21, 2025

There is a hard-coded limit (maxTocSize) for the chunked manifest.

Looking at

% skopeo inspect --raw docker://quay.io/miabbott/pet@sha256:ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e | jq . 
{
  …
  "layers": [
    …
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+zstd",
      "digest": "sha256:9bf8f280dd477dfb0f961f44e53a58d223cb5730d4135e83a11045fdd8ba0c11",
      "size": 1539239266,
      "annotations": {
        "io.github.containers.zstd-chunked.manifest-checksum": "sha256:7222cb20cddde751c56f389c67bc7a236f9057d8032bacdf855353d50dd7f75f",
        "io.github.containers.zstd-chunked.manifest-position": "1512123515:15866144:96193119:1",
        "io.github.containers.zstd-chunked.tarsplit-position": "1527989667:11249527:325306389"
      }
    }
  ],
  …
}

the last layer has:

  • total size 1.43 GB
  • uncompressed zstd:chunked manifest size of 91.7 MB
  • uncompressed tar-split size 310 MB
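
(Those figures can be read straight off the annotations: per containers/storage, manifest-position is an offset:compressedLength:uncompressedLength:format tuple and tarsplit-position is offset:compressedLength:uncompressedLength. A quick sketch; the jq/awk pipeline here is illustrative:)

% skopeo inspect --raw docker://quay.io/miabbott/pet@sha256:ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e \
    | jq -r '.layers[-1].annotations["io.github.containers.zstd-chunked.manifest-position","io.github.containers.zstd-chunked.tarsplit-position"]' \
    | awk -F: '{printf "uncompressed: %.1f MiB\n", $3/1048576}'
uncompressed: 91.7 MiB
uncompressed: 310.2 MiB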

In the immediate term, the manifest + tar-split are kept in memory; restructuring to store them only on disk, and to stream them, would be a bit non-trivial (e.g. for ensureTOCMatchesTarSplit, we would either need to build a random-access database, a massive overkill, or do it the 1970s way and sort both items on disk, with limited memory use, and then compare two sorted data streams).
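
(In shell terms, that 1970s approach is roughly the following, with entries.toc and entries.tarsplit standing in as hypothetical per-file record dumps from the two structures:)

% sort -o toc.sorted entries.toc            # external merge sort, bounded memory
% sort -o tarsplit.sorted entries.tarsplit
% comm -3 toc.sorted tarsplit.sorted        # prints only records that differ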

Right now, we could either increase the limit (but then, if not 50 MB, what is a reasonable amount? 50 MB might already be more than some users might want); or, I think preferably, recognize the situation and return ErrFallbackToOrdinaryLayerDownload, giving up on a chunked pull but still reading this Zstd-compressed version. @giuseppe WDYT, are there other options to consider?

mtrmac (Collaborator) commented Jan 21, 2025

@mtrmac Is this something you are aware of?

For the record, this is the first time I have heard of it; I have added a link to containers/image#2189.

giuseppe (Member) commented:

For now we could just bump the limit, since it was arbitrarily chosen.

Maybe we could avoid the hard-coded limit if the current process is running in a cgroup with a memory limit set? Or make it a proportion of the image size (a few percentage points)?
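
(For illustration only, assuming cgroup v2, where memory.max reads "max" when no limit is set, in which case some fixed cap would still be needed; the 1% proportion is arbitrary:)

$ limit=$(cat /sys/fs/cgroup"$(awk -F: '$1=="0" {print $3}' /proc/self/cgroup)"/memory.max)
$ if [ "$limit" = max ]; then echo "no cgroup limit; keep a fixed cap"; else echo "candidate TOC cap: $((limit / 100)) bytes"; fi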

mtrmac (Collaborator) commented Jan 24, 2025

There are various things we could do to tune the limit — we can always defer to users and make that an option.

But, first, fundamentally there must be some limit because the computer has limited RAM; and there is no guarantee that the amount of available RAM is proportionate to available disk space.

Interestingly, tar-split data is kept in memory both for partial and full pulls, although for full pulls it seems it should be possible to stream things with limited RAM requirements.


One other point to think about is that the tar-split data scales with the number of files, and can hypothetically be fairly small for huge images with a few large files; but the zstd:chunked manifest, which chunks files using the rolling sum, has a lower limit of one entry per 8 kB.
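
To put a number on that floor, a back-of-the-envelope sketch (the 2 GiB uncompressed size and ~100 bytes of JSON per entry are assumptions, not measurements of this image):

% layer=$((2 * 1024 * 1024 * 1024))   # assumed uncompressed layer size: 2 GiB
% chunk=$((8 * 1024))                 # rolling-sum floor: one TOC entry per 8 kB
% entry=100                          # assumed JSON bytes per TOC entry
% echo "$((layer / chunk)) entries, >= $((layer / chunk * entry / 1048576)) MiB of TOC"
262144 entries, >= 25 MiB of TOC

So a layer of this scale approaches a 50 MB cap before any per-file metadata is counted.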

mtrmac (Collaborator) commented Jan 24, 2025

Right now, we could … recognize the situation and return ErrFallbackToOrdinaryLayerDownload, giving up on a chunked pull but still reading this Zstd-compressed version.

containers/storage#2230

mtrmac (Collaborator) commented Jan 28, 2025

(Also, for the record, containers/image#2698 will track reporting the error earlier; the error happens ~immediately, but for this image we still spend quite a lot of time trying to pull other parts of the image.)
