pulling image with zstd:chunked compression is returning "manifest too big" error #24885
Comments
I tried experimenting with an image manifest that had …
One thing I have lamented in this scenario is that I don't seem to be able to instruct …
@mtrmac Is this something you are aware of? Maybe it is already taken care of with your recent fixes?
There is a hard-coded limit (50 MB) on the zstd:chunked metadata kept in memory. Looking at

```
% skopeo inspect --raw docker://quay.io/miabbott/pet@sha256:ff8a0ec3c6948096280d51d061991ecb4743e75a2802bdf47312b1d206dec14e | jq .
{
  …
  "layers": [
    …
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+zstd",
      "digest": "sha256:9bf8f280dd477dfb0f961f44e53a58d223cb5730d4135e83a11045fdd8ba0c11",
      "size": 1539239266,
      "annotations": {
        "io.github.containers.zstd-chunked.manifest-checksum": "sha256:7222cb20cddde751c56f389c67bc7a236f9057d8032bacdf855353d50dd7f75f",
        "io.github.containers.zstd-chunked.manifest-position": "1512123515:15866144:96193119:1",
        "io.github.containers.zstd-chunked.tarsplit-position": "1527989667:11249527:325306389"
      }
    }
  ],
  …
}
```

the last layer has a zstd:chunked manifest of roughly 96 MB uncompressed and a tar-split of roughly 325 MB (reading the position annotations as offset:compressed-length:uncompressed-length), well over that limit.
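To illustrate where those numbers come from, here is a minimal sketch that parses the two position annotations above, assuming the "offset:compressedLength:uncompressedLength[:type]" layout just described (that layout is my reading of the annotation values, not something stated in this thread):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePosition splits a position annotation of the assumed form
// "offset:compressedLength:uncompressedLength[:type]" and returns the
// uncompressed length, which is what ends up in memory.
func parsePosition(v string) (uint64, error) {
	parts := strings.Split(v, ":")
	if len(parts) < 3 {
		return 0, fmt.Errorf("unexpected annotation format: %q", v)
	}
	return strconv.ParseUint(parts[2], 10, 64)
}

func main() {
	annotations := map[string]string{
		"io.github.containers.zstd-chunked.manifest-position": "1512123515:15866144:96193119:1",
		"io.github.containers.zstd-chunked.tarsplit-position": "1527989667:11249527:325306389",
	}
	for name, value := range annotations {
		size, err := parsePosition(value)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %.1f MB uncompressed\n", name, float64(size)/1e6)
	}
}
```

Run against the values above, this prints ~96.2 MB for the manifest and ~325.3 MB for the tar-split.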
In the immediate term, the manifest + tar-split are kept in memory; restructuring to store them only on disk, and to stream them, would be a bit non-trivial.

Right now, we could either increase the limit (but then, if not 50 MB, what is a reasonable amount? 50 MB might already be more than some users want); or, I think preferably, recognize the situation and return an error that makes the pull fall back to an ordinary, non-partial download of the layer.
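A minimal sketch of what "recognize the situation and fall back" could look like; the names (tocSizeLimit, errFallbackToFullPull, checkMetadataSize) are hypothetical placeholders, not the actual containers/storage API:

```go
package chunked

import (
	"errors"
	"fmt"
)

// tocSizeLimit is a hypothetical stand-in for the hard-coded ~50 MB cap on the
// uncompressed zstd:chunked metadata kept in memory.
const tocSizeLimit = 50 * 1024 * 1024

// errFallbackToFullPull is a sentinel error a caller could check for and, instead
// of failing the whole pull, retry the layer as an ordinary non-partial download.
var errFallbackToFullPull = errors.New("zstd:chunked metadata too large for partial pull")

// checkMetadataSize would run before the TOC/tar-split are read into memory.
func checkMetadataSize(uncompressedSize uint64) error {
	if uncompressedSize > tocSizeLimit {
		return fmt.Errorf("%w: %d bytes (limit %d)", errFallbackToFullPull, uncompressedSize, tocSizeLimit)
	}
	return nil
}
```

A caller could then use errors.Is(err, errFallbackToFullPull) to re-download the layer normally rather than fail the whole pull.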
For the record, this is the first time I have heard of it; I have added a link to containers/image#2189.
For now we could just bump the limit, since it was arbitrarily chosen. Maybe we could avoid the hard-coded limit if the current process is running in a cgroup with a memory limit set? Or make it a proportion of the image size (a few percentage points)?
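A rough sketch of how those two ideas could be combined, assuming cgroup v2 with the unified hierarchy mounted at /sys/fs/cgroup; the function names and the exact fractions are illustrative only:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cgroupMemoryMax returns the cgroup v2 memory limit of the current process,
// or 0 if it is "max" (unlimited) or cannot be read. Assumes the unified
// hierarchy is mounted at /sys/fs/cgroup and the process sees its own limit there.
func cgroupMemoryMax() uint64 {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.max")
	if err != nil {
		return 0
	}
	s := strings.TrimSpace(string(data))
	if s == "max" {
		return 0
	}
	v, err := strconv.ParseUint(s, 10, 64)
	if err != nil {
		return 0
	}
	return v
}

// metadataLimit combines the two ideas: a few percent of the layer size,
// capped by a fraction of the cgroup memory limit when one is set, with the
// current 50 MB as the floor. The fractions are arbitrary placeholders.
func metadataLimit(layerSize uint64) uint64 {
	limit := uint64(50 * 1024 * 1024)
	if proportional := layerSize / 20; proportional > limit { // ~5% of the layer
		limit = proportional
	}
	if memMax := cgroupMemoryMax(); memMax != 0 && limit > memMax/4 {
		limit = memMax / 4 // never more than a quarter of the memory budget
	}
	return limit
}

func main() {
	fmt.Println(metadataLimit(1539239266)) // the ~1.5 GB layer from this report
}
```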
There are various things we could do to tune the limit; we can always defer to users and make it an option. But, first, fundamentally there must be some limit, because the computer has limited RAM, and there is no guarantee that the amount of available RAM is proportionate to the available disk space.

Interestingly, tar-split data is kept in memory both for partial and full pulls, although for full pulls it seems it should be possible to stream it with limited RAM requirements.

One other point to think about is that the tar-split data scales with the number of files, and can hypothetically be fairly small for huge images with a few large files; but the zstd:chunked manifest, which chunks files using the rolling sum, has a lower bound of one entry per 8 kB.
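To put rough numbers on that lower bound for the layer in this report (using the compressed blob size as a stand-in for the amount of data that gets chunked, which is only an approximation):

```go
package main

import "fmt"

func main() {
	const (
		blobSize = 1539239266 // compressed size of the layer above, as a rough stand-in for the chunked data
		minChunk = 8 * 1024   // at least one TOC entry per ~8 kB of content
		tocSize  = 96193119   // uncompressed zstd:chunked manifest size from the annotation
	)
	minEntries := blobSize / minChunk
	fmt.Printf("at least ~%d TOC entries\n", minEntries)            // ~188,000 entries
	fmt.Printf("~%d bytes of TOC per entry\n", tocSize/minEntries)  // ~512 bytes each
}
```

In other words, if most of the observed ~96 MB manifest is chunk entries, it works out to roughly half a kilobyte of metadata per 8 kB chunk, so a layer this size cannot have a dramatically smaller TOC.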
(Also, for the record, containers/image#2698 will track reporting the error earlier; the failure happens ~immediately, but for this image we still spend quite a lot of time trying to pull other parts of the image.)
Issue Description
I regularly build a "pet" container via GitHub Actions and push it to quay.io as a zstd:chunked compressed image. See the GHA definition here: https://github.com/miabbott/pet/blob/main/.github/workflows/build.yml

When I tried pulling the latest image today, the pull failed with the "manifest too big" error.
I suspected this might be a problem with pushing the image using the older version of podman in the Ubuntu runner for GHA, but I was able to reproduce the same error when building/pushing on Fedora 40 with podman 5.3.1.
Steps to reproduce the issue

1. Build and push a zstd:chunked compressed image to quay.io
2. Pull the image

Describe the results you received

The pull fails with the "manifest too big" error.
Describe the results you expected
Successful retrieval of the image.
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes