v0.5.4
What's Changed
- Attempt to install podman by @ericcurtin in #621
- Introduce ramalama bench by @ericcurtin in #620
- Add man page for CUDA support by @rhatdan in #623
- Less verbose output by @ericcurtin in #624
- Avoid dnf install on OSTree system by @ericcurtin in #622
- Fix list in README - Credits section by @kubealex in #627
- Added Mac CPU-only support by @bmahabirbu in #628
- Added --jinja to llama-run command by @engelmi in #625
- Update llama.cpp version by @ericcurtin in #630
- Add shortname for deepseek by @rhatdan in #631
- Fixed ROCm detection by adding gfx targets to the Containerfile by @bmahabirbu in #632
- Point macOS users to script install by @kubealex in #635
- Update docker.io/nvidia/cuda Docker tag to v12.8.0 by @renovate in #633
- feat: add argument to define AMD GPU targets by @jobcespedes in #634
- Bump to v0.5.4 by @rhatdan in #641
New Contributors
- @kubealex made their first contribution in #627
- @engelmi made their first contribution in #625
- @jobcespedes made their first contribution in #634
Full Changelog: v0.5.3...v0.5.4