Revert "Added --jinja to llama-run command" #683
Conversation
Reviewer's Guide by Sourcery

This pull request reverts the addition of the --jinja flag to the llama-run command. The flag was added in a previous pull request, but it was decided that it should not be included by default.

Sequence diagram for llama-run command execution without the --jinja flag:

```mermaid
sequenceDiagram
    participant User
    participant Model
    participant LlamaRun
    User->>Model: Execute llama-run
    Model->>LlamaRun: Run with context and temp
    Note over LlamaRun: No Jinja templating
    LlamaRun-->>Model: Process completion
    Model-->>User: Return results
```

Class diagram showing Model class changes:

```mermaid
classDiagram
    class Model {
        +build_exec_args_run(args, model_path, prompt)
        +build_exec_args_bench(args, model_path)
    }
    note for Model "Removed --jinja flag from llama-run command arguments"
```
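To make the change concrete, here is a minimal sketch of what a build_exec_args_run helper might look like after the revert. This is a hypothetical illustration, not the actual ramalama code: the attribute names (args.context, args.temp) and the exact argument order are assumptions.

```python
from types import SimpleNamespace


def build_exec_args_run(args, model_path, prompt):
    """Hypothetical sketch: assemble the llama-run command line.

    The reverted change appended "--jinja" to this list unconditionally;
    omitting it keeps compatibility with older container images whose
    llama-run binary does not recognize the flag.
    """
    exec_args = ["llama-run", "-c", str(args.context), "--temp", str(args.temp)]
    # No "--jinja" appended here after the revert.
    exec_args += [model_path, prompt]
    return exec_args


# Example usage with made-up values:
args = SimpleNamespace(context=2048, temp=0.8)
print(build_exec_args_run(args, "/path/to/model.gguf", "Hello"))
```

The key point is simply that the flag is no longer part of the generated argument list, so the command works against both old and new images.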
Hey @ericcurtin - I've reviewed your changes - here's some feedback:
Overall Comments:
- Please provide more context in the PR description about why this feature is being reverted. This will help with future maintenance and understanding of the codebase.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Signed-off-by: Eric Curtin <[email protected]>
Force-pushed the branch from ab19d16 to 45ba7e7.
I'm going to propose we remove this for just a few weeks and re-introduce it once the newer container images have propagated to people's machines a little more. I'm a little worried about giving the FOSDEM talk and things not working when people try it out. So it's just a temporary revert, for a few weeks.
This also makes me question how well our container image versioning is actually working. I had an older container image on my machine and was trying to run a newer version of ramalama, and it didn't pull the newer image (and hence didn't work because of the missing --jinja argument).
LGTM
Reverts #625
Summary by Sourcery

Enhancements:
- Remove the --jinja flag from the llama-run command.