JetBrains AI Assistant integration? #694
@nikAizuddin Is https://www.jetbrains.com/pycharm/ a valid IDE to test with? I've never used JetBrains; I would like to give it a try.
Yup, PyCharm Community Edition should be okay.
Looks like RamaLama directly uses llama.cpp, as opposed to Ollama, which wraps the responses in its own API (see line 364 in 032efae).
The error message seems to be coming from llama.cpp's server. Ollama shims llama.cpp; the key difference, however, is that Ollama serves a status page at `/`. Currently seeing 3 possible fixes:
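For anyone who wants to see the difference, here is a minimal probe of `/` on both servers (a sketch only; the ports are the usual defaults, 11434 for Ollama and 8080 for `ramalama serve`, so adjust as needed):

```python
import urllib.request
import urllib.error

# Assumed default endpoints; adjust to wherever each server is listening.
ENDPOINTS = {
    "ollama": "http://localhost:11434/",
    "ramalama (llama.cpp)": "http://localhost:8080/",
}

for name, url in ENDPOINTS.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}: {resp.read(200)!r}")
    except urllib.error.HTTPError as e:
        # llama.cpp (with the webui off) answers 404 with a JSON error body
        # here, instead of Ollama's 200 status page.
        print(f"{name}: HTTP {e.code}: {e.read(200)!r}")
    except urllib.error.URLError as e:
        print(f"{name}: no response ({e.reason})")
```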
I would prefer 3; llama.cpp is a much more open project and the basis of most of the work being done by Ollama. JetBrains should support llama.cpp.
You raised my interest though @eye942: we have the llama.cpp webui on by default in RamaLama. Does having it on cause issues? I think it's quite useful, but I have always assumed it had no impact on the REST API (maybe a false assumption).
Yeah, the issue raised here is a llama.cpp server-side error, raised only when a client that doesn't accept gzip requests `/` (the webui assets are served gzip-compressed). That is the only side effect of having the webui on, from what I can tell.

If the webui is off, that is, I believe, still incompatible with Ollama's/OpenAI's API, because llama.cpp then serves a 404 error with a corresponding JSON body:

```json
{"error":{"code":404,"message":"File Not Found","type":"not_found_error"}}
```

OpenAI's API spec doesn't specify what should be served at `/` (however, they do serve a 200 response, linked):

```json
{
  "message": "Welcome to the OpenAI API! Documentation is available at https://platform.openai.com/docs/api-reference"
}
```

Haven't tested it with JetBrains' product, but it's highly likely that they are just determining whether `/` returns a 200.
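If that guess is right, the client-side check could be as small as this hypothetical sketch (the function name and the 200-on-`/` heuristic are assumptions, not JetBrains' actual code):

```python
import urllib.request
import urllib.error

def server_looks_alive(base_url: str) -> bool:
    """Hypothetical health check: does GET / answer with HTTP 200?"""
    try:
        with urllib.request.urlopen(base_url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        # Covers connection failures and HTTP errors (e.g. llama.cpp's 404).
        return False

# What exactly the real client inspects (status code, body, headers) is
# speculation; this just illustrates the kind of probe being discussed.
print(server_looks_alive("http://localhost:11434/"))
```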
We should probably turn off the web UI by default for `ramalama serve` and then add an option to enable it.
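A minimal sketch of how the `serve` path could wire that up, assuming llama-server's `--no-webui` flag (the option name and plumbing here are hypothetical, not a merged design):

```python
import argparse
import subprocess

# Hypothetical CLI plumbing for a web UI toggle on `ramalama serve`.
parser = argparse.ArgumentParser(prog="ramalama serve")
parser.add_argument("--webui", choices=["on", "off"], default="off",
                    help="enable the llama.cpp web UI (default: off)")
parser.add_argument("model")
args = parser.parse_args()

cmd = ["llama-server", "--port", "8080", "--model", args.model]
if args.webui == "off":
    # assumed llama-server flag that disables the built-in web UI
    cmd.append("--no-webui")
subprocess.run(cmd, check=True)
```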
Is RamaLama a drop-in replacement for Ollama? I'm trying to use RamaLama with JetBrains AI Assistant, but it failed to connect.

Logs:

It seems ramalama returned `gzip is not supported by this browser` to JetBrains AI Assistant.

Steps to reproduce:

```shell
ramalama --image localhost/ramalama/rocm-gfx9:latest serve qwen2.5-coder:7b
```