Using llama.cpp with Vulkan support should provide GPU acceleration on a wide range of graphics cards, and it would also be easier to integrate than Ollama. LM Studio is a good example of this approach.
LM Studio supports an OpenAI-like API, so you can use the OpenAI handler and run LM Studio externally, if that solves your use case.
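As a minimal sketch of that workaround, the snippet below builds an OpenAI-style chat-completions payload for LM Studio's local server using only the Python standard library. The base URL, port, and model name are assumptions (LM Studio's local server commonly defaults to port 1234, but check your own configuration), and the actual network call is left commented out since it requires a running server.

```python
import json
import urllib.request

# Assumption: LM Studio's OpenAI-compatible server is listening here.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Hello!")

# To actually send it (requires a running LM Studio server):
# req = urllib.request.Request(
#     BASE_URL + "/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

Because the endpoint mimics the OpenAI API shape, any existing OpenAI handler can be pointed at `BASE_URL` without code changes beyond the base URL and model name.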
Maybe we can add a specialized handler that does a few extra things, like the one for Ollama.
For direct llama.cpp support, getting hardware acceleration to work under Flatpak always requires more effort. I will check what I can do.
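For reference, a hedged sketch of what the Flatpak side might involve: the manifest needs `--device=dri` to expose GPU device nodes, and llama.cpp can be built with its Vulkan backend enabled via CMake. The module layout and option names below are illustrative assumptions (the Vulkan CMake flag has changed name across llama.cpp versions, so verify against the version you pin).

```yaml
# Hypothetical excerpt of a Flatpak manifest bundling llama.cpp.
# --device=dri exposes GPU device nodes; Vulkan drivers are then
# supplied by the runtime's driver extensions.
finish-args:
  - --device=dri
modules:
  - name: llama-cpp
    buildsystem: cmake-ninja
    config-opts:
      - -DGGML_VULKAN=ON   # assumption: flag name in recent llama.cpp
    sources:
      - type: git
        url: https://github.com/ggerganov/llama.cpp.git
```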