I’d like to suggest adding support for llama.cpp models to expand Alpaca’s compatibility. In particular, llama.cpp’s Vulkan backend would give users with older AMD GPUs access to GPU-accelerated inference, improving performance and opening Alpaca up to a wider range of hardware setups.
Thank you.