A web interface for Ollama, providing a user-friendly way to interact with local language models.

- Ollama Installation: Make sure Ollama is installed on your system. You can download it from the official Ollama website.
- Modern Browser: Use a modern browser with IndexedDB support, which the interface uses for storage.
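
Since chats are stored in IndexedDB, a quick feature check can tell you up front whether history will persist. This is a minimal, generic sketch (standard web APIs only, not code from this repository):

```ts
// Minimal sketch: verify the browser exposes IndexedDB before relying
// on it for chat-history storage. Generic feature detection, not
// project-specific code.
function hasIndexedDB(): boolean {
  return typeof window !== "undefined" && "indexedDB" in window;
}

if (!hasIndexedDB()) {
  console.warn("IndexedDB is unavailable; chat history will not persist.");
}
```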

- Clone the repository from GitHub.
- Run the Ollama server: `ollama serve`
- Start a simple HTTP server in the same directory as `index.html`: `python3 -m http.server 8080`
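
To confirm the interface can actually reach Ollama once `ollama serve` is running, you can query the server's `/api/tags` endpoint, which lists locally installed models. A minimal sketch, assuming Ollama's default port 11434:

```ts
// Minimal sketch: confirm the Ollama server is reachable and list the
// locally installed models. GET /api/tags is Ollama's endpoint for
// installed models; 11434 is Ollama's default port.
async function listModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama server responded with ${res.status}`);
  }
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}

listModels()
  .then((names) => console.log("Available models:", names))
  .catch((err) => console.error("Is `ollama serve` running?", err));
```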

- Host the Project Files: Upload the repository files to a hosting service or server (e.g., https://ollama-x.web.app).
- Start the Ollama Server:
  - Set the `OLLAMA_ORIGINS` environment variable to allow the hosted site to communicate with the server:
    - Windows: `$env:OLLAMA_ORIGINS="https://ollama-x.web.app"`
    - macOS: `launchctl setenv OLLAMA_ORIGINS "https://ollama-x.web.app"`
    - Linux: `export OLLAMA_ORIGINS="https://ollama-x.web.app"`
  - Run the Ollama server: `ollama serve`
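
`OLLAMA_ORIGINS` matters because the hosted page calls the Ollama API cross-origin, and the browser enforces CORS on those requests. A minimal sketch of such a call, using Ollama's `/api/version` endpoint (the local URL assumes the default port):

```ts
// Minimal sketch: a cross-origin request from the hosted page to a
// local Ollama server. If OLLAMA_ORIGINS does not include this page's
// origin, the browser blocks the request with a CORS error.
async function checkCors(baseUrl = "http://localhost:11434"): Promise<void> {
  try {
    const res = await fetch(`${baseUrl}/api/version`);
    console.log("CORS OK, Ollama version:", (await res.json()).version);
  } catch {
    console.error("Request blocked. Is OLLAMA_ORIGINS set to this site's origin?");
  }
}
```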

Visit your local server (http://localhost:8080) or hosted site (e.g., https://ollama-x.web.app) in your browser and select a model to start the chat.
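
Under the hood, each chat turn is a request to Ollama's `/api/chat` endpoint. A minimal sketch of one non-streaming turn; the model name `llama3` is illustrative, so substitute one reported by `/api/tags`:

```ts
// Minimal sketch: send one chat turn to Ollama's /api/chat endpoint.
// The model name "llama3" is illustrative; use any locally installed model.
async function chat(prompt: string, model = "llama3"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false, // one complete JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.message.content;
}

chat("Hello!").then(console.log);
```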