🤖 Rust client library for interacting with the Ollama API, enabling operations like chat completions, model pulls, embeddings generation, model listing, and model pushes.
- **Chat Completion:** `chat_completion(model, content, role)`
- **Model Pull:** `pull_model(name, stream_mode)`
- **Generate Embeddings:** `gen_embeddings(model, prompt)`
- **List Models:** `list_models()`
- **Push Models:** `push_models(name, stream_mode)`
```rust
use kazama::{chat_completion, pull_model, gen_embeddings, list_models, push_models};

#[tokio::main]
async fn main() {
    // Example: Chat Completion
    chat_completion("model_name", "Hello!", "user")
        .await
        .expect("Failed to complete chat");

    // Example: Model Pull
    pull_model("model_name", false)
        .await
        .expect("Failed to pull model");

    // Example: Generate Embeddings
    gen_embeddings("model_name", "Generate embeddings from this prompt")
        .await
        .expect("Failed to generate embeddings");

    // Example: List Models
    list_models().await.expect("Failed to list models");

    // Example: Push Models
    push_models("model_name", true)
        .await
        .expect("Failed to push model");
}
```
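The example above aborts on the first failure via `.expect`. Since each call is awaited and `.expect` is used on the result, these functions presumably return `Result` values, so errors can instead be propagated with `?`. The sketch below assumes the crate's error types convert into `Box<dyn std::error::Error>` (an assumption about `kazama`'s error types, not confirmed by its documentation):

```rust
use kazama::{chat_completion, list_models};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Propagate errors to the caller with `?` instead of panicking.
    // Assumption: kazama's error types implement std::error::Error,
    // so they coerce into Box<dyn std::error::Error>.
    chat_completion("model_name", "Hello!", "user").await?;
    list_models().await?;
    Ok(())
}
```

Returning a `Result` from `main` lets a library consumer surface connection or model-not-found errors instead of crashing the process.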
For detailed API documentation, refer here.

Icon acknowledgement: https://www.artstation.com/artwork/n0q6Ye