How to expose LLMs over a REST endpoint #565
Unanswered
rajnikhil-netapp asked this question in Q&A
Replies: 1 comment
-
Simplest: https://github.com/langchain-ai/langserve/blob/main/examples/llm/server.py
If you need to support configuration, you can use configurable fields (see the sketch below).
If you need to pick up user information from the request itself, `add_routes` also accepts a per-request config modifier.
Use the examples for reference: https://github.com/langchain-ai/langserve/tree/main?tab=readme-ov-file#examples
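A minimal sketch of the simplest setup plus configurable fields, assuming an OpenAI chat model (`langchain_openai.ChatOpenAI`) and the `configurable_fields` API from `langchain_core`; model names and paths here are illustrative, not from the linked example:

```python
from fastapi import FastAPI
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI(title="LLM Server")

# Simplest case: expose a chat model directly as a REST endpoint at /openai.
add_routes(app, ChatOpenAI(), path="/openai")

# Configurable fields: let callers override the temperature per request
# via the standard "configurable" section of the request config.
configurable_llm = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="Sampling temperature for the model",
    )
)
add_routes(app, configurable_llm, path="/configurable")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```

Each path then serves LangServe's standard `/invoke`, `/batch`, and `/stream` routes. For reading user information off the incoming request, `add_routes` takes a `per_req_config_modifier` callback that receives the FastAPI request and can inject values into the runnable's config; the configurable examples in the repo show that pattern.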
-
This module contains the functions that create the language model (LLM) instances.
The LLMs are used to interact with the language models to generate responses, embeddings, etc.
The module contains the following functions:
* get_langchain_llm - returns a language model instance for the chat model.
* get_embedding_llm - returns a language model instance for the embeddings model.
How can I expose these two LLMs over a REST endpoint using LangServe?
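A minimal sketch of one way to wire this up, assuming `get_langchain_llm()` returns a chat model (already a Runnable) and `get_embedding_llm()` returns a LangChain Embeddings instance; since embeddings models are not Runnables themselves, the embedding call is wrapped in a `RunnableLambda`. The module name `my_llms` is a placeholder for wherever these helpers actually live:

```python
from typing import List

from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langserve import add_routes

# Placeholder import: replace with the module that defines these helpers.
from my_llms import get_langchain_llm, get_embedding_llm

app = FastAPI(title="LLM Server")

# The chat model is a Runnable, so it can be exposed directly.
add_routes(app, get_langchain_llm(), path="/chat")

# Embeddings models are not Runnables; wrap the embed call so LangServe can serve it.
embeddings = get_embedding_llm()


def embed(texts: List[str]) -> List[List[float]]:
    return embeddings.embed_documents(texts)


add_routes(app, RunnableLambda(embed), path="/embeddings")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```

With this, the chat model answers at `/chat/invoke` (and `/chat/stream`, `/chat/batch`), and the embeddings wrapper at `/embeddings/invoke`, following LangServe's standard route layout.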