A vector similarity search engine for humans🥳
$ pip install vsslite
VSSLite provides a user-friendly interface for LangChain and sqlite-vss.
$ export OPENAI_API_KEY="YOUR_API_KEY"
$ python -m vsslite
Or, start it programmatically:
import uvicorn
from vsslite import LangChainVSSLiteServer

# Build the FastAPI app and serve it locally
app = LangChainVSSLiteServer(YOUR_API_KEY).app
uvicorn.run(app, host="127.0.0.1", port=8000)
Open http://127.0.0.1:8000/docs to see the API details and try it out.
from vsslite import LangChainVSSLiteClient
# Initialize
vss = LangChainVSSLiteClient()
# Add data with embeddings
vss.add("The difference between eel and conger eel is that eel is more expensive.")
vss.add("Red pandas are smaller than pandas, but when it comes to cuteness, there is no \"lesser\" about them.")
vss.add("There is no difference between \"Ohagi\" and \"Botamochi\" themselves; they are used interchangeably depending on the season.")
# Search
print(vss.search("fish", count=1))
print(vss.search("animal", count=1))
print(vss.search("food", count=1))
You will get search results like these:
$ python run.py
[{'page_content': 'The difference between eel and conger eel is that eel is more expensive.', 'metadata': {'source': 'inline'}}]
[{'page_content': 'Red pandas are smaller than pandas, but when it comes to cuteness, there is no "lesser" about them.', 'metadata': {'source': 'inline'}}]
[{'page_content': 'There is no difference between "Ohagi" and "Botamochi" themselves; they are used interchangeably depending on the season.', 'metadata': {'source': 'inline'}}]
VSSLite also helps with CRUD operations.
# Add
id = vss.add("The difference between eel and conger eel is that eel is more expensive.")[0]
# Get
vss.get(id)
# Update
vss.update(id, "The difference between eel and conger eel is that eel is more expensive. Una-jiro is cheaper than both of them.")
# Delete
vss.delete(id)
# Delete all
vss.delete_all()
Upload data in bulk. Text, PDF, CSV, and JSON files are accepted for now.
vss.upload("path/to/data.json")
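For example, upload a document and query it right away (a minimal sketch; the file path is hypothetical):

vss.upload("path/to/product_manual.pdf")  # hypothetical path
print(vss.search("how to reset the device", count=3))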
Use the async methods when embedding VSSLite in server apps.
await vss.aadd("~~~")
await vss.aupdate(id, "~~~")
await vss.aget(id)
await vss.adelete(id)
await vss.adelete_all()
await vss.asearch("~~~")
await vss.aupload("~~~")
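A minimal sketch of the async client in use (assuming the async methods take the same arguments and return the same values as their sync counterparts):

import asyncio

from vsslite import LangChainVSSLiteClient

async def main():
    vss = LangChainVSSLiteClient()
    # aadd is assumed to return the same list of ids as add
    doc_id = (await vss.aadd("Eel is more expensive than conger eel."))[0]
    # asearch is assumed to take the same count argument as search
    print(await vss.asearch("fish", count=1))
    await vss.adelete(doc_id)

asyncio.run(main())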
VSSLite supports namespaces for dividing the set of documents to search or update.
vss = LangChainVSSLiteClient()
# Search product documents
vss.search("What is the difference between super size and ultra size?", namespace="product")
# Search company documents
vss.search("Who is the CTO of Unagiken?", namespace="company")
You can quickly launch a Q&A web service based on documents 🚅
$ pip install streamlit
$ pip install streamlit-chat
This is an example for the OpenAI terms of use (upload the terms of use to the VSS server with the namespace openai).
Save this script as runui.py.
import asyncio
from vsslite.chat import (
    ChatUI,
    VSSQAFunction
)

# Setup QA function
openai_qa_func = VSSQAFunction(
    name="get_openai_terms_of_use",
    description="Get information about terms of use of OpenAI services including ChatGPT.",
    parameters={"type": "object", "properties": {}},
    namespace="openai",
    # answer_lang="Japanese",  # <- Uncomment if you want the answer in Japanese
    # is_always_on=True,  # <- Uncomment if you want to always fire this function
    verbose=True
)

# Start app
chatui = ChatUI(temperature=0.5, functions=[openai_qa_func])
asyncio.run(chatui.start())
$ streamlit run runui.py
See https://docs.streamlit.io to learn more about Streamlit.
You can quickly launch a LINE Bot based on documents 🛫
$ pip install aiohttp line-bot-sdk
This is an example for the OpenAI terms of use (upload the terms of use to the VSS server with the namespace openai).
Save this script as line.py.
import os

from vsslite.chatgpt_processor import VSSQAFunction
from vsslite.line import LineBotServer

# Setup QA function(s)
openai_qa_func = VSSQAFunction(
    name="get_openai_terms_of_use",
    description="Get information about terms of use of OpenAI services including ChatGPT.",
    parameters={"type": "object", "properties": {}},
    vss_url=os.getenv("VSS_URL") or "http://127.0.0.1:8000",
    namespace="openai",
    # answer_lang="Japanese",  # <- Uncomment if you want the answer in Japanese
    # is_always_on=True,  # <- Uncomment if you want to always fire this function
    verbose=True
)

app = LineBotServer(
    channel_access_token=YOUR_CHANNEL_ACCESS_TOKEN,
    channel_secret=YOUR_CHANNEL_SECRET,
    endpoint_path="/linebot",  # <- Set "https://your_domain/linebot" as the webhook URL at LINE Developers
    functions=[openai_qa_func]
).app
$ uvicorn line:app --host 0.0.0.0 --port 8002
Set `https://your_domain/linebot` as the webhook URL at LINE Developers.
If you want to start the VSSLite API together with the chat console, use the docker-compose.yml in examples.
Set your OpenAI API Key in vsslite.env and execute the command below:
$ docker-compose -p vsslite --env-file vsslite.env up -d --build
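A minimal vsslite.env only needs the API key (assuming the OPENAI_API_KEY variable name used by the Docker commands below):

OPENAI_API_KEY=YOUR_API_KEY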
Or, use Dockerfile to start each service separately.
$ docker build -t vsslite-api -f Dockerfile.api .
$ docker run --name vsslite-api --mount type=bind,source="$(pwd)"/vectorstore,target=/app/vectorstore -d -p 8000:8000 -e OPENAI_API_KEY=$OPENAI_API_KEY vsslite-api:latest
$ docker build -t vsslite-chat -f Dockerfile.chat .
$ docker run --name vsslite-chat -d -p 8001:8000 -e OPENAI_API_KEY=$OPENAI_API_KEY vsslite-chat:latest
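With the port mappings above, the API docs should be available at http://127.0.0.1:8000/docs and the chat console at http://127.0.0.1:8001.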
VSSLite supports Azure OpenAI Service👍
Use OpenAIEmbeddings configured for Azure.
import os

from langchain.embeddings import OpenAIEmbeddings

azure_embeddings = OpenAIEmbeddings(
    openai_api_type="azure",
    openai_api_base="https://your-endpoint.openai.azure.com/",
    openai_api_version="2023-08-01-preview",
    deployment="your-embeddings-deployment-name"
)

app = LangChainVSSLiteServer(
    apikey=YOUR_API_KEY or os.getenv("OPENAI_API_KEY"),
    persist_directory="./vectorstore",
    chunk_size=500,
    chunk_overlap=0,
    embedding_function=azure_embeddings
).app
Create ChatUI with Azure OpenAI Service configurations.
chatui = ChatUI(
    apikey=YOUR_API_KEY or os.getenv("OPENAI_API_KEY"),
    temperature=0.5,
    functions=[openai_qa_func],
    # Config for Azure OpenAI Service
    api_type="azure",
    api_base="https://your-endpoint.openai.azure.com/",
    api_version="2023-08-01-preview",
    engine="your-chat-deployment-name"
)
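Then start it the same way as the Streamlit example above:

asyncio.run(chatui.start())

$ streamlit run runui.py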
See also the examples.
For earlier documentation, see the v0.3.0 README.