
[Usage]: Can the embedding model be deployed using the openai interface? #7506

Closed
LIUKAI0815 opened this issue Aug 14, 2024 · 4 comments
Labels: stale, usage (How to use vllm)

LIUKAI0815 commented Aug 14, 2024

Your current environment

How would you like to use vllm

I want to run inference with bge-large-zh-v1.5, but I don't know how to integrate it with vLLM.

LIUKAI0815 added the usage (How to use vllm) label Aug 14, 2024
DarkLight1337 (Member) commented

If the model uses an architecture that isn't supported in vLLM yet, then you have to implement the model first.
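
For context, vLLM exposes a registry hook for out-of-tree models. The sketch below only shows that registration step; the class name and module are placeholders, and implementing the model itself (layers, weight loading, pooling) is the actual work being referred to here:

```python
# Sketch only: registering an out-of-tree model implementation with vLLM.
# "MyBertEmbeddingModel" and "my_models" are placeholders; writing that
# class is the real effort, this just makes vLLM aware of it.
from vllm import ModelRegistry

from my_models import MyBertEmbeddingModel  # hypothetical module

# Map the architecture name (as it appears in the model's config.json)
# to the implementation class.
ModelRegistry.register_model("BertModel", MyBertEmbeddingModel)
```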

DarkLight1337 (Member) commented

vLLM's OpenAI-compatible server does support embedding models (provided that the model is implemented in vLLM)
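
For reference, a minimal sketch of querying the server's `/v1/embeddings` endpoint with the OpenAI Python client, assuming the model's architecture is supported and the server has already been started (the `vllm serve BAAI/bge-large-zh-v1.5 --task embedding` launch command, flag, and port below are assumptions and may differ across vLLM versions):

```python
# Query vLLM's OpenAI-compatible embeddings endpoint.
# Assumes a server is already running locally, e.g. (flags vary by version):
#   vllm serve BAAI/bge-large-zh-v1.5 --task embedding
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # address of the vLLM server (assumed default port)
    api_key="EMPTY",                      # vLLM does not check the key by default
)

response = client.embeddings.create(
    model="BAAI/bge-large-zh-v1.5",
    input=["vLLM is a fast inference engine.", "Embeddings map text to vectors."],
)

for item in response.data:
    print(len(item.embedding))  # dimensionality of each returned embedding vector
```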

github-actions bot commented

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions bot added the stale label Nov 14, 2024
DarkLight1337 (Member) commented

Closing as completed by #9056
