
Add ability to select models and edit system/assistant prompts #1051

Draft
wants to merge 1 commit into base: main

Conversation

vmpuri
Contributor

@vmpuri vmpuri commented Aug 22, 2024

**Goal:** Users should be able to select a model from the chat interface and receive responses from that model.

Currently, we just send the request and take the response from whichever model the "server" process has loaded. To use a different model, the user has to stop the server and restart it with different CLI args.

Solution: This is a bit tricky. OpenAI can route each request to a system that already has the requested model loaded. If we tried to replicate that, the user's system would quickly run out of memory holding even two low-parameter models at once. To get around this, we'll assume the system only ever has a single model loaded.
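The single-loaded-model assumption could be sketched as a tiny state holder like the one below. The names (`ModelHost`, the `loader` callback) are illustrative, not the actual torchchat server code:

```python
class ModelHost:
    """Holds at most one loaded model at a time (illustrative sketch)."""

    def __init__(self):
        self.loaded_id = None
        self.model = None

    def swap(self, model_id, loader):
        # Drop the current model first so two models are never
        # resident in memory at the same time.
        self.model = None
        self.loaded_id = None
        self.model = loader(model_id)  # may take multiple seconds
        self.loaded_id = model_id
        return self.model


host = ModelHost()
host.swap("stories15M", lambda mid: f"<weights for {mid}>")
print(host.loaded_id)  # stories15M
```

The key design point is that `swap` releases the old model before loading the new one, so peak memory stays at roughly one model.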

Flow:

  • The user selects from a dropdown of model IDs (retrieved from the /models endpoint).
  • After selecting a different model, the user must press a confirmation button to initiate the model unload/load. Without this, someone clicking around the UI could accidentally trigger a multi-second, resource-intensive operation.
  • Pressing the button submits a query to the server to swap out its loaded model.
  • First, we validate that the model is still available on the server. It should be, since /models is re-queried every time a dropdown selection is made; if not, the server returns an appropriate error response.
  • While the swap query is in flight and no model is loaded, input elements are disabled until a response is received.
  • The default prompts (i.e., assistant prompt and system prompt) are enabled or disabled based on whether the model supports chat.
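The validation and prompt-gating steps above might look roughly like this on the server side. This is a hedged sketch: `KNOWN_MODELS`, `handle_swap_request`, and the `supports_chat` flag are assumptions for illustration, not the torchchat API:

```python
# Hypothetical registry backing the /models endpoint.
KNOWN_MODELS = {
    "stories15M": {"supports_chat": False},
    "llama3": {"supports_chat": True},
}


def handle_swap_request(model_id):
    """Validate against the same list /models serves, then report the swap."""
    if model_id not in KNOWN_MODELS:
        # The model disappeared between the dropdown query and the swap:
        # return an error the UI can surface instead of failing silently.
        return {"status": 400, "error": f"unknown model: {model_id}"}
    info = KNOWN_MODELS[model_id]
    return {
        "status": 200,
        "model": model_id,
        # The UI would use this to enable or disable the
        # system/assistant prompt fields.
        "prompts_enabled": info["supports_chat"],
    }


print(handle_swap_request("llama3")["prompts_enabled"])   # True
print(handle_swap_request("missing")["status"])           # 400
```

While the client awaits this response, it would keep its input elements disabled, re-enabling them (and the prompt fields, per `prompts_enabled`) once the swap succeeds.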

**Warning:** This PR is still in draft and does not yet include server-side changes.


pytorch-bot bot commented Aug 22, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1051

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 457ec2d with merge base d5bb3c6 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Aug 22, 2024
@facebook-github-bot

Hi @vmpuri!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
