Issue in chat server response #2908

Closed · raja-jamwal opened this issue Aug 24, 2024 · 2 comments

Labels: chat (gpt4all-chat issues), local-server (Related to the Chat UI's built-in API server)

Comments

@raja-jamwal

Thank you for all your work on gpt4all. I was trying to integrate it with continue.dev when I ran into the following issue: the response is blanking out.

continuedev/continue#2092

I debugged against a few other local LLM providers, such as LM Studio, and I think the issue is rooted in the following code:

responseObject.insert("id", "foobarbaz");

  • Can we return a random unique ID for the chat response?
  • Can we return chat.completion for the object field instead of text_completion? (See the sketch after this list.)
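
To illustrate both points, here is a rough sketch in Python of the shape those two fields could take, mirroring the LM Studio payload shown further below. The actual server code is C++/Qt, so this is only an illustration of the desired response shape, not a patch:

import time
import uuid

response_object = {
    "id": f"chatcmpl-{uuid.uuid4().hex}",  # a random unique ID instead of the fixed "foobarbaz"
    "object": "chat.completion",           # instead of "text_completion"
    "created": int(time.time()),
    # ... model, choices, usage, etc. as before
}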

Below are the raw chat completion responses from LM Studio (works with continue.dev) and the GPT4All server (not working).

LM Studio

python3 test.py
ChatCompletion(id='chatcmpl-bi4v56tuiq9ugh9vgztib', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="Hello! How can I help you today? If you have any questions or tasks you'd like me to assist with, please feel free to ask.", refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1724486348, model='NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q4_0.gguf', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=31, prompt_tokens=38, total_tokens=69))

GPT4All

python3 test.py
ChatCompletion(id='foobarbaz', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None), references=[])], created=1724486376, model='Llama 3.1 8B Instruct 128k', object='text_completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=9, prompt_tokens=12, total_tokens=21))
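
For reference, a minimal sketch of the kind of test.py call that produces output like the above, assuming the OpenAI Python client and the GPT4All API server's default base URL (http://localhost:4891/v1); the model name is a placeholder for whatever the server has loaded:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:4891/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Llama 3.1 8B Instruct 128k",  # placeholder: whichever model the server has loaded
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)  # prints a ChatCompletion object like the ones shown above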

Any other insight is most welcome.

@cosmic-snow
Collaborator

There is already an open issue regarding continue.dev:

In that thread, the conclusion so far seems to be that it's due to not being able to stream the response. Are you sure it's caused by not returning a unique ID?
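
One quick way to check the streaming angle directly is to request a streamed completion from the local server; a minimal sketch, again assuming the OpenAI Python client, the default base URL (http://localhost:4891/v1), and a placeholder model name:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:4891/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="Llama 3.1 8B Instruct 128k",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    # each chunk is a ChatCompletionChunk; text arrives in choices[0].delta.content
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")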

@cosmic-snow added the chat (gpt4all-chat issues) and local-server (Related to the Chat UI's built-in API server) labels on Aug 24, 2024
@raja-jamwal
Author

Thank you for pointing out the discussion; I'll add more findings in the original issue.

@raja-jamwal closed this as not planned on Aug 25, 2024