Thank you for all your work on gpt4all. I was trying to integrate it with continue.dev when I ran into the following issue: the response is blanking out.
Can the server return a random unique ID for each chat response?
Can we have `chat.completion` for the `object` field instead of `text_completion`?
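On the unique-ID point, here is a minimal sketch of how a server could generate OpenAI-style identifiers. The `chatcmpl-` prefix matches what LM Studio and the OpenAI API return in the dumps below; the suffix length and alphabet here are arbitrary choices, not a spec requirement.

```python
import secrets
import string

def make_chat_completion_id() -> str:
    """Generate a random OpenAI-style chat completion ID.

    The 'chatcmpl-' prefix mirrors OpenAI's responses; the 24-character
    lowercase-alphanumeric suffix is an illustrative choice, not mandated
    by any spec.
    """
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(24))
    return f"chatcmpl-{suffix}"
```

Using `secrets` rather than `random` avoids predictable IDs, though for a local server that hardly matters; any per-response unique value would fix the hardcoded `'foobarbaz'`.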
Following is the raw chat.completion response from LM Studio (works with continue.dev) and from the GPT4All server (not working).
LM Studio

```
python3 test.py
ChatCompletion(id='chatcmpl-bi4v56tuiq9ugh9vgztib', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="Hello! How can I help you today? If you have any questions or tasks you'd like me to assist with, please feel free to ask.", refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1724486348, model='NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q4_0.gguf', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=31, prompt_tokens=38, total_tokens=69))
```
GPT4All

```
python3 test.py
ChatCompletion(id='foobarbaz', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', refusal=None, role='assistant', function_call=None, tool_calls=None), references=[])], created=1724486376, model='Llama 3.1 8B Instruct 128k', object='text_completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=9, prompt_tokens=12, total_tokens=21))
```
Any other insight is most welcome.
In that issue, the conclusion so far seems to be that it's due to the response not being streamed. Are you sure it's caused by not returning a unique ID?
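If streaming is the culprit, the relevant wire format is server-sent events, where every chunk carries `object='chat.completion.chunk'` (not `chat.completion` or `text_completion`) and the stream ends with a `data: [DONE]` sentinel. A simplified sketch of how a server might serialize such chunks (the helper name and example deltas are illustrative, not GPT4All's actual code; fields like `created` are omitted for brevity):

```python
import json

def sse_chunk(chunk_id: str, model: str, delta: dict, finish_reason=None) -> str:
    """Serialize one streaming chunk as a server-sent event.

    Streaming responses use object='chat.completion.chunk' for every chunk,
    with incremental content delivered via the 'delta' field.
    """
    payload = {
        "id": chunk_id,
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [
            {"index": 0, "delta": delta, "finish_reason": finish_reason}
        ],
    }
    return f"data: {json.dumps(payload)}\n\n"

# A stream is a sequence of such events, terminated by a [DONE] sentinel:
events = [
    sse_chunk("chatcmpl-abc123", "example-model", {"role": "assistant"}),
    sse_chunk("chatcmpl-abc123", "example-model", {"content": "Hello"}),
    sse_chunk("chatcmpl-abc123", "example-model", {}, finish_reason="stop"),
    "data: [DONE]\n\n",
]
```

A client expecting this event stream from a server that only sends a single non-streamed body could well render an empty response, which would match the blanking-out symptom.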
> Thank you for all your work on gpt4all. I was trying to integrate it with continue.dev when I ran into the following issue: the response is blanking out.
>
> continuedev/continue#2092
>
> I debugged with a few other local LLM providers such as LM Studio, and I think the issue is rooted in the following code:
>
> gpt4all/gpt4all-chat/server.cpp, line 406 in c9dda3d
>
> Can we have `chat.completion` for the `object` field instead of `text_completion`?
>
> Following is the raw chat.completion response from LM Studio (works with continue.dev) and from the GPT4All server (not working).
>
> LM Studio
>
> GPT4All
>
> Any other insight is most welcome.