I do not see anyone else with this error, but I have been experiencing it for the past week or so. I do not know if it has something to do with embeddings, because I do not really understand what an embedding is beyond it being something like a basic language model.
I do not have anything connected to OpenAI, unless I am configuring the embedding incorrectly. When I use OpenAI as my LLM there are no issues, but if I try to use any other model I get this error. Could someone help me understand what I am doing wrong?
Agent 0: Generating
Traceback (most recent call last):
File "openai_streaming.py", line 147, in aiter
File "openai_streaming.py", line 174, in stream
openai.APIError: Requested generation length 1024 is not possible! The provided prompt is 4142 tokens long, so generating 1024 tokens requires a sequence length of 5166, but the maximum supported sequence length is just 4096!
Here are the relevant parts of my initialize.py:
# main chat model used by agents (smarter, more accurate)
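For anyone hitting the same error: the traceback is doing simple arithmetic. The prompt is 4142 tokens and the requested generation length is 1024, so the run would need 4142 + 1024 = 5166 tokens of context, but the model only supports 4096. One common workaround is to clamp the requested generation length so it always fits the remaining context. This is only a minimal sketch; the helper name and numbers-from-the-traceback usage are illustrative, not part of any specific framework's API.

```python
# Minimal sketch: cap the generation length so that
# prompt_tokens + max_tokens never exceeds the model's context window.
# The function name is hypothetical; the example values come from the traceback above.

def clamp_max_tokens(prompt_tokens: int, requested: int, context_window: int) -> int:
    """Return the largest generation length that still fits in the context window."""
    available = context_window - prompt_tokens
    return max(0, min(requested, available))

# Values from the error: a 4142-token prompt against a 4096-token window
# leaves no room at all for the requested 1024 tokens.
print(clamp_max_tokens(4142, 1024, 4096))  # -> 0 (prompt alone already overflows)

# A shorter prompt would leave room for the full request:
print(clamp_max_tokens(3000, 1024, 4096))  # -> 1024
```

When the clamp returns 0 (as it does here), no max_tokens setting can help; the prompt itself has to shrink, or the model needs a larger context window. That may explain why switching models triggers the error: different models advertise different maximum sequence lengths.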