Use of Function/Tool with OpenAI #173
I haven't used the new API for extraction yet, so I don't have a sense of quality. Previously, gpt-3.5-turbo seemed significantly worse than text-davinci-003. If anyone is willing to run some experiments, the thing to do is: set up a zero- and few-shot scenario with the new API (with AIMessages that contain the function invocation request payload specified), and see how it performs against text-davinci-003. Compare with
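The few-shot setup described above can be sketched with plain dicts in the OpenAI chat message format, where the worked example's assistant turn carries a `function_call` payload. The function name and schema here are illustrative, not from this thread:

```python
import json

# Hypothetical extraction function schema (names are illustrative).
EXTRACT_FN = {
    "name": "extract_person",
    "description": "Extract people mentioned in the text.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name"],
    },
}

def few_shot_messages(text: str) -> list:
    """Build a few-shot prompt in which the assistant turn contains a
    function_call payload, mirroring the OpenAI chat message format."""
    return [
        {"role": "system", "content": "Extract structured data by calling the function."},
        # One worked example: user text plus the assistant's function invocation.
        {"role": "user", "content": "Alice is 30 years old."},
        {
            "role": "assistant",
            "content": None,
            "function_call": {
                "name": "extract_person",
                "arguments": json.dumps({"name": "Alice", "age": 30}),
            },
        },
        # The actual input to extract from.
        {"role": "user", "content": text},
    ]

messages = few_shot_messages("Bob just turned 42.")
```

These messages, together with `functions=[EXTRACT_FN]`, would be passed to the chat completions endpoint; the zero-shot variant simply drops the worked example turns.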
Folks should try this. If you try this on your data and see any differences in performance, please let me know!
I have been trying out 3 different ways of data extraction.
I also found that if you need to do "semantic extraction" (basically working out what the user actually means), Kor has better results for me at the moment. I think some way of combining Kor with functions to force structured results (and prevent hallucinations) would give optimal results.
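One lightweight way to get the hallucination-prevention half of this combination: run the semantic pass first, then validate the returned `function_call` arguments against the declared JSON schema, dropping any field the schema does not mention. A minimal sketch, with an illustrative schema:

```python
import json

def validate_arguments(arguments_json: str, parameters: dict) -> dict:
    """Parse a function_call 'arguments' string, drop any keys that the
    declared JSON-schema 'properties' does not list (hallucinated fields),
    and raise if a required key is missing."""
    args = json.loads(arguments_json)
    allowed = set(parameters.get("properties", {}))
    cleaned = {k: v for k, v in args.items() if k in allowed}
    for key in parameters.get("required", []):
        if key not in cleaned:
            raise ValueError(f"missing required field: {key}")
    return cleaned

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name"],
}

# Suppose the model returned an extra, hallucinated "mood" field.
raw = json.dumps({"name": "Alice", "age": 30, "mood": "happy"})
clean = validate_arguments(raw, schema)  # "mood" is dropped
```

This keeps the structured output constrained to the schema even when the semantic pass is free-form.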
2023-06-13: OpenAI's announcement of the API changes allows passing a function parameter, which supposedly improves the LLM's interpretation of the task. (LangChain already implemented the necessary changes in 0.199/0.200.) Do you see how this could be used to improve Kor data extraction?