Replies: 1 comment
-
Try looking for the error message online -- I don't think this is LangServe related.
-
Hi,
My scenario is FRONT_END(:5005/openaiapi) --> BACK_END(:8000/api_openai) --INVOKE--> OPENAI (API)
In my front end, I'm using FastAPI listening on /api_openai to forward the request to the OpenAI wrapper.
FRONT_END:
BACK_END:
The funny thing is that the request works when I run it from my Jupyter notebook,
BUT it doesn't work when I curl the front end.
Doing some troubleshooting, I found that the problem is in this part of the front-end code. The front end is a FastAPI implementation intended to forward requests to .invoke. It works when accessed directly (Jupyter), but fails when the endpoint is hit via curl:
openai_llm.invoke(prompt)
I'm using the same environment variables and nothing is different. Is there anything I should be aware of when forwarding requests? Does .invoke need any additional detail from the request?