Describe the bug
Every time I make a request from JS, Postman, or even the website embed, the request takes a considerable amount of time to return an answer (if it returns one at all) and consumes roughly 1,000x the tokens in the OpenAI embedding model. While trying to track down the error, I found that a chatflow is added to the ChatflowPool on every request.
To Reproduce
Steps to reproduce the behavior:
1. Create a flow that embeds documents with an OpenAI embedding model
2. Use curl or Postman to call the Flowise API (see the sketch after this list)
3. See the chatflow added to the pool multiple times in the console, and the corresponding charges in OpenAI billing
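For reference, a minimal call to the prediction endpoint looks like this (a sketch based on the documented Flowise API shape; the chatflow ID, host, and port are placeholders for your instance):

```typescript
// Sketch: calling the Flowise prediction API from JS/TS.
// <chatflow-id> and http://localhost:3000 are placeholders for your instance.
const response = await fetch('http://localhost:3000/api/v1/prediction/<chatflow-id>', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question: 'What is in my documents?' })
})
console.log(await response.json())
```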
Expected behavior
The API should not add the chatflow to the ChatflowPool every time a request comes in from the API or the embed (I think the problem comes from there).
The number of tokens used per request should be similar to what the chat integrated in Flowise uses, and responses should be quicker.
Screenshots
Setup
Installation: npx flowise start
Flowise Version: 2.1.2
OS: Windows
Browser: Firefox
Ah yes, when you use the In Memory Store, the embeddings are redone every time a question is asked. I recommend using other vector stores. We should add a description mentioning that.
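To illustrate why: Flowise's In Memory Store sits on top of LangChain's MemoryVectorStore (an assumption about the internals worth verifying against your version), which only keeps vectors in process memory. A minimal sketch of the costly pattern, assuming the store is rebuilt for each incoming request:

```typescript
import { MemoryVectorStore } from 'langchain/vectorstores/memory'
import { OpenAIEmbeddings } from '@langchain/openai'
import { Document } from 'langchain/document'

// Illustrative document; in Flowise this would come from your document loaders.
const docs = [new Document({ pageContent: 'Some content to search over.' })]

async function handleRequest(question: string) {
    // fromDocuments() calls the embeddings API once per document, so if the
    // in-memory store is rebuilt on every request, you pay embedding tokens
    // on every call instead of once at upsert time.
    const store = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings())
    return store.similaritySearch(question, 4)
}
```

A persistent vector store (Pinecone, Chroma, etc.) embeds the documents once at upsert time, so each request only pays to embed the incoming question.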