'dict' object has no attribute 'locals' #384
You probably don't want to hear this, but since you've isolated it to specific code within llama_index or qdrant, that's probably where your issue lies, not within anything in this pipelines repo. The code in this repo is very straightforward and makes no mention of 'locals'. Without specific research (you didn't include enough code), a few issues in qdrant's tracker and SO postings may point there, e.g. langchain-ai/langchain#16962. There's also a small chance that something doesn't behave well with async responses, but that's just a guess.
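For context, the error in the issue title is the standard `AttributeError` Python raises when attribute access is attempted on a plain `dict`. A minimal sketch of how it can surface (the `read_locals` accessor is hypothetical, standing in for library code that expects an object such as a Pydantic model but receives a dict):

```python
def read_locals(obj):
    # Hypothetical library code path: works for objects with a
    # `.locals` attribute, fails for a plain dict.
    return obj.locals

try:
    read_locals({"locals": {}})
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute 'locals'
```

Errors like this often indicate a version mismatch, where one library passes a dict into code that expects a model instance.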
Hello! I'm facing the same issue, however I'm not using qdrant...
Hello! I resolved the error on my end. :) It seems the issue was caused by the requirements being downloaded twice:

```
title: Custom Llama Index Pipeline
author: open-webui
date: 2024-05-30
version: 1.0
license: MIT
description: A pipeline for retrieving relevant information from a knowledge base using the Llama Index library.
requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface, llama-index-readers-github, llama-index-vector-stores-postgres
```

This duplication led to dependency errors.
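When requirements are installed twice, the second pass can silently up- or downgrade transitive dependencies. A small helper (a sketch, not part of the pipelines repo) to check which versions actually ended up installed in the container:

```python
from importlib import metadata

def installed_version(name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

# Package names here are examples relevant to this thread, not a fixed list.
for pkg in ("pydantic", "llama-index", "qdrant-client"):
    print(pkg, "->", installed_version(pkg))
```

Comparing this output between a working local environment and the container is a quick way to spot the conflicting package.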
Thank you @paulinergt, but in my case I can't remove the [...]. But I'm back with an update from my end. I wasn't able to run the pipeline using the Helm chart, but I managed to run it locally from source and locally using Docker. When using Docker, I'm using exactly the same dependencies and the image [...]
And when I restart the container, the pipeline gets fetched again and magically starts working without any issues. I tried to replicate this behavior on K8s, but without success. I still get:
So, to sum up:
Any ideas? @ezavesky I'm attaching the code of the pipeline (Valves values are empty on purpose; I replace them with real data)
Update from my side. I've made a custom image of Open WebUI Pipelines with a more detailed stack trace and deployed it to my k8s cluster, and there is a stack trace:
So it looks like a problem with compatibility between Pydantic and llama_index, but for now that's all I've got. I still have no clue why this works locally but not on k8s; I am not changing any versions of dependencies.
Another update. I managed to make it work, but with a catch. I removed the requirements from the pipeline script header and built a custom Open WebUI Pipelines image with requirements.txt content set to what I have on my computer after [...]. I think the key difference is the Python environment. I don't know what's inside the distro at https://github.com/open-webui/pipelines/blob/main/Dockerfile#L1, but it's probably different from the Python 3.11 I have installed on my computer.
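The workaround described above (skipping the header `requirements:` line and baking pinned dependencies into a custom image) might be sketched as a Dockerfile. The base image tag and file names here are assumptions for illustration, not taken from the repo:

```dockerfile
# Assumed base image name; substitute the actual Open WebUI Pipelines image.
FROM ghcr.io/open-webui/pipelines:main

# requirements.lock.txt is a hypothetical file of pinned versions exported
# from the known-good local environment (e.g. with `pip freeze`).
COPY requirements.lock.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
```

Pinning exact versions this way removes the install-time resolver variability that the duplicated-requirements issue introduces.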
I think the issue is related to conflicting versions of pydantic.
Pipeline starts but my module is not loaded. After the dependencies are downloaded, there is the following error, with no stack trace or explanation:
For requirements I use:
I am using Helm charts for Pipelines (chart version 0.0.5) and the UI (chart version 4.0.6) deployment.
Worth adding that I managed to run it successfully as a standalone script on my local machine using the same versions of dependencies. Also, I noticed that the error shows up when I add this code to the pipeline:
Whole output log below:
scratch_142.txt