
'dict' object has no attribute 'locals' #384

filowsky opened this issue Jan 3, 2025 · 7 comments
filowsky commented Jan 3, 2025

The pipeline starts, but my module is not loaded. After the dependencies are downloaded, the following error appears, with no stack trace or explanation:

Error loading module: test-pipeline
**'dict' object has no attribute 'locals'**
WARNING:root:No Pipeline class found in test-pipeline
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9099 (Press CTRL+C to quit)

For requirements I use:

requests~=2.32.3
pydantic>=2.8.0 
llama-index==0.12.5
llama-index-llms-azure-openai==0.3.0 
llama-index-embeddings-azure-openai==0.3.0 
llama-index-vector-stores-qdrant==0.4.2

I am deploying with the Helm chart for pipelines (chart version 0.0.5) and the UI (chart version 4.0.6).

It's worth adding that I managed to run it successfully as a standalone script on my local machine, using the same versions of the dependencies. I also noticed that the error shows up when I add this code to the pipeline:

from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

...

self.index = VectorStoreIndex.from_documents(
    documents=self.documents,
    show_progress=True,
    storage_context=StorageContext.from_defaults(
        vector_store=QdrantVectorStore(
            client=QdrantClient(url=QDRANT_URL),
            collection_name="collection_name"
        )
    ),
)

Full output log attached below:
scratch_142.txt


ezavesky commented Jan 5, 2025

You probably don't want to hear this, but since you've isolated it to specific code within llama_index or qdrant, that's probably where the issue lies, not in anything in this pipelines repo. The code in this repo is very straightforward, and there is no mention of locals in it.

Without having done any specific research (you didn't include enough code), a few issues in qdrant's tracker and Stack Overflow postings may point there: langchain-ai/langchain#16962. There's also a small chance that something misbehaves with async responses, but that's just a guess, since locals implies some localized variable scope.

@paulinergt

Hello! I'm facing the same issue, though I'm not using qdrant.
If anyone has an update on this, it would be much appreciated. :)
Thank you!

@paulinergt

Hello! I resolved the error on my end. :)

It seems the issue was caused by the requirements being installed twice:

  1. From the requirements.txt file.
  2. Directly from the pipeline script header via the install_frontmatter_requirements function:
title: Custom Llama Index Pipeline
author: open-webui
date: 2024-05-30
version: 1.0
license: MIT
description: A pipeline for retrieving relevant information from a knowledge base using the Llama Index library.
requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface, llama-index-readers-github, llama-index-vector-stores-postgres

This duplication led to dependency errors.
I removed the requirements section from the pipeline header, which fixed the issue.
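
If it helps anyone debugging the same symptom: here is a rough sketch of what a frontmatter parser like install_frontmatter_requirements might extract from such a header (the function body below is illustrative, not the actual repo code). Installing this list on top of requirements.txt is what can produce two different dependency resolutions:

```python
import re

def parse_frontmatter_requirements(script_text: str) -> list[str]:
    """Illustrative sketch: pull the comma-separated 'requirements:' line
    out of a pipeline script header and split it into package specs."""
    match = re.search(r"^requirements:\s*(.+)$", script_text, re.MULTILINE)
    if not match:
        return []
    return [pkg.strip() for pkg in match.group(1).split(",") if pkg.strip()]

header = """\
title: Custom Llama Index Pipeline
requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface
"""

print(parse_frontmatter_requirements(header))
# ['llama-index-retrievers-bm25', 'llama-index-embeddings-huggingface']
# pip-installing this list *and* requirements.txt means every package here
# is resolved twice, possibly under different version constraints
```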


filowsky commented Jan 15, 2025

Thank you @paulinergt, but in my case I can't remove the requirements section from the header because, as far as I know, there is no way to pass a requirements.txt via the Helm chart. I also tried various combinations of dependency installation locally and couldn't reproduce the error; everything worked fine when running pipelines from source (main branch).

But I'm back with an update from my end.

I wasn't able to run the pipeline using the Helm chart, but I managed to do it locally from source code and locally using Docker.

When using Docker, I'm using exactly the same dependencies and the image ghcr.io/open-webui/pipelines:main, which is used by the chart in version 0.0.5. Interestingly, when running the pipeline in Docker, I get the following message:

WARNING:root:No Pipeline class found in test-pipeline

And when I restart the container, the pipeline gets fetched again and magically starts working without any issues. I tried to replicate this behavior on K8S, but without success. I still get:

Error loading module: test-pipeline
'dict' object has no attribute 'locals'
WARNING:root:No Pipeline class found in test-pipeline

So, to sum up:

  • running pipelines locally from source using pip install -r requirements.txt && ./start.sh, as mentioned in the README, works 100% fine
  • running pipelines locally with Docker works after the container is restarted (magic)
  • running pipelines on K8S using the Helm chart does not work

Any ideas? @ezavesky I'm attaching the pipeline code (the Valves values are empty on purpose; I replace them with real data).

scratch_143.txt


filowsky commented Jan 17, 2025

Update from my side. I've built a custom image of Open WebUI Pipelines with more detailed error logging, deployed it to my k8s cluster, and now there is a stack trace:

ERROR:root:'dict' object has no attribute 'locals'
Traceback (most recent call last):
  File "/app/main.py", line 151, in load_module_from_path
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/app/./pipelines/supporthub-pipeline.py", line 15, in <module>
  File "/usr/local/lib/python3.11/site-packages/llama_index/core/__init__.py", line 16, in <module>
    from llama_index.core.base.response.schema import Response
  File "/usr/local/lib/python3.11/site-packages/llama_index/core/base/response/schema.py", line 9, in <module>
    from llama_index.core.schema import NodeWithScore
  File "/usr/local/lib/python3.11/site-packages/llama_index/core/schema.py", line 244, in <module>
    class RelatedNodeInfo(BaseComponent):
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py", line 202, in __new__
    }
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py", line 539, in complete_model_class
    cls.__pydantic_fields__ = fields
                 ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 626, in __get_pydantic_core_schema__
    __tracebackhide__ = True
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py", line 82, in __call__
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 502, in generate_schema
    warn(

  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 753, in _generate_schema_inner
    else:

  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 580, in _model_schema
    """Generate core schema.
                    ^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 580, in <dictcomp>
    """Generate core schema.
                        ^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 916, in _generate_md_field_schema
    return core_schema.datetime_schema()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1081, in _common_field_schema
    def _generate_dc_field_schema(
                         ^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1820, in _apply_annotations
    mode_lookup: dict[_ParameterKind, Literal['positional_only', 'positional_or_keyword', 'keyword_only']] = {
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py", line 82, in __call__
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1801, in inner_handler
    config=core_config,
         ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 758, in _generate_schema_inner
    if (get_schema := getattr(obj, '__get_pydantic_core_schema__', None)) is not None:
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 840, in match_type
    args = (_typing_extra._make_forward_ref(a) if isinstance(a, str) else a for a in args)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 864, in _match_generic_type
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1152, in _union_schema
    @staticmethod

  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 502, in generate_schema
    warn(

  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 737, in _generate_schema_inner
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1752, in _annotated_schema
    dataclass,
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1820, in _apply_annotations
    mode_lookup: dict[_ParameterKind, Literal['positional_only', 'positional_or_keyword', 'keyword_only']] = {
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_schema_generation_shared.py", line 82, in __call__
  File "/usr/local/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py", line 1902, in new_handler
    raise NotImplementedError(
             ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pydantic/functional_serializers.py", line 73, in __get_pydantic_core_schema__
    localns=handler._get_types_namespace().locals,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'locals'

So it looks like a compatibility problem between Pydantic and llama_index, but for now that's all I've got. I still have no clue why this works locally but not on k8s; I am not changing any versions of the dependencies.
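
For anyone else reading the trace: the last frame is the informative one. pydantic's functional_serializers.py calls .locals on whatever handler._get_types_namespace() returns; if that call hands back a plain dict (as older pydantic internals did) instead of a namespace object exposing a .locals attribute, you get exactly this AttributeError. A minimal sketch of that failure mode, assuming the version-mismatch explanation (SimpleNamespace stands in for pydantic's real namespace object):

```python
from types import SimpleNamespace

# Stand-in for what newer pydantic internals return: an object with
# .globals and .locals attributes.
namespaces_new = SimpleNamespace(globals={}, locals={"x": 1})

# What a mismatched environment hands back instead: a plain dict.
namespaces_old = {"x": 1}

print(namespaces_new.locals)  # attribute access works: {'x': 1}

try:
    namespaces_old.locals  # same access pattern on a dict
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute 'locals'
```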


filowsky commented Jan 17, 2025

Another update. I managed to make it work, but with a catch. I removed the requirements from the pipeline script header and built a custom Open WebUI Pipelines image with the requirements.txt content set to what I have on my computer after pip install -r requirements.txt and pip freeze. It looks like dependency resolution differs between my computer and the remote k8s cluster. But I don't consider the issue solved; it shouldn't be like this.

I think the key difference is the Python environment. I don't know what's inside the distro used here, https://github.com/open-webui/pipelines/blob/main/Dockerfile#L1, but it's probably different from the Python 3.11 I have installed on my computer.
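
One way to check the "dependency resolution differs between environments" theory above is to diff pip freeze output from the local machine against the container. A small sketch, with the freeze contents inlined as strings (the version numbers below are illustrative assumptions, not the actual resolved versions):

```python
def load_freeze(text: str) -> dict[str, str]:
    """Parse `pip freeze` output into a {package: version} mapping."""
    pins = {}
    for line in text.strip().splitlines():
        if "==" in line:
            name, ver = line.split("==", 1)
            pins[name.lower()] = ver
    return pins

# In practice these would be read from files captured in each environment.
local = load_freeze("pydantic==2.8.0\nllama-index==0.12.5")
cluster = load_freeze("pydantic==2.5.3\nllama-index==0.12.5")

# packages whose resolved versions differ between the two environments
diff = {p: (local[p], cluster.get(p)) for p in local if cluster.get(p) != local[p]}
print(diff)  # {'pydantic': ('2.8.0', '2.5.3')}
```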


grssmnn commented Feb 7, 2025

I think the issue is related to conflicting versions of pydantic.
I've upgraded pipelines to the latest pydantic==2.10.6 and rebuilt the container; this seems to be working for me.
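
A small note for anyone verifying the pin: dotted version strings must be compared numerically, not lexically, since as strings "2.10.6" sorts before "2.5.3". A naive comparison sketch (no pre-release handling; real tooling would use packaging.version):

```python
def meets_pin(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, not as strings."""
    parse = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return parse(installed) >= parse(minimum)

# e.g. an older resolved version vs the pin that fixed it in this thread
print(meets_pin("2.5.3", "2.10.6"))   # False
print(meets_pin("2.10.6", "2.10.6"))  # True
```

Inside the rebuilt container, `python -c "import pydantic; print(pydantic.VERSION)"` shows which version actually got resolved.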
