
[Bug]: Resolver hangs forever in the "Attempt to resolve issue" step due to AgentStateChangedObservation(content='', agent_state=<AgentState.RATE_LIMITED: 'rate_limited'>, observation='agent_state_changed') #6692

Open · oconnorjoseph opened this issue Feb 12, 2025 · 0 comments
Labels: bug (Something isn't working), resolver (Related to OpenHands Resolver)


Is there an existing issue for the same bug?

  • I have checked the existing issues.

Describe the bug and reproduction steps

When using OpenHands, the Resolver sometimes hangs indefinitely during the "Attempt to resolve issue" step when a RateLimitError from Anthropic’s API exhausts the maximum number of retries (7 in our case). Rather than failing or timing out, the GitHub workflow keeps running indefinitely.
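As a hypothetical illustration of how this kind of hang can happen (this is not the actual OpenHands source), consider a wait loop that only finishes on a fixed set of terminal states: if RATE_LIMITED is not in that set, the final state seen in the log below never satisfies the exit condition and the step runs forever. The AgentState values mirror those visible in the log; wait_for_agent and TERMINAL_STATES are illustrative names, not real OpenHands identifiers.

```python
import asyncio
from enum import Enum


class AgentState(str, Enum):
    RUNNING = 'running'
    ERROR = 'error'
    FINISHED = 'finished'
    RATE_LIMITED = 'rate_limited'


# Hypothetical: only ERROR and FINISHED are treated as terminal.
TERMINAL_STATES = {AgentState.FINISHED, AgentState.ERROR}


async def wait_for_agent(get_state, poll_interval: float = 1.0) -> AgentState:
    """Poll the agent state until it reaches a state in TERMINAL_STATES."""
    while True:
        state = get_state()
        if state in TERMINAL_STATES:
            return state
        # If the agent ends up in RATE_LIMITED and never leaves it, this loop
        # never exits, matching the never-ending "Attempt to resolve issue" step.
        await asyncio.sleep(poll_interval)
```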

Reproduction Steps:

  1. Configure OpenHands (Development workflow) to use Anthropic as the LLM provider.
  2. Initiate an issue resolution process that sends a high volume of tokens (or otherwise triggers a rate-limit scenario).
  3. Exhaust the (7) retries on litellm.RateLimitError: AnthropicException.
  4. Observe that during the "Attempt to resolve issue" step, the Resolver hangs indefinitely without recovering, with AgentStateChangedObservation(content='', agent_state=<AgentState.RATE_LIMITED: 'rate_limited'>, observation='agent_state_changed') as the last message:
23:27:08 - openhands:ERROR: agent_controller.py:228 - [Agent Controller default] Error while running the agent (session ID: default): litellm.RateLimitError: AnthropicException - {"type":"error","error":{"type":"rate_limit_error","message":"This request would exceed your organization’s rate limit of 400,000 input tokens per minute. For details, refer to: https://docs.anthropic.com/en/api/rate-limits; see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}. Traceback: Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/llms/anthropic/chat/handler.py", line 412, in completion
    response = client.post(
               ^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 557, in post
    raise e
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 538, in post
    response.raise_for_status()
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/httpx/_models.py", line 829, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.anthropic.com/v1/messages'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/main.py", line 1878, in completion
    response = anthropic_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/llms/anthropic/chat/handler.py", line 427, in completion
    raise AnthropicError(
litellm.llms.anthropic.common_utils.AnthropicError: {"type":"error","error":{"type":"rate_limit_error","message":"This request would exceed your organization’s rate limit of 400,000 input tokens per minute. For details, refer to: https://docs.anthropic.com/en/api/rate-limits; see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openhands/controller/agent_controller.py", line 226, in _step_with_exception_handling
    await self._step()
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openhands/controller/agent_controller.py", line 662, in _step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openhands/agenthub/codeact_agent/codeact_agent.py", line 405, in step
    response = self.llm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/tenacity/__init__.py", line 336, in wrapped_f
    return copy(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/tenacity/__init__.py", line 475, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/tenacity/__init__.py", line 376, in iter
    result = action(retry_state)
             ^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/tenacity/__init__.py", line 418, in exc_check
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/tenacity/__init__.py", line 185, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/tenacity/__init__.py", line 478, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/openhands/llm/llm.py", line 251, in wrapper
    resp: ModelResponse = self._completion_unwrapped(*args, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/utils.py", line 1156, in wrapper
    raise e
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/utils.py", line 1034, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/main.py", line 3085, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2201, in exception_type
    raise e
  File "/opt/hostedtoolcache/Python/3.12.8/x64/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 547, in exception_type
    raise RateLimitError(
litellm.exceptions.RateLimitError: litellm.RateLimitError: AnthropicException - {"type":"error","error":{"type":"rate_limit_error","message":"This request would exceed your organization’s rate limit of 400,000 input tokens per minute. For details, refer to: https://docs.anthropic.com/en/api/rate-limits; see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}
23:27:08 - openhands:INFO: agent_controller.py:451 - [Agent Controller default] Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
23:27:08 - openhands:INFO: agent_controller.py:451 - [Agent Controller default] Setting agent(CodeActAgent) state from AgentState.ERROR to AgentState.RATE_LIMITED
23:27:08 - openhands:INFO: resolve_issue.py:198 - AgentStateChangedObservation(content='', agent_state=<AgentState.ERROR: 'error'>, observation='agent_state_changed')
23:27:08 - openhands:INFO: resolve_issue.py:198 - AgentStateChangedObservation(content='', agent_state=<AgentState.RATE_LIMITED: 'rate_limited'>, observation='agent_state_changed')
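Until RATE_LIMITED is handled as a terminal (or recoverable) state, one possible stopgap is to bound the whole resolution attempt with an overall timeout so the workflow fails loudly instead of hanging. Below is a minimal sketch, assuming the resolver's entry point can be awaited as a coroutine; run_resolver and max_seconds are illustrative names, not real OpenHands parameters.

```python
import asyncio


async def resolve_with_timeout(run_resolver, max_seconds: int = 3600):
    """Bound the whole resolution attempt so a stuck agent fails the step."""
    try:
        return await asyncio.wait_for(run_resolver(), timeout=max_seconds)
    except asyncio.TimeoutError:
        raise RuntimeError(
            f'Resolver did not finish within {max_seconds}s; '
            'the agent may be stuck in AgentState.RATE_LIMITED'
        )
```

Setting timeout-minutes on the workflow step would achieve a similar effect without code changes, though the step would still end in failure rather than retrying after the rate-limit window resets.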

OpenHands Installation

GitHub resolver

OpenHands Version

0.23.0

Operating System

Linux

Logs, Errors, Screenshots, and Additional Context

Here is the full log archive: logs_34255864474.zip
