
Making use of Proxy for the prompt optimization #6

Open
noamKayzer opened this issue Nov 21, 2024 · 3 comments

Comments

@noamKayzer

noamKayzer commented Nov 21, 2024

Hey.

I'm trying to use the promptim optimizer with an Azure OpenAI client.
Configuring it this way doesn't seem to have any effect (it keeps asking for the regular OPENAI_API_KEY).
Is support for an OpenAI (or Claude) proxy implemented yet, or is there a problem with my configuration?

Part of my config file:

"optimizer": {
  "model": {
    "model": "gpt4o",
    "model_provider": "azure_openai",
    "max_tokens_to_sample": 8192,
    "configurable_fields": "any",
    "api_key": "db64482d5b7d-------------1",
    "deployment_name": "gpt-4o",
    "api_version": "2024-08-01-preview",
    "base_url": "https://----------------.openai.azure.com"
  }
}
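
As a standalone sanity check of the same Azure settings (my assumption is that this model block is ultimately handed to langchain's init_chat_model; the endpoint, key, and deployment below are placeholders), something like this should exercise the Azure client directly, outside promptim:

# Sketch only: direct Azure OpenAI call via langchain, bypassing promptim.
# Assumes langchain-openai is installed; all values are placeholders.
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "gpt-4o",
    model_provider="azure_openai",
    azure_deployment="gpt-4o",
    api_version="2024-08-01-preview",
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<azure-openai-api-key>",
)
print(model.invoke("ping").content)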
Thanks

@hinthornw
Owner

Could you test again on version 0.0.7? Thank you! 🙏

@Squidward1012

Squidward1012 commented Nov 26, 2024

Could you test again on version 0.0.7? Thank you! 🙏

{
  "name": "Test",
  "dataset": "tweet-optim",
  "description": "test task",
  "evaluators": "./task.py:evaluators",
  "evaluator_descriptions": {
    "my_example_criterion": "CHANGEME: This is a description of what the example criterion is testing. It is provided to the metaprompt to improve how it responds to different results."
  },
  "optimizer": {
    "model": {
      "model": "qwen2.5-72b-instruct",
      "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "model_provider": "openai"
    }
  },
  "initial_prompt": {
    "identifier": "tweet-generator-example-with-nothing"
  },
  "$schema": "https://raw.githubusercontent.com/hinthornw/promptimizer/refs/heads/main/config-schema.json"
}

File "C:\Users\23939.conda\envs\3.12\Lib\site-packages\openai_base_client.py", line 1634, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model gpt-4o-mini does not exist or you do not
have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}, 'request_id':
'3125e679-3b43-9990-bc94-6a5321d11279'}
Getting baseline scores... ---------------------------------------- 100% 0:00:00
Error running target function: Error code: 404 - {'error': {'message': 'The model gpt-4o-mini does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}, 'request_id': 'de583ec4-f4b7-9997-8407-672e56e2f288'}
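
Side note: the 404 above is for gpt-4o-mini even though the optimizer block specifies qwen2.5-72b-instruct, so the failing call does not appear to be using the configured optimizer model. To rule out the endpoint itself, a direct check of the DashScope OpenAI-compatible API (sketch only; assumes the key is exported as DASHSCOPE_API_KEY) looks like this:

# Sketch only: call the DashScope OpenAI-compatible endpoint directly,
# bypassing promptim, to confirm base_url and model name are reachable.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed to be set beforehand
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
resp = client.chat.completions.create(
    model="qwen2.5-72b-instruct",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)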

@noamKayzer
Author

Could you test again on version 0.0.7? Thank you! 🙏

It seems like nothing changed after upgrading; it still asks for the API key for the regular OpenAI client.

raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
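
For context, this is the error the stock OpenAI client raises whenever it is constructed without a key, which is why it looks like the Azure settings in the config never reach the client. A minimal reproduction (sketch, outside promptim):

# Sketch only: constructing the regular OpenAI client with no key set
# raises the same openai.OpenAIError as above.
import os
from openai import OpenAI

os.environ.pop("OPENAI_API_KEY", None)
OpenAI()  # -> OpenAIError: The api_key client option must be set ...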
