(Community): Adding Structured Support for ChatPerplexity #29361
Conversation
```python
@chain
def _oai_structured_outputs_parser(ai_msg: AIMessage) -> PydanticBaseModel:
    if ai_msg.additional_kwargs.get("parsed"):
        return ai_msg.additional_kwargs["parsed"]
```
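The parsing logic above is small enough to sketch in isolation. A minimal, hedged illustration of what the parser does, using a stand-in `FakeAIMessage` dataclass (hypothetical, for illustration only) instead of langchain's `AIMessage`, and a plain function instead of the `@chain` decorator:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class FakeAIMessage:
    # Stand-in for langchain_core.messages.AIMessage (illustrative only).
    content: str = ""
    additional_kwargs: dict[str, Any] = field(default_factory=dict)


def structured_outputs_parser(ai_msg: FakeAIMessage) -> Any:
    # If the provider populated a pre-parsed object under "parsed",
    # return it directly; otherwise fall back to the raw message.
    if ai_msg.additional_kwargs.get("parsed"):
        return ai_msg.additional_kwargs["parsed"]
    return ai_msg


msg = FakeAIMessage(additional_kwargs={"parsed": {"setup": "q", "punchline": "a"}})
print(structured_outputs_parser(msg))
```

This makes the reviewer's question concrete: the parser only works if something upstream actually stores an object under the `"parsed"` key of `additional_kwargs`.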
Is a BaseModel instance getting populated under `"parsed"` in `.additional_kwargs`?
@ccurme please see now. I have double-checked and also tested against the Perplexity docs.
@ccurme it's looking all good, please review.
I enabled standard tests for Perplexity to pick up tests for structured output. It's currently failing; we expect to handle TypedDict, Pydantic, and JSON schema.
More importantly, this doesn't appear to work for any input type. Let me know if I'm doing something wrong.
```python
from langchain_community.chat_models import ChatPerplexity
from pydantic import BaseModel, Field


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")


llm = ChatPerplexity(model="sonar").with_structured_output(Joke)
result = llm.invoke("Tell me a joke about cats.")
```

```
BadRequestError: Error code: 400 - {'error': {'message': '["At body -> response_format -> ResponseFormatText -> type: Input should be 'text'", "At body -> response_format -> ResponseFormatJSONSchema -> type: Input should be 'json_schema'", "At body -> response_format -> ResponseFormatJSONSchema -> json_schema: Field required", "At body -> response_format -> ResponseFormatRegex -> type: Input should be 'regex'", "At body -> response_format -> ResponseFormatRegex -> regex: Field required"]', 'type': 'bad_request', 'code': 400}}
```
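The validation error itself spells out the discriminated shapes the endpoint will accept for `response_format`: `{"type": "text"}`, `{"type": "json_schema", "json_schema": ...}` (the `json_schema` field is required), or `{"type": "regex", "regex": ...}`. A hedged sketch of the `json_schema` variant built from the error message's field names; the exact nesting inside `json_schema` (the `"schema"` key below) is an assumption that should be confirmed against Perplexity's docs:

```python
# JSON schema derived from the Joke model above.
joke_schema = {
    "type": "object",
    "properties": {
        "setup": {"type": "string"},
        "punchline": {"type": "string"},
    },
    "required": ["setup", "punchline"],
}

# One of the three shapes the error message says the API validates against.
# The inner "schema" key is an assumption, not confirmed by this thread.
response_format = {
    "type": "json_schema",
    "json_schema": {"schema": joke_schema},
}
print(response_format["type"])
```

The 400 suggests the PR is sending a `response_format` that matches none of these three shapes, which is why every variant in the union fails validation.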
okay @ccurme
@ccurme I have ensured that we are handling `bind_tools` for structured output #29357
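Tool calling is the other common route to structured output: the schema is bound as a tool and the structured result is read back from the tool call's arguments. A minimal sketch of that decoding step, using plain dicts rather than langchain's message types (the tool-call shape here is illustrative, not Perplexity's actual wire format):

```python
import json

# A tool call as it might appear on a chat response (shape is illustrative).
tool_call = {
    "name": "Joke",
    "arguments": json.dumps(
        {
            "setup": "Why did the cat sit on the computer?",
            "punchline": "To keep an eye on the mouse.",
        }
    ),
}


def parse_tool_call(call: dict) -> dict:
    # Decode the JSON-encoded arguments back into a structured dict.
    return json.loads(call["arguments"])


joke = parse_tool_call(tool_call)
print(joke["punchline"])
```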