
Extraction Schema not returning values per the Info Tool Call #5

Open
blaziken105 opened this issue Oct 24, 2024 · 0 comments

I cloned the repo as-is and ran it in LangGraph Studio with the example from the README. It always ends with the "Unsatisfactory response".

After a fair amount of logging and digging into the code, it looks like the info variable is always undefined (or just shows up as {} in the log), so the output is never structured per the extraction schema. This is strange because the Python version works perfectly: at every stage, the relevant tool call pulls out information per the extraction schema.

Here is the section of the code where I think the issue lies

console.error("Response Messages", responseMessages);

  // If the model has collected enough information to fill out
  // the provided schema, great! It will call the "Info" tool.
  // We've decided to track this as a separate state variable.
  let info;
  if (response?.tool_calls?.length) {
    for (const tool_call of response.tool_calls || []) {
      if (tool_call.name === "Info") {
        info = tool_call.args;
        // If info was called, the agent is submitting a response.
        // (it's not actually a function to call, it's a schema to extract)
        // To ensure that the graph doesn't end up in an invalid state
        // (where the AI has called tools but no tool message has been provided)
        // we will drop any extra tool_calls.
        response.tool_calls = response.tool_calls?.filter(
          (tool_call) => tool_call.name === "Info",
        );
        console.error("Info in tool calls", tool_call);
        console.error("Response in tool calls", response.tool_calls);
        break;
      }
    }
  } else {
    // If LLM didn't respect the tool_choice
    responseMessages.push(
      new HumanMessage("Please respond by calling one of the provided tools."),
    );
  }
  console.error("info", info);
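As an aside, the guard and the "Info" extraction can be expressed more directly. Here is a minimal self-contained sketch of the same logic, using a plain object and a hypothetical `extractInfo` helper in place of the real LangChain `AIMessage` type (both are assumptions for illustration, not the repo's actual code):

```typescript
// Simplified stand-in for an AIMessage tool call (assumption for illustration).
type ToolCall = { name: string; args: Record<string, unknown> };

// Pull out the args of the "Info" tool call, if any, and drop all other
// tool calls so the graph never sees a tool call with no tool message.
function extractInfo(
  response: { tool_calls?: ToolCall[] },
): Record<string, unknown> | undefined {
  // Explicit length check instead of `(response?.tool_calls && ...) || 0`.
  if (!response.tool_calls || response.tool_calls.length === 0) {
    return undefined;
  }
  const infoCall = response.tool_calls.find((tc) => tc.name === "Info");
  if (!infoCall) {
    return undefined;
  }
  // Keep only the "Info" call; discard any extra tool calls.
  response.tool_calls = response.tool_calls.filter((tc) => tc.name === "Info");
  return infoCall.args;
}

// Usage with a mocked response:
const mockResponse = {
  tool_calls: [
    { name: "Info", args: { founder: "Ada" } },
    { name: "Search", args: { query: "example" } },
  ],
};
const info = extractInfo(mockResponse);
console.log(info); // the "Info" args object
console.log(mockResponse.tool_calls); // only the "Info" call remains
```

If `extractInfo` returned `undefined` here as well, that would point at the model never emitting an "Info" tool call in the JS version, rather than at this filtering code.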

Here is a screenshot of the output where I can see this issue:
(Screenshot 2024-10-24 at 11 11 43 AM)
