fix(openai): Allow o1 to stream a single chunk #7616

Open · wants to merge 4 commits into base: main

Changes from all commits
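For context, a minimal usage sketch (not part of this PR's diff) of the behavior the change targets; the model name and prompt are illustrative. Before the fix, streaming a model such as o1, which does not support token-level streaming, could finish without yielding any content chunk; after it, the full answer should arrive as a single chunk.

```ts
import { ChatOpenAI } from "@langchain/openai";

// Sketch only: o1 responses are not streamed token-by-token, so the whole
// answer is expected to surface as one chunk once this fix is in place.
const model = new ChatOpenAI({ model: "o1" });

let content = "";
let numChunks = 0;
for await (const chunk of await model.stream("What color is the sky?")) {
  numChunks += 1;
  if (typeof chunk.content === "string") {
    content += chunk.content;
  }
}

// Expected after this PR: numChunks >= 1 and content holds the full answer.
console.log({ numChunks, contentLength: content.length });
```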
19 changes: 18 additions & 1 deletion libs/langchain-openai/src/chat_models.ts
@@ -1320,7 +1320,6 @@ export class ChatOpenAI<
stream: true as const,
};
let defaultRole: OpenAIRoleEnum | undefined;

const streamIterable = await this.completionWithRetry(params, options);
let usage: OpenAIClient.Completions.CompletionUsage | undefined;
for await (const data of streamIterable) {
@@ -1599,6 +1598,24 @@ export class ChatOpenAI<
);
generations.push(generation);
}

await runManager?.handleLLMNewToken(
generations[0].text ?? "",
{
prompt: usageMetadata.input_tokens,
completion: usageMetadata.output_tokens,
},
undefined,
undefined,
undefined,
{
chunk: new ChatGenerationChunk({
message: new AIMessageChunk({ ...generations[0].message }),
text: generations[0].text ?? "",
}),
}
);

return {
generations,
llmOutput: {
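The block added above pushes the complete generation through `handleLLMNewToken` with a `ChatGenerationChunk` attached, which is what lets stream consumers observe content even though the underlying API call was not streamed. A rough sketch of a callback handler on the receiving end (the handler class and logging are assumptions, not part of this PR):

```ts
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import type { GenerationChunk, ChatGenerationChunk } from "@langchain/core/outputs";

// Hypothetical handler showing where the chunk emitted above would land.
class LogChunkHandler extends BaseCallbackHandler {
  name = "log_chunk_handler";

  async handleLLMNewToken(
    token: string,
    _idx: { prompt: number; completion: number },
    _runId: string,
    _parentRunId?: string,
    _tags?: string[],
    fields?: { chunk?: GenerationChunk | ChatGenerationChunk }
  ) {
    // With this change, a non-streaming o1 call should trigger this once,
    // carrying the full text and the chunk built from `generations[0]`.
    console.log("token length:", token.length, "has chunk:", fields?.chunk != null);
  }
}

// Usage sketch: attach the handler so the single emitted chunk can be observed, e.g.
// new ChatOpenAI({ model: "o1", callbacks: [new LogChunkHandler()] });
```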
28 changes: 28 additions & 0 deletions libs/langchain-openai/src/tests/chat_models.int.test.ts
@@ -26,6 +26,7 @@
ChatCompletionMessage,
} from "openai/resources/index.mjs";
import { ChatOpenAI } from "../chat_models.js";
import { RunnableLambda } from "@langchain/core/runnables";

[Check failure on line 29 in libs/langchain-openai/src/tests/chat_models.int.test.ts — GitHub Actions / Check linting: `@langchain/core/runnables` import should occur before import of `../chat_models.js`]

// Save the original value of the 'LANGCHAIN_CALLBACKS_BACKGROUND' environment variable
const originalBackground = process.env.LANGCHAIN_CALLBACKS_BACKGROUND;
@@ -1285,3 +1286,30 @@
console.log(chunk);
}
});

test.only("Streaming with o1 will yield at least one chunk with content", async () => {

[Check failure on line 1290 in libs/langchain-openai/src/tests/chat_models.int.test.ts — GitHub Actions / Check linting: Unexpected focused test]
const model = new ChatOpenAI({
model: "o1",
});

const runnable = RunnableLambda.from(() => model.streamEvents(["user", "What color is the sky?"], {
version: "v2",
}));

const result = runnable.streamEvents({}, {
version: "v2"
});

let content = "";
let numStreamChunks = 0;
for await (const chunk of result) {
console.log(chunk.event)
if (chunk.event === "on_chat_model_stream") {
content += chunk.data.chunk.content;
numStreamChunks += 1;
}
}

expect(content.length).toBeGreaterThan(10);
expect(numStreamChunks).toBe(1);
});