I found some similar discussions and issues that might help you with streaming in your chain.
To enable streaming in your chain so that the output comes back chunk by chunk, you can call `.stream()` on the model and iterate over the result:

```typescript
const query = "What is 3 * 12? Also, what is 11 + 49?";
const stream = await modelWithTools.stream(query);

for await (const chunk of stream) {
  console.log(chunk.tool_call_chunks);
}
```

This code sets up a query and streams the output, logging each chunk of the tool call as it is received.
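If you also want to reassemble those partial tool calls, you can merge the chunks with `AIMessageChunk.concat()` and read `tool_calls` off the accumulated message. The sketch below is only an illustration: the tool definitions and model setup are assumptions, not code from this thread.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import type { AIMessageChunk } from "@langchain/core/messages";
import { z } from "zod";

// Hypothetical tools, defined here only so the example is self-contained.
const multiply = tool(async ({ a, b }) => String(a * b), {
  name: "multiply",
  description: "Multiply two numbers.",
  schema: z.object({ a: z.number(), b: z.number() }),
});
const add = tool(async ({ a, b }) => String(a + b), {
  name: "add",
  description: "Add two numbers.",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const modelWithTools = new ChatOpenAI({ model: "gpt-4o", temperature: 0 })
  .bindTools([multiply, add]);

// Merge every streamed chunk into a single AIMessageChunk.
let accumulated: AIMessageChunk | undefined;
for await (const chunk of await modelWithTools.stream(
  "What is 3 * 12? Also, what is 11 + 49?"
)) {
  accumulated = accumulated === undefined ? chunk : accumulated.concat(chunk);
}

// The accumulated message carries the fully assembled tool calls.
console.log(accumulated?.tool_calls);
```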
Additionally, you can add some logic to compile your stream as it's being returned:

```typescript
const output = {};
let currentKey: string | null = null;

for await (const chunk of await modelWithTools.stream(query)) {
  for (const key of Object.keys(chunk)) {
    if (output[key] === undefined) {
      output[key] = chunk[key];
    } else {
      output[key] += chunk[key];
    }

    if (key !== currentKey) {
      console.log(`\n\n${key}: ${JSON.stringify(chunk[key])}`);
    } else {
      console.log(chunk[key]);
    }

    currentKey = key;
  }
}
```

This additional logic helps to compile the stream as it is being returned, providing a more structured output.
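A related pattern, shown here only as a sketch (the model and prompt are assumptions, not code from this discussion), is to pipe the model into a `JsonOutputParser`. When you stream that chain, the parser emits progressively more complete JSON objects, so you get structured partial output without hand-rolling the accumulation:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

// The parser aggregates the token stream and emits partial JSON objects.
const chain = model.pipe(new JsonOutputParser());

const stream = await chain.stream(
  'Output a JSON object with a "countries" key holding a list of objects, ' +
    'each with "name" and "population", for France, Spain and Japan.'
);

for await (const partialJson of stream) {
  console.log(partialJson); // each log shows a larger, more complete object
}
```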
For a more detailed example, you can refer to the LangChain documentation on streaming final outputs:

```typescript
for await (const chunk of await ragChainWithSource.stream(
  "What is task decomposition?"
)) {
  console.log(chunk);
}
```

This example demonstrates how to stream outputs as they are generated using LangChain.
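Since the question is about streaming a chain rather than a bare model, here is a minimal sketch of a prompt, model, and parser piped together; the prompt, model choice, and input are illustrative assumptions. The key point is that every step supports streaming, so chunks pass straight through instead of being buffered into a single complete answer:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a short joke about {topic}"
);
const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
const parser = new StringOutputParser();

// Each runnable in the chain forwards streamed chunks to the next one.
const chain = prompt.pipe(model).pipe(parser);

for await (const chunk of await chain.stream({ topic: "parrots" })) {
  // Logs individual string chunks as they arrive, not one complete answer.
  console.log(`${chunk}|`);
}
```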
Additionally, you can refer to the following test case:

```typescript
test("Stream model output", async () => {
  const remote = new RemoteRunnable({ url: `${BASE_URL}/b` });
  const stream = await remote.stream({ text: "What are the 5 best apples?" });

  let chunkCount = 0;
  let accumulator: AIMessageChunk | null = null;
  for await (const chunk of stream) {
    const innerChunk = chunk as AIMessageChunk;
    accumulator = accumulator ? accumulator.concat(innerChunk) : innerChunk;
    chunkCount += 1;
  }

  expect(chunkCount).toBe(18);
  expect(accumulator?.content).toEqual(
    '"object1, object2, object3, object4, object5"'
  );
});
```

This test case shows how to stream model output and accumulate chunks to form the final response.
For a more comprehensive understanding of streaming in LangChain, you can refer to the LangChain documentation on streaming:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

const stream = await model.stream("Hello! Tell me about yourself.");

const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
  console.log(`${chunk.content}|`);
}
```

This example shows how to stream the output from a model token by token, making the application feel more responsive by showing intermediate progress [6][7][8][9][10].
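If your chain still logs one complete answer from `.stream()`, another option is `streamEvents`, which surfaces the model's token-level chunks from inside the chain regardless of what the final step emits. The sketch below rests on assumptions (an OpenAI chat model and the `v2` events schema) and is not code from this discussion:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate(
  "Return a JSON list of three facts about {topic}."
);
const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
const chain = prompt.pipe(model).pipe(new JsonOutputParser());

// streamEvents emits intermediate events from every step of the chain.
const eventStream = chain.streamEvents({ topic: "owls" }, { version: "v2" });

for await (const event of eventStream) {
  if (event.event === "on_chat_model_stream") {
    // Token-level chunks from the model, even when a later step
    // transforms or buffers its own output.
    console.log(event.data.chunk.content);
  }
}
```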
Description
I can't seem to stream in this chain; even the console.log sends back the complete answer instead of chunks.
System Info
none