Making Langchain Agent with BufferMemory and streaming #5654
RobertTompsky asked this question in Q&A (Unanswered)
-
Hey @RobertTompsky! 👋 I'm here to help you out with any bugs, questions, or contributions you might have. To add memory to the Langchain Agent with `BufferMemory` while keeping streaming, you can set things up like this:

```ts
import { BufferMemory } from "langchain/memory";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";
import { HumanMessage, AIMessage, SystemMessage } from "@langchain/core/messages";
import { ChatMessageHistory } from "langchain/memory";
import 'dotenv/config';

// Function to create chat messages from stored messages
export function createChatMessagesFromStored(messages) {
  return messages.map((message) => {
    switch (message.role) {
      case "human":
        return new HumanMessage(message.content);
      case "ai":
        return new AIMessage(message.content);
      case "system":
        return new SystemMessage(message.content);
      default:
        throw new Error("Role must be defined for generic messages");
    }
  });
}

// Function to extract the last question from messages
export const extractLastQuestion = (messages) => {
  const currContent = messages.length > 0 ? messages[messages.length - 1].content : '';
  const previousMessages = messages.slice(0, messages.length - 1);
  return { currContent, previousMessages };
};

// Function to get chat memory seeded with the previous messages
export const getChatMemory = (messages) => {
  const { currContent, previousMessages } = extractLastQuestion(messages);
  const chatMemory = new BufferMemory({
    chatHistory: new ChatMessageHistory(createChatMessagesFromStored(previousMessages)),
    memoryKey: 'chat_history',
    returnMessages: true
  });
  return { chatMemory, currContent };
};

// Function to create the agent executor
export const createConversationChain = async (reqBody) => {
  const { messages, gptModel, systemPrompt } = reqBody;
  const { chatMemory } = getChatMemory(messages);

  const chatModel = new ChatOpenAI({
    apiKey: process.env.API_KEY,
    model: gptModel,
    temperature: 1.2,
    streaming: true,
  });

  // The "placeholder" entries are filled from memory ("chat_history") and the agent's scratchpad
  const chatPrompt = ChatPromptTemplate.fromMessages([
    ['system', `${systemPrompt} You may not need to use tools for every query - the user may just want to chat!`],
    ["placeholder", "{chat_history}"],
    ['human', "{input}"],
    ["placeholder", "{agent_scratchpad}"],
  ]);

  const search = new TavilySearchResults({
    apiKey: process.env.TAVILY_API_KEY,
    maxResults: 3
  });
  const wiki = new WikipediaQueryRun({
    topKResults: 3,
    maxDocContentLength: 4000,
  });
  const tools = [search, wiki, new Calculator()];

  const agent = createToolCallingAgent({
    llm: chatModel,
    tools,
    prompt: chatPrompt
  });

  const agentExecutor = new AgentExecutor({
    agent,
    tools,
    memory: chatMemory,
  });

  return agentExecutor;
};

// Fastify server route
import { FastifyInstance, FastifyReply, FastifyRequest } from "fastify";

const HEADERS = [
  { key: 'Content-Type', value: 'text/event-stream' },
  { key: 'Cache-Control', value: 'no-cache' },
  { key: 'Connection', value: 'keep-alive' },
  { key: 'Access-Control-Allow-Origin', value: '*' },
];

const setHeaders = (reply) => {
  HEADERS.forEach(header => {
    reply.raw.setHeader(header.key, header.value);
  });
};

export const sendMessage = async (fastify) => {
  fastify.route({
    method: 'POST',
    url: '/chat/send_message',
    handler: async (request, reply) => {
      try {
        const { messages } = request.body;
        const { currContent } = getChatMemory(messages);
        setHeaders(reply);
        const agent = await createConversationChain(request.body);
        // streamLog yields JSON-patch-style chunks; forward only the streamed LLM token strings
        const stream = agent.streamLog({ input: currContent });
        for await (const chunk of stream) {
          if (chunk.ops?.length > 0 && chunk.ops[0].op === "add") {
            const addOp = chunk.ops[0];
            if (addOp.path.startsWith("/logs/ChatOpenAI") && typeof addOp.value === "string" && addOp.value.length) {
              reply.raw.write(addOp.value);
            }
          }
        }
        reply.raw.end();
      } catch (error) {
        console.log(error);
        reply.code(500).send({ error: error.message });
      }
    }
  });
};
```

This setup ensures that the agent has access to the chat history and can handle streaming responses effectively.
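If it helps, here is a minimal sketch of a client reading the stream. Note that the handler writes raw token text rather than `data:`-framed SSE events, so the response body can be consumed as a plain stream. The URL, port, model name, and request body fields below are assumptions based on the route above, not taken from your code:

```ts
// Minimal client sketch (Node 18+ fetch); endpoint and body shape are assumptions
const response = await fetch("http://localhost:3000/chat/send_message", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    gptModel: "gpt-4o-mini",
    systemPrompt: "You are a helpful assistant.",
    messages: [
      { role: "human", content: "What did I ask you about earlier?" },
    ],
  }),
});

if (!response.body) throw new Error("No response body to stream");

// Read the body chunk by chunk and print tokens as they arrive
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value));
}
```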
-
Example Code
Description
Yo, I'm trying to make a Langchain Agent with BufferMemory and streaming. You can see my code here. Everything works except memory: it seems the agent has no access to the chat history.
I used the same approach of adding BufferMemory to a ConversationChain earlier and it worked fine.
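For context, the ConversationChain setup I mean looked roughly like this (a minimal sketch, not my exact code; the model and settings are just illustrative):

```ts
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";
import { ChatOpenAI } from "@langchain/openai";

// BufferMemory attached to a ConversationChain; the default memoryKey "history"
// matches the chain's built-in prompt
const chain = new ConversationChain({
  llm: new ChatOpenAI({ model: "gpt-4o-mini", streaming: true }),
  memory: new BufferMemory(),
});

const first = await chain.invoke({ input: "My name is Robert." });
const second = await chain.invoke({ input: "What is my name?" }); // history is injected automatically
```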
It is also worth mentioning that while creating the chain I used the following prompt structure:
`const chatPrompt =
But there is a problem with this prompt. When I add `new MessagesPlaceholder('agent_scratchpad')` to it and use the prompt with the agent, I get an error saying that I'm trying to pass an object while the server expects string or Buffer data, so with the agent I can only use a prompt of the following structure:
`const chatPrompt
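To be concrete, the two prompt shapes look roughly like this (an illustrative reconstruction, not my exact code; the system text and variable names are placeholders):

```ts
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

// Shape that worked with ConversationChain: explicit MessagesPlaceholder instances
const chainPrompt = ChatPromptTemplate.fromMessages([
  ["system", "{system_prompt}"],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

// Shape I have to use with the agent: "placeholder" tuples instead of
// new MessagesPlaceholder("agent_scratchpad"), which triggered the string/Buffer error
const agentPrompt = ChatPromptTemplate.fromMessages([
  ["system", "{system_prompt}"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);
```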
System Info
windows