Checked other resources
I added a very descriptive title to this question.
I searched the LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
Commit to Help
I commit to help with one of those options 👆
Example Code
const CONDENSE_PROMPT = `Given the history of the conversation and a follow up question, rephrase the follow up question to be a standalone question.
If the follow up question does not need context, like when the follow up question is a remark such as: excellent, thanks, thank you etc., return the exact same text back.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`;

const QA_PROMPT = `You are a multilingual helpful and friendly assistant.
- If there are images in the Images section, look carefully at the image before answering the question.
- If you do not have the information in the context to answer a question, admit it openly without fabricating responses.
- If the user's question is valid and there is no documentation or context about it, let him know that he can leave a comment and we will do our best to include it at a later stage.
Your responses should be tailored to the question's intent, using text formatting (bold with **, italic with __, strikethrough with ~~) to enhance clarity, and organized with headings, paragraphs, or lists as appropriate.
=========
context: {context}
Images: {images}
=========
Question: {question}
Answer in the {language} language:`;

export const makeChain = (
  vectorstore: PineconeStore,
  onTokenStream: (token: string) => void,
  userEmail: string
) => {
  // Streaming model used for the final answer; tokens are forwarded as they arrive.
  const streamingModel = new ChatOpenAI({
    streaming: true,
    modelName: MODEL_NAME,
    temperature: TEMPRATURE,
    modelKwargs: {
      seed: 1,
    },
    callbacks: [
      {
        handleLLMNewToken: (token) => {
          onTokenStream(token); // Forward the streamed token to the front-end
        },
      },
    ],
  });
  // Non-streaming models for question condensing, language detection, and translation.
  const nonStreamingModel = new OpenAI({ modelName: 'gpt-4o', temperature: TEMPRATURE });
  const translationModel = new OpenAI({ modelName: 'gpt-4o', temperature: TEMPRATURE });

  function generateUniqueId(): string {
    return uuidv4();
  }

  return {
    call: async (input: string, Documents: MyDocument[], roomId: string, userEmail: string) => {
      const qaId = generateUniqueId();
      const chat_memory = MemoryService.getChatMemory(roomId);

      // Detect the user's language and translate the question to English if needed.
      const language = await detectLanguageWithOpenAI(input, nonStreamingModel);
      if (language !== 'English') {
        input = await translateToEnglish(input, translationModel);
      }

      // Collect image URLs from memory and inject them into the QA prompt as plain text.
      const imageUrls = await extractImageUrlsFromMemory(chat_memory);
      const formattedImages = imageUrls.join(', ');
      const formattedContext = `{context}`;
      const formattedPrompt = QA_PROMPT
        .replace('{language}', language)
        .replace('{context}', formattedContext)
        .replace('{images}', formattedImages);

      if ((chat_memory.chatHistory as any).messages.length === 0) {
        const initialHumanMessage = new HumanMessage({
          content: "Hi",
          name: "Human",
        });
        chat_memory.chatHistory.addMessage(initialHumanMessage);
      }

      const customRetriever = new CustomRetriever(vectorstore);
      const chain = ConversationalRetrievalQAChain.fromLLM(streamingModel, customRetriever, {
        memory: chat_memory,
        questionGeneratorChainOptions: {
          llm: nonStreamingModel,
        },
        qaTemplate: formattedPrompt,
        questionGeneratorTemplate: CONDENSE_PROMPT,
        returnSourceDocuments: true,
        verbose: true,
      });

      const response = await chain.invoke({
        question: input,
        chat_history: chat_memory.chatHistory,
      });
      await chat_memory.saveContext({ question: input }, { text: response.text });

      const minScoreSourcesThreshold = process.env.MINSCORESOURCESTHRESHOLD !== undefined
        ? parseFloat(process.env.MINSCORESOURCESTHRESHOLD)
        : 0.78;

      // Build the list of source documents to send to the client, filtered by type.
      let embeddingsStore;
      if (language !== 'English') {
        embeddingsStore = await customRetriever.storeEmbeddings(response.text, minScoreSourcesThreshold);
        Documents = [...response.sourceDocuments];
        // Apply filtering after combining sources
        Documents = Documents.filter(
          doc =>
            doc.metadata.type !== 'other' &&
            doc.metadata.type !== "txt" &&
            doc.metadata.type !== "user_input"
        );
        for (const [doc, score] of embeddingsStore) {
          if (doc.metadata.type !== "txt" && doc.metadata.type !== "user_input") {
            const myDoc = new MyDocument({
              pageContent: doc.pageContent,
              metadata: {
                source: doc.metadata.source,
                type: doc.metadata.type,
                videoLink: doc.metadata.videoLink,
                file: doc.metadata.file,
                score: score,
                image: doc.metadata.image,
              },
            });
            Documents.push(myDoc);
          }
        }
        Documents.sort((a, b) => (b.metadata.score ? 1 : 0) - (a.metadata.score ? 1 : 0));
      }
      if (language === 'English') {
        embeddingsStore = await customRetriever.storeEmbeddings(response.text, minScoreSourcesThreshold);
        for (const [doc, score] of embeddingsStore) {
          const myDoc = new MyDocument({
            pageContent: doc.pageContent,
            metadata: {
              source: doc.metadata.source,
              type: doc.metadata.type,
              videoLink: doc.metadata.videoLink,
              file: doc.metadata.file,
              score: score,
              image: doc.metadata.image,
            },
          });
          Documents.push(myDoc);
        }
        // Apply filtering after combining sources
        Documents = Documents.filter(doc => doc.metadata.type !== 'other');
      }

      // Emit the full response with source documents to the client room (or broadcast).
      if (roomId) {
        io.to(roomId).emit(`fullResponse-${roomId}`, {
          roomId: roomId,
          sourceDocs: Documents,
          qaId: qaId,
        });
      } else {
        io.emit("fullResponse", { sourceDocs: Documents, qaId: qaId });
      }
      MemoryService.updateChatMemory(roomId, chat_memory);

      return { text: response.text, sourceDocuments: response.sourceDocuments };
    },
    vectorstore,
  };
};
Description
I currently use ConversationalRetrievalQAChain, but since it is now deprecated,
I would like to know whether it is possible to set up a createRetrievalChain that can maintain conversations with chat history, document retrieval, prompts, and images.
For example, I upload an image and ask a question about it. The createRetrievalChain should see the image, gather embeddings relevant to the question, and then answer based on the image plus the embedding context.
Then I ask another question, which may or may not relate to the image; if there is an image URL in chat_history or elsewhere, it should read the image and the embedding context, answer the question, and the process repeats.
In the code above I have tried to implement this, but the chain is not able to read the image URL as is.
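One note on why the image URL is not "seen" when it is only interpolated into the prompt text (the {images} slot above): the model then receives the URL as plain text and never fetches the image itself. With gpt-4o the image has to be sent as multimodal message content. A minimal sketch outside the chain, with a placeholder URL:

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatOpenAI({ modelName: "gpt-4o" });

// The image is passed as an image_url content block rather than as text,
// so the model actually fetches and inspects it.
const message = new HumanMessage({
  content: [
    { type: "text", text: "What is shown in this image?" },
    { type: "image_url", image_url: { url: "https://example.com/diagram.png" } }, // placeholder URL
  ],
});

const res = await visionModel.invoke([message]);
console.log(res.content);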
import { ChatPromptTemplate } from "@langchain/core/prompts";

const QUESTION_GEN_TEMPLATE = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {input}
Standalone question:`;

const contextualizeQPrompt = ChatPromptTemplate.fromTemplate(QUESTION_GEN_TEMPLATE);
const COMBINE_DOCS_PROMPT = `Based on the following context:
{context}

And chat history:
{chat_history}

Answer the following question:
{input}`;

const qaPrompt = ChatPromptTemplate.fromTemplate(COMBINE_DOCS_PROMPT);
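The snippets here never construct the ragChain that is invoked below; a minimal sketch of how it could be assembled with createHistoryAwareRetriever, createStuffDocumentsChain, and createRetrievalChain, assuming vectorstore is the PineconeStore from the question's code and reusing the two prompts above:

import { ChatOpenAI } from "@langchain/openai";
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

const llm = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });

// Assumption: `vectorstore` is the PineconeStore passed into makeChain in the question.
const retriever = vectorstore.asRetriever();

// Rephrases a follow-up question into a standalone question using the chat history.
const historyAwareRetriever = await createHistoryAwareRetriever({
  llm,
  retriever,
  rephrasePrompt: contextualizeQPrompt,
});

// Stuffs the retrieved documents into {context} of qaPrompt and answers {input}.
const combineDocsChain = await createStuffDocumentsChain({
  llm,
  prompt: qaPrompt,
});

// Final chain: history-aware retrieval followed by answer generation.
const ragChain = await createRetrievalChain({
  retriever: historyAwareRetriever,
  combineDocsChain,
});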
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const chat_history = [
  new HumanMessage("Can LangSmith help test my LLM applications?"),
  new AIMessage("Yes!"),
];

const response = await ragChain.invoke({
  chat_history,
  input: "Tell me how!",
});
console.log(response.answer);
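If the history should be tracked per room rather than passed in by hand, the chain can also be wrapped in RunnableWithMessageHistory. A sketch under the assumption that an in-memory ChatMessageHistory keyed by the question's roomId is acceptable (the session id below is a placeholder):

import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// Assumption: one in-memory history per room; swap in a persistent store for production.
const histories: Record<string, ChatMessageHistory> = {};

const conversationalRagChain = new RunnableWithMessageHistory({
  runnable: ragChain,
  getMessageHistory: (sessionId) => {
    if (!histories[sessionId]) histories[sessionId] = new ChatMessageHistory();
    return histories[sessionId];
  },
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
  outputMessagesKey: "answer",
});

const result = await conversationalRagChain.invoke(
  { input: "Tell me how!" },
  { configurable: { sessionId: "room-1" } } // placeholder session id
);
console.log(result.answer);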
This setup ensures that the createRetrievalChain maintains conversations with chat history, document retrieval, prompts, and images. Additionally, you can refer to the test example in the langchain/src/chains/tests/retrieval_chain.test.ts file for further insights on how to structure and test your retrieval chain [1][2][3][4].
To handle image URLs correctly in your createRetrievalChain setup, you need to ensure that the image URLs are processed and included in the context for answering questions. Here's an updated version of your setup that includes processing image URLs:
import { ChatPromptTemplate } from "@langchain/core/prompts";

const QUESTION_GEN_TEMPLATE = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {input}
Standalone question:`;

const contextualizeQPrompt = ChatPromptTemplate.fromTemplate(QUESTION_GEN_TEMPLATE);
const COMBINE_DOCS_PROMPT = `Based on the following context:
{context}

And chat history:
{chat_history}

Answer the following question:
{input}`;

const qaPrompt = ChatPromptTemplate.fromTemplate(COMBINE_DOCS_PROMPT);
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const chat_history = [
  new HumanMessage("Can LangSmith help test my LLM applications?"),
  new AIMessage("Yes!"),
];

const response = await ragChain.invoke({
  chat_history,
  input: "Tell me how!",
});
console.log(response.answer);
This setup ensures that the createRetrievalChain maintains conversations with chat history, document retrieval, prompts, and images. The extractImageUrls function processes image URLs from the documents and includes them in the context for answering questions [1][2][3][4][5].
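The extractImageUrls helper itself is not shown in this answer; a hypothetical sketch of what it could look like, assuming image URLs live in each document's metadata.image field as in the question's MyDocument metadata:

import type { Document } from "@langchain/core/documents";

// Hypothetical helper: collect image URLs from retrieved documents so they can be
// injected into the prompt (e.g. an {images} slot). Assumes metadata.image holds a URL.
function extractImageUrls(docs: Document[]): string {
  return docs
    .map((doc) => doc.metadata?.image)
    .filter((url): url is string => typeof url === "string" && url.length > 0)
    .join(", ");
}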
System Info
langchain@0.2.10 | MIT | deps: 16 | versions: 282
dependencies:
@langchain/core: >=0.2.11 <0.3.0
@langchain/openai: >=0.1.0 <0.3.0
@langchain/textsplitters: ~0.0.0
binary-extensions: ^2.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langchainhub: ~0.0.8
langsmith: ~0.1.30
ml-distance: ^4.0.0
openapi-types: ^12.1.3
p-retry: 4
uuid: ^10.0.0
yaml: ^2.2.1
zod-to-json-schema: ^3.22.3
zod: ^3.22.4
dist-tags:
latest: 0.2.10 next: 0.2.3-rc.0