Replies: 5 comments 5 replies
-
🤖 Hi @dmb0058! Good to see you again here in the langchainjs repository. I hope you're doing well! Let's take a look at this together.
The error message you're encountering, "ResourceNotFoundException: Could not resolve the foundation model from the provided model identifier," is not due to using a deprecated model "amazon.titan-embed-text-v1" with Amazon Bedrock. It is more likely due to an issue with the model identifier provided to the embeddings object.
In the provided code, the model identifier "amazon.titan-embed-text-v1" is passed to the BedrockEmbeddings constructor. If this identifier is incorrect or the model is not available, it would result in the error you're seeing. To resolve this, you could try a different model identifier, or check the availability and accessibility of "amazon.titan-embed-text-v1".
Additionally, the initialization of the BedrockEmbeddings object in your code differs slightly from the pattern used in the langchainjs codebase: instead of passing the region and credentials directly to the BedrockEmbeddings constructor, a BedrockRuntimeClient instance is created with these details and then passed to the BedrockEmbeddings constructor. Here is the updated example based on the provided test code:
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { BedrockEmbeddings } from "langchain/embeddings/bedrock";

// Function to create a BedrockRuntimeClient instance
const getClient = () => {
  if (
    !process.env.BEDROCK_AWS_REGION ||
    !process.env.BEDROCK_AWS_ACCESS_KEY_ID ||
    !process.env.BEDROCK_AWS_SECRET_ACCESS_KEY
  ) {
    throw new Error("Missing environment variables for AWS");
  }
  return new BedrockRuntimeClient({
    region: process.env.BEDROCK_AWS_REGION,
    credentials: {
      accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY,
    },
  });
};

// Create a BedrockRuntimeClient instance
const client = getClient();

// Initialize the BedrockEmbeddings object with the client
const embeddings = new BedrockEmbeddings({
  maxRetries: 1,
  client,
});
You can find more details in the langchain/src/embeddings/bedrock.ts and langchain/src/embeddings/tests/bedrock.int.test.ts files in the langchainjs repository.
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
-Dosu
Sources
examples/src/indexes/vector_stores/faiss_fromdocs.ts <https://github.com/langchain-ai/langchainjs/blob/main/examples/src/indexes/vector_stores/faiss_fromdocs.ts>
langchain/src/embeddings/bedrock.ts <https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/embeddings/bedrock.ts>
langchain/src/embeddings/tests/bedrock.int.test.ts <https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/embeddings/tests/bedrock.int.test.ts>
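For reference, a minimal sketch combining the client-based setup above with an explicit model identifier, the same "amazon.titan-embed-text-v1" used elsewhere in this thread. The "langchain/embeddings/bedrock" import path and the embedQuery call mirror the working example later in the discussion; the top-level await assumes an ES module:
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { BedrockEmbeddings } from "langchain/embeddings/bedrock";

// Construct the runtime client inline (equivalent to getClient() above).
const client = new BedrockRuntimeClient({
  region: process.env.BEDROCK_AWS_REGION,
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY,
  },
});

// Pass the client and spell out the model identifier explicitly.
const embeddings = new BedrockEmbeddings({
  client,
  maxRetries: 1,
  model: "amazon.titan-embed-text-v1",
});

// Top-level await assumes this file is loaded as an ES module.
const vector = await embeddings.embedQuery("hello world");
console.log(vector.length);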
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
@dosu-bot,
Thanks for your suggestion. Just to note for any future readers: it does not work. It gives exactly the same result as my initial code because it is functionally equivalent.
David
-
@dosu-bot
Your code doesn’t work - @langchain-ai/client-bedrock-runtime does not exist.
-
@dosu-bot
Here’s a simpler example that fails in the same way. This is copied from https://js.langchain.com/docs/integrations/text_embedding/bedrock.
const { BedrockEmbeddings } = require("langchain/embeddings/bedrock");
require("dotenv").config();

(async () => {
  const embeddings = new BedrockEmbeddings({
    region: process.env.BEDROCK_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
    model: "amazon.titan-embed-text-v1",
  });

  const res = await embeddings.embedQuery(
    "What would be a good company name a company that makes colorful socks?"
  );
  console.log({ res });
})();
The error is the same:
$ node test.js
{
  error: ResourceNotFoundException: Could not resolve the foundation model from the provided model identifier.
      at de_ResourceNotFoundExceptionRes (/var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/protocols/Aws_restJson1.js:260:23)
      at de_InvokeModelCommandError (/var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/protocols/Aws_restJson1.js:92:25)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async /var/www/src/conlucra/chat/node_modules/@smithy/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
      at async /var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@aws-sdk/middleware-signing/dist-cjs/awsAuthMiddleware.js:30:20
      at async /var/www/src/conlucra/chat/node_modules/@smithy/middleware-retry/dist-cjs/retryMiddleware.js:27:46
      at async /var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:7:26
      at async /var/www/src/conlucra/chat/node_modules/langchain/dist/embeddings/bedrock.cjs:51:29
      at async RetryOperation._fn (/var/www/src/conlucra/chat/node_modules/p-retry/index.js:50:12) {
    '$fault': 'client',
    '$metadata': {
      httpStatusCode: 404,
      requestId: '5e951c03-abeb-4b30-9b68-aa92bf44ede9',
      extendedRequestId: undefined,
      cfId: undefined,
      attempts: 1,
      totalRetryDelay: 0
    }
  }
}
On 4 Dec 2023, at 10:16, dosubot[bot] wrote:
🤖
Based on the error message you provided, it seems like the issue is related to the model identifier not being found. However, you've mentioned that you've checked the model identifier and it seems to be correct.
The ResourceNotFoundException error typically occurs when the specified resource (in this case, the model) cannot be found. Given that you've confirmed the model identifier is correct, the issue might be with the region or the way you're trying to import the BedrockRuntimeClient.
Regarding the BedrockRuntimeClient, it seems like there might be a misunderstanding. The BedrockRuntimeClient is not part of the LangChainJS framework, but rather it's part of the AWS SDK. You should be able to import it from the @aws-sdk/client-bedrock-runtime package, not @langchain-ai/client-bedrock-runtime.
Here's how you can import it:
const { BedrockRuntimeClient } = require("@aws-sdk/client-bedrock-runtime");
If you're using ES6 imports:
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
Please ensure that the @aws-sdk/client-bedrock-runtime package is correctly installed in your project. If it's not, you can install it using npm or yarn:
npm install @aws-sdk/client-bedrock-runtime
# or
yarn add @aws-sdk/client-bedrock-runtime
After ensuring the package is installed and correctly imported, if you're still facing the ResourceNotFoundException error, you might want to double-check the region and model name, or contact AWS support for further assistance.
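A quick way to rule langchainjs out is to invoke the model directly with the runtime client. This is only a sketch; the { inputText } request body and the { embedding } response field are assumptions about the Titan text embeddings API rather than something shown earlier in this thread:
const { BedrockRuntimeClient, InvokeModelCommand } = require("@aws-sdk/client-bedrock-runtime");

(async () => {
  // Credentials are picked up from the default AWS provider chain (e.g. AWS_ACCESS_KEY_ID).
  const client = new BedrockRuntimeClient({ region: process.env.BEDROCK_REGION });

  // Call the embeddings model directly, bypassing langchain entirely.
  const response = await client.send(
    new InvokeModelCommand({
      modelId: "amazon.titan-embed-text-v1",
      contentType: "application/json",
      accept: "application/json",
      // Assumed request shape for the Titan text embeddings model.
      body: JSON.stringify({ inputText: "hello world" }),
    })
  );

  // response.body is a byte array; decode and parse it.
  const payload = JSON.parse(new TextDecoder().decode(response.body));
  console.log("embedding length:", payload.embedding?.length);
})();
If this call fails with the same ResourceNotFoundException, the problem is between your account/region and the model, not in langchainjs.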
If you're still encountering issues after trying these steps, could you please provide more details about your setup? Specifically, it would be helpful to know the exact code you're using to initialize the BedrockEmbeddings object and the BedrockRuntimeClient, as well as any other relevant code. This will help me better understand the issue and provide a more accurate solution.
Sources
examples/src/llms/bedrock.js <https://github.com/langchain-ai/langchainjs/blob/main/examples/src/llms/bedrock.js>
-
@dosu-bot
For future reference, I found the cause of the error: I hadn't realised that I needed to manually request access to the Amazon Bedrock foundation models through the AWS Console. Once I did this it all worked.
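For anyone hitting the same thing, one way to check which model identifiers are actually resolvable in a region is to list the foundation models with the Bedrock control-plane client. A sketch, assuming the separate @aws-sdk/client-bedrock package is installed; note that this lists the models offered in the region, while the access grant itself is managed on the console's Model access page:
const { BedrockClient, ListFoundationModelsCommand } = require("@aws-sdk/client-bedrock");

(async () => {
  // Control-plane client; this is a different package from @aws-sdk/client-bedrock-runtime.
  const client = new BedrockClient({ region: process.env.BEDROCK_REGION });

  // Print every model identifier available in this region.
  const { modelSummaries } = await client.send(new ListFoundationModelsCommand({}));
  for (const model of modelSummaries ?? []) {
    console.log(model.modelId);
  }
})();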
On 4 Dec 2023, at 10:37, dosubot[bot] wrote:
🤖
Based on the error message you're receiving, it seems like the model "amazon.titan-embed-text-v1" is not found in the specified AWS region. This could be due to a few reasons:
1. The model "amazon.titan-embed-text-v1" does not exist or is not available in the AWS region specified in the BEDROCK_REGION environment variable.
2. There might be a typo or error in the model name.
3. The AWS credentials provided do not have the necessary permissions to access the model.
Here are some steps you can take to resolve this issue:
1. Verify that the model "amazon.titan-embed-text-v1" is available in your specified AWS region. You can do this by checking the AWS Bedrock documentation or the AWS console.
2. Check the spelling and casing of the model name. It should match exactly with the name provided by AWS Bedrock.
3. Ensure that the AWS credentials provided have the necessary permissions to access the model. You might need to update the IAM policy associated with the credentials to include access to the AWS Bedrock service.
4. If the model is not available in your region, you might need to change your region or use a different model that is available in your region. Here's how you can change the model:
const embeddings = new BedrockEmbeddings({
  region: process.env.BEDROCK_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
  model: "new-model-name", // Replace with the new model name
});
Remember to replace "new-model-name" with the actual name of the model you want to use.
As for your question about whether the LangChainJS framework supports the "amazon.titan-embed-text-v1" model for generating embeddings, I wasn't able to find a definitive answer within the repository. It's possible that the answer may be available elsewhere or I could have missed it. If you have more information about this, it would be helpful in providing a more accurate answer.
For more information, you can refer to the examples/src/embeddings/bedrock.ts <https://github.com/langchain-ai/langchainjs/blob/main/examples/src/embeddings/bedrock.ts> file in the LangChainJS repository.
Sources
examples/src/embeddings/bedrock.ts <https://github.com/langchain-ai/langchainjs/blob/main/examples/src/embeddings/bedrock.ts>
-
I'm updating an existing piece of code to use Amazon Bedrock, and want to simply create embeddings from a set of documents and then store them in a Faiss vector store.
The documents are created from a text file and the embeddings object is created using
let embeddings = new BedrockEmbeddings({
  region: process.env.BEDROCK_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
  model: "amazon.titan-embed-text-v1",
});
Both docs and embeddings look fine - as I'd expect given that the code is already working with other LLMs.
However, when I try to create the vector store with
vectorStore = await FaissStore.fromDocuments(docs, embeddings);
I get the error below, which seems to suggest that the model identifier (amazon.titan-embed-text-v1) is wrong. Am I using a deprecated model or something?
{
  error: ResourceNotFoundException: Could not resolve the foundation model from the provided model identifier.
      at de_ResourceNotFoundExceptionRes (/var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/protocols/Aws_restJson1.js:260:23)
      at de_InvokeModelCommandError (/var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/protocols/Aws_restJson1.js:92:25)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async /var/www/src/conlucra/chat/node_modules/@smithy/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
      at async /var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@aws-sdk/middleware-signing/dist-cjs/awsAuthMiddleware.js:30:20
      at async /var/www/src/conlucra/chat/node_modules/@smithy/middleware-retry/dist-cjs/retryMiddleware.js:27:46
      at async /var/www/src/conlucra/chat/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:7:26
      at async /var/www/src/conlucra/chat/node_modules/langchain/dist/embeddings/bedrock.cjs:51:29
      at async RetryOperation._fn (/var/www/src/conlucra/chat/node_modules/p-retry/index.js:50:12) {
    '$fault': 'client',
    '$metadata': {
      httpStatusCode: 404,
      requestId: 'cf74894d-8a14-4b54-bb20-ad06ba56cf79',
      extendedRequestId: undefined,
      cfId: undefined,
      attempts: 1,
      totalRetryDelay: 0
    }
  }
}
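For context, the surrounding flow being described is roughly the following sketch. The TextLoader and text splitter are assumptions, since the loading code isn't shown above; the embeddings constructor and FaissStore.fromDocuments call are as in the question (FaissStore also requires the faiss-node peer dependency):
const { TextLoader } = require("langchain/document_loaders/fs/text");
const { RecursiveCharacterTextSplitter } = require("langchain/text_splitter");
const { BedrockEmbeddings } = require("langchain/embeddings/bedrock");
const { FaissStore } = require("langchain/vectorstores/faiss");
require("dotenv").config();

(async () => {
  // Load and chunk the source text file (the loader/splitter here are placeholders).
  const loader = new TextLoader("./docs/example.txt");
  const rawDocs = await loader.load();
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 100 });
  const docs = await splitter.splitDocuments(rawDocs);

  // Embeddings object exactly as in the question.
  const embeddings = new BedrockEmbeddings({
    region: process.env.BEDROCK_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
    model: "amazon.titan-embed-text-v1",
  });

  // Embed the chunks and build the Faiss index in one step.
  const vectorStore = await FaissStore.fromDocuments(docs, embeddings);
  await vectorStore.save("./faiss-index");
})();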