Add Anthropic to examples (#4609)
jacoblee93 authored Mar 4, 2024
1 parent 032c4f6 commit 80dc27f
Showing 5 changed files with 127 additions and 28 deletions.
55 changes: 44 additions & 11 deletions docs/core_docs/docs/get_started/quickstart.mdx
@@ -52,25 +52,23 @@ We will cover these at a high level, but keep in mind there is a lot more to each

## LLM Chain

For this getting started guide, we will provide two options: using OpenAI (available via API) or using a local open source model.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<Tabs>
<Tabs groupId="preferredModel">
<TabItem value="openai" label="OpenAI" default>

First we'll need to install the LangChain OpenAI integration package:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:
Accessing the API requires an API key, which you can get by creating an account [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable:

```bash
export OPENAI_API_KEY="..."
OPENAI_API_KEY="..."
```
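
Note that the updated form drops `export`, which reads like a `.env`-file entry. As a minimal sketch, assuming you use the third-party `dotenv` package, loading such a file in Node could look like:

```typescript
// Load variables from a local .env file before initializing any models.
// Assumes the `dotenv` package is installed.
import "dotenv/config";
```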

If you'd prefer not to set an environment variable, you can pass the key in directly via the `openAIApiKey` named parameter when instantiating the OpenAI chat model class:
@@ -92,7 +90,7 @@ const chatModel = new ChatOpenAI({});
```
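
As a hedged sketch of what passing the key directly looks like (assuming the `@langchain/openai` package installed above):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Sketch: pass the key explicitly instead of relying on the
// OPENAI_API_KEY environment variable.
const chatModel = new ChatOpenAI({
  openAIApiKey: "...",
});
```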

</TabItem>
<TabItem value="local" label="Local">
<TabItem value="local" label="Local (using Ollama)">

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.
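
As a quick sketch, assuming the Ollama CLI is already installed, fetching the model used in the snippets below looks like:

```bash
ollama pull mistral
```
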

Expand All @@ -118,6 +116,41 @@ const chatModel = new ChatOllama({
baseUrl: "http://localhost:11434", // Default value
model: "mistral",
});
```
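
A hedged usage sketch, assuming the `chatModel` above and a running Ollama server:

```typescript
// Sketch: send a single prompt string and log the text of the reply.
const response = await chatModel.invoke("What is LangChain?");
console.log(response.content);
```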

</TabItem>
<TabItem value="anthropic" label="Anthropic">

First we'll need to install the LangChain Anthropic integration package:

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/anthropic
```

Accessing the API requires an API key, which you can get by creating an account [here](https://www.anthropic.com/). Once we have a key we'll want to set it as an environment variable:

```bash
ANTHROPIC_API_KEY="..."
```

If you'd prefer not to set an environment variable, you can pass the key in directly via the `anthropicApiKey` named parameter when instantiating the `ChatAnthropic` class:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
anthropicApiKey: "...",
});
```

Otherwise you can initialize without any params:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({});
```
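
A brief usage sketch, assuming a valid key is configured as described above:

```typescript
// Sketch: invoke the Anthropic chat model with a plain string prompt.
const response = await chatModel.invoke("Hello!");
console.log(response.content);
```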

</TabItem>
@@ -253,7 +286,7 @@ Note that the size of the loaded document is large and may exceed the maximum amount
We can split the document into more manageable chunks using a [text splitter](/docs/modules/data_connection/document_transformers/), both to get around this limitation and to reduce the amount of distraction to the model:

```
```ts
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const splitter = new RecursiveCharacterTextSplitter();
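
// A hedged usage sketch, assuming `docs` was loaded by a document loader above:
// const splitDocs = await splitter.splitDocuments(docs);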
@@ -274,7 +307,7 @@ This requires a few components, namely an [embedding model](/docs/modules/data_c

There are many options for both components. Here are some examples for accessing via OpenAI and via local models:

<Tabs>
<Tabs groupId="preferredModel">
<TabItem value="openai" label="OpenAI" default>

Make sure you have the `@langchain/openai` package installed and the appropriate environment variables set (these are the same as needed for the model above).
Expand All @@ -286,15 +319,15 @@ const embeddings = new OpenAIEmbeddings();
```
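
A small usage sketch, assuming the `embeddings` object above:

```typescript
// Sketch: embed a single query string into a numeric vector.
const vector = await embeddings.embedQuery("What is LangSmith?");
console.log(vector.length);
```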

</TabItem>
<TabItem value="local" label="Local">
<TabItem value="local" label="Local (using Ollama)">

Make sure you have Ollama running (same set up as with the model).

```ts
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";

const embeddings = new OllamaEmbeddings({
model: "mistral",
model: "nomic-embed-text",
maxConcurrency: 5,
});
```
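
To connect either embedding model to retrieval, here is a hedged sketch using LangChain's in-memory vector store (assuming `splitDocs` from the splitting step and the `embeddings` object above):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Sketch: index the split documents so they can be searched by similarity.
const vectorstore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  embeddings
);
```
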
46 changes: 40 additions & 6 deletions docs/core_docs/docs/modules/model_io/chat/quick_start.mdx
@@ -12,24 +12,23 @@ Rather than using a "text in, text out" API, they use an interface where "chat messages"

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<Tabs>
<Tabs groupId="preferredModel">
<TabItem value="openai" label="OpenAI" default>

First we'll need to install the LangChain OpenAI integration package:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:
Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable:

```bash
export OPENAI_API_KEY="..."
OPENAI_API_KEY="..."
```

If you'd prefer not to set an environment variable, you can pass the key in directly via the `openAIApiKey` named parameter when instantiating the OpenAI chat model class:
@@ -51,7 +50,7 @@ const chatModel = new ChatOpenAI();
```

</TabItem>
<TabItem value="local" label="Local">
<TabItem value="local" label="Local (using Ollama)">

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.

@@ -77,6 +76,41 @@ const chatModel = new ChatOllama({
baseUrl: "http://localhost:11434", // Default value
model: "mistral",
});
```
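
Chat models also support token streaming. A hedged sketch, assuming the `chatModel` above:

```typescript
// Sketch: stream the response chunk by chunk instead of waiting
// for the full reply.
const stream = await chatModel.stream("Why is the sky blue?");
for await (const chunk of stream) {
  console.log(chunk.content);
}
```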

</TabItem>
<TabItem value="anthropic" label="Anthropic" default>

First we'll need to install the LangChain Anthropic integration package:

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/anthropic
```

Accessing the API requires an API key, which you can get by creating an account [here](https://www.anthropic.com/). Once we have a key we'll want to set it as an environment variable:

```bash
ANTHROPIC_API_KEY="..."
```

If you'd prefer not to set an environment variable, you can pass the key in directly via the `anthropicApiKey` named parameter when instantiating the `ChatAnthropic` class:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
anthropicApiKey: "...",
});
```

Otherwise you can initialize without any params:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic();
```
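
A short usage sketch, assuming `@langchain/core` is available as a peer dependency of the integration package:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Sketch: invoke the chat model with an explicit message array.
const response = await chatModel.invoke([new HumanMessage("Tell me a joke.")]);
console.log(response.content);
```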

</TabItem>
2 changes: 1 addition & 1 deletion docs/core_docs/docs/modules/model_io/llms/quick_start.mdx
@@ -55,7 +55,7 @@ const llm = new OpenAI({});
```

</TabItem>
<TabItem value="local" label="Local">
<TabItem value="local" label="Local (using Ollama)">

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.

50 changes: 41 additions & 9 deletions docs/core_docs/docs/modules/model_io/quick_start.mdx
@@ -11,29 +11,26 @@ For a deeper conceptual guide into these topics - please see [this page](/docs/m

## Models

For this getting started guide, we will provide two options: using OpenAI (a popular model available via API) or using a locally running open source model.

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import CodeBlock from "@theme/CodeBlock";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<Tabs>
<Tabs groupId="preferredModel">
<TabItem value="openai" label="OpenAI" default>

First we'll need to install the LangChain OpenAI integration package:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:
Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable:

```shell
export OPENAI_API_KEY="..."
OPENAI_API_KEY="..."
```

We can then initialize the model:
@@ -58,7 +55,7 @@ const model = new ChatOpenAI({
```

</TabItem>
<TabItem value="local" label="Local">
<TabItem value="local" label="Local (using Ollama)">

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.

@@ -89,12 +86,47 @@ const chatModel = new ChatOllama({
baseUrl: "http://localhost:11434", // Default value
model: "mistral",
});
```

</TabItem>
<TabItem value="anthropic" label="Anthropic" default>

First we'll need to install the LangChain Anthropic integration package:

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/anthropic
```

Accessing the API requires an API key, which you can get by creating an account [here](https://www.anthropic.com/). Once we have a key we'll want to set it as an environment variable:

```bash
ANTHROPIC_API_KEY="..."
```

We can then initialize the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
modelName: "claude-2.1",
});
```

If you can't or would prefer not to set an environment variable, you can pass the key in directly via the `anthropicApiKey` named parameter when instantiating the `ChatAnthropic` class:

```typescript
const model = new ChatAnthropic({
anthropicApiKey: "<your key here>",
});
```

</TabItem>
</Tabs>

Both `llm` and `chatModel` are objects that represent configuration for a particular model.
These classes represent configuration for a particular model.
You can initialize them with parameters like `temperature` and others, and pass them around.
The main difference between them is their input and output schemas.

2 changes: 1 addition & 1 deletion docs/core_docs/src/theme/Npm2Yarn.js
@@ -7,7 +7,7 @@ import CodeBlock from "@theme-original/CodeBlock";
// parsing built-in npm2yarn markdown blocks
export default function Npm2Yarn({ children }) {
return (
<Tabs>
<Tabs groupId="npm2yarn">
<TabItem value="npm" label="npm">
<CodeBlock language="bash">npm i {children}</CodeBlock>
</TabItem>
