upgrading langchain-ibm to support LangChain v0.2 (#6)
Srijan-D authored Jul 22, 2024
1 parent 06aab97 commit cdc0dad
Showing 1 changed file with 17 additions and 12 deletions.
29 changes: 17 additions & 12 deletions libs/ibm/README.md
@@ -30,15 +30,17 @@ os.environ["WATSONX_APIKEY"] = watsonx_api_key
Alternatively, you can set the environment variable in your terminal.

- **Linux/macOS:** Open your terminal and execute the following command:

```bash
export WATSONX_APIKEY='your_ibm_api_key'
```

To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.

- **Windows:** For Command Prompt, use:
```cmd
set WATSONX_APIKEY=your_ibm_api_key
```
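Once exported, the key is read from the process environment at run time. A minimal sketch of that lookup (the helper name `read_watsonx_api_key` is illustrative, not part of `langchain_ibm`):

```python
import os

def read_watsonx_api_key() -> str:
    """Hypothetical helper: fetch the API key exported in the shell above."""
    key = os.environ.get("WATSONX_APIKEY")
    if key is None:
        raise RuntimeError("WATSONX_APIKEY is not set; export it first.")
    return key

# Simulate the `export WATSONX_APIKEY=...` step for demonstration:
os.environ["WATSONX_APIKEY"] = "your_ibm_api_key"
print(read_watsonx_api_key())
```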

### Loading the model

@@ -69,11 +71,11 @@ watsonx_llm = WatsonxLLM(
```

**Note:**

- You must provide a `project_id` or `space_id`. For more information, refer to IBM's [documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects).
- Depending on the region of your provisioned service instance, use one of the URLs described [here](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).
- You need to specify the model you want to use for inference through `model_id`. You can find the list of available models [here](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes).
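Since the notes above require a `project_id` or a `space_id`, a small hypothetical validation helper (not part of `langchain_ibm`) illustrates the rule:

```python
def check_watsonx_scope(project_id=None, space_id=None):
    """Hypothetical check mirroring the note above: the model needs a
    project_id or a space_id to know where inference runs."""
    if not project_id and not space_id:
        raise ValueError("Provide either project_id or space_id.")
    return "project" if project_id else "space"

print(check_watsonx_scope(project_id="PASTE YOUR PROJECT_ID HERE"))
```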


Alternatively, you can use IBM Cloud Pak for Data credentials. For more details, refer to IBM's [documentation](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).

@@ -94,7 +96,7 @@ watsonx_llm = WatsonxLLM(
Create a `PromptTemplate` object that will be responsible for generating a random question.

```python
from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "
prompt = PromptTemplate.from_template(template)
```

@@ -103,14 +105,17 @@ prompt = PromptTemplate.from_template(template)
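Functionally, filling the template behaves like Python string formatting; a plain-stdlib analogy (not the actual `PromptTemplate` implementation):

```python
template = "Generate a random question about {topic}: Question: "

# PromptTemplate.from_template(template).format(topic="dog") yields the
# same string as plain str.format here:
filled = template.format(topic="dog")
print(filled)
```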
Provide a topic and run the chain.

```python
from langchain_core.output_parsers import StrOutputParser

llm_chain = prompt | watsonx_llm | StrOutputParser()
topic = "dog"
llm_chain.invoke(topic)
```
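The `|` syntax above is LangChain's expression-language composition: each component's output feeds the next. A toy stdlib sketch of the same idea (stand-in classes, not LangChain's actual `Runnable` implementation):

```python
class Step:
    """Tiny stand-in for a runnable: wraps a function and supports `|`."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose left to right: self's output becomes other's input.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda topic: f"Generate a random question about {topic}: Question: ")
fake_llm = Step(lambda text: text + "What sound does a dog make?")
parser = Step(lambda text: text.strip())

chain = prompt | fake_llm | parser
print(chain.invoke("dog"))
```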

### Calling the Model Directly

To obtain completions, you can call the model directly using a string prompt.

