docs: Standardize OpenAI Docs #25280

Merged 2 commits on Aug 11, 2024
102 changes: 93 additions & 9 deletions libs/partners/openai/langchain_openai/llms/base.py
@@ -605,21 +605,105 @@ def max_tokens_for_prompt(self, prompt: str) -> int:


class OpenAI(BaseOpenAI):
"""OpenAI completion model integration.

Setup:
Install ``langchain-openai`` and set environment variable ``OPENAI_API_KEY``.

.. code-block:: bash

pip install -U langchain-openai
export OPENAI_API_KEY="your-api-key"

Key init args — completion params:
model: str
Name of OpenAI model to use.
temperature: float
Sampling temperature.
max_tokens: Optional[int]
Max number of tokens to generate.
logprobs: Optional[bool]
Whether to return logprobs.
stream_options: Dict
Configure streaming outputs, like whether to return token usage when
streaming (``{"include_usage": True}``).

Key init args — client params:
timeout: Union[float, Tuple[float, float], Any, None]
Timeout for requests.
max_retries: int
Max number of retries.
api_key: Optional[str]
OpenAI API key. If not passed in, will be read from the env var ``OPENAI_API_KEY``.
base_url: Optional[str]
Base URL for API requests. Only specify if using a proxy or service
emulator.
organization: Optional[str]
OpenAI organization ID. If not passed in, will be read from the env var ``OPENAI_ORG_ID``.

See full list of supported init args and their descriptions in the params section.
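The ``max_retries`` and ``timeout`` client params govern how transient request failures are handled. A rough, stdlib-only sketch of the retry-with-backoff pattern such clients typically use (illustrative only — ``call_with_retries`` and ``flaky`` are hypothetical helpers, not the actual openai client implementation):

```python
import random
import time

def call_with_retries(fn, max_retries=2, base_delay=0.01):
    """Call fn(), retrying up to max_retries extra times with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted: surface the last error
            # Exponential backoff with jitter before the next attempt.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))

attempts = {"n": 0}

def flaky():
    """Fail twice, then succeed -- a stand-in for a transient network error."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky, max_retries=2))  # succeeds on the third attempt
```

With ``max_retries=2``, a request is attempted at most three times before the last error propagates.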

Instantiate:
.. code-block:: python

from langchain_openai import OpenAI

llm = OpenAI(
model="gpt-3.5-turbo-instruct",
temperature=0,
max_retries=2,
# api_key="...",
# base_url="...",
# organization="...",
# other params...
)

Invoke:
.. code-block:: python

input_text = "The meaning of life is "
llm.invoke(input_text)

.. code-block:: none

"a philosophical question that has been debated by thinkers and scholars for centuries."

Stream:
.. code-block:: python

for chunk in llm.stream(input_text):
print(chunk, end="|")

.. code-block:: none

a| philosophical| question| that| has| been| debated| by| thinkers| and| scholars| for| centuries|.

.. code-block:: python

"".join(llm.stream(input_text))

.. code-block:: none

"a philosophical question that has been debated by thinkers and scholars for centuries."
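Since ``stream`` yields plain string chunks for completion models, the joins shown above can be mimicked without an API call using a stand-in generator (``fake_stream`` below is hypothetical, not part of langchain-openai):

```python
def fake_stream(text, chunk_size=4):
    """Stand-in for llm.stream(): yield a completion a few characters at a time."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

completion = "a philosophical question"
chunks = list(fake_stream(completion))
print("|".join(chunks))                   # chunk boundaries, like the output above
print("".join(fake_stream(completion)))   # re-joined completion
```

Joining the chunks reconstructs the full completion exactly, which is why ``"".join(llm.stream(...))`` works.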

Async:
.. code-block:: python

await llm.ainvoke(input_text)

# stream:
# async for chunk in llm.astream(input_text):
# print(chunk)

# batch:
# await llm.abatch([input_text])

.. code-block:: none

"a philosophical question that has been debated by thinkers and scholars for centuries."
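The commented-out async calls above can be exercised end to end with asyncio stand-ins (``fake_ainvoke`` and ``fake_astream`` are hypothetical, mirroring only the shapes of ``ainvoke`` and ``astream``, with a canned completion and no network access):

```python
import asyncio

async def fake_ainvoke(prompt: str) -> str:
    """Stand-in for llm.ainvoke(): return a canned completion."""
    await asyncio.sleep(0)
    return "a philosophical question"

async def fake_astream(prompt: str):
    """Stand-in for llm.astream(): an async generator of string chunks."""
    for token in ["a ", "philosophical ", "question"]:
        await asyncio.sleep(0)
        yield token

async def main():
    result = await fake_ainvoke("The meaning of life is ")
    chunks = [c async for c in fake_astream("The meaning of life is ")]
    return result, "".join(chunks)

result, streamed = asyncio.run(main())
print(result)
print(streamed)
```

Note that ``astream`` returns an async generator, so it is consumed with ``async for`` rather than awaited directly.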

""" # noqa: E501

@classmethod
def get_lc_namespace(cls) -> List[str]: