Comparing changes

base repository: langchain-ai/langserve
base: v0.2.0rc1
head repository: langchain-ai/langserve
compare: main
Showing with 3,210 additions and 2,852 deletions.
  1. +1 −1 .clabot
  2. +1 −1 .github/workflows/_lint.yml
  3. +0 −94 .github/workflows/_pydantic_compatibility.yml
  4. +6 −0 .github/workflows/_release.yml
  5. +0 −7 .github/workflows/langserve_ci.yml
  6. +1 −0 .github/workflows/langserve_release.yml
  7. +222 −0 MIGRATION.md
  8. +29 −27 README.md
  9. +1 −1 examples/agent/server.py
  10. +1 −1 examples/agent_custom_streaming/server.py
  11. +1 −1 examples/agent_with_history/server.py
  12. +4 −5 examples/auth/api_handler/server.py
  13. +4 −5 examples/auth/per_req_config_modifier/server.py
  14. +2 −2 examples/chat_playground/legacy_input/server.py
  15. +1 −1 examples/chat_playground/server.py
  16. +2 −2 examples/chat_with_persistence/server.py
  17. +1 −1 examples/configurable_agent_executor/server.py
  18. +9 −53 examples/configurable_chain/client.ipynb
  19. +1 −1 examples/configurable_retrieval/server.py
  20. +1 −1 examples/conversational_retrieval_chain/server.py
  21. +1 −1 examples/file_processing/server.py
  22. +13 −81 examples/llm/client.ipynb
  23. +2 −2 examples/llm/server.py
  24. +11 −46 examples/local_llm/client.ipynb
  25. +6 −23 examples/passthrough_dict/client.ipynb
  26. +2 −2 examples/router/server.py
  27. +2 −2 examples/widgets/chat/message_list/server.py
  28. +1 −1 examples/widgets/chat/tuples/server.py
  29. +53 −0 langserve/_pydantic.py
  30. +121 −57 langserve/api_handler.py
  31. +21 −7 langserve/callbacks.py
  32. +1 −1 langserve/chat_playground/dist/assets/{index-86d4d9c0.js → index-53ad47d4.js}
  33. +1 −1 langserve/chat_playground/dist/index.html
  34. +1 −0 langserve/chat_playground/src/App.tsx
  35. +34 −10 langserve/client.py
  36. +6 −5 langserve/playground.py
  37. +47 −47 langserve/playground/dist/assets/{index-dbc96538.js → index-400979f0.js}
  38. +1 −1 langserve/playground/dist/index.html
  39. +37 −23 langserve/playground/src/components/ChatMessagesControlRenderer.tsx
  40. +0 −33 langserve/pydantic_v1.py
  41. +5 −4 langserve/schema.py
  42. +86 −60 langserve/serialization.py
  43. +303 −333 langserve/server.py
  44. +2 −1 langserve/server_sent_events.py
  45. +46 −68 langserve/validation.py
  46. +1,337 −1,559 poetry.lock
  47. +11 −8 pyproject.toml
  48. +13 −4 tests/unit_tests/test_api_playground.py
  49. +16 −6 tests/unit_tests/test_serialization.py
  50. +639 −218 tests/unit_tests/test_server_client.py
  51. +3 −8 tests/unit_tests/test_validation.py
  52. +27 −0 tests/unit_tests/utils/serde.py
  53. +16 −0 tests/unit_tests/utils/stubs.py
  54. +28 −35 tests/unit_tests/utils/test_fake_chat_model.py
  55. +29 −1 tests/unit_tests/utils/tracer.py
2 changes: 1 addition & 1 deletion .clabot
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
{
"contributors": ["eyurtsev", "hwchase17", "nfcampos", "efriis", "jacoblee93", "dqbd", "kreneskyp", "adarsh-jha-dev", "harris", "baskaryan", "hinthornw", "bracesproul", "jakerachleff", "craigsdennis", "anhi", "169", "LarchLiu", "PaulLockett", "RCMatthias", "jwynia", "majiayu000", "mpskex", "shivachittamuru", "sinashaloudegi", "sowsan", "akira", "lucianotonet", "JGalego", "nat-n", "dirien", "donbr", "rahilvora", "WarrenTheRabbit", "StreetLamb"],
"contributors": ["eyurtsev", "hwchase17", "nfcampos", "efriis", "jacoblee93", "dqbd", "kreneskyp", "adarsh-jha-dev", "harris", "baskaryan", "hinthornw", "bracesproul", "jakerachleff", "craigsdennis", "anhi", "169", "LarchLiu", "PaulLockett", "RCMatthias", "jwynia", "majiayu000", "mpskex", "shivachittamuru", "sinashaloudegi", "sowsan", "akira", "lucianotonet", "JGalego", "nat-n", "dirien", "donbr", "rahilvora", "WarrenTheRabbit", "StreetLamb", "ccurme", "dennisrall", "Mingqi2", "xxsl"],
"message": "Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have the username {{usersWithoutCLA}} on file. In order for us to review and merge your code, please complete the Individual Contributor License Agreement here https://forms.gle/AQFbtkWRoHXUgipM6 .\n\nThis process is done manually on our side, so after signing the form one of the maintainers will add you to the contributors list.\n\nFor more details about why we have a CLA and other contribution guidelines please see: https://github.com/langchain-ai/langserve/blob/main/CONTRIBUTING.md."
}
2 changes: 1 addition & 1 deletion .github/workflows/_lint.yml
@@ -31,7 +31,7 @@ jobs:
# Starting new jobs is also relatively slow,
# so linting on fewer versions makes CI faster.
python-version:
- "3.8"
- "3.9"
- "3.11"
steps:
- uses: actions/checkout@v3
94 changes: 0 additions & 94 deletions .github/workflows/_pydantic_compatibility.yml

This file was deleted.

6 changes: 6 additions & 0 deletions .github/workflows/_release.yml
@@ -7,6 +7,12 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"

env:
POETRY_VERSION: "1.5.1"
7 changes: 0 additions & 7 deletions .github/workflows/langserve_ci.yml
@@ -39,13 +39,6 @@ jobs:
with:
working-directory: .
secrets: inherit

pydantic-compatibility:
uses:
./.github/workflows/_pydantic_compatibility.yml
with:
working-directory: .
secrets: inherit
test:
timeout-minutes: 10
runs-on: ubuntu-latest
1 change: 1 addition & 0 deletions .github/workflows/langserve_release.yml
@@ -10,4 +10,5 @@ jobs:
./.github/workflows/_release.yml
with:
working-directory: .
permissions: write-all
secrets: inherit
222 changes: 222 additions & 0 deletions MIGRATION.md
@@ -0,0 +1,222 @@
# LangGraph Platform Migration Guide

We have [recently announced](https://blog.langchain.dev/langgraph-platform-announce/) LangGraph Platform, a ***significantly*** enhanced solution for deploying agentic applications at scale.

LangGraph Platform incorporates [key design patterns and capabilities](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/#option-2-leveraging-langgraph-platform-for-complex-deployments) essential for production-level deployment of large language model (LLM) applications.

In contrast to LangServe, LangGraph Platform provides comprehensive, out-of-the-box support for [persistence](https://langchain-ai.github.io/langgraph/concepts/application_structure/), [memory](https://langchain-ai.github.io/langgraph/concepts/assistants/), [double-texting handling](https://langchain-ai.github.io/langgraph/concepts/double_texting/), [human-in-the-loop workflows](https://langchain-ai.github.io/langgraph/concepts/assistants/), [cron job scheduling](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/#cron-jobs), [webhooks](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/#webhooks), high-load management, advanced streaming, support for long-running tasks, background task processing, and much more.

The LangGraph Platform ecosystem includes the following components:

- [LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server/): Provides an [Assistants API](https://langchain-ai.github.io/langgraph/cloud/reference/api/api_ref.html) for LLM applications (graphs) built with [LangGraph](https://langchain-ai.github.io/langgraph/). Available in both Python and JavaScript/TypeScript.
- [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/): A specialized IDE for real-time visualization, debugging, and interaction via a graphical interface. Available as a web application or macOS desktop app, it's a substantial improvement over LangServe's playground.
- [SDK](https://langchain-ai.github.io/langgraph/concepts/sdk/): Enables programmatic interaction with the server, available in Python and JavaScript/TypeScript.
- [RemoteGraph](https://langchain-ai.github.io/langgraph/how-tos/use-remote-graph/): Allows interaction with a remote graph as if it were running locally, serving as LangGraph's equivalent to LangServe's RemoteRunnable. Available in both Python and JavaScript/TypeScript.

## Context

LangServe was built as a deployment solution for LangChain Runnables created using the [LangChain Expression Language (LCEL)](https://python.langchain.com/docs/concepts/lcel). In LangServe, LCEL served as the orchestration layer that managed execution of the Runnable.

[LangGraph](https://langchain-ai.github.io/langgraph/) is an open source library created by the LangChain team that provides a more flexible orchestration layer, better suited to building complex LLM applications. LangGraph Platform is the deployment solution for LangGraph applications.

## LangServe Support

We recommend using LangGraph Platform rather than LangServe for new projects.

We will continue to accept bug fixes for LangServe from the community; however, we will not be accepting new feature contributions.

## Migration

If you would like to migrate an existing LangServe application to LangGraph Platform, you have two options:

1. You can wrap the existing `Runnable` that you expose in the LangServe application via `add_routes` in a `LangGraph` node. This is the quickest way to migrate your application to LangGraph Platform.
2. You can do a larger refactor to break up the existing LCEL into appropriate `LangGraph` nodes. This is recommended if you want to take advantage of more advanced features in LangGraph Platform.

### Option 1: Wrap Runnable in LangGraph Node

This is the quickest migration path: wrap the `Runnable` that your LangServe application exposes via `add_routes` in a single LangGraph node.


Original LangServe code:

```python
from typing import Any, Optional

from fastapi import FastAPI
from pydantic import BaseModel

from langserve import add_routes

app = FastAPI()


# Some input schema
class Input(BaseModel):
    input: str
    foo: Optional[str] = None


# Some output schema
class Output(BaseModel):
    output: Any


runnable = ...  # Your existing Runnable
runnable_with_types = runnable.with_types(input_type=Input, output_type=Output)

# Adds routes (invoke, batch, stream, ...) to the app for the chain
add_routes(
    app,
    runnable_with_types,
)
```

Migrated LangGraph Platform code:

```python
from dataclasses import dataclass
from typing import Any, Optional

from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph


@dataclass
class InputState:  # Equivalent to Input in the original code
    """Defines the input state, representing a narrower interface to the outside world.

    This class is used to define the initial state and structure of incoming data.
    See https://langchain-ai.github.io/langgraph/concepts/low_level/#state
    for more information.
    """

    input: str
    foo: Optional[str] = None


@dataclass
class OutputState:  # Equivalent to Output in the original code
    """Defines the output state, representing a narrower interface to the outside world.

    https://langchain-ai.github.io/langgraph/concepts/low_level/#state
    """

    output: Any


@dataclass
class SharedState:
    """The full graph state.

    https://langchain-ai.github.io/langgraph/concepts/low_level/#state
    """

    input: str
    foo: Optional[str] = None
    # Dataclass fields declared after a defaulted field need a default themselves.
    output: Any = None


runnable = ...  # Same code as before


async def my_node(state: InputState, config: RunnableConfig) -> OutputState:
    """Each node does work."""
    return await runnable.ainvoke({"input": state.input, "foo": state.foo})


# Define a new graph. `Configuration` is your configurable schema (defined elsewhere).
builder = StateGraph(
    SharedState, config_schema=Configuration, input=InputState, output=OutputState
)

# Add the node to the graph
builder.add_node("my_node", my_node)

# Set the entrypoint to `my_node`
builder.add_edge("__start__", "my_node")

# Compile the workflow into an executable graph
graph = builder.compile()
graph.name = "New Graph"  # This defines the custom name in LangSmith
```
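Stripped of LangGraph's own classes, the node above is just an async function: it receives the current state and returns a partial state update (a dict). A stdlib-only sketch of that contract, with `fake_runnable` as a hypothetical stand-in for your existing Runnable:

```python
import asyncio
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class InputState:
    input: str
    foo: Optional[str] = None


async def fake_runnable(payload: dict) -> dict:
    # Stand-in for your existing Runnable's .ainvoke() call.
    return {"output": f"processed: {payload['input']}"}


async def my_node(state: InputState) -> dict:
    # A node takes the state in and returns a partial state update.
    return await fake_runnable({"input": state.input, "foo": state.foo})


result = asyncio.run(my_node(InputState(input="hello")))
print(result)  # {'output': 'processed: hello'}
```

Because the node is an ordinary function, it can be unit tested without starting a server or compiling a graph.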

### Option 2: Refactor LCEL into LangGraph Nodes

This option is recommended if you want to take advantage of more advanced features in LangGraph Platform.

#### Memory (alternative to `RunnableWithMessageHistory`)

For example, LangGraph comes with built-in persistence that is more general than LangChain's `RunnableWithMessageHistory`.

Please refer to the guide on [upgrading to LangGraph memory](https://python.langchain.com/docs/versions/migrating_memory/) for more details.
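Conceptually, LangGraph checkpointers persist conversation state keyed by a `thread_id` supplied in the run config, instead of wiring a history factory into the chain as `RunnableWithMessageHistory` does. A minimal stdlib illustration of that keying idea (this is not the real checkpointer API, just the concept):

```python
from collections import defaultdict

# Toy stand-in for a checkpointer: conversation history keyed by thread_id.
store: dict[str, list[str]] = defaultdict(list)


def run_turn(thread_id: str, user_message: str) -> list[str]:
    """Append a turn to the thread's history and return the full history."""
    history = store[thread_id]
    history.append(user_message)
    return history


run_turn("thread-1", "hi")
run_turn("thread-1", "how are you?")
run_turn("thread-2", "unrelated conversation")

print(store["thread-1"])  # ['hi', 'how are you?']
```

With a real checkpointer, the equivalent of `thread_id` is passed via `config={"configurable": {"thread_id": ...}}` when invoking the compiled graph.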

#### Agents

If you're relying on legacy LangChain agents, you can migrate them into the pre-built
LangGraph agents. Please refer to the guide on [migrating agents](https://python.langchain.com/docs/how_to/migrate_agent/) for more details.

#### Custom Chains

If you created a custom chain and used LCEL to orchestrate it, you will usually be able to refactor it into a LangGraph without too much difficulty.

There isn't a one-size-fits-all guide for this, but generally speaking, consider creating
a separate node for any long-running step in your LCEL chain or any step that you would
want to be able to monitor or debug separately.

For example, if you have a simple Retrieval Augmented Generation (RAG) pipeline, you might have a node for the retrieval step and a node for the generation step.

Original LCEL code:

```python
...
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
# with_types returns a new Runnable; the result must be kept.
rag_chain = rag_chain.with_types(input_type=Input, output_type=Output)
```

Using LangGraph for the same pipeline:


```python
from dataclasses import dataclass, field
from typing import List

from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import StateGraph


@dataclass
class InputState:  # Equivalent to Input in the original code
    """Input question from the user."""

    question: str


@dataclass
class OutputState:  # Equivalent to Output in the original code
    """The output from the graph."""

    answer: str


@dataclass
class SharedState:
    """The full graph state."""

    question: str
    docs: List[str] = field(default_factory=list)
    answer: str = ""


async def retriever_node(state: InputState) -> SharedState:
    """Retrieve documents based on the user's question."""
    documents = await retriever.ainvoke(state.question)
    # Keep only the text so the docs can be joined into the prompt below.
    return {"docs": [doc.page_content for doc in documents]}


async def generator_node(state: SharedState) -> OutputState:
    """Generate an answer using an LLM based on the retrieved documents and question."""
    context = " -- DOCUMENT -- ".join(state.docs)
    prompt = [
        SystemMessage(
            content=(
                "Answer the user's question based on the list of documents "
                "that were retrieved. Here are the documents: \n\n"
                f"{context}"
            )
        ),
        HumanMessage(content=state.question),
    ]
    ai_message = await llm.ainvoke(prompt)
    return {"answer": ai_message.content}


# Define a new graph. `Configuration` is your configurable schema (defined elsewhere).
builder = StateGraph(
    SharedState, config_schema=Configuration, input=InputState, output=OutputState
)
builder.add_node("retriever", retriever_node)
builder.add_node("generator", generator_node)
builder.add_edge("__start__", "retriever")
builder.add_edge("retriever", "generator")
graph = builder.compile()
graph.name = "RAG Graph"
```
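To see the data flow without running a server, the two nodes can be exercised as plain async functions with stubbed `retriever` and `llm` (hypothetical stand-ins for illustration only):

```python
import asyncio


async def fake_retriever(question: str) -> list[str]:
    # Stand-in for retriever.ainvoke(...); returns document texts.
    return ["LangGraph supports persistence.", "LangGraph Platform deploys graphs."]


async def fake_llm(context: str, question: str) -> str:
    # Stand-in for llm.ainvoke(...); echoes what it was given.
    return f"Answer to {question!r} using {len(context)} chars of context."


async def run_pipeline(question: str) -> str:
    docs = await fake_retriever(question)     # the "retriever" node
    context = " -- DOCUMENT -- ".join(docs)   # same join as generator_node
    return await fake_llm(context, question)  # the "generator" node


answer = asyncio.run(run_pipeline("What does LangGraph Platform do?"))
print(answer)
```

The sequential edge `retriever -> generator` in the graph corresponds exactly to the two awaits in `run_pipeline`; the graph version adds state validation, streaming, and persistence on top.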

See the [LangGraph tutorials](https://langchain-ai.github.io/langgraph/tutorials/)
for examples that will help you get started with LangGraph
and LangGraph Platform.