Commit

…into isaac/urlchanges
isahers1 committed Feb 7, 2025
2 parents cc052e5 + 482054c commit 80d6bad
Showing 38 changed files with 323 additions and 56 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -15,7 +15,7 @@ build-api-ref:
$(PYTHON) langsmith-sdk/python/docs/create_api_rst.py
LC_ALL=C $(PYTHON) -m sphinx -T -E -b html -d langsmith-sdk/python/docs/_build/doctrees -c langsmith-sdk/python/docs langsmith-sdk/python/docs langsmith-sdk/python/docs/_build/html -j auto
$(PYTHON) langsmith-sdk/python/docs/scripts/custom_formatter.py langsmith-sdk/docs/_build/html/
cd langsmith-sdk/js && yarn && yarn run build:typedoc --useHostedBaseUrlForAbsoluteLinks true --hostedBaseUrl "https://$${VERCEL_URL:-docs.smith.langchain.com}/reference/js/"
cd langsmith-sdk/js && yarn && yarn run build:typedoc --useHostedBaseUrlForAbsoluteLinks true --hostedBaseUrl "https://docs.smith.langchain.com/reference/js/"

vercel-build: install-vercel-deps build-api-ref
mkdir -p static/reference/python
8 changes: 6 additions & 2 deletions docs/evaluation/concepts/index.mdx
@@ -139,11 +139,15 @@ Learn [how to run pairwise evaluations](/evaluation/how_to_guides/evaluate_pairwise

Each time we evaluate an application on a dataset, we are conducting an experiment.
An experiment contains the results of running a specific version of your application on the dataset.
To understand how to use the LangSmith experiment view, see [how to analyze experiment results](/evaluation/how_to_guides/analyze_single_experiment).

![Experiment view](./static/experiment_view.png)

Typically, we will run multiple experiments on a given dataset, testing different configurations of our application (e.g., different prompts or LLMs).
In LangSmith, you can easily view all the experiments associated with your dataset.
Additionally, you can [compare multiple experiments in a comparison view](/evaluation/how_to_guides/compare_experiment_results).

![Example](./static/comparing_multiple_experiments.png)
![Comparison view](./static/comparison_view.png)
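
As a concrete (and purely illustrative) reference, here is a minimal sketch of how such experiments might be launched from the JS SDK, assuming the `evaluate` helper from `langsmith/evaluation`, an existing dataset named `my-dataset`, and a placeholder target function and evaluator:

```ts
import { evaluate } from "langsmith/evaluation";
import type { Run, Example } from "langsmith/schemas";

// Placeholder application under test -- swap in your real prompt/LLM call.
const myApp = async (inputs: { question: string }) => {
  return { answer: `stub answer for: ${inputs.question}` };
};

// Simple illustrative evaluator: exact match against the reference output.
const exactMatch = async (run: Run, example?: Example) => ({
  key: "exact_match",
  score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0,
});

// Each call to evaluate() creates one experiment on the dataset. Running it
// again with a different prompt or model and a different prefix produces a
// second experiment you can compare in the LangSmith UI.
await evaluate(myApp, {
  data: "my-dataset", // hypothetical dataset name
  evaluators: [exactMatch],
  experimentPrefix: "prompt-v1",
});
```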

## Annotation queues

@@ -191,7 +195,7 @@ Often these are triggered when you are making app updates (e.g. updating models
LangSmith's comparison view has native support for regression testing, allowing you to quickly see examples that have changed relative to the baseline.
Regressions are highlighted red, improvements green.

![Regression](./static/regression.png)
![Comparison view](./static/comparison_view.png)

### Backtesting

Binary file removed docs/evaluation/concepts/static/regression.png
67 changes: 67 additions & 0 deletions docs/evaluation/how_to_guides/analyze_single_experiment.mdx
@@ -0,0 +1,67 @@
---
sidebar_position: 1
---

# Analyze a single experiment
After running an experiment, you can use LangSmith's experiment view to analyze the results and draw insights about how your experiment performed.

This guide walks you through viewing the results of an experiment and highlights the features available in the experiment view.

## Open the experiment view
To open the experiment view, select the relevant Dataset from the Dataset & Experiments page and then select the experiment you want to view.

![Open experiment view](./static/select_experiment.png)

## View experiment results
The results table displays your experiment results, including the input, output, and reference output for each [example](/evaluation/concepts#examples) in the dataset. It also shows each configured feedback key in its own column alongside the corresponding feedback score.

Out-of-the-box metrics (latency, status, cost, and token count) are also displayed in individual columns.

In the columns dropdown, you can choose which columns to hide and which to show.

![Experiment view](./static/experiment_view.png)

## Heatmap view
The experiment view defaults to a heatmap view, where feedback scores for each run are highlighted in a color.
Red indicates a lower score, while green indicates a higher score.
The heatmap visualization makes it easy to identify patterns, spot outliers, and understand score distributions across your dataset at a glance.

![Heatmap view](./static/heatmap.png)

## Sort and filter
To sort or filter feedback scores, you can use the actions in the column headers.

![Sort and filter](./static/sort_filter.png)

## Table views
Depending on which view is most useful for your analysis, you can change the formatting of the table by toggling between a compact view, a full view, and a diff view.
- The `Compact` view shows each run as a one-line row, for ease of comparing scores at a glance.
- The `Full` view shows the full output for each run for digging into the details of individual runs.
- The `Diff` view shows the text difference between the reference output and the output for each run.

![Diff view](./static/diff_mode.png)

## View the traces
Hover over any of the output cells, and click on the trace icon to view the trace for that run. This will open up a trace in the side panel.

To view the entire tracing project, click on the "View Project" button in the top right of the header.

![View trace](./static/view_trace.png)

## View evaluator runs
For evaluator scores, you can view the source run by hovering over the evaluator score cell and clicking on the arrow icon. This will open up a trace in the side panel. If you're running an LLM-as-a-judge evaluator, you can view the prompt used for the evaluator in this run.
If your experiment has [repetitions](/evaluation/concepts#repetitions), you can click on the aggregate average score to find links to all of the individual runs.

![View evaluator runs](./static/evaluator_run.png)
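
For context, a rough sketch of what an LLM-as-a-judge evaluator might look like, assuming the OpenAI Node SDK and LangSmith's `wrapOpenAI` wrapper so the judge call is traced; the judge model and grading prompt below are illustrative only:

```ts
import OpenAI from "openai";
import { wrapOpenAI } from "langsmith/wrappers";
import type { Run, Example } from "langsmith/schemas";

// Wrapping the client traces the judge's LLM call, so the grading prompt is
// visible when you open the evaluator's source run in the side panel.
const judgeClient = wrapOpenAI(new OpenAI());

const correctnessJudge = async (run: Run, example?: Example) => {
  const response = await judgeClient.chat.completions.create({
    model: "gpt-4o-mini", // illustrative judge model
    messages: [
      {
        role: "user",
        content:
          `Reference answer:\n${JSON.stringify(example?.outputs)}\n\n` +
          `Submitted answer:\n${JSON.stringify(run.outputs)}\n\n` +
          "Reply with only 1 if the submitted answer matches the reference, otherwise reply with only 0.",
      },
    ],
  });
  const verdict = response.choices[0].message.content?.trim();
  // The returned key becomes the "correctness" feedback column in the table.
  return { key: "correctness", score: verdict === "1" ? 1 : 0 };
};
```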

## Repetitions
If you've run your experiment with [repetitions](/evaluation/concepts#repetitions), there will be arrows in the output results column so you can page through the outputs in the table. To view each run from the repetitions, hover over the output cell and click to open the expanded view.

When you run an experiment with repetitions, LangSmith displays the average for each feedback score in the table. Click on the feedback score to view the feedback scores from individual runs, or to view the standard deviation across repetitions.

![Repetitions](./static/repetitions.png)
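
If you launch experiments from the SDK rather than the UI, repetitions are configured when the experiment is created. A minimal sketch, assuming a `numRepetitions` option on `evaluate` (the option name, dataset name, target, and evaluator are placeholders):

```ts
import { evaluate } from "langsmith/evaluation";
import type { Run, Example } from "langsmith/schemas";

// Runs every example three times; the experiment view then shows the
// per-example average score plus each individual repetition run.
await evaluate(
  async (inputs: { question: string }) => ({ answer: "stub" }), // placeholder target
  {
    data: "my-dataset", // hypothetical dataset name
    evaluators: [
      async (run: Run, example?: Example) => ({
        key: "exact_match",
        score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0,
      }),
    ],
    numRepetitions: 3, // assumed option name for repeating each example
  }
);
```
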
## Compare to another experiment
In the top right of the experiment view, you can select another experiment to compare to. This will open up a comparison view, where you can see how the two experiments compare.
To learn more about the comparison view, see [how to compare experiment results](./compare_experiment_results).

![Compare](./static/compare_to_another.png)
51 changes: 21 additions & 30 deletions docs/evaluation/how_to_guides/compare_experiment_results.mdx
@@ -8,21 +8,23 @@ Oftentimes, when you are iterating on your LLM application (such as changing the

LangSmith supports a powerful comparison view that lets you hone in on key differences, regressions, and improvements between different experiments.

![](./static/regression_test.gif)
![](./static/compare.gif)

## Open the comparison view

To open the comparison view, select two or more experiments from the "Experiments" tab from a given dataset page. Then, click on the "Compare" button at the bottom of the page.
To open the experiment comparison view, navigate to the **Dataset & Experiments** page, select the relevant Dataset, select two or more experiments on the Experiments tab, and click **Compare**.

![](./static/open_comparison_view.png)
![](./static/compare_select.png)

## Toggle different views
## Adjust the table display

You can toggle between different views by clicking on the "Display" dropdown at the top right of the page. You can toggle different views to be displayed.
You can toggle between different views by clicking "Full" or "Compact" at the top of the page.

Toggling Full Text will show the full text of the input, output, and reference output for each run. If the reference output is too long to display in the table, you can click on expand to view the full content.

![](./static/toggle_views.png)
You can also show or hide individual feedback keys and metrics in the display settings dropdown to isolate the information you want to see.

![](./static/toggle_views.gif)

## View regressions and improvements

@@ -37,50 +39,39 @@ Click on the regressions or improvements buttons on the top of each column to fi

![Regressions Filter](./static/filter_to_regressions.png)

## Update baseline experiment

In order to track regressions, you need a baseline experiment against which to compare. This will be automatically assigned as the first experiment in your comparison, but you can
change it from the dropdown at the top of the page.
## Update baseline experiment and metric

![Baseline](./static/select_baseline.png)
In order to track regressions, you need to:
1. Select a baseline experiment against which to compare and a metric to measure. By default, the newest experiment is selected as the baseline.
2. Select the feedback key (evaluation metric) you want to focus on. One will be assigned by default, but you can adjust as needed.
3. Configure whether a higher score is better for the selected feedback key. This preference will be stored.

## Select feedback key

You will also want to select the feedback key (evaluation metric) on which you would like focus on. This can be selected via another dropdown at the top. Again, one will be assigned by
default, but you can adjust as needed.

![Feedback](./static/select_feedback.png)
![Baseline](./static/select_baseline.png)

## Open a trace

If tracing is enabled for the evaluation run, you can click on the trace icon in the hover state of any experiment cell to open the trace view for that run. This will open up a trace in the side panel.
If the example you're evaluating is from an ingested [run](/observability/concepts#runs), you can hover over the output cell and click on the trace icon to open the trace view for that run. This will open up a trace in the side panel.

![](./static/open_trace_comparison.png)
![](./static/open_source_trace.png)

## Expand detailed view

From any cell, you can click on the expand icon in the hover state to open up a detailed view of all experiment results on that particular example input, along with feedback keys and scores.

![](./static/expanded_view.png)

## Update display settings
## View summary charts

You can adjust the display settings for comparison view by clicking on "Display" in the top right corner.
You can also view summary charts by clicking on the "Charts" tab at the top of the page.

Here, you'll be able to toggle feedback, metrics, summary charts, and expand full text.

![](./static/update_display.png)
![](./static/charts_tab.png)

## Use experiment metadata as chart labels

With the summary charts enabled, you can configure the x-axis labels based on [experiment metadata](./filter_experiments_ui#background-add-metadata-to-your-experiments). First, click the three dots in the top right of the charts (note that you will only see them if your experiments have metadata attached).

![](./static/three_dots_charts.png)

Next, select a metadata key - note that this key must contain string values in order to render in the charts.

![](./static/select_metadata_key.png)
You can configure the x-axis labels for the charts based on [experiment metadata](./filter_experiments_ui#background-add-metadata-to-your-experiments).

You will now see your metadata in the x-axis of the charts:
Select a metadata key to change the x-axis labels of the charts.

![](./static/metadata_in_charts.png)
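
Metadata has to be attached when the experiment is created. As an illustrative sketch, assuming the `metadata` option of `evaluate` (the dataset name, prefix, and metadata key are placeholders):

```ts
import { evaluate } from "langsmith/evaluation";

// Attach string-valued metadata when the experiment is created so it can
// later be selected as the x-axis label in the summary charts.
await evaluate(
  async (inputs: Record<string, unknown>) => ({ answer: "stub" }), // placeholder target
  {
    data: "my-dataset", // hypothetical dataset name
    experimentPrefix: "gpt-4o-mini",
    metadata: { model: "gpt-4o-mini" }, // illustrative metadata key/value
  }
);
```
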
1 change: 1 addition & 0 deletions docs/evaluation/how_to_guides/index.md
@@ -71,6 +71,7 @@ Set up evaluators that automatically run for all experiments against a dataset.

Use the UI & API to understand your experiment results.

- [Analyze a single experiment](./how_to_guides/analyze_single_experiment)
- [Compare experiments with the comparison view](./how_to_guides/compare_experiment_results)
- [Filter experiments](./how_to_guides/filter_experiments_ui)
- [View pairwise experiments](./how_to_guides/evaluate_pairwise#view-pairwise-experiments)
Binary file added docs/evaluation/how_to_guides/static/compare.gif
Binary file modified docs/evaluation/how_to_guides/static/expanded_view.png
Binary file added docs/evaluation/how_to_guides/static/heatmap.png
Binary file modified docs/evaluation/how_to_guides/static/metadata_in_charts.png
Binary file modified docs/evaluation/how_to_guides/static/regression_view.png
Binary file modified docs/evaluation/how_to_guides/static/select_baseline.png
35 changes: 35 additions & 0 deletions docs/evaluation/how_to_guides/vitest_jest.mdx
@@ -576,6 +576,41 @@ ls.describe("generate sql demo", () => {
});
```

## Configuring test suites

You can configure test suites with values like metadata or a custom client by passing an extra argument to
`ls.describe()` for the full suite or by passing a `config` field into `ls.test()` for individual tests:

```ts
ls.describe("test suite name", () => {
ls.test(
"test name",
{
inputs: { ... },
referenceOutputs: { ... },
// Extra config for the test run
config: { tags: [...], metadata: { ... } }
},
{
name: "test name",
tags: ["tag1", "tag2"],
skip: true,
only: true,
}
);
}, {
testSuiteName: "overridden value",
metadata: { ... },
// Custom client
client: new Client(),
});
```

The test suite will also automatically extract the values of `process.env.ENVIRONMENT`, `process.env.NODE_ENV`, and `process.env.LANGSMITH_ENVIRONMENT` and set them as metadata on the created experiments. You can then filter experiments by this metadata in LangSmith's UI.

See [the API refs](https://docs.smith.langchain.com/reference/js/functions/vitest.describe) for a full list of configuration options.

## Dry-run mode

If you want to run the tests without syncing the results to LangSmith, you can omit your LangSmith tracing environment variables or set
5 changes: 5 additions & 0 deletions docs/observability/how_to_guides/annotate_code.mdx
@@ -13,6 +13,11 @@ import {

# Annotate code for tracing

:::note
If you've decided you no longer want to trace your runs, you can remove the `LANGSMITH_TRACING` environment variable.
Note that this does not affect `RunTree` objects or API users, as these are meant to be low-level and are not affected by the tracing toggle.
:::
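
For context, the kind of low-level `RunTree` usage the note refers to might look like the following sketch, which posts a run explicitly and so is independent of the `LANGSMITH_TRACING` setting (the run name, inputs, and outputs are illustrative):

```ts
import { RunTree } from "langsmith";

// Manually constructed runs are posted explicitly rather than via the
// tracing toggle, so this logs a run even if LANGSMITH_TRACING is unset.
const rootRun = new RunTree({
  name: "my_chain", // illustrative run name
  run_type: "chain",
  inputs: { question: "What is LangSmith?" },
});
await rootRun.postRun();

// ... do the actual work here ...

await rootRun.end({ answer: "An observability platform for LLM apps." });
await rootRun.patchRun();
```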

There are several ways to log traces to LangSmith.

:::tip
Binary file modified docs/observability/how_to_guides/static/convo.png
Binary file modified docs/observability/how_to_guides/static/convo_tab.png