
Ollama Deep Researcher

Ollama Deep Researcher is a fully local web research assistant that uses any LLM hosted by Ollama. Give it a topic and it will generate a web search query, gather web search results (via Tavily by default), summarize the results, reflect on the summary to identify knowledge gaps, generate a new search query to address those gaps, search again, and improve the summary over a user-defined number of cycles. It provides the user with a final markdown summary citing all sources used.

[Short summary video: Ollama.Deep.Researcher.Overview-enhanced-v2-90p.mp4]

📺 Video Tutorials

Want to see it in action or build it yourself? Check out these video tutorials:

🚀 Quickstart

Mac

  1. Download the Ollama app for Mac from https://ollama.com/download.

  2. Pull a local LLM from Ollama. As an example:

ollama pull deepseek-r1:8b
  3. Clone the repository:
git clone https://github.com/langchain-ai/ollama-deep-researcher.git
cd ollama-deep-researcher
  4. Select a web search tool (Tavily or Perplexity; Tavily is the default).
  5. Copy the example environment file:
cp .env.example .env
  6. Edit the .env file with your preferred text editor and add your API keys:
# Required: Choose one search provider and add its API key
TAVILY_API_KEY=tvly-xxxxx      # Get your key at https://tavily.com
PERPLEXITY_API_KEY=pplx-xxxxx  # Get your key at https://www.perplexity.ai

Note: If you prefer using environment variables directly, you can set them in your shell:

export TAVILY_API_KEY=tvly-xxxxx
# OR
export PERPLEXITY_API_KEY=pplx-xxxxx

After setting the keys, verify they're available:

echo $TAVILY_API_KEY  # Should show your API key
  7. (Recommended) Create a virtual environment:
python -m venv .venv
source .venv/bin/activate
  8. Launch the assistant with the LangGraph server:
# Install uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev
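
Optionally, before launching the server, you can sanity-check the setup from Python. This is a minimal sketch, assuming the ollama and python-dotenv packages are installed (pip install ollama python-dotenv) and that you pulled deepseek-r1:8b as above:

# sanity_check.py -- confirm the API key is loaded and the local model responds
import os

import ollama                   # pip install ollama
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current directory

# One of the two search API keys is required
assert os.getenv("TAVILY_API_KEY") or os.getenv("PERPLEXITY_API_KEY"), \
    "No search API key found in .env or the environment"

# Round-trip a trivial prompt through the Ollama server
reply = ollama.chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "Reply with OK."}],
)
print(reply["message"]["content"])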

Windows

  1. Download the Ollama app for Windows from https://ollama.com/download.

  2. Pull a local LLM from Ollama. As an example:

ollama pull deepseek-r1:8b
  3. Clone the repository:
git clone https://github.com/langchain-ai/ollama-deep-researcher.git
cd ollama-deep-researcher
  4. Select a web search tool (Tavily or Perplexity; Tavily is the default).
  5. Copy the example environment file:
cp .env.example .env

  6. Edit the .env file with your preferred text editor and add your API keys:

# Required: Choose one search provider and add its API key
TAVILY_API_KEY=tvly-xxxxx      # Get your key at https://tavily.com
PERPLEXITY_API_KEY=pplx-xxxxx  # Get your key at https://www.perplexity.ai

Note: If you prefer using environment variables directly, you can set them in PowerShell:

$env:TAVILY_API_KEY = "<your_tavily_api_key>"
# OR
$env:PERPLEXITY_API_KEY = "<your_perplexity_api_key>"

To persist them across sessions, set them via System Properties or with setx.

Restart your terminal/IDE after setting persistent variables so the change takes effect. Then verify the keys are available:

echo $env:TAVILY_API_KEY  # Should show your API key
  7. (Recommended) Create a virtual environment: Install Python 3.11 (and add it to PATH during installation). Restart your terminal to ensure Python is available, then create and activate a virtual environment:
python -m venv .venv
.venv\Scripts\Activate.ps1
  8. Launch the assistant with the LangGraph server:
# Install dependencies 
pip install -e .
pip install "langgraph-cli[inmem]"

# Start the LangGraph server
langgraph dev

Using the LangGraph Studio UI

When you launch the LangGraph server, you should see the following output, and Studio will open in your browser:

Ready!

API: http://127.0.0.1:2024

Docs: http://127.0.0.1:2024/docs

LangGraph Studio Web UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024

Open LangGraph Studio Web UI via the URL in the output above.

In the configuration tab:

  • Pick your web search tool (Tavily or Perplexity; Tavily by default)
  • Set the name of your local LLM to use with Ollama (llama3.2 by default)
  • Set the depth of the research iterations (3 by default)
[Screenshot: the configuration tab in LangGraph Studio]

Give the assistant a topic for research, and you can visualize its process!

[Screenshot: visualizing a research run in LangGraph Studio]
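
You can also drive the server without the Studio UI using the LangGraph Python SDK. A minimal sketch, assuming pip install langgraph-sdk; the graph name ollama_deep_researcher, the input key research_topic, and the config field local_llm are assumptions based on this repo's layout, so check langgraph.json and the configuration code if a run fails:

# run_researcher.py -- stream a research run against the local server
import asyncio

from langgraph_sdk import get_client  # pip install langgraph-sdk

async def main():
    client = get_client(url="http://127.0.0.1:2024")

    # None runs statelessly; pass a thread ID instead to persist state
    async for chunk in client.runs.stream(
        None,
        "ollama_deep_researcher",  # assumed graph name from langgraph.json
        input={"research_topic": "quantum error correction"},  # assumed input key
        config={"configurable": {"local_llm": "deepseek-r1:8b"}},  # assumed config field
        stream_mode="updates",     # emit state updates as each node finishes
    ):
        print(chunk.event, chunk.data)

asyncio.run(main())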

How it works

Ollama Deep Researcher is inspired by IterDRAG. That approach decomposes a query into sub-queries, retrieves documents for each one, answers the first sub-query, and then builds on that answer by retrieving documents for the next sub-query. Here, we do something similar:

  • Given a user-provided topic, uses a local LLM (via Ollama) to generate a web search query
  • Uses a search engine (Tavily by default) to find relevant sources
  • Uses the LLM to summarize the search findings as they relate to the user-provided topic
  • Uses the LLM to reflect on the summary and identify knowledge gaps
  • Generates a new search query to address those gaps
  • Repeats the search-and-summarize cycle, iteratively updating the summary with new information
  • Continues down the research rabbit hole for a configurable number of iterations (see the configuration tab)
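
In pseudocode, the loop looks roughly like this. It is a schematic sketch, not the repo's actual code; the helper functions are hypothetical stand-ins for the graph's nodes:

# Schematic of the research loop; generate_query, web_search, summarize,
# reflect, and finalize are hypothetical stand-ins for the graph's nodes.
def deep_research(topic: str, max_loops: int = 3) -> str:
    query = generate_query(topic)            # local LLM drafts a search query
    summary, sources = "", []
    for _ in range(max_loops):
        results = web_search(query)          # Tavily or Perplexity
        sources.extend(results)
        summary = summarize(topic, summary, results)  # fold new findings in
        gap = reflect(topic, summary)        # LLM names a knowledge gap
        query = generate_query(gap)          # next query targets that gap
    return finalize(summary, sources)        # markdown summary with citations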

Outputs

The output of the graph is a markdown file containing the research summary, with citations to the sources used.

All sources gathered during research are saved to the graph state, which you can inspect in LangGraph Studio:

[Screenshot: sources in the graph state]

The final summary is saved to the graph state as well:

[Screenshot: final summary in the graph state]
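
If you run the graph on a thread via the SDK, the same state is available programmatically. A minimal sketch; the graph name and the state keys running_summary and sources_gathered are assumptions inferred from the descriptions above, so verify them against the graph state in Studio:

# fetch_state.py -- read the saved summary and sources from a thread's state
import asyncio

from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://127.0.0.1:2024")
    thread = await client.threads.create()

    # Run to completion on the thread so the state persists
    await client.runs.wait(
        thread["thread_id"],
        "ollama_deep_researcher",  # assumed graph name, as above
        input={"research_topic": "quantum error correction"},
    )

    state = await client.threads.get_state(thread["thread_id"])
    print(state["values"].get("running_summary"))   # final summary (assumed key)
    print(state["values"].get("sources_gathered"))  # saved sources (assumed key)

asyncio.run(main())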

Deployment Options

There are various ways to deploy this graph.

See Module 6 of LangChain Academy for a detailed walkthrough of deployment options with LangGraph.