This repo is a TypeScript edition of the Ollama Deep Researcher, and the repo structure is inspired by the langgraphjs-starter-template.
Ollama Deep Researcher is a fully local web research assistant that uses any LLM hosted by Ollama. Give it a topic and it will generate a web search query, gather web search results (via Tavily), summarize the results, reflect on the summary to examine knowledge gaps, generate a new search query to address the gaps, search again, and improve the summary for a user-defined number of cycles. It will provide the user with a final markdown summary that includes all sources used.
Short summary (video): Ollama.Deep.Researcher.Overview-enhanced-v2-90p.mp4
See it in action or build it yourself? Check out these helpful video tutorials:
- Overview of Ollama Deep Researcher with R1 - Load and test DeepSeek R1 distilled models.
- Building Ollama Deep Researcher from Scratch - Overview of how this is built.
In `./src/agent/configuration.ts` you can set the following configurations:
- the name of your local LLM to use with Ollama (defaults to `llama3.2`)
- the depth of the research iterations (defaults to `3`)
- the base URL of your LLM (defaults to `http://localhost:11434`)
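As a rough picture of those options, a configuration module along these lines would match the defaults listed above (the field names here are illustrative, not necessarily the identifiers used in `configuration.ts`):

```typescript
// Illustrative sketch only -- the real option names live in ./src/agent/configuration.ts.
// The default values mirror the ones listed above.
export interface ResearcherConfiguration {
  localLlm: string;          // name of the Ollama-hosted model to use
  maxResearchLoops: number;  // how many search/summarize/reflect cycles to run
  ollamaBaseUrl: string;     // where the Ollama server is reachable
}

export const defaultConfiguration: ResearcherConfiguration = {
  localLlm: "llama3.2",
  maxResearchLoops: 3,
  ollamaBaseUrl: "http://localhost:11434",
};
```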
- Pull a local LLM to use with Ollama, e.g.:

```shell
ollama pull deepseek-r1:8b
```
- Set up the web search tool: set the corresponding environment variable via a `.env` file (you can simply copy it from `.env.example`):

```
TAVILY_API_KEY=<your_tavily_api_key>
```
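For context, the key is typically read from the environment at runtime. A minimal sketch, assuming a standard Node `process.env` lookup (the guard and error message are illustrative, not the repo's exact code):

```typescript
// Minimal sketch of how a Tavily API key is usually picked up from the environment.
// The actual wiring in this repo may differ.
const tavilyApiKey = process.env.TAVILY_API_KEY;

if (!tavilyApiKey) {
  throw new Error("TAVILY_API_KEY is not set -- add it to your .env file");
}
```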
- Clone the repository and launch the assistant with the LangGraph server:

```shell
# Clone the repository and start the LangGraph server
git clone https://github.com/pacovk/ollama-deep-researcher-ts.git
cd ollama-deep-researcher-ts
yarn install
docker compose up -d
```

You can use the provided `docker-compose.yml` file to run LangGraph Studio with the Ollama Deep Researcher assistant.
- Start the LangGraph server with the Ollama Deep Researcher assistant:

```shell
docker compose up -d
```
Visit the LangGraph Studio Web UI to interact with the assistant.
Give the assistant a topic for research, and you can visualize its process!
ℹ️ There is also an Ollama Web UI available at http://localhost:2024 to interact with the assistant.
Ollama Deep Researcher is inspired by IterDRAG. That approach decomposes a query into sub-queries, retrieves documents for each one, answers the sub-query, and then builds on the answer by retrieving docs for the next sub-query. Here, we do something similar:
- Given a user-provided topic, it uses a local LLM (via Ollama) to generate a web search query
- Uses a search engine (configured for Tavily) to find relevant sources
- Uses the LLM to summarize the findings from the web search as they relate to the user-provided research topic
- Uses the LLM to reflect on the summary and identify knowledge gaps
- Generates a new search query to address the knowledge gaps
- Repeats this cycle down the research rabbit hole, iteratively updating the summary with new information from the web search
- Runs for a configurable number of iterations (see the configuration tab); a sketch of the loop follows below
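As a rough illustration of that control flow, here is a minimal TypeScript sketch of the loop. The state shape and the helper names (`generateQuery`, `webSearch`, `summarize`, `reflect`) are hypothetical stand-ins for the graph nodes in this repo, not its actual API:

```typescript
// Hypothetical sketch of the research loop described above; the real implementation
// is a LangGraph graph, and these names are stand-ins rather than the repo's API.
interface ResearchState {
  topic: string;
  searchQuery: string;
  runningSummary: string;
  sourcesGathered: string[];
}

// Stand-ins for the LLM- and Tavily-backed graph nodes.
interface ResearchTools {
  generateQuery(topic: string, knowledgeGaps?: string): Promise<string>;
  webSearch(query: string): Promise<{ content: string; sources: string[] }>;
  summarize(existingSummary: string, newContent: string): Promise<string>;
  reflect(summary: string): Promise<string>; // returns identified knowledge gaps
}

async function research(topic: string, tools: ResearchTools, maxLoops = 3): Promise<ResearchState> {
  const state: ResearchState = { topic, searchQuery: "", runningSummary: "", sourcesGathered: [] };

  state.searchQuery = await tools.generateQuery(topic);
  for (let i = 0; i < maxLoops; i++) {
    const results = await tools.webSearch(state.searchQuery);    // gather sources (e.g. via Tavily)
    state.sourcesGathered.push(...results.sources);              // keep every source in state
    state.runningSummary = await tools.summarize(state.runningSummary, results.content);
    const gaps = await tools.reflect(state.runningSummary);      // examine knowledge gaps
    state.searchQuery = await tools.generateQuery(topic, gaps);  // target the gaps next round
  }
  return state;
}
```

In the repo this loop is expressed as a LangGraph graph, which is what lets LangGraph Studio visualize each step.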
The output of the graph is a markdown file containing the research summary, with citations to the sources used.
All sources gathered during research are saved to the graph state, and you can inspect them in LangGraph Studio.
The final summary is saved to the graph state as well.
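As an illustration of how such a report could be assembled from that state (a hypothetical helper, not code from this repo):

```typescript
// Hypothetical helper showing how a final markdown report could be built from the graph state;
// the assistant produces this summary itself, so this only illustrates the output shape.
function renderReport(state: { runningSummary: string; sourcesGathered: string[] }): string {
  const sources = state.sourcesGathered.map((source) => `- ${source}`).join("\n");
  return `## Summary\n\n${state.runningSummary}\n\n### Sources\n\n${sources}`;
}
```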