diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 0000000000..4ccb3b71a9 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,35 @@ +--- +name: Bug report +about: Create a report to help us improve CrewAI +title: "[BUG]" +labels: bug +assignees: '' + +--- + +**Description** +Provide a clear and concise description of what the bug is. + +**Steps to Reproduce** +Provide a step-by-step process to reproduce the behavior: + +**Expected behavior** +A clear and concise description of what you expected to happen. + +**Screenshots/Code snippets** +If applicable, add screenshots or code snippets to help explain your problem. + +**Environment Details:** +- **Operating System**: [e.g., Ubuntu 20.04, macOS Catalina, Windows 10] +- **Python Version**: [e.g., 3.8, 3.9, 3.10] +- **crewAI Version**: [e.g., 0.30.11] +- **crewAI Tools Version**: [e.g., 0.2.6] + +**Logs** +Include relevant logs or error messages if applicable. + +**Possible Solution** +Have a solution in mind? Please suggest it here, or write "None". + +**Additional context** +Add any other context about the problem here. diff --git a/.github/ISSUE_TEMPLATE/custom.md b/.github/ISSUE_TEMPLATE/custom.md new file mode 100644 index 0000000000..90b8292778 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/custom.md @@ -0,0 +1,24 @@ +--- +name: Documentation issue +about: Report a problem or suggest an improvement in the CrewAI documentation +title: "[DOCS]" +labels: documentation +assignees: '' + +--- + +## Documentation Page + + +## Description + + +## Suggested Changes + + +## Additional Context + + +## Checklist +- [ ] I have searched the existing issues to make sure this is not a duplicate +- [ ] I have checked the latest version of the documentation to ensure this hasn't been addressed diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml new file mode 100644 index 0000000000..624bf34f4b --- /dev/null +++ b/.github/workflows/stale.yml @@ -0,0 +1,26 @@ +name: Mark stale issues and pull requests + +on: + schedule: + - cron: '10 12 * * *' + workflow_dispatch: + +jobs: + stale: + runs-on: ubuntu-latest + permissions: + issues: write + pull-requests: write + steps: + - uses: actions/stale@v9 + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + stale-issue-label: 'no-issue-activity' + stale-issue-message: 'This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.' + close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.' + days-before-issue-stale: 30 + days-before-issue-close: 5 + stale-pr-label: 'no-pr-activity' + stale-pr-message: 'This PR is stale because it has been open for 45 days with no activity.' + days-before-pr-stale: 45 + days-before-pr-close: -1 diff --git a/README.md b/README.md index cff54ac5da..554fb0db53 100644 --- a/README.md +++ b/README.md @@ -128,7 +128,7 @@ task2 = Task( crew = Crew( agents=[researcher, writer], tasks=[task1, task2], - verbose=2, # You can set it to 1 or 2 to different logging levels + verbose=True, process = Process.sequential ) @@ -256,7 +256,7 @@ pip install dist/*.tar.gz CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
-There is NO data being collected on the prompts, tasks descriptions agents backstories or goals nor tools usage, no API calls, nor responses nor any data that is being processed by the agents, nor any secrets and env vars. +It's important to understand that **NO data is collected** on prompts, task descriptions, agents' backstories or goals, tool usage, API calls, responses, any data processed by the agents, or secrets and environment variables, except under the conditions described below. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes is collected to provide deeper insights while respecting user privacy. We don't offer a way to disable telemetry yet, but we will in the future. Data collected includes: @@ -281,7 +281,7 @@ Data collected includes: - Tools names available - Understand, out of the publicly available tools, which ones are being used the most so we can improve them -Users can opt-in sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. +Users can opt in to Further Telemetry, sharing the complete telemetry data, by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including the `goal`, `backstory`, `context`, and `output` of tasks. This enables deeper insight into usage patterns while respecting the user's choice to share. ## License diff --git a/docs/core-concepts/Agents.md b/docs/core-concepts/Agents.md index bb054f8b95..7b93fdde77 100644 --- a/docs/core-concepts/Agents.md +++ b/docs/core-concepts/Agents.md @@ -114,7 +114,7 @@ from langchain.agents import load_tools langchain_tools = load_tools(["google-serper"], llm=llm) agent1 = CustomAgent( - role="backstory agent", + role="agent role", goal="who is {input}?", backstory="agent backstory", verbose=True, @@ -127,7 +127,7 @@ task1 = Task( ) agent2 = Agent( - role="bio agent", + role="agent role", goal="summarize the short bio for {input} and if needed do more research", backstory="agent backstory", verbose=True, diff --git a/docs/core-concepts/Crews.md b/docs/core-concepts/Crews.md index 1896c6a386..f43d6971ba 100644 --- a/docs/core-concepts/Crews.md +++ b/docs/core-concepts/Crews.md @@ -33,6 +33,7 @@ A crew in crewAI represents a collaborative group of agents working together to | **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. | | **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. | | **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that will plan the tasks, and this plan will be added to each task description. +| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. | !!! note "Crew Max RPM" The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if you set it, as the sketch below illustrates.
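+To make the note above concrete, here is a minimal, hypothetical sketch (the agent and task are placeholders, not part of this PR) showing a crew-level `max_rpm` overriding an agent-level one, along with the `share_crew` telemetry opt-in described earlier:
+
+```python
+from crewai import Agent, Crew, Process, Task
+
+researcher = Agent(
+    role="Researcher",
+    goal="Summarize today's AI news",
+    backstory="An analyst who tracks the AI ecosystem.",
+    max_rpm=100,  # agent-level limit; superseded by the crew-level value below
+)
+
+research_task = Task(
+    description="Summarize today's AI news in three bullet points.",
+    expected_output="Three bullet points.",
+    agent=researcher,
+)
+
+crew = Crew(
+    agents=[researcher],
+    tasks=[research_task],
+    process=Process.sequential,
+    max_rpm=10,       # crew-level limit overrides the agent's max_rpm
+    share_crew=True,  # opts in to the Further Telemetry described above
+)
+```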
@@ -133,10 +134,10 @@ Once a crew has been executed, its output can be accessed through the `output` a crew = Crew( agents=[research_agent, writer_agent], tasks=[research_task, write_article_task], - verbose=2 + verbose=True ) -result = crew.kickoff() +crew_output = crew.kickoff() # Accessing the crew output print(f"Raw Output: {crew_output.raw}") diff --git a/docs/core-concepts/Planning.md b/docs/core-concepts/Planning.md index 810309703f..36ae34437d 100644 --- a/docs/core-concepts/Planning.md +++ b/docs/core-concepts/Planning.md @@ -23,6 +23,25 @@ my_crew = Crew( From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration. +#### Planning LLM + +You can also define the LLM that will be used to plan the tasks. Any model available through `ChatOpenAI` can be used. + +```python +from crewai import Crew, Agent, Task, Process +from langchain_openai import ChatOpenAI + +# Assemble your crew with planning capabilities and a custom planning LLM +# (self.agents / self.tasks assume this snippet lives inside a @CrewBase class) +my_crew = Crew( + agents=self.agents, + tasks=self.tasks, + process=Process.sequential, + planning=True, + planning_llm=ChatOpenAI(model="gpt-4o") +) +``` + + ### Example When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents' tasks. diff --git a/docs/core-concepts/Tasks.md b/docs/core-concepts/Tasks.md index 65a26e752d..8c2f5d9dd6 100644 --- a/docs/core-concepts/Tasks.md +++ b/docs/core-concepts/Tasks.md @@ -90,7 +90,7 @@ task = Task( crew = Crew( agents=[research_agent], tasks=[task], - verbose=2 + verbose=True ) result = crew.kickoff() @@ -142,7 +142,7 @@ task = Task( crew = Crew( agents=[research_agent], tasks=[task], - verbose=2 + verbose=True ) result = crew.kickoff() @@ -264,7 +264,7 @@ task1 = Task( crew = Crew( agents=[research_agent], tasks=[task1, task2, task3], - verbose=2 + verbose=True ) result = crew.kickoff() diff --git a/docs/core-concepts/Testing.md b/docs/core-concepts/Testing.md new file mode 100644 index 0000000000..45ababafb0 --- /dev/null +++ b/docs/core-concepts/Testing.md @@ -0,0 +1,41 @@ +--- +title: crewAI Testing +description: Learn how to test your crewAI Crew and evaluate its performance. +--- + +## Introduction + +Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities. + +### Using the Testing Feature + +We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. +The parameters are `n_iterations` and `model`; both are optional, defaulting to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI. + +```bash +crewai test +``` + +If you want to run more iterations or use a different model, you can specify the parameters like this: + +```bash +crewai test --n_iterations 5 --model gpt-4o +``` + +When you run the `crewai test` command, the crew is executed for the specified number of iterations, and the performance metrics are displayed at the end of the run. + +A table of scores at the end will show the performance of the crew in terms of the following metrics: ``` Task Scores (1-10 Higher is better) ┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓ ┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg.
Total ┃ +┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩ +│ Task 1 │ 10.0 │ 9.0 │ 9.5 │ +│ Task 2 │ 9.0 │ 9.0 │ 9.0 │ +│ Crew │ 9.5 │ 9.0 │ 9.2 │ +└────────────┴───────┴───────┴────────────┘ +``` + +The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole. diff --git a/docs/core-concepts/Tools.md b/docs/core-concepts/Tools.md index ba3071453b..f3564d83b2 100644 --- a/docs/core-concepts/Tools.md +++ b/docs/core-concepts/Tools.md @@ -84,7 +84,7 @@ write = Task( crew = Crew( agents=[researcher, writer], tasks=[research, write], - verbose=2 + verbose=True ) # Execute tasks diff --git a/docs/how-to/Installing-CrewAI.md b/docs/getting-started/Installing-CrewAI.md similarity index 91% rename from docs/how-to/Installing-CrewAI.md rename to docs/getting-started/Installing-CrewAI.md index 5a347df322..8bf58ee015 100644 --- a/docs/how-to/Installing-CrewAI.md +++ b/docs/getting-started/Installing-CrewAI.md @@ -18,4 +18,7 @@ pip install crewai # Install the main crewAI package and the tools package # that includes a series of helpful tools for your agents pip install 'crewai[tools]' + +# Alternatively, you can use: +pip install crewai crewai-tools ``` \ No newline at end of file diff --git a/docs/how-to/Start-a-New-CrewAI-Project.md b/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md similarity index 65% rename from docs/how-to/Start-a-New-CrewAI-Project.md rename to docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md index 9fb5cb63c2..70877cb18c 100644 --- a/docs/how-to/Start-a-New-CrewAI-Project.md +++ b/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md @@ -1,5 +1,5 @@ --- -title: Starting a New CrewAI Project +title: Starting a New CrewAI Project - Template Method description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods. --- @@ -7,13 +7,62 @@ description: A comprehensive guide to starting a new CrewAI project, including t Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started. +Before we start, there are a couple of things to note: + +1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run. +2. The preferred way of setting up CrewAI is using the `crewai create` command. This will create a new project folder and install a skeleton template for you to work on. + ## Prerequisites -We assume you have already installed CrewAI. If not, please refer to the [installation guide](https://docs.crewai.com/how-to/Installing-CrewAI/) to install CrewAI and its dependencies. +Before getting started with CrewAI, make sure that you have installed it via pip: + +```shell +$ pip install crewai crewai-tools +``` + +### Virtual Environments +It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version: + +1.
Use venv (Python's built-in virtual environment tool): + venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups. + + To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html). + +2. Use Conda (a Python virtual environment manager): + Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way. + + To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html). + +3. Use Poetry (a Python package manager and dependency management tool): + Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies. + Poetry is CrewAI's preferred tool for package and dependency management. + +### Code IDEs + +Most users of CrewAI use a code editor / integrated development environment (IDE) for building their Crews. You can use any code IDE of your choice. See below for some popular options: + +- [Visual Studio Code](https://code.visualstudio.com/) - Most popular +- [PyCharm](https://www.jetbrains.com/pycharm/) +- [Cursor AI](https://cursor.com) + +Pick one that suits your style and needs. ## Creating a New Project +In this example, we will be using venv as our virtual environment manager. + +To set up a virtual environment, run the following CLI command: + +```shell +$ python3 -m venv <venv_name> +``` -To create a new project, run the following CLI command: +Activate your virtual environment by running the following CLI command: + +```shell +$ source <venv_name>/bin/activate +``` + +Now, to create a new CrewAI project, run the following CLI command: ```shell $ crewai create <project_name> ``` @@ -195,6 +244,10 @@ def run(): To run your project, use the following command: +```shell +$ crewai run +``` +or ```shell $ poetry run my_project ``` diff --git a/docs/how-to/Conditional-Tasks.md b/docs/how-to/Conditional-Tasks.md index 20a7f39528..580565c434 100644 --- a/docs/how-to/Conditional-Tasks.md +++ b/docs/how-to/Conditional-Tasks.md @@ -79,7 +79,7 @@ task3 = Task( crew = Crew( agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent], tasks=[task1, conditional_task, task3], - verbose=2, + verbose=True, ) result = crew.kickoff() diff --git a/docs/how-to/Create-Custom-Tools.md b/docs/how-to/Create-Custom-Tools.md index c5e2606871..7dc1e8f07e 100644 --- a/docs/how-to/Create-Custom-Tools.md +++ b/docs/how-to/Create-Custom-Tools.md @@ -7,6 +7,7 @@ description: Comprehensive guide on crafting, using, and managing custom tools w This guide provides detailed instructions on creating custom tools for the crewAI framework and how to efficiently manage and utilize these tools, incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools, enabling agents to perform a wide range of actions.
### Prerequisites + Before creating your own tools, ensure you have the crewAI extra tools package installed: ```bash @@ -31,7 +32,7 @@ class MyCustomTool(BaseTool): ### Using the `tool` Decorator -Alternatively, use the `tool` decorator for a direct approach to create tools. This requires specifying attributes and the tool's logic within a function. +Alternatively, you can use the tool decorator `@tool`. This approach allows you to define the tool's attributes and functionality directly within a function, offering a concise and efficient way to create specialized tools tailored to your needs. ```python from crewai_tools import tool diff --git a/docs/how-to/Creating-a-Crew-and-kick-it-off.md b/docs/how-to/Creating-a-Crew-and-kick-it-off.md deleted file mode 100644 index 7200d75d4f..0000000000 --- a/docs/how-to/Creating-a-Crew-and-kick-it-off.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Assembling and Activating Your CrewAI Team -description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, code execution, integration with third-party agents, and improved task management. ---- - -## Introduction -Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience, including code execution capabilities, integration with third-party agents, and advanced task management. - -## Step 0: Installation -Install CrewAI and any necessary packages for your project. CrewAI is compatible with Python >=3.10,<=3.13. - -```shell -pip install crewai -pip install 'crewai[tools]' -``` - -## Step 1: Assemble Your Agents -Define your agents with distinct roles, backstories, and enhanced capabilities. The Agent class now supports a wide range of attributes for fine-tuned control over agent behavior and interactions, including code execution and integration with third-party agents. 
- -```python -import os -from langchain.llms import OpenAI -from crewai import Agent -from crewai_tools import SerperDevTool, BrowserbaseLoadTool, EXASearchTool - -os.environ["OPENAI_API_KEY"] = "Your OpenAI Key" -os.environ["SERPER_API_KEY"] = "Your Serper Key" -os.environ["BROWSERBASE_API_KEY"] = "Your BrowserBase Key" -os.environ["BROWSERBASE_PROJECT_ID"] = "Your BrowserBase Project Id" - -search_tool = SerperDevTool() -browser_tool = BrowserbaseLoadTool() -exa_search_tool = EXASearchTool() - -# Creating a senior researcher agent with advanced configurations -researcher = Agent( - role='Senior Researcher', - goal='Uncover groundbreaking technologies in {topic}', - backstory=("Driven by curiosity, you're at the forefront of innovation, " - "eager to explore and share knowledge that could change the world."), - memory=True, - verbose=True, - allow_delegation=False, - tools=[search_tool, browser_tool], - allow_code_execution=False, # New attribute for enabling code execution - max_iter=15, # Maximum number of iterations for task execution - max_rpm=100, # Maximum requests per minute - max_execution_time=3600, # Maximum execution time in seconds - system_template="Your custom system template here", # Custom system template - prompt_template="Your custom prompt template here", # Custom prompt template - response_template="Your custom response template here", # Custom response template -) - -# Creating a writer agent with custom tools and specific configurations -writer = Agent( - role='Writer', - goal='Narrate compelling tech stories about {topic}', - backstory=("With a flair for simplifying complex topics, you craft engaging " - "narratives that captivate and educate, bringing new discoveries to light."), - verbose=True, - allow_delegation=False, - memory=True, - tools=[exa_search_tool], - function_calling_llm=OpenAI(model_name="gpt-3.5-turbo"), # Separate LLM for function calling -) - -# Setting a specific manager agent -manager = Agent( - role='Manager', - goal='Ensure the smooth operation and coordination of the team', - verbose=True, - backstory=( - "As a seasoned project manager, you excel in organizing " - "tasks, managing timelines, and ensuring the team stays on track." - ), - allow_code_execution=True, # Enable code execution for the manager -) -``` - -### New Agent Attributes and Features - -1. `allow_code_execution`: Enable or disable code execution capabilities for the agent (default is False). -2. `max_execution_time`: Set a maximum execution time (in seconds) for the agent to complete a task. -3. `function_calling_llm`: Specify a separate language model for function calling. \ No newline at end of file diff --git a/docs/how-to/Force-Tool-Ouput-as-Result.md b/docs/how-to/Force-Tool-Ouput-as-Result.md index c40d0af29d..ee812df234 100644 --- a/docs/how-to/Force-Tool-Ouput-as-Result.md +++ b/docs/how-to/Force-Tool-Ouput-as-Result.md @@ -7,7 +7,7 @@ description: Learn how to force tool output as the result in of an Agent's task In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, and avoid the agent modifying the output during the task execution. ## Forcing Tool Output as Result -To force the tool output as the result of an agent's task, you can set the `force_tool_output` parameter to `True` when creating the task. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent. 
+To force the tool output as the result of an agent's task, you can set the `result_as_answer` parameter to `True` on the tool when adding it to the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent. Here's an example of how to force the tool output as the result of an agent's task: ```python # Define a custom tool that returns the result as the answer coding_agent = Agent( role="Data Scientist", - goal="Product amazing resports on AI", + goal="Produce amazing reports on AI", backstory="You work with data and AI", tools=[MyCustomTool(result_as_answer=True)], ) diff --git a/docs/how-to/Human-Input-on-Execution.md b/docs/how-to/Human-Input-on-Execution.md index bae79ec696..e24a28fcdc 100644 --- a/docs/how-to/Human-Input-on-Execution.md +++ b/docs/how-to/Human-Input-on-Execution.md @@ -81,7 +81,7 @@ task2 = Task( crew = Crew( agents=[researcher, writer], tasks=[task1, task2], - verbose=2, + verbose=True, memory=True, ) diff --git a/docs/how-to/LLM-Connections.md b/docs/how-to/LLM-Connections.md index 21361d0d3c..ae3a88da02 100644 --- a/docs/how-to/LLM-Connections.md +++ b/docs/how-to/LLM-Connections.md @@ -6,33 +6,25 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua ## Connect CrewAI to LLMs !!! note "Default LLM" - By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide. - -CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions. - -## CrewAI Agent Overview - -The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's a comprehensive overview of the Agent class attributes and methods: - -- **Attributes**: - - `role`: Defines the agent's role within the solution. - - `goal`: Specifies the agent's objective. - - `backstory`: Provides a background story to the agent. - - `cache` *Optional*: Determines whether the agent should use a cache for tool usage. Default is `True`. - - `max_rpm` *Optional*: Maximum number of requests per minute the agent's execution should respect. Optional. - - `verbose` *Optional*: Enables detailed logging of the agent's execution. Default is `False`. - - `allow_delegation` *Optional*: Allows the agent to delegate tasks to other agents, default is `True`. - - `tools`: Specifies the tools available to the agent for task execution. Optional. - - `max_iter` *Optional*: Maximum number of iterations for an agent to execute a task, default is 25. - - `max_execution_time` *Optional*: Maximum execution time for an agent to execute a task. Optional. - - `step_callback` *Optional*: Provides a callback function to be executed after each step. Optional. - - `llm` *Optional*: Indicates the Large Language Model the agent uses. By default, it uses the GPT-4 model defined in the environment variable "OPENAI_MODEL_NAME". - - `function_calling_llm` *Optional* : Will turn the ReAct CrewAI agent into a function-calling agent.
- - `callbacks` *Optional*: A list of callback functions from the LangChain library that are triggered during the agent's execution process. - - `system_template` *Optional*: Optional string to define the system format for the agent. - - `prompt_template` *Optional*: Optional string to define the prompt format for the agent. - - `response_template` *Optional*: Optional string to define the response format for the agent. + By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide. +CrewAI provides extensive versatility in integrating with various Language Models (LLMs), ranging from local options through Ollama, such as Llama and Mixtral, to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications. + +The platform supports connections to an array of Generative AI models, including: + + - OpenAI's suite of advanced language models + - Anthropic's cutting-edge AI offerings + - Ollama's diverse range of locally-hosted generative models & embeddings + - LM Studio's diverse range of locally-hosted generative models & embeddings + - Groq's super-fast LLM offerings + - Azure's generative AI offerings + - HuggingFace's generative AI offerings + +This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability. + +## Changing the default LLM +The default LLM is provided through the `langchain-openai` package, which is installed by default when you install CrewAI. You can change this default LLM to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable. This straightforward process allows you to harness the power of different OpenAI models, enhancing the flexibility and capabilities of your CrewAI implementation. ```python # Required os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview" example_agent = Agent( role='Local Expert', goal='Provide insights about the city', backstory="A knowledgeable local guide.", verbose=True ) ``` +## Ollama Local Integration +Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434. -## Ollama Integration -Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below.
### Setting Up Ollama -- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables: -```sh +```python -OPENAI_API_BASE='http://localhost:11434' -OPENAI_MODEL_NAME='llama2' # Adjust based on available model -OPENAI_API_KEY='' +os.environ["OPENAI_API_BASE"]='http://localhost:11434' +os.environ["OPENAI_MODEL_NAME"]='llama2' # Adjust based on available model +os.environ["OPENAI_API_KEY"]='' # No API key required for Ollama ``` -## Ollama Integration (ex. for using Llama 2 locally) -1. [Download Ollama](https://ollama.com/download). -2. After setting up the Ollama, Pull the Llama2 by typing following lines into the terminal ```ollama pull llama2```. -3. Enjoy your free Llama2 model that powered up by excellent agents from crewai. +## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally) +1. [Download and install Ollama](https://ollama.com/download). +2. After setting up Ollama, pull the Llama 3.1 8B model by typing the following command into your terminal: ```ollama run llama3.1```. +3. Llama 3.1 should now be served locally on `http://localhost:11434`. ``` from crewai import Agent, Task, Crew -from langchain.llms import Ollama +from langchain_ollama import ChatOllama import os os.environ["OPENAI_API_KEY"] = "NA" -llm = Ollama( +llm = ChatOllama( - model = "llama2", + model = "llama3.1", base_url = "http://localhost:11434") general_agent = Agent(role = "Math Professor", goal = """Provide the solution to the students that are asking mathematical questions and give them the answer.""", backstory = """You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution""", allow_delegation = False, verbose = True, llm = llm) task = Task(description="""what is 3 + 5""", agent = general_agent, expected_output="A numerical answer.") crew = Crew( agents=[general_agent], tasks=[task], - verbose=2 + verbose=True ) result = crew.kickoff() print(result) @@ -98,13 +87,14 @@ There are a couple of different ways you can use HuggingFace to host your LLM. ### Your own HuggingFace endpoint ```python -from langchain_community.llms import HuggingFaceEndpoint +from langchain_huggingface import HuggingFaceEndpoint llm = HuggingFaceEndpoint( - endpoint_url="", - huggingfacehub_api_token="", + repo_id="microsoft/Phi-3-mini-4k-instruct", task="text-generation", - max_new_tokens=512 + max_new_tokens=512, + do_sample=False, + repetition_penalty=1.03, ) agent = Agent( role="HuggingFace Agent", goal="Generate text using HuggingFace", backstory="A diligent explorer of GitHub docs.", llm=llm ) ``` -### From HuggingFaceHub endpoint -```python -from langchain_community.llms import HuggingFaceHub - -llm = HuggingFaceHub( - repo_id="HuggingFaceH4/zephyr-7b-beta", - huggingfacehub_api_token="", - task="text-generation", -) -``` ## OpenAI Compatible API Endpoints Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI. ### Configuration Examples #### FastChat -```sh +```python -OPENAI_API_BASE="http://localhost:8001/v1" -OPENAI_MODEL_NAME='oh-2.5m7b-q51' -OPENAI_API_KEY=NA +os.environ["OPENAI_API_BASE"]="http://localhost:8001/v1" +os.environ["OPENAI_MODEL_NAME"]='oh-2.5m7b-q51' +os.environ["OPENAI_API_KEY"]="NA" ``` #### LM Studio Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them).
Below is an example of the default settings as of LM Studio 0.2.19: -```sh +```python -OPENAI_API_BASE="http://localhost:1234/v1" -OPENAI_API_KEY="lm-studio" +os.environ["OPENAI_API_BASE"]="http://localhost:1234/v1" +os.environ["OPENAI_API_KEY"]="lm-studio" ``` #### Groq API -```sh +```python -OPENAI_API_KEY=your-groq-api-key -OPENAI_MODEL_NAME='llama3-8b-8192' -OPENAI_API_BASE=https://api.groq.com/openai/v1 +os.environ["OPENAI_API_KEY"]="your-groq-api-key" +os.environ["OPENAI_MODEL_NAME"]='llama3-8b-8192' +os.environ["OPENAI_API_BASE"]="https://api.groq.com/openai/v1" ``` #### Mistral API -```sh +```python -OPENAI_API_KEY=your-mistral-api-key -OPENAI_API_BASE=https://api.mistral.ai/v1 -OPENAI_MODEL_NAME="mistral-small" +os.environ["OPENAI_API_KEY"]="your-mistral-api-key" +os.environ["OPENAI_API_BASE"]="https://api.mistral.ai/v1" +os.environ["OPENAI_MODEL_NAME"]="mistral-small" ``` ### Solar ```python from langchain_community.chat_models.solar import SolarChat -# Initialize language model -os.environ["SOLAR_API_KEY"] = "your-solar-api-key" -llm = SolarChat(max_tokens=1024) +``` +```python +os.environ["SOLAR_API_BASE"]="https://api.upstage.ai/v1/solar" +os.environ["SOLAR_API_KEY"]="your-solar-api-key" +``` # Free developer API key available here: https://console.upstage.ai/services/solar # Langchain Example: https://github.com/langchain-ai/langchain/pull/18556 -``` -### text-gen-web-ui -```sh -OPENAI_API_BASE=http://localhost:5000/v1 -OPENAI_MODEL_NAME=NA -OPENAI_API_KEY=NA -``` ### Cohere ```python from langchain_community.chat_models import ChatCohere # Initialize language model os.environ["COHERE_API_KEY"] = "your-cohere-api-key" llm = ChatCohere() ``` ### Azure Open AI Configuration For Azure OpenAI API integration, set the following environment variables: -```sh +```python -AZURE_OPENAI_VERSION="2022-12-01" -AZURE_OPENAI_DEPLOYMENT="" -AZURE_OPENAI_ENDPOINT="" -AZURE_OPENAI_KEY="" + +os.environ["AZURE_OPENAI_DEPLOYMENT"] = "Your deployment" +os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview" +os.environ["AZURE_OPENAI_ENDPOINT"] = "Your endpoint" +os.environ["AZURE_OPENAI_API_KEY"] = "" ``` ### Example Agent with Azure LLM ```python from dotenv import load_dotenv from crewai import Agent from langchain_openai import AzureChatOpenAI load_dotenv() azure_llm = AzureChatOpenAI( azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"), api_key=os.environ.get("AZURE_OPENAI_API_KEY") ) azure_agent = Agent( role='Example Agent', goal='Demonstrate custom LLM configuration', backstory='A diligent explorer of GitHub docs.', llm=azure_llm ) ``` - ## Conclusion Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms. diff --git a/docs/index.md b/docs/index.md index 77cdd9852c..54dfd59aa6 100644 --- a/docs/index.md +++ b/docs/index.md @@ -5,6 +5,19 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
+Core Concepts