
How to add tools to chatbots

Prerequisites

This guide assumes familiarity with chatbots, agents, and chat history.

This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.

note

This how-to guide previously built a chatbot using RunnableWithMessageHistory. You can access this version of the guide in the v0.2 docs.

As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of LangGraph persistence to incorporate memory into new LangChain applications.

If your code is already relying on RunnableWithMessageHistory or BaseChatMessageHistory, you do not need to make any changes. We do not plan on deprecating this functionality in the near future, as it works for simple chat applications, and any code that uses RunnableWithMessageHistory will continue to work as expected.

Please see How to migrate to LangGraph Memory for more details.

Setup​

For this guide, we'll be using a tool calling agent with a single tool for searching the web. The default will be powered by Tavily, but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.

You'll need to sign up for an account on the Tavily website, and install the following packages:

%pip install --upgrade --quiet langchain-community langchain-openai tavily-python langgraph

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

if not os.environ.get("TAVILY_API_KEY"):
os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API Key:")
OpenAI API Key: Β·Β·Β·Β·Β·Β·Β·Β·
Tavily API Key: Β·Β·Β·Β·Β·Β·Β·Β·

Alternatively, you can set your OpenAI key as OPENAI_API_KEY and your Tavily API key as TAVILY_API_KEY directly in your environment; the snippet above only prompts when they are missing.

Creating an agent​

Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.

First, let's initialize Tavily and an OpenAI chat model capable of tool calling:

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

tools = [TavilySearchResults(max_results=1)]

# Choose the LLM that will drive the agent
# Only certain models support this
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
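
As noted above, Tavily is interchangeable with any similar tool. To illustrate, here's a sketch of a hypothetical replacement built with the @tool decorator (search_knowledge_base and its body are illustrative stand-ins, not a real backend):

from langchain_core.tools import tool


@tool
def search_knowledge_base(query: str) -> str:
    """Search an internal knowledge base for the given query."""
    # Hypothetical stand-in: call your own search backend here.
    return f"No results found for: {query}"


# You could then pass tools = [search_knowledge_base] to the agent instead.

The rest of this guide sticks with Tavily.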

To make our agent conversational, we can also specify a prompt. Here's an example:

prompt = (
    "You are a helpful assistant. "
    "You may not need to use tools for every query - the user may just want to chat!"
)

Great! Now let's assemble our agent using LangGraph's prebuilt create_react_agent, which allows you to create a tool-calling agent:

from langgraph.prebuilt import create_react_agent

# state_modifier allows you to preprocess the inputs to the model inside the ReAct agent
# in this case, since we're passing a prompt string, we'll just always add a SystemMessage
# with this prompt string before any other messages sent to the model
agent = create_react_agent(model, tools, state_modifier=prompt)
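
If you need more control over preprocessing, state_modifier also accepts a callable that receives the agent state and returns the list of messages to pass to the model. As a rough sketch, the following should behave the same as passing the prompt string directly:

from langchain_core.messages import SystemMessage


def modify_state_messages(state):
    # Prepend the system prompt to the messages accumulated in the agent state.
    return [SystemMessage(content=prompt)] + state["messages"]


agent = create_react_agent(model, tools, state_modifier=modify_state_messages)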

Running the agent​

Now that we've set up our agent, let's try interacting with it! It can handle both trivial queries that require no lookup:

from langchain_core.messages import HumanMessage

agent.invoke({"messages": [HumanMessage(content="I'm Nemo!")]})
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='39e715c7-bd1c-426f-8e14-c05586b3d221'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-6937c944-d702-40bb-9a9f-4141ddde9f78-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118})]}

Or, it can use the passed search tool to get up-to-date information if needed:

agent.invoke(
    {
        "messages": [
            HumanMessage(
                content="What is the current conservation status of the Great Barrier Reef?"
            )
        ],
    }
)
{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?', additional_kwargs={}, response_metadata={}, id='a74cc581-8ad5-4401-b3a5-f028d69e4b21'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_aKOItwvAb4DHQCwaasKphGHq', 'function': {'arguments': '{"query":"current conservation status of the Great Barrier Reef 2023"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 116, 'total_tokens': 144, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-267ff8a8-d866-4ae5-9534-ad87ebbdc954-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current conservation status of the Great Barrier Reef 2023'}, 'id': 'call_aKOItwvAb4DHQCwaasKphGHq', 'type': 'tool_call'}], usage_metadata={'input_tokens': 116, 'output_tokens': 28, 'total_tokens': 144}),
ToolMessage(content='[{"url": "https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24", "content": "This report summarises the condition of coral reefs in the Northern, Central and Southern\xa0Great Barrier Reef (GBR) from the Long-Term Monitoring Program (LTMP) surveys of 94 reefs conducted between August\xa02023 and June 2024 (reported as β€˜2024’). Over the past 38 years of monitoring by the Australian Institute of Marine Science (AIMS), hard coral cover on reefs of the GBR has decreased and increased in response to cycles of disturbance and recovery. It is relatively rare for GBR reefs to have 75% to 100% hard coral cover and AIMS defines >30% – 50% hard coral cover as a high value, based on historical surveys across the GBR."}]', name='tavily_search_results_json', id='05b3fab7-9ac8-42bb-9612-ff2a896dbb67', tool_call_id='call_aKOItwvAb4DHQCwaasKphGHq', artifact={'query': 'current conservation status of the Great Barrier Reef 2023', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'title': 'Annual Summary Report of Coral Reef Condition 2023/24', 'url': 'https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24', 'content': 'This report summarises the condition of coral reefs in the Northern, Central and Southern\xa0Great Barrier Reef (GBR) from the Long-Term Monitoring Program (LTMP) surveys of 94 reefs conducted between August\xa02023 and June 2024 (reported as β€˜2024’). Over the past 38 years of monitoring by the Australian Institute of Marine Science (AIMS), hard coral cover on reefs of the GBR has decreased and increased in response to cycles of disturbance and recovery. It is relatively rare for GBR reefs to have 75% to 100% hard coral cover and AIMS defines >30% – 50% hard coral cover as a high value, based on historical surveys across the GBR.', 'score': 0.95991266, 'raw_content': None}], 'response_time': 4.22}),
AIMessage(content='The current conservation status of the Great Barrier Reef (GBR) indicates ongoing challenges and fluctuations in coral health. According to a report from the Australian Institute of Marine Science (AIMS), the condition of coral reefs in the GBR has been monitored over the years, showing cycles of disturbance and recovery. \n\nAs of the latest surveys conducted between August 2023 and June 2024, hard coral cover on the GBR has experienced both decreases and increases. AIMS defines a hard coral cover of over 30% to 50% as high value, but it is relatively rare for GBR reefs to achieve 75% to 100% hard coral cover.\n\nFor more detailed information, you can refer to the [AIMS report](https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24).', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 174, 'prompt_tokens': 337, 'total_tokens': 511, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-bec32925-0dba-445d-8b55-87358ef482bb-0', usage_metadata={'input_tokens': 337, 'output_tokens': 174, 'total_tokens': 511})]}
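
If you'd rather surface these intermediate steps as they happen instead of waiting for the final state, you can stream the agent's progress. A minimal sketch using stream_mode="values", which yields the full message state after each step:

for step in agent.stream(
    {
        "messages": [
            HumanMessage(
                content="What is the current conservation status of the Great Barrier Reef?"
            )
        ]
    },
    stream_mode="values",
):
    # Print the message appended at this step: the query, the tool call,
    # the tool result, or the final answer.
    step["messages"][-1].pretty_print()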

Conversational responses​

Because the agent tracks the conversation as a list of chat messages in its state, it can also take previous interactions into account and respond conversationally like a standard chatbot:

from langchain_core.messages import AIMessage, HumanMessage

agent.invoke(
    {
        "messages": [
            HumanMessage(content="I'm Nemo!"),
            AIMessage(content="Hello Nemo! How can I assist you today?"),
            HumanMessage(content="What is my name?"),
        ],
    }
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='2c8e58bf-ad20-45a4-940b-84393c6b3a03'),
AIMessage(content='Hello Nemo! How can I assist you today?', additional_kwargs={}, response_metadata={}, id='5e014114-7e9d-42c3-b63e-a662b3a49bef'),
HumanMessage(content='What is my name?', additional_kwargs={}, response_metadata={}, id='d92be4e1-6497-4037-9a9a-83d3e7b760d5'),
AIMessage(content='Your name is Nemo!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 130, 'total_tokens': 136, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-17db96f8-8dbd-4f25-a80d-e4e872967641-0', usage_metadata={'input_tokens': 130, 'output_tokens': 6, 'total_tokens': 136})]}
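
Note that these calls return the entire message state. In an application you would typically read just the final reply, which is the last message in the list:

result = agent.invoke(
    {
        "messages": [
            HumanMessage(content="I'm Nemo!"),
            AIMessage(content="Hello Nemo! How can I assist you today?"),
            HumanMessage(content="What is my name?"),
        ],
    }
)

# The last message in the returned state is the agent's final response.
print(result["messages"][-1].content)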

If preferred, you can also add memory to the LangGraph agent to manage the history of messages. Let's redeclare it this way:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
agent = create_react_agent(model, tools, state_modifier=prompt, checkpointer=memory)
agent.invoke(
    {"messages": [HumanMessage("I'm Nemo!")]},
    config={"configurable": {"thread_id": "1"}},
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='117b2cfc-c6cc-449c-bba9-26fc545d0afa'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba16cc0b-fba1-4ec5-9d99-e010c3b702d0-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118})]}

And then if we invoke the agent again with the same thread_id:

agent.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "1"}},
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='117b2cfc-c6cc-449c-bba9-26fc545d0afa'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba16cc0b-fba1-4ec5-9d99-e010c3b702d0-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118}),
HumanMessage(content='What is my name?', additional_kwargs={}, response_metadata={}, id='53ac8d34-99bb-43a7-9103-80e26b7ee6cc'),
AIMessage(content='Your name is Nemo!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 130, 'total_tokens': 136, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-b3f224a5-902a-4973-84ff-9b683615b0e2-0', usage_metadata={'input_tokens': 130, 'output_tokens': 6, 'total_tokens': 136})]}
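
Because the checkpointer keys saved state by thread_id, starting a new thread gives the agent a clean slate:

# A fresh thread_id has no saved history, so the agent won't know the name.
agent.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "2"}},
)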

This LangSmith trace shows what's going on under the hood.

Further reading​

For more on how to build agents, check out the LangGraph guides.

For more on tool usage, you can also check out this use case section.

