Tool calling lets the model decide when to request structured function inputs from your application. Your app executes the function, then sends the result back into the conversation or workflow. Use this when the model needs fresh data, side effects, or access to systems outside the prompt.

Support Matrix

| Concept | Responses | Chat Completions | Messages |
| --- | --- | --- | --- |
| Tool definition | tools[] | tools[] | tools[] |
| Model tool call | function_call item | tool_calls[] | tool_use block |
| App tool result | function_call_output item | role: tool message | tool_result block |
| Streaming args | response.function_call_arguments.delta | delta.tool_calls[].function.arguments | input_json_delta |
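
A note on the streaming row: argument deltas are fragments of a single JSON string, so accumulate them in arrival order and parse only once the stream completes. A minimal sketch, with illustrative delta payloads:

```python
import json

# Illustrative delta payloads as they might arrive over a stream.
# Each chunk is a fragment of one JSON argument string, not valid JSON itself.
chunks = ['{"ci', 'ty": "Pa', 'ris"}']

buffer = ""
for chunk in chunks:
    buffer += chunk          # accumulate in arrival order

args = json.loads(buffer)    # parse only after the final delta
print(args["city"])          # Paris
```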

Typical Flow

  1. Define one or more tools in the request.
  2. Let the model choose a tool or force a specific one.
  3. Execute the requested tool in your application.
  4. Send the tool result back to the model if the workflow needs a final answer.
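
The four steps can be sketched as a dispatch helper. The tool registry, the get_weather implementation, and the mocked call below are illustrative; the item shapes follow the Responses column of the support matrix:

```python
import json

# Illustrative local tool implementation (step 1 defines it in the request;
# step 3 executes it here, in your application).
def get_weather(city: str) -> dict:
    # A real app would call a weather service here.
    return {"city": city, "temperature_c": 9, "conditions": "light rain"}

TOOLS = {"get_weather": get_weather}

def run_tool_call(tool_call: dict) -> dict:
    """Execute a model-requested tool call and build the result item
    to send back (step 4), in Responses function_call_output shape."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
    return {
        "type": "function_call_output",
        "call_id": tool_call["call_id"],
        "output": json.dumps(fn(**args)),
    }

# A mocked model tool call (step 2), shaped like a Responses function_call item:
result_item = run_tool_call({
    "name": "get_weather",
    "call_id": "call_123",
    "arguments": '{"city": "Paris"}',
})
```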

When It Helps

  • calling internal business logic or APIs
  • looking up live data such as weather, account state, or inventory
  • combining model reasoning with deterministic application behavior
Example: defining a tool in a Responses request (Python):

from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

response = client.responses.create(
    model="gpt-4.1",
    input="Check the weather in Paris and tell me if I need a coat.",
    tools=[
        {
            "type": "function",
            "name": "get_weather",
            "description": "Fetch the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"}
                },
                "required": ["city"],
                "additionalProperties": False,
            },
            "strict": True,
        }
    ],
)

print(response.output)
When the model returns a function_call, continue the workflow by sending its result back as a function_call_output item, linked to the prior turn via previous_response_id (the ids shown are placeholders):
{
  "model": "gpt-4.1",
  "previous_response_id": "resp_abc123",
  "input": [
    {
      "type": "function_call_output",
      "call_id": "call_123",
      "output": "{\"city\":\"Paris\",\"temperature_c\":9,\"conditions\":\"light rain\"}"
    }
  ]
}

Good Tool Design

  • keep tool names explicit and stable
  • use narrow JSON schemas instead of vague free-form arguments
  • return structured, machine-readable results when possible
  • only expose tools the model should actually be allowed to call
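
To make the second point concrete, here is a vague schema next to a narrower one; the field names are illustrative:

```python
# Vague (avoid): one free-form argument the model must format however it likes.
vague_schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
}

# Narrow (prefer): typed, constrained fields, no extra keys allowed.
# additionalProperties: false also makes the schema eligible for strict mode.
narrow_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city", "units"],
    "additionalProperties": False,
}
```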

Common Mistakes

  • defining tools with descriptions that are too vague
  • sending tool results back in the wrong protocol shape
  • assuming the model will always produce final text in the same response as the tool request
  • exposing side-effecting tools without application-level permission checks
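
The third mistake is easy to guard against: inspect the response's output items and branch on whether they contain tool calls, final text, or both. A minimal sketch over Responses-style items (the item dicts are illustrative):

```python
def split_output(output_items: list) -> tuple:
    """Separate pending tool calls from final message text."""
    calls = [i for i in output_items if i.get("type") == "function_call"]
    messages = [i for i in output_items if i.get("type") == "message"]
    return calls, messages

# Example: the model requested a tool and produced no final text yet.
calls, messages = split_output([
    {"type": "function_call", "name": "get_weather", "call_id": "call_123",
     "arguments": '{"city": "Paris"}'},
])
# One pending call, no message: run the tool, then send the result back.
```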

Choosing The Right Surface

  • use Responses for new tool workflows
  • use Chat Completions when you already have OpenAI chat tool code
  • use Messages when your agent stack already expects Anthropic content blocks
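
Whichever surface you pick, the same tool result wears a different shape. The field names below follow the support matrix; the ids and payload are illustrative:

```python
output = '{"city": "Paris", "temperature_c": 9}'

# Responses: a function_call_output item in the next request's input.
responses_item = {
    "type": "function_call_output",
    "call_id": "call_123",
    "output": output,
}

# Chat Completions: a message with role "tool", keyed by tool_call_id.
chat_message = {
    "role": "tool",
    "tool_call_id": "call_123",
    "content": output,
}

# Messages: a tool_result content block inside a user-role message.
messages_block = {
    "type": "tool_result",
    "tool_use_id": "toolu_123",
    "content": output,
}
```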

Reference