Tool calling in the Responses API uses an item-based model. Use it when the model needs live data, application logic, or side effects that are not available from the prompt alone.

Define Tools

from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

response = client.responses.create(
    model="gpt-4.1",
    input="Check the weather in Prague and tell me if I need a coat.",
    tools=[
        {
            "type": "function",
            "name": "lookup_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"}
                },
                "required": ["city"],
            },
            "strict": True,
        }
    ],
    tool_choice="auto",
)

print(response.output)

Use tool_choice="auto" for normal model-driven behavior. Force a specific tool only when your application really needs it.
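When you do need to force a call, the Responses API accepts a tool_choice object naming the function. A minimal sketch, assuming tools holds the lookup_weather definition from the example above:

```python
# Pin the model to a specific function instead of letting it decide.
forced_choice = {"type": "function", "name": "lookup_weather"}

# Then pass it on the request, e.g.:
# response = client.responses.create(
#     model="gpt-4.1",
#     input="Check the weather in Prague.",
#     tools=tools,
#     tool_choice=forced_choice,
# )
```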

Workflow

1. Define the tool in the request: send one or more function definitions in tools[] and usually leave tool_choice on "auto".
2. Inspect the model's function call: read the returned function_call item from output[] and parse its arguments payload.
3. Execute the tool in your application: run the requested function using your own business logic, external API call, or internal system lookup.
4. Send the tool result back: continue with a follow-up request that includes a function_call_output item using the same call_id.
5. Read the final answer: after the model receives the tool result, inspect the next response for the user-facing answer or any additional tool calls.

Model Tool Call Output

The model can return a function_call item in the output array:
{
  "type": "function_call",
  "call_id": "call_1",
  "name": "lookup_weather",
  "arguments": "{\"city\":\"Prague\"}",
  "status": "completed"
}
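The arguments field arrives as a JSON string, so parse it before dispatching. A minimal extraction helper, sketched against the item shape above (the dict literal stands in for a real response item):

```python
import json

# A function_call item as it appears in response.output
# (shape from the example above; values are illustrative).
item = {
    "type": "function_call",
    "call_id": "call_1",
    "name": "lookup_weather",
    "arguments": "{\"city\":\"Prague\"}",
    "status": "completed",
}

def extract_tool_calls(output_items):
    """Collect (call_id, name, parsed_arguments) for every function_call item."""
    calls = []
    for it in output_items:
        if it.get("type") == "function_call":
            args = json.loads(it["arguments"])  # arguments is a JSON string
            calls.append((it["call_id"], it["name"], args))
    return calls

calls = extract_tool_calls([item])
print(calls)  # [('call_1', 'lookup_weather', {'city': 'Prague'})]
```

Keeping the call_id alongside the parsed arguments matters: you need it to attach the result in the follow-up request.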

Send Tool Results Back

You return the tool result in a follow-up request using function_call_output:
{
  "model": "gpt-4.1",
  "input": [
    {
      "type": "function_call",
      "call_id": "call_1",
      "name": "lookup_weather",
      "arguments": "{\"city\":\"Prague\"}"
    },
    {
      "type": "function_call_output",
      "call_id": "call_1",
      "output": "{\"temperature_c\":7,\"raining\":true}"
    }
  ]
}
If you are continuing a longer workflow, also include any earlier context your app wants the model to keep.
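In Python, the same follow-up input can be assembled by pairing the model's function_call item with your result. build_followup_input here is an illustrative helper, not part of the SDK:

```python
import json

def build_followup_input(call_item, result):
    """Pair the model's function_call with a function_call_output
    that shares its call_id, ready to use as the next request's input."""
    return [
        call_item,
        {
            "type": "function_call_output",
            "call_id": call_item["call_id"],
            "output": json.dumps(result),  # output is sent as a string
        },
    ]

call_item = {
    "type": "function_call",
    "call_id": "call_1",
    "name": "lookup_weather",
    "arguments": "{\"city\":\"Prague\"}",
}

followup_input = build_followup_input(call_item, {"temperature_c": 7, "raining": True})

# Then continue the conversation, e.g.:
# response = client.responses.create(model="gpt-4.1", input=followup_input)
```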

Streaming Tool Arguments

When streaming, arguments arrive incrementally via response.function_call_arguments.delta events. Buffer those deltas until the matching .done event arrives. Some reasoning-capable models also emit a reasoning item before or alongside the tool call. If you want continuity across tool turns, replay that reasoning item unchanged in the follow-up request. See Responses Reasoning.
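A sketch of that buffering, using plain dicts in place of the SDK's typed event objects (the event names match the ones above; grouping deltas by item_id is an assumption about the event payload):

```python
import json

# Accumulate argument fragments per in-flight call until its .done event.
buffers = {}

def handle_event(event):
    """Return parsed arguments once a call's stream completes, else None."""
    if event["type"] == "response.function_call_arguments.delta":
        buffers.setdefault(event["item_id"], []).append(event["delta"])
    elif event["type"] == "response.function_call_arguments.done":
        full = "".join(buffers.pop(event["item_id"], []))
        return json.loads(full)  # only now is the JSON string complete
    return None

# Simulated event stream splitting the arguments across two deltas.
events = [
    {"type": "response.function_call_arguments.delta", "item_id": "fc_1", "delta": "{\"ci"},
    {"type": "response.function_call_arguments.delta", "item_id": "fc_1", "delta": "ty\":\"Prague\"}"},
    {"type": "response.function_call_arguments.done", "item_id": "fc_1"},
]

for ev in events:
    parsed = handle_event(ev)
    if parsed is not None:
        print(parsed)  # {'city': 'Prague'}
```

Parsing only on the .done event is the point: a partial delta such as {"ci is not valid JSON, which is exactly the mistake listed below.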

Common mistakes

  • forcing a tool when the model should be allowed to choose naturally
  • sending the tool result back in a chat-style role: tool message instead of a function_call_output item
  • parsing tool arguments before the streamed argument payload is complete
  • exposing tools that can cause side effects without application-level authorization checks