Web search lets a model look beyond the prompt and ground an answer in fresh web results. Support depends on both the selected model and the API surface you use.

Support Matrix

| API | How to enable it | Notes |
| --- | --- | --- |
| Responses | `tools: [{"type": "web_search"}]` | Best starting point for new search-enabled LLM work |
| Chat Completions | `web_search_options` | Use when you already need the chat protocol |
| Messages | versioned server tool such as `{"type": "web_search_20250305", "name": "web_search"}` | Use when you already need Anthropic compatibility |
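As a sketch of the Messages shape from the matrix, the versioned server tool goes in the request's `tools` array. The model name and `max_tokens` value here are illustrative; check the provider's model list and the current tool version string before relying on them:

```json
{
  "model": "claude-sonnet-4",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "What happened in AI policy this week? Cite sources."}
  ],
  "tools": [
    {"type": "web_search_20250305", "name": "web_search"}
  ]
}
```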

When To Use It

  • fresh or time-sensitive answers
  • grounded answers with citations
  • workflows where the model should retrieve context instead of relying only on training data

Enablement Shapes

New search-enabled workflows should usually start with the Responses API. A minimal request body:

```json
{
  "model": "gpt-4.1",
  "input": "Find recent reporting about AI regulation in the UK and cite your sources.",
  "tools": [
    {
      "type": "web_search"
    }
  ]
}
```
The same request with the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

response = client.responses.create(
    model="gpt-4.1",
    input="Find recent reporting about AI regulation in the UK and cite your sources.",
    tools=[{"type": "web_search"}],
)

print(response.output_text)
```

Citations And Annotations

When search is active, citations often arrive as structured annotations rather than only inside the generated text, so inspect the structured response, not just the final prose. Note also that for Chat Completions, search is enabled through the separate `web_search_options` field rather than through the `tools` array.
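As a sketch of that Chat Completions shape: `web_search_options` is a top-level request field, separate from `tools`. The model name is illustrative, and the sub-options accepted inside `web_search_options` may vary by provider, so an empty object is used here to accept server defaults:

```python
import json

# Chat Completions request body: web_search_options sits at the top level,
# separate from the "tools" array used by the Responses API.
request_body = {
    "model": "gpt-4.1",
    "messages": [
        {
            "role": "user",
            "content": "Find recent reporting about AI regulation in the UK and cite your sources.",
        }
    ],
    # An empty object accepts default search settings; supported
    # sub-options may differ between providers.
    "web_search_options": {},
}

print(json.dumps(request_body, indent=2))
```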

Practical Advice

  • use web search for fresh or time-sensitive questions, not for everything
  • verify that your chosen model actually supports search-enabled behavior
  • inspect annotations or citations if your app needs to display sources
  • keep a non-search fallback if your product can still answer from stored context
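The annotation-inspection advice above can be sketched as a small helper. This assumes Responses-style `url_citation` annotations attached to text parts of the output; the exact field names vary by API, so treat the shape below as illustrative:

```python
def extract_citations(output_items):
    """Collect url_citation annotations from Responses-style output items."""
    citations = []
    for item in output_items:
        for part in item.get("content", []):
            for ann in part.get("annotations", []):
                if ann.get("type") == "url_citation":
                    citations.append({"title": ann.get("title"), "url": ann.get("url")})
    return citations

# Trimmed example payload in the assumed shape.
sample_output = [
    {
        "type": "message",
        "content": [
            {
                "type": "output_text",
                "text": "Recent coverage suggests ...",
                "annotations": [
                    {
                        "type": "url_citation",
                        "title": "Example article",
                        "url": "https://example.com/a",
                    }
                ],
            }
        ],
    }
]

print(extract_citations(sample_output))
```

A helper like this keeps source display logic independent of the prose the model generates.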

API-Specific Guides