## Support Matrix
| API | Request controls | Visible output | Notes |
|---|---|---|---|
| Responses | `reasoning.effort` | reasoning items in `output[]` and reasoning stream events | Best starting point for new work |
| Chat Completions | `reasoning_effort` on supported models | assistant message fields such as `reasoning_details` or `reasoning_content` | Use for OpenAI chat compatibility |
| Messages | `thinking` and `output_config.effort` | Anthropic thinking blocks and thinking events | Use for Anthropic compatibility |
## Reasoning Effort
When reasoning effort is supported, the normalized values used in the docs are: `none`, `minimal`, `low`, `medium`, `high`, `xhigh`.
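As a sketch, the normalized effort values map onto each API family's request shape as follows. Field names come from the support matrix above; the payloads are illustrative request bodies, not real calls, and the helper names are invented for this example:

```python
# Hedged sketch: build request bodies that carry a normalized reasoning
# effort for each API family. Field placement follows the support matrix;
# model names and helper names here are illustrative.

EFFORT_LEVELS = ("none", "minimal", "low", "medium", "high", "xhigh")


def responses_body(model: str, prompt: str, effort: str) -> dict:
    # Responses: effort lives under the `reasoning` object.
    assert effort in EFFORT_LEVELS
    return {"model": model, "input": prompt, "reasoning": {"effort": effort}}


def chat_completions_body(model: str, prompt: str, effort: str) -> dict:
    # Chat Completions: a top-level `reasoning_effort` field on supported models.
    assert effort in EFFORT_LEVELS
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }


def messages_body(model: str, prompt: str, effort: str) -> dict:
    # Messages: `output_config.effort` is the reasoning control here.
    assert effort in EFFORT_LEVELS
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "output_config": {"effort": effort},
    }
```

The same normalized value is accepted by all three builders, which keeps effort selection in one place even when requests target different APIs.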
In Messages, `output_config` is a reasoning control, not a generic structured-output feature.

## When To Use It
- hard planning or decomposition tasks
- agent workflows that need stronger tool selection and longer deliberation
- tasks where quality matters more than minimal latency
## Recommended Example
On Responses, visible reasoning can arrive as reasoning items before or alongside message items. On Chat Completions and Messages, it appears in protocol-specific fields and blocks instead.
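A minimal sketch of separating reasoning items from message items in a Responses-style `output[]` list. The payload below is a hand-written stand-in, not a real API response, and the item shapes are assumptions for illustration:

```python
# Hedged sketch: split a Responses-style output[] list into reasoning
# items and message items by their `type` field. The example payload
# is a stand-in, not real API output.

def split_output(output: list[dict]) -> tuple[list[dict], list[dict]]:
    reasoning = [item for item in output if item.get("type") == "reasoning"]
    messages = [item for item in output if item.get("type") == "message"]
    return reasoning, messages


example_output = [
    {"type": "reasoning", "summary": [{"text": "Plan the steps first."}]},
    {"type": "message", "content": [{"type": "output_text", "text": "Done."}]},
]

reasoning_items, message_items = split_output(example_output)
```

Because reasoning items can precede the message item, consumers should filter by type rather than assume the first item is the answer.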
## Preserving Reasoning Across Turns
If a model pauses for tool use and you continue the conversation in another request, preserve any reasoning payload from that assistant turn unchanged.

- Responses: replay the prior `reasoning` item unchanged in the next `input[]` turn if you want to preserve reasoning continuity.
- Chat Completions: if an assistant turn includes `reasoning_details`, send those details back unchanged with that assistant message when you append the later `role: tool` result.
- Messages: if an assistant turn includes `thinking` blocks, replay those blocks unchanged, including any `signature`, before the later `tool_result` block.
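The replay rule above can be sketched as follow-up request builders, one per API family. The payload shapes are illustrative stand-ins and the function names are invented for this example; the point is only the ordering: prior reasoning payload first, unchanged, then the tool result.

```python
# Hedged sketch: carry an assistant turn's reasoning payload into the
# follow-up request after tool use. All payload shapes are stand-ins.

def responses_followup(prior_output: list[dict], tool_output: dict) -> list[dict]:
    # Responses: replay the prior turn's items (including any reasoning
    # item) unchanged in the next input[], then append the tool result.
    return [*prior_output, tool_output]


def chat_followup(assistant_msg: dict, tool_msg: dict) -> list[dict]:
    # Chat Completions: send the assistant message back with its
    # reasoning_details intact, followed by the role: tool result.
    return [assistant_msg, tool_msg]


def messages_followup(assistant_content: list[dict], tool_result: dict) -> list[dict]:
    # Messages: replay the assistant's content blocks unchanged (thinking
    # blocks with their signature included), then the tool_result block.
    return [
        {"role": "assistant", "content": assistant_content},
        {"role": "user", "content": [tool_result]},
    ]
```

In every case the reasoning payload is copied back verbatim; mutating or dropping it is what breaks continuity.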