The Chat Completions API exposes reasoning through chat-compatible fields on assistant messages and streaming deltas.
Use this page if you need reasoning controls but your integration still depends on OpenAI chat request and response shapes.
## Request Controls

Use `reasoning_effort` on supported models. Accepted values:

- `none`
- `minimal`
- `low`
- `medium`
- `high`
- `xhigh`
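As a sketch, a request body that sets this control might be built like the following; the model id is a placeholder, and the value check simply mirrors the list above:

```python
import json

# The accepted reasoning_effort values listed above.
ALLOWED_EFFORTS = {"none", "minimal", "low", "medium", "high", "xhigh"}

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat-compatible request body with a reasoning_effort level."""
    if effort not in ALLOWED_EFFORTS:
        raise ValueError(f"unsupported reasoning_effort: {effort}")
    return {
        "model": "example/reasoning-model",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

body = build_request("Why is the sky blue?", effort="high")
print(json.dumps(body, indent=2))
```

Whether a given effort level is honored depends on the model behind the compatibility layer.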
## Response Shape

Depending on the model behind this compatibility layer, reasoning can appear in assistant messages as `reasoning_details` or `reasoning_content`.
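A small helper can normalize the two shapes into one result. Note the per-entry keys (`type`, `text`) used below are assumptions about the structured form for illustration, not a guaranteed schema:

```python
def extract_reasoning(message: dict) -> list[str]:
    """Collect visible reasoning text from an assistant message.

    Handles both the structured reasoning_details list and the older
    single-string reasoning_content field. The entry keys ("type",
    "text") are assumed here for illustration.
    """
    if "reasoning_details" in message:
        return [
            d.get("text", "")
            for d in message["reasoning_details"]
            if d.get("type") == "reasoning.text"
        ]
    if "reasoning_content" in message:
        return [message["reasoning_content"]]
    return []  # model exposed no visible reasoning this turn

# Both shapes normalize to the same list-of-strings result.
new_shape = {"reasoning_details": [{"type": "reasoning.text", "text": "step 1"}]}
old_shape = {"reasoning_content": "step 1"}
print(extract_reasoning(new_shape), extract_reasoning(old_shape))
```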
## Streaming Behavior

When streaming is enabled, reasoning can arrive in `choices[0].delta` as:

- `reasoning_details` entries such as `reasoning.summary`, `reasoning.text`, or `reasoning.encrypted`
- `reasoning_content` on providers that use the older single-field shape
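A consumer that handles both delta shapes can be sketched as follows, using synthetic chunks in place of a live stream; the per-entry keys on `reasoning_details` entries are again illustrative:

```python
def accumulate_reasoning(chunks: list[dict]) -> str:
    """Concatenate reasoning text arriving on choices[0].delta.

    Accepts both reasoning_details entries and the older
    reasoning_content field; entry keys are assumed for illustration.
    """
    parts: list[str] = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        for detail in delta.get("reasoning_details", []):
            if detail.get("type") == "reasoning.text":
                parts.append(detail.get("text", ""))
        if "reasoning_content" in delta:
            parts.append(delta["reasoning_content"])
    return "".join(parts)

# Synthetic chunks standing in for a live SSE stream.
chunks = [
    {"choices": [{"delta": {"reasoning_details": [
        {"type": "reasoning.text", "text": "First, "}]}}]},
    {"choices": [{"delta": {"reasoning_content": "check the premise."}}]},
]
print(accumulate_reasoning(chunks))
```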
## Preserve Reasoning Blocks

This gateway is stateless. If a model pauses for tool use and returns `reasoning_details`, send those details back unchanged in the assistant message when you continue the conversation with the subsequent `role: tool` result.

Treat `reasoning_details` as model output. Do not edit, reorder, or partially replay them if you want the model to continue from the same reasoning state.
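A minimal sketch of the continuation step, assuming the usual chat tool-calling fields (`tool_call_id`, `role: tool`); the point is that the paused assistant message is appended verbatim:

```python
def continue_with_tool_result(history: list[dict], assistant_msg: dict,
                              tool_call_id: str, result: str) -> list[dict]:
    """Build the follow-up message list after a tool pause.

    The assistant message, including any reasoning_details it carries,
    is replayed exactly as received: no edits, no reordering, and no
    partial replay.
    """
    return history + [
        assistant_msg,  # reasoning_details kept exactly as received
        {"role": "tool", "tool_call_id": tool_call_id, "content": result},
    ]

history = [{"role": "user", "content": "What is 6 * 7?"}]
paused = {
    "role": "assistant",
    "reasoning_details": [{"type": "reasoning.text", "text": "Need the tool."}],
    "tool_calls": [{"id": "call_1"}],  # abbreviated tool call for illustration
}
messages = continue_with_tool_result(history, paused, "call_1", "42")
print(len(messages))
```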
## Caveats

- Not every reasoning-capable model exposes visible reasoning on this surface.
- Some providers emit `reasoning_details`, while others still use `reasoning_content`.
- If a turn has no visible reasoning payload, there is nothing to replay.
## Common Mistakes

- Assuming `reasoning_details` exists on every supported model.
- Editing replayed reasoning payloads before a follow-up tool turn.
- Raising reasoning effort without measuring the cost and latency tradeoff.