The Responses API is the main LLM surface on NagaAI. Start here when you want one API for plain text, tools, structured outputs, streaming, reasoning, and multimodal inputs.
Best Fit
- new LLM features without legacy protocol constraints
- typed output flows such as tool calls and reasoning items
- multimodal prompts that mix text with images, files, or audio
- streaming clients that want semantic event names instead of chat chunks
Request Model
Most requests start with these top-level fields:
- model
- input
- instructions
- tools
- text
- reasoning
- stream
Quick Example
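A minimal sketch of a request, using only the Python standard library. The base URL, environment variable names, and model name below are assumptions for illustration, not confirmed by these docs; substitute the values for your gateway and account.

```python
import json
import os
import urllib.request

BASE_URL = os.environ.get("NAGA_BASE_URL", "https://api.naga.ac/v1")  # assumed base URL
API_KEY = os.environ.get("NAGA_API_KEY", "")  # assumed env var name

# The core request fields described above: model, instructions, input.
payload = {
    "model": "gpt-4.1-mini",  # any model the gateway exposes
    "instructions": "You are a concise assistant.",
    "input": "Name one use of a Responses API.",
}

req = urllib.request.Request(
    f"{BASE_URL}/responses",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
print(json.dumps(payload, indent=2))
```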
Response Model
Successful responses are response objects with typed output[] items.
output[0] is always the final answer. The array can also contain reasoning, function_call, and other typed items.
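As an illustration, a typed output[] item can be unpacked like this. The response dict below is a hand-written sketch mirroring the shape described above; its field values are invented, not real API output.

```python
# Hypothetical response object with one typed output item.
response = {
    "id": "resp_123",
    "object": "response",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "Hello!"}],
        },
    ],
}

# Pull the final answer text out of the first output item.
first = response["output"][0]
text = "".join(
    part["text"] for part in first["content"] if part["type"] == "output_text"
)
print(text)  # → Hello!
```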
Learn The API In Detail
Text Generation
Start with the simplest request and response flow.
Streaming
Parse semantic SSE events and final snapshots correctly.
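The semantic-event parsing can be sketched with standard SSE framing ("event:" / "data:" line pairs separated by blank lines). The event names below are illustrative assumptions; see the streaming page for the real ones.

```python
import json

def parse_sse(lines):
    """Yield (event_name, data_dict) pairs from an SSE line stream."""
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event:  # blank line terminates one event
            yield event, json.loads("\n".join(data))
            event, data = None, []

# A fabricated stream of delta events followed by a completion event.
stream = [
    "event: response.output_text.delta",
    'data: {"delta": "Hel"}',
    "",
    "event: response.output_text.delta",
    'data: {"delta": "lo"}',
    "",
    "event: response.completed",
    "data: {}",
    "",
]

# Accumulate text deltas into the final answer.
text = "".join(
    d["delta"]
    for name, d in parse_sse(stream)
    if name == "response.output_text.delta"
)
print(text)  # → Hello
```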
Tool Calling
Work with function_call and function_call_output items.
Structured Outputs
Use text.format for schema-shaped output.
Reasoning
Control and inspect reasoning items.
Multimodal Inputs
Send images, files, and audio through typed input parts.
Web Search
Enable search through the public web-search tool shape.
Conversation State
Model multi-turn state in a stateless gateway environment.
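One common pattern for a stateless gateway is for the client to hold the transcript and resend it as input on every turn. This sketch assumes input accepts a list of role/content items, which mirrors the typed input parts described earlier but is not confirmed here.

```python
# Client-side transcript; the gateway itself stores nothing between turns.
history = []

def add_user_turn(text):
    history.append({"role": "user", "content": text})

def add_assistant_turn(text):
    history.append({"role": "assistant", "content": text})

add_user_turn("What is 2 + 2?")
add_assistant_turn("4")          # answer from the previous response
add_user_turn("Double that.")

# The next request carries the whole history so the model sees prior turns.
payload = {"model": "gpt-4.1-mini", "input": history}
print(len(payload["input"]))  # → 3
```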
Use Another API Only When Needed
- use Chat Completions API when you need OpenAI chat compatibility
- use Messages API when you need Anthropic compatibility
- use Embeddings API, Audio API, Images API, or Moderations API for those dedicated workflows