GET /v1/models
List available models and their capabilities.
- Method: GET
- Path: /v1/models
- Auth: Bearer token in the Authorization header
Description
Returns model metadata, including input/output modalities. Use this to determine which models support text, image, file (PDF), and audio inputs.
See also: Features → Multimodal → Overview, Images, Files, Audio.
Example Request
Python

from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")

models = client.models.list()
print(models.data[0])

Node.js

import OpenAI from "openai";

const client = new OpenAI({ baseURL: "https://api.naga.ac/v1", apiKey: "YOUR_API_KEY" });

const models = await client.models.list();
console.log(models.data[0]);

cURL

curl https://api.naga.ac/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
Authentication
Provide your key as a Bearer token:
Authorization: Bearer YOUR_API_KEY
Response
[
  {
    "id": "claude-sonnet-4.5-20250929",
    "description": "Claude Sonnet 4.5 is Anthropic's most advanced Sonnet model, optimized for real-world agents and coding workflows. It achieves state-of-the-art results on coding benchmarks like SWE-bench Verified, with notable improvements in system design, code security, and following specifications. Designed for extended autonomous operation, the model maintains task continuity across sessions and offers fact-based progress tracking. Sonnet 4.5 features enhanced agentic abilities, such as improved tool orchestration, speculative parallel execution, and more efficient context and memory management. With better context tracking and awareness of token usage across tool calls, it excels in multi-context and long-running workflows. Key use cases include software engineering, cybersecurity, financial analysis, research agents, and other areas requiring sustained reasoning and tool use.\n",
    "owned_by": "anthropic",
    "object": "model",
    "supported_endpoints": [
      "chat.completions"
    ],
    "pricing": {
      "per_input_token": 0.0000015,
      "per_output_token": 0.0000075
    },
    "architecture": {
      "input_modalities": [
        "text",
        "image",
        "file"
      ],
      "output_modalities": [
        "text"
      ]
    },
    "available_tiers": [
      "paid"
    ]
  }
]
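Without an SDK, the same fields are available on the raw JSON. Below is a minimal sketch using the requests library that checks whether a given model advertises chat.completions before you send it a chat request; the model id is just the example from above, and the code tolerates the list arriving either as a bare array (as shown here) or wrapped in a data field (as the SDK examples imply):

import requests

resp = requests.get(
    "https://api.naga.ac/v1/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
resp.raise_for_status()

payload = resp.json()
models = payload["data"] if isinstance(payload, dict) else payload  # bare array or wrapped list

by_id = {m["id"]: m for m in models}
model = by_id.get("claude-sonnet-4.5-20250929")  # example id from the response above
if model and "chat.completions" in model.get("supported_endpoints", []):
    print("OK to send chat requests to this model")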
Response Fields
- id (string): Model identifier
- description (string): Detailed model description and capabilities
- owned_by (string): Model provider (e.g., "anthropic", "openai")
- object (string): Always "model"
- supported_endpoints (array): List of supported API endpoints
- pricing (object): Token pricing information (see the cost sketch below)
  - per_input_token (number): Cost per input token in USD
  - per_output_token (number): Cost per output token in USD
- architecture (object): Model capabilities
  - input_modalities (array): Supported input types (text, image, file, audio)
  - output_modalities (array): Supported output types
- available_tiers (array): Available pricing tiers for this model
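Because per_input_token and per_output_token are absolute USD amounts, estimating the cost of a request is a single multiply-and-add. A small sketch using the Sonnet 4.5 prices from the example response; the token counts are made up for illustration:

# Prices taken from the pricing object above (USD per token).
per_input_token = 0.0000015
per_output_token = 0.0000075

# Hypothetical usage figures, for illustration only.
input_tokens = 12_000
output_tokens = 800

estimated_cost = input_tokens * per_input_token + output_tokens * per_output_token
print(f"~${estimated_cost:.4f}")  # ~$0.0240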