This guide gets you from an empty project to your first successful API call in a couple of minutes. You will create an API key, install the official OpenAI SDK, and send one request to NagaAI using the Responses API.
Step 1: Create an API key

  1. Open the NagaAI Dashboard.
  2. Go to API Keys.
  3. Create a new key.
  4. Copy it and store it securely.
Standard inference endpoints use a normal API key. Administrative /v1/account/* endpoints use a provisioning key instead.
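If you want to see how a standard key is sent on the wire, here is a minimal sketch. It assumes the usual OpenAI-style Bearer scheme for the Authorization header; the /v1/responses path is illustrative and YOUR_API_KEY is a placeholder.

```python
import urllib.request

# Sketch only: standard inference requests carry the API key as a
# Bearer token in the Authorization header. No request is sent here;
# we only build it to inspect the header.
req = urllib.request.Request(
    "https://api.naga.ac/v1/responses",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
print(req.get_header("Authorization"))  # Bearer YOUR_API_KEY
```

Provisioning keys for the /v1/account/* endpoints are sent the same way, just with a different key value.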
Step 2: Install the SDK

NagaAI acts as a drop-in replacement for the OpenAI API, so you can use the official OpenAI libraries.
pip install openai
Step 3: Send your first request

Initialize the client with the NagaAI base_url and your new API key. For new LLM apps, start with the Responses API: it works well for simple text generation now and gives you a clean path to streaming, tools, structured outputs, and multimodal input later.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

response = client.responses.create(
    model="gpt-4.1-mini",
    input="What is the capital of France? Respond in one sentence.",
)

print(response.output_text)
Step 4: Confirm the output

If the request succeeds, the response object will contain a text output item. You should see:

The capital of France is Paris.

If you get 401 Unauthorized, make sure you used a standard API key (not a provisioning key) and set the base URL to https://api.naga.ac/v1.
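To see where output_text comes from, here is a hedged sketch of an abbreviated response payload and how the SDK's output_text convenience property aggregates it. The field names follow the OpenAI Responses API shape, and the payload is trimmed to the parts relevant here.

```python
import json

# Abbreviated Responses API payload (illustrative; real responses
# carry more fields such as id, model, and usage).
payload = json.loads("""
{
  "output": [
    {
      "type": "message",
      "content": [
        {"type": "output_text", "text": "The capital of France is Paris."}
      ]
    }
  ]
}
""")

# output_text roughly concatenates the text of every output_text
# content part across the message items in "output".
output_text = "".join(
    part["text"]
    for item in payload["output"]
    if item["type"] == "message"
    for part in item["content"]
    if part["type"] == "output_text"
)
print(output_text)  # The capital of France is Paris.
```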

Next Steps

Authentication

Learn which key to use for inference requests and account endpoints.

Choose an API

Decide when to use Responses, Images, Audio, Embeddings, Moderations, or a compatibility API.

Responses API

Keep building on the same API with streaming, tools, structured outputs, and multimodal input.

Models

Choose a model based on what you are building and what access your account has.