POST /v1/images/generations creates images from a prompt. Use this endpoint when your workflow is primarily about generating images, not running a broader multimodal conversation.

Common Fields

Field            Common use
model            image generation model
prompt           image description or instruction
size             output dimensions
response_format  url or b64_json
n                number of images, when supported by the model

Example Request

from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

result = client.images.generate(
    model="gpt-image-1",
    prompt="A cinematic watercolor fox reading in a lantern-lit library.",
    size="1024x1024",
    response_format="url",
)

print(result.data[0].url)

Response Shape

{
  "created": 1710000000,
  "data": [
    {
      "url": "https://api.naga.ac/outputs/..."
    }
  ],
  "usage": {
    "input_tokens": 42,
    "output_tokens": 512,
    "total_tokens": 554
  }
}
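The fields in this shape can be read directly from the parsed JSON. A minimal sketch using the literal response body shown above:

```python
import json

# The example response body from this page, verbatim.
response_text = """
{
  "created": 1710000000,
  "data": [{"url": "https://api.naga.ac/outputs/..."}],
  "usage": {"input_tokens": 42, "output_tokens": 512, "total_tokens": 554}
}
"""

response = json.loads(response_text)

# data is a list: one entry per generated image.
urls = [item["url"] for item in response["data"]]
total = response["usage"]["total_tokens"]

print(urls)   # ['https://api.naga.ac/outputs/...']
print(total)  # 554
```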

Notes

  • response_format can be url or b64_json
  • n and size constraints depend on the model
  • the prompt is moderated before generation

Common mistakes

  • using the Images API for workflows that are really multimodal conversations and belong in the Responses API
  • assuming every model supports the same size or n combinations
  • forgetting to handle b64_json differently from hosted image URLs
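The last mistake above comes up often: with response_format set to b64_json, each data entry carries the image inline as a base64 string rather than a hosted URL, so it must be decoded to bytes before use. A minimal helper sketch (the function name and the dict-shaped entry are illustrative, not part of the API):

```python
import base64

def image_bytes_or_url(entry):
    """Branch on the two response_format variants.

    `entry` is assumed to be a dict-like item from the response `data`
    list, with the field names shown in the response shape above.
    Returns ("bytes", raw_bytes) for b64_json entries, or
    ("url", link) for hosted-URL entries.
    """
    b64 = entry.get("b64_json")
    if b64 is not None:
        # Inline payload: decode the base64 string to raw image bytes.
        return "bytes", base64.b64decode(b64)
    # Hosted payload: the URL must be fetched separately.
    return "url", entry["url"]
```

Usage on the two shapes:

```python
kind, payload = image_bytes_or_url({"b64_json": "aGVsbG8="})
# kind == "bytes"; payload is the decoded bytes

kind, payload = image_bytes_or_url({"url": "https://api.naga.ac/outputs/..."})
# kind == "url"; payload is the hosted link
```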

Reference