POST /v1/embeddings
Generate vector embeddings for text inputs.
- Method: POST
- Path: /v1/embeddings
- Auth: Bearer token in the `Authorization` header
- Content-Type: application/json
Request parameters
- `model` (string, required): Embedding model ID.
- `input` (required): The text to embed. One of:
  - a single string
  - an array of strings
  - an array of integers (token IDs)
  - an array of arrays of integers (batched token IDs; see the sketch after this list)
- `dimensions` (integer, optional): Target dimensionality of the output vectors, if the model supports it.
- `encoding_format` (enum, optional, default `"float"`): Either `"float"` or `"base64"`.
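If you pre-tokenize your text, `input` can carry token IDs directly instead of strings. The following is a minimal sketch, not a definitive recipe: it assumes the model accepts `cl100k_base` token IDs (as OpenAI's text-embedding-3 models do), which may not hold for every model behind this endpoint.

```python
# Illustrative sketch: sending pre-tokenized input as arrays of token IDs.
# Assumes the target model uses the cl100k_base tokenizer; adjust for your model.
import tiktoken
from openai import OpenAI

client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")
enc = tiktoken.get_encoding("cl100k_base")

texts = ["The food was delicious!", "Service could be faster."]
token_batches = [enc.encode(t) for t in texts]  # array of arrays of integers

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=token_batches,
)
print(len(resp.data))  # one embedding per token batch
```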
Example Request
Python:
from openai import OpenAI
client = OpenAI(base_url="https://api.naga.ac/v1", api_key="YOUR_API_KEY")
resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=[
        "The food was delicious!",
        "Service could be faster."
    ],
    # dimensions=512,  # if supported
    # encoding_format="base64",
)
print(resp)
Node.js:
import OpenAI from "openai";
const client = new OpenAI({ baseURL: "https://api.naga.ac/v1", apiKey: "YOUR_API_KEY" });
const resp = await client.embeddings.create({
  model: "text-embedding-3-small",
  input: ["The food was delicious!", "Service could be faster."],
  // dimensions: 512, // if supported
  // encoding_format: "base64",
});
console.log(resp);
cURL:
curl https://api.naga.ac/v1/embeddings \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": ["The food was delicious!", "Service could be faster."]
  }'
Authentication
Provide your API key as a Bearer token in the `Authorization` header:
Authorization: Bearer YOUR_API_KEY
Response
Returns one embedding per input, in the same order as the inputs. The structure is compatible with OpenAI's embeddings API. With `encoding_format="float"` (the default), each vector is an array of floats; with `"base64"`, each vector is a base64-encoded string.
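If you request `encoding_format="base64"`, the string must be decoded back into floats client-side. A minimal sketch, assuming the payload is a little-endian float32 buffer (the convention used by OpenAI's own SDKs); verify against your model's output:

```python
import base64
import struct

def decode_embedding(b64_vector: str) -> list[float]:
    """Decode a base64-encoded embedding into a list of floats.

    Assumes little-endian float32 packing, which is how OpenAI-compatible
    APIs typically encode base64 embeddings.
    """
    raw = base64.b64decode(b64_vector)
    count = len(raw) // 4  # 4 bytes per float32 value
    return list(struct.unpack(f"<{count}f", raw))

# Hypothetical usage, where resp was created with encoding_format="base64":
# floats = decode_embedding(resp.data[0].embedding)
```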
Response Fields
- `object` (string): Always "list".
- `data` (array): Array of embedding objects, each containing:
  - `object` (string): Always "embedding".
  - `embedding` (array or string): The embedding vector (float array, or base64 string when `encoding_format="base64"`).
  - `index` (integer): Index of this embedding in the input array.
- `model` (string): The model used for the embeddings.
- `usage` (object): Token usage information:
  - `prompt_tokens` (integer): Number of tokens in the input.
  - `total_tokens` (integer): Total tokens processed.
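With the Python client from the example above, these fields map directly onto attributes of the response object (a brief sketch; it assumes `encoding_format` is left at its default, so each `embedding` is a float list):

```python
# Continuing from the Python example above.
for item in resp.data:
    print(item.index, len(item.embedding))  # position in the input, vector length

print(resp.model)                # model used for the embeddings
print(resp.usage.prompt_tokens)  # tokens in the input
print(resp.usage.total_tokens)   # total tokens processed
```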