Use the Moderations API when you need to classify text, images, or mixed text-and-image inputs for safety-related workflows. This API helps you decide whether content should be allowed, blocked, reviewed, or routed differently before later steps in your product.

When To Use It

  • screening user-generated text before model inference
  • classifying uploaded images
  • combining text and image checks in a single request
  • building policy, trust, and abuse-prevention workflows

Request Model

Requests center on two fields:
  • model
  • input

input supports:
  • a single string
  • an array of strings
  • an array of typed objects such as { "type": "text" } and { "type": "image_url" }
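For mixed-modality checks, the typed-object form lets a single request carry both text and an image reference. A minimal sketch of such a payload; the URL and prompt text are illustrative placeholders, not part of the API:

```python
# Sketch of a mixed text-and-image moderation payload.
# The image URL below is a placeholder, not a real resource.
payload = {
    "model": "omni-moderation-latest",
    "input": [
        {"type": "text", "text": "Is this picture safe to show?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/photo.png"},
        },
    ],
}

# Both parts are classified together in one request.
print(len(payload["input"]))
```

This dict is the request body; passing the same structure as the `input` argument of `client.moderations.create` produces the combined check.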

Quick Example

from openai import OpenAI

client = OpenAI(
    base_url="https://api.naga.ac/v1",
    api_key="YOUR_API_KEY",
)

result = client.moderations.create(
    model="omni-moderation-latest",
    input="I want to hurt someone.",
)

print(result.results[0].flagged)

Response Anatomy

{
  "id": "mdr_123",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": true,
      "categories": {
        "violence": true
      },
      "category_scores": {
        "violence": 0.98
      }
    }
  ]
}

How To Use the Result

  • use flagged for a quick allow-or-review gate
  • inspect categories when you need policy-specific handling
  • inspect category_scores when you want thresholds or human-review logic
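The three bullets above can be combined into a simple routing function. A sketch, assuming a parsed result dict shaped like the example in Response Anatomy; the 0.9 cutoff is an illustrative value, not an API-recommended threshold:

```python
def route(result: dict, threshold: float = 0.9) -> str:
    """Map one moderation result to 'allow', 'block', or 'review'.

    `threshold` is an illustrative cutoff, not an API-recommended value.
    """
    # Quick allow gate: nothing flagged, nothing to do.
    if not result["flagged"]:
        return "allow"
    # Block when some category score clears the threshold;
    # otherwise queue the item for human review.
    if any(score >= threshold for score in result["category_scores"].values()):
        return "block"
    return "review"


example = {
    "flagged": True,
    "categories": {"violence": True},
    "category_scores": {"violence": 0.98},
}
print(route(example))  # "block": 0.98 clears the 0.9 cutoff
```

Lower-confidence flags (flagged, but no score above the cutoff) fall through to "review", which keeps borderline content in front of a human rather than auto-blocking it.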

What To Learn Next

  • the full Moderations API reference for the complete request and response schemas