Use the Moderations API when you need to classify text, images, or mixed typed inputs for safety-related workflows.
This API helps you decide whether content should be allowed, blocked, sent for review, or routed differently before it reaches later steps in your product.
When To Use It
- screening user-generated text before model inference
- classifying uploaded images
- combining text and image checks in a single request
- building policy, trust, and abuse-prevention workflows
Request Model
Requests center on two fields: `model` and `input`.
input supports:
- a single string
- an array of strings
- an array of typed objects such as `{ "type": "text" }` and `{ "type": "image_url" }`
Quick Example
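A minimal sketch of building a request body, assuming the documented `POST https://api.openai.com/v1/moderations` endpoint and the `omni-moderation-latest` model. The helper name `build_moderation_request` is illustrative; actually sending the payload requires an authenticated HTTP call (e.g. with an `Authorization: Bearer <key>` header), which is omitted here.

```python
import json

def build_moderation_request(text, image_url=None):
    """Assemble a mixed text + image moderation request body.

    Illustrative helper, not part of any SDK. The typed-object shapes
    ({"type": "text"} and {"type": "image_url"}) follow the input formats
    listed above.
    """
    inputs = [{"type": "text", "text": text}]
    if image_url is not None:
        inputs.append({"type": "image_url", "image_url": {"url": image_url}})
    return {"model": "omni-moderation-latest", "input": inputs}

payload = build_moderation_request(
    "is this safe?", image_url="https://example.com/photo.png"
)
print(json.dumps(payload, indent=2))
```

The same function covers text-only screening (omit `image_url`) and combined text-and-image checks in a single request.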
Response Anatomy
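The response contains a `results` list with one entry per input; each entry carries `flagged`, `categories`, and `category_scores`. The sketch below uses an illustrative hand-written sample (the id, category names, and scores are made up, not real API output) to show where each field lives:

```python
# Illustrative sample of the response shape; values are invented.
sample_response = {
    "id": "modr-123",
    "model": "omni-moderation-latest",
    "results": [
        {
            "flagged": True,
            "categories": {"violence": True, "harassment": False},
            "category_scores": {"violence": 0.91, "harassment": 0.03},
        }
    ],
}

result = sample_response["results"][0]
overall = result["flagged"]  # single boolean verdict for this input
fired = [name for name, hit in result["categories"].items() if hit]
print(overall, fired)
```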
How To Use The Result
- use `flagged` for a quick allow-or-review gate
- inspect `categories` when you need policy-specific handling
- inspect `category_scores` when you want thresholds or human-review logic
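The three strategies above can be combined into one routing function. This is a sketch of one possible policy, not a prescribed one: the 0.4 threshold and the decision order (block on `flagged`, send borderline scores to human review, otherwise allow) are illustrative choices your product would tune.

```python
# Illustrative review threshold; tune per policy and per category in practice.
REVIEW_THRESHOLD = 0.4

def route(result):
    """Map one moderation result to 'block', 'review', or 'allow'.

    result is a single entry from the response's results list, with
    the flagged / categories / category_scores fields described above.
    """
    if result["flagged"]:
        return "block"
    if any(score >= REVIEW_THRESHOLD
           for score in result["category_scores"].values()):
        return "review"
    return "allow"

decision = route({
    "flagged": False,
    "categories": {"violence": False},
    "category_scores": {"violence": 0.55},
})
print(decision)  # borderline score: routed to human review
```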