AI Models with Harmful Content Detection Support

This page lists AI models that offer Harmful Content Detection. Compare models, see how each implements the capability, and find the best option for projects that require robust harmful content detection.

Providers

OpenAI

Models with this Capability

text-moderation-latest

OpenAI · text-moderation

ID: text-moderation-latest
Status: Current
Input: 32.8K tokens
Output: 32.8K tokens
Input Cost: $0.00 / 1M tokens
Output Cost: $0.00 / 1M tokens

Exceptional at:

content moderation
harmful content detection

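The model is served through OpenAI's moderation endpoint. As a rough illustration, the sketch below shows one way to screen a piece of text with it; it assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY in the environment, and the helper name is hypothetical rather than part of the listing above.

```python
# A minimal sketch of harmful-content detection with text-moderation-latest.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY in the environment;
# the helper name is hypothetical, not taken from the listing above.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def is_harmful(text: str) -> bool:
    """Return True if text-moderation-latest flags the input as harmful."""
    response = client.moderations.create(
        model="text-moderation-latest",
        input=text,
    )
    result = response.results[0]
    if result.flagged:
        # category_scores carries a 0-1 confidence per category (hate, violence, ...)
        scores = result.category_scores.model_dump()
        worst = max(scores, key=scores.get)
        print(f"Flagged: top category '{worst}' at {scores[worst]:.3f}")
    return result.flagged


if __name__ == "__main__":
    print(is_harmful("Example text to screen before posting."))
```

The $0.00 / 1M figures above reflect that OpenAI's moderation endpoint is free to use.
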
Similar Capabilities