Get all the details on o3-mini, an AI model from OpenAI. This page covers its token limits, pricing, key capabilities such as the Batch API, function calling, and reasoning, sample API code, and performance strengths.
Key Metrics
Input Limit
200K tokens
Output Limit
100K tokens
Input Cost
$1.10/1M
Output Cost
$4.40/1M
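At the listed rates, the cost of a single request is a weighted sum of input and output tokens. A minimal sketch (the token counts below are illustrative):

```python
# Listed rates for o3-mini, in USD per 1M tokens.
INPUT_RATE = 1.10
OUTPUT_RATE = 4.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE + (output_tokens / 1_000_000) * OUTPUT_RATE

# e.g. 10K input tokens and 2K output tokens:
print(round(estimate_cost(10_000, 2_000), 4))  # 0.0198
```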
Sample API Code
from openai import OpenAI

client = OpenAI()

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="o3-mini",
)
print(chat_completion.choices[0].message.content)
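Because o3-mini is a reasoning model, the same Chat Completions call also accepts a reasoning_effort parameter ("low", "medium", or "high") to trade latency and cost against reasoning depth. A sketch of that variant; the request is built locally and only sent if openai is installed and OPENAI_API_KEY is set:

```python
import os

try:
    from openai import OpenAI
except ImportError:  # openai may not be installed in every environment
    OpenAI = None

# Request parameters; reasoning_effort trades speed and cost against depth.
params = {
    "model": "o3-mini",
    "reasoning_effort": "medium",
    "messages": [{"role": "user", "content": "Say this is a test"}],
}

if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    chat_completion = client.chat.completions.create(**params)
    print(chat_completion.choices[0].message.content)
```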
Required Libraries
openai
Benchmarks
Benchmark | Score | Source | Notes
---|---|---|---
– | 1305 | OpenLLM Leaderboard | –
– | 1092 | OpenLLM Leaderboard | –
– | 50% | Vellum Leaderboard | –
– | 87.3% | Vellum Leaderboard | –
– | 79.7% | Vellum Leaderboard | –
– | 61% | Vellum Leaderboard | –
– | 97.9% | Vellum Leaderboard | –
– | 65.12% | Vellum Leaderboard | –
– | 60.4% | Vellum Leaderboard | –
Notes
o3-mini is a small reasoning-model alternative to o3, providing high intelligence at the same cost and latency targets as o1-mini. It supports key developer features such as Structured Outputs, function calling, and the Batch API.
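The Structured Outputs support mentioned above can be sketched with a json_schema response format. The event schema here is a hypothetical example, and the API call only runs if openai is installed and OPENAI_API_KEY is set:

```python
import json
import os

try:
    from openai import OpenAI
except ImportError:  # openai may not be installed in every environment
    OpenAI = None

# Hypothetical example schema: extract a calendar event from free text.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "calendar_event",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "date": {"type": "string"},
            },
            "required": ["name", "date"],
            "additionalProperties": False,
        },
    },
}

if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    completion = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": "Standup on 2025-02-03."}],
        response_format=response_format,
    )
    # With strict json_schema output, the content parses as the schema's shape.
    event = json.loads(completion.choices[0].message.content)
    print(event["name"], event["date"])
```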
Capabilities
Batch API
function calling
reasoning
streaming
structured outputs
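The streaming capability listed above can be sketched as follows: with stream=True, the API yields incremental chunks rather than one final message. The prompt is illustrative, and the call only runs if openai is installed and OPENAI_API_KEY is set:

```python
import os

try:
    from openai import OpenAI
except ImportError:  # openai may not be installed in every environment
    OpenAI = None

messages = [{"role": "user", "content": "Count to five."}]

if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    # stream=True yields incremental delta chunks instead of one final message.
    stream = client.chat.completions.create(
        model="o3-mini",
        messages=messages,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()
```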
Supported Data Types
Input Types
text
Output Types
text
Strengths & Weaknesses
Exceptional at
general reasoning
mathematics
Good at
coding
tool use
instruction following
language tasks
Additional Information
Latest Update
Jan 31, 2025
Knowledge Cutoff
Oct 1, 2023