
o3-mini - In-Depth Overview

OpenAI · o3

Current

Model ID: o3-mini

Get all the details on o3-mini, an AI model from OpenAI. This page covers its token limits, pricing structure, key capabilities such as Structured Outputs, function calling, and the Batch API, sample API code, and performance strengths.

Key Metrics

Input Limit

200K tokens

Output Limit

100K tokens

Input Cost

$1.10/1M

Output Cost

$4.40/1M
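The listed rates make per-request cost easy to estimate. Below is a minimal sketch based on the prices above ($1.10 per 1M input tokens, $4.40 per 1M output tokens); the `estimate_cost` function is ours for illustration, not part of any SDK.

```python
# Rates from the pricing table above; names are our own.
INPUT_RATE = 1.10 / 1_000_000   # USD per input token
OUTPUT_RATE = 4.40 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request, in USD."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0198
```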

Sample API Code

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)
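Since o3-mini also supports the Batch API, the same request can be submitted asynchronously at reduced cost. A hedged sketch of one line of a Batch API input file (`.jsonl`), using only the standard library; the `custom_id` value here is arbitrary:

```python
import json

# One request line for a Batch API input file (.jsonl).
# custom_id is an arbitrary caller-chosen identifier.
batch_line = {
    "custom_id": "request-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "o3-mini",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
    },
}

# Each line of the input file is one JSON object like this.
print(json.dumps(batch_line))
```

The resulting file is uploaded and referenced when creating a batch; results are returned keyed by `custom_id`.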

Required Libraries

openai

Benchmarks

| Benchmark | Score | Source | Notes |
| --- | --- | --- | --- |
|  | 1302 | OpenLLM Leaderboard | - |
|  | 1092 | OpenLLM Leaderboard | - |
|  | 79.7% | Vellum | - |
|  | 87.3% | Vellum | - |
|  | 61% | Vellum | - |
|  | 97.9% | Vellum | - |
|  | 65.12% | Vellum | - |
|  | 60.4% | Vellum | - |
|  | 14 | Vellum | - |

Notes

o3-mini is a small reasoning model that delivers high intelligence at the same cost and latency targets as o1-mini. It supports key developer features such as Structured Outputs, function calling, and the Batch API. It has a 200,000-token context window and a 100,000-token maximum output. Reasoning tokens are supported.
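Structured Outputs constrains the model's reply to a JSON Schema. A hedged sketch of the `response_format` payload shape for JSON Schema mode, using only the standard library; the schema name and fields are hypothetical, and the sample reply is illustrative, not real model output:

```python
import json

# Hypothetical schema for a structured reply about a city.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",       # hypothetical schema name
        "strict": True,            # enforce exact schema conformance
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "country": {"type": "string"},
            },
            "required": ["city", "country"],
            "additionalProperties": False,
        },
    },
}

# With strict mode, the reply is guaranteed to parse against the schema.
sample_reply = '{"city": "Oslo", "country": "Norway"}'  # illustrative
data = json.loads(sample_reply)
print(data["city"], data["country"])
```

The `response_format` dict is passed alongside `model` and `messages` in the request; the reply can then be parsed with `json.loads` without defensive validation.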

Supported Data Types

Input Types

text

Output Types

text

Strengths & Weaknesses

Exceptional at

reasoning

Good at

Structured Outputs
function calling
Batch API
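For function calling, the caller advertises tools as JSON Schema definitions and parses the arguments the model emits. A hedged sketch of the tool-definition payload shape, standard library only; the tool name, its parameters, and the sample arguments are all hypothetical:

```python
import json

# Hypothetical tool definition in the chat-completions tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    }
]

# When the model chooses to call the tool, it returns the arguments
# as a JSON string that the caller parses and dispatches.
sample_arguments = '{"city": "Oslo"}'  # illustrative model output
args = json.loads(sample_arguments)
print(args["city"])  # Oslo
```

The `tools` list is passed in the request; the caller inspects the response for tool calls, runs the named function with the parsed arguments, and sends the result back in a follow-up message.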

Additional Information

Latest Update

Jan 31, 2025

Knowledge Cutoff

Oct 1, 2023