Get all the details on o1-mini, an AI model from OpenAI. This page covers its token limits, pricing, and key capabilities such as reasoning, long context, and streaming, along with sample API code and performance strengths.
Key Metrics
Input Limit
128K tokens
Output Limit
65.5K tokens
Input Cost
$1.10/1M
Output Cost
$4.40/1M
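As a rough illustration of how the input and output rates above combine, the sketch below estimates the cost of a single request; the token counts used in the example are hypothetical.

# Rough per-request cost estimate for o1-mini (rates from this page).
INPUT_COST_PER_M = 1.10   # USD per 1M input tokens
OUTPUT_COST_PER_M = 4.40  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_COST_PER_M

# Example with hypothetical token counts: 10,000 in, 2,000 out.
print(f"${estimate_cost(10_000, 2_000):.4f}")  # ≈ $0.0198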
Sample API Code
from openai import OpenAI

client = OpenAI()

# o1-mini does not accept system messages, so instructions go in the user prompt.
response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(response.choices[0].message.content)
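Streaming is listed among this model's capabilities; the sketch below shows one way a streamed response can be consumed with the same client. The prompt text is illustrative.

from openai import OpenAI

client = OpenAI()

# Request a streamed response and print tokens as they arrive.
stream = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Summarize the rules of chess in three sentences."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content may be None on some chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()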
Required Libraries
openai
Benchmarks
Benchmark | Score | Source | Notes
---|---|---|---
- | 1301 | lmarena.ai | -
- | 1042 | lmarena.ai | -
- | 63.6% | Vellum | -
- | 60% | Vellum | -
- | 90% | Vellum | -
- | 52.2% | Vellum | -
- | 32.9% | Vellum | -
Notes
A smaller alternative to o1. o1-mini is a faster and more affordable reasoning model, but OpenAI recommends the newer o3-mini, which offers higher intelligence at the same latency and price as o1-mini.
Capabilities
Supported Data Types
Input Types
text
Output Types
text
Strengths & Weaknesses
Exceptional at
reasoning
Good at
affordability
speed
Additional Information
Latest Update
Sep 12, 2024
Knowledge Cutoff
Oct 1, 2023