Get all the details on GPT-3.5 Turbo, an AI model from OpenAI. This page covers its token limits, pricing, key capabilities (json_mode, long_context, and instruction_following), sample API code, and performance strengths.
Key Metrics
Input Limit
16.4K tokens
Output Limit
4.1K tokens
Input Cost
$0.50 / 1M tokens
Output Cost
$1.50 / 1M tokens
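As a rough illustration of the pricing above, the sketch below estimates the cost of a single request from its prompt and completion token counts. The per-million rates come from the figures listed here; the token counts in the example are hypothetical.

# Approximate per-request cost at the listed GPT-3.5 Turbo rates
INPUT_COST_PER_M = 0.50   # USD per 1M input tokens
OUTPUT_COST_PER_M = 1.50  # USD per 1M output tokens

def estimate_cost(prompt_tokens, completion_tokens):
    # Convert token counts to USD using the per-million rates
    return (prompt_tokens * INPUT_COST_PER_M
            + completion_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Hypothetical request: 12,000 prompt tokens and 1,000 completion tokens
print(f"${estimate_cost(12_000, 1_000):.4f}")  # $0.0075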
Sample API Code
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
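To see how a call measures against the token limits and pricing above, the response object returned by the sample code also carries usage counts. A minimal follow-up, reusing the response from the sample:

# Token accounting for the request made in the sample above
print(response.usage.prompt_tokens)      # billed at the input rate
print(response.usage.completion_tokens)  # billed at the output rate
print(response.usage.total_tokens)       # sum of the two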
Required Libraries
openai
Benchmarks
Benchmark | Score | Source | Notes |
---|---|---|---|
- | 1103 | OpenLLM Leaderboard | - |
Notes
Legacy GPT model for cheaper chat and non-chat tasks.
Supported Data Types
Input Types
text
Output Types
text
json
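The json output type corresponds to the json_mode capability mentioned above. Below is a minimal sketch of requesting it, assuming the same openai client as the sample code; note that the API requires the word "JSON" to appear somewhere in the messages when this mode is enabled.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    # Ask the model to return a valid JSON object
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three primary colors."}
    ]
)

print(response.choices[0].message.content)  # a JSON object, returned as a string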
Strengths & Weaknesses
Good at
chat
non-chat tasks
cost-effectiveness
general reasoning
instruction following
Additional Information
Latest Update
Jan 25, 2024
Knowledge Cutoff
Sep 1, 2021