
GPT-3.5 Turbo (0125) - In-Depth Overview

OpenAI · GPT-3.5

Status: Outdated

Get all the details on GPT-3.5 Turbo (0125), an AI model from OpenAI. This page covers its token limits, pricing, key capabilities such as function calling and JSON mode, sample API code, and its performance strengths and weaknesses.

Key Metrics

Input Limit

16.4K tokens

Output Limit

4.1K tokens

Input Cost

$0.50 / 1M tokens

Output Cost

$1.50 / 1M tokens
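
As a quick worked example, these per-token prices imply the cost of a single request. The helper below is an illustrative sketch, not part of the OpenAI SDK.

# Rough cost estimate for one gpt-3.5-turbo-0125 request, using the listed
# prices: $0.50 per 1M input tokens and $1.50 per 1M output tokens.
INPUT_PRICE_PER_M = 0.50
OUTPUT_PRICE_PER_M = 1.50

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    # Illustrative helper: estimated USD cost for one request.
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Filling the ~16.4K-token input window and the ~4.1K-token output limit:
print(f"${estimate_cost(16_385, 4_096):.4f}")  # ≈ $0.0143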

Sample API Code

from openai import OpenAI

# Reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

# Minimal chat completion against the 0125 snapshot of GPT-3.5 Turbo.
response = client.chat.completions.create(
  model="gpt-3.5-turbo-0125",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

# Print the assistant's reply from the first choice.
print(response.choices[0].message.content)

Required Libraries

openai

Benchmarks

Benchmark | Score | Source | Notes
— | 1106 | lmarena.ai | -
— | 53.95 | LiveBench | Score for GPT-3.5 Turbo family
— | 39.75 | LiveBench | Score for GPT-3.5 Turbo family
— | 69.29 | LiveBench | Score for GPT-3.5 Turbo family
— | 41.48 | LiveBench | Score for GPT-3.5 Turbo family
— | 63.53 | LiveBench | Score for GPT-3.5 Turbo family
— | 44.68 | LiveBench | Score for GPT-3.5 Turbo family
— | 64.94 | LiveBench | Score for GPT-3.5 Turbo family

Notes

A legacy model in the GPT-3.5 family, suited to lower-cost chat and non-chat tasks. It is not the latest or flagship model.

Capabilities

function calling (see the sketch below)
JSON mode (see the sketch under Supported Data Types)
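
Function calling lets the model return a structured call to a developer-defined tool instead of free text. Below is a minimal sketch; the get_weather tool, its schema, and the prompt are illustrative assumptions, not part of the OpenAI SDK.

from openai import OpenAI
import json

client = OpenAI()

# Illustrative tool definition; the name and schema are assumptions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments arrive as a JSON string
# that your own code is expected to pass to the real function.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, json.loads(tool_call.function.arguments))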

Supported Data Types

Input Types

text

Output Types

text
json
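
JSON mode constrains the model to emit syntactically valid JSON via the response_format parameter. A minimal sketch follows, with an illustrative prompt; note that the messages themselves must mention JSON for the API to accept response_format={"type": "json_object"}.

from openai import OpenAI
import json

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    # JSON mode: output is guaranteed to be syntactically valid JSON.
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Answer in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
)

# The content is a JSON string that can be parsed directly.
print(json.loads(response.choices[0].message.content))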

Strengths & Weaknesses

Good at

General text generation
Chat
Cost-effectiveness for basic tasks

Poor at

Complex reasoning
Multimodal tasks
Handling very long contexts (see the sketch after this list)
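
Because the input window is roughly 16.4K tokens, long documents need to be measured and truncated (or chunked) before they are sent. Below is a minimal sketch using the tiktoken library; the token budget and helper name are illustrative assumptions.

import tiktoken

# Encoder matching the GPT-3.5 Turbo tokenizer.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def truncate_to_budget(text: str, budget: int = 15_000) -> str:
    # Illustrative helper: keep only the first `budget` tokens, leaving
    # headroom under the ~16.4K-token input limit for the rest of the
    # prompt and the model's reply.
    tokens = enc.encode(text)
    if len(tokens) <= budget:
        return text
    return enc.decode(tokens[:budget])

long_document = "..."  # some document that may exceed the context window
prompt_chunk = truncate_to_budget(long_document)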

Additional Information

Latest Update

Jan 25, 2024

Knowledge Cutoff

Sep 2021