
GPT-3.5 Turbo - In-Depth Overview

OpenAI · GPT-3.5

Status: Outdated

Get all the details on GPT-3.5 Turbo, an AI model from OpenAI. This page covers its token limits, pricing, key capabilities such as function calling, JSON mode, and structured outputs, along with sample API code and performance strengths.

Key Metrics

Input Limit

16.4K tokens

Output Limit

4.1K tokens

Input Cost

$0.50 / 1M tokens

Output Cost

$1.50 / 1M tokens
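
As a quick worked example of these rates, here is the cost of a single request assuming 1,200 input tokens and 300 output tokens (the request size is hypothetical; the per-token prices are the ones listed above):

# Cost of one request at $0.50 / 1M input tokens and $1.50 / 1M output tokens.
# The 1,200 / 300 token counts are illustrative, not taken from this page.
input_cost = 1_200 / 1_000_000 * 0.50    # $0.00060
output_cost = 300 / 1_000_000 * 1.50     # $0.00045
print(f"total cost ≈ ${input_cost + output_cost:.5f}")  # ≈ $0.00105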

Sample API Code

from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable for authentication.
client = OpenAI()

# Basic chat completion against the 1106 snapshot of GPT-3.5 Turbo.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=100  # cap the reply at 100 output tokens
)

# The assistant's reply is on the first choice.
print(response.choices[0].message.content)
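
Since streaming is listed among the model's capabilities below, here is a minimal sketch of the same call with stream=True, using the standard openai Python client (v1.x) chunk interface; the prompt is illustrative:

# Streaming variant of the call above: tokens arrive incrementally.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the final chunk carries no content
        print(delta, end="", flush=True)
print()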

Required Libraries

openai

Benchmarks

Benchmark: Arena Elo
Score: 1065
Source: lmarena.ai
Notes: -

Notes

Legacy GPT model for cheaper chat and non-chat tasks. This specific version (1106) has a 16,385 token context window and supports JSON mode and reproducible outputs.
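A minimal sketch of those two 1106 features together, JSON mode via response_format and best-effort reproducibility via seed; the prompt and seed value are illustrative:

from openai import OpenAI

client = OpenAI()

# JSON mode constrains the model to emit syntactically valid JSON;
# the prompt must mention "JSON" for the request to be accepted.
# seed requests best-effort reproducible outputs.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    seed=42,
    messages=[
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three primary colors."}
    ],
)
print(response.choices[0].message.content)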

Capabilities

function calling (see the sketch after this list)
json mode
structured outputs
conversation state
reasoning
streaming
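
A minimal function-calling sketch for this model; the get_weather tool and its schema are hypothetical, and only the tools/tool_calls plumbing reflects the actual Chat Completions API:

from openai import OpenAI

client = OpenAI()

# Hypothetical tool: get_weather is an illustration, not a real API.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments come back as a JSON string.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)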

Supported Data Types

Input Types

text

Output Types

text
json

Strengths & Weaknesses

Good at

chat
non-chat tasks
function calling
structured outputs
cost efficiency

Additional Information

Latest Update

Nov 6, 2023

Knowledge Cutoff

Apr 1, 2023