Get all the details on GPT-4 Turbo, an AI model from OpenAI. This page covers its token limits, pricing structure, and key capabilities such as long context, function calling, and streaming, along with sample API code and performance strengths.
Key Metrics
Input Limit
128K tokens
Output Limit
4,096 tokens
Input Cost
$10.00/1M
Output Cost
$30.00/1M
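The rates above translate to per-request cost as follows; a minimal sketch, where the token counts in the example are hypothetical:

```python
# Prices from the table above, in dollars per million tokens.
INPUT_PRICE_PER_M = 10.00
OUTPUT_PRICE_PER_M = 30.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at GPT-4 Turbo rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token reply.
print(f"${request_cost(2_000, 500):.3f}")  # → $0.035
```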
Sample API Code
from openai import OpenAI

client = OpenAI()

# Ask the model a simple question, with a system prompt setting its role.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
Required Libraries
openai
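The capability list also mentions streaming; a minimal sketch of a streaming request with the same openai client, where the prompt is illustrative and the SDK/API-key guards exist only so the sketch degrades gracefully when neither is available:

```python
import os

# Arguments for a streaming chat completion; stream=True tells the API
# to return the reply incrementally as a sequence of chunks.
request = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Name three French cities."}],
    "stream": True,
}

try:
    from openai import OpenAI
except ImportError:  # SDK not installed; the request dict is still usable
    OpenAI = None

if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    # Each chunk carries a small token delta; print tokens as they arrive.
    for chunk in client.chat.completions.create(**request):
        print(chunk.choices[0].delta.content or "", end="", flush=True)
```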
Benchmarks
| Benchmark | Score | Source | Notes |
| --- | --- | --- | --- |
| | 1253 | OpenLLM Leaderboard | Score for gpt-4-turbo-2024-04-09 snapshot |
| | 1151 | OpenLLM Leaderboard | Score for gpt-4-turbo-2024-04-09 snapshot |
Notes
GPT-4 Turbo succeeded GPT-4, an older high-intelligence GPT model, and was designed to be both cheaper and more capable than GPT-4. Today, OpenAI recommends using a newer model such as GPT-4o.
Supported Data Types
Input Types
text
image
Output Types
text
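Since image is listed as an input type, a minimal sketch of a mixed text-and-image request; the image URL is a placeholder, and the SDK/API-key guard exists only so the sketch runs cleanly when neither is available:

```python
import os

# A user message mixing a text part and an image part, in the
# Chat Completions content-parts format; the URL is a placeholder.
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.jpg"}},
    ],
}]

try:
    from openai import OpenAI
except ImportError:  # SDK not installed; the message payload is still usable
    OpenAI = None

if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
    )
    print(response.choices[0].message.content)
```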
Strengths & Weaknesses
Exceptional at
long context processing
function calling
multimodal understanding
advanced reasoning
Good at
general reasoning
instruction following
text generation
tool integration
Poor at
structured outputs
fine tuning
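Function calling appears among the model's strengths above; a minimal sketch of a tool definition and call, where the `get_weather` function, its parameters, and the prompt are made up for illustration, and the SDK/API-key guard exists only so the sketch runs cleanly when neither is available:

```python
import json
import os

# One tool definition in the Chat Completions function-calling schema;
# the function name and parameters are hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

try:
    from openai import OpenAI
except ImportError:  # SDK not installed; the tool schema is still usable
    OpenAI = None

if OpenAI is not None and os.environ.get("OPENAI_API_KEY"):
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "Weather in Paris?"}],
        tools=tools,
    )
    # The model replies with the function name and JSON-encoded arguments.
    call = response.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```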
Additional Information
Latest Update
Apr 9, 2024
Knowledge Cutoff
Dec 1, 2023