Get all the details on Claude 3.5 Haiku, an AI model from Anthropic. This page covers its token limits, pricing structure, and key capabilities such as function calling, long context, and multimodal input, along with sample API code and performance strengths.
Key Metrics
Input Limit
200K tokens
Output Limit
4.1K tokens
Input Cost
$0.80/1M
Output Cost
$4.00/1M
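At the listed rates, the cost of a request can be estimated directly from its token counts. A minimal sketch in Python, using the prices above; the token counts in the example are hypothetical:

# Estimate per-request cost at the listed Claude 3.5 Haiku rates.
INPUT_COST_PER_MTOK = 0.80   # $0.80 per 1M input tokens (Key Metrics above)
OUTPUT_COST_PER_MTOK = 4.00  # $4.00 per 1M output tokens (Key Metrics above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_MTOK \
        + (output_tokens / 1_000_000) * OUTPUT_COST_PER_MTOK

# Example: a 2,000-token prompt with a 500-token reply (hypothetical numbers).
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0036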
Sample API Code
import anthropic

client = anthropic.Anthropic()

# Send a single user message to Claude 3.5 Haiku and print the reply.
message = client.messages.create(
    model='claude-3-5-haiku-20241022',
    max_tokens=1024,
    messages=[
        {'role': 'user', 'content': 'Hello, Claude'}
    ],
)
print(message.content)
Required Libraries
anthropic
@anthropic-ai/sdk
Benchmarks
Benchmark | Score | Source | Notes
---|---|---|---
- | 1237 | OpenLLM Leaderboard | -
- | 1133 | OpenLLM Leaderboard | -
- | 44.98 | LiveBench | -
- | 26.19 | LiveBench | -
- | 53.17 | LiveBench | -
- | 34.84 | LiveBench | -
- | 54.12 | LiveBench | -
- | 39.71 | LiveBench | -
- | 61.88 | LiveBench | -
- | 41.6 | Vellum | -
- | 40.6 | Vellum | -
- | 69.4 | Vellum | -
- | 54.31 | Vellum | -
- | 28 | Vellum | -

(Benchmark names are not provided in the source data; only scores and sources are listed.)
Notes
The fastest and most cost-effective model in the Claude 3.5 family, designed for speed and cost efficiency.
Capabilities
function calling
long context
multimodal input
tool use
vision
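Function calling and tool use, listed above, go through the tools parameter of the Messages API. A minimal sketch, assuming the same anthropic Python SDK as the sample above; the get_weather tool and its schema are hypothetical:

import anthropic

client = anthropic.Anthropic()

# Hypothetical tool definition. When the model decides to call it, the response
# ends with stop_reason 'tool_use' and contains the requested tool input; the
# caller runs the tool and sends the result back in a follow-up message.
message = client.messages.create(
    model='claude-3-5-haiku-20241022',
    max_tokens=1024,
    tools=[
        {
            'name': 'get_weather',
            'description': 'Get the current weather for a city',
            'input_schema': {
                'type': 'object',
                'properties': {'city': {'type': 'string'}},
                'required': ['city'],
            },
        }
    ],
    messages=[{'role': 'user', 'content': 'What is the weather in Paris?'}],
)
print(message.stop_reason)  # 'tool_use' when the model requests a tool call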
Supported Data Types
Input Types
text
image
Output Types
text
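Image input is supplied as a base64-encoded content block alongside text in the same user message. A minimal sketch, assuming the anthropic Python SDK and a local photo.jpg (a hypothetical example file):

import base64

import anthropic

client = anthropic.Anthropic()

# Read and base64-encode a local image (photo.jpg is a hypothetical path).
with open('photo.jpg', 'rb') as f:
    image_data = base64.standard_b64encode(f.read()).decode('utf-8')

message = client.messages.create(
    model='claude-3-5-haiku-20241022',
    max_tokens=1024,
    messages=[
        {
            'role': 'user',
            'content': [
                {
                    'type': 'image',
                    'source': {
                        'type': 'base64',
                        'media_type': 'image/jpeg',
                        'data': image_data,
                    },
                },
                {'type': 'text', 'text': 'Describe this image.'},
            ],
        }
    ],
)
print(message.content)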
Strengths & Weaknesses
Good at
Code generation
Real-time chatbots
Data extraction and labeling
Content classification
Speed
Cost-effectiveness
Multilingual fluency
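For the classification and extraction use cases above, the expected output format can be pinned down in the prompt so responses stay short and machine-readable. A minimal sketch, assuming the same SDK; the labels and ticket text are hypothetical:

import anthropic

client = anthropic.Anthropic()

# Hypothetical labeling task: classify a support ticket into one of three labels.
message = client.messages.create(
    model='claude-3-5-haiku-20241022',
    max_tokens=16,
    messages=[
        {
            'role': 'user',
            'content': (
                'Classify the following ticket as billing, bug, or feature_request. '
                'Reply with the label only.\n\n'
                'Ticket: "I was charged twice for my subscription this month."'
            ),
        }
    ],
)
print(message.content[0].text)  # expected: billing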
Additional Information
Latest Update
Oct 22, 2024
Knowledge Cutoff
No data