
DeepSeek V3.1 671B

by DeepSeek
Flagship Free & Open Source 🏆 Ranked #19 of 85
83.6
Overall Score
out of 100
About

DeepSeek's updated flagship MoE model with 671B total parameters and improved capability over V3. It balances frontier performance with efficient inference through a sparse mixture-of-experts architecture, activating only a subset of parameters per token.

Key Metrics
Context Window
128K
tokens
Avg Response
1100
milliseconds
Input Cost
$0.27
per million tokens
Output Cost
$1.10
per million tokens
Arena ELO
1310
Chatbot Arena rating
MT-Bench
9.1
out of 10
Benchmark Scores
MMLU
89.0%
HumanEval
91.0%
MATH
87.0%
GPQA
62.0%
MT-Bench
9.1/10
Capability Profile
Strengths & Limitations
Strengths
✓ Frontier performance ✓ MoE efficiency ✓ Open source ✓ Strong coding ✓ Cost-efficient inference
Limitations
⚠ Requires server-scale hardware ⚠ Large download ⚠ High GPU memory for local use
Ideal Use Cases
Enterprise AI Research Complex coding Data analysis On-premise deployment
Model Details
Provider DeepSeek
Released 2025-03-01
Type Free & Open Source
Multimodal No
Tier Flagship
Global rank #19 / 85
Pricing (USD)
Input tokens $0.27/M
Output tokens $1.10/M
Per 1,000 tokens ≈ $0.0003 input / $0.0011 output
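The per-request cost follows directly from the listed per-million-token rates. A minimal sketch of the arithmetic, assuming the prices above; `estimate_cost` is a hypothetical helper, not part of any official SDK:

```python
# Listed DeepSeek V3.1 API rates (USD per million tokens), from the card above.
INPUT_RATE_PER_M = 0.27
OUTPUT_RATE_PER_M = 1.10

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD at the listed rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 10K-token prompt with a 2K-token reply.
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0049
```

This also reproduces the per-1,000-token figures shown above: $0.00027 for input and $0.0011 for output.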
All Benchmarks
MMLU 89.0%
HumanEval 91.0%
MATH 87.0%
GPQA 62.0%
MT-Bench 9.1/10
Arena ELO 1310
