
Mixtral 8x7B

by Mistral AI
Flagship · Free & Open Source · 🏆 Ranked #67 of 85
63.0
Overall Score
out of 100
About

Mistral AI's Mixture-of-Experts model activating 2 of 8 expert networks per token, matching GPT-3.5 quality at much lower inference cost. A landmark open-source model for quality-efficiency balance.
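The description above refers to top-2 routing over 8 experts: a router scores all experts per token but only the two highest-scoring ones run. The NumPy sketch below illustrates that routing idea under simplifying assumptions — a plain linear router, tanh experts, and a softmax over only the selected logits — and is not Mixtral's actual SwiGLU feed-forward experts or its training-time load balancing.

```python
import numpy as np

def top2_moe_layer(x, gate_w, experts, k=2):
    """Route one token's hidden state x through the top-k experts.

    x       : (d,) hidden state for a single token
    gate_w  : (d, n_experts) router weights
    experts : list of callables, each mapping (d,) -> (d,)
    Returns the softmax-weighted sum of the k selected experts' outputs.
    """
    logits = x @ gate_w                   # one router score per expert
    top_idx = np.argsort(logits)[-k:]     # indices of the k best experts
    top_logits = logits[top_idx]
    weights = np.exp(top_logits - top_logits.max())
    weights /= weights.sum()              # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top_idx))

# Toy illustration: 8 experts, hidden size 16, only 2 run per token.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
gate_w = rng.normal(size=(d, n_experts))
experts = [
    (lambda W: (lambda h: np.tanh(h @ W)))(rng.normal(size=(d, d)) / np.sqrt(d))
    for _ in range(n_experts)
]
out = top2_moe_layer(rng.normal(size=d), gate_w, experts)
print(out.shape)  # (16,)
```

Because only 2 of the 8 experts execute per token, the per-token compute is roughly that of a ~13B dense model even though all ~47B parameters must stay resident in memory — which is also why "high total parameter memory" appears under the limitations below.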

Key Metrics
Context Window
32K
tokens
Avg Response
700
milliseconds
Input Cost
$0.24
per million tokens
Output Cost
$0.24
per million tokens
Arena ELO
1191
Chatbot Arena rating
MT-Bench
8.5
out of 10
Benchmark Scores
MMLU
70.6%
HumanEval
75.1%
MATH
58.0%
GPQA
35.0%
MT-Bench
8.5/10
Capability Profile
Strengths & Limitations
Strengths
✓ MoE efficiency ✓ Strong reasoning ✓ Open source ✓ Good coding ✓ Fast inference
Limitations
⚠ High total parameter memory ⚠ More complex to deploy ⚠ Ageing benchmarks
Ideal Use Cases
Research · High-quality chatbots · Code generation · Document analysis · Cost-effective inference
Model Details
Provider Mistral AI
Released 2023-12-11
Type Free & Open Source
Multimodal No
Tier Flagship
Global rank #67 / 85
Pricing (USD)
Input tokens $0.24/M
Output tokens $0.24/M
Per 1,000 tokens ≈ $0.00024 input / $0.00024 output
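With input and output both billed at $0.24 per million tokens, a request's cost is easy to estimate. The helper below is a hypothetical illustration of that arithmetic, not an official pricing calculator.

```python
# Prices taken from the table above (USD per million tokens).
INPUT_USD_PER_MTOK = 0.24
OUTPUT_USD_PER_MTOK = 0.24

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # $0.000600
```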
All Benchmarks
MMLU 70.6%
HumanEval 75.1%
MATH 58.0%
GPQA 35.0%
MT-Bench 8.5/10
Arena ELO 1191