
Mistral Nemo 12B

by Mistral AI
Efficient · Free & Open Source · 🏆 Ranked #73 of 85
60.9
Overall Score
out of 100
About

A compact yet highly capable 12B model developed jointly by Mistral AI and NVIDIA. Uses a 128K context window and a new tokenizer (Tekken) optimised for multilingual content, balancing size and performance.

Key Metrics
Context Window
128K
tokens
Avg Response
580
milliseconds
Input Cost
$0.075
per million tokens
Output Cost
$0.075
per million tokens
Arena ELO
1180
Chatbot Arena rating
MT-Bench
8.2
out of 10
Benchmark Scores
MMLU
68.0%
HumanEval
75.0%
MATH
55.0%
GPQA
33.0%
MT-Bench
8.2/10
Capability Profile
Strengths & Limitations
Strengths
✓ Large context · ✓ Multilingual · ✓ Open source · ✓ Balanced capability · ✓ NVIDIA-optimised
Limitations
⚠ Requires 12GB+ VRAM · ⚠ Less powerful than 70B-class models · ⚠ Newer Tekken tokenizer has less ecosystem compatibility
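The 12GB+ VRAM figure matches what the weights alone demand at 8-bit precision. A quick back-of-the-envelope sketch (weights only; the KV cache and activations add more on top, especially at long context):

```python
# Rough VRAM needed just to hold a 12B-parameter model's weights.
# billions of parameters x bytes per parameter = gigabytes (approx.)
def weights_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * bytes_per_param

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{weights_vram_gb(12, bpp):.0f} GB")
# fp16: ~24 GB, int8: ~12 GB, int4: ~6 GB
```

So the model fits a 12GB consumer GPU only when quantised to 8-bit or below.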
Ideal Use Cases
Multilingual applications · Document analysis · Customer support · Summarisation · Code assistance
Model Details
Provider Mistral AI
Released 2024-07-18
Type Free & Open Source
Multimodal No
Tier Efficient
Global rank #73 / 85
Pricing (USD)
Input tokens $0.075/M
Output tokens $0.075/M
Per 1,000 tokens ≈ $0.000075 input / $0.000075 output
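At $0.075 per million tokens for both directions, estimating the cost of a request is simple arithmetic. A minimal sketch (the function name and defaults are illustrative, not part of any API):

```python
# Estimate the USD cost of one request given token counts and
# per-million-token rates (defaults match Mistral Nemo's listed pricing).
def request_cost_usd(input_tokens: int, output_tokens: int,
                     in_rate_per_m: float = 0.075,
                     out_rate_per_m: float = 0.075) -> float:
    return (input_tokens / 1_000_000) * in_rate_per_m \
         + (output_tokens / 1_000_000) * out_rate_per_m

# A 10K-token prompt with a 1K-token reply:
print(f"${request_cost_usd(10_000, 1_000):.6f}")  # -> $0.000825
```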
All Benchmarks
MMLU 68.0%
HumanEval 75.0%
MATH 55.0%
GPQA 33.0%
MT-Bench 8.2/10
Arena ELO 1180