Social proof
6% of the 985 scanned PCs run GPT-OSS 120B fully on GPU; 287 offload at least part of the work to GPU. Figures are based on anonymous compatibility checks.
General-purpose local model brief
Quantization tip: Benchmark at least two quantizations and validate with a task-specific eval set before production use.
New to local models? Smaller quantization variants are easier to run, while larger ones can improve quality at the cost of more memory.
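The benchmarking tip above can be sketched as a tiny eval loop: score each quantization variant on the same task-specific eval set and compare. The `run_q4`/`run_q5` callables below are hypothetical stand-ins for whatever inference backend you use (llama.cpp server, Ollama, etc.); the toy answer tables exist only so the sketch runs.

```python
# Minimal sketch of a task-specific eval comparing two quantization variants.
# Replace run_q4/run_q5 with real calls into your inference backend.

def exact_match_accuracy(generate, eval_set):
    """Fraction of prompts whose generated output matches the expected answer."""
    hits = sum(1 for prompt, expected in eval_set
               if generate(prompt).strip() == expected)
    return hits / len(eval_set)

# Toy stand-ins (hypothetical) so the sketch is self-contained.
answers_q4 = {"2+2=": "4", "capital of France?": "Paris", "3*3=": "8"}
answers_q5 = {"2+2=": "4", "capital of France?": "Paris", "3*3=": "9"}
run_q4 = lambda p: answers_q4[p]
run_q5 = lambda p: answers_q5[p]

eval_set = [("2+2=", "4"), ("capital of France?", "Paris"), ("3*3=", "9")]

acc_q4 = exact_match_accuracy(run_q4, eval_set)  # 2 of 3 correct
acc_q5 = exact_match_accuracy(run_q5, eval_set)  # 3 of 3 correct
```

If the cheaper quantization scores within your tolerance on the tasks you actually care about, the memory savings are usually worth it.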
Q4_K_M
60 GBMin VRAM: 69 GB
Recommended VRAM: 78 GB
Min RAM: 90 GB
Context: 8K / 8K
Q5_K_M
75 GBMin VRAM: 86.3 GB
Recommended VRAM: 97.5 GB
Min RAM: 113 GB
Context: 8K / 8K
Q8_0
120 GBMin VRAM: 138 GB
Recommended VRAM: 156 GB
Min RAM: 180 GB
Context: 8K / 8K
FP16
240 GBMin VRAM: 276 GB
Recommended VRAM: 312 GB
Min RAM: 360 GB
Context: 8K / 8K
| Quantization | File Size | Min VRAM | Recommended VRAM | Min RAM | Context |
|---|---|---|---|---|---|
| Q4_K_M | 60 GB | 69 GB | 78 GB | 90 GB | 8K / 8K |
| Q5_K_M | 75 GB | 86.3 GB | 97.5 GB | 113 GB | 8K / 8K |
| Q8_0 | 120 GB | 138 GB | 156 GB | 180 GB | 8K / 8K |
| FP16 | 240 GB | 276 GB | 312 GB | 360 GB | 8K / 8K |
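The figures in the table are consistent with fixed multiples of the quantized file size: roughly 1.15x for minimum VRAM, 1.3x for recommended VRAM, and 1.5x for minimum RAM. These factors are inferred from the table rows, not an official formula, so treat the sketch below as a rough planning aid.

```python
# Rough memory estimate from a GGUF file size, using the multipliers
# implied by the table above (assumptions, not an official formula).

def estimate_memory_gb(file_size_gb):
    return {
        "min_vram": round(file_size_gb * 1.15, 1),
        "rec_vram": round(file_size_gb * 1.30, 1),
        "min_ram":  round(file_size_gb * 1.50, 1),
    }

# Q4_K_M (60 GB file) reproduces the table row: 69 / 78 / 90 GB.
print(estimate_memory_gb(60))
```

Note the table rounds some values up (e.g. Q5_K_M min RAM shows 113 GB where 75 × 1.5 = 112.5), so expect small discrepancies.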
These GPUs meet the recommended 78 GB VRAM for the Q4_K_M quantization. Estimated speeds are approximate and assume full GPU offloading.
Budget Pick: Apple M2 Max · 96 GB VRAM · ~5.3 tok/s. Lowest cost that meets the recommended VRAM.
Fastest Pick: Apple M4 Ultra · 256 GB VRAM · ~14.6 tok/s. Highest estimated throughput.
Best Value: Apple M1 Ultra · 128 GB VRAM · ~10.7 tok/s. Best speed per dollar of VRAM.
Need a detailed comparison? See all GPU rankings for GPT-OSS 120B.
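The throughput estimates above are roughly consistent with a common rule of thumb for memory-bound decoding: tokens per second is about memory bandwidth divided by the bytes read per token, times an efficiency factor. The bandwidth figure and the ~0.8 efficiency factor below are assumptions for illustration, not measured values, and bytes per token can be lower than the full file size for sparse architectures.

```python
# Rule-of-thumb throughput estimate for memory-bound token generation:
#   tok/s ~= bandwidth / bytes_read_per_token * efficiency
# Assumptions: dense decoding reads roughly the whole model per token,
# and sustained efficiency is ~80% of peak bandwidth.

def estimate_tok_per_s(bandwidth_gb_s, model_size_gb, efficiency=0.8):
    return bandwidth_gb_s / model_size_gb * efficiency

# Apple M2 Max has ~400 GB/s unified memory bandwidth; with the 60 GB
# Q4_K_M file this lands near the ~5.3 tok/s shown above.
print(round(estimate_tok_per_s(400, 60), 1))
```

Real-world numbers vary with context length, batch size, and backend, so use this only to sanity-check whether a GPU is in the right ballpark.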