Compatibility Check
Can I Run mxbai-embed-large on Apple M3?
Yes. Apple M3 runs mxbai-embed-large fully on GPU at the FP16 quantization, at an estimated ~149.3 tokens/sec.
Verdict: Full GPU
Best variant: FP16
Full GPU inference: the 24 GB of VRAM comfortably exceeds the 2 GB recommendation.
| Spec | Value |
|---|---|
| GPU VRAM | 24 GB |
| Min VRAM (best fit) | 1 GB |
| Recommended VRAM | 2 GB |
| Estimated tok/s | ~149.3 |
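The 0.67 GB FP16 file size is consistent with the model's commonly cited ~335M parameters at two bytes each (335M × 2 B ≈ 0.67 GB), so the weights occupy only a small fraction of the 24 GB. Once loaded, one common way to serve this model locally is Ollama; a minimal sketch, assuming a default local Ollama install with the model already pulled:

```python
# Minimal sketch: request an embedding from a locally served
# mxbai-embed-large. Assumes Ollama on its default port; pull the
# model first with `ollama pull mxbai-embed-large`.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "mxbai-embed-large", "prompt": "The quick brown fox"},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding))  # mxbai-embed-large produces 1024-dimensional vectors
```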
Every mxbai-embed-large quantization on Apple M3
Each row runs the compatibility engine against your VRAM, RAM, and the model's requirements.
| Quantization | File Size | Min VRAM | Rec VRAM | Context | Verdict | Estimated tok/s |
|---|---|---|---|---|---|---|
| FP16 (best fit) | 0.67 GB | 1 GB | 2 GB | 512 / 512 | Full GPU | ~149.3 |
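Each verdict reduces to comparing available VRAM against the row's minimum and recommended figures. Below is a simplified sketch of that check, using the FP16 row above; the function name and the labels other than "Full GPU" are hypothetical, not the site's actual engine:

```python
# Illustrative VRAM-fit check mirroring the table's verdict column.
# Labels other than "Full GPU" are assumptions for demonstration.
def vram_verdict(gpu_vram_gb: float, min_gb: float, rec_gb: float) -> str:
    if gpu_vram_gb >= rec_gb:
        return "Full GPU"    # weights plus headroom fit entirely in VRAM
    if gpu_vram_gb >= min_gb:
        return "Tight fit"   # loads, but little room for context buffers
    return "CPU offload"     # weights spill into system RAM

# FP16 row: 24 GB available vs. 1 GB minimum / 2 GB recommended.
print(vram_verdict(24.0, min_gb=1.0, rec_gb=2.0))  # -> Full GPU
```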
Apple M3 is a solid pick for mxbai-embed-large
Need a second card or a fresh build? These links help support the site at no extra cost.