My friends at Mistral quietly nerfed the “open vs good” tradeoff with Large 3. You get a 675B MoE w/ 256k context, full multimodal, Apache-2.0, and NVFP4 checkpoints that actually fit on a single 8×H100 box. In other words, frontier-ish performance without a datacenter that looks like an airport. Meanwhile, Ministral 3B / 8B / 14B push real vision + reasoning to edge hardware, not just “runs on a laptop if you like 3 tokens per minute”. If you had written Mistral off, maybe it’s time for a second look.
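The "fits on a single 8×H100 box" claim checks out on a napkin, assuming NVFP4 stores roughly 4 bits per weight (ignoring the small per-block scale metadata):

```python
# Back-of-envelope: does a 675B-param model at ~4 bits/weight fit in 8x H100 (80 GB each)?
params = 675e9
bits_per_param = 4                     # NVFP4: 4-bit floating-point weights (scale metadata ignored)
weight_bytes = params * bits_per_param / 8
total_hbm = 8 * 80e9                   # aggregate HBM across the 8-GPU node

print(f"weights: {weight_bytes / 1e9:.1f} GB of {total_hbm / 1e9:.0f} GB HBM")
```

Weights alone come to ~337.5 GB against 640 GB of HBM, leaving real headroom for KV cache at that 256k context.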