Governance for opaque models
Stanford’s 2026 AI Index dropped today, and one number jumped out at me: 80 of the 95 most notable models released last year shipped without disclosing their training code. No parameter counts. No dataset disclosures. The models keep getting better and more opaque at the same time.

If you’re deploying AI in an enterprise, you’re building on increasingly powerful black boxes whose behavior even the vendors can’t fully predict. The governance and observability layer around these models isn’t a nice-to-have anymore. It’s the entire trust surface.

https://hai.stanford.edu/ai-index/2026-ai-index-report