GPT-5.5 Instant, Gemma 4, and Claude Opus 4.7 all shipped within 48 hours — and together they reveal a decisive shift from raw capability to speed, cost, and workflow integration. Plus: a 12-million-token context window breakthrough, hidden tokenizer pricing at Anthropic, and Stanford's AI institutional merger.
Audio is available on Spreaker — see link below.
Three major AI model updates in forty-eight hours. That's not a coincidence.
Start with OpenAI. GPT-5.5 Instant is now the default ChatGPT model.
Now here's the development with the clearest business footprint. Anthropic shipped Claude add-ins for Excel, Word, PowerPoint, and Outlook, with persistent context that carries across all four apps and formula-aware capabilities in Excel.
Context windows are moving fast. A startup called Subquadratic has a model running at twelve million tokens using subquadratic attention scaling. The reason that matters: standard attention cost grows quadratically with context length, which is what has made very long contexts prohibitively expensive. Subquadratic scaling sidesteps that wall.
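To make the scaling claim concrete, here's a back-of-the-envelope sketch. It compares quadratic attention cost against one common subquadratic target, n·log₂(n), at the twelve-million-token length from the story. The n·log n shape is an assumption for illustration — the episode doesn't say which subquadratic method the startup uses.

```python
import math

def quadratic_cost(n: int) -> float:
    """Standard attention: every token attends to every other token."""
    return float(n) * n

def subquadratic_cost(n: int) -> float:
    """A hypothetical subquadratic law: n * log2(n) operations."""
    return n * math.log2(n)

base = 8_000        # a typical context window today (assumed baseline)
long = 12_000_000   # the twelve-million-token context from the episode

# Going from 8K to 12M tokens is a 1,500x length increase...
quad_blowup = quadratic_cost(long) / quadratic_cost(base)
sub_blowup = subquadratic_cost(long) / subquadratic_cost(base)

print(f"quadratic cost grows {quad_blowup:,.0f}x")   # 2,250,000x
print(f"n*log n cost grows {sub_blowup:,.0f}x")      # roughly 2,700x
```

Under quadratic attention, a 1,500x longer context costs over two million times more compute; under an n·log n law it costs only a few thousand times more — the difference between impossible and merely expensive.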
One number worth flagging before moving on. Anthropic's upgraded tokenizer for Claude Opus 4.7 splits the same text into more tokens, raising effective input costs twelve to twenty-seven percent even though listed per-token pricing didn't change.
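The mechanics are simple arithmetic: if a new tokenizer emits more tokens for the same text, your bill rises in proportion, with no change to the posted rate. The price and token counts below are invented for illustration, not Anthropic's actual figures.

```python
# Hypothetical numbers showing tokenizer-driven cost inflation.
PRICE_PER_MTOK = 15.00   # assumed list price, USD per million input tokens

old_tokens = 1_000_000   # tokens the old tokenizer produced for a workload
new_tokens = 1_270_000   # same text, 27% more tokens under the new tokenizer

old_cost = old_tokens / 1e6 * PRICE_PER_MTOK   # $15.00
new_cost = new_tokens / 1e6 * PRICE_PER_MTOK   # $19.05

increase_pct = (new_cost - old_cost) / old_cost * 100
print(f"effective input cost up {increase_pct:.0f}% at the same list price")
```

This is why token counts, not just list prices, belong in any API cost comparison.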
One shorter item. Stanford merged its AI and data science institutes under a unified organization called HAI, with computer scientist James Landay taking the lead role and Fei-Fei Li moving into a university-wide advisory position.
Pull back for a moment. What this week's releases actually tell you is that the model quality race is becoming a model efficiency race.
Chapter summary auto-generated from the verified script. Listen to the full episode for the complete content.