AI Daily Briefing · 9 May 2026 · 5 min

Three Model Drops in 48 Hours: GPT-5.5, Gemma 4 & Claude Opus 4.7

GPT-5.5 Instant, Gemma 4, and Claude Opus 4.7 all shipped within 48 hours — and together they reveal a decisive shift from raw capability to speed, cost, and workflow integration. Plus: a 12-million-token context window breakthrough, hidden tokenizer pricing at Anthropic, and Stanford's AI institutional merger.

Audio is available on Spreaker — see link below.

What's covered

Three Model Drops in 48 Hours

Three major AI model updates in forty-eight hours. That's not a coincidence.

GPT-5.5 Instant and Gemma 4 Speed Race

Start with OpenAI. GPT-5.5 Instant is now the default ChatGPT model.

Claude Moves Into Microsoft Office

Now here's the development with the clearest business footprint. Anthropic shipped Claude add-ins for Excel, Word, PowerPoint, and Outlook, with persistent context that carries across all four apps and formula-aware capabilities in Excel.

Subquadratic's 12 Million Token Context Window

Context windows are moving fast. A startup called Subquadratic has a model running at twelve million tokens of context using subquadratic attention scaling. That sidesteps the traditional bottleneck: attention cost grows quadratically with context length, which makes very long contexts prohibitively expensive.
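To make the scaling claim concrete, here is a rough back-of-the-envelope sketch. Subquadratic has not disclosed its model's actual cost curve; the `n * log2(n)` curve below is purely an illustrative example of subquadratic growth, not their method.

```python
# Illustrative arithmetic: why quadratic attention makes very long
# contexts expensive. The n*log2(n) alternative is a hypothetical
# subquadratic cost curve used only for comparison.
import math

def quadratic_cost(n_tokens: int) -> float:
    """Pairwise attention: every token attends to every other token."""
    return float(n_tokens) ** 2

def subquadratic_cost(n_tokens: int) -> float:
    """Hypothetical n * log2(n) cost curve, for comparison only."""
    return n_tokens * math.log2(n_tokens)

n = 12_000_000  # the twelve-million-token window mentioned above
ratio = quadratic_cost(n) / subquadratic_cost(n)
print(f"At {n:,} tokens, quadratic attention is roughly {ratio:,.0f}x "
      f"more work than an n log n alternative")
```

At twelve million tokens the gap is on the order of hundreds of thousands of times, which is why quadratic attention has been the binding constraint on context length.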

Claude Opus 4.7 Tokenizer Cost Surprise

One number worth flagging before moving on. Anthropic's upgraded tokenizer for Claude Opus 4.7 increased input costs twelve to twenty-seven percent, even though listed model pricing didn't change.
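The mechanism is simple but easy to miss: if a new tokenizer produces more tokens for the same text, the effective cost rises even at an identical per-token price. The figures below are example values chosen to fall inside the reported twelve-to-twenty-seven-percent range, not Anthropic's actual prices or token counts.

```python
# Illustrative: how an upgraded tokenizer raises effective input cost
# even when list pricing is unchanged. All numbers are hypothetical.
price_per_mtok = 15.00       # example list price per million input tokens
tokens_old = 1_000_000       # tokens the old tokenizer produced for a corpus
tokens_new = 1_200_000       # same corpus, 20% more tokens under the new tokenizer

cost_old = tokens_old / 1e6 * price_per_mtok
cost_new = tokens_new / 1e6 * price_per_mtok
increase = (cost_new - cost_old) / cost_old
print(f"Effective input cost up {increase:.0%} with identical list pricing")
```

The practical takeaway: when comparing model costs across versions, measure tokens per document, not just the posted per-token rate.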

Stanford HAI Merger and Kaggle Course

Two shorter items. Stanford merged its AI and data science institutes under a unified organization called HAI, with computer scientist James Landay taking the lead role and Fei-Fei Li moving into a university-wide advisory position.

The Convergence Signal

Pull back for a moment. What this week's releases actually tell you is that the model quality race is becoming a model efficiency race.

Chapter summary auto-generated from the verified script. Listen to the full episode for the complete content.
