The Pentagon just awarded seven classified AI contracts — and deliberately excluded Anthropic for refusing to strip safety guardrails from autonomous weapons systems. This is the story of how military AI policy, competitive consolidation, and a precedent-setting 'supply chain risk' designation are reshaping every AI company's incentive to maintain safety policies.
Audio is available on Spreaker — see link below.
The Pentagon just handed out seven classified AI contracts, and the deliberate absence of one company says more about where military AI is headed than anything in the deals themselves: Anthropic was left out.