The largest empirical study of criminal AI use finds hackers struggling with mainstream tools, and the reason explains your own team's productivity gap. Plus: the Pentagon finalizes AI vendor contracts, and Anthropic is out.
Audio is available on Spreaker; see link below.
The largest empirical study of criminal AI use ever conducted has returned a finding that cuts against the loudest fears in the security industry: hackers can't get AI tools to work for them. Researchers at the University of Edinburgh analyzed over one hundred million posts from underground cybercrime forums.
Here's what actually explains that. AI coding assistants aren't capability equalizers.
There's a second layer to this. Claude, Codex, and similar mainstream platforms are also proving resistant to jailbreak attempts in criminal contexts.
Shifting to the structural story. The Pentagon has finalized AI contracts with seven major vendors for classified military networks.
The takeaway for anyone building with AI is straightforward but worth sitting with. The skill floor finding isn't just about criminals.
Chapter summary auto-generated from the verified script. Listen to the full episode for the complete content.