AI Developer Daily: News & Tools · 6 May 2026 · 4 min

Hackers Can't Use AI Tools — And What That Means for Your Team

The largest empirical study of criminal AI use finds hackers struggling with mainstream tools — and the reason explains your own team's productivity gap. Plus: the Pentagon finalizes AI vendor contracts, and Anthropic is out.


Audio is available on Spreaker.

What's covered

Cybercriminals Struggling With AI Tools

The largest empirical study of criminal AI use ever conducted just came back with a finding that cuts against the loudest fears in the security industry: hackers can't get AI tools to work for them. Researchers at the University of Edinburgh analyzed over one hundred million posts from underground cybercrime forums.


The Skill Floor Problem

Here's what actually explains that finding: AI coding assistants aren't capability equalizers.


Guardrails Holding on Mainstream Platforms

There's a second layer to this. Claude, Codex, and similar mainstream platforms are also proving resistant to jailbreak attempts in criminal contexts.


Pentagon AI Vendor Consolidation

Shifting to the structural story. The Pentagon has finalized AI contracts with seven major vendors for classified military networks.


What Developers Should Take From This

The takeaway for anyone building with AI is straightforward but worth sitting with. The skill floor finding isn't just about criminals.


Chapter summary auto-generated from the verified script. Listen to the full episode for the complete content.
