Anthropic released a threat intelligence report that introduces "vibe hacking" and a bevy of other examples where criminals weaponized agentic AI.

In a blog post, Anthropic outlined how Claude has been misused, including for extortion, ransomware and vibe hacking. To its credit, Anthropic has developed defenses, but even the best perimeters get tested and breached.

By penning its threat intelligence report, Anthropic is winning enterprise credibility through transparency. Nevertheless, the attacks and threat actors are a little jarring. In short, agentic AI has lowered the bar for cybercrime, and criminals are weaponizing AI for profiling, data analysis, theft and creating false identities.

Here's a look at some of the more interesting items in Anthropic's report.

Meet vibe hacking. Anthropic said it thwarted a cybercriminal who used Claude Code to steal personal data from 17 organizations. The hacker threatened to expose the data to extort victims into paying ransoms topping $500,000.

Claude Code was used for automation, credential harvesting and network penetration.

No-code ransomware as a service. In this case, Claude was used to develop, market and distribute ransomware with built-in evasion capabilities.

Chinese and North Korean threat actors used Claude to carry out attacks and enhance their operations. North Korean operatives used Claude to distribute malware and land fraudulent IT jobs. A Chinese threat actor used Claude to attack infrastructure in Vietnam.

The full report has in-depth case studies and is worth the read.