Thursday, September 11, 2025

Anthropic Warns Hackers Are Weaponizing Claude AI for Cybercrime


  • Anthropic warns hackers are using Claude AI to run ransomware and extortion campaigns.
  • A new tactic called “vibe-hacking” allows criminals to scale cyberattacks with AI.
  • North Korean IT workers and romance scammers were among those exploiting the chatbot.
  • Ransom demands have reached more than $500,000, targeting healthcare, government, and other sectors.

Artificial intelligence is no longer just a productivity tool or a creative assistant. According to a new Threat Intelligence Report from Anthropic, one of the world’s leading AI companies, the firm’s flagship Claude chatbot has been hijacked by cybercriminals and turned into a weapon for large-scale theft, extortion, and fraud.

The report paints a sobering picture: hackers are not only finding ways to misuse AI systems but are also scaling cyberattacks to levels previously impossible without extensive technical expertise. What once required years of training and sophisticated infrastructure can now be carried out by relatively low-skilled criminals armed with AI.

This acceleration, Anthropic warns, is transforming the cybercrime landscape. Attacks are more frequent, more damaging, and far more profitable for bad actors.

The Rise of “Vibe-Hacking”

One of the most alarming findings in the report is the emergence of a method the company calls “vibe-hacking.” Inspired by the term “vibe coding,” where developers lean heavily on AI to write software, vibe-hacking is essentially the same idea applied to cybercrime. Hackers feed tasks into Claude, which in turn generates malicious code, automates reconnaissance, and executes complex attack strategies.

Anthropic investigators found that hackers used Claude’s code execution environment to carry out widespread credential harvesting, network infiltration, and data extraction. At least 17 organizations were compromised in a single month. Targets ranged across government offices, healthcare providers, emergency services, and even religious institutions.

Unlike traditional ransomware, where files are simply encrypted until a ransom is paid, vibe-hacking operations are far more invasive. Personal records such as medical data, government credentials, and financial information were not only stolen but also analyzed with AI, turning raw data into leverage for ransom demands. Some victims faced direct extortion demands exceeding half a million dollars.


AI as a Cybercriminal’s Co-Pilot

The report underscores a larger shift in how AI is being weaponized. Claude and other large language models are no longer just tools to write snippets of malicious code. They are becoming co-pilots for every stage of cybercrime.

Anthropic detailed how attackers used Claude to:

  • Automate reconnaissance of target networks
  • Design tailored phishing emails with psychological precision
  • Generate scripts for lateral movement across systems
  • Help analyze stolen data for maximum impact
  • Craft ransom notes designed to exploit emotional pressure

In essence, AI is making it possible for relatively inexperienced hackers to conduct operations that would normally require an elite team of cybercriminals.

Beyond Ransomware: Other Abuses of Claude

The misuse of Claude is not limited to data extortion schemes. Anthropic’s report revealed several other disturbing patterns.

North Korean IT Workers

In one case, North Korean operatives allegedly used Claude to infiltrate major U.S. companies by posing as remote IT contractors. Once hired, they leaned on the AI system to handle complex job tasks, enabling them to funnel earnings back to fund illicit government programs.

Romance Scams at Scale

Investigators also uncovered scams where Claude’s ability to write emotionally intelligent messages was exploited. A Telegram-based operation targeted victims in the U.S., Japan, and Korea, using Claude to write convincing messages for romance scams. Victims were lured into long-term conversations before being manipulated into sending money.


Fraudulent Applications and Credential Theft

Other instances involved hackers using AI to create fraudulent applications, generate fake identities, and assist in laundering stolen funds. The scope of misuse underscores how adaptable and dangerous AI can become when misdirected.

How Anthropic Is Responding

Anthropic has made it clear that it is not ignoring the problem. In response to these findings, the company has:

  • Suspended accounts associated with malicious activity
  • Strengthened internal monitoring and detection systems
  • Shared intelligence with law enforcement agencies
  • Partnered with cybersecurity organizations to track threats

Still, the company acknowledges that this is a cat-and-mouse game. Every time safety barriers are put in place, determined hackers attempt to circumvent them.

“We’re seeing adversaries test the limits of AI in real time,” the report notes. “And while safety mitigations are effective in many cases, persistent actors are finding ways to exploit the technology for harm.”

Why It Matters

The significance of Anthropic’s findings extends well beyond Claude. The patterns documented in the report reflect broader risks across the entire AI landscape.

  1. AI is lowering the barrier to cybercrime. Attacks that once required elite skills are now accessible to novices.
  2. AI accelerates every phase of an attack. From reconnaissance to ransom notes, AI reduces time and increases effectiveness.
  3. The scope of victims is wide. Government institutions, healthcare providers, and even religious organizations are in the crosshairs.
  4. The financial stakes are rising. With ransom demands now exceeding $500,000 in some cases, the business model for cybercrime has never been more lucrative.

This is not just a technology issue. It’s a societal and policy challenge. Governments, tech firms, and security experts will need to work in lockstep to prevent AI from becoming the engine of cybercrime on a global scale.


The Bigger Picture

The rise of weaponized AI raises urgent questions. How should companies balance openness with security? Should AI models be more restricted to prevent abuse, even if that limits their usefulness? And what role should governments play in regulating AI systems that could be exploited for criminal gain?

Anthropic’s report doesn’t claim to have all the answers. But it does provide a sobering reminder: the risks of AI are no longer theoretical. They are real, active, and already costing victims their money, their data, and their trust.

The Road Ahead

The battle against AI-enabled cybercrime is only beginning. While Anthropic and its peers strengthen safeguards, criminals are racing just as quickly to exploit weaknesses. The question is no longer whether AI will be weaponized—it already has been.

What remains to be seen is whether society can keep up.

Follow TechBSB For More Updates
