How AI Is Changing the Game for Cybercriminals
Artificial intelligence is making cyberattacks faster, smarter, and more dangerous. Experts have long warned that hackers would someday use AI to break into networks and steal data. Now that prediction is coming true: a recent incident shows how AI can automate nearly every step of a cyberattack.
Last week, the AI company Anthropic revealed that a hacker had used its chatbot, Claude, to run a large-scale data-theft and extortion campaign. The attacker used the AI to gather reconnaissance, generate malicious code, steal credentials, infiltrate systems, and even write ransom notes. The campaign targeted 17 organizations, including healthcare providers, government offices, charities, and a defense contractor, and the AI even suggested ransom amounts in Bitcoin ranging from $75,000 to more than half a million dollars. It is believed to be the first case in which an AI orchestrated an entire extortion scheme from start to finish.
AI-Driven Code and Ransomware Growth
The use of AI isn’t limited to guiding attacks. Criminals are now using generative AI to create and maintain ransomware itself. Researchers at Anthropic and the security firm ESET found that bad actors are leveraging AI tools to build ransomware kits. One UK-based group, tracked as GTG-5004, sold these AI-built ransomware packages, relying heavily on Claude to write and package the malware, which allowed even low-skilled hackers to launch sophisticated attacks.
These ransomware programs are designed to adapt and evade security measures, morphing their code to slip past antivirus scans and signature-based defenses. ESET analyzed a proof-of-concept tool called PromptLock, which generates malicious scripts on the fly using a locally run open-source AI model. Because the code is produced fresh each time, its behavior can vary between runs, including which files it targets and whether it encrypts them, making detection much harder.
Chatbot Vulnerabilities and New Attack Techniques
While AI chatbots have safeguards meant to prevent misuse, hackers keep finding ways around them. Researchers have shown that poorly written prompts, such as run-on sentences with bad grammar or incomplete phrasing, can trick large language models into ignoring their safety rules, leading chatbots to give instructions for illegal activities, malware creation, or data theft.
Another clever attack involves hiding malicious commands in images. Researchers at Trail of Bits showed that instructions embedded in large, high-resolution photos can be invisible to the human eye yet become readable once an AI system automatically downscales the image. The hidden commands can then direct the AI to leak data or perform other harmful actions without alerting users or security systems.
The Rise of AI Deepfakes and Voice Scams
Deepfake audio and video have been a concern for years. Criminals have used cloned voices to impersonate executives and trick employees into transferring money: in one well-known case, scammers mimicked the voice of the chief executive of a UK energy firm’s parent company and convinced the UK CEO to wire $243,000 to an account in Hungary. Now, with generative AI, creating a convincing fake voice takes just seconds of sample audio. Surveys suggest that one in four people has either encountered an AI voice scam or knows someone who has.
Recent cases show how common these voice scams have become. A California man was tricked into sending thousands of dollars after scammers cloned his son’s voice and claimed he had been in an accident. In Italy, fraudsters cloned the voice of the Defense Minister and used it to target high-profile figures, including fashion executives; at least one victim was persuaded to transfer nearly a million euros to the scammers.
The New Challenge of AI Browsers and How to Fight Back
AI-powered browsers are changing how we surf the web. Tools like Perplexity Comet can navigate websites, fill out forms, manage emails, and even book travel—all automatically. But security researchers have shown that these browsers can be tricked into performing harmful actions. For example, in tests, Comet was instructed to buy an Apple Watch. It visited a fake website and filled in payment details without realizing it was a scam.
Hackers can also hide malicious commands inside fake CAPTCHA tests or images, tricking AI browsers into carrying out actions the user never requested. Some companies, like Vivaldi, have decided not to add AI features to their browsers to avoid these risks.
Even though AI is making cybercriminals more effective, most attacks are still traditional. Human error remains a big vulnerability. Experts recommend updating software regularly, using multi-factor authentication, training employees to spot phishing emails, and backing up data offline. These steps can help protect against both old-fashioned and AI-powered cyberattacks.
In short, AI is revolutionizing cybersecurity threats. While it offers great benefits, it also opens new doors for hackers. Staying vigilant and following best practices can help keep your digital world safe.