How AI Fuels the Surge in Cybercrime

Cybercriminals and AI seem like a natural fit. How did we not see this coming? A new report from Anthropic shows what the company is doing to fight back.

When ChatGPT first appeared, we were excited about how this new technology could make us better… and then we immediately thought about how it would make us worse. Panic quickly rose about students using it to cheat on homework and job applicants using it to pass interview tests. It would turn well-intentioned people into something they were not. Education was facing a crisis that, if the fears proved true, would leave students less prepared for college or their first jobs. We thought this was the most terrible thing AI would do. But in all our public discourse about the AI chatbot revolution, how did we not think cybercriminals would use AI too?


Spike in Cybercrime

Since the launch of ChatGPT in late 2022, and Claude a few months later, cyberattacks have increased 138%, according to a recent McKinsey & Company report. Cybercrime was already growing, but the availability of GPT technology created an inflection point. As many of us are experiencing with AI in the workplace, routine tasks are getting easier: summarizing a long presentation or writing an email now adds to our productivity. Even writing code has become easier with Cursor, Microsoft's GitHub Copilot, and Claude Code. Now, according to Anthropic's Threat Intelligence Report, cybercriminals with no previous coding experience are using AI to generate malicious code (a practice known as 'vibe coding'), and doing so successfully. The report details numerous breaches, including attacks on healthcare providers, emergency services, and religious institutions. Many resulted in ransomware threats, with attackers even using AI to draft threatening emails and to set ransom demands based on AI's assessment of the data's value to the victim organization.

[Screenshot: an AI-generated ransomware email being composed in Claude]
Claude Used to Compose Ransomware Emails

AI For All?

AI is lowering barriers, letting many people explore things they could not do before. People with creative ideas but no artistic skills can create stunning art. Entrepreneurs can bring their ideas to market without ever having written a technical business plan. People can build mobile apps without learning to code. It only takes ideas, motivation, and the willingness to work toward an outcome. Unfortunately, cybercriminals are no different. Where they were once held back by the difficulty of crafting authentic scams or by a lack of technical expertise, they are no longer hindered. Anthropic's efforts show why AI providers must step up the urgency to self-regulate and put countermeasures in place.

While AI companies haven't solved the lesser threat to education, one that is driving sales of old-school blue books and prompting high schools to lock up phones, it is good to see them addressing the malicious uses of AI. Both OpenAI and Anthropic point to the many future benefits of AI and its potential positive impacts on society, from treating disease to economic welfare. However, as their technology evolves quickly, they need to remain aware that some people have other ideas about how to use it.


Thanks for reading. Consider leaving a comment with your thoughts on AI's risks and its evolution as a useful tool, and share this article.


Discover more from Derek W Gibson

