Barracuda threat researchers recently uncovered a large-scale OpenAI impersonation campaign targeting businesses worldwide. The attackers relied on a well-known tactic: they impersonated OpenAI with an urgent message requesting updated payment information to process a monthly subscription.
The phishing emails carried classic warning signs: a suspicious sender domain, an email address crafted to look legitimate, and an urgent tone. Each message closely resembled legitimate communication from OpenAI but relied on an obfuscated hyperlink, and the actual URL varied from one email to the next.
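Two of those warning signs, a lookalike sender domain and a hyperlink whose visible text does not match its destination, can be checked programmatically. The sketch below is purely illustrative and is not part of Barracuda's analysis; the trusted domain, sender address, and URLs are hypothetical stand-ins, since the campaign's actual URLs varied from email to email.

```python
from email.utils import parseaddr
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical trusted domain for this sketch.
TRUSTED_DOMAIN = "openai.com"

class LinkExtractor(HTMLParser):
    """Collects (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []        # list of [href, visible_text]
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current = [dict(attrs).get("href", ""), ""]
            self.links.append(self._current)

    def handle_data(self, data):
        if self._current is not None:
            self._current[1] += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._current = None

def _is_trusted(domain: str) -> bool:
    # Exact match or a true subdomain; "evil-openai.com" does not pass.
    return domain == TRUSTED_DOMAIN or domain.endswith("." + TRUSTED_DOMAIN)

def red_flags(from_header: str, html_body: str) -> list[str]:
    """Return human-readable phishing indicators found in an email."""
    flags = []
    _, addr = parseaddr(from_header)
    sender_domain = addr.rsplit("@", 1)[-1].lower()
    if not _is_trusted(sender_domain):
        flags.append(f"sender domain '{sender_domain}' is not {TRUSTED_DOMAIN}")
    parser = LinkExtractor()
    parser.feed(html_body)
    for href, text in parser.links:
        link_domain = (urlparse(href).hostname or "").lower()
        # Obfuscated hyperlink: the visible text names the trusted brand,
        # but the underlying URL points somewhere else entirely.
        if TRUSTED_DOMAIN in text and not _is_trusted(link_domain):
            flags.append(f"link text mentions {TRUSTED_DOMAIN} "
                         f"but points to '{link_domain}'")
    return flags
```

For example, an email from `billing@openai-support.xyz` whose link text reads `openai.com/billing` but points to another host would trigger both flags. Real-world filtering is far more involved, but the same display-text-versus-destination mismatch is what makes these emails detectable despite their polished appearance.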
Since the launch of ChatGPT, OpenAI has sparked significant interest among both businesses and cybercriminals. While companies are increasingly concerned about whether their existing cybersecurity measures can adequately defend against threats curated with generative AI tools, attackers are finding new ways to exploit them. From crafting convincing phishing campaigns to deploying advanced credential harvesting and malware delivery methods, cybercriminals are using AI to target end users and capitalize on potential vulnerabilities.
Research from Barracuda and leading security analysts such as Forrester shows an increase in email attacks like spam and phishing since ChatGPT’s launch. GenAI clearly has an impact on the volume of attacks and the ease with which they are created, but for now cybercriminals are still primarily using it to support the same tactics and types of attacks, such as impersonating a well-known and influential brand.
The 2024 Data Breach Investigations Report by Verizon shows that GenAI was mentioned in fewer than 100 breaches last year. The report states, “We did keep an eye out for any indications of the use of the emerging field of generative artificial intelligence (GenAI) in attacks and the potential effects of those technologies, but nothing materialized in the incident data we collected globally.” It further notes that the number of mentions of GenAI terms alongside traditional attack types and vectors such as phishing, malware, vulnerabilities, and ransomware was low.
Similarly, Forrester analysts observed in their 2023 report that while tools like ChatGPT can make phishing emails and websites more convincing and scalable, there’s little to suggest that generative AI has fundamentally changed the nature of attacks. The report states, “GenAI’s ability to create compelling text and images will considerably improve the quality of phishing emails and websites, it can also help fraudsters compose their attacks on a greater scale.”
That said, it is likely only a matter of time before advances in GenAI enable significantly newer and more sophisticated threats. Attackers are undoubtedly experimenting with AI already, so organizations should prepare now. Staying vigilant about traditional phishing red flags and strengthening basic defenses remain some of the best ways to guard against evolving cyber risks.