
AI, Deepfakes, And The Evolution Of CEO Fraud

In 2019, cybercriminals employed cutting-edge artificial intelligence (AI) technology to orchestrate a sophisticated scam. The criminals used AI-based voice-cloning software to mimic the voice of the chief executive of a German parent company, convincing the CEO of its UK-based energy firm to transfer €220,000 to a purported Hungarian supplier. Believing he was speaking with his parent company's executive, the CEO complied with the urgent request.

In another instance that same year, Empresa Municipal de Transportes (EMT) de Valencia fell victim to a staggering €4 million fraud. This time, the cybercriminals relied on sophisticated social engineering, impersonating executives over email and placing fake phone calls. They manipulated EMT's director of administration into authorizing up to eight transfers totaling €4 million, purportedly to fund an acquisition in China.

Similarly, in 2020, amidst the coronavirus crisis, Zendal Pharmaceuticals was targeted in a scam amounting to €9.7 million. The criminals posed as the company's CEO and instructed a financial manager to initiate transfers for a supposed acquisition. To further deceive the manager, the fraudsters also masqueraded as professionals from KPMG, a prominent global consultancy firm, providing false invoices and payment orders to lend legitimacy to the transactions.

Each of the above is an example of CEO fraud. Although this kind of fraud has been around for a long time, recent advances in AI have given these attacks a new dimension that makes them far more threatening. In this blog post, I discuss this evolution of CEO fraud.

What Is CEO Fraud?

CEO fraud is a sophisticated phishing attack in which the attacker impersonates a high-level executive, typically the CEO or CFO, to trick employees into taking unauthorized actions, such as transferring large sums of money or divulging sensitive information. This type of attack is typically elaborate, involving extensive research, information gathering, and social engineering techniques to manipulate victims and exploit their trust in authority figures within the organization.

How Does CEO Fraud Work?

CEO fraud is a complex multistep operation, typically involving the following steps:


Step 1: Research

The attackers select a target business and conduct extensive research to collect information about it, including its operations, hierarchy, office locations, recent hires, and recent business activities such as mergers and acquisitions. They draw on a variety of methods and sources, including publicly available information from the organization's website and social media profiles, dumpster diving, watering hole attacks, and honeytraps.

The goal of this research is to find high-ranking executives whom they can easily impersonate and vulnerable employees whom they can target with the attack.

Step 2: Impersonation

The attackers send a phishing email to the targeted employee. The email is sent using a spoofed or compromised email address that appears to belong to the CEO or another senior executive. They may use tactics such as registering a domain name similar to the organization's official domain or compromising a legitimate email account to enhance the credibility of the phishing attempt.
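
To make domain spoofing more concrete, here is a minimal, illustrative sketch of how a mail filter or script might flag lookalike sender domains. The domain list, homoglyph map, and similarity threshold are assumptions for demonstration only, not part of any specific product.

```python
# Illustrative sketch: flag sender domains that closely resemble, but do not
# exactly match, an organization's legitimate domains. The domain list,
# homoglyph map, and threshold are assumptions for demonstration only.
from difflib import SequenceMatcher

LEGITIMATE_DOMAINS = {"example-corp.com"}  # assumed official domain(s)

# Common character swaps attackers use when registering lookalike domains.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Return True if the sender domain looks like, but is not, a legitimate domain."""
    raw = sender_domain.lower()
    normalized = raw.translate(HOMOGLYPHS)
    for legit in LEGITIMATE_DOMAINS:
        if raw == legit:
            return False  # exact match: genuinely the official domain
        if SequenceMatcher(None, normalized, legit).ratio() >= threshold:
            return True   # near match: likely a spoofed lookalike
    return False

print(is_lookalike("examp1e-corp.com"))  # True  ("1" substituted for "l")
print(is_lookalike("unrelated.org"))     # False
```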


Step 3: Social Engineering

The phishing email typically employs one or more social engineering principles, such as authority, urgency, and intimidation, to compel the recipient to respond quickly without verifying the request. The attacker typically instructs the recipient to initiate a wire transfer or make a payment to a fraudulent account but may also request sensitive information, such as employee payroll data, financial records, or login credentials.
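
The cues described above can also be checked for programmatically. The following toy heuristic is entirely illustrative (the keyword lists and categories are my own assumptions, not a production phishing detector); it simply counts how many of the typical CEO-fraud signals appear in an email body.

```python
# Toy heuristic (illustrative only): score an email body for the social
# engineering cues described above: authority, urgency, payment, secrecy.
# The keyword lists are assumptions, not a production phishing detector.
CUES = {
    "authority": ["ceo", "chief executive", "cfo", "board approval"],
    "urgency":   ["urgent", "immediately", "right away", "before end of day"],
    "payment":   ["wire transfer", "payment", "invoice", "bank details"],
    "secrecy":   ["confidential", "do not discuss", "keep this between us"],
}

def score_email(body: str) -> dict:
    """Return the cue keywords found in the email body, grouped by category."""
    text = body.lower()
    hits = {cat: [kw for kw in kws if kw in text] for cat, kws in CUES.items()}
    return {"hits": hits, "categories_matched": sum(bool(v) for v in hits.values())}

email = ("This is the CEO. I need an urgent wire transfer for a confidential "
         "acquisition. Send the payment immediately and keep this between us.")
print(score_email(email))  # all four cue categories match
```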


Step 4: Compromise

If the targeted employee falls for the scam, the attackers typically request multiple transactions to be completed over time. To support these requests, they produce fake documentation, such as contracts and invoices, and send them to the employee.

By the time the company or its banks detect the fraud, the criminals have already moved the money through accounts in different countries, making it extremely difficult to recover.

Why Do CEO Fraud Attacks Work?

CEO fraud is difficult to detect and prevent because it exploits human vulnerabilities rather than technical weaknesses in cybersecurity defenses. The attackers use social engineering techniques to deceive and manipulate victims into believing that the request is legitimate.

These scams use a variety of techniques to overcome the victim's suspicion and resistance, pushing them to act without due diligence and without directly contacting colleagues or superiors. The typical CEO fraud email invokes authority, instills fear of the consequences of non-compliance, or creates a sense of urgency, while exploiting emotions such as loyalty and trust in the executive's identity.

These scams are effective because they are not random; they are highly targeted and carefully crafted. The emails are often timed to coincide with periods of high activity in the company, when the targeted employee is likely to be busy and stressed, and hence easier to manipulate. Organizations are particularly vulnerable during mergers, acquisitions, expansions into new territories, and mass hiring.


How Is AI Used In CEO Fraud?

As the technology landscape evolves, it brings new opportunities as well as new threats. AI is particularly concerning in this regard because of the ease with which generative AI (gen AI) can be used to write convincing phishing emails and create malware, enabling cybercriminals to improve their tactics, speed, and reach. In addition, the rapid development of AI has given attackers other tools, discussed below, that make the threat even more severe.

As mentioned earlier, impersonation is the key element of CEO fraud, without which the attack is toothless. Until recently, CEO fraud relied primarily on phishing emails, whose asynchronous nature gave victims at least some time to pause and think. However, the rapid evolution of gen AI has opened new avenues for CEO fraud, making the communication real-time and giving victims no time to think.


1. Deepfake Videos

Deepfake videos are a type of fabricated media created using AI. These videos often involve superimposing one person's face onto another person's body, allowing for the creation of entirely fabricated footage. The technology has become so advanced that the manipulated videos appear convincingly realistic.

Deepfake videos can be generated easily using openly available apps and websites. All the attacker needs is a short video clip or a few photos of the person whose deepfake they want to create, and company websites and social media profiles usually contain more than enough such content.

To the keen eye, deepfakes are still relatively easy to spot, but when deepfake videos are used on a live video call, spotting them becomes incredibly difficult. Audio and video lag and artificial or blurred backgrounds, the telltale signs of deepfakes, are all too common in real video calls and hence unlikely to raise suspicion.


2. Deepfake Audio

Deepfake audio, also known as synthetic voice or voice cloning, refers to artificially generated audio recordings created using AI. Deepfake audio software can mimic the voice of a specific individual to produce audio content that is manipulated or entirely fabricated yet convincingly realistic.

Like deepfake videos, deepfake audio is becoming increasingly difficult to identify. It raises similar concerns, particularly regarding its potential to deceive and manipulate victims over phone calls.

On a short, unexpected phone call from a purported authority figure, it can be incredibly difficult to spot AI-generated audio, especially since phone calls are often of poor audio quality to begin with.

3. Chatbots

Although not as effective as deepfake videos and audio in this context, AI chatbots can still play a role in facilitating CEO fraud by automating certain aspects of the impersonation process. They can be used to send initial phishing messages, gather information about targeted individuals or organizations, and conduct preliminary interactions to establish rapport with potential victims.

This is an effective method of gathering information because AI chatbots can be programmed to mimic the language and communication style of specific individuals, making the impersonation appear genuine. Once skepticism is overcome and trust is built, it becomes easier to persuade victims to take the desired actions, such as authorizing financial transactions or divulging sensitive information.

How To Mitigate The Risks Of CEO Fraud?

Here are simple steps you can take to mitigate the risks of CEO fraud and similar social engineering attacks at your organization:

  • Conduct regular awareness training for employees,

  • Perform phishing simulations to identify potential vulnerabilities,

  • Establish clear procedures for verifying requests involving sensitive information or financial transactions,

  • Implement multi-factor authentication, and

  • Implement email authentication protocols, such as SPF, DKIM, and DMARC, to detect and prevent email spoofing.
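
As a quick illustration of the last item, here is a minimal sketch that checks whether a domain publishes SPF and DMARC records in DNS. It assumes the third-party dnspython package is installed, and the domain name is a placeholder.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records in DNS.
# Assumes the third-party dnspython package is installed (pip install dnspython);
# "example.com" is a placeholder domain.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_email_auth(domain: str) -> dict:
    """Report the SPF and DMARC policies (if any) published for the domain."""
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    return {"spf": spf, "dmarc": dmarc}

print(check_email_auth("example.com"))
```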

Conclusion

CEO fraud attacks can be highly sophisticated and difficult to detect because they exploit human vulnerabilities rather than technical weaknesses in cybersecurity defenses. To mitigate the risk, you need to invest in a combination of technical and administrative security controls. The most important defense, however, is to foster a security-conscious culture, which greatly reduces the likelihood of falling victim to CEO fraud and similar social engineering scams.


