Imagine you’re sitting at your desk, in front of your laptop, and you get an urgent request from your CEO to jump on a video call. You answer, and the whole leadership team is present. You’ve worked with them before, you know them well, their questions make sense, and you finish the meeting without incident. Then, over the next few weeks, you get roped into more and more calls and instant messages. You don’t think anything of it and happily share the information they require. You might even make a few payments requested by the CEO or CFO. Business as usual, right?
What if you found out that the voices, faces, and messages from those colleagues were deepfakes designed to dupe you into handing over confidential financial information and actual cash? It’s not just possible; it’s already happening.
Cybercriminals are using deepfake software and AI technology to create personalized videos that dupe employees into handing over sensitive information, and even security teams find deepfakes hard to detect. These fake images or videos can be used to access customer accounts, extort money, and produce media that damages your (or your company’s) reputation. Consider deepfakes part of a new evolution of social engineering attacks.
In 2022, two out of three cybersecurity professionals reported encountering deepfakes used as a method of cyber attack, an increase of 13% from the year before. This new type of attack, known as business identity compromise, is becoming harder to detect and more prolific than ever before.
The good news is that you can implement security measures to protect yourself against attackers who create deepfakes.
Deepfakes are synthetic media—typically involving videos or audio recordings—that have been manipulated to imitate a person’s appearance, often altering their speech or actions. They rely on powerful artificial intelligence (AI) techniques, especially deep learning, to achieve this level of believability.
Deepfakes combine real and artificially generated content to create compelling, but often false, portrayals of events or situations. Techniques like generative adversarial networks (GANs) analyze large datasets of images and audio to learn patterns and produce realistic deepfake video and audio. The video below illustrates how good this technology has become.
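The adversarial idea behind GANs can be sketched in a few lines: a generator learns to produce samples that a discriminator cannot distinguish from real data, while the discriminator learns to tell them apart. Below is a minimal, illustrative NumPy sketch on one-dimensional data; the toy Gaussian "real" distribution, network sizes, and learning rate are all assumptions for demonstration, not anything resembling production deepfake code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a 1-D Gaussian standing in for real images/audio.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator: g(z) = a*z + b, a linear map of random noise.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), simple logistic regression.
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    grad_w = np.mean(-(1 - dr) * xr + df * xf)   # d/dw of the logistic losses
    grad_c = np.mean(-(1 - dr) + df)
    w -= lr * grad_w
    c -= lr * grad_c
    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    dg = -(1 - df) * w                           # dL/d(fake sample)
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

# After training, generated samples drift toward the real distribution.
fake = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(fake.mean()), 2))
```

Real deepfake systems replace the linear generator and logistic discriminator with deep convolutional networks trained on thousands of images, but the tug-of-war between the two models is the same.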
Deepfakes can duplicate faces, voices, and even entire body movements, making it difficult to discern authenticity. Deepfakes aren’t always used for nefarious purposes, but in the wrong hands, they can be used to imitate high-ranking professionals or existing employees to build trust and defraud businesses or their vendors. They can be used to extract sensitive information and trade secrets in order to blackmail the company.
Here’s how deepfakes are typically used by malicious actors:
Synthetic content: in this scenario, the attacker creates entirely fabricated videos or audio recordings of someone saying or doing something that never happened. These fabrications are used to spread misinformation, damage reputations, or manipulate public opinion.
Example: A deepfake video portrays a political candidate making offensive remarks, influencing voting decisions.
Impersonation: in this attack, the hacker replaces someone’s face or voice in a video or audio clip to impersonate them, such as a CEO or executive. This method is highly targeted and used for spear phishing attacks, tricking employees into disclosing sensitive information or unknowingly performing harmful actions.
Example: A deepfake CEO sends employees emails requesting urgent money transfers.
Voice cloning: an audio deepfake can mimic and replicate someone’s voice with high accuracy, even from limited audio samples. It’s used for phone scams, vishing attacks, or impersonating customers to make fraudulent transactions.
Example: A deepfake voice impersonates a customer calling a bank, attempting to gain access to their account.
Hybrid attacks: a hacker may combine deepfakes with other cyber attack techniques, like social engineering or malware. These attacks are often multi-layered and complex, designed to bypass basic detection methods.
Example: A deepfake video shared by email lures the victim to a malicious website that infects their device with malware.
Shallowfakes—also known as “cheapfakes” or “simple forgeries”—are manipulated media that lack the sophistication of deepfakes. Unlike deepfakes, which rely on deep learning and AI, shallowfakes use basic video or photo editing tools and techniques readily available to the public.
Example: A photoshopped image shows a CEO holding a sign with a fabricated, offensive statement, or a fake movie poster spreads online.
Deepfakes can have a long-lasting and damaging impact on your company, as well as your personal reputation. As difficult as they may be to spot, you can take proactive measures to avoid them.
It may surprise you to learn that many of your employees, vendors, and partners haven’t heard of deepfakes or how they can be used. Just as tabletop exercises for email phishing have become the norm in corporations, you need to introduce training that teaches your stakeholders how to spot fake videos and audio.
Raising awareness about the methods, risks, and presence of deepfakes can help prevent them. You should also make sure that employees know exactly which members of the leadership team can request financial information, how they will request information and payment, and how requests can be validated internally to avoid deepfake attacks.
Most of us are familiar with cyber security best practices, especially when it comes to social engineering attacks. These practices can also protect you against deepfake disinformation and phishing attempts. Regular check-ins and reviews of these foundational concepts will help your team spot sophisticated phishing techniques.
By simply scrutinizing communications more closely, or checking in with managers whenever a payment or information request arrives, employees can prevent most deepfake attacks. Teach staff and vendors to recognize and question any behavior that’s outside the norm, such as insistent requests or requests for personal information.
It also might make sense for your organization to adopt a zero-trust framework, so you’re not leaving anything to chance.
Encourage employees to take a step back, pause, and speak up whenever a request feels off. If an employee does raise a concern, reward them with positive feedback.
Many cyber attacks succeed because of human error. With so many distractions today, it takes only one slip-up to open the door to a threat. And, as mentioned earlier, even cybersecurity professionals have a hard time telling real from fake, so always encourage employees to trust their gut.
The principle of least privilege states that users and applications should be granted only the minimum access necessary to perform their tasks. This minimizes the potential damage if an account is compromised or misused. Pair it with the trust-but-verify principle, which acknowledges the possibility of compromise even within trusted systems and advocates confirming the legitimacy of users and activities before granting access or executing commands. Even if you recognize the face or voice of the person making an unusual request, verify that the request is legitimate before taking action. Strengthen your login credentials and authentication methods, and teach employees to trust but verify every request.
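The two principles above can be combined in policy logic: grant each role only the permissions it needs, and refuse to move money until the request has been confirmed through an independent channel. The sketch below is a minimal illustration; the role names, permission sets, and threshold-free approval rule are all hypothetical, not a prescribed implementation.

```python
from dataclasses import dataclass

# Hypothetical role -> permission mapping: each role gets only what it needs.
ROLE_PERMISSIONS = {
    "ap_clerk": {"view_invoices", "submit_payment_request"},
    "controller": {"view_invoices", "approve_payment"},
    "intern": {"view_invoices"},
}

@dataclass
class PaymentRequest:
    requester_role: str
    amount: float
    verified_out_of_band: bool  # e.g. confirmed via a known-good phone number

def can_approve(request: PaymentRequest, approver_role: str) -> bool:
    """Least privilege: the approver must hold approve_payment and the
    requester must hold submit_payment_request. Trust but verify: the
    request must also be confirmed via a second, independent channel."""
    if "approve_payment" not in ROLE_PERMISSIONS.get(approver_role, set()):
        return False
    if "submit_payment_request" not in ROLE_PERMISSIONS.get(
        request.requester_role, set()
    ):
        return False
    return request.verified_out_of_band

# A convincing deepfake "CEO" call alone is not enough to move funds:
urgent = PaymentRequest("ap_clerk", 50_000.0, verified_out_of_band=False)
print(can_approve(urgent, "controller"))  # False until independently verified
```

The key design choice is that out-of-band verification is a hard gate, not a judgment call: even a request that looks and sounds exactly like the CEO fails the check until someone confirms it over a channel the attacker does not control.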
Deepfakes are an emerging threat, so make sure that you include deepfake tactics in your incident response plans. If a deepfake or shallowfake pops up online, address it immediately. The further it spreads, the more people see the fake and believe the contents, so nip it in the bud with a press release, social media statement, or media appearance.
Deepfakes pose a significant threat to businesses. Their ability to convincingly manipulate audio, video, and even text makes them potent tools for cybercriminals. Impersonation deepfakes can trick employees into transferring funds or revealing sensitive information, while synthetic content deepfakes can spread misinformation and damage reputations.
Staying ahead of this evolving threat requires a multi-pronged approach, including fostering a culture of “trust but verify” within organizations. By being vigilant and informed, you can mitigate the harmful impact of deepfakes and secure your business and your reputation.