Deepfakes are videos, audio clips and images created by artificial intelligence. The technology can produce convincing but false depictions of people, places or events that appear to be real.
In 2018, an estimated 14,698 deepfake videos were circulating online. Since then, that number has skyrocketed thanks to the popularity of deepfake apps like DeepFaceLab, Zao, FaceApp and Wombo.
Deepfakes are used in several industries, including film production, video games, fashion, and e-commerce.
However, the malicious and unethical use of deepfakes can harm people. According to a study by cybersecurity firm Trend Micro, “the rise of deepfakes is a matter of concern: they are inevitably moving from creating fake celebrity pornographic videos to manipulating company employees and procedures.”
Our research has shown that organizations are becoming increasingly vulnerable to this technology, and the costs of this type of fraud can be high. We examined two publicly reported deepfake scams targeting senior executives, with estimated losses of US$243,000 and US$35 million respectively.
The first case of fraud occurred at a British energy company in March 2019. The CEO received an urgent call from his boss, the chief executive of the firm's German parent company, asking him to transfer funds to a Hungarian supplier within the hour. The scam was allegedly carried out using commercial voice-generation software.
The second case was identified in Hong Kong. In January 2020, a branch manager received a call from a man whose voice sounded like that of a company director. In addition to the call, the branch manager received several emails that he believed were from the director. The phone call and emails were about the acquisition of another company. The scammer used Deep Voice technology to imitate the director’s voice.
In both cases, the firms were targeted with payment fraud that used deepfake technology to imitate people's voices. The first case was less convincing than the second because it relied on voice phishing alone.
Opportunities and Threats
Forensic accounting involves “the application of special knowledge and investigative skills possessed by [certified public accountants] to collect, analyze and evaluate evidence, and to interpret and report results in a courtroom, boardroom or other judicial or administrative venue.”
Forensic accountants and fraud experts investigating fraud allegations continue to see a rise in deepfake scams.
One type of deepfake scam is synthetic identity fraud, in which a scammer fabricates a new identity and targets financial institutions. For example, deepfakes allow scammers to open bank accounts under false names. They use these fabricated identities to build a trusting relationship with the institution before defrauding it. These fraudulent identities can also be used for money laundering.
Websites and apps that provide access to deepfake technologies have made identity fraud easier. For example, the website This Person Does Not Exist uses AI to generate random faces. Neil Dubord, chief of the police department in Delta, B.C., wrote that “synthetic identity fraud is reported to be the fastest growing type of financial crime, costing online lenders more than $6 billion annually.”
Deepfakes can enhance traditional fraud schemes such as payment fraud, email hacking, or money laundering. Cybercriminals can use deepfakes to gain access to valuable assets and data. In particular, they can use deepfakes to gain unauthorized access to large databases of personal information.
When combined with social media platforms such as Facebook, deepfakes can damage an employee’s reputation, cause stock prices to drop, and undermine a company’s credibility.
Forensic accountants and fraud investigators must recognize the red flags associated with deepfakes and develop anti-fraud mechanisms to prevent these schemes and reduce their associated losses. They also need to be able to assess and quantify the losses due to a deepfake attack.
In our case studies, scammers used deepfaked voices of senior managers to instruct employees to transfer money. The success of these schemes depended on employees being unaware of the associated red flags. These may include secrecy (the employee is asked not to disclose the request to others) or urgency (the employee is pressed to act immediately).
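As a rough illustration, the secrecy and urgency cues described above can be screened for with a simple keyword check. The phrase lists and function below are hypothetical examples for demonstration only, not a vetted fraud-detection rule set.

```python
# Illustrative sketch: flag payment requests that show the secrecy and
# urgency red flags described above. The keyword lists are assumptions
# for demonstration, not a production fraud filter.

RED_FLAGS = {
    "secrecy": ["keep this between us", "do not disclose", "don't tell anyone"],
    "urgency": ["immediately", "within the hour", "right now", "urgent"],
}

def flag_request(message: str) -> list:
    """Return the red-flag categories triggered by a request message."""
    text = message.lower()
    return [
        category
        for category, phrases in RED_FLAGS.items()
        if any(phrase in text for phrase in phrases)
    ]

request = "Please transfer the funds immediately and keep this between us."
print(flag_request(request))  # → ['secrecy', 'urgency']
```

A real control would combine signals like this with out-of-band verification rather than relying on keywords alone.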
The Fight Against Deepfakes
To combat the malicious use of deepfakes, a few simple strategies can be applied:
Encourage open communication: talking and consulting with colleagues and others about anything that seems suspicious is an effective way to prevent fraud schemes.
Learn how to assess authenticity: for example, hang up on a suspicious call and call the person back on a known number to verify their identity.
Pause: avoid reacting quickly to unusual requests.
Stay up to date with new technologies that help detect deepfakes.
Strengthen customer identity verification controls at financial institutions, such as Know Your Customer protocols.
Provide employee training and education on deepfake scams.
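The call-back verification strategy above could be modeled as a simple approval gate, sketched below. The class, threshold and field names are assumptions for illustration, not a prescribed control design.

```python
# Illustrative sketch of an out-of-band verification gate for unusual
# payment requests. All names and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    verified_by_callback: bool = False  # confirmed via a known phone number?

APPROVAL_THRESHOLD = 10_000  # assumed policy limit for extra checks

def approve(request: PaymentRequest) -> bool:
    """Approve only small payments, or large ones independently verified."""
    if request.amount < APPROVAL_THRESHOLD:
        return True
    return request.verified_by_callback

# A large request based only on a phone call is rejected until the
# employee verifies it by calling back a known number.
print(approve(PaymentRequest(243_000, "alleged CEO")))        # → False
print(approve(PaymentRequest(243_000, "alleged CEO", True)))  # → True
```

The point of the design is that approval never rests on the voice on the line; it rests on a verification step the scammer cannot intercept.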
Cybercriminals can use deepfakes to make their schemes look more realistic and trustworthy. These increasingly sophisticated schemes have detrimental financial and other consequences for people and organizations.
Fraud experts, cybersecurity experts, authorities and forensic accountants may need to fight fire with fire and use AI-based methods to counter and detect bogus media.