Deepfakes, the new corporate threat


As teenagers take part in a TikTok challenge, a couple posts a selfie on social media, or an emerging influencer uploads their latest YouTube video, they unknowingly provide data that fuels a new fraud vector worrying businesses and users alike: the deepfake. Realistic enough to deceive Internet users, this threat lends itself to ingenious scams. A recent study, conducted by the Universities of Oxford and Brown and the Royal Society, shows that 78% of users are unable to distinguish a deepfake video from an authentic one. Companies are affected as well, as they are increasingly targeted by such attacks aimed at breaking into internal systems.

A phenomenon produced by algorithms

Deepfakes are named after the technology that shapes them: “deep learning”. These algorithms learn automatically from large sets of data without the need for human intervention, and the larger the training database, the more realistic the final rendering will be.

Deepfakes then use artificial intelligence to create video and audio files that perfectly mimic a person’s characteristics, based on resources available online, including social networks. Deepfakes are created for several reasons: some legitimate, like humor or entertainment; others less so, such as facilitating fraud, manipulating an audience for political purposes or spreading “fake news”.

The ability to put words in the mouth of powerful, influential, or trusted people – such as a CEO – is undeniably harmful. This technology can therefore be used against companies in various forms:

● Extortion: threatening to publish an incriminating deepfake video of a senior decision-maker in order to gain access to the company’s internal systems, data, or financial resources;

● Fraud: impersonating an employee and/or customer to breach the organization’s internal systems, data, or financial resources;

● Authentication theft: manipulating authentication based on biometric technologies, such as voice patterns or facial recognition, to get hold of sensitive data;

● Reputation: damaging the reputation of a company and/or its employees in the eyes of customers and other stakeholders.

Evolution and types of associated fraud

Traditional corporate scam models, such as phishing or account takeover, are less successful today due to the sophistication of cyber defense technologies such as multi-factor authentication. The drop in profits caused by these new defenses has encouraged the use of new attack vectors, most notably deepfakes. The technology has also become available as a service on the dark web, democratizing its use among even inexperienced criminals. Furthermore, the massive adoption of social networks in recent years, which pushes users to post ever more images and videos, has been a goldmine for building malicious deepfake campaigns.

Cybercriminals use deepfakes primarily to carry out three types of fraud against businesses:

● Identity theft: when a scammer exploits the data of a deceased person to create a deepfake that can be used to access online services, apply for a credit card, or even take out a loan;

● Synthetic identity theft: when scammers combine the data of many different users to create a new identity for a person who does not exist. This profile is then manipulated to apply for credit cards or carry out large transactions;

● Credit card fraud: when stolen or fabricated profiles are used to open new bank accounts. Criminals then file as many applications as possible for credit cards and associated loans.

Protection that matches the sophistication of attacks

With the escalation and growing prevalence of deepfake fraud, companies need to protect their data, cash, and reputation through key measures. To that end, this new threat should be integrated into disaster scenario planning and cyber attack simulation exercises. Such measures should include the definition of a risk pyramid and an intervention strategy with related procedures and adequate mitigation measures, in particular a crisis communication process, especially if the organization’s reputation is at stake.

In addition, employee training by the cybersecurity team on the specific risks of deepfakes is essential. As with all cyber threats, employees are a line of defense, especially given the use of deepfakes in social engineering. Furthermore, prioritizing two-step validations with additional verifications, confirmation phone calls, or watermarks added to audio and video files can limit attempts to compromise security systems. Finally, companies should take out insurance before an incident occurs, as insurers have updated their offerings to cover deepfake incidents.
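As one concrete illustration of the verification idea above, a sketch follows of out-of-band authentication for media files using a shared-secret HMAC tag. This is an assumption-laden example, not a method named in the article: it is a cryptographic tag shipped alongside the file rather than an embedded watermark, and the key handling shown is deliberately simplified.

```python
import hmac
import hashlib

# Hypothetical shared secret, distributed out of band (e.g. by the IT team).
# In practice this would live in a key management system, not in source code.
SECRET_KEY = b"rotate-me-regularly"

def tag_media(media_bytes: bytes) -> str:
    """Compute an authentication tag to ship alongside a recorded message."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Reject files whose tag does not match (tampered, regenerated, or unsigned)."""
    return hmac.compare_digest(tag_media(media_bytes), tag)

video = b"...raw bytes of the recorded video message..."
tag = tag_media(video)

assert verify_media(video, tag)             # the original file verifies
assert not verify_media(video + b"x", tag)  # any modification is rejected
```

A deepfake of the CEO, however convincing, cannot carry a valid tag without access to the secret key, which is why this kind of check complements rather than replaces human verification steps such as confirmation calls.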

Cybercriminals leverage the volumes of data at their disposal to power deepfake algorithms. As users and organizations venture into the metaverse and Web3, databases keep growing, and with them the resources available to fuel deepfakes. Cybersecurity teams must therefore keep abreast of new detection tools and other technological innovations to combat this threat, which could very well replace the classic phishing link.
