Deepfakes – a false alarm or a threat to cybersecurity?

Does the emergence of deepfakes represent a major and irrevocable shift in the scale of possible harm, or is it just another technology that law enforcement must learn to deal with, whether through a change of culture or of methodology?

This piece examines the primary purpose and likely uses of deepfake technology, and its application by both law enforcement and rogue actors, by looking at its adaptability and the likelihood that actors with the skills and motivation to deploy it will emerge. It also attempts to assess the overall magnitude of the potential impact and the foreseeable reaction of law enforcement agencies (LEAs) to the changing threat landscape.

The creation of false video and audio content is far from a new concept, and – especially in the 21st century – there is nothing extraordinary about it. The looming era of deepfakes may change our perception, however, because hyperrealistic, difficult-to-debunk video and audio content can now spread with unprecedented speed through social media and streaming or gaming platforms. The change is driven by a rapidly growing industry that is far outpacing work on the security implications and on biometric countermeasures such as de-facialisation. Then again, perhaps the phenomenon is being overestimated, and in five years' time it will only be discussed in the context of doctored sexually explicit photos and Steve Buscemi-eyes meme tools.

Deepfakes are synthetic media produced by generative adversarial networks (GANs), an artificial-intelligence-based machine learning technique used for human image and sound synthesis. GANs pit two algorithms – a generator and a discriminator – against one another until the generator produces synthetic data nearly indistinguishable from the original audio or video training data. The technology is currently being developed both by commercial providers and for the defence industry. Because GANs can learn to mimic any distribution of data, their potential is huge, and quite possibly we have only grasped it toe-deep so far.
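The adversarial setup described above can be sketched in a few lines. The following toy example – a one-dimensional GAN in which a generator tries to mimic "real" samples drawn from a normal distribution while a logistic discriminator tries to tell real from synthetic – is purely illustrative; all parameter names, hyperparameters and the target distribution are assumptions, not any production deepfake pipeline.

```python
# Minimal 1-D GAN sketch: generator G(z) = a*z + c learns to mimic
# "real" data from N(4, 1); discriminator D(x) = sigmoid(w*x + b)
# scores how "real" a sample looks. Gradients are written out by hand.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, c = 1.0, 0.0   # generator parameters (hypothetical starting values)
w, b = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = a * z + c

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    g = -(1 - d_fake) * w          # dL/dfake for L = -log D(fake)
    a -= lr * np.mean(g * z)
    c -= lr * np.mean(g)

print("generated mean:", np.mean(a * rng.normal(size=1000) + c))
```

The generator's output drifts toward the real data's distribution precisely because its only training signal is the discriminator's verdict – the same dynamic that, at scale, yields faces and voices that detectors struggle with.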

So why are we currently worried about deepfakes, to the point where they are addressed in the American national AI strategy? We are, after all, used to existing videos being combined with and superimposed onto source footage. In popular opinion, the ability to digitally manipulate content is mostly associated with attempts to influence politics. Increasingly, however, it is of particular concern to security agencies: manipulated videos are posted on social media by terrorists, transnational criminal organisations and rogue individuals, not as part of disinformation programmes but as a tool of cybercriminal activity. Moreover, even without expert knowledge of photography, editing apps can produce sophisticated, high-quality images. Such multimedia technologies are easy to abuse, and together with the development of GANs they raise significant social concerns about possible misuse.

To date, industry as well as law enforcement has focused on unauthorised access to data. But the motivation behind, and the anatomy of, cyber attacks have changed. Instead of stealing information or holding it to ransom, attackers may modify data while leaving it in place; there are many scenarios in which altered data serves cybercriminals better than stolen information. The rise of deepfakes should remind us that cyber-enabled crimes are traditional crimes, increased in scale and reach by the use of computers, networks or other information and communications technology (ICT); unlike cyber-dependent crimes, they could be committed without ICT. Attackers could use deepfakes to acquire real (and sensitive) documents through cyber-espionage or socially engineered phishing, where adversaries can now generate automated, targeted content. Such personalised attacks would be more successful, with the added reach of automation. And with video-chat-based identification becoming a convenient solution for customers of banks and other businesses reliant on correct identification, the ability of deepfake techniques to alter live video could open a serious security hole.

Although associated with social media, deepfakes may not always require a mass audience to achieve a harmful effect. From a national security and international relations perspective, the most harmful deepfakes might not flow through social media channels at all. Instead, they could be delivered to target audiences as part of a strategy of reputational sabotage. Fabricated speeches by prominent politicians are an important concern, but doctored videos depicting inflammatory behaviour by soldiers or police could provoke panic and distrust, harm long-term war efforts and ultimately motivate a wave of violence. This approach will be particularly appealing to foreign intelligence services hoping to influence decision-making by people without access to cutting-edge detection technology.

These changes illustrate how fast the information landscape is evolving, with a growing focus on being able to trust and validate the authenticity of shared information. In assessing the prospective damage caused by deepfakes, consideration has to be given to how many companies and organisations will have suffered losses caused by fraudulent data and software. New technologies will inevitably amplify the problem – hence the increased need for organisations to safeguard the chain of custody of every digital asset in order to detect and deter data tampering.
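One common building block of such chain-of-custody safeguards is a cryptographic fingerprint recorded at each hand-off, so that any later modification of the asset is detectable. The sketch below uses Python's standard `hashlib`; the function name and the sample "footage" bytes are illustrative assumptions, not a reference to any real evidence-management system.

```python
# Record a SHA-256 fingerprint of a digital asset when it enters custody;
# re-hashing later reveals whether even one bit has been altered.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of `data`."""
    return hashlib.sha256(data).hexdigest()

original = b"frame-0001 of bodycam footage"
record = fingerprint(original)                 # stored in the custody log

tampered = b"frame-0001 of bodycam footage!"   # one byte appended
print(fingerprint(original) == record)         # True  - footage intact
print(fingerprint(tampered) == record)         # False - tampering detected
```

Hashing proves integrity, not authenticity: it tells you the file has not changed since the fingerprint was taken, which is exactly why the fingerprint must be captured as early in the chain as possible.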

Even though it understandably raises suspicions, the emergence of deepfakes could be an artificially inflated crisis with limited impact. Can we be certain that deepfake-related crime will grow fast enough in the foreseeable future to pose a threat to societies? Will the abuse of consumer data exposed through deepfake-related crime really outnumber other cyber-enabled crime cases? The predicted wave of political deepfakes has not materialised, and increasingly the panic around GAN-powered crime looks like a false alarm. According to a frequently used argument, deepfakes will not proliferate because the technology is not as widely available as Photoshop: for now it is difficult to deploy a deepfake without special skills and hardware. Moreover, existing architectures leave predictable traces on doctored video, which are in principle easy to detect with another AI algorithm (whereas conventional film editing goes undetected). Of course, such detection systems are not perfect, as the generative architectures are regularly updated with features that help them avoid detection.
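The trace-detection argument can be illustrated with a toy example. Many generative pipelines leave periodic upsampling artefacts that show up as spikes in the frequency spectrum, which even a simple statistical detector can flag. The signal, the injected artefact and the decision threshold below are all synthetic assumptions made for illustration – this is not a real forensic method.

```python
# Toy trace detector: flag a signal whose high-frequency spectral energy
# exceeds a threshold, mimicking detection of periodic generation artefacts.
import numpy as np

rng = np.random.default_rng(1)

def high_freq_energy_ratio(signal: np.ndarray) -> float:
    """Share of spectral energy in the top quarter of frequencies."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    cutoff = 3 * len(spectrum) // 4
    return spectrum[cutoff:].sum() / spectrum.sum()

t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.normal(size=t.size)
# "Doctored" signal: same content plus a periodic high-frequency artefact.
doctored = clean + 0.5 * np.sin(2 * np.pi * 400 * t)

THRESHOLD = 0.1  # hypothetical decision boundary
print("clean flagged:   ", high_freq_energy_ratio(clean) > THRESHOLD)     # False
print("doctored flagged:", high_freq_energy_ratio(doctored) > THRESHOLD)  # True
```

Real detectors replace the hand-picked threshold with a trained classifier, which is precisely why the cat-and-mouse dynamic arises: the generator can be retrained to suppress whatever statistic the detector keys on.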

This leads to another question, important from the law enforcement perspective: how hard is it to detect and disprove a deepfake? It is not an easy task, both technologically and because of the time factor: the popularity and diversity of social media channels have created the issue of virality, which inevitably determines who gains the information advantage. Social media giants such as Facebook are increasingly trying to regulate their content and, by amending their policies, minimise the harm caused by viral videos, which can curb the potential damage created by deepfakes. That said, certain channels of dissemination – such as gaming portals or some Reddit boards – are not so closely monitored and lack timely reporting mechanisms.

In the absence of regulation, advanced technology can do considerable harm. Yet focusing on removing content – doctored videos included – is not enough to prevent cyber-enabled crime, and, most importantly, it comes too close to infringing freedom of speech in the manner of authoritarian regimes.

Law enforcement (and the defence sector) face one more serious issue: video has become an increasingly crucial tool, whether it comes from security cameras, police body-worn cameras, a bystander's smartphone or another source. But the combination of deepfake video manipulation and the security flaws that plague so many connected devices has made it difficult to confirm the integrity of that footage. Media outlets tend to portray deepfakes as pertaining only to the manipulation of people's images[1] – there are indeed projects that allow the expressions in a target video to be changed in real time – but in fact entire scenes, backgrounds and so on can be altered, which is significantly more nuanced and, by its nature and application, more relevant to the contemporary security industry.

How far are we from forensic technology able to conclusively tell a real video from a fake? The biometric deception technology market may be able to offer some solutions, at least for now. The inability to biometrically de-facialise deepfakes is rapidly becoming such a pressing concern that the existing biometric industry is developing de-facialising countermeasures capable of dealing with the current generation of deepfakes. Both the Department of Defense (DoD) and the Intelligence Community (IC) are involved in this research: the Office of the Director of National Intelligence (ODNI), through the Intelligence Advanced Research Projects Activity (IARPA), has been sponsoring proof-of-concept research programmes targeting the development of facial biometric "de-identification" technologies, and the Defense Advanced Research Projects Agency (DARPA) has funded a Media Forensics programme tasked with developing technologies to automatically weed out deepfake videos and digital media.

Given that the defence sector and, increasingly, the private sector are working on solutions enabling deepfake detection, why is law enforcement lagging behind? Law enforcement does not have a history of making early investments in fundamental research. Unlike the military, it does not look at far-forward challenges that might come to fruition in a decade. LEAs may also hope that the private sector will aid in developing law-enforcement-relevant AI technologies. There is unlikely to be much indigenous tech innovation: what there is will be provided by the vendor landscape, as law enforcement typically does not develop tools or capabilities of this kind in-house. That said, law enforcement should be expected to be an early adopter, with enough talent within its teams to vet technologies and buy them. The emergence of deepfakes, and LEAs' reaction (or lack thereof), highlights the cultural divide between the private sector and the police and military world. There is a disconnect between innovators, law enforcement and defence, which creates a major obstacle to upgrading systems or reacting in time to emerging security issues.

Should this be a source of panic around deepfakes? No – but like any other emerging technology they should not be dismissed or underestimated as a potential danger to security. Scanning the horizon now, we are simply unable to picture how deepfakes could be used in five years' time. Nor can we see the extent to which even the concept of something like deepfakes can undermine public trust in records and videos as objective depictions of reality. Meanwhile, law enforcement and industry should provide guidelines and raise awareness of hypothetical use cases, so that the private sector can calibrate how to balance the benefits of using AI systems against the practical constraints that different standards of explainability impose. As with any emerging technology, no industry or public sector representative has all the answers about the threats that could follow. It is therefore crucial for policy stakeholders to engage in dialogue about the implications of deepfakes. As the technology evolves and our experience grows with it, we continue to learn about additional nuances as they unfold; only then will we gain a fuller understanding of the trade-offs and unintended consequences that the current choices of law enforcement agencies and the defence sector entail.


