The rise of deepfakes in courtrooms


Deepfake technology has been on the rise since 2017, with AI blurring the line between reality and fiction. And, although deepfake pornography accounts for 96% of the deepfake videos online, the technology is gradually moving into other areas. One of those areas is the courtroom.

This first became apparent in 2019 when, in a custody battle, a mother doctored an audio file in an attempt to prevent her husband from having access to their children.

At present, there are tools which can detect this kind of fraudulent video and audio footage. However, the AI behind deepfakes is evolving rapidly, and as it grows in complexity, it may not remain detectable for long.

This, of course, has serious legal implications, and could ultimately erode trust in the justice system.

Deepfake technology

Deepfakes are hyper-realistic videos or audio recordings that have been digitally manipulated to depict people saying and doing things they never actually did. While the technology for this kind of video and audio manipulation has been around for a while, the term itself was coined by a Reddit user back in 2017.

On the website, the anonymous user released pornographic clips with the faces of female celebrities superimposed on the bodies of porn actresses. Ever since then, things have snowballed, and deepfakes are becoming more and more realistic. But how does it all actually work?

Machine learning algorithms lie at the heart of deepfakes, and there are a number of different types. Face replacement, otherwise known as face swapping, involves sourcing someone's face and superimposing or stitching it onto the face of another person, as in the sketch below.
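To make the superimpose-and-stitch idea concrete, here is a minimal, deliberately crude sketch using OpenCV. The filenames source.jpg and target.jpg are hypothetical, and real deepfake systems rely on neural networks rather than this kind of cut-and-blend; this only illustrates the basic operation.

```python
import cv2
import numpy as np

# Pre-trained frontal face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return (x, y, w, h) for the first face detected in the image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

source = cv2.imread("source.jpg")   # hypothetical: image providing the face
target = cv2.imread("target.jpg")   # hypothetical: image receiving the face

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face region to match the target face region.
face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Poisson blending smooths edges and lighting so the paste is less obvious.
mask = 255 * np.ones(face.shape, face.dtype)
centre = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face, target, mask, centre, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)
```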

Face re-enactment, or puppetry, is a slightly different kind of deepfake. Here, instead of stitching one face onto another, the features of a face are manipulated. The movements of the mouth, eyes and head are altered to create the impression that the person is saying or doing things which, in reality, they never did.

Unlike face replacement and face re-enactment, face generation refers to the process of creating an entirely new image from existing datasets. This is achieved using Generative Adversarial Networks (GANs), in which two neural networks compete: a generator produces synthetic images while a discriminator tries to tell them apart from real ones. Meanwhile, audio synthesis involves creating a model of someone's voice which mimics their intonation and speech patterns.
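The adversarial setup behind a GAN can be sketched in a few lines of PyTorch. The tiny architecture and dimensions below are illustrative assumptions, nowhere near the scale of a production deepfake model, but the training logic is the standard one: the discriminator learns to separate real images from generated ones, and the generator learns to fool it.

```python
import torch
import torch.nn as nn

LATENT = 64    # size of the random noise vector fed to the generator
IMAGE = 784    # a flattened 28x28 image, purely for illustration

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, IMAGE), nn.Tanh())

# Discriminator: outputs a logit scoring how "real" an image looks.
D = nn.Sequential(nn.Linear(IMAGE, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial update; real_images has shape (batch, IMAGE)."""
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, LATENT))

    # Discriminator step: label real images 1 and generated images 0.
    opt_d.zero_grad()
    d_loss = (loss(D(real_images), torch.ones(batch, 1)) +
              loss(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: adjust weights so the discriminator scores fakes as real.
    opt_g.zero_grad()
    g_loss = loss(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

As training alternates between the two steps, the generator's output becomes progressively harder to distinguish from real data, which is precisely what makes the resulting fakes so convincing.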

Deepfake audio moves into the courts

According to a Legal Cheek report, audio deepfakes have already made their way into UK courts. At the beginning of this year, the legal news website published an article which detailed the experience of Byron James, a family lawyer at Expatriate Law.

In the article, the lawyer explained that, in a court case involving a custody battle, his client, the father, was accused of violent and intimidating behaviour. As a result, the mother was denying his client access to their children. As supporting evidence, the mother provided an audio file which, she claimed, proved that his client had threatened her.

However, it was later revealed that the mother had in fact manipulated the audio to give the impression that his client was unfit to have access to their children. The audio file was subsequently dismissed as evidence.

Speaking to The National about how this was discovered, the family lawyer and partner at Expatriate Law, said: “We were able to see it had been edited after the original phone call took place and we were also able to establish which parts of it had been edited”. He added: “The mother used software and online tutorials to put together a plausible audio file”.

The family lawyer went on to stress the importance of courts proceeding with caution around audio and video evidence. He stated that, while on this occasion they were able to identify the evidence as manufactured, at the rate the technology is evolving it will become increasingly difficult to identify deepfakes. “With practice, a deepfake video can be so plausible that even an expert may not be able to readily identify it as manufactured,” he said.

The legal implications

Of course, the emergence of this technology within courtrooms has serious legal implications and could potentially undermine the legitimacy and validity of court rulings.

According to a report by the Surveillance Technology Oversight Project (STOP), the increasing prevalence of deepfakes has two potentially dangerous impacts. Realistic deepfakes not only carry a high risk of “condemning the innocent or exonerating the guilty,” the report outlines; they could also permit litigants and their attorneys to “cast doubt” on the legitimacy of genuine audio and video evidence. Clearly, this could completely change the landscape of courtroom law as we know it.

While this may seem like a concern for the distant future, the report warns that the technology is evolving rapidly and that AI software companies are already focusing their efforts on its development. One company the report mentions is SenseTime, whose software can already create realistic deepfakes.

These deepfakes, the report says, allow someone to “fabricate” their identity, and could even enable a litigant to use their own voice to create audio of the opposing party saying words they never spoke.

The STOP report also revealed that programs such as Avatarify, which enables a user to “superimpose” their face onto another person’s in real time, are now readily available and currently being used on conferencing platforms such as Zoom and Skype. And, of course, with more and more court proceedings moving online due to Covid-19, this creates another opportunity for deepfakes to be used.

This is evidently very concerning. But can the technology be identified? Well, yes and no. Back in 2019, researchers from UC Berkeley and the University of Southern California used existing tools to detect when videos were forgeries that had been synthetically generated with AI. The study, which was funded by Google and DARPA, found that the detection technique was accurate at least 92% of the time when used against face replacement and face re-enactment.
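The study itself modelled an individual’s distinctive mannerisms over many hours of footage, which is far more involved than anything that fits here. Purely as an illustrative assumption about how simpler, frame-level detectors are commonly built, and not as the researchers’ method, a sketch might look like this:

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores a single video frame as real (low logit) or fake (high logit)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 112 -> 56
            nn.AdaptiveAvgPool2d(1),          # global average pool
        )
        self.head = nn.Linear(32, 1)          # one logit: fake vs real

    def forward(self, frames):                # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x)

model = FrameClassifier()
logits = model(torch.randn(4, 3, 224, 224))   # four dummy frames
probabilities = torch.sigmoid(logits)          # estimated P(frame is fake)
```

Such a classifier is only as good as the labelled real and fake footage it is trained on, which is exactly the limitation discussed next.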

That being said, while the researchers’ detection technique was effective, it relied on the availability of hundreds of hours of footage for analysis. By contrast, this amount of footage is unlikely to be available for those who have the technology used against them in the courtroom, meaning that deepfakes will be significantly harder to identify there.

Additionally, while the tools that detect deepfakes are reliable now, while the AI is in its nascent stage, developers working on the technology are likely to iron out the flaws that give it away. As the AI becomes more and more lifelike, it will become virtually impossible to tell what is real from what is fake.

Can the technology be regulated?

Despite the vast and potentially dangerous consequences that deepfakes could have for the justice system, there is currently no explicit legislation to regulate the technology in the UK.

Privacy and anti-harassment legislation has already been used to combat the use of the technology in a pornographic setting. In May 2018, city worker Davide Buccheri was jailed and ordered to pay £5,000 in compensation after he created a gallery of deepfake pornographic images of a co-worker. The use of deepfakes for other illegal purposes could similarly be regulated by applying defamation or copyright infringement legislation.

A report by the Centre for Data Ethics and Innovation also pointed out that new legislation to regulate deepfakes could have the “unintended consequence” of eliminating the use of visual and audio manipulation techniques “for socially beneficial uses”. That said, the report did highlight the value of updating existing legislation to cover deepfakes more robustly.

When it comes to deepfakes in the courtroom, Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Center for Internet and Society at Stanford Law School, has gone as far as recommending investment in digital forensic experts to ensure that evidence has not been tampered with.

One thing is for sure: there are still many unknowns when it comes to the potential of deepfake technology. And, as the technology evolves, those in law must remain vigilant and keep their finger on its pulse. After all, the future and legitimacy of the justice system depend on it.

Article Created By Madaline Dunn
