The phenomenon of ‘fake news’ and the spread of misinformation is not new, but advancements in technology, in particular ‘deepfakes’, have highlighted the seriousness of the threat in a way not seen before. Deepfakes have evolved significantly in recent years, and the tell-tale signs that once gave the technology away (odd hand or mouth movements, or strange pronunciation, for example) are becoming harder to detect. Deepfakes are also now extremely easy to create. Now is the time to introduce regulation in this area, in order to prevent harmful uses of the technology and to create an environment in which positive use cases can emerge.
What is a ‘Deepfake’?
The technology behind deepfakes is complicated, but in basic terms deepfakes use a form of Artificial Intelligence to manipulate faces and voices in videos. The underlying software is fed many images of an individual from different angles (generally, the more data available the better, which is one reason celebrities are often targeted: countless images of them exist online) and superimposes that face onto an actor’s, like a digital mask. The result is a video of a character that looks and sounds like the subject of the deepfake, but says and does whatever the creator of the deepfake decides.
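For readers curious about the mechanics, the classic face-swap approach uses a single shared encoder and two identity-specific decoders: faces of both people are compressed into a common representation, and a frame of the actor is then decoded with the target’s decoder. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, using random tensors and toy fully-connected layers in place of real training data and convolutional networks; production tools add face detection, alignment and blending on top.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder idea behind
# classic face-swap deepfakes. Random tensors stand in for real, aligned face
# crops; this is illustrative only, not a production pipeline.

import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # flattened 64x64 RGB face crop (toy size)
LATENT = 256            # shared, identity-agnostic representation

# One encoder is shared by both identities; each identity gets its own decoder.
encoder = nn.Sequential(nn.Linear(IMG_DIM, 1024), nn.ReLU(), nn.Linear(1024, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, IMG_DIM))
decoder_b = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, IMG_DIM))

optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

# Placeholder "datasets": in practice these would be many aligned face crops
# of person A (e.g. the target celebrity) and person B (the actor).
faces_a = torch.rand(32, IMG_DIM)
faces_b = torch.rand(32, IMG_DIM)

# Training: each decoder learns to reconstruct its own person from the shared code.
for step in range(100):
    optimiser.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimiser.step()

# The "swap": encode a frame of the actor (B), but decode it with A's decoder.
# The output keeps B's pose and expression while rendering A's face -- the
# "digital mask" described above.
with torch.no_grad():
    actor_frame = torch.rand(1, IMG_DIM)
    swapped_face = decoder_a(encoder(actor_frame))
print(swapped_face.shape)  # torch.Size([1, 12288])
```

The point to note is that, once such a model is trained, producing swapped frames is trivially cheap and fast, which is part of the reason deepfakes have become so easy to create and distribute.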
A recent example emerged over the Christmas period in the UK, when Channel 4 (a UK television network) released a video in which ‘The Queen’ delivered an ‘alternative Christmas message’. This was of course not Her Majesty, but a deepfake. The video attracted wide criticism from the UK public, with the media watchdog Ofcom receiving over 200 complaints. A spokesperson for the network responded that it was designed to give a ‘powerful reminder’ of the dangers of misinformation in the digital age, and the video itself ends with the actress used to create the deepfake revealing herself. There are countless other examples on the web.
Some consider this technology amusing, a bit of fun. But it can have serious implications for the public and for those who are the subject of a deepfake.
One of the original uses of the technology was in pornography. Members of the public and even celebrities found their faces superimposed onto the bodies of adult performers on pornographic websites. Such content can have, and has had, serious personal and professional consequences for those individuals.
A number of politicians have also been the subject of deepfakes: see this video created by Framestore (the visual effects company that devised the Queen’s deepfake mentioned above), in which ‘Boris Johnson’ and ‘Donald Trump’ ‘demystify deepfakes’, or this video of ‘Boris Johnson’ promoting an event for Framestore. The dangers of this kind of use are obvious: imagine scrolling through social media and seeing ten videos of briefings by government officials on coronavirus. Now imagine that one of those ten videos, quickly scrolled past or half-watched, is a deepfake containing dangerous misinformation. How easily could that video be identified as fake, and what would the consequences of such misinformation be?
As we can see, deepfakes pose a serious misinformation threat. They can also have severe consequences for those individuals that are the subject of the deepfake.
So what protections are there in the law to combat these issues?
English Law
In the UK, the answer is that English law is, at present, wholly inadequate to deal with deepfakes. The UK has no laws specifically targeting deepfakes, and there is no ‘deepfake intellectual property right’ that could be invoked in a dispute. Nor does the UK have a specific law protecting a person’s ‘image’ or ‘personality’. This means that the subject of a deepfake must rely on a hotchpotch of rights that are neither sufficient nor adequate to protect the individual in this situation. Meanwhile, the hosts and intermediaries providing the infrastructure are largely shielded from legal claims under the EU E-Commerce Directive as implemented into English law.
As mentioned above, deepfakes are created from publicly available videos, audio and images of the individual concerned. In many cases, celebrities will not own the copyright in that material, so they may struggle to establish a claim of copyright infringement themselves and will be reliant on those who do own the copyright (e.g. film studios and photographers) taking action and seeking an injunction.
If a celebrity is depicted endorsing a product that they have not in fact endorsed, they may be able to bring a claim under the tort of passing-off. If the celebrity has registered their name as a trade mark and the deepfake uses that name, they may have a claim for trade mark infringement. If a celebrity is depicted engaging in lewd, offensive or illegal conduct, a claim in defamation may be available. Celebrities may also be able to rely on data protection legislation to seek to prevent the misuse of their likeness (being their personal data). Harassment claims are a further possibility.
However, the legal options presently available to celebrities and other public figures may not achieve the desired result. Once a deepfake is on the internet, it is likely to be difficult to find and remove every copy.
Analysis
As industry commentators such as Robert Wegenek have pointed out, the UK should take swift action to regulate deepfakes, as the current law is inadequate to deal with this new technology. Misuse of the technology inflames public mistrust, ruins reputations, creates openings for fraud and stifles progress in this area. Appropriate regulation, on the other hand, could unlock the benefits of the technology. There are positive use cases, and the entertainment industry is one that could benefit. In France, for example, when an actress could not film due to coronavirus restrictions, deepfake technology was used, with her consent, in a soap opera. The film industry offers countless further possibilities: a lip-sync deepfake can be used to dub a film, so that Daniel Craig’s James Bond suddenly speaks every language. And in museums, imagine a deepfake of Winston Churchill teaching children history!
The government has started to recognise the importance of looking into this issue, in the context of the criminal law at least. In 2018, it tasked the Law Commission with looking into deepfakes in the context of pornography, and the Commission recommended that “the criminal law’s response to online privacy abuse should be reviewed, considering in particular whether the harm facilitated by emerging technology such as ‘deepfake’ pornography is adequately dealt with by the criminal law.”
As usual, the law is lagging two steps behind technology, much to the detriment of society in our view.