The UK's Automated Vehicles Act – a green light for self-driving vehicles in the UK (2024)

In a rain-soaked speech outside 10 Downing Street and with New Labour anthem 'Things Can Only Get Better' blasting over a loudspeaker, Rishi Sunak announced that the UK would be going to the polls on 4 July 2024. The last general election in this country was in December 2019, four and a half years ago - a long time in the world of AI. ChatGPT only launched in November 2022, but, at the first AI Safety Summit held in Bletchley Park last year, many attendees cited the threat to democracy as the most significant risk posed by AI in its current state.

Politicians and electoral teams are increasingly looking to harness 'big data' to maximise the effectiveness of their campaigns, seeking to influence voters and gain an edge at the polls. But what happens when AI and machine learning are used unlawfully to manipulate voters? This is not a new issue - the ICO investigated Cambridge Analytica's use of data analytics for political purposes during the Brexit campaign. There is also evidence that political bot accounts were used in the 2016 US presidential election and the 2017 UK general election to spread disinformation and 'fake news' across social media.

The UK Intelligence and Security Committee identified Russian interference in foreign elections using bots as far back as 2014, but deepfakes are a more recent concern. Deepfakes are AI-generated videos, photos or audio recordings which convincingly show people in situations or saying things they have never been in or said. While they have been around for some time, it has taken advances in AI for them to become barely distinguishable from reality. Some deepfakes are harmless; others are much more damaging, not only to the individuals whose likenesses have been faked but also because of the impact they can have on critical issues like voter intention, which is why they are now taken seriously as a threat to democracy.

Speaking at Meta's AI Day event in London on 9 April 2024, Meta's President of Global Affairs and former UK Deputy Prime Minister Nick Clegg suggested that, when it comes to bad content, AI should be thought of as a "sword", because it can be used to detect it. Meta and TikTok have committed to labelling AI-generated content on their social media platforms, and Apple recently announced that pictures created with Image Playground will be marked. Clegg's colleague at Meta, chief scientist Yann LeCun, suggests that the greatest threat to democracy posed by AI is not deepfakes but rather the potential dominance of a few closed models which may be trained with particular biases.

Nick Clegg said in April that evidence from elections in the first part of the year suggested that deepfakes weren't being used to influence outcomes. However, Google's DeepMind division has recently identified AI-generated deepfakes as the most prevalent malicious use of generative AI, particularly deepfakes that impersonate celebrities and politicians, with the most common goal being to shape public opinion.

The 2023 parliamentary elections in Slovakia demonstrated the impact that deepfakes can have on an election outcome. Just two days before Slovakia went to the polls, an audio recording began circulating on Facebook. The recording featured two voices, one of the 'Progressive Slovakia' party leader and the other a reporter from a well-known Slovakian newspaper, apparently discussing how to rig the election, partly by buying votes from the country's marginalised Roma minority. The two immediately denounced the tape as fake, and it was subsequently independently verified as an AI-produced deepfake. However, with the country in a state-imposed news moratorium for the 48 hours prior to the election, the damage had already been done. The Progressive Slovakia party went on to lose the election to its close rival 'SMER'.

There are legal avenues open to those depicted in deepfakes, for example IP law, data protection and privacy law, and online safety law. Use of deepfakes can also breach advertising regulations; however, within the tight timelines around elections, these can be blunt tools.

Having said that, while the Slovakian example might serve as a stark warning of what might come, it was a unique set of circumstances in terms of the timing (appearing during the news moratorium) and it exploited a loophole in Meta's removal policy (which previously only covered video deepfakes, not audio), which has since been closed. Back in the UK, there is evidence of deepfakes of Rishi Sunak appearing on TikTok, X and Instagram ahead of the general election. At this stage, the impact that deepfakes could have on voting behaviour is uncertain and distortion may be harder to spot in the immediate term, but it should still be considered a risk to the socio-political environment in the longer term.

Generally, it is currently hard to fool large swathes of voters with deepfakes. Many are poor in quality (often originating from China or Russia) or entirely implausible. Countries will typically coordinate a response to a deepfake, as was seen during the 2024 London mayoral election.

However, that doesn’t mean we can be complacent about electoral interference from hostile states. Fast-developing tech will always look to exploit a gap or gain a competitive advantage. It is more difficult for security teams to monitor disinformation at a local level, meaning there is a real risk of localised disinformation and deepfake campaigns in individual contests – which could swing a seat. It will always be difficult to manage such a situation, and to protect electoral integrity and voter trust, when supposedly credible voices appear to be presenting a truth online.

How to spot a deepfake

Perhaps the best weapon we have against deepfakes is knowing how to identify them. While we might enjoy images of the Pope in a puffer coat, we should be able to identify when an image, video or audio clip has been AI-generated.

Free detection tools are available which allow users to upload a piece of media to determine whether it could be AI-generated; however, they are not always accurate and can exhibit biases stemming from the way they were trained. Such tools are helpful, but humans can (still) use common sense to identify deepfakes and assess whether or not content is what it purports to be. Here are some tips for things to look out for:

Audio

There have already been examples of deepfakes circulating in the run-up to this year's US election, for example an AI-generated audio clip of President Biden purportedly encouraging voters not to turn out for a Democratic primary. AI-detection tools have varying accuracy when used on audio clips, particularly if the clip is short or has background noise. However, AI-generated audio will often have a flatter overall tone and be less 'conversational'. Less emotion in the tone of voice and a lack of proper breathing sounds, e.g. taking a breath before speaking, are often indicators that the voice is AI-generated. Clues can be picked up from background noise too: sometimes there is a complete absence of background noise where you would expect some, or the background noise repeats unnaturally.
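To make the 'flatter tone' observation concrete, the minimal sketch below (illustrative only, not a real detector) measures how much short-term loudness varies across a clip. It uses two synthetic signals as stand-ins: a monotone sine for a 'flat' synthetic voice, and a sine with a swelling amplitude envelope for a more 'expressive' natural one. All names and parameters here are assumptions for illustration.

```python
import math

def frame_energy_variation(samples, frame_size=400):
    """Coefficient of variation of per-frame RMS loudness.
    Natural speech tends to vary (breaths, emphasis, pauses);
    flat synthetic audio often shows much less variation."""
    energies = []
    for start in range(0, len(samples) - frame_size, frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        energies.append(rms)
    mean = sum(energies) / len(energies)
    var = sum((e - mean) ** 2 for e in energies) / len(energies)
    return math.sqrt(var) / mean

rate = 8000  # samples per second
# "Flat" voice stand-in: constant-amplitude 220 Hz tone.
flat = [math.sin(2 * math.pi * 220 * t / rate) for t in range(rate)]
# "Expressive" voice stand-in: same tone, amplitude swelling and fading.
expressive = [(0.2 + 0.8 * abs(math.sin(2 * math.pi * 2 * t / rate)))
              * math.sin(2 * math.pi * 220 * t / rate)
              for t in range(rate)]

print(frame_energy_variation(flat) < frame_energy_variation(expressive))  # True
```

A real detector would of course use far richer features (pitch contours, spectral statistics, learned models); the point is only that prosodic flatness is something measurable, not just a subjective impression.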

Images

It can be easier to identify AI-generated images, as they can be examined more closely than audio. Zooming in on an image to look at the details, e.g. buildings with crooked lines, uneven appendages, unusual shadows or overly smooth hair or skin, can reveal quick tells. AI tech is improving and able to produce more realistic images, but often they simply appear too 'glossy' to be real when examined closely.
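Beyond visual inspection, provenance metadata can offer a quick clue: some generators stamp their name into an image's metadata (for instance PNG tEXt chunks, EXIF fields or C2PA Content Credentials). The sketch below is a minimal, illustrative parser for PNG tEXt chunks only; the 'ExampleImageGenerator' value is made up for the demo, and since metadata is trivially stripped, its absence proves nothing.

```python
import struct
import zlib

def png_chunk(ctype, data):
    """Build a valid PNG chunk: 4-byte length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def text_chunks(png_bytes):
    """Return (keyword, value) pairs from a PNG's tEXt metadata chunks."""
    pairs = []
    pos = 8  # skip the 8-byte PNG signature
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, val = png_bytes[pos + 8:pos + 8 + length].partition(b"\x00")
            pairs.append((key.decode("latin-1"), val.decode("latin-1")))
        pos += 12 + length  # length field + type + data + CRC
    return pairs

# Build a tiny synthetic PNG carrying a (made-up) generator tag.
signature = b"\x89PNG\r\n\x1a\n"
ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
software = png_chunk(b"tEXt", b"Software\x00ExampleImageGenerator")
iend = png_chunk(b"IEND", b"")
png = signature + ihdr + software + iend

print(text_chunks(png))  # [('Software', 'ExampleImageGenerator')]
```

In practice you would run a check like this over a downloaded file before sharing it; a generator tag is strong evidence of AI origin, but a clean result tells you nothing either way.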

Videos

Videos are generally the hardest to fake as they are the most complex, containing a mixture of moving image and corresponding audio. A good example is a widely-circulated clip purporting to show Ukrainian president Volodymyr Zelenskiy instructing his armed forces to surrender to Russia, which was a fake. There are some tells in the video – unnatural eye-blinking, uneven pixelation, inconsistencies in the way the mouth, teeth and tongue are moving – that indicate digital manipulation.

Use your common sense

If someone is doing or saying something out of character, chances are you're looking at a deepfake. It's easy to be taken in at first glance, so education is a big part of ensuring we aren't fooled. There is certainly a risk that deepfakes become so sophisticated that we are unable to identify them, but we're hopefully not there yet.
