30th January 2024

The dangers of deepfakes on social media

By Christopher Hutchings

The article is co-authored by Georgina Johnson, Paralegal, Media Disputes

As the world of Artificial Intelligence evolves, so does our need to regulate it. The groundbreaking Online Safety Bill, which received Royal Assent and became law in October 2023, seeks to regulate the sharing of pornographic deepfakes on social media by criminalising the act.

The sharing of pornographic deepfakes – images, videos or sound recordings manipulated to resemble someone without consent – will be punishable by up to six months in prison, and by up to two years if intent to cause distress, alarm or humiliation, or to obtain sexual gratification, can be proven.

Although this is a step in the right direction, AI remains a notoriously under-regulated technology, largely because it is uncharted territory, and the question remains of how the creation of deepfakes will be regulated. The Online Safety Act 2023 criminalises only the sharing of a pornographic deepfake, not its creation.

The issues and dangers posed by deepfakes are not limited to sexually explicit content. Deepfake video clips impersonating news presenters from the likes of CBS, the BBC and CNN have circulated on the social media site TikTok.

Research conducted by the communications company Fenimore Harper found that more than 100 deepfake advertisements depicting Rishi Sunak were promoted as paid ads to around 400,000 users on Facebook between 8 December 2023 and 8 January 2024. One of the deepfake advertisements showed a BBC newsreader, resembling Sarah Campbell, appearing to present breaking news about a scandal involving Rishi Sunak, followed by a clip of ‘Rishi Sunak’ making a statement on the matter. False advertising is against Facebook’s policies, yet the adverts still reached a vast number of users.

The danger here is the spread of misinformation. Research from Ofcom (Ofcom: News Consumption in the UK: 2023) shows that 47 per cent of UK adults use social media as a news source, rising to 71 per cent among young adults (16-24). If such a high proportion of users is not fact-checking every video they see, misinformation can spread easily.

Following the discovery of the deepfake advertisements on Facebook, the BBC released a statement urging people to ensure they are getting their news from a trusted source and reminding them of the 2023 launch of ‘BBC Verify’, its new brand which seeks to ‘address the growing threat of disinformation and build trust with audiences by transparently showing how BBC journalists know the information they are reporting’.

Meta, the company that owns Facebook, also responded to the research asserting that “[they] remove content that violates [their] policies whether it was created by AI or a person. The vast majority of these adverts were disabled before this report was published and the report itself notes that less than 0.5% of UK users saw any individual ad that did go live… Since 2018, we have provided industry-leading transparency for ads about social issues, elections or politics, and we continue to improve on these efforts.”

Ofcom has stated that, over time, it will provide guidance for companies on how to comply with their duties under the Act in three phases:

  • Phase one: illegal harms duties;
  • Phase two: child safety, pornography and the protection of women and girls; and
  • Phase three: transparency, user empowerment and other duties on categorised services.

Draft codes and guidance for the illegal harms duties under phase one were published on 9 November 2023. Ofcom expects all three phases to be complete by mid-2025.

It will be interesting to see how the Online Safety Act 2023 is implemented and whether case law develops to adequately regulate the sharing of deepfakes on social media.

Hamlins acts for individuals facing intimate image-based abuse and is skilled at creating a bespoke strategy depending on the specific facts of the case.

Hamlins’ Media Disputes department is one of the largest and most successful Media Disputes teams in the UK and is widely recognised as an advisor of choice for both public and private figures seeking advice in relation to defamation, reputation management, pre-publication libel and privacy law.

If you would like to find out more about how Hamlins can help you, please get in touch.
