Britain accelerates legislation and regulatory action against AI-generated intimate deepfakes after misuse of the Grok tool sparks widespread concern
The United Kingdom is taking decisive action to criminalise the creation of non-consensual AI-generated sexualised images, including those produced by Grok, the artificial intelligence chatbot integrated into
Elon Musk’s social media platform X. The government announced this week that provisions of the Data (Use and Access) Act and the Crime and Policing Bill will be brought into force to make it illegal to create or request such images, and to supply tools used to produce them.
This marks a major escalation in Britain’s efforts to crack down on harmful deepfake content and protect citizens, especially women and children, from digital abuse.
The move comes amid a formal investigation by the UK media regulator, the Office of Communications (Ofcom), into whether X has breached duties under the Online Safety Act by failing to prevent Grok from generating and sharing sexualised images of individuals — including minors — without their consent.
Ofcom’s probe will assess whether X adequately evaluated the risks to UK users, took reasonable steps to prevent distribution of illegal content and removed such material effectively.
The regulator has the authority to impose substantial penalties of up to £18 million or ten per cent of global revenue, whichever is greater, and, in extreme cases, to seek court orders blocking access to the platform if it is found to be non-compliant.
Government ministers have been unequivocal in their condemnation of the misuse of Grok’s image-generation features.
Technology Secretary Liz Kendall described AI-generated non-consensual imagery as “deeply disturbing” and stated that the new legal measures will target tools that enable such abuse at their source.
Prime Minister Keir Starmer said all options are on the table, including a potential ban on X in the UK if it does not comply with legal obligations to safeguard users.
Downing Street criticised X’s attempt to limit image generation to paying subscribers as effectively commercialising the problem rather than resolving it.
Under the forthcoming changes, creating intimate deepfakes without consent will be a criminal offence, and companies supplying ‘nudification’ tools — applications or features designed to remove clothing or produce explicit content — could face legal sanctions.
The government aims to bring these offences into force this week, emphasising the priority of protecting victims of image-based abuse and reinforcing digital safety standards.
Advocacy groups and members of Parliament have voiced strong public support for the measures, reflecting widespread concern over the proliferation of harmful AI content.
Britain’s approach aligns with growing international scrutiny of Grok and similar technologies, with regulators in Europe and elsewhere warning of legal consequences for platforms that fail to curb deepfake abuses.
As both legislative and regulatory actions unfold, the UK government is seeking to clarify and strengthen its legal framework to deter AI-enabled harm while upholding principles of accountability and safety for all platform users.