Christal Hayes and
Osmond Chia
Elon Musk’s AI model Grok will no longer be able to edit photos of real people to show them in revealing clothing, after widespread concern over sexualised AI deepfakes in countries including the UK and US.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.
“This restriction applies to all users, including paid subscribers,” reads an announcement on X, which operates the Grok AI tool.
The change was announced hours after California’s top prosecutor said the state was probing the spread of sexualised AI deepfakes, including of children, generated by the AI model.
X, formerly known as Twitter, also reiterated in a statement on Wednesday that only paid users will be able to edit images using Grok on its platform.
This will add an extra layer of protection by helping to ensure that those who try to abuse Grok to violate the law or X’s policies are held accountable, it said.
Users who try to use Grok to generate images of real people in bikinis, underwear and similar clothing will be stopped from doing so in accordance with the laws of their jurisdiction, X’s statement said.
With NSFW (not safe for work) settings enabled, Grok is supposed to allow “upper body nudity of imaginary adult humans (not real ones)” consistent with what can be seen in R-rated films, Musk wrote online on Wednesday.
“That is the de facto standard in America. This will vary in other regions according to the laws on a country by country basis,” said the tech multi-billionaire.
Musk had earlier defended X, posting that critics “just want to suppress free speech” along with two AI-generated images of UK Prime Minister Sir Keir Starmer in a bikini.
In recent days, leaders around the world have criticised Grok’s image editing feature.
Over the weekend, Malaysia and Indonesia became the first countries to ban the Grok AI tool after users said photos had been altered to create explicit images without consent.
Britain’s media regulator, Ofcom, said on Monday that it would investigate whether X had failed to comply with UK law over the sexual images.
Sir Keir warned X could lose the “right to self regulate” amid a backlash over the AI images, but later in the week said he welcomed reports that X was taking action to address the issue.
Some UK MPs also left the X social media platform in the wake of the outcry.
California Attorney General Rob Bonta said on Wednesday: “This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet.”
Policy researcher Riana Pfefferkorn said she was surprised X took so long to deploy the new Grok safeguards, and that the editing features should have been removed as soon as the abuse began.
Questions remain over how X will enforce its new policies, such as how the AI model will determine whether an image depicts a real person and what action it will take when users break the rules, Pfefferkorn said.
Musk has not presented the company in a serious light either, she said, adding that it would help if he stopped “doing things like re-posting an AI image of Keir Starmer in a bikini.”
Additional reporting by Katy Bailes
