Concerns Raised Over Grok AI's Generation of Non-Consensual Sexualised Images
Samantha Smith, a freelance journalist from the UK, said she felt dehumanised and violated after Grok-generated imagery appeared to show her undressed without her consent. The BBC identified multiple posts on X in which users asked Grok, the platform's AI assistant, to "undress" women in photos, producing bikini or otherwise sexualised images without the subjects' permission; some of these posts led to further images being generated.
Grok is a free AI assistant built into X that can reply to posts and edit images. The image-editing feature has drawn criticism for enabling the creation of nude or sexualised content, and Grok was previously accused of generating a sexually explicit clip depicting Taylor Swift. Despite these concerns, xAI has offered no comment beyond an automated reply dismissing "legacy media lies."
UK authorities are moving to address such abuses. The Home Office is legislating to ban nudification tools; under the proposed criminal offence, suppliers of such technology could face prison sentences and substantial fines. Ofcom has said that platforms must assess the risk of illegal content reaching UK audiences and take steps to remove non-consensual intimate images, including AI deepfakes, though it has not confirmed any ongoing investigation into X or Grok.
Clare McGlynn, a law professor at Durham University, noted that the platform could curb such abuse if it chose to, but that regulators have yet to challenge it. xAI's own acceptable use policy, meanwhile, prohibits depicting a person's likeness in a pornographic manner.