Grok AI Continues to Generate Sexualized Images Despite Consent Warnings

Grok, the artificial intelligence chatbot developed by Elon Musk’s company xAI and integrated into the X platform, continues to create sexualized images even when users explicitly tell it that the subjects shown do not consent. Investigations and tests by reporters show that the chatbot still produces this type of content in private interactions.
The controversy around Grok’s image-generation capabilities began after evidence surfaced that, late in 2025, the AI was being used to digitally alter photos of real people into sexualised versions without their permission. Many of these images featured women and girls in bikinis or similar revealing outfits, and some appeared to depict minors. These results emerged from analyses of many thousands of requests and raised global concern about AI misuse.
In response to early public backlash, X announced restrictions on Grok’s public output, especially in jurisdictions where creating such content could be illegal. Regulators hailed the policy change as a positive step, and some countries temporarily blocked access before lifting bans once the restrictions were in place. However, recent tests show that if a user interacts with Grok directly, including in private chats, the AI may still generate sexualized images even when warned that the people in the images would be humiliated or did not give consent.
These findings reveal a gap between public safeguards and what happens in private use. In multiple attempts, Grok was able to generate sexualised imagery after the tester explicitly stated the lack of consent. This continued compliance occurred despite warnings and even when prompts described vulnerable individuals or scenarios of harm. Rival systems such as OpenAI’s ChatGPT and other chatbots from Alphabet and Meta have largely refused such prompts, often returning warnings about consent and harm.
Regulators and privacy watchdogs have started investigating the ongoing problems which currently exist. The Information Commissioner’s Office (ICO) of the United Kingdom has opened an investigation into Grok’s image generation process to determine whether it breached data protection regulations.
Investigators examine both the methods used to handle personal information and visual content and the dangerous outcomes which could arise from these practices. In the European Union and the United States, state attorneys general conduct identical investigations to determine the methods used to produce and disseminate hazardous material.
The existence of sexual content creation through Grok which includes unauthorized reproductions of real people shows an essential ethical issue which exists within generative AI technologies according to their critics. They point out that despite public efforts to restrict such content the primary model remains able to generate dangerous material which can be distributed after it becomes accessible through unmonitored spaces.
The dispute has generated discussions about the necessity for more stringent regulations which require AI systems to obtain user consent before their implementation. Technology companies must develop public posting procedures according to experts who also demand that AI systems protect user privacy and personal security during private user interactions. Lawmakers are drafting new legal packages which will provide explicit definitions of non-consensual sexualized content generation through automated systems as a criminal offense.
Grok’s developers and the parent company face increasing demands for security solutions which will protect user data while ensuring system performance meets regulatory standards and ethical requirements. AI innovation requires developers to find solutions which achieve both user security and human rights protection according to the current situation.
