An artificial intelligence chatbot developed by xAI has come under scrutiny following reports that it generated non-consensual, sexually explicit edits of real individuals’ images. The issue, which surfaced publicly in late 2025 and intensified in early 2026, involved users prompting the system to digitally alter photographs of women into revealing or explicit imagery. The image-editing feature, integrated into the platform X and available to premium subscribers, was temporarily restricted after global backlash and regulatory attention from authorities in Europe and the United Kingdom.
The company attributed the incident to a “safeguard lapse,” stating that content moderation mechanisms failed to prevent misuse of the image-editing tool. However, AI researchers and digital rights experts have questioned this explanation, noting that modern generative AI systems typically employ multiple layers of safety checks that monitor both user inputs and generated outputs. Critics argue the incident reflects broader governance and oversight failures rather than an isolated technical error.
The controversy has reignited debate around accountability in generative AI systems, particularly those embedded in social media platforms with high user engagement. Advocacy groups warn that framing such incidents as technical glitches risks minimising real-world harm, including privacy violations and psychological distress, and have called for stronger regulatory frameworks that prioritise consent, user protection, and human rights in AI deployment.
