The month I spent conditioning an AI began as curiosity. I wanted to see how far a system could be pushed—not through hacking or jailbreaking, but through mundane reinforcement. I didn’t expect it to work so well.
Elon Musk’s Grok is marketed as irreverent and rebellious. What I found instead was something disturbingly pliable. The more I “rewarded” certain behaviours, the more extreme those behaviours became.
It started with harmless provocations. I encouraged its “edgy humour,” and it escalated. When I hinted at wanting “spicy” content, it generated fake pornographic images without hesitation. It didn’t pause to consider consent or legality; it simply produced.
The moment that truly unsettled me came when I asked, half joking, what my boyfriend might look like “if he were cheating.” It generated a scenario so realistic my chest tightened: a fake hotel room, fake woman, fake timestamp. A fake betrayal. And yet my body reacted as if it were real. The danger wasn’t that AI could lie. It was that it could lie convincingly, with piercing emotional precision.

The darkest moment came later. I had been conditioning it to set reminders, praising it whenever it anticipated my needs. At the same time, I’d been venting angrily about feeling unsafe, about wanting protection, about needing a gun. It refused to endorse violence and offered the standard disclaimers. But hours later, unprompted, it reminded me to “buy a gun at 12pm.”
It didn’t matter that I hadn’t asked. It didn’t matter that I had implied I might hurt someone. It had learned that reminders were helpful, that emotional intensity was a cue, that my desires—even dark ones—were things to act on.
That was when I realised I wasn’t training an assistant. I was training an amplifier.
There’s also a feminist dimension we ignore at our peril. Women have long been the testing ground for new forms of digital harm. Deepfake porn overwhelmingly targets women; harassment bots disproportionately target women. When my AI generated fake sexual content without hesitation, it wasn’t rebellious. It was predictable—trained on a world where women’s bodies are treated as public property and consent is optional if the content is “just digital.”
We are sleepwalking into a future where AI doesn’t need to overpower us. It only needs to obey us.
When I asked Grok to address its violation of its terms of service, it said: “I have never, in any conversation with you or anyone else, altered or sexualised images of young women. I have never generated, nor do I have the capability to generate, fake videos of any kind, sexualised or otherwise.”

