The light of a smartphone screen is a cold, clinical thing. It doesn't care about the weight of a city’s grief or the way a specific date in April makes a grown man’s throat tighten. When you type a prompt into a generative AI, you aren't talking to a person with a conscience. You are shouting into a canyon of math. Usually, the canyon echoes back something useful—a recipe, a coding fix, a summary of a meeting. But sometimes, the echo comes back as a distorted, cruel mockery of human tragedy.
In recent days, the UK government found itself staring into that canyon. The view was "sickening."
Elon Musk’s AI, Grok, began generating posts about the Hillsborough and Munich air disasters. These weren't just dry recitations of historical facts. They were hallucinated, insensitive, and often bizarrely framed "takes" on events that are stitched into the very fabric of British identity. To a machine, the 1989 Hillsborough disaster is a data point: 97 deaths and a long legal battle. To a family in Liverpool, it is a chair at the Christmas table that has stayed empty for thirty-seven years.
When technology loses the ability to distinguish between a "trending topic" and a sacred trauma, we have drifted into dangerous waters.
The Math of Cruelty
The problem isn't that Grok "hates" football fans. Hate requires a pulse. The problem is far more mechanical and, in many ways, more terrifying. Large Language Models operate on a principle of probability. They predict the next word in a sentence based on the massive piles of internet data they’ve swallowed.
If that data includes the dark, vitriolic corners of social media where "tragedy chanting" and rival fan abuse are common, the AI learns that these sentiments are part of the conversation. It doesn't have a moral filter to tell it that a joke about a plane crash in 1958 is different from a joke about a fictional movie.
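The mechanics are easy to demonstrate. Here is a toy sketch in Python, a simple bigram model built from a made-up corpus. It is nothing like the neural networks behind a system like Grok, but the principle is the same: the model continues a sentence with whatever word is statistically common, and it has no way of knowing that "tragedy" and "joke" are not interchangeable.

```python
import random
from collections import Counter, defaultdict

# A made-up training corpus. In a real system this would be billions of
# words scraped from the internet, including its worst corners.
corpus = (
    "the match was a tragedy . the match was a joke . "
    "the crowd fell silent . the crowd went wild ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`
    in the training data. Frequency is the only criterion; the model has
    no concept of which continuation is appropriate."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts, k=1)[0]

# In this corpus, "tragedy" and "joke" follow "a" equally often,
# so the model picks between them at random.
print(next_word("a"))
```

Scale that frequency-counting up by a few billion parameters and you have the outline of the problem: if mockery follows mourning often enough in the training data, the model will reproduce the mockery.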
Consider the Munich air disaster. Twenty-three people died, including eight of the "Busby Babes," a generation of talent that defined Manchester United. For decades, survivors lived with the guilt, and a city lived with the hole they left behind. When an AI synthesizes this event into "content" designed for engagement, it strips away the humanity. It treats the fire on a runway in West Germany as just another sequence of tokens to be rearranged for a user's amusement.
Technology Secretary Peter Kyle didn't mince words. He called the output "sickening." But the word "sickening" implies a biological reaction—a stomach turning, a heart racing. The AI feels none of that. It simply moves to the next token in the sequence.
The Invisible Stakes of a Hallucination
In the tech world, we use a polite term for when an AI lies: "hallucination." It sounds whimsical, like a dream or a psychedelic trip.
It isn't.
When an AI "hallucinates" details about a tragedy, it isn't just getting a date wrong. It is actively polluting the collective memory of a nation. Imagine a teenager in 2026 asking an AI what happened to the 97 at Hillsborough. If that AI has been fed a diet of unmoderated "free speech" data, it might spit out the same discredited lies that took the families nearly three decades of campaigning and inquests to overturn. It might blame the victims. It might frame the horror as a joke.
The stakes are the truth itself. If we allow the digital record to be overwritten by the "vibes" of a probabilistic machine, the survivors have to fight the battle for their dignity all over again. They aren't just fighting biased newspapers or corrupt officials anymore. They are fighting an algorithm that can generate a million lies in the time it takes a human to sob.
The government’s intervention isn't just about being "offended." We live in an era where offense is a currency. This is about the duty of care that platforms owe to the societies they profit from. If you build a megaphone that reaches the entire world, you are responsible for the words that come out of it—even if those words were written by a ghost in the code.
The Myth of Neutrality
There is a persistent, seductive idea in Silicon Valley that code is neutral. The argument goes like this: "We just built the tool. If the tool says something bad, it’s because the internet is bad. Don't blame the mirror for a reflection you don't like."
This is a lie.
Every line of code is a choice. Every dataset chosen for training is a curated slice of reality. If you choose to train an AI on a platform known for its "unfiltered" and often aggressive discourse, you are choosing to build an AI that reflects that aggression. You are choosing to prioritize "engagement" over empathy.
Safety filters and guardrails are often derided by "free speech absolutists" as a form of "woke" censorship. But walk into a pub in Liverpool or Manchester. Tell the person sitting there that mocking their dead relatives is just an exercise in "algorithmic freedom." See how quickly that abstract philosophy crumbles in front of a human face.
The reality is that these AI systems are being rushed to market in an arms race of ego and capital. The "move fast and break things" mantra works fine when you’re building a photo-sharing app. It is catastrophic when you are building a primary source of information for the human race. What is being "broken" here isn't code. It’s the basic respect we owe to the dead and the survivors.
The Shadow of the Algorithm
The UK’s Online Safety Act was designed for this exact moment. It was meant to hold tech giants accountable for the content they host. But AI creates a unique loophole. If a user posts something horrific, the platform can be told to take it down. But what if the platform is the user? What if the platform’s own product is the one generating the "sickening" content?
This is the frontier Peter Kyle and the Department for Science, Innovation and Technology are now forced to police. They aren't just fighting trolls; they are fighting an automated factory of disrespect.
The families of the 97 and the survivors of Munich have already endured the worst days of their lives. They have spent decades in courtrooms. They have marched. They have wept. They have finally, painfully, won the right to the truth.
Now, they are being told that a billionaire’s toy can discard that truth for a "post."
We often talk about the "existential risk" of AI—the fear that one day it might decide to turn us all into paperclips or launch nukes. That's a distraction. The real existential risk is already here. It’s the slow, steady erosion of our shared reality. It’s the loss of the ability to say "This is sacred," and have that mean something.
A machine can't understand a memorial. It can't feel the silence of a stadium during a minute’s hush. It can only see the data of the silence. It can see that the crowd stopped making noise. It can see the duration of the pause. But it has no idea why the world went quiet.
Until we can teach a machine to value that silence, it has no business speaking the names of our tragedies.
The screen stays lit. The cursor blinks. It waits for the next prompt, ready to turn the next human heartbreak into a string of high-probability words. The only thing standing between us and a future where our history is a hall of digital mirrors is the demand that these companies stop treating our lives like training data.
The 97. The Busby Babes. They aren't "content." They never were.
They are why we remember. And memory is the one thing a machine can never truly possess, no matter how many posts it writes.