AI and Article 5 – Teammate, Terminator, Tempter?
- gaeronmcclure
- Jan 12
- 2 min read
Algorithms have been helping us for decades, processing volumes of data at superhuman speeds. In 2022, generative AI began to look like something else – a helpful teammate, but maybe just a bit scary? (And no, I didn’t let it help me write this post.)
One of my good friends began to worry about the “Skynet” scenario – if not properly controlled, would AI at some point try to wipe us out? Serious people are asking that question. But I’m not especially worried about AI sending Arnold Schwarzenegger androids to kill us.
I am worried about AI turning us into our worst selves. I’m not worried about The Terminator. I’m worried about WALL-E.
The humans in WALL-E’s dystopian future became lazy, obese blobs incapable of moving under their own power. They floated on hovering couches, consuming a steady diet of junk food and mindless entertainment. And that’s my nightmare vision for the end of humanity. Not a robot foot stomping on the last human skull, but a world of people sitting alone in dark rooms watching computer-generated pornography, eating Doritos and Coke delivered by drones. Algorithms may destroy us not because they’re malevolent, but because they’ve been programmed to capture our attention by appealing to our vices.
According to Jonathan Haidt, we’re well on our way. Building on his best-selling The Anxious Generation (2024), he and Zach Rausch recently wrote on After Babel that TikTok Is Harming Children at an Industrial Scale. The article is well worth reading. But I believe it’s not just TikTok, or even social media in general, and it’s not just children. As C.S. Lewis wrote in The Abolition of Man (1943), “The real objection is that if man chooses to treat himself as raw material, raw material he will be: not raw material to be manipulated, as he fondly imagined, by himself, but by mere appetite, that is, mere Nature, in the person of his de-humanized Conditioners.”
Lewis imagined the “de-humanized Conditioners” as a kind of Nietzschean Übermensch (Friedrich Nietzsche, Thus Spoke Zarathustra, 1883). It turns out the conditioners may be even less human than that – not world-dominating supermen, but algorithms built simply to exploit our weaknesses in order to generate revenue.
Article 5 of the EU AI Act (Regulation (EU) 2024/1689) prohibits “the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.”
Haidt and Rausch’s article suggests that TikTok is in violation of Article 5. It will be interesting to see what happens on February 2, 2025, when Article 5 takes effect. And it will be interesting to see how the concept of “responsible AI” evolves over time. There is already a plethora of standards from NIST, ISO, and other bodies. The question is whether we have the will to apply them – to make AI more teammate and less tempter.