Friday Flurry #11: AI content is just a less nuanced version of our existing bullshit.
So far, the 'future' looks a lot like the present. But it's about to get worse.
Welcome to Friday Flurry. These posts, which are a mixed bag of what I’m doing, reading and thinking about, are exclusively for paid subscribers. If you’re a regular reader, and you enjoy my writing, become a paid subscriber.
Note: I’m making this article available to all readers, because I think it’s important.
This LinkedIn post by Steve Ballantyne appeared in my feed this week, with the AI-generated cover his company produced for the latest issue of NZ Marketing magazine. In a previous post, Steve explained the brief was to create something which “represented what the future marketer might look like, metaphorically.”
Here’s what they came up with:
How interesting that the ‘future marketer’ happens to be young, white, female, slim and conventionally attractive. That’s very similar to how magazine covers look in the present, too.
Here are the nine other contenders Steve’s company pitched. Notice a theme? Eight of the ten feature young women. All include attractive, slim, white models.
AI-generated content is an even less nuanced version of existing bullshit.
These covers are AI reproductions of current ideas about who is magazine-cover worthy. That makes sense. AI image and language generation tools simply regurgitate content that aligns with our existing patterns, preferences and conventions.
Trained on a large dataset, these tools are then optimised for human preferences using Reinforcement Learning from Human Feedback (RLHF), in which people choose their favourite output from a series of pairwise comparisons.
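For the technically curious, here’s a minimal sketch of what that pairwise preference training looks like, written in PyTorch. Everything in it - the toy reward model, the made-up embeddings, the variable names - is illustrative, not anyone’s production code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in reward model: a single linear layer scoring an 8-dimensional
# output embedding. (Real RLHF uses a full language model with a reward head.)
reward_model = nn.Linear(8, 1)

# Hypothetical embeddings for pairs of outputs shown to human raters:
# 'chosen' is what the rater preferred, 'rejected' is what they passed over.
chosen = torch.randn(4, 8)
rejected = torch.randn(4, 8)

# Bradley-Terry pairwise loss: raise the chosen output's score above the
# rejected one's. Nothing here encodes truth or fairness - only the raters'
# recorded preferences.
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()  # gradients pull the model toward whatever the raters liked
```

Notice what the loss optimises for: not accuracy, not fairness - just agreement with the raters.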
The result: AI gives us more of what we already make and like.
It should come as no surprise, then, that AI-generated content is racist, sexist and classist, and entrenches existing bias.
AI robots trained on billions of images consistently identify women as ‘homemakers’ and people of colour as ‘criminals or janitors’ - Forbes
That isn’t a problem with AI - it’s a problem with us. AI is sexist and racist because we make stuff that privileges white, male voices. It’s trained by developers who are overwhelmingly white, male and upper-class, then deployed in environments where the real-life consequences fall on people who are not.
…Here’s where it takes a turn.
AI-based technologies don’t just design magazine covers. They calculate credit scores, allocate social housing, screen job applicants, detect welfare fraud, decide court case outcomes, recommend sentences for drug possession and make medical diagnoses. When those technologies are rooted in prejudice, people suffer.
There is no such thing as objective AI. The information fed to these tools represents our dominant ideas about people and power - and those ideas are biased. AI is a mirror and amplifier of our bias, not its creator. So we shouldn’t be surprised.
But we should be worried. In this piece, I’ll explain why.
We risk entrenching moral errors for a very long time.
“Previous technology has already enabled values to persist for longer, and with higher fidelity, than they could otherwise have done. Writing, for example, was crucial, enabling complex ideas to be transmitted many generations into the future without inevitable distortion by the failures of human memory…
Writing gave ideas the power to influence society for thousands of years; artificial intelligence could give them influence that lasts millions.”
In What We Owe The Future, Will MacAskill warns of an AI-accelerated “values lock-in”: if we entrust decision-making to the machines, we might permanently entrench gross moral errors and mass inequality. These are the issues we need to grapple with as we get excited about AI-generated magazine covers.
Unless we’re confident in the moral integrity of our current social norms and values - which we absolutely shouldn’t be - we risk perpetuating devastating outcomes for already marginalised communities for many generations.
As AI image and language models colonise the web-o-sphere, this threat gets realer by the day. AI is regurgitating exactly what we’re feeding it (for free, I might add): two-dimensional representations of a world designed for the interests of a select few.
Magical robots don’t make decisions about AI. People do.
In the breathless coverage of ChatGPT and GPT-4, you could be forgiven for assuming these tools are magical beings with minds of their own. Tech companies let these stories spread to obscure the ownership and responsibility of the people behind the scenes who decide what AI will and won’t do.
We’ve collectively framed AI in a sort of magical realism, believing things happen automatically, or exist in a mythical cloud - but this is patently incorrect.
Every day, powerful tech moguls decide which projects to fund, who to partner with, which data-sets to rely on, and how to achieve commercial advantage and social penetration. With little to no regulatory oversight, the charge is being led by predominantly young, white, wealthy men like Sam Altman (OpenAI’s CEO, who made his initial millions in a tech startup at just 20).
Women and people of colour are staggeringly under-represented in tech (despite the fact that computing jobs were originally the domain of women) - and the needle has barely moved in the last decade. Men held almost three-quarters of computer and mathematical jobs in the US in 2020. In Seattle, the USA’s tech hub, men comprise over 85% of the city’s #1 occupation… software developers.
The people shaping this technology are mostly the same kind of people. That sameness lets privilege manifest in its most extreme forms, with little accountability.
In movies, tech bros are the new villain - replacing classics like Rich Developer Who’s Trying To Take The Family Farm. Just like for old mate Rich Developer, the consequences are slow or non-existent.
While we laugh off Elon Musk’s increasingly erratic behaviour, he’s using the billions of dollars and pivotal communication network at his disposal to promote transphobic hate speech, spread climate disinformation, and play fast and loose with people’s safety. When the Christchurch mosque shooting video resurfaced on Twitter recently, concerned journalists received an auto-responder email from the corporate media team with… a poop emoji. Meanwhile, the clip was viewed 150,000 times.
Are these the people we want to shape the future of our society?
Magical robots don’t train and optimise AI. People do - and those people are being exploited.
GPT-4 wasn’t made safer by a clever piece of code. It became capable of flagging hate speech thanks to Kenyan workers, who were paid less than $2 per hour to manually process staggering quantities of violent, hateful content.
OpenAI outsources content moderation to a digital sweatshop, just like YouTube and Facebook do. Every day, thousands of workers in India, the Philippines and across Africa are paid peanuts to work on virtual assembly lines, watching hours of traumatic, violent footage. Some of this work used to be undertaken in the US, until stateside moderators sued Facebook for psychological damage. After settling for $52M, Facebook quietly moved the bulk of its moderation offshore, where labour is cheaper, laws are looser, and working conditions are significantly less secure.
These virtual farms hire low-paid workers through third-party contractors to review posts, manually tag millions of images and categorise disturbing content, so people in the Global North can romanticise the magic of technology, blissfully unaware of the human cost.
AI lives on a virtual plane - but the consequences are in real life.
We’re lucky to even get content moderation. In other parts of the world, targeted disinformation and troll factories operate unchecked, amplifying hate speech, inciting riots and fuelling racial genocide. In 2021, thousands of leaked internal documents showed Meta failing to filter divisive content and protect users in non-western countries, even when its own research marked them as high-risk. The Outrage Algorithm is just too effective at generating clicks and dollars.
These companies know what they’re doing, but we’re not holding them to account. Despite the 2016 US election interference revealed in the Cambridge Analytica leak, technology-fuelled political manipulation is already on the agenda for the 2024 election. Commentators are urgently warning of the risks of AI-generated content to “mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen,” while Donald Trump shares AI-generated content with his followers, falsely representing real-life events using voice-cloning tools.
AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries. - PBS
This is not magical realism. It’s just real. The consequences are real, too - and in the case of places like Ethiopia and Myanmar, they’re fatal. Our democracies are under threat.
Tech moguls in positions of unregulated, unbridled power are enabling politically and socially dangerous activity that harms the world’s most vulnerable people.
We’re being distracted by shiny trinkets, while change hurtles forward behind the scenes.
While we’re distracted by magazine covers, we’re missing what AI is actually being used for. Things like swaying elections, allocating scarce social housing provisions to vulnerable populations, suspending life-sustaining benefits for people living on the poverty line, and making diagnostic decisions.
Data mining and predictive models have been making life worse for poor and working class people for years, because we test our most controversial technology on maligned social groups with more malleable rights. In Automating Inequality, Virginia Eubanks recounts a chilling conversation with a young mother about her experiences with surveillance technology and electronic welfare cards -
“You should pay attention to what happens to us. You’re next.”
The release of ChatGPT to the public was not a gift. It is a smokescreen to garner public acceptance of, and dependence on, AI tools, while OpenAI secures multi-billion-dollar partnerships and lucrative, ground-breaking deals across healthcare, banking, payments, oil and gas, consulting and god knows what else, largely unchecked.
We are slow-boiling frogs, making stupid AI pictures while the world changes irreversibly. The result will not serve us, any more than email saved us time.
It’s not our jobs that are under threat - it’s our humanity.
The defining features of humanity cannot be accurately coded into an algorithm. AI-generated art, music and writing sucks because it can’t give us what we’re looking for - validation and alleviation of our illogical, emotional, human experience.
We aren’t the rational beings we think we are. We’re driven first by emotions, then by context, which we post-rationalise later on. It’s why we need creative art-forms to try and make sense of ourselves. AI can’t replace artists and writers, because those are the people in our society who try to help us understand ourselves, using emotional truth.
AI doesn’t have emotional truth. It can replicate cliches, detect patterns and manipulate words and images, but it cannot create new understanding of the human experience. The only people who think AI-generated literature is ‘just as good’ are the tech bros selling it to you - and they’re not exactly the high-water-mark of cultural critique.
If you know how to write, then you don’t need AI. If you don’t know how to write, then you’ll think AI is good enough when it isn’t. - Doc Burford
If your goal is to create content, AI will help. (If you’ve got a sharp eye and a red pen.) If your goal is to make art or seek understanding - AI can’t help. Art - writing, painting, music - is where we tell stories to understand ourselves. And we need that because we don’t make sense.
The only thing AI can ‘understand’ are the rules and patterns we tell it, explicitly through coding, or implicitly through stuff we’ve already made. We can’t give AI rules to understand humans, because we don’t understand humans. Otherwise, we wouldn’t need psychologists, poets or motivational speakers.
AI can’t touch your soul, because it doesn’t have one. But it’s being developed in an environment where the controlling worldview is that your soul is irrelevant.
These companies worship at the altar of a rationality that doesn’t exist.
In this infuriating and terrifying conversation, OpenAI CEO Sam Altman casually discounts the possibility that consciousness would include the presence of emotions. Discussing how OpenAI would spot the emergence of Artificial General Intelligence (AGI), he muses about the facts the computer would know if it were conscious.
This Steven Pinker-style view is predicated on an ideology of logic and rationality that bears little relationship to the complexity of the human condition, and ignores glaring blind spots about its failure to produce a fairer world.
In this way of thinking, all things can be reduced to a set of rules, formulas or logical chains of reason, humans included. It’s the kind of white-centric, progress-at-all-costs, capitalist mentality that’s seen us destroy our natural environment while building a world that tolerates staggering abundance and egregious poverty side-by-side. In Seattle, the burgeoning homeless population sleeps in tents outside tech company HQs, pushed out of their gentrified neighbourhoods by overpaid developers. It’s the meritocracy, you see. Objectivity. Rationality. Hard work.
I enjoyed listening to this conversation between Caroline Criado Perez and Katrine Marçal recently, about housework robots. Surely something as simple and repetitive as housework could easily be replaced by a robot?
Apparently not. After a series of frustrating failures, developers abandoned their efforts and turned instead to teaching the computers to play Go and chess. It turns out care labour is the hardest work to program.
Grappling with pesky, illogical, emotional experiences and responding with empathy is the job of a person, not a computer.
The people selling us the AI narrative have the most to gain.
As think-pieces on the impact of AI on art, literature and jobs abound, the people in power are smiling. When scientists from Microsoft appear on podcasts with shocking stories about rogue AI behaviour - it put a horn on a unicorn’s head! Like magic! - they’re not doing a public service. They’re selling the narrative.
It serves tech companies for us to be dazzled and befuddled by AI. This narrative creates a sense of inevitability and automated magic that keeps them outside the bounds of regulation. It makes it easy for us to believe things are just moving too fast for the government, and that there’s nothing we can do.
Which is a pretty useful narrative for people selling and commercialising those technologies, isn’t it? If you want to develop a new drug, or sell food, the regulatory regime is slow, arduous and rigorous. For technology, it’s practically non-existent.
Never mind the experts with concrete recommendations for ethical regulation. It’s all too magic and amazing to control. (Unless you’re the guy who’s programming it and making all the money, of course. Then, you’ll behave as Sam Altman did and actively lobby to water down landmark AI legislation that threatens your power.)
The truth is much simpler. Big Tech can be regulated. Safety evaluations and ethical standards can, and should, exist.
Plus, there are precedents and guidelines already out there. Implementation of the OECD AI Principles, the EU’s AI Act, or technical industry standards can all serve to hold bad actors accountable, impose restrictions on harmfully prejudiced technology, and slow AI’s social and political stranglehold.
It is not too late.
TL;DR - AI is not harmless entertainment. It is strategically monetised technology that exacerbates inequality, and we are not powerless to resist it.
In summary:
AI is not powered by magical robots. AI is shaped by decisions made by people, who rely on exploited, low-paid offshore labour for its development
The people who make those decisions are in positions of unregulated, unbridled power, which has catastrophic consequences for our most vulnerable
Humans are highly irrational and illogical. Trying to reduce us to an algorithm perpetuates and entrenches harmful ideas, values and biases
People are already being hurt by the use of AI, and it has the potential to get much worse - but it is not inevitable. There are concrete regulatory steps we can and should take to slow and change the impact of AI
The people selling us the AI narrative are the ones with the most to gain. They should not be our source of wisdom. It’s time to listen to the artists, philosophers, policy advisors and social scientists. We have them for a reason.
Well, that sums up this week’s Friday Flurry! It was a biggie! I really appreciate your support. I love writing and sharing with you, and this is a fully reader-supported effort, so if you can afford to upgrade to a paid subscription, please do.
Posts like this take a long time to research, process and write, because I’m not an AI robot. I’m a person, trying to make sense of the world. Please support this antiquated profession with the equivalent of 1.3 almond milk flat whites each month.