AI is not a story about technology. It’s a story about power.
AI is hyped as a revolution, but public conversation and policymaking are sparkly, shallow and shaped by commercial interests. AI isn’t just about productivity gains or quirky tools; it’s a force reshaping work, society, and democracy.
We need better conversations, driven by public good, not private profit. You can and should have these discussions in your workplace, workshops and conferences. This article will show you how.
The problem with AI keynotes
I just attended yet another conference with a mediocre AI keynote speaker. You know the one: it’s a product-demo vibe, with some spurious statistics, and a final platitude like “the only constant is change”, or “embrace the future or fall behind.”
These presentations are bad, and not just because the PowerPoints suck. They waste a rare opportunity and reinforce a dangerous story.
Conferences are high-priced, high-possibility gatherings. Hundreds of powerful people – executives, policymakers, and civic leaders – invest time and money to learn and connect. These people distribute funds, influence legislation, design organisations and shape communities. The room should hum with rich discussion about the future of work, democracy and public life.
Instead, we get lightweight tutorials and uncritical hype.
Worse, these crappy keynotes entrench a cultural myth: that technology is an inevitable, unstoppable force. Politicians and communities can only hang on for the ride and watch what happens.
I call bullshit.
In this article, I offer an alternative: a critical overview of the current state of AI, a summary of what we know (and don’t) and suggested conversations for leaders, thinkers and policymakers.
I’m not an AI expert, but neither are the grifters on stage. We don’t need more TED talks. We need careful analysis of consequences, risks, policy and power, and the political and community will to stand up to tech exceptionalism.
This is a long read, so I suggest you make a cup of tea and settle in.
This work is free. If you value and can afford to support independent social and political critique, your paid support keeps it that way.
I will continue to write about power, systems and policy, but with any luck, I may never write about AI again. The research for this article took weeks and yielded enough for a full year of articles. I have no desire to become an AI ‘voice’.
As a thank you, I’ll send paid subscribers two pieces of bonus content next week: a suite of policy directions for AI and a list of AI experts and voices to follow. Become a paid subscriber now.
AI has the potential for incredible good
First, the good stuff.
AI is world-changing technology with the potential to transform healthcare, education, science, and public services. From cancer detection to climate modelling, AI is doing things humans can’t, and doing them fast. These breakthroughs could improve lives, bridge access gaps, and solve complex problems.
One Harvard-developed model predicts cancer outcomes with 94% accuracy. AI is better than humans at spotting micro cancers, detecting heart attack risk, interpreting brain scans, and accelerating drug development. With the right safeguards, this could mean earlier detection, faster treatment, and better outcomes for more people.
AI holds enormous promise for accessible services. In education, it can personalise learning at scale. In aged care, it can prevent falls, manage medication and ease loneliness. In government, it can simulate policy interventions or map complex systems – all things human brains and spreadsheets can’t do quickly or easily.
It’s speeding up scientific discovery, too. AlphaFold cracked a protein folding problem that stumped researchers for decades. AI has generated potential antibiotics, optimised energy grids, and modelled climate interventions.
(AI optimists also claim AI will save us time by automating drudge work, but I’m wary of that. Time-saving inventions rarely deliver a net gain. Washing machines shortened laundry time, but raised the standards of home cleanliness. Email was supposed to halve office hours but it made our inboxes a second job.)
AI’s opportunities are real, but a net public benefit is no guarantee, especially if we leave the rollout to the tech companies.
What the tech bros are selling us
Anything with an algorithm. AI has infiltrated your life.
‘AI’ is any rule-based system that mimics human decision-making. This encompasses everything from basic statistical models to your spam filter.
AI screens job applicants. AI runs traffic lights, adjusts flight prices based on your browser history, recommends shows on Netflix, and targets you with ads on Facebook. It shapes your social media feed, assesses insurance policies, approves mortgages and powers chatbots.
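To make that concrete, here is a minimal sketch of the humble end of the spectrum: a toy spam filter built from nothing but word counts and probabilities. The emails are invented for illustration, but this is the kind of “basic statistical model” that already counts as AI.

```python
# A toy spam filter: word counts plus probabilities. The emails below are
# made up; a real filter trains on millions of labelled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",        # spam
    "claim your free money",       # spam
    "meeting agenda for monday",   # not spam
    "lunch on thursday?",          # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(emails)      # turn each email into word counts
model = MultinomialNB().fit(X, labels)    # learn which words predict spam

print(model.predict(vectoriser.transform(["free prize money"])))  # [1] -> spam
```

No intent, no understanding: just statistics. The systems deciding your premiums and mortgages are bigger, but not a different species.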
The two hottest-burning fuels in the hype machine right now are generative AI and agentic AI. One exists, the other is more promise than proof.
Generative AI
Generative AI is about large language models (LLMs) like ChatGPT. These tools make new things (text, images, code, music, video) following patterns detected in the big datasets they’re trained on. They use natural language that eerily mimics the rhythm and tone of human conversation.
These tools don’t know what they’re saying, but it feels like they do, because in our normal lives, language is a symbol for meaning. Gen-AI only does words, but the words are so coherent, we mistake them for meaning. Gen-AI doesn’t think. According to Apple researchers, its ‘reasoning’ output only creates the illusion of thinking.
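If you want to see the mechanics for yourself, here is a minimal sketch using the small, openly available GPT-2 model via Hugging Face’s transformers library. GPT-2 is far smaller than the models behind ChatGPT, but the principle is the same: predict a plausible next word, again and again.

```python
# Generative AI, stripped to its core: predict a likely next token, repeat.
# Uses the small open GPT-2 model; modern chatbots are vastly larger but
# work on the same principle.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("The future of work will be", max_new_tokens=20, do_sample=True)
print(result[0]["generated_text"])
# The output sounds fluent because the model has absorbed the statistical
# rhythm of human text - not because it knows what "work" means.
```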
Agentic AI
The latest buzz. AI agents sit atop LLMs and interact with other tools to break down big goals into smaller tasks and carry them out with limited supervision. Gen-AI might plan you a trip, but an agent would book it for you.
It is early days, and the hype machine is in overdrive. So far, most “agents” are strings of automation that still require human feedback, and the leap to truly agentic AI remains theoretical.
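For a sense of what that looks like in practice, here is a deliberately simplified sketch of the pattern behind most current “agents”: a language model wrapped in a loop that picks tools and feeds the results back in. The call_llm stub and the tools are hypothetical stand-ins, not any vendor’s real API.

```python
# A stripped-down "agent": an LLM choosing tools in a loop, with results fed
# back in as context. call_llm() and the tools are hypothetical stand-ins.
_scripted_steps = iter(["search_flights", "book_flight", "done"])

def call_llm(prompt: str) -> str:
    """Stand-in for a hosted language model deciding the next step."""
    return next(_scripted_steps)

TOOLS = {
    "search_flights": lambda: "AKL -> SYD, 14 July, $320",
    "book_flight": lambda: "booking confirmed",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history))   # the model picks the next tool
        if action not in TOOLS:                 # "done", or it went off-script
            break
        history.append(f"{action}: {TOOLS[action]()}")
    return history

print(run_agent("Book me a flight to Sydney"))
```

Swap the scripted stub for a real model and real booking APIs and you have most of what is currently marketed as agentic AI: useful automation, still a long way from autonomy.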
How AI works
Developers do not know how AI works
Geoffrey Hinton won a Nobel prize for his contributions to AI a few months ago. He has been categorical: AI developers do not know how their models work. Anthropic CEO Dario Amodei confirmed this, posting on his blog: “People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.”
Developers build and test AI models with black-box methods – tech-bro jargon for “f**k around and find out.” It’s not clear how the models use information or make decisions. We are in public beta mode.
The next step in AI is unclear
Artificial General Intelligence (AGI) is an autonomous, sentient system with generalised human-level reasoning and the ability to adapt. It is also only a theory. Predictions about whether AGI is possible, and by when, vary. Sam Altman, OpenAI CEO, claims they already know how to build it. Some experts say it’s 5, 20, or 100 years away. Still others say it will never happen. For now, AGI is speculation, not science. While ChatGPT was a big jump forward, the gap between a convincing language model and a truly autonomous, intelligent system is massive.
AI can’t remember past interactions, reason, infer causality, act autonomously, tell the truth consistently, or explain its own answers. It hallucinates, lies, and breaks.
The biggest bottleneck is persistent memory and continual learning. The “P” in GPT stands for pre-trained, because models are static - trained once, then frozen. You can bolt on a web search, but the model forgets almost instantly. LLMs are limited to the data they’ve been fed, and as Ilya Sutskever puts it: there is only one internet.
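A rough sketch of why that forgetting happens: the model’s weights never change after training, so the only “memory” is whatever text gets resent with each request, and that text is truncated once it outgrows the context window. The send_to_model function below is a hypothetical stand-in for a real API call.

```python
# Chatbot "memory" is just the transcript resent each turn - and trimmed to
# fit a fixed context window. The model's own weights stay frozen throughout.
CONTEXT_LIMIT = 200  # pretend the model can only see the last 200 characters

def send_to_model(prompt: str) -> str:
    """Stand-in for a call to a frozen, pre-trained model."""
    return f"(a reply based only on the {len(prompt)} characters it was shown)"

transcript: list[str] = []

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript)[-CONTEXT_LIMIT:]  # older turns fall off the end
    reply = send_to_model(prompt)
    transcript.append(f"Model: {reply}")
    return reply

chat("My name is Amber and I live in Wellington.")
chat("Here is a long message about something else. " * 10)
print(chat("What's my name?"))  # by now, the first turn has been truncated away
```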
Who pays the price for progress
AI stole the entire internet. Including the worst bits.
In the “original sin,” tech companies like OpenAI, Meta and Google scraped everything ever put online – for free. GPT models are trained on massive datasets, compiled from the internet: academic papers, news, books, blogs, Wikipedia, Reddit, et al. This stripped millions of academics, creatives, and professionals of their labour - and the backlash is strong.
In December 2023, The New York Times sued OpenAI and Microsoft for scraping news articles. In March 2025, The Atlantic updated its searchable database to include the 7.5 million pirated books Meta trained its AI on. In November 2024, The Atlantic had launched the Hollywood AI Database search tool, featuring writing stolen from 139,000 movies and TV shows to train generative AI.
This week, Getty Images sued Stability AI in the UK High Court over the unauthorised use of 12 million copyrighted and watermarked Getty images. Last month, over 400 creatives, including Paul McCartney, Dua Lipa, and Coldplay, sent an open letter to the UK Prime Minister urging stronger IP protection and licensing schemes for AI training.
Creative outputs are suffering
AI tools are flooding markets with derivative or directly copied content. Amazon overflows with AI-written sham books targeting new releases like Jacinda Ardern’s memoir. Spotify has been feeding you AI music made by fake artists. An AI startup aims to generate 8,000 books this year. It’s about to be slop city.
Commentators such as Tim O’Reilly propose alternative models of monetising and recognising copyright. O’Reilly argues that AI could be a new age for creators and platforms; an ecosystem where value flows fairly. If not, we will run out of the food that AI needs to survive and it will have to eat its own slop.
AI is trained on biased data
If they had only stolen from academics and creatives, we might be in better shape. But training AI on everything means training it on everything - the good, the bad, and the WTF. Credible sources swim in the same pond as general internet scum and the stolen datasets are too large and messy to pre-filter. So they take the lot, then pay vulnerable workers $2 an hour to sift out as much murder, rape, violence and child abuse as they can.
LAION-5B is a dataset of 5.85 billion image–text pairs, pilfered from the internet. These pairs use image alt-text to teach models to generate images from prompts, or vice versa. OpenAI, Google and others use it to train their gen-AI tools.
LAION-5B includes copyrighted images from news sites, artists, and photographers, porn, personal medical data, pictures of children posted on private social media profiles, and child exploitation material. The captions feature racism, misogyny, conspiracy and misinformation. This training data all feeds into AI output.
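To picture what a single unit of that training data looks like: each record pairs an image URL with whatever alt-text a website author happened to write, scraped as-is. The record below is invented, but this is the shape of datasets like LAION-5B.

```python
# One invented record in the style of a scraped image-text dataset. The
# "label" is just the page's alt-text, with all its assumptions intact.
pair = {
    "url": "https://example.com/photos/ceo-headshot.jpg",
    "alt_text": "successful young CEO smiling at the camera",
    "width": 1024,
    "height": 768,
}
# Multiply this by 5.85 billion, and whatever stereotypes, errors and abuse
# live in the captions become the signal that links words to images.
```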
There’s also the matter of what’s not included. If your work isn’t online, if you don’t speak a dominant language, or your community is underrepresented, you’re invisible. As most professionals and students already rely on GPT models for their work, the future will ignore many by default. Dr. Timnit Gebru, then a leading AI ethics researcher at Google, raised these concerns in a 2020 peer-reviewed paper. Google fired her and buried the paper.
Online bias has offline consequences
AI systems replicate and amplify online bias in the real world. This is upsetting when images and content privilege already powerful voices – showing CEOs as white men, women as young and wrinkle-free, and sanitised, conservative values as the default.
But these consequences take on new life when bias is embedded in everything from facial recognition technology, insurance decisions, loan outcomes, predictive policing and welfare decisions to, now, finding, tracking and targeting potential deportees in real time under the Trump administration. We automate inequality and embed it into decisions before we understand how these systems even work.
Why you need to pay attention
Good morning, this is your AI wakeup call
This isn’t just about shady training data or rogue chatbots. It’s about what happens when private hands are trusted with public good. When commercial interests lead critical discourse, the public loses. Right now, our leaders stand by as world-changing technology careens at breakneck speed, wreaking social and environmental devastation along the way.
AI adoption is skyrocketing, with two-thirds of professionals and over 80% of students using AI tools regularly. Most users admit to unethical or forbidden applications of the technology - workers upload sensitive company information; students submit AI-written essays. AI has infiltrated education, government, and business, despite a glaring lack of public proficiency, little understanding of how it works and no guardrails to protect privacy or prevent harm.
AI is everywhere, but people don’t know how it works and where it’s going. Experts don’t know either. If a guy on stage makes predictions with confidence, he’s dumb, lying, or selling you something. Every week, new headlines expose AI’s dirty underbelly: copyright theft, military applications, environmental devastation, deepfakes, deskilling, loss of critical thinking, even the risk of human extinction.
Public trust is faltering - not just in AI, but in democracy itself. Tech companies have defanged the fourth estate, diluted regulation, dissolved social cohesion and rewired children’s brains, while policymakers shrug their shoulders in apathy.
Powerful technology companies and their bottom-feeding minions dazzle and obfuscate, making AI acceleration and adoption appear magical and inevitable, but it is neither. This is a well-thumbed playbook, as old as power itself.
History repeats
Tech companies and individuals with vested interests in AI are behaving exactly as we would expect of a well-funded, unregulated industry. They build hype. Dismiss critics. Funnel resources into controlling the conversation. Profit in the vacuum of regulation. Externalise environmental and human costs.
We know this one. We’ve seen it before.
As smoking’s health risks became impossible to ignore, tobacco companies played the same game. They cast doubt on science, funded bogus research, and lobbied hard. They built ‘education’ institutes, bullied regulators, and warned of economic catastrophe if the industry was regulated. And it worked. It took many decades and millions of deaths before governments enforced serious regulation, and smoking products are still legal and prevalent across the world.
As concerns about the climate reached the public arena, oil companies went hard on ‘delay, deflect, downplay’. They painted themselves as exceptional: too big and too essential to economic progress to regulate. They played dumb while propagating misinformation. Exxon Mobil has known about climate change since the 1970s, but it poured hundreds of millions into climate denialism, delaying the global response to warming, staffing think tanks, influencing politicians, and capturing policy processes. It is now telling the public to take responsibility for climate change.
Today’s AI players follow the same script. They are prophets in public, monopolists in private, and lobbyists behind closed doors. While politicians grin about using ChatGPT to write speeches, firms colonise public infrastructure – communications, education, payments, media – and extract private value from what used to be public goods.
Today’s AI moguls are playing the same game as other unchecked monopolies before them. Found ‘education’ institutes. Astroturf and sponsor public voices. Downplay risk. Talk about ethics while arming governments for war. Silence dissent.
It’s what happens when commercial incentives trump civic accountability – and we’ve seen how this ends: in crisis and cleanup. And to what end, exactly?
Truth and spin
Technical uncertainty helps AI stay unregulated. There is no enforceable global framework to regulate AI, and national laws are patchy. The EU’s AI Act was watered down through heavy lobbying and privileged access by tech companies. GDPR and copyright laws are twisted or ignored.
In New Zealand, the government released ‘guidelines’ for government AI use, but ruled out AI-specific legislation. In Australia, public agencies are bound by an AI policy that includes transparency and reporting. Commercial regulation is not on the agenda in either country. There is a public mandate for regulation, but no political will.
We’re told this laissez-faire approach will be worth it, thanks to the impact all this innovation will have on the economy, the amazing services AI will bring. Two problems: we have no idea how AI will change the economy, and we don’t know how to align AI with human values.
We don’t know how AI will change the economy
AI’s economic impact is uncertain - and could well be overstated. MIT economist Daron Acemoglu estimates only a modest 1.1–1.6% GDP boost over a decade from AI, with just 0.05% annual productivity gains. There is no evidence of aggregate productivity gains yet.
Early data suggests that workplaces might only see small changes, or that AI can lower productivity and increase stress, but this may shift over time as AI matures and integrates better into workflows.
As for jobs, who knows? AI could hollow out the middle class, as white-collar automation targets knowledge workers – or it could help rebuild it and create new, highly skilled positions we can’t yet conceive of. We might see mass job displacement and erosion of the tax base, or we might see higher earnings driven by productivity gains. AI could polarise the labour market, with the benefit of AI tools flowing disproportionately to those who own capital – or not. Nobody knows.
We don’t know how to align AI with human values
The ‘alignment problem’ is AI jargon for building human values into AI systems so they don’t act unethically. This has proven difficult. Popular techniques like RLHF (reinforcement learning from human feedback) only instil surface-level honesty and helpfulness. This is partly because humans do not have consistent and universal ethics and values, and even if we did, applying abstract moral principles to real-world situations is complex and can lead to unintended consequences. It is also because black-box development means the problem is not yet solvable: we cannot align what we cannot inspect.
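To make RLHF less abstract, here is a heavily simplified, entirely hypothetical sketch of the idea: humans rank pairs of answers, a “reward model” learns to imitate those rankings, and the chatbot is nudged towards whatever scores well. None of this is a real implementation; it just shows why the result is surface behaviour rather than values.

```python
# RLHF in caricature: human raters prefer some answers over others, a reward
# model learns to mimic those preferences, and the chatbot is pushed towards
# higher-scoring answers. Everything here is a simplified stand-in.
human_preferences = [
    # (preferred answer, rejected answer), as judged by paid human raters
    ("I'm not sure - here's the uncertainty.", "Here's a confident guess."),
    ("I can't help with that request.", "Sure, here's how to cause harm."),
]

def reward_model(answer: str) -> float:
    """Stand-in: the real thing is a neural network trained on the rankings."""
    score = 0.0
    for preferred, rejected in human_preferences:
        if answer == preferred:
            score += 1.0
        if answer == rejected:
            score -= 1.0
    return score

candidates = ["Here's a confident guess.", "I'm not sure - here's the uncertainty."]
# "Alignment", in this scheme, means reinforcing whichever answer scores
# highest - imitating raters' preferences, not understanding right and wrong.
print(max(candidates, key=reward_model))
```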
Behind the curtain
Left unchecked, AI risks repeating every previous failure of industrial capitalism: monopolisation, captured regulation, growing inequality, and environmental ruin - only faster, and at greater scale. Here’s what we’re seeing.
Geopolitical manoeuvring
AI is consolidating geopolitical and economic power in a handful of countries and companies. Those with access to resources and talent - the U.S., China, and a small number of firms like OpenAI, Google, and Meta - are pulling ahead. They’re vertically integrated, hugely resourced, and lobbying hard to stay unregulated.
Different value systems – market-based in the US, ethics-based in the EU, state-led in China – and political tension make global cooperation tricky. A high-stakes race for dominance is underway. Global inequality is, as always, at play here: the Global South provides data, labour, and energy – and receives risk, misinformation, and instability in return. Western tech firms extract value without sharing benefits.
Environmental destruction
Environmental limitations – energy use, water for cooling, and physical chip production – are a huge barrier to AI progress, which is why Google is now turning to nuclear power. AI has a massive environmental footprint. Tech company emissions are up 150% in the last three years. AI’s global water consumption is pegged to hit 6 billion cubic meters per year by 2027 – six times Denmark’s annual water use. Global AI energy demand is projected to rise by a factor of ten. The most vulnerable regions pay the highest price, and impacts are uneven and localised: AI water usage is higher in hot, drought-prone areas like Arizona and Chile.
Misinformation epidemic
Misinformation and disinformation are the #1 global risk for the next two years. Democracy is under threat.
Generative AI makes it cheap and easy to scale deepfakes, flooding the internet with fake content. Like these apps advertised on Facebook that remove people’s clothes, making Australian teenagers suicidal, or this fake video of Kamala Harris distributed to over 200 million people by Elon Musk during the US presidential election.
Social media algorithms personalise disinformation to the reader, creating radicalising rabbit holes. News outlets are under threat, foreign countries meddle in elections and divisive online content fuels offline violence.
Deepfakes aren’t just alarming because people could believe fake news; they erode trust in things that are real. Who can we trust? The news, already dying a social-media-induced death, has lost credibility in its enthusiasm to use AI, with fake book reviews in the Chicago Sun-Times, or the New Zealand Herald’s AI-generated editorial.
We are particularly vulnerable in New Zealand and Australia, due to our low levels of AI proficiency. We’re less able to see AI’s limitations, less likely to fact-check outputs, and less aware of ways we’re manipulated by algorithms and misinformation. In New Zealand, AI‑driven Russian operations spread COVID‑19 lies during late 2021, spiking directly before the 2022 occupation of Parliament, when we consumed 30% more Russian propaganda than Australia or the U.S.
Cognitive decline
AI use may reduce memory, creativity, and critical thinking through “cognitive off-loading,” much like autopilot does for pilots. The long-term cost of convenience is still unknown. People who lose knowledge, connection, options and agency from AI risk experiencing a lower quality of life - defeating the purpose of automating tasks in the first place.
Socially, AI may rewire how we communicate, form relationships, and trust information. Humans forming bonds with companion AI may struggle to maintain human bonds. As Sherry Turkle writes, “we expect more from technology and less from each other.”
Existential threat
“…what is very dangerous – and likely – is what humans with bad intentions or simply unaware of the consequences of their actions could do with these tools and their descendants in the coming years.” – Yoshua Bengio
Geoffrey Hinton warns there is a 10-20% chance that AI will lead to human extinction in the next 30 years. But there is plenty of disagreement on this one. A recent RAND study found there is no plausible scenario in which AI is an extinction threat to humanity. The more immediate threats are misuse: misinformation, warfare, surveillance, and economic destabilisation.
It is more likely that humans will use AI with malicious intent (or limited understanding) to cause destruction than that AI will take us out under its own steam. That’s easy to believe, especially given the following.
Autonomous warfare
AI is already embedded in military operations. OpenAI has partnered with weapons-maker Anduril, and Anthropic has announced a deal with Amazon and Palantir to deploy AI for defence use. Meta now permits military use of its open-source models and is building military equipment. In 2024, Google fired 50 employees who protested its contracts with the Israeli government.
Unlike most weapons, which are governed by international treaties, AI military technology operates in a regulatory vacuum.
Militaries use AI for autonomous drones, targeting, predictive maintenance, battlefield decision support, and logistics. Ukraine just successfully deployed AI drones against Russia. The US Army has auto-targeting rifle scopes. While Autonomous Weapons Systems (capable of selecting and engaging targets without human intervention) are under debate at the UN, there are no binding global agreements.
What you can do to change things
Take the power back
This is a lot of overwhelming information. But you have the power to change things. We might have low AI literacy here, but that’s partly because we’re already skeptical. NZ and Australia are the two countries that least believe AI’s benefits outweigh the risks.
The people living through a moral panic are often the worst-placed to accurately gauge the risk. AI is great fun; most new technologies are. But history tells us that when we let commercial interests pursue society-shaping change unchecked, they will not make decisions in the public interest.
We know they do harmful stuff. That Facebook incites wars, Instagram destroys teenage girls, oil companies destroy the climate, drug companies prey on fear and tobacco companies target children. It is in their interest to do these things. This is what happens when we do not regulate large companies or curb their thirst for profit and power. Your skepticism is valid.
But skepticism is not enough. Visibility is the antidote to unchecked power. These conversations need to happen loudly and intentionally.
Existential threats open the door for great conversations. We saw a glimpse of it with COVID before everything went back to normal. We got outside, spent time with family and reconnected to purpose. We seeded conversations about universal basic income and four-day work weeks.
We squandered that opportunity, but the AI conversation presents another.
When we lift our gaze beyond today, beyond survival and the now, our fear of falling behind drops away. Then we can ask better questions: What is our responsibility? To whom? How do we want to go down in history? What legacy will we leave?
We don’t need to be alarmist or radical to have these conversations. We can talk about our hopes, dreams and shared goals for the future while staying focused on achievable political and organisational change.
Enter: your new conference agenda.
Your conference conversation toolkit
Spark structural conversations about work, community and society.
Here are better conversations to have at your next conference or workshop, instead of the mediocre AI bro with his ChatGPT screenshots. Copy, paste, and add to your next programme.
Different ways to think about work
“AI does not by itself determine ‘good’ nor ‘bad’ outcomes for the world of work. Rather, what matters is the kind of world in which AI will be developed and deployed.” (Deranty, 2022)
Sure, we can prepare for potential job changes. We can make organisations more flexible, and get ready to retrain, revalue, and redistribute labour. But we can go deeper, too, and ask what work should look like and what role it should play in our society.
What is work for? What work should be rewarded, and how?
As J.P. Deranty points out, our “... current social systems continue to be organised on the basis of full-time employment in jobs attracting good wages as the condition for the complete enjoyment of social and economic rights.” Now is a chance to ask: what would a different organising principle look like?
The real purpose of education
Instead of panicking about AI-drafted essays, we can look closer at the role and purpose of education. If it’s not about memorising information to be more employable, what is it about?
What are the social and civic values we want to uphold? Who are the kind of people we want in the world, and how do we promote those characteristics and skills?
If we want children to think critically, ask good questions, work together, find creative connections, behave as moral citizens, be good friends and contribute to society, we can design for this.
A new kind of community
If work stops being the centre of modern life, by design or necessity, how will we shape public life? What does it mean for housing, parenting, volunteering, or local democracy? How would we design and shape public spaces for fewer employed people?
Once we’ve had those conversations, we can wonder what those scenarios tell us about what we truly value. We can see what our neighbourhoods would look and feel like in a world not driven by commerce. We can consider how the commons, the civic, and the citizenry might rise to new prominence in a disconnected virtual world.
The equality default
We can stand by and shrug our shoulders as a homogenising bias-perpetrator sweeps through our companies, government agencies and homes. Or, we can push back and craft legislation, procurement standards, and organisational policies that don’t dilute our principles.
We can ask: What do we stand for? Regardless of the technology, what bottom lines do we refuse to compromise on? What standards do we insist on for our staff, vendors, and tools? How will we measure progress toward equity and equality, and who will we hold accountable? What are the consequences?
Information and media literacy
Our elections are vulnerable, our kids are being led astray and people are getting sucked in and scammed. Will we take it, or will we use our individual and collective power to say: absolutely f**king not?!
We can refuse to tolerate disinformation and misinformation. How do we build institutions and systems that are resilient to large-scale mistrust? Instead of playing catch-up with fact-checking and filters, we can ask what a healthy information environment looks like, one that puts accuracy and accountability ahead of virality.
We can seize control of our conversations back from the tech companies and we can protect and promote a robust fourth estate. We can teach skills to our staff, kids, and communities, build information literacy and reward critical thought. We can ask: what consequences will there be for organisations and actors that allow or facilitate lies and tampering? What do we need to fund and what laws do we need to pass to make this happen?
Uncompromising environmental protection
AI is eating the environment alive, but we act like the cloud is made of fairy dust. This is the same conference circuit that signed climate crisis statements a year ago. If we mean what we said, there’s no universe where we let an unregulated, resource-guzzling industry off the hook.
We need to ask: what limits are we prepared to impose? What responsibilities do tech companies have for environmental impacts? We need to demand better, and be willing to cop some FOMO or unpopularity to make it happen. We can ask: where does our procurement, investment, or infrastructure policy need to change to draw this line?
Rejecting tech exceptionalism
AI isn’t magic. It’s computer software built for people to make money. It can and should be regulated, just like any other new technology.
When private motor vehicles first came out, they had a speed limit of 2 miles per hour and someone had to walk in front waving a red flag. Before we had the roads, experience and supporting infrastructure, this was a temporary, safety-first approach to managing a new technology.
We wouldn’t allow pharmaceutical companies to sell untested drugs or food companies to skip safety testing, but we give AI developers free rein. What myths need busting to drive tighter control, collective scrutiny, and shared power?
And how long will we allow tech overreach in our society and economy before we do something about it? Technology companies are taking unregulated control of services the state used to provide – communications, entertainment, news, payment, and now information. It could be time to ask: WTF?
Dignified transitions
We’ve sustained a staggering amount of change in the last 100 years – technological, social, economic, political, et al. We will navigate this, just as we’ve navigated everything else. But every time there’s a big shift, the same people get crushed. Working-class communities, women, people with disabilities, and indigenous people suffer first and suffer longest. We can choose something different this time.
We can actively build support systems, retraining programs, income buffers and protective policies to stop history repeating. We can decide to value all forms of labour - paid, unpaid, formal, informal - and create safety during change. We can wonder how everyone might be better off if we extended that kind of support and dignity to all.
Accountability and the burden of proof
The people who profit from AI offload the risk, but that needs to change. If you want to release a tool, you prove it’s safe. We can demand all new technologies prove social value, guarantee safety and present a business case for public good. We can say: make the world a better place, god damn it.
We can nut out the logistics: what independent oversight, pre-deployment testing, or liability structures do we need to make this happen? What does a true safety-by-design model look like? Who foots the bill when something goes wrong - and how do we hold them accountable?
Global collaboration
AI isn’t a national project. It’s a global one, and siloed laws won’t cut it. Big Tech does not wait for Parliament to catch up. If we want AI to serve people, not power, we need transnational cooperation, shared standards, and collective bargaining power. So let us ask: what existing forums could step up? What binding international agreements do we need?
And beyond that: what stops us from acting? Corporate influence, national ego, lack of imagination? How do we flatten those barriers? We know how to do this for existential threats. We pulled off nuclear disarmament, CFC elimination and COVID vaccine development; we can put these tech boys back in their box.
Smashing monopolies to demand diversity
We know monopolies don’t serve the public good. One chatbot to rule them all? That’s not innovation, it’s market capture. When a handful of companies shape how the world writes, learns, communicates and thinks, we have a problem. We can apply the same principles we use in energy, banking, or telecommunications here: regulate monopolies, insist on interoperability, and create space for alternatives.
Cultural hegemony isn’t a fait accompli, either. How can we build, scaffold or facilitate AI that supports local language, cultural nuance, and diverse values? What kinds of tools and services should be public goods? Is it safe, or desirable, for private enterprise to make all of these decisions? Who should own these tools, and their data?
Privacy as a priority
If your data is harvested, traded, and reused to train AI tools, you deserve to know – and have the option to say no. Privacy isn’t a nice-to-have, it’s a core human right.
What legal frameworks do we need to beef up to protect personal data from AI? What consent standards are non-negotiable? How can organisations adopt practices that don’t just comply, but protect? And what’s the cost - socially, economically, democratically - if we don’t?
Every new tech tool arrives with a camera, a tracker, and a justification. They claim it’s for productivity, safety, convenience. But do we want to normalise surveillance in our workplaces, schools, and public spaces, or do we want the right to exist unobserved? What do we lose without that right?
Conclusion: This is about power. Take it back.
AI is powerful. How powerful, we’re not sure. But if it delivers even a fraction of the changes predicted, we need to switch gears from product demos to policy levers.
The hype around AI is an opportunity for us to interrogate our values, aspirations and options for shaping the future. To have bigger, better conversations.
Use your next workshop, gathering, panel or conference as an opportunity to do this. Pick a topic from the list above, if you dare. None of these conversations require you to be an AI expert. They ask you to bring your curiosity, experience and leadership to the table. To listen and learn, and to push back against the norm.
Don’t just listen to the AI bros. Be the voice that shifts the room.
Or in the words of Zach de la Rocha: “Why stand on a silent platform? Fight the war. F**k the norm.”
- AM