THE PEOPLE DO NOT YEARN FOR AUTOMATION
Today on Decoder, I want to lay out an idea that’s been banging around my head for weeks now as we’ve been reporting on AI and having conversations here on this show. I’ve been calling it software brain, and it’s a particular way of seeing the world that fits everything into algorithms, databases, and loops — software.

Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why Software Is Eating the World” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.

In fact, the polling on this is so strong, I think it’s fair to say that a lot of people hate AI, and that Gen Z in particular seems to hate AI more and more as they encounter it. There’s that NBC News poll showing AI with worse favorability than ICE and only a little bit above the war in Iran and the Democrats generally. That’s with nearly two-thirds of respondents saying they used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good, while more than 80 percent of people were either very concerned or somewhat concerned about the technology. Only 35 percent of people were excited about it.

Poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18 percent of Gen Z was hopeful about AI, down from an already-bad 27 percent last year. At the same time, anger is growing: 31 percent of those Gen Z respondents said they feel angry about AI, up from 22 percent last year.

Now, I obviously talk to a lot of tech executives and policy people here on Decoder, and I will tell you, they all know AI isn’t popular, and they can all see how that’s playing out in real life.
Here’s Microsoft CEO Satya Nadella talking about how the tech industry needs to make the case for the investments it’s making in AI:

Satya Nadella: At the end of the day, I think this industry, to which I belong, needs to earn the social permission to consume energy because we’re doing good in the world.

I think it’s safe to say that the tech industry and AI have not earned any of that social permission yet. Politicians from both sides of the aisle are opposing data center buildouts. Politicians in local communities that support data centers are getting voted out of office. And in the most depressing reminder of how much political violence has become a part of everyday American life, politicians who’ve supported data centers have had their houses shot at. OpenAI CEO Sam Altman has had Molotov cocktails thrown at his house.

It’s sad that I’m going to have to say this again on the show, and it’s sad that we’re going to have commenters who disagree, but this violence is unacceptable. If you want to meaningfully oppose AI in a way that lasts, you should speak loudly with your dollars in the market and your attention on the internet, and you should speak loudly with your votes. You should participate in the democratic regulatory and political process. Anything else will get dismissed and perpetuate the cycle. That dismissal is already happening.

I also think it’s incredibly important for our politicians and tech executives to make sure our political process makes people feel empowered, not helpless, which is a specific kind of nihilism they have all greatly contributed to. The violence is a result of that helplessness and nihilism, and the most powerful people in our society ought to reckon with that, especially as they run around saying AI will wipe out all the jobs. I’m not even exaggerating about that — here’s Anthropic CEO Dario Amodei saying he thinks AI will wipe out all the jobs:

Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed — it’s hard to predict the future — but we may indeed have a serious employment crisis on our hands as the pipeline for this early-stage, white-collar work starts to contract and dry up.
What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:

Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.

It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds. You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.

So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with the structured language of software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them. Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on.

Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data. But that doesn’t always work. Here’s an example: Elon Musk and DOGE showed up in the government, and the first thing they did was take control of a bunch of databases.
And they ran into the undeniable fact that the databases aren’t reality, and DOGE ended in hilarious failure. It turns out software brain has a limit — the government isn’t software. People aren’t computers, and they don’t live in automatable loops that can be neatly captured in databases.

Anyone who’s actually ever run a database knows this. At some point, the database stops matching reality. At that point, we usually end up tweaking the database, not the world. But the AI industry has fully lost sight of this, because AI thrives on data. It’s just software, after all. And so the ask is for more and more of us to conform our lives to the database, not the other way around.

Let me offer you another example that I think about all the time, especially as AI finds real fit as a business tool. It’s the idea that AI is coming for lawyers and the legal system. The AI industry loves to talk about not needing lawyers anymore, which is already getting all kinds of people into all kinds of trouble. But I get it. I’ve spent a lot of time with lawyers. I used to be a lawyer. My wife is still a lawyer. Some of my best friends are lawyers.

I also spend all of my time at work talking to tech people. And so over time, I’ve learned that the overlap between software brain and lawyer brain is very, very deep. Alluringly deep. If the heart of software brain is the idea that thinking in the structured language of code can make things happen in the real world, well, the heart of lawyer brain is that thinking in the structured legal language of statutes and citations can also make things happen. Hell, it can give you power over society.

There are other commonalities. Both software development and the law depend heavily on precedent. We have a body of case law in this country, and we use it over and over again to help us resolve disputes, just like software engineers have libraries of code that they turn to repeatedly to build the foundations of their products.
The similarities run deep: at the end of the day, both lawyers and engineers do their best to use formal, structured language to guide the behavior of complicated systems in predictable and potentially profitable ways. (I am far from the first person with this idea, by the way. Larry Lessig wrote a book called Code and Other Laws of Cyberspace in 1999. It’s just as relevant today as it was a quarter century ago.)

This intoxicating similarity between law and code trips people up all the time. People are constantly trying to issue commands to society at large like it’s a computer that will obey instructions. There are examples of this big and small — my favorites are those Facebook forwards insisting Mark Zuckerberg does not have the right to publish people’s photos. Honestly, I look at these, and I think it would be great if the law were actually code. Maybe things would be more predictable. Maybe we’d feel more in control.

But law isn’t actually code, and society and courts aren’t computers. I have to remind our fairly technical audience on Decoder and at The Verge all the time that the law is not deterministic. You simply cannot take the facts of a case and the law as written and predict the outcome of that case with any real certainty, even though the formality of the legal system makes people think it works like a computer — that it’s predictable.

But at the end of the day, it’s actually ambiguity that’s at the very heart of our legal system. It’s ambiguity that makes lawyers lawyers. Honestly, it’s ambiguity that makes people hate lawyers, because it’s always possible to argue the other side, and it’s always possible to find the gray area in the law. That’s why prosecutors end up working as defense attorneys and why our regulators tend to end up working for big corporations.

You can see the obvious collision between software brain and lawyer brain here. This thing that looks like a computer isn’t actually anything at all like a computer. A lot of people even argue that the law should be more like a computer — that the system should be verifiable and consistent, and that merely issuing the right commands at the right times should lead to objectively correct outcomes. Bridget McCormack, who used to be the chief justice of the Michigan Supreme Court, was on Decoder a few months ago pitching a fully automated AI arbitration system.