We are witnessing a technological revolution unfolding in real time. The rise of smart machines opens up some scary possibilities: for our economy, for our democracy, even, in the most catastrophic scenarios, for our continued existence as a species. But, if we get our act together, the age of Artificial Intelligence could be one in which we rebuild the middle class, says MIT's David Autor, one of the top labor economists in the world.
Autor sees a potential future where we harness the power of AI to create a whole bunch of good jobs for people who have been left behind over the last few decades. Even in this cheery scenario, AI will profoundly disrupt the job market. But, Autor says, with concerted efforts and smart policies, we can bring the dream of a more prosperous and more equal economy into reality.
For the last four decades, technology has been mostly a force for greater inequality. Since the 1990s, Autor and his colleagues have uncovered a mountain of evidence about this. Autor calls it "job polarization." Basically, computers have been great for the jobs of high-income, college-educated workers, but not so great for the jobs of everyone else. Autor's research suggests that computers killed a range of jobs in manufacturing and offices that once provided solid opportunities to Americans without a college degree.
But new empirical evidence suggests that the age of AI could be different. It poses the possibility that, instead of highly skilled and college-educated workers reaping most of the benefits from the assistance of smart machines, it could be the less skilled and non-college-educated who get the biggest boost. Autor is hopeful that, with the right policies to help Americans succeed in this new AI economy, we could make a wider array of workers much better at a whole range of jobs, lowering barriers to entry and creating new opportunities.
"Let's use AI to reinstate the middle class," Autor declares. It could be a rallying cry for this new technological era.
When Technology Was A Force For Equality
Technological change was not always a force for greater inequality. In fact, Autor says, technology once provided a significant boost for creating and expanding the middle class.
But, it's worth noting, that story began with disruption and misery. And it's possible that, even in the happy economic future that Autor envisions, things will proceed in a similar way.
Prior to the Industrial Revolution, skilled artisans were the masters of production. These workers made a relatively good living making products, like textiles and tools, largely with their own hands.
Then came the age of machines and factories, and many artisans saw their livelihoods obliterated. Quite infamously, in 1811, the so-called "Luddites," a group of disgruntled artisans in Britain, began sabotaging the textile machines they were being forced to compete with.
These machines made the textile industry much more productive; made textile manufacturing much easier to do; and made clothing cheaper for the masses, thereby increasing living standards for society. The Luddites, with their old-school way of making things, couldn't compete, and their prosperity fell.
"The Luddites rose up for a reason," Autor says. "Their work was being devalued."
The machine-enabled factory jobs that replaced artisanal work paid less. But, over time, manufacturing jobs got better, especially as newly electrified factories began producing more complicated gadgets and machines, like automobiles. The technology used in this production was expensive and using it required a specific skill set. Titans of industry learned that it was best to pay a good wage to ensure they recruited and retained a skilled workforce. The skills these workers needed did not require a college education, but they were valuable nonetheless.
In the industrial era, many Americans, even if they didn't necessarily excel at school, could find good jobs doing what Autor calls "middle-skill work." These were jobs that required workers to know how to read, do basic math, and possess other skills, but they didn't need the type of elite skills typically acquired through years of education. Workers doing middle-skill jobs usually followed formal instructions, like "insert this widget here and turn twice" or "take all our receipts, create a document tracking them with this typewriter, and then add them all up."
Middle-skill jobs were commonly found in places like factory floors, or in offices, doing bookkeeping, compiling paper records, calling suppliers and clients on the telephone. And it turned out that they paid relatively well. It's why, Autor says, "The industrial era helped really grow the middle class."
During this era, Autor says, technology was a force for equality. It created a demand for a kind of valuable work that a wide spectrum of society could do. "It gave us this tailwind, a way of organizing work that could make very productive use of people who had only a high school education," Autor says. "Most people didn't need a college degree to be successful in the type of work and provide the type of expertise that was demanded in that time."
But it wasn't just technology, of course, that enabled this economy with more widely shared prosperity to come into being. There were political and institutional factors that helped as well, especially, Autor says, public investments in universal K-12 education, which helped prepare kids to enter growing sectors of the economy.
Computers Reboot The Labor Market
The age of the personal computer proved to be unlike the industrial era that preceded it. Technologies like electricity, plumbing, conveyor belts, manufactured gadgets, gizmos, and machines of all sorts had been a boon for creating good jobs for a wide range of Americans — even those who weren't bookish, studious, or exceptionally gifted. Computers, however, proved to be largely a bust for them. Autor's research finds that it was mainly elite, college-educated workers who reaped most of the benefits.
"The computer era actually devalued mass expertise and created massively amplified demand for elite expertise," Autor says.
The work of lawyers, doctors, Wall Street traders, corporate executives, professionals of all stripes — so-called "knowledge workers" — became much more productive and valuable with computers. They benefited tremendously from being able to send emails, create digital spreadsheets, search the Internet, create new apps, trade stocks and information instantaneously all over the world. Computers profoundly enhanced this elite group's ability to do their jobs, and because their jobs required years of education and hard-to-get knowledge and skills, the influx of other people to do them was relatively slow. That meant these professionals just kept getting paid better and better.
Meanwhile, jobs on the lower end of the pay spectrum, for janitors, fast food cooks, cashiers, dishwashers, security guards, and so on, continued to grow a bunch — but they did not see the same work benefits from new technology. Even worse, many of those "middle-skill" jobs — the ones created in the industrial era that helped build the middle class — were killed off by new technology. Many jobs in manufacturing were taken over by robots. Other coveted jobs in offices — jobs that once enabled non-college-educated workers to get their foot in the door at profitable companies — were automated away by computer software. Many of these workers were forced out of well-paying occupations and into lower-paying ones in the service industry.
Almost a decade ago, Autor wrote an influential paper that argued that a central reason for this "polarization" was what computers could and could not do. Computers — at the time — could only be programmed to do tasks with explicit, step-by-step instructions or formal rules. That, unfortunately, still put many middle-skill jobs on the chopping block — because many of these jobs did exactly what computers and robots could do. They followed formal rules. Whether it was on an assembly line or doing a lot of clerical work in offices, many of these tasks could be broken down into step-by-step procedures that machines could be programmed to do. And, so, many of them got automated away, and the workers who held those jobs were forced into lower-paying occupations.
But while Autor saw computers as a major contributor to rising inequality, he was bullish on the future of job growth. He didn't think most jobs were going to get automated away. That's because he saw the limitations of computers as a kind of big security fence protecting a whole range of jobs at both the upper and lower ends of the labor market.
Computers could not do things like reasoning, decision-making, abstract thinking, problem-solving, or creativity — which meant that most high-paying, college-educated jobs were safe for the time being. Even better, the stuff that computers could do enhanced — or, in econ jargon, "complemented" — the work of highly skilled workers, allowing them to specialize and get better at their bread-and-butter skills.
At the same time, computers couldn't do a lot of the jobs on the low end. They didn't have people skills. They couldn't deliver a latte or a meal with a smile. Despite continual advances, robots couldn't do most manual tasks like cleaning or building various kinds of stuff. Even when robots could be used to build stuff, they were expensive and needed to be programmed with step-by-step instructions in very controlled environments, like a precisely engineered assembly line.
Autor called this gigantic fence protecting jobs "Polanyi's Paradox," a reference to the brilliant Hungarian-British scholar and philosopher Michael Polanyi. Polanyi wrote a book called The Tacit Dimension, which observed that, in Autor's words, "We know more than we can tell." That is, we understand how to do a lot of stuff — like ride a bicycle, use a hammer, clean a toilet, make a sarcastic joke, recognize the face of people we haven't seen in decades — but, if asked, it'd be super hard to write down step-by-step instructions and procedures for how to do these things.
"The reason I invoked Polanyi in that paper was to make the observation that — at that time — to automate something using computers, you needed to understand it explicitly, not just tacitly, because you needed to write a program that a non-sentient, non-creative, non-problem solving machine could execute all the steps, without much human input," Autor says. "Computers couldn't improvise, couldn't figure something out on its own."
The Rise Of AI
When Autor wrote his influential paper a decade ago, machine learning, the foundation for modern AI systems, was only in its infancy. And, back then, there was a raging debate about how much these new AI systems would eventually be able to do, especially since they don't seem to work the way our brains work. These systems essentially crunch a bunch of data and then learn patterns and associations that they can replicate and remix. They don't reason, at least in the way we've traditionally thought reasoning works.
But, even a decade ago, Autor and his colleagues foresaw that machine learning was capable of doing things that traditional computer programs could not do. "It overcomes Polanyi's Paradox," Autor says. In other words, humans don't have to write down procedures for AI to do stuff. You don't have to program them with step-by-step instructions. You just feed them a bunch of data, and they're off to the races.
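The shift Autor is describing — from writing explicit rules to learning from examples — can be sketched in a few lines of code. This toy example (purely illustrative, not from Autor's work) contrasts the two approaches: a hand-written rule a programmer must spell out step by step, versus a simple perceptron that learns the same decision boundary from labeled data alone.

```python
# The "old" way: a human programmer writes the decision logic explicitly.
def classify_by_rule(x, y):
    # Explicit, hand-written rule: a point is "positive" if it lies above
    # the line y = x.
    return 1 if y > x else 0

# The "new" way: no rule is written down. A simple perceptron is shown
# labeled examples and adjusts its weights until it gets them right.
def train_perceptron(examples, epochs=50, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), label in examples:
            pred = 1 if w0 * x + w1 * y + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w0 += lr * err * x          # nudge weights toward the answer
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

# Training data: points labeled by whether they sit above the line y = x.
data = [((x, y), 1 if y > x else 0)
        for x in range(-3, 4) for y in range(-3, 4) if x != y]

w0, w1, b = train_perceptron(data)

def classify_learned(x, y):
    return 1 if w0 * x + w1 * y + b > 0 else 0

# The learned model ends up agreeing with the hand-written rule on the
# training points, despite never being told the rule.
agreement = sum(classify_learned(x, y) == classify_by_rule(x, y)
                for (x, y), _ in data)
print(agreement, "of", len(data))
```

Real AI systems are vastly more complex, but the principle is the one Autor describes: the programmer never articulates the rule; the machine recovers it from data.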
Over the last few years — and especially in the last few months — it's turned out that these pattern-recognizing machines are capable of incredible feats. They're now, for example, doing things like translating images of brain activity into words, in effect, reading our minds. Even some of the best computer and brain scientists are struggling to figure out how they're doing stuff like this.
Now, instead of struggling to program machines to do stuff, we're struggling to figure out how machines are doing the stuff they're doing. "So this is what I now call 'Polanyi's Revenge,'" Autor says. "Now computers tacitly understand all kinds of stuff that they can't explain to us."
So, yes, Autor believes we're in a new technological era. The era of smart machines. Computers can now jump that security fence protecting a whole range of jobs, that bright line separating what computers could do and what humans could do. And we're now watching the unfolding of a new era where there's a lot of uncertainty over what's going to happen.
The Happy Scenario
Let's start with the happy potential future: AI doesn't completely erase our comparative advantages as human beings; rather, it mostly just enhances our ability to do things.
If that's the case, Autor sees a real possibility that this new technological era could unfold similarly to the industrial era. It may begin with disruption and misery for some elite workers, as it did for artisans. But that process could also make our economy more productive, products and services cheaper, our living standards higher, and — if we pursue the right policies — it could make a whole bunch of people without a college education much more capable of doing a whole range of valuable jobs.
Autor's optimistic vision has some early evidence supporting it. Over the last few months, there have been two empirical studies of the economic effects of AI that suggest it makes lower-skilled workers much better at their jobs and has an equalizing effect in the labor market. One study, which we wrote about in the Planet Money newsletter last week, showed that less experienced, lower-skilled customer service representatives saw big gains in their job performance after their company introduced an AI chatbot to help them with their work. More experienced, higher-skilled reps saw little or no benefit.
Another study, which Autor brought to our attention, showed something similar. Two of Autor's students at MIT conducted an experiment in which ChatGPT helped people do various writing tasks. They found that less-skilled writers ended up getting much better at writing, while high-skilled workers saw some but lesser benefit. As a result, inequality between the workers decreased.
If the findings in these studies can be replicated and built upon across the economy, that creates a whole bunch of opportunities.
Autor imagines an economy where, assisted by AI, a wider range of workers could do the kind of work that's currently being done by the upper echelon of the labor force. "The good scenario is one where AI makes elite expertise cheaper and more accessible," Autor says. "All of a sudden, these workers can summarize the literature, write a decent document, organize a schedule, do medical analysis, design a product, figure out a route, maybe even fly a plane."
In other words, jobs that are currently reserved for more elite workers could be done with lower-skilled workers now that smart machines can help them. That not only could make products and services cheaper, it could open up ladders of opportunity for more people.
But, Autor says, we'll need to retool our education system to help make this a reality, enabling people to gain "foundational skills" for various industries and teaching them how to effectively use this new technology. He sees a parallel in the industrial era, when universal K-12 education helped prepare a wide array of Americans for industrial jobs. Autor says we'll also need other efforts, from government, businesses, universities, and nonprofits, to help bring this world into being.
"Basically, the middle-skilled workers of the future could be people who have foundational skills in healthcare, in the trades, in travel and services," Autor says. Then, with the help of AI, they could get really good at these jobs.
Of course, even in this rosy scenario, the expertise of some high-paid professionals would be devalued because they'll face new competition from AI-enhanced newcomers. But Autor imagines this happening gradually. "I don't think they're all gonna be just like thrown out the top floor of office buildings," Autor says.
This scenario, Autor says, could also have another potential benefit: "In the long run, it means fewer people have to go to college, which is expensive."
The Bad Scenarios
The worst scenario of all is if AI becomes sentient and kills us all. Or, more likely, as Autor says, "we use AI to kill each other."
Some leading technologists are actually worried that AI poses an existential threat to us, so let's not just laugh that possibility off. Autor himself is worried that AI could possibly empower nefarious actors, be used to spread misinformation, hijack our critical systems, and/or create a whole host of security threats.
But — ahem — putting existential threats aside, there are also some bad economic scenarios. It's possible that AI actually ends up radically increasing inequality. It could make the owners of the supercomputers behind AI systems insanely rich. It could empower all sorts of business owners to replace their workers with machines. It could erase most of our comparative advantages as human beings and make all but a few workers in various industries obsolete, leaving only superstars, those with seniority, or those with connections to do those jobs.
"I don't wanna rule it out," Autor says. "I mean, you can imagine a world where you just need a few super experts overseeing everything and everything else is done by machines." In that world, we would need to completely rethink how our society functions, what we do with our time, and how income is distributed.
But, Autor says, he sees important reasons why these bad scenarios are unlikely, at least across the entire economy, and anytime soon.
Why The Happier Scenario Is More Likely Than The Bad Scenarios
For starters, AI still lags far behind humans in many realms. We still possess plenty of capabilities, talents, and skills that these machines cannot match. "First of all, we live in the physical world, which most machines do not," Autor says. "There's all kinds of things that we can do with our hands and our bodies and our faces, and so that machines are not at the moment doing."
More than that, Autor says, we still have brains that outmatch computers in many ways. We are "more adaptive problem solvers," Autor says. "We have much more common sense, and, of course, we're much better at relating to other people. And we're more creative. I don't think we've been surpassed in many realms. In some we will be surpassed quickly, but some we will not be."
Even more, Autor says, we are actually living in a period where the problem is that there are too few people to fill jobs, not too few jobs for people to do. Our fertility rates have plummeted. Our population is getting older and retiring. Immigration has radically slowed. "The US population is growing at its slowest rate since the founding of the nation," Autor says. Many other countries are seeing a similar demographic crunch.
"And that's a world where we actually need a lot more automation to enable us to do the things we need to do, including care for the elderly," Autor says. "So I'm not worried about us running out of work and running out of jobs to do."
Autor says his biggest fears at the moment are actually that AI could be used to build smart weapons, spread misinformation, make us further question truth and reality with fake videos, images, and recordings, and endanger our peace and security. "In my mind, I actually think the irony is that the labor market is the least scary part of this at the moment," Autor says. "I'm actually much more scared about the impact of AI on everything else."