The Dark Ages were not entirely dark. Advances in agriculture and building technology increased medieval wealth and led to a wave of cathedral construction in Europe. However, it was a time of profound inequality, in which elites captured virtually all economic gains. In Britain, as Canterbury Cathedral soared upward, peasants saw no net increase in wealth between 1100 and 1300. Life expectancy hovered around 25 years. Chronic malnutrition was rampant.

“We’ve been struggling to share prosperity for a long time,” says MIT Professor Simon Johnson. “Every cathedral that your parents dragged you to see in Europe is a symbol of despair and expropriation, made possible by higher productivity.”

At a glance, this might not seem relevant to life in 2023. But Johnson and his MIT colleague Daron Acemoglu, both economists, think it is. Technology drives economic progress. As innovations take hold, one perpetual question is: Who benefits?

This applies, the scholars believe, to automation and artificial intelligence, which are the focus of a new book by Acemoglu and Johnson, “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” published this week by PublicAffairs. In it, they examine who reaped the rewards from past innovations and who may gain from AI today, economically and politically.

“The book is about the choices we make with technology,” Johnson says. “That’s a very MIT type of theme. But a lot of people feel technology just descends on you, and you have to live with it.”

AI could develop as a beneficial force, Johnson says. However, he adds, “Many algorithms are being designed to try to replace humans as much as possible. We think that’s entirely wrong. The way we make progress with technology is by making machines useful to people, not displacing them. In the past we have had automation, but with new tasks for people to do and sufficient countervailing power in society.”

Today, AI is a tool of social control for some governments that also creates riches for a small number of people, according to Acemoglu and Johnson. “The current path of AI is neither good for the economy nor for democracy, and these two problems, unfortunately, reinforce each other,” they write.


A return to shared prosperity?

Acemoglu and Johnson have collaborated before; in the early 2000s, with political scientist James Robinson, they produced influential papers about politics and economic progress. Acemoglu, an Institute Professor at MIT, also co-authored with Robinson the books “Why Nations Fail” (2012), about political institutions and growth, and “The Narrow Corridor” (2019), which casts liberty as the never-assured outcome of social struggle.

Johnson, the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management, wrote “13 Bankers” (2010), about finance reform, and, with MIT economist Jonathan Gruber, “Jump-Starting America” (2019), a call for more investment in scientific research.

In “Power and Progress,” the authors emphasize that technology has created remarkable long-term benefits. As they write, “we are greatly better off than our ancestors,” and “scientific and technological progress is a vital part of that story.”

Still, a great deal of suffering and oppression has occurred while those long-term gains were unfolding, and not just during medieval times.

“It was a 100-year struggle during the Industrial Revolution for workers to get any cut of these massive productivity gains in textiles and railways,” Johnson observes. Broader progress has come through increased labor power and electoral government; when the U.S. economy grew spectacularly for three decades after World War II, gains were widely distributed, though that has not been the case recently.

“We’re suggesting we can get back onto that path of shared prosperity, reharness technology for everybody, and get productivity gains,” Johnson says. “We had all that in the postwar period. We can get it back, but not with the current form of our machine intelligence obsession. That, we think, is undermining prosperity in the U.S. and around the world.”

A call for “machine usefulness,” not “so-so automation”

What do Acemoglu and Johnson think is deficient about AI? For one thing, they believe the development of AI is too focused on mimicking human intelligence. The scholars are skeptical of the notion that AI genuinely mirrors human thinking, even in the case of systems such as the chess program AlphaZero, which they regard as closer to a specialized set of instructions.


Image recognition programs, for instance (is that a husky or a wolf?), use large data sets of past human decisions to build predictive models. But these models are often correlation-dependent (a husky is more likely to be photographed in front of a house), and they cannot replicate the cues humans rely on. Researchers know this, of course, and keep refining their tools. Still, Acemoglu and Johnson contend that many AI programs are less agile than the human mind and are suboptimal replacements for it, even as AI is designed to replace human work.
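
To make the husky-versus-wolf point concrete, here is a minimal sketch (not from the book, with invented synthetic features) of how a classifier trained on correlated data can latch onto a spurious background cue and then degrade once that correlation breaks:

```python
# Toy illustration of a spurious correlation: label 1 = "wolf", 0 = "husky".
# Feature 0 is a weak cue genuinely tied to the animal; feature 1 is the
# background ("snow" vs. "yard"), which merely correlates with the label
# in the training photos. All numbers here are made up for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, background_correlation):
    y = rng.integers(0, 2, size=n)
    animal_cue = y + rng.normal(0, 2.0, size=n)      # noisy but real signal
    snow = np.where(rng.random(n) < background_correlation, y, 1 - y)
    snow = snow + rng.normal(0, 0.1, size=n)          # nearly clean, but spurious
    return np.column_stack([animal_cue, snow]), y

# Training photos: wolves almost always shot against snow.
X_train, y_train = make_data(5000, background_correlation=0.95)
# Deployment photos: the background no longer tracks the animal.
X_test, y_test = make_data(5000, background_correlation=0.50)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))   # high
print("test accuracy:", model.score(X_test, y_test))      # drops sharply
print("learned weights [animal_cue, snow]:", model.coef_[0])
```

The model weights the background feature heavily because it is nearly noise-free in training, so its accuracy collapses when the correlation disappears, much as the authors argue such systems fail to capture the cues a person would actually use.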

Acemoglu, who has published many papers on automation and robots, calls these replacement tools “so-so technologies.” A supermarket self-checkout machine does not add meaningful economic productivity; it just transfers work to customers and wealth to shareholders. Among more sophisticated AI tools, a customer service line that uses AI but fails to address a caller’s problem can frustrate people, leading them to vent once they finally reach a human and making the whole process less efficient.

All told, Acemoglu and Johnson write, “neither traditional digital technologies nor AI can perform essential tasks that involve social interaction, adaptation, flexibility, and communication.”

Instead, growth-minded economists prefer technologies that raise workers’ “marginal productivity,” giving firms an incentive to hire more people rather than fewer. Rather than aiming to eliminate medical specialists such as radiologists (a much-forecast AI development that has not occurred), Acemoglu and Johnson suggest AI tools could expand what home health care workers can do and make their services more valuable, without reducing employment in the sector.

“We think there is a fork in the road, and it’s not too late — AI is a very good opportunity to reassert machine usefulness as a philosophy of design,” Johnson says. “And to look for ways to put tools in the hands of workers, including lower-wage workers.”

Defining the discussion

Another set of AI issues that concern Acemoglu and Johnson extends directly into politics: surveillance technologies, facial-recognition tools, intensive data collection, and AI-spread misinformation.

China deploys AI to create “social credit” scores for citizens, along with heavy surveillance, while tightly restricting freedom of expression. Elsewhere, social media platforms use algorithms to influence what users see; by emphasizing “engagement” above other priorities, they can spread harmful misinformation.


Indeed, throughout “Power and Progress,” Acemoglu and Johnson emphasize that the use of AI can set up self-reinforcing dynamics in which those who benefit economically can gain political influence and power at the expense of wider democratic participation.

To alter this trajectory, Acemoglu and Johnson advocate for an extensive menu of policy responses, including data ownership for internet users (an idea of technologist Jaron Lanier); tax reform that rewards employment more than automation; government support for a diversity of high-tech research directions; repealing Section 230 of the 1996 Communications Decency Act, which shields online platforms from legal liability for the content their users post; and a digital advertising tax (aimed at limiting the profitability of algorithm-driven misinformation).

Johnson believes people of all ideologies have incentives to support such measures: “The point we’re making is not a partisan point,” he says.

Other scholars have praised “Power and Progress.” Michael Sandel, the Anne T. and Robert M. Bass Professor of Government at Harvard University, has called it a “humane and hopeful book” that “shows how we can steer technology to promote the public good,” and is “required reading for everyone who cares about the fate of democracy in a digital age.”

For their part, Acemoglu and Johnson want to broaden the public discussion of AI beyond industry leaders, discard the notion that AI’s current path is inevitable, and think again about human agency, social priorities, and economic possibilities.

“Debates on new technology ought to center not just on the brilliance of new products and algorithms but on whether they are working for the people or against the people,” they write.

“We need these discussions,” Johnson says. “There’s nothing inherent in technology. It’s within our control. Even if you think we can’t say no to new technology, you can channel it, and get better outcomes from it, if you talk about it.”
