Interview: Daron Acemoğlu on Technology and Prosperity
#59: With Daron Acemoğlu
Daron Acemoğlu is an Institute Professor of Economics in the Department of Economics at the Massachusetts Institute of Technology (MIT). He is a winner of the John Bates Clark Medal and the author of Why Nations Fail. His new book is titled Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Our questions are in bold, his transcribed answers in block quotes.
There’s a widespread impression that innovation always leads to progress. Why is that not the case?
First let me say why that premise is so appealing. If we compare ourselves, especially those of us who live in industrialised countries today, to people who lived 300 years ago, we are enormously fortunate. Much more prosperous, much more comfortable, much healthier. There is no doubt that this could not have been achieved without major innovations being applied to every dimension of our social and productive lives. It’s easy to jump from this historical sweep to say that innovation has been the main engine of economic progress and hence we should welcome all kinds of innovation.
Theoretically, we could think that innovations expand what we can do, and if there are more things we can do we should benefit from them. But reality is more complex, more nuanced. For example, for the first 100 years or so after the beginning of the Industrial Revolution in 1750, the gains were very, very unequally distributed. In fact, many key groups, including the majority of the workforce, experienced worse outcomes in terms of lower hourly wages, stagnant income, much longer working days, much more onerous conditions in their workplaces, much less healthy lives and much worse living conditions more broadly.
Only after both the direction of technology and the institutional fabric of society changed in a specific direction towards bolstering more shared prosperity did we see the types of outcomes that we now take for granted. Higher wages, the cleaning up of filthy, disease-infested cities, better working conditions, more education. None of these things happened automatically; they were the outcome of a struggle. A struggle over technology and a struggle over institutions.
Can you provide an example of an innovation that did lead to progress?
One great example comes from railways. There is some debate within economic history about exactly how beneficial railways were for the aggregate economy, but when you look at how they connected different parts of industrialising countries starting with Britain and then continental Europe and the United States, how they enabled other industries to become more efficient, how they started paying much higher wages – that’s a clear example of a technology that was broadly beneficial.
But, and that’s the reason why I gave the example of railways, it’s not in the nature of railways. It’s in the nature of the choices and institutions in which they were embedded. If you look at countries colonised by Europeans, railways were used as a method of repression. And then of course they did not have that sort of benefit. How you make choices in regards to technology matters greatly.
There’s tremendous excitement about the power of artificial intelligence (AI). Why are you sceptical about the economic impact of AI?
It’s a bit like railways. I’m not doubting the technical capabilities of AI. I’m very impressed by some of the generative AI possibilities. In my assessment, there are many directions in which it can go that would be very enriching for humans as consumers, as communicators and especially as workers if generative AI provides more accurate and better curated, more usable, and reliable information for decision-making. On the other hand, I also see generative AI as a very flexible technology that can be used in many different ways and right now the tech industry is prioritising much more manipulative uses of generative AI. Within the broad sweep of the book, one major issue is whether technologies are being used just for automation or whether they are also being used for generating new human competencies, tasks and productivity.
I’m particularly worried about generative AI being used just for automation. When it follows that path, the issue is that the industry often either purposefully or naively overestimates the productivity gains from automation. What we have seen over the last four decades is that productivity gains from automation have often fallen short of what was promised. There are some instances, such as industrial robots, that have increased productivity tremendously for some specific tasks. But general automation has modest effects unless it is also combined with new activities and new ways of making humans more productive. I’m worried that we’re going to get what Pascual Restrepo and I have called so-so automation. That means we get automation but it’s not transformative, it’s not amazing. You just displace workers but you don’t get the productivity benefits.
In Power and Progress, you write that AI is potentially undermining democracy. How?
What we know is that democracy has been doing badly over the last sixteen years or so. For example, from the mid-1970s up to around 2005-2006, you generally have strengthening democracies around the world. More countries transitioning into democracy, fewer coups. That trend reverses around 2006-2007. There is also evidence that two distinct but related uses of AI are contributing to democratic weakness. That’s quite well-established. One is AI being used for surveillance, censorship and anti-dissident activities, such as in China – but also now in Iran and Russia and other authoritarian countries. Second, social-media-type activities have tended to create echo chambers and room for emotionally charged exchanges that have further polarised people while creating a hotbed of misinformation and extremist propaganda.
What we don’t know is whether these two negative effects of AI are the major causes of the decline. I don’t think that’s necessarily true, but they have been important contributing factors. Chapter 10, where we talk about AI breaking democracy, is partly forward-looking. We think AI has already had a negative effect on democracy even though it probably wasn’t the defining force. But along the current trajectory, we are afraid that it will have even more negative effects on democracy, particularly if the extent of data centralisation continues either in the hands of the Chinese Communist Party or in the hands of Amazon, Meta or Alphabet. That is going to create a very adversarial playground for democracy because you cannot have citizens being manipulated and controlled by either parties or corporations and still play their role as active democratic citizens.
How do we turn it around?
I don’t think there is an easy answer to that. The book is optimistic on the one hand and pessimistic on the other. It starts pessimistic at some level because it is a counterweight against the techno-optimism of Silicon Valley in the United States, which says just trust the technological geniuses and brilliant companies and everything is going to work out fine. But we are also optimistic in the following sense: We reject the view that we are necessarily heading to doom. We also argue very strongly that technically it is feasible to use digital tools and AI in a much more pro-human, pro-worker, pro-democracy way. But, and there’s a big but, that is the final layer of pessimism: This requires a major course-correction and that is politically difficult, especially in the United States where the government has not been engaged in regulation of the tech industry as in many other countries (including Germany, for example).
Polarisation has increased and the power of tech companies has become disproportionate. Nevertheless, we suggest not an easy but a feasible path for changing our course. It starts by recognising the problem and changing the narrative. In particular, having a broader conversation about making choices related to AI. What sort of choices do we want? What sort of social consequences should we care about? That’s very important. Building new institutions or strengthening existing countervailing powers against the dominance of the tech industry. And finally, we suggest some specific policies, but with the full disclosure that while I am pretty sure that the first two steps I outlined are essential, there are many different ways of choosing policies and we also don’t know the effectiveness of some policies. You have to do experimentation. The heart of the matter is that you have to choose policies that reduce the dominance of the tech industry, create room for new business models that are more inclusive and also push innovations in a more pro-human, pro-worker direction. Humans need to be put in the driving seat.
What is a question you wish you were asked and what is your answer to it?
The question: What’s missing from the discussion of new technology both in economics and the general media?
It’s the importance of a vision. This is something that we emphasise in the book. Technologies cannot exist independent of a vision articulated as: What is the problem to be solved? How do we solve it? What are the most interesting directions? What are socially desirable directions? What is the socially acceptable collateral damage on the way? The vision of the tech industry right now is very much driven by a view that digital technologies can perform a lot of human tasks better than most humans, and that we should have technologies designed by geniuses and used by not-so-smart people, or perhaps to sideline not-so-smart people. This is actually a very dominant view, even if people don’t always express it in these stark terms.
To have a course-correction, we need a new vision where machines are viewed as complements to humans, as useful tools for humans and that’s why the question of policy is a very difficult one. If that vision doesn’t change, you’re not going to be able to deal with it by providing fiscal incentives or subsidies. What you need are the sort of policies that will change that vision. That’s why market structure is important, that’s why the entry of new products is important, that’s why breaking the dominance of existing players is important.