The Capitalism and Freedom in the 21st Century Podcast

Episode 18. Simon Johnson (MIT Sloan Economics Professor and Former IMF Chief Economist) on Technology, AI, Political Economy, and Economic Development

Podcast Interview Transcript

Simon Johnson (MIT Sloan Economics Professor and Former IMF Chief Economist) joins the podcast to discuss his new book "Power and Progress", co-authored with his MIT colleague Daron Acemoglu, on the interplay between technology, political economy, and economic development.

Jon: “This is the Capitalism and Freedom in the 21st Century podcast, where we talk about economics, markets, and public policy. I'm Jon Hartley, your host. Today, I'm joined by Simon Johnson, who is the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management, was previously Chief Economist of the IMF, and just co-authored a fascinating, newly published book with his MIT colleague, Daron Acemoglu, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Welcome, Simon.”

Simon: “Thanks for having me.”

Jon: “So, you're British, you were born in the UK, how did you first get interested in economics?”

Simon: “Well, I was looking for engineering, but on a grander scale, I think, and economics captivated me early on as something that has the potential to address really, really big issues. So, I was lucky to fall into it when I did, and lucky to get the education that I received, and I think it's been all fascinating questions since then.”

Jon: “You were an MIT economics PhD student, certainly the best place to get an economics PhD. Were you influenced by anything in, say, your childhood? I know for some who grew up in the 1970s in the UK, that was sort of an important experience for them, or those who grew up, you know, when Margaret Thatcher was Prime Minister. Was any of that formative for you in terms of your outlook on global growth and a lot of the topics that you bring up, politics as well?”

Simon: “Yes, I think in retrospect. I'm from Sheffield in the north of England, a city that was, you know, an economic powerhouse. My family was involved in making screws on one side and making steel on the other side. And by the time I was aware of, you know, the economy, things had turned downhill, and all of the industry there was troubled. There were a lot of job losses, and people were moving away. So I'm still trying to understand exactly what went wrong there and why we lost so many jobs. It does give me a bit of a lens for thinking about economic decline, but also recoveries and more sustained growth and innovation and entrepreneurship elsewhere.”

Jon: “So I want to get into your research agenda. You've written a lot of highly influential articles in political economy. With some 80,000 citations, that is, you know, quite an accomplishment. You co-authored The Colonial Origins of Comparative Development, which is both your own most cited paper as well as Daron Acemoglu's most cited paper. You've been at the center of all this groundbreaking research on institutions and growth, which has been highly influential for several decades now. It's certainly had a lot of influence on the World Bank and the IMF and their approach to promoting economic growth. How would you describe your research agenda in your own words?”

Simon: “Well, I think we're looking for the causes of poverty and wealth around the world. And many people, of course, have done that before and we're standing on their shoulders. But there's a lot of really interesting questions, including, you know, after the collapse of colonial empires, when technology in principle could have flowed freely to many places and many places around the world could, in principle and in theory, have become much richer. They didn't. So why not? What are the actual impediments that these countries face? And what could we do, you know, in terms of academia or in terms of public policy or in terms of the IMF, what could we do that would actually help the billions of people around the world who want to live better?”

Jon: “You were chief economist at the IMF right after Raghuram Rajan, or remind me again in terms of your time at the IMF, what was the big thing happening then?”

Simon: “So I became chief economist at the IMF in early 2007. I held that job until the end of August 2008. So, this was the run-up to the global financial crisis. I'd worked there previously for two years under Raghuram Rajan, who, of course, is a brilliant financial thinker. He pulled the IMF, and the part of the IMF that I was in, the research department, towards thinking about the interaction between finance and macroeconomics. And when I got that job, I was Raghuram's successor. We were immediately, you know, the IMF, my department, was in the hot seat in terms of understanding what is going on with mortgage-backed securities, with credit default swap spreads. How will this spread around the world? What are the sensible interventions that could be made? So, I had a ringside seat for the events that became, after I left, somewhat devastating to millions of people. And from that, I took away a lot of questions and a lot of interest in trying to resolve those questions with regard to making the financial system safer.”

Jon: “You wrote a book, 13 Bankers, on that whole episode. And you've written quite a bit on financial cycles and business cycles as well. Your latest book, Power and Progress, is primarily on economic growth, political economy, and technology. Could you tell us a little bit about the book and the point that you're trying to make with Daron?”

Simon: “Yes. I think the book, both for Daron and me, is a bringing together of work we've done on political economy and work we've done on technology, which have tended to be somewhat separate. So, we've merged these things. It's the political economy of technological choice. And it asks the question, first and foremost, is technology predestined? Is it just something that happens to you? And we think the answer is no. There are a lot of choices in there. And secondly, is it possible to redirect technological change? Can you push it one way or another? We see influential individuals doing that. We see companies doing that. Can public policy do it? Can civil society do it? And if so, in which direction would you like to push it?”

Jon: “So there are topics like directed technological change, distorted technological change. To me, it sounds a little bit like the words industrial policy. To what degree can policymakers really shape industries and direct technology? What are the limitations? To what degree should they really be involved in directing technology? Obviously, industrial policy has been a controversial set of words in economics for quite some time, certainly less controversial in some circles than others. But I'm curious, what is your thesis on topics like directed technological change and industrial policy?”

Simon: “Well, policymakers definitely can shape technology and innovation. They do it with the tax code. They do it through a knowledge of a certain industry. They do it through the Department of Defense. I think the really interesting and still-to-be-totally-resolved question is, can they tilt it in a way that will be more broadly in the public interest? So, it's not about capture. It's not about lobbying. It's about more good jobs, for example. And we think the answer is yes. We think it's not easy. We think it's quite a struggle. It requires a fair amount of debate and argument. That's one reason we wrote the book, to try and push that debate forward. But we think that, for example, if you think about the way that artificial intelligence is being developed now, there's a lot of emphasis on machine intelligence, which is a euphemism for replacing workers, and replacing them in a way that doesn't raise the marginal productivity of the workers who remain employed in the same enterprise. We would rather tilt it towards machine usefulness, which is a term that we've invented, though we're standing on the shoulders of a lot of computer science people before us who say that machines should be made to augment human capabilities, not to replace them. I think it's easy to see in history which way we've gone at various moments. It's easy to talk about the creation of new tasks, stimulating the demand for labor, and potentially raising productivity and wages. But can that be done in a more deliberate, policy-driven fashion? I think that's a fair question, and that's what now lies before us, trying to sort out that question.”

Jon: “There's a popular set of words in the labor economics and technological change literature, which is skill-biased technological change. And with the introduction of new AI tools, things like ChatGPT and generative AI, there are some who argue that it's actually going to be skilled jobs that are severely impacted: coders, those in the service economy writing things as part of their service jobs. The argument is we won't need as many coders anymore; we'll have fewer software engineers, who will use things like Copilot and ChatGPT to code faster. What is your view on that? Do you think we're in for a reversal, where skilled workers, who have done so well for so long relative to their unskilled, less educated counterparts, get hit hardest by AI? Or do you think both skilled and unskilled work is broadly in jeopardy here, and we'll have robots replacing unskilled workers pretty quickly as well?”

Simon: “If you read the novels of Isaac Asimov, which I really do recommend to everybody, because he had a brilliant imagination, in the robot novels in particular, of what would happen. In the 1940s and 1950s, when he was writing those, the thinking was that robots would take over manual jobs first, and then later, as the robots became more evolved, they'd move on to cognitive tasks. I think what we've discovered, including in the last two years, is the opposite: artificial intelligence is quite good, or better than humans, at some cognitive tasks, and really not as good as humans, and won't be for a long time, at much more manual tasks. That's probably because we've been walking around and across rough terrain for millions of years, us and our ancestors, but abstract thinking is tens of thousands of years old, and we've only been going to meetings and writing memos to each other for a few hundred years of the modern sort. So, rather than skilled versus unskilled, I would call it manual work and cognitive work, and the pressure is on the cognitive jobs. Now, within that, there's a lot of variation already developing. In some occupations, it seems like it's the most highly skilled, highly paid people who will benefit. But in other places, they're the ones who are going to get fired, and then the people who have the capital are going to replace them with much cheaper labor. So, I think it's absolutely all in flux. And our main point is it doesn't have to be about replacing workers. It can be about augmenting capabilities, including augmenting the capabilities of lower-skilled workers in some sort of cognitive function. And that's really interesting, because then there could be much more by way of productivity gains, individual productivity gains, economy-wide productivity gains, and wage gains that would filter down. So, this doesn't have to be a zero-sum game, where some people win and some people lose. Many more people could win than lose in this instance. But that may not be the default course we're on currently.”

Jon: “That's fascinating. I'm curious, what do you think about this whole argument, which has gained some traction over maybe the past decade in public intellectual circles, that this new era of AI is due to unleash massive unemployment, and hence we need something like universal basic income to step in and help those who are significantly displaced by this technological innovation in AI?”

Simon: “Yeah, we're not big fans of universal basic income, UBI, primarily because we think people like to work. Now, it doesn't mean that they have to work 60 hours a week in back-breaking jobs, but work seems to be important as a source of income, a source of identity, a source of political voice. And UBI, I think, lets the technology industry off the hook with regard to its impact. And I would rather they be on the hook and be responsible and not point fingers at other people and say, right, it's your job to take care of all these people who've gotten fired. I think it's also the case, by the way, that in an economy like the U.S., you don't get that much unemployment. What you get is people pushed down into very low-wage jobs, who then fall out of the labor force, so labor force participation declines. We've certainly seen plenty of that from the digital transformation of the past four decades, so I would not put too much weight on a UBI-type approach.”

Jon: “Interesting! If you think about people like Milton Friedman, writing in the early 1960s in Capitalism and Freedom, he famously sort of endorsed the negative income tax, which is sort of like universal basic income. I think he famously had some sort of analogy that it's better to pay people something like a UBI-type payment rather than to have them dig holes and fill them up again. I think he had some story about the Soviet economy, where there were people doing something like this. Do you think it makes sense to, I guess, avoid directing people toward more unproductive types of work? Do you think that sort of view has any validity? Obviously, it's much more complicated than that, but do you think these sorts of industrial policy-type ideas are, to any degree, pushing people toward more unproductive forms of work, where maybe it makes more sense to have something like more generous redistribution through the system, maybe not UBI, but maybe an enhanced earned income tax credit or something like that to respond to this?”

Simon: “Yeah, I haven't seen anybody really proposing truly unproductive work. I mean, there's a question of how much you subsidize the construction of new chip fabs, for example, in the U.S. And there's a very interesting debate about how much more expensive it is to build such fabs in the U.S. versus in Taiwan, for example. The Taiwanese say it's four times as expensive. My expert friends at MIT say it's 20% more expensive. So, we're going to find out.”

Jon: “Is it sort of a worker shortage thing? What about that as well?”

Simon: “Well, so—”

Jon: “It's one of the complaints that I think TSMC and some of these chip companies have.”

Simon: “Yes, they do say that, and when you drill down into it, it's very interesting. So they use PhD-qualified engineers on the shop floor in Taiwan, and it's not clear to everyone that that's a super productive use of PhDs. There may be a malleability of labor and a willingness of highly skilled people to do mundane or even routine tasks in Taiwan; you also hear that about South Korea compared to the United States. So, I think we are attempting to find out what kind of training is needed and where those people will come from. These are good jobs in the fabs. Who's going to get them? How much education do you need to be effective in the clean rooms and so on? Very, very interesting and important questions. But I'm pretty optimistic. I mean, in this country, which is a big country with a lot of people who want to work hard, I think we will find plenty of talented people. Back to the question about freedom for a second. I think the question that we grapple with is, first of all, are we taxing labor too much? Payroll taxes and retirement contributions raise the cost of labor, and people are thinking, hmm, machines versus labor, which one do I want to go for? I think that's an important issue. And I think also that we have a lot of issues around the care economy, around how much you are willing to pay out of the public purse for home health aides who take care of people who have really terrible health problems. You don't want to send them to hospital; that's way more expensive. Somebody has to take care of them. Families can't afford to do it by themselves. How much are you willing to pay? Can you pay a living wage? And if we don't, who really goes into that work? So, I wouldn't call that at all unproductive work. But I think it is a massive question how we handle additional longevity, how we handle many extra years of health, but also some ill health at the end of life. Those are tough questions.”

Jon: “So on the whole tax side of things, it seems like in recent years there's obviously been a lot of interest in taxing wealth. There's been a lot of push from folks like Emmanuel Saez and Thomas Piketty, people who have been pushing things like a wealth tax. That's, I guess, more in the inequality frame, but it's certainly related to the AI and gains-to-capital discussion we're having now. I'm curious: people like Bill Gates have talked about a robot tax. What do you think about that?”

Simon: “So I do think the tax code is a bit too tilted towards encouraging machines versus hiring labor. So, I think that can be redressed in various ways. But in contrast to those people who are arguing for redistribution, who say let the productive process do its thing, look at the distributional outcomes, and then do some tax and subsidy welfare payments if you don't like the outcomes, we're much more about tilting or redirecting technological progress to change the outcomes that way. And that's primarily because I don't think a place like the U.S. is ever going to do a lot of redistribution. And I think the nature of work and the kinds of jobs you get, that's important in and of itself. And redistribution, of course, doesn't address that. That just gives you a bit more money for the work that you've done. So, thinking about how to redirect technological progress, that's our main agenda in this book.”

Jon: “On this question of mass unemployment, I guess the unemployment rate is between 3.5% and 4% currently. You don't really see there being a massive surge in unemployment any time soon?”

Simon: “Well, I think my former colleague and good friend Mike Mussa, who sadly has left us, liked to say that the main advice for anybody who works at the IMF is never forecast a number and a date in the same forecast. So, look, I think ChatGPT is a wake-up call, and there's a speed of change in some of these cognitive tasks, in the capabilities of AI, that is disconcerting. Because we know, particularly in the case of the U.S., we can handle a lot of different shocks, but there is a speed of adjustment issue. So, I don't think we're going to face massive unemployment. I do think there's going to be pressure on some jobs that were previously good jobs. And I do worry that we may not be creating enough new tasks. That's automation, and automation is a natural part of economic development. But the offset, and what we were good at in the early to mid-20th century, was creating a lot of new tasks so that people had jobs, were employed, and we could absorb all the people who moved out of agriculture, all the jobs that were eliminated when Henry Ford automated car production. We didn't lose jobs in the car business. We gained jobs in the peak phase of that transition. But it takes some time. Electricity was adopted over 30 or 40 years, the electrification of factories and production. AI seems to be coming at us a lot faster than that, so I think we need to step up our game in terms of response time.”

Jon: “And in this adjustment, I guess at some point some people will leave the workforce, but maybe some of those people will be retrained to manage the robots or manage the AI. That's, I guess, the hope there. I want to get a little bit into the political economy discussion around AI. So I'm curious, what do you think about AI in the context of totalitarian countries like China, where they're using AI for, I guess, suppression of their people? There's this idea that was promoted by Milton Friedman that's partially related to the work that Daron and yourself have done. Milton Friedman's argument, in the first chapter of Capitalism and Freedom, is the idea that growth would cause democracy, or that economic freedom would cause political freedom. Daron and his co-authors, in the institutional framework, have also made some arguments that democracy causes economic growth. This hasn't quite played out with China. It's still not a politically free society, but it has experienced quite a bit of growth in recent decades, though that may be fading now. But I think something Friedman acknowledged later in life was that what he had argued in Capitalism and Freedom was not playing out with China. Becoming a much more prosperous society did not ultimately cause it to become democratic, and to this day it's not democratic. How do you think things like AI are disrupting these traditional processes of growth and democracy as things we would expect to go together? How do you see that impacting that symbiotic relationship?”

Simon: “Well, I think there's been a problem there for a while. I remember, for some reason, this vivid image of Milton Friedman in Hong Kong, I think in his PBS special, talking about capitalism and freedom in exactly this way. I remember thinking, well, that's good. All we need to do is get growth and become more like Hong Kong. And the first time I went to Hong Kong, probably in the early 1990s, I was quite impressed. I have not been recently, but all the stories from Hong Kong, including right now, are quite discouraging in terms of the way in which freedom, in any Friedman sense or any sense, has been suppressed, and it's become an oppressive place. So, I think that's a wake-up call. Now, is AI a technology for liberation, a technology for self-expression, or is it a technology for surveillance and suppression? And the answer is yes, it's both. And it depends on how you use it. So, I think that the new split in the world is going to be between countries that are more like us, which I think will have to put a lot of guardrails around surveillance, for example.

That's a good thing. And then there'll be other countries that put no guardrails around surveillance, certainly with regard to state surveillance. They will be following China, and they will be following a certain line of technology. And you think about it: being authoritarian is obviously a set of policies, and it's implemented through technology. So how much does it cost you to be an authoritarian or run an authoritarian regime? Well, I think the AI that's developed by the Chinese is going to make it a lot cheaper to be authoritarian. So, if you think in terms of an incentive framework, we're going to have more authoritarians, and they're going to last for longer, facilitated by the same technology that I hope, and believe, we will use to strengthen freedom and true liberty in the United States.”

Jon: “One of the interesting things about this book is that it really talks about technology in a more political sense, and that's not usually the frame in which we, as economists, typically think about technology. It seems to me like it's only been recently that we think about the politics of big tech and censorship, and obviously that's a very big topic right now. Traditionally, it's usually been, oh, great, we have iPhones now, or we have computers now, and it's great, we can do things faster. At least in my own lifetime, technology has had a relatively positive connotation. I'm curious, what are your favorite examples from your book of this struggle, where elites have used technology in some way that did not promote broad-based prosperity as much as it could otherwise have? What are some of the examples in the book that you outline?”

Simon: “Well, I think the most awful example is the cotton gin, right, which was developed right at the end of the 18th century, right after American independence. It made it easier to process upland cotton. It meant you could run cotton plantations away from the East Coast, across the Deep South. And what happened, of course, was that enslaved people were moved from a very harsh life on the East Coast into much worse conditions across the Deep South. And that became the mainstay of the slave economy, and that remained, you know, with extremely harsh conditions, until the Civil War. And, of course, that was also facilitated and supported, or driven, by industrialization in Britain, which was about cotton and cotton textiles, and they were buying a lot of the raw cotton from the American South. So, I think it's actually rather pervasive throughout history that some people gain and other people lose from the deployment of technology. The shared gain is much more unusual. And, you know, the sort of holy grail we're looking for here is the Henry Ford experience, where Henry Ford automates car production and brings electricity to the factory. He replaces a lot of workers, but he also generates a vast number of new tasks. And there's a class of people, some call them, and we've adopted the term, manager-engineers, white-collar workers who emerge to plan this, to organize it, to run all the infrastructure around car production. That's a lot of people, and a lot of money that's shared in the form of very high wages. I don't think Henry Ford was that altruistic. I think he was quite paternalistic. But he faced countervailing power, including trade unions, so he wanted to preempt that with high wages, and he also had to, at some point, negotiate with the unions and have collective bargaining. But that was in a context where, you know, the company did well, his family did well, the whole auto industry did well. So, finding that kind of win-win approach, win-win solution: we did a lot of that before, during, and after World War II. After 1980, it's become much more unusual.”

Jon: “I mean, it's fascinating to think that in the past few decades we've had these productivity gains, but we haven't quite had real wage gains to follow them. Do you have a favorite explanation for why real wage gains haven't followed productivity gains as much as has historically been the case? There are a lot of different narratives told about why this is.”

Simon: “Right, so real wages have risen, again, for some people, more skilled people. It's for the lower-skilled, less educated people that they haven't risen. I mean, I think the main contenders are automation and globalization. And, you know, Daron's work with Pascual Restrepo says that automation is 60, 70% of the explanation. I think even if you lowered that a little bit and said, well, globalization and the China shock are a bigger part of it, you have to remember that there's a technology dimension to globalization, including communication, including computers being applied to trade. And so, between the direct effect of technological change and automation and the indirect effect through globalization and the shifts in where you put certain kinds of jobs, including manufacturing jobs, I think you're looking at 70 to 80% being attributable one way or another to the way in which technology has changed. I do think that attitudes among management have also changed. You know, I've looked at a lot of the literature, what managers were writing, what executives were saying, what people who led industrialization were saying in the 1920s. They were nowhere near as confrontational and antagonistic to workers as more recently. They were much more patting themselves on the back and saying, hey, capitalism is a win-win. In fact, Herbert Hoover is a perfect example of exactly this. And that's why the Republicans were so strong in their control of the presidency in the 1920s in the US, because they were seen as men of business, right? And everybody could gain from that. So, there was a shared prosperity moment. Okay, a particular rock was hit in terms of the Great Depression, but there was also recovery from that. And Eisenhower was another Republican who was regarded as a practical man. Okay, he was a military general, but he was also pro-business, and the business of America was doing business. I mean, it's also interesting to think about the 1920s, even though we think of it as a time of prosperity.”

Jon: “I mean, it's also the time when, you know, zoning laws became legal in the US and the whole immigration system as we know it today was created, so two of the largest barriers to entry were created during that time. It's a very interesting time period in general, with things that probably didn't have much of an effect until much, much later on. But it's interesting too, on the point of disruption, that it's also a regional thing, where automation certainly impacts the automobile industry, which is very much rooted in the Midwest, whereas the China shock very much impacts textiles and the Appalachians, two very different geographical areas with two different regional labor markets being disrupted in different ways. Do you have any thoughts on the Luddites? It's a famous term, you know, used in a derogatory sense toward folks who are seen as, you know, resisting technology. But it's very relevant and probably one of the examples I would think of first, this famous story, and it may be apocryphal, I'm not totally sure, of people destroying textile machines in the, you know, 1800s. Do you think that's a relevant example here? I'm not sure if it's an apocryphal story or not. I think there's a famous person associated with it, which is where the name comes from.”

Simon: “Yeah, well, Lord Byron gave some great speeches about it, and that's in our book. The Luddites were not apocryphal. Whether there was a man called Ned Ludd, that is questionable, perhaps not. But the Luddites were, depending on how you read the history, either skilled workers who were threatened by the arrival of textile factories and automated looms, so they were losing the weaving jobs they had before, or, I would actually say, independent entrepreneurs, because by any standard IRS-type definition today, they had their own equipment, they worked at home, they controlled their own hours, right? They had a lot of autonomy. And either they weren't being offered jobs in the factory, or, if they did come into the factory, they had to take on relatively routine, relatively unskilled tasks, supervising the machinery that took care of what they'd previously done with skill in an artisanal fashion. So, if you see them as independent entrepreneurs who were threatened by the growth of big business, people can be a little more sympathetic. But I think, honestly, trying to prevent automation and prevent the elimination of jobs by machines is tilting at windmills. However, that doesn't mean it's the only thing that can happen or should happen. In some phases of industrial development, including when the railways came to Britain about the same time or slightly after that uprising, a lot of new tasks and a lot of new jobs were created, and a lot of jobs in which the railways wanted to pay premium wages to their workers because they wanted them to take care of safety on the railways, which was absolutely key; they had to be very responsible in a relatively independent way. So, I think technology can be and has been developed in ways that generate new tasks, a lot of new tasks, at the same time as it automates existing work. After 1980, we have had less task creation than we really needed; the demand for relatively unskilled labor has been weak, and we haven't been able to turn that around. And there is a concern that AI will worsen that problem, or maybe layer on some other versions of that problem. And I don't think it has to be that way. I think we could push, and the industry could certainly push, to develop algorithms and approaches that would complement human abilities. So don't replace humans. Make them more productive. Augment their capabilities. And to the extent that raises marginal worker productivity, so much the better.”

Jon: “I want to get back to your earlier work on growth and institutions. Obviously, it's related to the political economy topic here. Can you explain which institutions matter? I've always been told that government policy and politics matter for economic growth, however you want to describe that; you call it institutions. But I think the key question has always been, which institutions or policies matter? And I think this really summarizes a lot of your work, as well as a lot of Daron's work. Can you explain what your taxonomy of institutions is and which ones are most important?”

Simon: “Well, I think a central institution on the economic side is property rights. And that has to be supported by political rights. Go back to the situation in Hong Kong, for example, which had very strong property rights and limited political rights under the British, but they were definitely there. Those political rights have been stripped away, and I think the property rights are under great pressure as a result. But I would also say that if you think about big technological transformations and what happens in them, there's often a phase in which things that were regarded as common property, shared or worked in a common way and maybe worked quite productively, like common land in medieval Britain, get taken. There is an episode in the modernization of Britain in which that land is turned into private property. The communal rights are stripped out, and they're not well protected. The private property is then accumulated, becomes a big source of wealth, and that wealth is used politically to take more common land into private property. I think the same thing is happening now with data. There was a lot of data we put on the Internet, a lot of data we allowed other people to look at and to use through social media, for example. Those data are now being used to train algorithms in ways for which we're not being compensated. They didn't ask for permission. You may not like, I would predict that you won't like, some of the surveillance outcomes that are going to come from pictures that you and others posted on Facebook. And that, I think, is exactly analogous to the taking of common property without permission, or forcing it through, in the enclosure movement in Britain before the Industrial Revolution. I think recognizing and respecting property rights on the Internet, including the rights to the data that you've created and the images that you've put there, is actually really important. And if we could get some recognition of that, and we're already behind the curve, there isn't sufficient legislation on that, and there won't be anytime soon, but if we could address that, I think it would also help us a lot with regard to what we can ask from tech companies, what we can expect from them, and maybe some of the political consequences as well.”

Jon: “So this is like compensation for data. I guess the idea is that people should be compensated at some level for the data that these companies are selling, to some degree?”

Simon: “Yes. I mean, I don't think the amount of compensation is worth that much. I don't think it's going to enable you to retire. But I do think recognizing that it's your property, that you should have a say in how it's used, and that you should be able to say, no, this is my data, and you're not allowed to use it for workplace surveillance that I don't believe is good for me or for other people around me, that matters. And you can only do that if those property rights are properly defined and protected. We won't be able to protect them individually, because we're too small relative to big companies, but if we could form data unions on whatever basis, that then becomes an interesting conversation.”

Jon: “So do you think that really requires something like a regulatory authority, I guess something like the FTC or a body like that, in the U.S. at least, that could impose some sort of changes on big tech companies?”

Simon: “Well, you would certainly need legislation to assert that there is such a thing as ownership of data and to put those restrictions on it. Would it then need to run through a regulatory body? Would you need to grant that regulatory power to someone? Yes, probably, although I think I would prefer a body that was very focused on protecting consumers and didn't have multiple tasks. I think when you give these regulators many mandates and say, worry about market structure, worry about conduct and other things, oh, and worry about data protection, data protection is not necessarily going to be their top priority.”

Jon: “Fascinating. It's so amazing to think about just all these huge questions around technology, political economy, growth, institutions. This has been an amazing discussion. Thank you so much for joining us today, Simon.”

Simon: “Thank you.”

Jon: “Today, our guest was Simon Johnson, who is the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management and was previously Chief Economist of the IMF. He just co-authored a newly published book with his MIT colleague, Daron Acemoglu, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. I highly recommend you check it out at your local bookstore. This is the Capitalism and Freedom in the 21st Century podcast, where we talk about economics, markets, and public policy. I'm Jon Hartley, your host. Thanks so much for joining us.”

The Capitalism and Freedom in the 21st Century Podcast
This podcast focuses on economics, finance, and public policy, with a common thread of exploring some of the ideas of the late economist Milton Friedman; it is named after his 1962 book "Capitalism and Freedom".