Katherine Mangu-Ward on AI: Reality, Concerns, and Optimism
What's the most exciting thing we have to look forward to in a world with AI? What valid concerns should we have? And how can we tell better stories about AI that help mitigate some of our less rational fears?
Katherine Mangu-Ward is the editor-in-chief of Reason: the magazine of Free Minds and Free Markets. Today, we talk about what it is like to be an editor-in-chief and what that job description actually entails. She talks to us about the recent AI issue of Reason, in which they grapple with the big questions regarding the future of AI, what the valid concerns are, and what the less valid concerns are. We talk about how "tech bros" are responding to AI fears and whether optimism about the future has a place in this discussion.
Want to explore more?
- Katherine Mangu-Ward on Being Libertarian and More, a Great Antidote podcast.
- Marc Andreessen on Why AI Will Save the World, an EconTalk podcast.
- Scott Lincicome on the New American Worker, a Great Antidote podcast.
- Tyler Cowen on the Risks and Impact of Artificial Intelligence, an EconTalk podcast.
- Walter Donway, Neoliberalism on Trial: Artificial Intelligence and Existential Risk, at Econlib.
Read the transcript.
Juliette Sellgren
Science is the great antidote to the poison of enthusiasm and superstition. Hi, I'm Juliette Sellgren, and this is my podcast, the Great Antidote, named for Adam Smith and brought to you by Liberty Fund. To learn more, visit www.AdamSmithWorks.org.
Welcome back. Today, on April 24th, 2024, we're going to be talking about the potentially big and scary elephant in the room. It's not foreign policy; that's what we talked about last week. This week we're going to be tackling the topic of AI. I'm excited to welcome Katherine Mangu-Ward to the podcast to talk about this. She's the editor-in-chief of Reason, the magazine of Free Minds and Free Markets. We're going to be talking about the June issue of the magazine, which focuses on AI, hence today's topic. Since the last time she came on the podcast, the TED Talk that she mentioned way back when has actually become available on YouTube. So go check it out, and while you're at it, go check out the Reason Roundtable, which is also now on YouTube, but you can listen to it on, I guess, any streaming service, right?
Katherine Mangu-Ward
Anywhere you get your podcasts.
Juliette Sellgren
Great. So while you're listening to this one, go listen to that one. Welcome back to the podcast.
Katherine Mangu-Ward
I'm delighted to be back. One of my faves.
Juliette Sellgren (1.27)
Thanks. So first I wanted to ask you a slightly altered version of the first question that I ask everyone, but it's going to be about AI, so maybe it'll be weird, maybe it won't be. What is the most exciting thing we have to look forward to with AI? And this could be something that we're maybe too scared to be excited about, because AI is this big and scary, hard-to-conceptualize thing.
Katherine Mangu-Ward (1.53)
Yeah, I think the most exciting thing to look forward to on AI is also the thing that some people are the most frightened about, and that is its impact on the labor market and on employment generally. Particularly early on in the conversation around AI, I think this was the thing you heard the most, right? Like, oh, the robots will replace us, they're going to take our jobs. And in fact, in Reason's previous AI issue, which was maybe eight or nine years ago, the cover story was sort of grappling with this concern. And I think people are especially worried about it because, unlike previous debates about automation replacing people's jobs, where it was like, well, those are the blue-collar jobs, we'll just retrain those people, the miners can learn to code, that kind of thing, this conversation has been much more about creative-class and knowledge-sector jobs.
And so the people who do the worrying out loud were the people whose jobs were about to be replaced, perhaps. That said, I actually think the way to think about it is that all of the bad parts of knowledge workers' jobs are going to be replaced first: the things that you hate doing and the things that you are probably not great at doing, especially writing, just ordinary day-to-day written communication. Most people aren't very good writers, and most people's jobs require them to write in order to do the thing they're actually good at. And we've already seen that ChatGPT, even in its current fairly rudimentary form, can replace that function. And I think that you can see that across a lot of parts of the labor market. So "the robots will replace us" is the thing people fear the most, but it's actually the thing they should be the most excited about.
Juliette Sellgren
Hopefully the robot can do my taxes too, because I don't like doing that; it's confusing.
Katherine Mangu-Ward (3.52)
I think the robots might be able to do our taxes pretty soon. But again, I think that's a great example of: okay, I'm a tax accountant, let's say. A lot of that job is knowing a certain body of knowledge and pointing people in the right direction to do the right thing, but then there's a very painful execution of the paperwork. And I think replacing the very painful execution of the paperwork with a semi-automated or automated function that's performed by an AI will be better for everyone. I mean, we just shouldn't be burning human brainpower on stuff like this. And we actually have a couple of pieces in the issue that you mentioned, the AI-themed issue of Reason, that talk about that same phenomenon. There are several different entrepreneurs that are working on replacing that.
Juliette Sellgren (4.52)
And we're going to get into that, especially the labor market outcomes. But I want to reroute, which is maybe silly because we already started on the topic, and take us off the topic to put us back on the topic. I wanted to ask you about your job. I'm going to be entirely candid: I don't really know what your job entails. So when you said you could speak on this edition of Reason, I thought to myself, does that mean she knows everything that's in the magazine this time around? And I guess I figured that, yes, you know every time what's happening in your own magazine, but what does it look like on the day-to-day? What does your job description entail?
Katherine Mangu-Ward (5.32)
Actually, my in-laws were in town for Passover, and my father-in-law asked me a very interesting question, which was: do you think that you've gotten better at being an editor in the last couple of years? You've been doing it for a long time; do you think you're still improving in your skills? And my answer was, I think if you gave me a manuscript and said, edit this manuscript, here's a draft, turn it into a final, my improvement in that skill has leveled off. I'm getting a little better, but not much. But most of my job, a lot of my job, is the managerial side, and I'm getting better at managing people, specifically younger people. And then there's the kind of big-picture, assigning side: what should we cover and why? Those things I do feel like I'm still getting better at, and those are probably the harder skills.
So yeah, I mean I absolutely have read and thought about and scrounged around in and asked questions about every single thing that's in the magazine, especially when we do these special issues. That tends to be something where you're going to put in a lot more thought upfront about what is the Reason answer to some of these questions, what is the Reason take? What are the right questions according to the kind of Reason magazine worldview? And so I'm absolutely not an expert on AI, and I want to make that super clear for your listeners. I am a dilettante and an amateur, but I think in some ways the questions that we're asking about AI right now are not only technological questions, they're questions about public policy, and there are questions about the relationship between the state and innovation. And I do feel like I have some expertise there and that this is bringing out a bunch of interesting ways to think and talk about that.
Juliette Sellgren
Oh, same. If I had been asked to give a sentence on what you do, I would say…
Katherine Mangu-Ward (7.48)
What do I do all day? I think a lot of people's jobs are like that actually. You could say like, oh, she's a social media manager, or oh, he's a tax accountant. And then you're like, but what does that actually mean? What are you doing all day? So not to drag us forcibly back to the core question here, but I actually think these are related questions because one of the concerns that people have about the future of the labor market is like, well, I know all these jobs and I know what people do, and if we replace all those with AI, what will people do? And the fact is, most of the jobs that my friends have barely existed or existed in totally radically different forms 20 years ago. I just think we can't really imagine the jobs of the future and that's fine, but I understand why it makes people nervous. We can barely understand the jobs of the present. We definitely can't imagine the jobs of the future, and that makes planning for the future hard.
Juliette Sellgren
Well, this is a good segue, because I guess, what does it look like for you? What has technology, and even the AI we have now, changed in your own work, to make it personal maybe?
Katherine Mangu-Ward (9.03)
Yeah, so a couple of different ways. I would say, I mean, one: Reason makes a tremendous amount of video, and we're already using a lot of AI tools to support that work. Transcription is actually the biggest one for us. For years and years and years, we had a nice lady who transcribed stuff for us when we needed it transcribed, like long interviews, that kind of thing. And she was great, but she was expensive, and there's only so much transcription you can do in a given amount of time if you are a single person or even an agency. And so for years we would say, are the computers good enough yet? Is the automated version good enough yet? And the answer was no, no, no, no, no. And then all of a sudden the answer is yes, and the way that has caused prices to fall means that we can now have good-enough transcripts. Not perfect, right?
It's hard for me to imagine that an AI will ever be able to correctly know, in a transcript, when we mean the capital-L Libertarian Party and when we mean lowercase-l libertarian philosophy; that's a very subtle distinction. But a lot of things are now done very, very effectively, basically instantaneously, and basically almost for free. And that has radically changed the way that journalists relate to recordings and their notes and their own memories, in ways that are, I think, all for the good. We do also have a podcast that you can subscribe to called The Best of Reason, and it features a robot version of me. So it's trained on my voice. It sounds exactly like me, which is very, very weird, reading Reason articles. So once a week you can have Robot Me read you a long-form Reason article, and it's fantastic, because I don't have time to sit and record these articles into a microphone, but I didn't have to do anything.
I mean, we took my voice from the existing Reason podcast archive, and I recorded a little phrase to verify that I was really me and that I owned my voice. ElevenLabs is the tool that we used, and they just spit back out this voice that truly, truly… sometimes Robot Katherine mispronounces one or two names; names can be tricky. Is it DeSantis or is it DeSantis? Although I'm not even sure he knows. But that's another example of a totally new product. We would not have done it, because it wouldn't have been the right use of my time, but AI has enabled it to exist and people love it. So there are a bunch of places where this is true. And then of course there's ChatGPT itself, which I use at this point. The main use that I have for ChatGPT is finding out what regular people think about stuff.
So it's possible that I should have a better technique for this, that I should, I don't know, just have regular people that I talk to, and I do occasionally. But it's very useful to ask, what do people think about this? You can ask ChatGPT, and the thing it is best designed to do is give you the kind of bog-standard, normie answer to a question like that. Maybe I would've thought of three of the answers, but there's a fourth one that's like, oh yeah, of course this is something people think. And to me it's obviously wrong, or to me it's off topic for this piece, but sometimes it's like, yeah, I should just talk about this one too. So I find that little use very handy, and I expect there are a lot more places it will grow in the future.
Juliette Sellgren (13.12)
And that's funny, because it's something that comes up a lot, at least for those of us who are young and intern in DC. We realize when we go back to school that DC is this bubble that people who write policy and talk about policy and think about policy all the time work themselves up into. And if you're working in that space all the time, it's actually really hard, we've kind of realized as a generational collective, to do that job and actually stay in touch with regular people. So it's kind of awesome, profoundly awesome, that that's possible. And, I don't know, maybe it's silly to admit that you're not constantly in touch with people outside of DC, but at the same time, when you work where you live, and everyone who lives around you works on similar things and has the same sort of bubble that they're dealing with, how are you supposed to fix that? ChatGPT is kind of made to fix this problem. So it would be awesome if everyone in DC did that.
Katherine Mangu-Ward (14.20)
There are certainly other tools that people in DC use for keeping in touch with people in the quote "real world," and they're highly imperfect. So congressmen, for example, famously are more easily swayed by a phone call to their office than by any amount of polling data. This is arbitrage that people can do: if 10 people call a congressman's office, that is super likely to at least make him consider that point of view. And that's a statistically insane way to form policy positions; 10 highly vested weirdos who will wait on hold do not give you any information about either what the best policy would be or even really what your constituents want. But that is a tool that people use. Many, many years ago, before we had good analytics about the kind of people who interact with our content at Reason, we used a rough metric: when we would put up a post on the blog, which was called the Hit and Run blog back in the day, if there were a lot of comments on it, we were like, oh, it must've been a good post.
And that was not a good metric. It turns out that as soon as we had better traffic data, it became immediately clear that the number of comments and the number of people who read the post, much less the number who liked it or were swayed by it, were not related numbers. And I think that that's a period that we are in right now in a lot of different spaces: we are using bad information, we're using bad metrics to measure success, because we don't have anything else. And I do think that with the rise of AI and other associated sources of information, we're going to realize that a lot of the things we were using as rules of thumb, or even just our gut feelings about things, were not right. And that's going to be painful for a lot of people. I mean, there are a lot of orthodoxies that are going to be upended when we just have large amounts of data about what people think about what we do, for our work or for all kinds of work.
Juliette Sellgren (16.32)
Does it freak you out at all, for any reason other than it maybe being kind of shocking to hear yourself say things you know you didn't say, to have a voice that is yours but isn't yours? And do listeners know? I'm guessing you tell them, but it's kind of weird.
Katherine Mangu-Ward (16.56)
Yeah, it's cool. So right now we do go out of our way to disclose, hey, this is an AI-read podcast. And I do think when you listen for an extended period, it does kind of feel like something's going on; you can sort of hear that it's not quite the totally normal human cadence. But I think that those tells will diminish quite rapidly. So right now we do say, "Hey, this is read by AI." I actually note at the beginning of this AI issue that we're putting out that even from the time we conceived the issue to the time it went to press, the underlying technologies that were available changed so much. They say journalism is the first draft of history, but this is a rough draft: we published this magazine knowing that a month later there could be a new product or new regulation that would change things. On the artificial voice, the artificial Katherine: I have played clips for my kids, and they find it a little creepy.
I think it's one thing to be like, well, I know I didn't say that, but of course other people don't know I didn't say that. And for kids in particular, it matters what your mom says in the world. So I think they find it a little creepy. One way that we actually tried to grapple with this in the issue is through an occasional freelancer, Jessica Stoya, who writes for us about issues related to sex work and privacy and the industry. And she wrote a piece called "The Future of Porn is Consensual Deep Fakes." In that piece, she talks about how, for a lot of sex workers in the industry, particularly people who are more on the OnlyFans side of things, so custom content, where the personality is also the creator and also the publisher, there's a lot of demand for tailored custom content from those audiences.
And it just makes sense for you, as the owner of your own image and your own persona, to say, well, can I make fake mes to satisfy this demand? Can I sell the fake mes as well as the real me to customers? And she quotes a woman, Eva O, a performer who is doing this already, and says that she spoke about her AI as something separate from herself that she will lose control over, sounding oddly like a mother speaking about her children. So that's kind of mind-blowing, right? Here's this woman who's created an AI version of herself so that she can sell more porn of herself, and she feels this kind of separate but almost maternal feeling toward these creations. I suspect that we'll see more of that. I suspect that there will be a lot of exploration around: what is the relationship between real me and robot me, or the many robot mes that are performing various functions on my behalf?
Juliette Sellgren (20.16)
And I want to get into that, because that's a little off-putting and is, I think you're right, something we each individually have to grapple with. But there is this real risk of not even ever getting to the point where we can, because we're so afraid of what that might mean. And I do think we each kind of have some sort of responsibility to at least ask ourselves and figure out how we feel about it. But what is the landscape that we're even looking at right now? You said that it has changed a lot even in the past little bit of time, but what does it look like today?
Katherine Mangu-Ward (20.52)
So the example that comes to mind most immediately: when we started thinking about this issue, it was right at the beginning of people really using DALL-E and Midjourney to produce images from text. And you still see this, that one tell that an image has been produced by a generative AI is the wrong number of fingers. Have you seen those images?
Juliette Sellgren
Yeah.
Katherine Mangu-Ward
Yeah. So you get these, it's like, show me Kate Middleton, totally not dead. And then the picture looks like her, except that it has too many fingers. And that was sort of this funny tell. Also, images with text in them: the text is very often misspelled or in some way clearly not right.
Juliette Sellgren
Yeah, I had it.
Katherine Mangu-Ward
Yeah.
Juliette Sellgren (21.47)
I had to try to make, for our econ debate society, a Greek-symbols logo situation, secret society vibes, and it was shooting back gibberish at me. And then I had it make a picture of Alfred Marshall, because our thing is named after Alfred Marshall, the economist, and it put a bunch of graphs in the background, but they're physics graphs, they're not even econ graphs. And I get that econ comes from physics, so we have a lot of the same graphs, but it was just not right. And I was like, this is kind of funny, though, but it was just not right.
Katherine Mangu-Ward (22.25)
Yeah. So there was this moment when some of that technology was new and people were using it, and everyone was like, oh, ha, see, we were afraid for no reason. These silly AIs, they don't even know the difference between a physics graph and an econ graph. But I think it's pretty clear, even from the time that we produced some of the first images for this issue to when we produced the final image for this issue, because almost all of the images in the issue are produced by AI as well, that they got a lot better. And the paid tools, if you upgrade, were better than the free ones. And when we got better at writing prompts, they got better. Just in the few months while we were putting together this issue, that change was notable. So actually, when you get the issue, you will see that there are parts of it where the images do have these problems and parts where the images are better, because they were produced at different times in the production cycle. If that's visible even during the very brief period in which we happened to be putting together one edition of a print magazine, I think we're going to see that this can only accelerate.
Juliette Sellgren (23.49)
So then, I guess, how valid are our fears? What are some of the biggest fears? And has the experience of putting this sort of thing together, learning about all of this, and producing these images made some of them seem silly, and has it made some of them seem more valid?
Katherine Mangu-Ward (24.11)
So, my perspective on this: I am definitely a techno-optimist. I affiliate myself with Marc Andreessen's techno-optimist manifesto, not least because, truly, with anybody who's going to say this manifesto was inspired by Deirdre McCloskey, I'm already like 80% of the way there, whether it's her or Virginia Postrel or Marian Tupy, all of whom are named in it, right? So dispositionally, I think I was already there. And in general, I would say my big framework on a lot of questions like these is to really, really look carefully at the opportunity cost of our fear. So if we are afraid of something bad happening, and we can kind of tell a story about something bad happening, fine, but we should also try to tell ourselves stories about something good happening, and then weigh those. And of course, probabilistically, we're not going to get those right.
We're probably not going to end up being correct about which was more likely or less likely. But I would say I have sort of two cases for fear. The first case is the one being made by the effective altruism folks. Effective altruism obviously kind of originally started out as just a critique of philanthropy, saying, hey, maybe don't give your money to the opera, maybe you should buy bed nets, because we're trying to do the greatest good for the greatest number here. It has evolved to have a subset of the movement that focuses on existential threats, because, hey, who cares about the opera or bed nets if we're all about to experience some kind of global annihilation? Some of those people focus on global warming, some of those people focus on nukes, but a lot of those people focus on artificial intelligence.
And I think I am the least well equipped to evaluate the broader question of: will the robots gain sentience and then consume all of our molecules as they pursue some incomprehensible robot goals? Maybe. I think that could happen. But I listen to a lot of, and don't think I am fully convinced by, but do take seriously, the critiques that say even if there's a very, very, very, very, very small chance that unfettered AI brings about global or universal annihilation, we have to take that seriously, and we have to think about whether there is something we can do now that might reduce those chances. I think that's a fair question. I don't have a lot of faith in anyone's answers at this point, and I think a lot of the people who say they have the answer are kind of full of beans. But I do think it's right to keep asking that question.
My bias is to say, well, other people overstated those problems, I think, and there are lots of people who would disagree with me. There are lots of people who would say, oh no, global warming annihilation is well underway, and you think the nukes are well secured? Absolutely not. Fair. But in my estimation, a lot of the things that were done in attempts to stave off the potential downsides of those two technologies were wasted effort or counterproductive. So I think that's worth keeping in mind. The other concern is the concern that I call the Ron Bailey concern. Ron Bailey is Reason's science correspondent, and he is extremely, extremely worried about the intersection between AI-supported surveillance technology and governments. And I think he's right to be worried about that. I am not sure how to solve that problem, but it is real.
This is just the idea that the thing that was protecting our privacy in a high-surveillance age was that we aren't very good at sorting through all of the data. Humans just can't sort through the giant pile of surveillance information that's out there, but AIs are really good at that. And so the protection that we had from the sheer volume of surveillance data is about to go away, and governments, you'd better believe, will be taking advantage of that. We've seen that already in China with the Uyghurs and other populations, and he is almost certainly right to fear it here.
Juliette Sellgren (29.08)
And I think that's something that we're going to have to grapple with. But something that comes across pretty clearly in both of these situations is that it's not happening all at once. I would say the tech industry, and private individuals, are even slower on the surveillance front than we often imply when we talk about it. So I think about how, for example, economists talk about how, in the long run, people adjust to a new method of transportation, or if costs are too high, they substitute it with something else. That process can take up to a few years; we haven't really put a number on how long it takes, because it depends on the situation. But in this case, sure, AI is changing a lot, but you also have this take-up question: how fast are we changing? So part of me thinks that both of these concerns face this impending-doom bias, if I can even call it that. It's kind of like this Oppenheimer-esque problem, where he creates the thing, then he realizes what the possibilities are, and then he's like, ah, we have to get rid of it, we have to stop it. And maybe we should have, I don't know. We're still figuring that out, I guess. But even with government, where I tend to be worried, and obviously stuff with China and the surveillance state is very real and very current.
Part of me wonders if they're going to be really bad at figuring out how to use it. And I don't know if that's a defense mechanism, because obviously being able to sort through the data is such a power compared to before, so it's less of a defense. But they're really bad at, I don't know, getting ready to do things.
Katherine Mangu-Ward (31.08)
Yeah, I absolutely agree that government incompetence is the coldest comfort, right? It is undeniably true that if the state sets out to create a panopticon, they're going to do it in a way that is self-defeating and suboptimal.
Juliette Sellgren
For sure. It'll take 20 years.
Katherine Mangu-Ward (31.39)
And it's going to take a long time, right? I mean, we're constantly talking about how it's going to take a decade to rebuild this bridge, it's going to take 20 years to revamp the IRS's computer system. All of that, I think, is relevant. On the other hand, what that suggests is that it is not in fact the ill intent of the state that will harm citizens; it's good intentions executed poorly. And I think the flip side of it is: okay, well, maybe it would actually be great if we could use some of these technologies to ease the burdens of people's relationship with the state. Taxes, like we said. We also have a piece in the issue, and a video as well, about the ways that FDA compliance might be eased by AI, and I think there are some pretty interesting players already operating in that space. But the government's going to screw that up too. So they're going to screw up the upsides even as they fail to harness the power to do the evil AI stuff. That's true of all kinds of things, but I think it is neither comforting nor discomforting.
Juliette Sellgren
Yeah, no, I think you're right. It's also, I think about the Luddite point: when you see the tech bros not letting their children use the things they created, it becomes a little off-putting. And so as the technology advances, is that really our option? Is there a way to reap the benefits without taking all the cons with it? And I guess over time, maybe the cons will diminish, but how do we address and think about those fears of being surveilled and advertised at and manipulated and all of that?
Katherine Mangu-Ward (33.46)
Yeah, I mean, I think those fears have been present for as long as we've had any new technologies. And I think that's the other thing: there's such a powerful "this time it's different" mentality around AI, and maybe, on a lesser or more immediate time horizon, around social media, for example. And I really just think this time it's not different. It's not different. People become comfortable with the technologies of their youth, and they forget what things were even like without them. And more importantly, they forget the gains that those technologies brought them, right? People literally can't remember how they did their jobs prior to email, and they hate email. But what they forget is that it used to just be super-duper hard to gather people together for communications, just way harder than it is now.
And I think it's just a natural human tendency to take the upsides for granted and immediately incorporate them into our worldview, to be very risk averse, and also to see the downsides as more prominent in our narrative with respect to technologies. Again, not to say that it's all sunshine and roses; it's certainly not going to be all sunshine and roses. But there's this natural human tendency to say, this time it's different, and we can somehow predict and forestall the specific negatives of this technology. I just don't think we're very good at that. I mean, I think the biggest example of this is nuclear power. We saw nukes in all their glory and terror. We saw nuclear power. We saw a couple of accidents that were genuinely horrible, and we said: hey, we perceive as a society the potential upside of electricity that's too cheap to meter.
That was the expression people used to use. And we think we understand how valuable that would be to us as a society, but we also fear these specific negative outcomes so much that we are going to constrain this technology until we essentially strangle it. From the vantage point of where we are now, that was a critical error, just a huge mistake. It was a misapprehension of the orders of magnitude of value that cheap, clean energy could provide, and probably also a misapprehension of the danger, because the technology to contain those dangers would've improved as well, and has improved even since the period when the US in particular pivoted away from nuclear. So I think that is in many ways the best analogy for how we should be thinking about AI: don't repeat the mistakes that we made with respect to nuclear power. Putting up guardrails sounds perfectly innocent and perfectly fine, but it takes us places that are actually much darker and more harmful than we can imagine.
Juliette Sellgren (37.22)
And I think something a lot of people on the podcast talk about is how we have to understand history in order to kind of inverse-gaslight ourselves into realizing we're not that special, this is not a new time, this is not a new challenge humanity's facing, and to kind of calm down. But how can we, as communicators of optimism and of being risk-seeking, not in the crazy adrenaline-junkie sense but in the we're-not-risk-averse, we're-looking-forward-to-the-future sense, better communicate those ideas, given what we know of the past, and given that we really think the past is indicative of how fearful we should be of new things?
Katherine Mangu-Ward (38.14)
I think it's tough, because you're always going to be accused of being Pollyannaish if you just say, hey, I think the upsides are going to be pretty cool, and leave it at that. This is the least surprising answer you will ever get from a magazine journalist, but I think the answer is that you've got to tell a story. You've got to bring people along with the story. And so I think science fiction is a big part of shaping our fears and our hopes about new technologies, and to the extent that the stories we're telling draw on the power of that tradition, the more effective they're going to be. I mean: let me paint you a picture of a world where the worst parts of everyone's job, the parts that they are the worst at, are done in conjunction with, or maybe entirely by, future generations of artificial intelligence.
Let me show you what that would be like for you, a regular person who has a regular job; I'm going to tell you that story. Let me show you what education might look like if we replaced the hour of a mediocre teacher in front of 37 kids with a personal learning companion in the form of ChatGPT, or with a curriculum that's been crafted basically for free by an LLM. We're not there yet, we don't have that technology, but you can see a way to it that might be an awful lot better than what we have now. So I think we've got to get better at telling the stories of what this might look like in a way that people find plausible. I mean, you can always say, true communism has never been tried, let me paint you a picture; that doesn't make communism the best way of organizing a society. But at the very least, we should try to make our story as compelling or more compelling than the tales of doom that I think we're evolutionarily wired to find salient and compelling already.
Juliette Sellgren (40.34)
And listeners, you should read Reason, obviously. But some science fiction that I actually read that was very positive in terms of our future with technology, and it actually being good for humanity, is this book by Neal Stephenson called Seveneves, where effectively the technology and the technological advancements we have save humans from the moon exploding. And he's super into physics, so he explains what would happen to all of the solar system as the moon explodes and what humans do in response. And it's this really interesting book that I think comes out overwhelmingly positive, but it's action-packed and it's super interesting and compelling. You don't really notice that it's optimistic, because you're so compelled by the narrative, but you walk out being like, oh, we're pretty smart. So I think that's the sort of thing we need.
Katherine Mangu-Ward (41.39)
Yeah, I agree. And each narrative is going to hit different consumers differently. So the other answer is: we just have to think and talk about it in all the different languages and all the different ways and all the different modalities that people find compelling. And hopefully the decision makers, the power holders, whether in government or somebody like Sam Altman or Elon Musk on the private side holding the levers of some of these technologies, will have a story in their minds that correctly weights, or tries to correctly weight, the potential upsides against the potential downsides.
Juliette Sellgren (42.29)
And I want to talk about something really important, and I know we're kind of cutting it close, but it's something we haven't quite touched on a ton yet: what it means for the average person, right? You've mentioned this a few times, but I think about Scott Lincicome and his book Empowering the New American Worker, where he talks about what we need to do to achieve stability for people whose jobs are changing or getting lost. Because right now that's something that people are all up in arms about, and in some ways it's AI-related, but in some ways it isn't. But I think it particularly relates to what's going to happen with changes in an economy. So what should we be doing? How can we arm people to not be scared, but also to kind of take on the future and take responsibility for it and act, instead of demanding that government intervene? Because even if government does intervene, regardless of whether that's the right decision, people need to actually act on their own. And I think we can do more than make the most of a good situation, but we need to know how to do that. So what do you think? What should we be doing? How should the average person, I don't know, be gearing up for the future?
Katherine Mangu-Ward (43.59)
I think one answer, on the most basic level, is just believing that there is a future. There's the kind of doomsaying around, not just climate change, but also this sort of line that you hear that this generation is the first generation to be less well off than their parents, or to die younger, or this or that. And I think there's so much millenarianism and apocalypticism in our discourse just in general right now that it's hard to have a conversation about preparing for the future in a kind of nitty-gritty way when it's sort of fashionable to say "if there even is one, LOL," like, well, nothing matters. And I think that's probably the place to start, which is a super easy task and no big deal: just convince people that the world isn't ending. To some extent, people have always believed the world is ending. All major religions have a the-world-is-going-to-end component to them. All political philosophies have a the-world-is-going-to-end component to them. But it does strike me that it's sort of more casually, culturally salient now than it was maybe 10 or 15 years ago, and that trend seems to be accelerating. So I think the first step might just be trying to turn back some of the basic pessimism, toward the idea that we have a future and that it might be kind of cool.
Juliette Sellgren (45.39)
Yeah, we need to make it cool to face the future. I'm imagining someone like, my-future-is-so-bright-I-gotta-wear-shades type of thing, someone wearing sunglasses and standing in this burning light, and it's like, no, screw you, I'm optimistic. If we had an army of optimists… I guess we kind of do, but go forth and multiply.
Katherine Mangu-Ward
It's a guerrilla army. I'd love to work on getting it to more of a standing-army status.
Juliette Sellgren (46.11)
Yeah, no, this is a good point. Alright, so I want to move on to the last question, because I think it's not going to be a one-and-done; it might take more time to fully answer and grapple with. A lot of the time, when we face new policy issues or cultural issues like this one, our general instinct, as people who love freedom, is: we shouldn't be scared, we shouldn't intervene, it'll figure itself out. But that doesn't mean that we don't have reservations, or that we don't need to think through what we're looking at, or that we don't need to think about the future, or that we don't need to deal with whether government should intervene in this case. We actually do still kind of have to take things on a case-by-case basis. So did you have this moment when working through this issue about AI and thinking about AI generally? And, coming out the other side of that moment, how have your thoughts changed?
Katherine Mangu-Ward (47.17)
So first of all, I think the way you asked that is so good, because what it gets to is this very fundamental insight: sometimes people will critique libertarians and say, oh, well, you just think the free market will take care of it. And I really, really try not to use that language, or anything close to that language, ever, because I do think it's misleading. It sort of implies a deity that will intervene. The free market is just people; I mean, the free market is made of people, just like Soylent. And what that means is, yes, I think that there are powerful mechanisms in our world that are super effective at coordinating individual behavior into things that benefit society as a whole, but individual people still have got to be making choices. I think that's so salient here, because it's not just: it'll work out.
We should, as individuals, do what we can to help it work out, if that's what we want. And so when I think about AI, it's not, oh, the market will take care of it and it will work out; I guess I should do my little part, which is to do an AI-themed issue of Reason magazine and ask some smart people to help me figure it out, and help other people who read my magazine figure it out. So I think there is this kind of need to take personal responsibility for things like this, which can happen for anyone at any level, even if it's just, hey, I'm personally going to go try ChatGPT. I actually think there's a real parallel here with guns: people who have never fired a gun are much more likely to have a fear, and also a regulatory impulse, around guns.
And if you take just a standard, garden-variety gun control progressive, and you take them to the shooting range and they shoot a gun, they almost always come out like, oh, maybe I overstated this fear. And I think that's true with AI as well. AI as a big foreign concept is scary. But the fact that Google autocompletes your Gmail, where it knows you're going to say, "What is your availability this week?", that's just AI in action, and it's not scary. And so I think exposure and familiarity, and taking responsibility for exposing yourself to it, is probably part of that answer. And then the last thing I would say is: I certainly do not have blind trust in major market players, very, very far from it. But the thing that's given me the most pause about AI generally is the fact that some of the biggest players in that space, people who are technologists and commercializers, seem scared.
They seem scared of their own creation. And I think that's worth listening to, in a way that I do not care even a little bit if Chuck Schumer says things are scary. What does he know? He knows nothing. A random member of Congress's opinion matters not at all to me, but someone who is in it, in the space, in the thick of it, who's saying, yeah, there are some real risks here: my inclination is to listen to those people. And so I try to do that, and some of the things they say give me pause. I still think that ultimately a position of humility, which we should have with respect to any kind of lawmaking or regulation making, is the right one. And so instead, we as individuals should try to use these technologies responsibly, and the people who own the technologies should release them responsibly and structure them responsibly, and we should be forgiving when they screw up. So I guess that's my last point. Think of the Google Gemini image generation scandal, where if you said, show me a picture of a World War II soldier, a multi-ethnic German army would come up, because they had clearly given it instructions that skewed the results. We should be forgiving of those cases, because players in a competitive marketplace are trying to make versions of these technologies that will be useful and powerful and interesting and let people leverage their own skills.
But if you're going to ask me, in the end, what's a real reason to have some concern about the future of AI, it's that the people who know what they're talking about seem to. I don't think that's the definitive and final word, but it's worth considering.
Juliette Sellgren (52.17)
And that's even weird for me to fully parse, because part of me trusts that they're going to be making more responsible, or maybe more cautious, decisions with regard to it, because they seem more aware. But at the same time, we still have to ask: well, are they doing it in order to be the Zuckerberg, the guy who effectively writes the regulation about his competitors? But then you also have to ask, well, they're still doing it, so why are they still doing it? It's not cut and dried even in that case. So I almost feel more optimistic that they are speaking out about it. But it also means you have to trust them more than you would trust just a congressman who knows nothing. So it's maybe a more complicated situation, but it's information we didn't have before.
Katherine Mangu-Ward (53.18)
You should always be alert for rent seeking. And the fact that these guys by and large are quite defensive when they're accused of rent seeking makes me think that maybe they really are engaging in it: the idea that they can cement their place as a leading player by bringing to bear some regulations that would favor them, or other restrictions on entry, that kind of thing. Sam Altman in particular is absolutely scathing when confronted with allegations that he is engaging in rent-seeking behavior when he goes in front of Congress and says, maybe you should regulate us for our own safety. Totally fair to say, hey, maybe he is. At the same time, I don't think they're fearmongering just for that reason, if for no other reason than that the technologies are just too new; nobody really has a cemented place. Facebook didn't start that kind of nonsense until it was well, well in the lead and well established, and that is typically the pattern. So it could be different here, but probably not.
Juliette Sellgren (54.21)
But it is, in a way. And I know this is going to sound crazy, but this might be something where we can thank woke, and that's the first time in my life I've ever said that. I say that because we are more aware of these people than we probably would've been in the past, and so we can probably gauge better how their behavior and their words line up, which allows us to tell just how genuine it is. And I think, with forward-looking things like AI and technology, that might be a helpful tool. I mean, I could be wrong; it could backfire horribly, right? Because when we look back and do it to the past, it's not usually a good thing. But maybe in the future, for newer things, it might be helpful.
Katherine Mangu-Ward
Yeah, I think that's a possibility. And I like ending on this optimistic note: we have found some good even in woke-ism, and we are ready to go boldly forward holding hands with the techno-optimists and the doomsters together.
Juliette Sellgren
I try. Once again, I'd like to thank my guests for their time and insight. I'd also like to thank you for listening to the Great Antidote podcast. It means a lot. The Great Antidote is sound engineered by Rich Goyette. If you have any questions, or any guest or topic recommendations, please feel free to reach out to me at greatantidote@libertyfund.org. Thank you.