A techno-skeptic on the A.I. revolution - with Christine Rosen

 
 

Dr. Christine Rosen is skeptical of all the techno-optimism around the coming era of artificial intelligence. In this episode, she responds to our recent guest, Tyler Cowen (episode #120).

Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on American history, culture, technology and feminism. Concurrently she is a columnist for Commentary magazine and one of the cohosts of The Commentary Magazine Podcast. She is also a fellow at the University of Virginia’s Institute for Advanced Studies in Culture and a senior editor in an advisory position at the New Atlantis. Previously, she was a distinguished visiting scholar at the Library of Congress.

Christine is the author or coauthor of many books. Her next book is called The Extinction of Experience. She's also a prolific opinion writer – not only on the pages of Commentary, but also the Los Angeles Times, National Affairs, the New Atlantis, the New York Times, MIT Technology Review, Politico, Slate, the Wall Street Journal, the Washington Post, and the New England Journal of Medicine.


Transcript

DISCLAIMER: THIS TRANSCRIPT HAS BEEN CREATED USING AI TECHNOLOGY AND MAY NOT REFLECT 100% ACCURACY.

[00:00:00] We still don't have an industry standard for social media platforms. We can't even get our act together on something that simple. We need to do that with AI. A moratorium isn't necessarily a good idea, but I'm not sure how we do that in a country that can't even decide what it stands for as a country anymore.

What is our position as a global leader? We're in a state of anxious identity right now as a country. And I think the AI stuff has created more anxiety in part because we are feeling a little bit uncertain coming out of the pandemic, coming out of a political system and a sort of polarized political culture that doesn't let us have debates very reasonably anymore.

On this podcast, our guests usually offer up a healthy dose of doom and gloom on everything from geopolitics to economics and popular culture. But a recent guest, Tyler Cowen, who I thoroughly enjoyed, [00:01:00] was surprisingly upbeat about the world. He thinks America, even its major cities, and he applies this to Europe as well, can rebound.

And a big factor in his optimism is the revolution in artificial intelligence that we're about to live through. That conversation generated a lot of heat from some of our listeners. I mean, some found it interesting and illuminating, but others not so much. And foremost in the not so much category was Dr.

Christine Rosen. So I invited Christine on to give us her, let's call it her techno pessimist's view of the AI revolution. Christine Rosen is a senior fellow at the American Enterprise Institute where she focuses on American history, culture, technology, and feminism. She's also a columnist for Commentary Magazine and one of the co hosts of the Commentary Magazine podcast.

Christine is also a fellow at the University of Virginia's Institute for Advanced Studies in Culture and a senior editor in an advisory role at the [00:02:00] New Atlantis. We've had guests on associated with the New Atlantis in the past, including Eric Cohen, one of its founders. Previously, Christine was a distinguished visiting scholar at the Library of Congress.

She's the author or co-author of numerous books. Her next book, coming out soon, is called The Extinction of Experience. And she's a prolific opinion writer, not only on the pages of Commentary, but also the L.A. Times, National Affairs, I mentioned the New Atlantis, also the New York Times, MIT Technology Review, Politico, Slate, the Wall Street Journal, the Washington Post, and the New England Journal of Medicine.

Christine Rosen on the coming era of artificial intelligence. This is Call Me Back.

Pleased to welcome to this podcast, Christine Rosen of the American Enterprise Institute and Commentary Magazine, and most importantly, a regular co-host of the critically acclaimed Commentary Magazine Podcast. I always say critically [00:03:00] acclaimed whenever I have podcast hosts on. And not once has someone questioned, where has it been reviewed?

Like, where has it, I was just about to ask. Our Apple reviewers, some of them would disagree with critically acclaimed, but not once has anyone ever questioned critically acclaimed. So I'm just going to keep doing it. Christine, thanks for being here. Thanks so much for having me. So, um, the reason I called you and said, let's have a conversation is because a couple weeks ago I had Tyler Cowen on this podcast, where we talked mostly about AI.

And then you all on the, on the critically acclaimed Commentary podcast, the daily podcast, you guys reacted to the conversation with Tyler. And I would say a couple of you on the podcast, Matt and John, were somewhat upbeat, but then you brought, like, the doom to the conversation. You were, you were the doomer.

And basically saying, Tyler's too much of a techno-optimist and all these people are too much techno-optimists and, um, we [00:04:00] should be worried. So we're gonna, um, in a moment, get to why we should be worried. I want to give a fair hearing to, um, the counter to, uh, Tyler's take. Before we do, I just want to give a little bit of background about you, because what our listeners may not realize is, first of all, you've written about this issue extensively, including, you know, more than one, uh, large essay for Commentary magazine on the topic of social media and tech.

And, um, so you come by your, shall I say, dare I say, techno-pessimism honestly. This is not like a fresh hot take. Uh, you also taught a course at Tikvah, actually, that my son was in, he was one of your students, um, on social media. So you've been, um, trying to get young people to understand the good, the bad, and sort of the ugly of social media and how to think about it responsibly.

So before we get to AI and the issues that Tucker and I got into, I mean, sorry, the issues that [00:05:00] Tyler and I got into, can you just talk a little bit about, like, how you've gotten interested in the issue of technology and its role in society and humanity and, um, sort of the liberal arts and how we learn and how we think? Like, how did you get into this particular area?

Sure. Um, well, I'm actually trained as a historian. So I got a PhD in history and I studied the history of science, studied the, uh, eugenics movement in the United States, the people who wanted to improve the human race through better breeding. So a lot of the work I do now has grown out of, uh, my research into what happens when the best and the brightest, the elite, the technocrats, the people who know everything and really want to solve all the big problems, do that without thinking of two things.

One, what people actually want and how they behave; and two, human nature. Those two forces, uh, throughout history have governed a lot of how we are able to manage problem solving. And so what [00:06:00] I found with my historical research is that when you make these broad, often progressive, optimistic efforts to completely transform human behavior, to make us better, faster, stronger, all these things.

More productive, more efficient, all of these things are good. By the way, I'm not judging these as bad goals. But the means matter to get to those ends. And often, because of human nature, uh, the means adopted can often be quite repressive. They can be anti-democratic, and they can all be invoked in the, um, service of a larger goal that has as its, um, side effects people's actual lives.

So in the case of the eugenics movement, you had the most progressive people in this country arguing forced sterilization is a progressive measure to prevent the wrong sorts of people from having kids because we wanted a healthier society. Now we can look back at that now and say, well, that was just terrible.

We would never do that. But of course, at the time, that's precisely what was [00:07:00] considered, you know, the, the enlightened way of looking at solving a human problem. So I come out of that historical background, spent a lot of time in archives. I have a lot of respect for the slow and arduous process of trying to figure out these problems from a historian's perspective versus, say, a political scientist's perspective or an economist's perspective.

So I bring that into a lot of these debates. And then a couple friends and I founded the New Atlantis 20 years ago. Eric Cohen? Yes, Eric Cohen of Tikvah, and Yuval Levin, who's a close friend. And both Yuval and Eric have been on this podcast. Yes, Yuval and, um, uh, Adam Keiper and Eric Brown and I founded this little journal.

Luckily, when you start your own publication, as, you know, when you start your own podcast, you can do whatever you want. So those guys just let me go off on crazy tangents. I started writing about a history of the remote control, which became an essay about, uh, how we're kind of habituated to more on-demand content.

And, you know, I ended up writing about friendship, which became a longer essay about early social media companies. MySpace. I was [00:08:00] writing about MySpace. I tell that to kids nowadays, and they're like, what is that? I'm like, go back into the mists of the past and you will learn about MySpace. So I was able to look at our use of personal technology and to ask questions about what motivates us to embrace these tools.

How does it change our behavior? If it does, what does it improve? What unintended consequences often emerge from these sorts of tools? So when you launched the New Atlantis, which was primarily focused on bioethics, the Leon Kass era at the Bush White House, the George W. Bush White House.

So when did you make the leap from that to tech and social media? Like, when did you dial into, wait a minute, bioethics is one category, but social media is like a whole other world that we should be worried about? When the first social media companies started, MySpace, early Facebook, I had a lot of friends who were early adopters. I have a sister [00:09:00] out in Silicon Valley, and the conversations around how people's behavior was changing rapidly to suit the platform, whether it was MySpace or Facebook, really fascinated me, because it struck me as something that was happening very quickly.

Whereas I know human nature is very slow moving. We have evolved over a very long time to react to certain things in certain ways. And to think about ways of altering our behavior, uh, quickly is not easy. So I started to wonder about that and the collection of friends and the way that even the word friend was changing as a result of these platforms.

So I, and then I started looking at early online dating platforms as well. I wrote an essay about romance in the information age, again, very early, pre-Tinder, pre-smartphone even. And just the way that people were both so enthusiastic about these tools, but also a little bit naive about the unintended effects they had on their own

thinking, their own way of perceiving others. The idea that ranking your friends and ranking your dates [00:10:00] and ranking all these things was just, of course, that's how you do it, it's more efficient. But how that can also undermine serendipity, how it can also undermine the patience and tolerance that's required to really get to know another person before you judge whether they're worthy of your time.

But could you not say that, or could you not reason, that every new technology, ones that have transformed our lives for the better, like, I presume you think everything from Gutenberg's printing press to Google search have augmented our abilities in ways that are incredibly productive. And yet, you know, Gutenberg's printing press led to incredible, uh, dissemination and proliferation of bad information too. We were able to have the Bible reach far and wide, and also Mein Kampf reach far and wide.

So you could say Google search, you know, enabled us to augment our skills in producing and writing and thinking and researching, and yet, again, [00:11:00] incredible spread of misinformation. Like, these concerns you have, you could apply to every innovation in history. And so again, before we get to AI, just take those topics you were hitting just now, like social media.

Couldn't you make the same argument about, yes, yes, it's unfortunate that people rank friends, um, and that's weird, like really, really weird. On the other hand, if you look at, you know, the Arab Spring in 2011, and how technology enabled the Arab Spring in a way that created all these citizen activists.

We'll get to your recent essay in, um, Commentary about the future of cable news, I won't jump into it now, but even you cite that in 2004, Dan Rather was toppled by a blogger who was able to use technology and sort of the citizen-sponsored fact checking, as you put it, sort of crowdsourced fact checking, that empowered activists to hold big media giants accountable.

So are these positive [00:12:00] trends that even you, I think, mm hmm, celebrate and probably use to some degree, uh, do the tools you use, um, you know, come with some downsides, and that's kind of normal with every innovation? Absolutely. And look, the real difference here is the pace and scale of the change.

So the printing press, we had a couple centuries to really acclimate ourselves to it. Uh, if you look at the telegraph, if you look at the wired telephone, the, um, adoption of these technologies took time. And with the time we also had a shift in behavioral norms, in social norms.

Uh, we were able to adjust at a slightly slower pace. We do not have the luxury of that time now. Changes happen very rapidly. We can even go through it with AI. Uh, I jotted down some dates just to show how rapidly it's developing. It does change more quickly, and I think we're not necessarily wired to adapt as quickly as change is happening.

So that poses one challenge. The secondary challenge, [00:13:00] and I would say even with all the benefits of a lot of social media platforms, a lot of, uh, destruction of the gatekeepers in media, for example, which I do think was a, was a necessary thing, is that it's very easy to tear things down and it's harder to rebuild.

So we're in this new process, I think both with social media platforms and with the new media. where we're trying to figure out how to build a new thing that functions under new conditions. And that's going to be a lot of trial and error. That's going to be a lot of, um, upheaval. It's going to be a lot of misinformation that people are going to come across as a result of this.

But I worry, uh, the danger I always feel that's in the back of my mind, both with these platforms, with AI, with any of these new tools, is: what kinds of behaviors are we habituating ourselves to? Are we trying to become more like the machines? Are we trying to make the machines more worthy of us?

Trying to make the machines function in a way that is an extension of man, as old theorists used to say of technology? Or are we becoming more like the machine? How many of us have had the experience, once a week, of having to [00:14:00] prove you're not a machine? You've got to go on and click all the pictures.

We're being trained in certain platforms and in certain spaces to be more efficient, to be more machine-like, and that is fine with certain tasks. But when it comes to developing deeply rooted communities and human relationships, we do not need to be more like machines. We need to be more human. We need to actually be more patient, more tolerant, in ways that I don't think we can design an algorithm to do for us.

So, bringing it to this AI moment, I put people's reactions to it, and this is going to be sort of a crude categorization, in like one of four or five categories. There's the, oh my God, AI is the Terminator. And it's going to, exactly. And we're training these machines that are going to, like, kill us all and ignore our prompts and do what they want to do.

So that's like the truly dystopian fear, I guess. And [00:15:00] another slightly less dystopian but still pretty dark view is just that it's going to dramatically heighten inequality, or exacerbate inequality, and it's going to lead to, like, socioeconomic strife in ways that we couldn't even, you know, could never possibly imagine.

That's sort of another category. Then there's the sort of what I would call the Derek Thompson category, from the Atlantic, which is, you know, AI may be good, AI may be bad. The problem is it's just not that good. The quality, it's a gimmick now. It's a gimmick. Like, that's kind of his beef with AI.

It's just, people are having fun with it, uh, with ChatGPT, but they're not, like, really using it to solve real problems. It's a gimmick. It's fun. It's akin to the, you know, the iPhone being released at first, which was fun and neat, but clunky, and the app store didn't exist. And, you know, it just didn't have all these tools that we've become dependent on.

So it was a fun thing to have, but it wasn't as transformative, at least at the time. [00:16:00] Then there's the Tyler Cowen, Marc Andreessen, you know, techno-optimism. And so you don't fall into any of those categories. You're not in the Terminator category. You're not in the, I don't think you're in the dystopian category either, right?

You're mostly concerned with the impact it has just on human beings and human interactions. I want to quote here from an essay that you and I were talking about offline, by Marc Andreessen, where he kind of goes, his essay is titled Why AI Will Save the World. Yeah. Very subtle title. Subtle.

Tell us how you really feel, Marc. Well, you know, he was famous for saying software will eat the world. So, um, this was his upbeat, um, sort of twist on that, I guess. Uh, and so he, I guess he sent out an email, he blasted out an email. It's like a rant, as he explained it.

It was his version of, I'm mad as hell and I'm not going to take it anymore. He was tired of hearing from people like you, I guess, who were saying that there were huge problems with it. So he [00:17:00] decided to just put finger to keyboard and lay out why we should be, uh, more optimistic. And, um, I'm not going to obviously quote extensively from it, but he says here, and I'm quoting this part, he says, perhaps the most underestimated quality of AI is how humanizing it can be. How humanizing it can be.

AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

So Christine, why are you against a warmer and nicer world? Yeah. So this piece drove me absolutely bonkers. Um, I, by the way, we'll post the piece in the show notes. Look, I [00:18:00] appreciate his optimism. I share his optimism when it comes to what, uh, not generative artificial intelligence, but strict old-fashioned AI is already doing in the biomedical fields.

The way, like, you can synthesize proteins in a split second. You can do all of these, yeah, amazing, amazing things. And where there's a tool where you can clearly see, down the line, true benefits for humanity, saving lives, that's a real benefit. So on that score, I'm quite, we got a version of it during the pandemic, right?

Exactly. Exactly. Exactly. They literally had the code on the virus, like, in January. And the, um, you know, the heads of some of the drug companies said the code they used to create their mRNA vaccine was the code that landed in their inboxes in, like, January. Exactly. Of 2020.

Well, and think of, like, cancer screening, you know, the scans, having radiologists work with an AI tool that can pinpoint and find patterns in a moment that would take a [00:19:00] radiologist a lifetime to become adept at finding. Now, you don't want to take the human out of that loop ever, in my opinion. But there are these hugely powerful, uh, positive things that are going to come out of these tools.

Where I depart from the Andreessens of the world is this idea that we should replace human interactions with, uh, AI chatbots, for example. So here's the whole thing that humans struggle with. We can't sit in a room by ourselves alone doing nothing and be happy. We need other people. We need communities.

And we had an experiment, a global experiment in that, during the pandemic. Exactly correct. Exactly. The pandemic really drove these lessons home. It drove home the lesson of the need for face-to-face communication, the way that mediated interaction, although brilliant, I'm talking to you by looking at you over a wonderful screen so that we can see each other's expressions during our conversation.

I'd still rather talk to you over coffee in person. You're going to get more from people when you're in person; the connection is better. So I think what worries me about what Andreessen is saying is he sees the AI chatbot [00:20:00] as a perfect replacement for the human. It's like, look, humans are impatient. They get tired.

They're not always empathetic. Wouldn't it be great if you always had something, at the push of a button, that was those things for you? I would argue, no, that's not good for us. It is not good for us to do that. So in that sense, this idea that you can replace these deeply human needs, which give us a sense of meaning and purpose and belonging, and ground us in communities that also, by the way, remind us that we are embodied physical creatures with frailties, and that our struggle to overcome those frailties and to deal with them emotionally and physically and psychologically and spiritually, that's what makes us human.

So, no, I don't want a perfectly empathetic, tireless AI chatbot replacing my friends, who are sometimes cranky and impatient with me, because their human reaction forces a fellow bonding with me that says, you know what? She's having a really bad day. What can I do to help her? I'm not going to feel that for a chatbot.

I'm going to say, come on, why aren't you giving me what I [00:21:00] want? We are already a pretty entitled species. I don't think we need tools that make us more narcissistic and more entitled. Okay. So I want to quote further, because this is exactly what he gets into. Some more quoting here: in our new era of AI, every child

will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child's side every step of their development, helping them maximize their potential with the machine version of infinite love. I'm sure you love that phrase.

The machine version of infinite love. Every person will have an AI assistant coach slash mentor slash trainer slash advisor slash therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable. He goes on and on. Um, the AI system will be present through all of life's opportunities and challenges, maximizing every person's outcomes.

Every scientist will have an AI assistant collaborator partner that will greatly expand their scope of scientific research and development. Every [00:22:00] artist, every engineer, every business person, every doctor, every caregiver will have the same in their world. Now, some of this you're not horrified by.

I assume. The scientist having a collaborator, as long as it's not a hackable collaborator. Yes. Yeah, but it gives them more horsepower, makes them more productive, can synthesize, you know, 50 research papers in one email, right? This is the stuff, um, that, you know, the machine version of infinite love.

You're basically saying children should not, you, I mean, sorry, children should have to learn what it's like to have to work with people in the world who don't give you infinite love. Oh, yeah. Well, we've both, I have, uh, kids. You have kids. Yeah. Think of the tyranny of the three-year-old, the natural built-in tyranny of a three-year-old.

He's like, I want it. I want it now. Imagine a world where that three-year-old, and this would be the three-year-olds in the developed, um, you know, sort of well-off West to begin with, could turn and say, get me this, and it would appear. Get me that. We already have studies, by the way, [00:23:00] of how children use, uh, smart speakers, how they can summon information, and they'll walk into rooms and demand something.

This is not a skill we want young children to have. You know, when I had kids, we had stairs, and they would have to sit on the step when they sort of acted out. It's like, you sit on the step two minutes, you set a timer. And watching them struggle with their sense of self-possession for two minutes, that is necessary.

That's how humans learn self control. It's how we learn to understand our own emotions. Like, why did I blow up? I'm sitting on the step wondering why I demanded that cookie. That's the process of becoming human. And if you outsource demands to something that always gives you what you want, you will never have to learn those lessons.

And you will become an adult that expects the world to instantly gratify every need you have. Yeah. I've got to, um, I gotta be careful not using names here. Um, my, my son a couple of years ago had a math tutor who was both indispensable to his and, and fantastic teacher. Uh, [00:24:00] Indispensable to his kind of turnaround in, in math, which had been a challenge for him.

Not, this is not Eli, it's my other son. And, uh, at the same time, every appointment, all his sessions with her were in-person appointments, and he was often late. And finally one day she fired him. But life lesson, right? Right, right. That's my point. She fired him, uh, as a student because he was always late, and he was horrified.

Um, because he was like, I can't, I won't be able to make it in math if I can't see her. I said, dude, get on the phone and make your case, and, like, come up with your parole plan. Like, figure out how you're going to fix this. Like, I'm not saying my son is going to be a mathematician, but I do think that kind of experience is the opposite of what Marc is envisioning with the ever-accommodating, ever-patient, ever I'm-here-to-serve-you, um, AI tutor.

And can we also [00:25:00] acknowledge that he's describing a one-way relationship? Look at what this thing will give to you. The problem that we know from human-computer relationship studies, and we have decades and decades of social science and computer science research showing this, is that we will impart motives and feelings and behavior to the AI that it doesn't even necessarily have.

And we do that because, as human beings, that's how we learn to connect. And this started with ELIZA, the chatbot, which people instantly thought had feelings because it was talking to them. And the sophistication and the level of, uh, mimicry that we're going to be able to do with AI, and already can do to some extent with ChatGPT, is going to exacerbate that likelihood.

And again, for adults it's bad enough, but for a child to emotionally invest in something that has absolutely zero emotional investment in the human being in return, it's simply programming. Okay. So let me ask you how [00:26:00] you feel about, um, a couple of random examples. I mean, you're basically saying it's de-skilling us, if you will, as humans.

De-skilling us of human skills. Um, so do you think that autocorrect de-skilled us? Or digital maps, in a way, de-skilled us? Yes, they all do. And again, the trade-off we make, so take GPS. GPS, uh, is indispensable to most people these days. I use it all the time when I travel. Um, but it does change our perception of things.

First of all, I grew up, uh, learning how to navigate on paper maps. I took a road trip in grad school in, you know, a car that didn't even have power steering standard. You know, we drove across the country, we had paper maps, we got lost all the time. We had to stop, we had to put the map on the hood of the car and figure out where we were going, ask human beings, where are we, where do we go, where can we stay, where can we eat, which kind of taught us about the places we were.

I will never forget that trip. It was a [00:27:00] wonderful, wonderful experience, and incredibly scary at times, where you're, like, staying in a dodgy hotel and you're like, we don't even know where we are on the map. Those are good experiences. GPS is much safer, more efficient, puts you right at the center of the map, but you don't know where you're going or how.

And I think we've made a trade-off there, where we've decided that the efficiency and the ease is worth it, and so that's good. Uh, I think for the most part, all those little anecdotal stories we heard at the beginning of the GPS era, where people drove off cliffs or into lakes, we hear fewer and fewer of those.

But we do lose that navigation skill. I make my kids learn how to read a paper map because that is something that most kids don't have to do. Yes. You do that? Yeah, I do. We put out the map and I'm like, okay, find where we're going because we take a drive up to Maine every summer. And I want them, if for some reason the GPS went out, to be able to find their way if they had to.

It's a skill. So we can choose to keep those skills going. We can also decide, you know what? The trade off is worth it. I think GPS, the trade off was worth it, even as it has led [00:28:00] to a decline of certain skills. But the human skills, the emotional skills, the reading each other's feelings, the, the understanding, the true empathy skills, I really am concerned about the de skilling that's already taken place, even before we've gotten into the world of perfect AI chatbots.

So I'm sympathetic to a lot of what you're saying, and then I'm just sort of coming up against the kind of hard rocks of reality, of not just technological progress, or not, depending on how you look at it, but just the world of geopolitics and the kind of dangerous, scary world we live in.

And if you think about it, there's no unified vision for AI in the United States. But to the extent that there's some kind of unified vision for what AI should be in China, it is quite daunting. Uh, I mean, you already look at what they've done with 5G and the influence, and the way they use TikTok, and the way they use sort of digital [00:29:00] authoritarianism to control their population.

I mean, they have a vision. Talk about dystopian. They have a vision for how to use AI and they want to be an AI superpower globally. And they have reach if you look at their, you know, uh, their one belt one road strategy and their kind of death trap. Um, that. The, what is it? What was it? That, uh, whatever the debt, death trap, uh, diplomacy around the world and gaining all this leverage around the world, AI could take that to a whole other level.

That's the reality. That's what we're up against as the United States. So at a practical level, do you think there's any way to put the brakes on this without putting us at a massive disadvantage with adversaries like China?

So I know that this is often the way it's posed: if we don't do this, i.e. if we don't have any brakes and just go ahead and get stuff to market as quickly as possible, then bad guys in China will. And I know that that's likely true. I don't think that's [00:30:00] an unlikely scenario at all. However, I do think we need to think through previous revolutionary moments and how we dealt with them.

A lot of people point to nuclear, but I think this is much more like recombinant DNA research, where we could permanently alter what it means to be human. We have the skills now; with CRISPR, we can do these things. There is a global consensus, reached pretty quickly, that that's a really bad idea, because down the line we really have no idea what that would generate in terms of humanity's future.

So I think that we need to treat AI in the same way. Now, we got China sort of on board with some of this. I mean, they have punished scientists who've meddled and used CRISPR to do actual germline engineering. There are ways to stop this. I will say my big concern with China, quite frankly, is the ghost worker economy that allows AI to function.

The 20 million workers who do this, largely in the [00:31:00] global south, in Africa, in Nairobi in particular, who do the actual human labor of training these systems so that they can identify things so quickly. There are human beings behind the curtain here that no one ever talks about when a glamorous CEO testifies before Congress.

Human beings paid piecemeal, pennies sometimes, for piecework. It's digital piecework. China is much more of a power in the regions where those human workers who create the AI are functioning. So the influence in those regions, and our need as a global power to counter that, I think, is important.

We're not doing that now. However, the United States is supposed to be a beacon of values, of virtues. I like to use the word virtues; nobody uses it anymore. I find values kind of tepid in this context. We are supposed to stand for something, and we are also still the leader in a lot of this technology.

So standing up and saying, you know what, we're doing this, but here's how we're doing it: we have an industry standard that won't go beyond this [00:32:00] point, we have oversight boards that do X, Y, and Z. My concern is that the industry itself in the U.S. has no interest in that. We still don't have an industry standard for social media platforms, because the companies can't agree.

They will not sit down around a table. Now, they all kind of claim to want it, but we can't even get our act together on something that simple, something we know to have some impact on people's lives. We need to do that with AI. A moratorium isn't necessarily a good idea, and quite frankly, strategically, for the people who are a little behind, it lets them catch up.

But we do need standards that still allow for new entrants into the market. I'm not sure how we do that in a country that can't even decide what it stands for as a country anymore. This is where I think the AI argument, we're talking about values, and what are our values anymore? What is our position as a global leader?

We're in a state of anxious identity right now as a country. And I think the AI stuff has created more anxiety, in part because we are feeling a little bit [00:33:00] uncertain coming out of the pandemic, coming out of a political system and a sort of polarized political culture that doesn't let us have debates very reasonably anymore.

But you said earlier, and I think you have some dates there, about the speed with which AI has developed. So can you just, yeah, so I was like, well, maybe I'm exaggerating, so I decided to look back. So 1997, the infamous Deep Blue chess match, where Deep Blue defeats Kasparov.

Then flash forward to 2016: DeepMind's AlphaGo wins at Go. Now, Go is a very complicated game. There are vastly more possible moves in Go than in chess; chess is a fraction of that. So it's much more complicated than chess. AlphaGo solves it, but it does it by being trained against human matches. It learned by seeing how humans played Go. That's 2016.

Flash forward to the next version, called AlphaGo Zero. [00:34:00] It trains itself on games it plays. The human is out of that loop. And it beats humans all the time. So that's AlphaGo Zero, just a short time later.

2019, Pluribus. This is one that actually shocked me. A program called Pluribus defeats human players in no-limit Texas Hold'em poker. Poker is a game where, as human beings, we think, well, you're really good if you can read people's tells or you can bluff. The computer beat us at that. And then finally you've got Cicero, developed by Meta, which was the one that was really intriguing to me.

It was playing a game called Diplomacy. Again, something that involves strategy-making, understanding motive, and it was very skilled at winning using deception. It figured out how to be deceptive, and the humans didn't know. And that to me just shows, that's from 1997 to 2023. That's a very short span of time in which the tools that we've developed have, in many ways, figured out how to outsmart us.

We slow [00:35:00] biological creatures, and we can't always explain how they figured that out. That's the black box part that I think the Andreessens and the Tyler Cowens of the world just kind of sail right by. But I've talked to AI researchers who work on a lot of biomedical issues, which is why I'm very optimistic about what's happening in that field.

But a lot of them will be honest with you and say, we figured out how to do this, and when we went back and tried to figure out how it did it, we couldn't explain how it did it. Now, that doesn't concern them, because they're dealing with very limited goals, with a lot of safety and guardrails that they put up at the beginning. But not all AI researchers are going to have those narrow goals and guardrails.

And so if you can't explain how something learned to do something, how are you going to reverse engineer it if something catastrophic gets out of control? That's where I think the catastrophism is not entirely crazy, because you've got to know how to go back and reverse engineer. So Cowen used this thing where he did a percentage, like, oh, [00:36:00] well, if the plane was 90 percent likely to land safely, wouldn't you still get on the plane?

Well, AI researchers themselves have described their work as building the plane while it's taking off. So I'm not getting on that plane. So I think we've got to think about the likely risk of some of this stuff. So November of '22 is when ChatGPT was released, right?

So I mean, was it on your radar before then? A little bit, a little bit, but only in the sense that it was mentioned here and there in articles I would read, in pretty tech-specific journals where people chat about that stuff. But yeah. Okay. So you were focused on social media before that, and social media kind of building, developing, progressing, permeating so many parts of our lives for about a decade and a half.

Um, the computing power here and the [00:37:00] intensity of it is at a whole other scale. So sure, we should have these protocols. Sure. I mean, even if I were to agree with you that we needed these protocols, and we needed to develop some kind of universal theory in the U.S. about what parts of, because you're not saying shut it all down, what areas of AI should be left to kind of do their own thing, and where we need to be a little more thoughtful and less accelerated. But at a practical level, again, okay, fine.

I mean, even if I agreed, it just strikes me, the speed with which this is moving. I mean, our government doesn't even, I mean, yeah, well, that's the concern. I mean, everybody knows, whenever a tech CEO testifies before Congress, it's always very amusing to see the congressmen, like they're still back in 1992.

No, no, they behave like help desks, the members of Congress. So wait a minute, you're telling me what? Yeah. No, but so, yeah. So this is the moment [00:38:00] we're in, and that's why I actually think that neither the total doomsaying, shut it all down, and there's a contingent of those folks, nor the it's-all-fine-just-let-it-run-wild position.

Both of those are extremes we cannot really accept, or should not accept. There is a middle ground. The problem, for one thing, is the leaders in that middle ground. Again, I've talked to people who run these companies; there is zero incentive for them, coming from either the risk of regulation or legislation, or fear of being held accountable later on if something does run amok. They have only one incentive: get to market first.

Get what we're doing to market first, so that we can be the first ones in this space using AI in this way. And I understand that. I am a free market person. I think this is a generally healthy impulse. The challenge I have is with all the mechanisms we have in place, whether it's regulation or lawsuits; litigation, for example, is a pretty good tool for dealing with it if something goes wild and harms people.

Where's [00:39:00] the responsibility here? When you talk to the people who do this research, if they can't even pinpoint it, well, "the algorithm did it" is not really legitimate. If you're being sued, the whole company's going to get sued. The way responsibility is dispersed in a lot of these companies with regard to AI, the outsourcing of a lot of the data input that comes in to generate these models, it's just very complicated.

And if the people who are using it can't explain it in a clear-eyed way, how are any human beings, the average Joes like me, going to be able to demand that they be responsible for what they produce down the line if it harms people? You are a teacher. You know, in this conversation Tyler and I had on my podcast, I said, does this mean the end of homework?

He says, homework's already over, it's over, like that is over. He says, whatever you can get students to do in class will be where the real learning happens. You'll learn to develop other skills, but [00:40:00] homework the way you would think of homework, which is thinking, reasoning, writing, researching, unless it's done in class, kids are not going to learn how to do it.

So this strikes me as something where there's a net gain and a net loss. On the net gain: I love it that you probably grew up in the era where you had to handwrite your exams in a blue book. Right? Remember when people actually taught handwriting? That was eons ago. But there are these skills that we will, in a weird way, have to bring back. Because, by the way, I still learn something in a better way, and the longevity of my memory for it is higher, when I actually have to write.

Yes. Rather than just scroll or type or whatever. There is cognitive research that backs that up. Students who take notes by hand in a lecture retain more information than those who use keyboards, because they have to summarize in their head while they're writing. They cannot just do a word-for-word transcript. The way our brains [00:41:00] are designed is to write, to slow down the thought, compress it, summarize it, put it on paper. So I think if we end up weirdly having to bring those skills back, that's a net positive. The net negative is, and I deal with this with the students I teach, everywhere from junior high to high school to college level: the idea that every bit of the world's knowledge is available to them with a Google search, or online, or in a Wikipedia article, is a notion I really try to dissuade them of.

So much of our knowledge is still embedded in undigitized archives, in old books, in places that they might never come across, because they assume everything's on the screen in front of them. And the concern I have with these AI-generated chatbots, ChatGPT for example, is that they will fake things.

So you can say, write a scientific article with citations about X subject, and it'll create it, and it'll look perfect. Some of the [00:42:00] footnotes will be faked. Some of the research will be faked. It will seem absolutely plausible, but it is not true. And so the difference between plausibility and truth is, I think, the distinction that as teachers we're going to have to make with our students going forward, all the time.

You can say, yeah, that seems plausible. Is it true? We do this already with misinformation and disinformation, but the scale at which we're going to have to do this, and the number of fields in which this is going to become important, particularly scientific research, is a challenge. I mean, we already have a replicability crisis in social science research.

That's been going on for a while. Imagine a world... Can you explain that? So you have a lot of these gee-whiz, totally nifty social science research experiments. They get written up in the papers. People are like, that's incredible, look what they learned about human behavior. Two years later, someone tries to replicate that experiment to see if the results actually are legitimate.

No, not replicable. A lot of the time these are one-offs, probably badly designed studies that don't end up telling us anything about ourselves. And this is how we learn about what it means to be human, [00:43:00] by studying our own behavior and doing it in a systematic way. My concern with a lot of the AI-fueled research developments is that these summaries, these quick turnarounds, can be useful if the limitations are understood by the researcher at the get-go. Where they become harmful is when a high school AP Biology student gets a ChatGPT summary of something that's incorrect or misleading, and then that's a building block for them down the line when they're in medical school.

Like, you can see a long-term effect of this kind of vaguely wrong but plausible information. We need citizens in this country to actually have some faith and trust in the integrity of the information that they're learning. And that's the role of parents and teachers, to make sure the information their kids are getting has integrity.

Two brief topics, sort of related but unrelated. We touched on it, we alluded to it earlier on, and I know you've written about it and thought a lot about it: the long-term [00:44:00] implications of the COVID lockdowns on young people. You believe, even with the studies coming out about how certain students are behind in certain academic subjects, and increasing rates of teen depression and loneliness and even teen suicide, even with all of this information that is now available to us that sort of captures, chronicles this period that we went through during the pandemic of mass school lockdowns,

you believe we still don't fully appreciate how bad it was for young people, and the long-term implications. That's right. And I really am concerned that we're not holding responsible the people who brought this on our children. I mean, I joke, but I'm only half joking.

If I were half the age I am now and a young lawyer, I would try to sue every teachers union in every state that kept public schools closed. My kids are public school [00:45:00] students. They were out of school for a full year. They were high schoolers. They had the benefit of me being able to really supplement what they did, and I did do that.

So they were okay. They still lost a lot of learning. I mean, there are still huge gaps, academically and in social learning, both. Their entire cohort, they're about to be seniors in high school, their entire cohort socially is fascinating to watch, because they really are about a year and a half behind. Because in ninth grade, that first year of high school, they really were all separated.

And as much as they've tried to make it up, they are kind of emotionally not at the level that I think they would have been without those lockdowns. And so my concern also is the long-term effects of this. It's particularly acute for young kids, kids in elementary school, who are learning the building blocks of reading and writing and social-emotional ways of understanding each other.

Some of them lost a year and a half. That will have long-term echo [00:46:00] effects 10, 20, 30 years down the line. Their anxiety around leaving the house, even their anxiety around what is actually normal. I mean, you hear these stories, and I actually have spoken to many parents who have young kids who picked up all the safety precautions that they saw the adults in their world creating for them, and these are not parents who were crazy about any of this stuff.

They just followed what was advised. You know, anytime someone gets sick, the kids are like, well, we have to test. There are all these protocols that they just grew up assuming were normal, that aren't normal. They were excessive. And so the parents are having to kind of teach them: that was just pandemic-era stuff.

We don't have to do that. It's just a cold, we don't have to test for that. It's just the flu, you're just going to rest. But you know, the fear and anxiety, that stuff comes out down the line. And I think some of the mental health crisis was building before the pandemic; what exacerbated it, particularly for young people, was the sense of isolation.

They would spend a lot of time online, and they would be chatting with their friends, but they still didn't feel [00:47:00] connected to them in some way. It didn't help them. For some kids it was fine, but for kids who were already trying to deal with mental health problems, it made it worse. So that kind of stuff is where I think it was a wake-up call, but actually I fear we will not hold the public officials, the public health establishment, and the teachers unions accountable for what they did to an entire generation of children.

It's a tragedy what they did to those kids. Yeah, I agree. We've got to, I guess, find those lawyers who can take this on. I just want to end. Come on, you young lawyers, get out there. I'm too old. So, okay, I want to just wrap with your piece in what I think is the most recent issue of Commentary.

Yeah, it is. The piece titled The End of Cable News, which you pivoted off of both Tucker Carlson leaving Fox News and Chris Licht stepping down from CNN. And I'm just going to quote from you here. You say the overall effect for consumers is that the news is digital and atmospheric rather than coming from [00:48:00] a particular voice.

This has resulted in declining audience loyalty to individual news-gathering institutions and greater engagement with the social media platforms that serve up information like a hyperactive Associated Press, a 21st-century wire service with memes. But the part of this quote that actually got me the most was that news

is atmospheric. That is exactly how I feel. When I give talks or lectures, I'm often asked, well, tell us about your media diet. I get a version of that. Like, what do you read every day, what newspapers do you read? As if I stack up the FT and the Wall Street Journal and the New York Times every day and just go through it all.

And I try to explain, where do I get my news? From everywhere. You know what I mean? And so atmospheric is exactly right. And I encourage people to read this piece, which I think is excellent, and we'll post it. But what is your big point? It's not just about the end of cable news, which you're writing about; it's about the future of news.

Right. So I [00:49:00] started with, like you, I was actually quite an enthusiast of the decline of the gatekeepers. When the internet came along and smart people could fact-check in real time the misleading statements of, you know, the New York Times or a CBS news anchor, this was actually very democratic.

It was good populism, I would say. You now have to distinguish between good and bad populism. But it was a bottom-up movement to hold accountable people who otherwise had not been held accountable for their misdeeds, when it came to facts in particular, and for their ideological bent.

So then you get into cable news, which actually did dramatically broaden the amount of content, particularly ideological political content, all along the spectrum. Again, all for the good. If you like your left-leaning news, you watch MSNBC. If you like your right-leaning news, you watch Fox News.

That's fine. They compete, healthy, go for it. They fact-check each other. There's some sort of weird balance there. But social media really [00:50:00] upended all of that, because we're just getting the information in little micro-doses. It doesn't really matter where it comes from, and we don't really check where the original source is.

So you read the tweet. Do you ever click on the story and read it? Sometimes. Many people do not, or they just get a feed, and so everything is given the same priority. So it's like, here are pictures from my vacation, there was a volcanic eruption, oh, the election was stolen, and it's all scrolling along, given the same prominence.

And there's not an encouragement of depth-seeking, right? So I think the danger with social media, Tucker is a perfect example. He leaves. Well, he's technically still with Fox News, but he's broadcasting his own show on Twitter. He has no gatekeeper. It's just him in a cabin with a microphone and an audience.

I like the comparison to the Unabomber. I wrote this before Ted Kaczynski's death, so I was like, oh, well, that was weirdly timed. But he has a feeling of, and he says, now I'm going to tell you the truth, I'm going to really cut loose. So when he cuts loose, what [00:51:00] you see is a man who's

sharing deeply anti-Semitic stereotypes about global leaders, who is just conspiracy-mongering and making wild theories, kind of homophobic rants against senators. I mean, just kind of what your crazy uncle used to do when Facebook first came out, right? And he'd be like, I think this is happening. And it is very QAnon-ish.

It got a lot of viewers. So my question then is, if that's actually the future, the future is a lot of people with big personalities and crazy theories and no fact-checking going on Twitter and doing this. Now, I will say one caveat to that: the community notes function on Twitter is brilliant, and I love it.

It doesn't always work perfectly, but that's an example of the gatekeeping effect still in effect on one of the people who blew open the gatekeeping. So I like community notes. It actually holds people to account pretty even-handedly politically, too. People love to fact-check others and tell them they're wrong.

And that community notes function is useful. [00:52:00] I don't think it reaches a lot of younger readers in particular, for whom everything is atmospheric news. The problem becomes, when we have very tribal politics and a very polarized culture, and then you add into that a lack of integrity in some of the information sourcing that people do, you just get people screaming at each other and pointing fingers.

So I've heard people argue about the Tucker thing on one side or the other. Well, he's just telling us the truth; they all lie to us. Okay, post-COVID, there's a lot of truth to the idea that the officials in charge often tell you noble lies so that you don't freak out. There's evidence for that.

There's also evidence that he's just kind of gone a little crazy. So how do you get back to what democracy used to always do best, which is everybody arguing and then coming to some compromise, some sense of agreement, so we can move beyond that? I feel like we're all really trapped in the same constant battles.

Social media drives them. It's not caused entirely by social media; it's caused by the way we behave as human beings. So that is my concern about the future. Cable news can't [00:53:00] save us. It's kind of dead. Social media is going to make this problem worse. So how do we come out of this? What's next that's going to get us away from that?

The one hopeful opportunity is, I do think you will see increased investment in highly personalized, in a healthy way, personality-driven news. So for example, not to plug your critically acclaimed podcast, but the Commentary podcast has a personality to it. It has a real personality to it.

You know, I know from when I'm on it, people will say they talk about you guys like you're siblings. Like, we squabble. They squabble, and they analyze it like it's Kremlinology: oh, John and Christine argued about this, they got a little tense, but then Matt came in, and Abe was a little quiet, but he made that key point. And AI can't do that.

And by the way, I see this with other news sources, not just podcasts, but even print, some of these Substacks. I read Jewish Insider, for example, which is this daily news feed, and it has a real [00:54:00] personality to it, geared towards people with a real interest. It has values and principles and a voice and a personality that I don't think these large language models are going to be able to replicate, at least anytime soon.

So I do think you will see a doubling and tripling down of investment in the kind of more thoughtful, personality-driven news. Some of it will be crazy. Yeah. Some of it will be unthoughtful, but it's a free country. I mean, we're free speech people; you've got to take the crazy to give everybody a voice.

That's the deal. That's the deal. Right. So, all right, Christine, thank you for doing this. Thank you. I hope to have you back on. We will post a couple of these pieces we discussed in the show notes. And, you know, thanks for bringing a dose of crushing morosity to the Call Me Back podcast.

You're on brand, in that we shouldn't let Tyler get away with his, you know, un-[00:55:00]crushing hyper-optimism. So the counter has been laid out. Great. Thank you so much.

That's our show for today. To keep up with Christine Rosen's work, you can track her down at the American Enterprise Institute, AEI.org, or at Commentary magazine, Commentary.org. And her very good piece on the end of cable news that we wound up talking about, we'll post that in the show notes as well, but you can find it at Commentary.org. Call Me Back is produced by Ilan Benatar. Until next time, I'm your host, Dan Senor.
