1 Big Idea to Think About

  • Technology has always been used as both a tool and a weapon. As artificial intelligence plays a larger role in our society, we need to ensure that we remain the master of technology rather than a servant of technology.

1 Way You Can Apply This

  • Become familiar with AI and learn how it can be a tool to help you achieve what’s most important.

1 Question to Ask

  • How will AI make it easier for me to focus on what is essential?

Key Moments From the Show 

  • Technology as tools and weapons (1:27)
  • How AI has changed the world so quickly (3:55)
  • Brad Smith takes AI to The White House (6:42)
  • How technology creates new barriers for connection (8:25)
  • Does technology help us better understand each other? (10:51)
  • Managing the risks and opportunities of AI (16:14)
  • The #1 unintended problem AI will create over the next several years (19:18)
  • What rules and legislation we need to minimize the downsides of AI (21:24)

Links and Resources You’ll Love from the Episode

Connect with Brad Smith

Twitter | LinkedIn

Greg McKeown:

Hello everyone. I’m Greg McKeown. I’m your host and I am here with you on this journey to learn to understand so that we can make our highest contribution. 

Have you ever wondered whether technology was a blessing or a curse? Have you ever wanted it to be more of a tool than a weapon? Well, in today’s conversation, I have invited Brad Smith, the president of Microsoft. He is, among other things, the author of a book called Tools and Weapons, in which he articulates how technology can be terrific and useful, or misused and worse. By the end of this episode, you’ll be able to take a position with AI, and with other technologies, to make sure that you are the master and not them. Let’s get to it.

Subscribe to the podcast so that you can make it effortless to receive new episodes every Tuesday and Thursday. 

Brad, you wrote the book Tools and Weapons: The Promise and the Peril of the Digital Age. That’s pretty prophetic. Can you tell me why you wrote the book in the first place?

Brad Smith:

I think one should only write a book if there’s a book worth writing.

Greg McKeown:

An essentialist tone right there.

Brad Smith:

Yeah. Don’t write a book if you don’t have something worth saying. At book length.

Greg McKeown:

Yes.

Brad Smith:

And as somebody once said to us, every book is an argument, so make sure you have an argument. Carol Ann Browne, my co-author, and I decided in 2018 that we had an argument worth making: that digital technology had reached the point of such ubiquitous impact around the world that it really had become both a tool and a weapon. And our real goal in writing the book was to share with a broader audience what we were seeing in the tech sector, at Microsoft, and in working with governments around the world: that we needed to adapt to what technology had become. And the book, in effect, is a bit of a recipe for what that means and how to do that.

Greg McKeown:

When we were together last, one of the things you said to me was that if you were to write it now, you would update it with additional chapters. And that seems even more true over the last year with the rise of AI. Can you talk about the basic view you have of technology and how it relates to these latest big AI announcements?

Brad Smith:

Yeah. The basic view is encapsulated in the title, Tools and Weapons: that digital technology is serving as both, and as a result, the tech sector and companies like Microsoft need to step up. We need to assume more responsibility, and governments need to speed up to address the impact of this technology. And there is, without question, an inflection point this year, as generative AI has taken everything that we wrote about and accelerated it even more.

We did actually publish the first edition in 2019, and we updated it in 2021. We added three chapters in 2021, and we updated the whole book, including the three chapters on AI. So the good news from my perspective is we did actually talk about the work of OpenAI in 2021. It’s just that the rest of the world wasn’t yet aware of how important their work would become.

Greg McKeown:

Your perspective will be so useful. But it seems to me that even six months ago, to get governments to even talk about AI was really difficult. Nobody was willing to enter into a proper dialogue about what governments ought to be doing to help get the best out of AI without the predictable downsides, or the unpredictable downsides. But that seems to have changed over this last six months, that suddenly the prime minister in England is meeting with the heads of OpenAI and others, now frantically, instantly, to start to have these conversations. Is that your perspective, or do you have a different one?

Brad Smith:

That’s absolutely my perspective. And fundamentally, things changed on a single day. It was the 30th of November, 2022. That’s the day that ChatGPT was released to the world. And at Microsoft, we had been working with OpenAI; we had seen it come together. We were using ChatGPT, at least a few of us internally, before then. The months since have been a rollercoaster. And the way I would describe it is the first two months of the rollercoaster were, oh my gosh, yes, this stuff actually works. Then we entered the second phase, which is, oh my God, this could save the world. Look at what it can do to help us. And then the third phase was, oh my Lord, it’s going to kill us all. We’ve been through this just tumultuous set of conversations. It’s been unlike anything I’ve experienced in my 30 years in the tech sector.

I do think we’re finally reaching a point where, for now at least, the rollercoaster is a little bit more level. It’s getting a little bit more practical. Companies and governments and others are focused on what we call proofs of concept. They’re using it to develop pilot programs to see how they can use generative AI from OpenAI and others and Microsoft to do important things. That’s good. And even on the safety side, which is so important, we’re starting to talk about real proposals. And I think that’s exactly what the world needs, that combination.

Greg McKeown:

I remember the first time I used ChatGPT. I was one of the earlier adopters of it, and I pulled my family together, literally my children, and I said, okay, we’ll use this together for the first time. You will never forget this moment. This is like an iPhone moment. We aren’t going back after this. You’re never putting this genie back in the bottle. It’s a huge step forward in the user experience with AI. Obviously AI existed prior, but suddenly everybody was experiencing it, touching it, so to speak.

Brad Smith:

And your story, your experience is a very common one. Yes. I’ve heard so many people who talked about the first time they used it or the first time they showed it to other people. We had a very interesting time with it because we could preview it to some folks before it was released. We had to do it in a slightly awkward way. And literally what we did is we wrote some questions and got the answers and printed them out on a series of PowerPoint slides and took that to some meetings. 

And one of them was at the White House last November. It was the day that President Macron was coming to the White House for the state dinner, and I was there in part for the state dinner. But in the afternoon, we had a meeting with one of the most senior advisors to the president. And I had just created three questions and asked ChatGPT to go to work. The first was, President Biden needs to toast President Macron at tonight’s state dinner. Please write a toast for him. And boom, there it was. Very nicely done. The second one was, what is President Macron most proud of in terms of his achievements in the past year? Boom, had that answer. And then the third one was, take those achievements and revise or rewrite the toast to include them. And you had that.

And I’ll always remember the meeting. It’s much like your experience. I showed it to a couple of folks there, and they’re like, it did this. And then what people often do, and this was a good example of that, is they say, wow, we have people who we pay to do this. Or people go, I have kids who are just young in the workforce; this is the kind of thing they do. Everybody really quickly starts thinking, what does this mean for me? What does it mean for my family? Is it good? Is it bad? Oh, it could be both.

Greg McKeown:

Yes. And this idea of good and bad, of course, is what the idea of tools and weapons captures, and captures beautifully. There are a few things you said in the book that stand out to me in terms of the so what of this conversation. You wrote, for example, “For more than a century, almost every technology that has connected people who live apart has also created new barriers between people who live close together.”

Can you just expand on that, maybe an example that comes to mind in your world?

Brad Smith:

I think there are two examples that illustrate this really well. The first is the automobile, the car. Before the automobile, people were confined, typically, to a few miles from where they lived. And what it meant is that in any small town in America or Europe or most other places, you had every institution on which you relied: every store, the school, the church, the places for entertainment. It really made these communities more cohesive.

Once people had cars and could drive, that’s what they did. They drove their car and they went farther afield. And as they went farther afield, it enabled them to experience new things in new places. And at the same time, it started to erode a bit the institutions in that town. And indeed, you go to a small town today and you don’t see everything that was there a hundred or 110 years ago.

And then to bring it home, there is the experience of, say, the last decade. I think we’ve all encountered situations, we all still encounter situations, where with our spouse or parent or children, we’re all sitting next to each other. Everyone is on their phone. They’re accessing content that’s somewhere else. They’re texting or chatting with someone somewhere else. And so the same technology that is connecting us with people who are farther away is, to some degree, dividing us from those we are closest to. And we’re still figuring out, I think, societally, from a country to a state, to a community, to a family, how to manage through that.

Greg McKeown:

This really grabbed my attention, that idea and the example you’ve given, because I am particularly passionate about what I might describe as the lost art of understanding each other. And I wonder if you could delve into what you see as the weapon side of technology when it comes to deeply understanding the other side, whoever that is.

Brad Smith:

It’s interesting because first, on the one hand, you can use technology to learn so much about anyone faster. People seldom go into a professional meeting without first looking someone up on LinkedIn. It’s true. And you just know so much about where somebody came from, which is a wonderful tool. Not just to know something, but to start a conversation. People find common bonds that they would never have known existed. Or even if they don’t have a common bond, they know where to start and just ask curiously about someone else. That is a great thing. 

But in the same vein, people can become less curious about each other. I will be the first to admit that there are moments when my wife will point out to me that I may be in the same room, but I’m not doing a very good job of listening to her. Which actually may be true independent of technology, but the technology doesn’t help.

It is a real issue, and it’s not a weapon in an intentional sense, but it’s an attractive nuisance, as is sometimes said. It draws our mind away from being present in the moment with other people and takes us somewhere else. And fundamentally, when you think about the role of technology, what this means is that we have to remain in control of it, not allow ourselves to become distracted by it.

Greg McKeown:

It’s interesting to me to have the president of Microsoft saying this. I remember when Steve Jobs wouldn’t let his family have an iPad or an iPhone. There’s this idea of restriction, even amid advocacy of the technology for everyone; there’s a greater awareness of the problems because you’re involved in the technology so directly.

Do you think AI exacerbates the interpersonal communication and understanding problem? Or does it not really touch it?

Brad Smith:

It’s too early to know. The first thing I would say, though, is that it’s important to step back and almost philosophically ground oneself. What’s the reason we create this technology in the first place?

One of the things I’ve always enjoyed about working at Microsoft is we’ve had a series of mission statements. And since Satya Nadella became CEO in 2014, it’s been to create technology that empowers people and organizations so they can achieve more. The goal is to create products that help people make their lives better. It’s not people in service of technology; it’s technology in service of people. And I think if you start with that as your foundational principle, it does help you at least recognize when you encounter situations where it’s just not working out that way.

There are a lot of people who’ve expressed concerns, some of them valid in my view, about the role of social media in terms of seeking to drive engagement for the sake of engagement and, ultimately, profit. And one needs to be very cognizant of those situations.

Now you come to AI. Generative AI, at one level, can become so engrossing that it has that same impact. People forget about the rest of their day, their life, the individuals around them, and they just have a conversation with a computer. That’s a risk, and we need to protect against that, or at least give people the tools so that they can manage themselves and their technology in a way that we hope will really benefit them.

At the same time, and this is what gives me more hope about generative AI, it actually makes it possible to find answers to questions more quickly. What I found immediately when I started using it with Bing, our search service, is that instead of typing in some words and clicking on blue links and following them to one place or another, I could ask a more complete question, or even a set of questions, and get the information back.

And what I really found is when I could answer the first question more quickly, I then thought of a second. That to me is an extraordinarily helpful, even important and powerful tool to help us learn, to help us reason, to help us think. And, by the way, to help us interact with those around us. 

There are times when I’ve sat in meetings at nonprofit groups or hearings and the like, and there’s a conversation taking place and there’s a point that I’m thinking of, but I’m not certain I have my facts right. I can now go get my facts right and jump into the conversation. So to me, anytime we can create something that enables us to, I’ll just say participate with those we’re with in a richer way, that’s a good day for the use of technology. 

Greg McKeown:

I think we could summarize the heart of what you just said with this premise that technology makes a good servant, but a poor master. First of all, does that sound fair?

Brad Smith:

I think so, yeah. Our goal is not to create technology that is a master of humans. So, absolutely, that sounds fair.

Greg McKeown:

Okay. So then we have to go beyond that to say you’ve got an intent in creating technology. I’m thinking of Oppenheimer right now because the movie is about to come out. It is so present and it’s such an interesting question of a technology that can be a tool, but also literally in that case a weapon. 

Elon Musk, of course, has been pretty bold in highlighting the risks of AI, and he originally invested significantly in OpenAI and so on. It’s his position that basically two companies own almost all of the important AI in the world now, Microsoft and Google. Is this a fair surface-level summary of where we are, or do you see it very differently?

Brad Smith:

I actually see it very differently. First of all, Microsoft doesn’t even have a controlling interest in OpenAI. 

Greg McKeown:

So what is the relationship with OpenAI?

Brad Smith:

Microsoft has a non-controlling minority interest in a for-profit company that is wholly owned by a nonprofit. And we have no board seats or board observers or anything like that. We basically have a critical relationship where we invest, and the capital we are investing is used to create the infrastructure, the data infrastructure and the supercomputing infrastructure, that OpenAI can use to build or train its new models, and then to help deploy those models around the world. And then OpenAI and Microsoft can both bring this new technology to the market in our services and for our customers.

But you know, step back from that. Google is important; Microsoft and OpenAI are important; so are others, like a company called Anthropic. Who had heard of Anthropic a year ago? They’ve come out of nowhere, and they’re regarded as having models equally capable to, say, these others. We’re seeing Meta develop open source models.

We’re seeing a whole variety of startups develop more powerful models. And there’s a very vibrant discussion and debate that is increasingly showing that it’s possible to create smaller models at lower cost that may not be able to do everything equally well, but can do specific things just as well as the big models can do a broad number of things. In sum, what that means, in my opinion, is there are going to be a lot of entrants into this space. And we haven’t even referred to entities in China, where there’s also an enormous amount of investment, including by the government, and innovation.

And then by the way, there’s also Elon Musk. Elon is investing to build his own company. So he is highlighting a problem that he is working to address. The more he highlights the problem, the more the world may well appreciate the fact that he’s addressing it, but we’ll all see.

Greg McKeown:

What do you see as the number one unintended consequence of AI over the next couple of years?

Brad Smith:

I think the biggest problem we have to address is not that AI will go rogue, but that rogue actors will use AI to pursue bad acts more effectively than they could before.

Greg McKeown:

For example?

Brad Smith:

People who want to launder money will find ways to use AI to evade government controls more effectively. People who want to create a virus or a chemical or biological weapon, and this is at least a couple years out, but you know, they could try to use AI to get smarter at how to do that. Everything that we try to stop bad actors from doing today, we should assume bad actors will continue to try to do and they may try to use AI to do it more effectively. 

Unfortunately, that is the history of technology. One of the points we made in our book is people invented money and then there was bank fraud. They invented the telegraph and there was wire fraud. I wish people were not so creative in trying to steal money from other people. 

Now what that means is we need to make sure that the laws evolve so that people are not able to use this technology lawfully for that purpose. I think that’s a relatively straightforward proposition, because most existing laws will probably catch this conduct. But what we really then need to do is ensure that, as a tech company, we know who our customers are and we have the kinds of controls in place so that people will, at a minimum, find it very, very difficult to use our services to do those kinds of things. And second, we need to use AI to defend against the use of AI by bad actors who are pursuing bad acts. That, in a nutshell, is the framework that we need to focus on.

Greg McKeown:

Another point that you made in the book is when your technology changes the world, you bear a responsibility to help address the world you have helped create. I’ve worked with Silicon Valley companies and the ecosystem around them for at least the last 15 years now, and so often felt that company cultures were drinking the Kool-Aid of technology, that they only really discussed the upside. You know, we’re changing the world, we’re improving the world, as if it’s an asymmetric bet that all of this innovation will be used for good, rather than seeing that every innovation is going to accelerate both trends.

If you were in President Biden’s shoes, or another world leader’s, what rules or legislation would you want to put in place to help minimize the downsides?

Brad Smith:

Yeah. I actually think that this White House is pursuing a very sound path. And what I see President Biden and the White House doing is fundamentally two or three things. The first is to learn how the technology works, and in fact, we’re starting to see more governments do that more quickly. I think that’s a good thing, and it’s incumbent upon us to give them the information they want and need.

Second, they are saying this is moving so quickly that we need companies to step up and assume more responsibility right away. And so the White House has been pushing us, OpenAI, Google, and Anthropic, the four companies they brought in during the first week of May. And in the couple of months that have followed, they have been nudging and cajoling, in all the right ways, the four of us to offer commitments. And they’re pulling these together into something that I think we’ll soon see, something that starts to define a foundation, I will say, for what it means to be a responsible industry participant in this AI era.

The third thing that I think we will see emerge, it hasn’t quite happened in Washington, but it’s definitely moving well ahead in a place like Brussels, is a new set of AI laws and regulations. And this is where we as a company at Microsoft have spoken up and offered our point of view. We do need rules. We even need some licensing rules, so that for the most powerful AI models, and for the most powerful supercomputing infrastructure where they’re deployed, there are safety and security standards in place, and people are obligated to meet them in order to operate, at least in areas where one is, say, using AI to control the electrical system or the water system or other aspects of critical infrastructure. So I think that formula is coming together, and it is coming together not just unusually but extraordinarily quickly.

Greg McKeown:

From nothing?

Brad Smith:

Yes. Yeah. Literally, in Brussels they had a head start because they’ve been working on what’s called the AI Act, but Washington had a standing start, and this has all been unfolding in a short number of months. That is not what one typically sees. The other good news is that in a place like Washington, so far at least, thank goodness, this hasn’t been polarized; it is still being pursued in a bipartisan way. That is something I think we need to continue to push on and sustain. And it also argues, frankly, for going more quickly when the political unity is there to do so.

Greg McKeown:

Yeah, it is fascinating to see cooperation at the highest points of government when polarization seems to be so extreme. It seems almost a rare example of people just getting together and trying to figure out what to do next.

Do you have a comment about the most recent ruling, in which a US judge blocked White House officials from making various contacts with social media companies? Has that affected Microsoft in some way?

Brad Smith:

It hasn’t affected us directly yet, and I think this is an important issue. I think it’s going to evolve. One should not, in my view, want to see any government putting pressure on companies in the private sector to do what the government itself cannot do, namely infringe or curtail the first amendment rights that our people enjoy under the constitution. 

At the same time, and I think this is being recognized and reflected, I think one of the real threats we face as a nation is efforts from, say, the Russian government to engage in what we call cyber influence operations. You know, to knowingly sow dissent and mistruths and encourage mistrust among us as Americans. And I think that this judge has recognized that that’s a different thing. And if we were not in a position to talk with the government about that, or vice versa, I think we would fundamentally weaken our ability to defend the country. So as with many issues, there are some nuances that are critical, and there may be a bit of a balance to be struck. And we should think through the issue from left to right in all of its complexity and figure out how to strike that right balance.

Greg McKeown:

That’s a wrap for part one of this conversation with Brad Smith. What is one insight that you got from this conversation that you can take with you? What is one thing you can do differently immediately as a result of hearing this conversation? Who can you share this with so that you can continue the conversation after the podcast is over? Thank you for listening, and I will see you next time.