1 Big Idea to Think About

  • AI, when integrated thoughtfully, has the potential to deepen our understanding and improve communication by acting as a tool to help us clarify, synthesize, and reflect on different perspectives.

1 Way You Can Apply This

  • Next time you’re preparing for a challenging conversation, consider using AI as a sounding board. Run through your thoughts or questions with it, and use its responses to refine your approach, helping you connect more clearly and empathize with the other person’s viewpoint.

1 Question to Ask

  • How can AI help me see beyond my assumptions and understand others more fully?

Key Moments From The Show 

  • Recognizing the strengths and limitations of AI (1:54)
  • Using ChatGPT to close the communication gap (6:24)
  • AI as a mediator (10:26)
  • Can AI help us solve “the mystery of our disagreement”? (17:56)
  • Technology is only as good as the models it is built on (21:31)
  • From artificial intelligence to collective intelligence (23:07)
  • Getting from problem solving to problem finding with AI (23:45)
  • The human connection – to feel appreciated, respected, and understood (29:27)
  • The multiple layers of understanding (31:01)

Links and Resources You’ll Love from the Episode

Greg McKeown

Welcome, everybody. Before we get to the podcast itself, a reminder to sign up for the One Minute Wednesday newsletter. You’ll be joining more than 175,000 people. You can sign up by going to gregmckeown.com/1MW, and every week you will get one minute (or something close to it) of the best thinking to help you design a life that really matters and make that as effortless and easy as possible. So go to gregmckeown.com/1MW.

Okay, we’re here for part two of a conversation with my friend Jeremy Utley, with David McRaney, and with Henrik Werdelin. We’re exploring the possibilities and limits of AI, particularly as it relates to AI helping humans understand each other more accurately than we do right now. I have my hesitations about whether AI is designed in such a way that it can do this, but that’s not the same as me thinking it cannot do it or that it couldn’t be designed and built to do it.

Well, with that, let’s get to part two of this conversation.

 

David McRaney

My day-to-day use of GPT has mostly replaced Google for me. If I look something up, I’ll go there first, with the awareness that some things are still better done on Google. But to answer what you were describing there, Greg, I use GPT as an epiphany generator. I use it as an elaboration encourager. I’m not looking for it to give me the answers when I’m using it in that way. If I ask it for the etymology of a word, sure. If I’m asking it, “Hey, what are the known side effects of this drug?”—that’s one thing. But when I’m doing it for research, I’m doing this sort of spitballing thing where I’m waiting for my brain, the associative architecture of the way it makes sense of things, to get excited the way all of us have in this conversation, where we’re like, “Oh, that reminds me of a thing.” Because that’s going to happen, and that’s what conversations are all about; it’s one of their uses, and it’s why we evolved this weird thing that we do. And when I get this spark, I note it separately, and it builds up as I’m having this interaction with the tool.

And then somewhere in the conversation, I go, “Hey, since you’re here in front of me, I’ve thought all these things. Does that seem reasonable? Is there a structure to this?” And then it synthesizes. So it’s an epiphany generator, and then it’s a synthesizer of the epiphanies as I’m trying to turn them into language. And then there’s elaboration. In psychology, my old example of this (I used to say it in front of people, but it bummed people out) is the famous one attributed to Hemingway, though Hemingway probably didn’t write it: “For sale: baby shoes, never worn.” You can use that as an example of elaboration, because notice that the story is not in the words; the story is in what you did when you read the words. And this is all on your side. It’s all on your side.

And I feel like that’s, for me, the best current use case of GPT. The magic is all on my side of the equation. It’s just giving me the opportunity to do these elaborations in that way. Now, by the way, I retired that. And now I use a Mitch Hedberg joke because it’s a better example without bumming people out. There’s a great Mitch Hedberg joke where he said, “The other day, my friend asked me if I wanted a frozen banana, and I said no, but then I thought I would like a regular banana later. So I said, ‘Yeah.’” And I love it because the joke is not—there are jokes that are all in the words, like the beginning, middle, and end. That joke is on your side, and that’s what makes it funny, that little moment where you go, “Oh.” For me, the best use of GPT right now is when it makes me do that kind of stuff. Like, we’re going back and forth, but the magic’s on my side. And I don’t feel like that’s a failing of it. I feel like that’s a good use case.

And then if I get my epiphany or my elaboration, I then return to it and say, “Hey, will you synthesize that for me since you’re capable of that?” Then I have a record, like Jeremy was describing, of the thing I did. And later on, I’ll use that for something.

 

Henrik Werdelin

But I think, David, what you’re saying is that you’re using it as an oracle, and what Greg is saying is that he is asking it to provide more depth.

 

David McRaney

I just don’t expect it to be able to do it. Its limitations aren’t bothering me because that’s not what I’m using it for, I guess, is what I’m saying.

 

Greg McKeown

Okay, I want to make sure I understood both of you, because both of those responses are interesting. Jeremy, what I think I heard you say is, “Look, Greg, I see in you a bias that AI can’t do this, and so you’re going to keep finding evidence to support that.” It might not have been quite as direct as “I see you doing this”; more that I reminded you of people who do.

 

Jeremy Utley

I see people doing that often. “Let me try this; see, it sucks.” It’s like, well, would you say that about an intern? Would you say, “Can the intern do this?” You’d probably give them coaching. You’d probably give them your input, right? Nobody treats it even like an intern, which is like a very, very low-level kind of management of an intelligence that you would cultivate to get better output.

 

Greg McKeown

And so you’re saying—I mean, the key question I thought you posited was, “Get over the assumption that it can’t; assume that it can. How would it be done?” I think you were encouraging me to think it through: assume that AI could be a really good, faithful translator and listener, that it could understand the root cause, and so on. Assume it’s possible; how would you do it? Did I get that?

 

Jeremy Utley

I think it’s something like that. And one example I would give, a simple use case, is to think about personality types, whether it’s the Enneagram or whatever; choose your framework. Say I’m a One, here’s my message, and my direct report is a Five. “Would you translate this message in a way that’s most likely to resonate with, or find purchase with, a Five? Because the feedback I’ve gotten from my Five is that I’m too blank as a One.” (I’m just making these numbers up.) Or better yet, “How would my Five react to this message, and what coaching would you give me to adapt it so that my intent is received in a way that doesn’t raise their defenses?” Thinking about it through personality types is a way to realize what anybody who coaches leaders knows: “Hey, certain personality types can’t hear it that way. You’ve got to put it this way.” That is a very natural…
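In code, the kind of “translate this for a Five” prompt Jeremy describes might look something like the minimal sketch below, using the OpenAI Python client. The model name, prompt wording, and Enneagram framing are illustrative assumptions, not anything prescribed in the conversation.

```python
# A sketch only: model name, prompt wording, and Enneagram framing are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def translate_for_type(message: str, sender_type: str, receiver_type: str) -> str:
    """Rewrite a message so its intent lands with the receiver's personality type."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    f"You coach workplace communication using the Enneagram. "
                    f"The sender is a {sender_type}; the receiver is a {receiver_type}. "
                    "Rewrite the sender's message so the intent is received without "
                    "raising the receiver's defenses, then briefly predict how the "
                    "receiver is likely to react."
                ),
            },
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content


# Example: a One writing to a Five direct report.
print(translate_for_type("We need this report fixed today.", "One", "Five"))
```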

 

Henrik Werdelin

I’ve tested that very thing. My coach is big on Steven Kessler’s five personality patterns, which I think is a fascinating piece of work. I have a specific pattern that I exhibit when I get overwhelmed, and other people have other patterns. So before meetings, I have tried to role-play a conversation I knew would be difficult, instructing the bot to take on the other person’s pattern as its behavior.

And I would say it’s actually been pretty useful. There was a specific case where somebody was running what’s called the aggressive pattern, and I’m what’s called the leaving pattern. As you can imagine just from the words, it’s often difficult for those two patterns to have a conversation. So I role-played my way into it, and I thought it had a much greater effect. I guess that isn’t direct communication, but it is a way to use AI as a tool.
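Henrik’s pre-meeting drill could be sketched as a simple rehearsal loop along these lines; the pattern descriptions, the coaching instruction, and the loop structure are assumptions for illustration, not his actual setup.

```python
# A sketch only: the pattern descriptions and loop are illustrative, not Henrik's setup.
from openai import OpenAI

client = OpenAI()

history = [
    {
        "role": "system",
        "content": (
            "Role-play my counterpart in a difficult upcoming conversation. "
            "Under stress they run what Steven Kessler calls the aggressive "
            "pattern: confrontational, quick to escalate. I tend toward the "
            "leaving pattern: withdrawing when overwhelmed. Stay in character, "
            "and after each of my turns add one line of coaching on how my "
            "reply likely landed."
        ),
    }
]

while True:
    turn = input("You: ")
    if not turn:  # an empty line ends the drill
        break
    history.append({"role": "user", "content": turn})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print("Counterpart:", text)
```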

 

Greg McKeown

As a coach.

Okay. And I just want to come back to David as well, to make sure I heard you too, because what you were saying is, “Look, I find AI to be really useful in a particular way as I’m learning.” You had this idea of the “ahas.” I think that’s consistent with GPT being a Google upgrade: I’m going out to learn, I don’t know what’s out there, and I want to try and see these things. GPT allows you to ask much more specific questions and get an answer that’s deliberately developed for you. Yes, it might be even less reliable in certain ways than some of the sources you’d access through Google, but the precision is so helpful. I’ve learned an enormous amount as I’ve used GPT myself on the research and work I’m doing. So I think that’s what you’re saying: it’s good for certain things that have been really helpful to you, and you’re like, “Yeah, that’s what the tool is good for. I haven’t even been evaluating whether it’s good at this other thing, because I’m not using it for that.”

 

David McRaney

Yeah, I’m not expecting it to be good. I don’t expect it to have the answer to the question I’m asking it. Which is the weird thing, right? What an odd tool: I’m going to ask it a question with the expectation that it’s not going to give me the answer. It could never give me the answer I’m looking for, because I want it to spark me to go look for where the right answer would be. It’s a strange use case, but I know it’s really great at encouraging me to think in a certain way, and I have to think in that way specifically if I’m going to do this research well. I’ve built a pretty good habit with it, a workflow that for me is useful.

 

Greg McKeown

Has utility.

 

David McRaney

Yeah. And I get excited about it when I’m like, “Oh, okay, today’s the day I was going to look into that.” But that’s not where it ends; I haven’t finished the work there. It has given me all of my to-do list items: I will then go find this book, or go to Google and to this source, or read this paper.

 

Greg McKeown

Yeah. I too find it a highly accelerating tool for understanding in certain ways. 

Henrik, you said something else. You said you’ve used it in a way that it could coach you prior to a hard conversation and found that interplay, that sort of role-play, useful. Which is consistent, Jeremy, with what you’ve talked about before, having these—you have a brilliant word, and I can’t remember it. You’ve already used it in this conversation, but this idea of, like, somebody running through a—what’s the word you’re using?

 

Jeremy Utley

Drill.

 

Greg McKeown

Drill. Yeah. Okay. I like that idea of a drill and how GPT can be used for that. So we could turn the conversation to Jeremy’s question. In a sense, his challenge to me is to assume it can. We’re in the future; it did. How did it? That’s what you’re saying: get over the assumption that it can’t. Assume it can. How did it? So that’s sort of where you and I left off, Jeremy, in our exchanges back and forth. How would it be done?

How would it be done? I think we could use this idea of AI as a facilitator, or even a peacemaker, a mediator of understanding between two people. Let’s assume that in the future it has been done, using approximately the current tools and advancements in technology that we have today. So we’re not saying, “Okay, 20 years in the future, where it’s become something else.” Using approximately what we have now, how could you build something that would help mediate between people to the degree that they really get unlocked about what they currently disagree on? Something that helps people hold and restate debates so that they can actually resolve conflicts that right now seem polarized, mutually exclusive, and impossible to fit together. Of which there are, of course, many current political issues that would be framed like that.

 

Henrik Werdelin

I asked GPT that very question earlier as I was preparing for this, and it had different suggestions. One of them was this thing it refers to as a second listener: somebody who listens in on the conversation and course-corrects if the shared understanding of the thing you’re talking about gets too far out of whack. You could have other parameters, too, if it gets…

 

Greg McKeown

That’s… But that’s a mediator. You’re describing a mediator role. But I like the term second listener.

 

Henrik Werdelin

Yeah. There were many other ideas; it talked about cognitive load and miscommunication. You know, even in this conversation, I’m sure you guys have looked at your screens at different times, watched something else, and the attention of the conversation had suddenly gone somewhere else; it could notice that. It had a bunch of different suggestions along those lines. After this conversation, biased, I guess, by going into it, I feel pretty confident that there is a role for AI-assisted communication.

And not necessarily because AI is very good, but because humans are so incredibly bad. You know, I think David mentioned the penguin example with 18%, and this is the point that my coach, Alex Kaufman, brought up (just to give him credit). He works with a lot of large companies, and he finds that a lot of people talk about FTEs: how many people should we throw at it? One of the things he identified is that it doesn’t really matter how much time people spend on something if the quality of that hour is not very good, whether because you’re full of angst and not very productive or because you’re just not solving the problem your line manager has communicated.

And so in that context, I’m kind of fascinated by how much we could unlock if suddenly AI could become a better communication conduit to allow us to understand each other.
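The “second listener” Henrik mentions could be approximated today with something like this minimal sketch: a watcher that reviews a running transcript and speaks up only when understanding drifts. The transcript format, the sentinel “OK” reply, and the prompt wording are all assumptions for illustration.

```python
# A sketch only: transcript format, sentinel reply, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()


def second_listener(transcript: list[str], stated_topic: str) -> str | None:
    """Return a short course-correction, or None if the conversation is on track."""
    review = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a silent second listener on a conversation about: "
                    f"{stated_topic}. If the speakers are misunderstanding each "
                    "other or have drifted off topic, reply with one short, "
                    "neutral course-correction. Otherwise reply with exactly: OK"
                ),
            },
            {"role": "user", "content": "\n".join(transcript)},
        ],
    )
    note = review.choices[0].message.content.strip()
    return None if note == "OK" else note


# Example: run the check every few turns of an accumulating transcript.
note = second_listener(
    ["Greg: I worry AI can't really listen.", "Jeremy: Assume it can. How would it?"],
    stated_topic="whether AI can mediate understanding",
)
if note:
    print("Second listener:", note)
```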

 

David McRaney

But I’ll add one little thing. I could see it in a mediation role, or as a negotiator, or whatever. I know a few people who work in that world, and it could definitely help there. I love the idea of it. There are many different ways we could use it to help us get that conceptual overlap, to get our Venn diagrams closer to overlapping completely. That could be really cool. But one of the things that excites me about it is that I don’t necessarily…

There’s so much of psychology that is about “I don’t necessarily know why I’m doing what I’m doing, why I’m thinking what I’m thinking, why I believe what I believe, or even what I believe.” It looks like my self-knowledge is quite weak, most of my reasoning is motivated, and most of my behavior is driven by things that I may not know.

 

Greg McKeown

The massive subconscious underneath the iceberg.

 

David McRaney

And a lot of professional mediation and negotiation is very similar to what happens in really well-performed therapy. You help both parties actually understand what is motivating them to want whatever it is they want, or what their actual goal is, which may not be the stated goal going into the conversation. What is the actual problem we’re trying to overcome?

 

Greg McKeown

What’s behind the position? What’s the purpose behind the position?

 

David McRaney

We actually are pretty great at that if we’re prompted toward it. And I’m imagining this prompt engine prompting us back. There could be a lot of use and a lot of payoff there. There’s also this: we often don’t realize that we share values and goals. If we had known that going in, we wouldn’t feel quite so much in disagreement or quite so much at odds.

And I often say, when I’m telling people what they ought to do in a situation like this: don’t face off; get shoulder to shoulder and marvel at it. “I wonder why we disagree. Hey, do you want to enter into a partnership where we solve the mystery of our disagreement? Because that kind of is neat, isn’t it? Isn’t it neat that I think you’re smart and capable and interesting, yet we don’t agree on this?” If you both explore that, it’ll change the nature of the interaction, because it may just be that you’ve got a different model of reality, a different perspective, and different motivations. Different things are pressing on you; different drives and anxieties are being activated. Because of that, you see this from one angle, and I see it from another. It’s not so much that we disagree; it’s that we’re seeing two completely different things. If we were able to mush them together, we’d have a more three-dimensional view, and you can gain from your disagreement. Right? Which is the whole idea.

And I can see all sorts of ways that AI, whether it’s an LLM or some other thing that we create, could facilitate that. I think it could be really useful. 

I want to mention something. Are you familiar with Google Illuminate? Okay, I just learned about it; I thought you might have too. Just go Google it and look it up. It’s a tool that I am going to use. It may suck, but I’m excited about trying it out.

Google has created a product where you give it a research paper (later on, they say, you’ll be able to give it a book), and it auto-generates a podcast in which a number of artificial human beings discuss that paper. If you’re the kind of person who can learn a lot from listening to a group of people who’ve already read the thing talk about the thing, it does that. My initial reaction was, “Eh.”

But then it has a bunch of examples on the page, on the sign-up-to-get-on-the-waiting-list page, and every example I’ve listened to has thrilled me the way all this stuff thrilled me the first time I was exposed to it. The first time I looked at ChatGPT, it gave me that feeling, and this gave it to me again. I was like, “Whoa, this is something I never even considered would be a cool thing to do with this.”

 

Greg McKeown

You’re seeing the future.

 

Jeremy Utley

I really… I wanted to read something back. I wanted to tell you, David, one thing you said, and then also slightly answer your question, Greg. David, the phrase you used that I really loved was “the mystery of our disagreement.” As you said, it changes the posture from face-to-face to shoulder-to-shoulder. It’s us exploring this other thing called our disagreement, not me adversarially accusing or justifying or whatever, right? To me, there’s something really powerful about that frame. Even, Greg, for what you’re working on: “the mystery of our disagreement.” That’s beautiful. So I just wanted to put a fine point on that.

And then, Greg, to your question about what it can do. This isn’t a hot take, but my feeling right now is that for collaboration and for interpersonal relationships (you could call relationships a version of collaboration, or collaboration a version of relationships), the primary limitation is the interface between us and the machine: these stupid bottlenecks called our fingers.

And the reason we are having such a hard time with the machine is ultimately, “So, are you going to type that up now? And then we have to read the thing, and then we have to type our answer again.” I really believe one of the reasons our interactions with AI are so elementary is that they’re limited by our ability to reason or synthesize or think through our fingers.

And I think as voice capability grows… here’s a simple example. Imagine that we created a GPT, a custom instance that’s been given custom instructions, and said, “Facilitate this conversation as Carl Rogers.” (I’m picking Carl Rogers because I don’t know anything about him and you guys do.) The participants in the conversation are going to say who they are and then make their point, right? And then Carl might say (if I’m Carl again; I know nothing about him, so I can only be a facsimile)…

“David, what’s your beef with Greg?” This is David speaking: “I don’t like that Greg said this.” And then Carl says, “Greg, what do you think?” This is Greg speaking: “Why did you say that?” Because it doesn’t know, right? It’s a stupid example, but it shows the degree to which the LLM’s ability even to understand who’s speaking is limited by who’s putting in the information and how they’re identifying themselves. Now imagine, in what I’d guess is the very near future, a world where the LLM can hear our voices, knows who David is and who Greg is, doesn’t have to always be the one talking, can listen to the conversation between who it knows to be David and who it knows to be Greg, and can intervene:

“Greg, that’s not fair. That’s not what David said. Hang on just for a second, guys. David, before you respond… that’s actually not what David said.” 
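Until the voice capability Jeremy describes arrives, a text-only approximation of his mediator GPT might look like this sketch, with each turn explicitly labeled by speaker to stand in for voice identification. The custom instructions, the speaker labels, and the sample turns are illustrative assumptions.

```python
# A sketch only: instructions, speaker labels, and sample turns are assumptions.
from openai import OpenAI

client = OpenAI()

MEDIATOR_INSTRUCTIONS = (
    "Facilitate this conversation in the style of a Rogerian mediator. "
    "Each user message is prefixed with the speaker's name. Intervene only "
    "when one speaker misstates what another actually said; restate the "
    "original point neutrally, then invite the response. Otherwise reply "
    "with a brief acknowledgment."
)

history = [{"role": "system", "content": MEDIATOR_INSTRUCTIONS}]


def speaker_turn(name: str, text: str) -> str:
    """Add one labeled turn and return the mediator's response."""
    history.append({"role": "user", "content": f"{name}: {text}"})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    out = reply.choices[0].message.content
    history.append({"role": "assistant", "content": out})
    return out


print(speaker_turn("David", "I don't like that Greg said AI can't listen."))
print(speaker_turn("Greg", "That's not what I said. I said it isn't designed for it."))
```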

 

Greg McKeown

A mediator. 

 

Jeremy Utley

Exactly. I believe that a model can be trained in the techniques of mediation and will be capable of understanding our voices and interjecting when appropriate, but perhaps only when appropriate, or as appropriately trained. I can imagine it. I mean, we’ve all had this happen with friends, right? “Mom, that’s not what he said.”

“He said this.” 

“Oh, I’m sorry. Misheard you.” 

That, to me, is not some far-off, impossible future.

 

Greg McKeown

I agree with that.

 

Jeremy Utley

That’s absolutely coming very soon.

 

Greg McKeown

The only thing I want to add, and I think it’s really important, is that all technology is ultimately built on assumptions its creators made. I ask this question sometimes: “Would you want Mark Zuckerberg to be your relationship coach?” And I’ve had different responses. Some people are like, “Wow, he’s pretty successful, he’s married, why not?” But the majority are like, “No, I wouldn’t want that.” And yet we have given him, and of course others after him, an enormous amount of power in defining the rules of interaction between humans all over the world. So there are rules at play in all of this technology. Same with GPT. I agree the technology could be utilized for what you just described, but it must be deliberately and purposefully designed for it. We need the Mediator app. And it’s not just the fingers problem, which I get is a problem. Much better if it could hear the voice and the intonation, learn who we are, even see us visually: the visual cues, the micro-adjustments that people make, and try to understand what they mean.

But I think the bigger issue is that right now it’s one person to a machine. That design alone is so at odds with interpersonal communication: I’m now brainstorming with a machine instead of with somebody else, as you described earlier. And that’s even more true if you’re trying to have it play a mediator role.

 

Henrik Werdelin

I would go a little bit back to Jeremy… bias there, Greg. Potentially… focus.

 

Greg McKeown

I’ve got bias. Go ahead. I’m sure I do. Go ahead.

 

Henrik Werdelin

I don’t know, maybe it’s the wrong note to try to end the conversation on, but I’ve started to think of it the way another guest on our podcast put it: we should stop thinking about it as artificial intelligence and start thinking about it as collective intelligence. When you talk about it as “the machine,” what I hear is that there’s a prescribed, specific intent behind it. If it’s really the collective consciousness, the collective thought patterns of the internet, wouldn’t the way you think about having a discourse with such a thing change if you used that as the concept instead of a machine?

 

Greg McKeown

Yeah, the language could evolve my own way of understanding the process. I’m open to that. I will add that my suspicion of the whole current collective intelligence represented through AI is that people are generally so bad at this kind of deep listening and understanding of each other. There’s no reason in my mind whatsoever to imagine that the collective intelligence of everybody on earth would produce a high tendency toward listening first. I mean, there’s a meta-analysis that was done on problem-finding versus problem-solving, problem-solving meaning idea generation, what to do about the problem.

And across industries, the preference runs overwhelmingly toward ideation and problem-solving. The best ratio found among academic research subjects is one to six, problem-finding to problem-solving; at the high end it’s one to a hundred. So the human tendency to jump in with an answer rather than understand what the issue is, is immense. That’s just one piece of evidence, but we see it all the time in so many conversations: conversations with our spouse, with our children…

So what I’m saying is, the ability of AI is obviously exciting, interesting, extraordinary, but it has to be built for this, not just, “Hey, GPT, will you be a little more like this?” No. It’s built on answers, solutions, information synthesis, which matter only once you really know what the issue is you’re addressing. If it addresses the wrong issue, if it doesn’t help us get to the right one, then it’s like anytime you have…

…any intelligent group of people: you can have high talent, high creativity, high work ethic, money, resources, but if they don’t actually get to the pinpoint core issue, they will waste thousands of hours and millions of dollars, as happens in corporations constantly, because the real issue was never identified in the first place. So, to me, that is a huge opportunity. If we could develop mediator.AI, something that really helps people do this, I think it could be a bit of an antidote to some of the unintended consequences of the technology we’ve had over the last 20 years.

 

David McRaney

I’ll just “yes, and” that: I agree, and I would like that to exist. You know, therapists are taught to ask, without asking, “Do you want to be hugged, helped, or heard?” You don’t say those words, but that’s what you’re asking. You’re sorting out what this person wants from the dyad. Are they looking for solutions to problems? Do I need to hold space for them to articulate for a while and then surprise themselves with what comes out? Or do they just want to know, “Hey, you’re not alone. You’re being supported. You are not adrift.”

We are in this together. That’s important. And many of us have never been taught that that’s how people work. Whether it’s a partner at work or in a romantic relationship, they come to you with, “Hey, blah, blah, blah,” and you immediately start trying to solve problems.

 

Greg McKeown

Advice.

 

David McRaney

“I did not come here to get problems solved. I came here to think about it, to work it out, to know that…” The fact that you have to be taught that in school as a therapist is an indication that it’s not always our intuition in our interactions. And I can imagine that if we try to develop technology that interacts with people the way we need to be interacted with to achieve outcomes that would surprise us, and it’s been trained on how we’ve been doing things so far, then yeah, like you were saying, Greg, it’s just going to try to give us answers. It’s going to try to solve the…

 

Greg McKeown

Problems, blah, blah, blah, blah…

 

David McRaney

Another odd thing that springs to mind: this used to be a pet peeve of mine, and I realized I was kind of just being “old man yells at cloud.” On social media, on Reddit, on TikTok, on YouTube, and things like that, when people have a question about what they’re consuming, they’ll ask in the comments, and I’m like, “Are you kidding me?”

You’re going to ask a bunch of internet strangers to answer your question? They could be completely wrong, and you also have to wait for them to answer, and they might not. Why didn’t you just, since you’re on the internet already, go to Google and ask your question? But digital natives very often prefer, “I would rather have a conversation about this, and if I don’t get the answer, then it wasn’t meant to be,” which is a completely different way of interacting with content than mine, as a person who was gifted the internet at 18 or 23 or somewhere in there. And I no longer think that’s silly. I’m like, “Oh, that’s just a different way to do it.” If that person were writing a research paper, okay, they’d go Google it. But that’s not what they wanted from this. They were saying, “I want the person who made this content to talk to me about this.” That’s what they were looking for.

And it was a hug-help-or-heard situation, and I wasn’t treating it like that. I was treating it like, “You suck at the internet.” But that’s not what was happening. And you were talking about our fingers as bottlenecks: for the kind of person who wants that kind of interaction with content or information, the idea of sitting down and typing one of these exchanges is like, “No way.” That’s like going to the library. “I did not sit down to do that with this tool.”

So I can see it adapting and evolving to the way people use things.

 

Henrik Werdelin

I think I might suggest that we conclude the conversation there.

 

David McRaney

Yeah. Yeah. I actually have to run. I have to give this office up to another human being.

 

Henrik Werdelin

But thank you both for doing this.

 

Greg McKeown

Thank you all. So fun. Really interesting.

 

Jeremy Utley

I feel we hardly scratched the surface. Yeah, but that’s… that’s actually… maybe that’s a great…

 

Greg McKeown

That’s the point. Yeah, that’s it right there.

 

Jeremy Utley

I will say, just as a nice coda: Greg, you already had the perfect wrap point, and now that we’ve continued talking, there’s another one. There is something special, relationship-wise, about someone saying, as you did at least three times, Greg, “I just want to make sure that I heard you right. Is what you were saying [blank]?” Separate from any other purpose that serves…

It does serve the conversation partner to feel appreciated, to feel human, to feel respected. And if I grade my own conversational ability against the example you gave, I’m woefully deficient in, you could say, the problem-finding phase as opposed to problem-solving. If conversation is problem-finding and problem-solving, I’m always hurtling forward: I take whatever I heard and keep moving. Whereas what you’ve done conversationally (you could call it problem-finding; I don’t know that it is, but just to use that dichotomy) is to say, “Hang on, before I move forward, I just want to make sure I heard you right.” To me, that is a hugely human way to engage.

I don’t know whether, if I asked your wife, she’d say, “Dude, just stop,” whether over time it becomes annoying, or whether it continues to feel honoring. But at least as a conversation partner over the last 90 minutes, it felt like a way to honor your partner. And I appreciated that.

 

Greg McKeown

Now I have to add something myself… thank you for that. My one frustration in the conversation is how obvious it is that we didn’t get deeper. And that’s not because anyone did anything wrong. I don’t mean deeper as in deeper insight; I mean a more precise understanding of each other, layers below. This is not unique to this conversation; I want to make that really clear. This is my curse in life, because literally 25 years ago I had an experience that taught me, and I’ve come to believe this from that experience and many since, that in all conversations, in all data, there is always something infinitesimally small and infinitely important, hidden many layers below the surface.

And so all the time I know there’s a diamond underneath all of this, but we won’t get there. You know, I listen to… I watch people talk, and it’s like they’re just talking past each other. And I’m like, “There’s a diamond in that conversation, but no one’s going to even get close to it because you’re going to keep talking at the surface.”

 


It really is kind of a curse, because you go through life aware of this. That one restate just got us through one layer, and it was affirming, because we love that. But what it allows, if we could keep going, is layer after layer after layer. And underneath that there is always deep breakthrough. Always. I just wanted to say that. It sounds like a downer of a point I’m making; it’s not, really. But c’est la vie.

 

Henrik Werdelin

I appreciate this conversation. If you folks ever want to do any of this again, I think both Jeremy and I are always up for it. So I love it. Thank you so much for the time and the great points.

 

Greg McKeown

Thanks, everybody.

 

Jeremy Utley

Have a wonderful afternoon.

 

Greg McKeown

Bye. 

Thank you. Really, thank you for listening. That was part two of my conversation exploring the ways in which AI can and cannot currently be used to help people understand each other. I hope very much that we can design it in such a way that it can help with this primary challenge in the world today. What’s something that stood out to you? What’s something you can do differently as a result of this conversation? And who can you share this episode with, so that you can continue the conversation now that this one has come to a close?