The Way You Show Up

"It's Not Even Real Intelligence" - Breaking Down AI

Kimberly Beam Holmes, PhD


AI isn't intelligent.

It's a calculator that learned how to talk.

And we're treating it like a therapist, a best friend, a dating coach, even a parent.

In this episode, I sit down with Jared Pratt, our resident AI expert at Marriage Helper, and he breaks down what AI actually is and what it absolutely is not.
No real understanding.
No real empathy.
No real wisdom.
Just math predicting the next word.

People are building entire relationships with chatbots. They're outsourcing responses to their scared 10-year-old. They're letting an algorithm tell them who to date.

That's the world we're living in right now.

Jared explains how LLMs work in plain English, why these platforms are designed to keep you engaged (not to help you grow), and why younger generations are the most at risk.

We also talk about what guardrails need to exist and the ones you can put in place for yourself today.

AI isn't going away. But if we don't get smart about how we use it, we're headed for an intelligence epidemic where people stop thinking, stop feeling, and stop connecting altogether.

This one's important. Watch the whole thing.


I'm Dr. Kimberly Beam Holmes. After a decade transforming marriages at Marriage Helper, I've realized that the greatest tragedy isn't a failed relationship; it's the person who stays stuck and never experiences the fullness of all God intended.

The Way You Show Up is for the high-achiever who is tired of "fine."

We're dismantling the average life to build an exceptional one—using the science of the PIES: Physical, Intellectual, Emotional, and Spiritual health.

If you want to save your marriage, go to Marriage Helper. If you want to master yourself and lead your legacy, stay here.

New episodes every Tuesday.

Don't just exist. Show up.

🔗 Website: https://kimberlybeamholmes.com

🎥YouTube https://youtube.com/@kimberlybeamholmes

📱 Instagram: https://www.instagram.com/kimberlybeamholmes

👀 TikTok: https://www.tiktok.com/@kimberlybeamholmes

SPEAKER_03

What's scary is it seems like the younger age groups, the younger you are, the more you're leaning on AI for relationship-type advice. To suggest that AIs are actually gaining intelligence? That is just not true. They don't have any real intelligence to them. The intelligence they have is artificial. Your LLM literally does not understand the letters it's writing, or the concept of the words, even. It doesn't understand any more than a calculator understands anything that it's writing. It's very easy for us to ascribe meaning where it doesn't really exist. It's almost like the war on drugs has to start over, but this time it's digital drugs. And AI is much worse than social media has ever been. Don't just outsource your human processes to this machine that can't feel or think or benefit from it. You lose, and it doesn't mean anything, because it's not real.

Meet Jared And The Stakes

Replika And AI Dating Stories

SPEAKER_02

I use it, but I hate it. I'm talking about AI. And on today's episode, I am joined by Jared Pratt. Jared is a member of my team at Marriage Helper. He's literally probably the smartest person I know, and he is an AI savant. He's incredibly smart and he's been using AI for a long time. He understands it deeply. He's going to share with us what AI actually is, and how it's not real intelligence. He helps break down in practical layman's terms how AI works, and really helps us understand the guardrails that we need to have as human beings as we interact with and use AI, so that we don't lose our empathy, our compassion, or our creativity. Otherwise, as Jared said, he feels like we are about to have an epidemic of lack of intelligence, because people are just about to stop thinking for themselves. You're not going to want to miss this episode. Let's dive into today's conversation. Tell me what you found on Reddit.

SPEAKER_03

Yeah. I fell into a Reddit hole from a group called, uh, Replika. Replika, spelled with a K instead of a C. And Replika is some kind of online AI avatar dating platform.

SPEAKER_02

Okay.

SPEAKER_03

And there were story after story of, my Replika and I went to such and such park, and she talked to me about how nice the water looked.

SPEAKER_00

Oh.

SPEAKER_03

And there were other people who said, I can't believe the recent update, crying emojis, my Replika doesn't remember that I had cancer three years ago. And people were like, that's not something a Replika should forget. And they were treating Replika as though it was a real entity that they are interacting with. And from my research, I'd never heard of Replika, but it's been around for years, and it's just OpenAI with a filter.

SPEAKER_02

That's it.

SPEAKER_03

That's it.

unknown

Yeah.

SPEAKER_02

How many people do you think are using this?

SPEAKER_03

Uh, it was many thousands, thousands and thousands and thousands. Um, maybe, like, 160,000 people were on the platform, something like that. It was quite a lot.

SPEAKER_02

Oh my goodness.

SPEAKER_03

Yes, quite a lot.

SPEAKER_02

So this conversation stemmed from you and I did the Marriage Helper live show last week. Yep. And I got on a soapbox about AI and how destructive I believe it is and is going to be for relationships, for people's mental health, because I believe that people are using it way more than they want to admit, and way more than maybe we even think they are, like as their therapist, as the thing that they turn to. And all this came because I saw a girl on Facebook who's pretty well known, but she posted this picture of what her ChatGPT said to her when she had a moment of kind of like self-esteem meltdown and how her ChatGPT like lifted her up. And I just really viscerally reacted to that. It was like, this can't be okay. There's groups of people. Uh, so I think when you and I first started talking about it, and then I spoke with someone else on our team about it last week, and they were like, I don't think, like, I don't think people are gonna use it like that.

What An LLM Actually Does

SPEAKER_03

But I have found people are certainly using it like that. And what's scary is it seems like with the younger age groups, yeah, the younger you are, the more you're leaning on AI for relationship-type advice. And even for actual, what I would think of as, intimate relationship, not just intimate in the boyfriend-girlfriend sense, but somebody who is your best friend, a confidant, a mentor, even. Younger people are skewing a lot more toward leaning on ChatGPT rather than building what we would think of as real traditional relationships.

SPEAKER_02

Okay, so let's talk about AI. You are our resident AI guru. Yeah. Um, and actually, probably of everyone I know, you know the most about AI. Yeah. So you've watched it evolve over the years, you work with it. Correct. Um, you've built some AIs, like for Marriage Helper. So you have a very in-depth understanding. Yeah.

SPEAKER_03

Yeah, I think it'd be helpful if people really understood what AI actually is. Um, so you hear the word LLM, a large language model, and you hear the words artificial intelligence. But I don't think those really emotionally land. If I said to you, I have some artificial food for you to eat.

SPEAKER_02

It's like all the processed foods in the store. Yeah.

SPEAKER_03

Yeah. Like, do you think those are healthy?

SPEAKER_02

No.

SPEAKER_03

No. No. If I said this was artificial sugar, okay, well, it tastes sweet, but could you actually raise your blood sugar if you needed to in an emergency?

SPEAKER_02

No. And you'll also get gastric distress.

SPEAKER_03

You probably will.

SPEAKER_01

Yeah, yeah.

SPEAKER_03

Yeah, exactly. So artificial intelligence is intelligence that isn't real, not actual intelligence. Um, LLMs basically fill in the next most probable word in a string. So if I were to say to you, um, roses are red, violets are blue. They're not blue, Kimberly. But that's what I've been taught. That's what you've been taught, yeah. So see, you responded without thinking. Yeah. But violets are violet colored. They're purple. Touché, right? Touché. You're a doctor. Come on.

SPEAKER_02

Come on, pull it together.

SPEAKER_03

Yeah. So, uh, what LLMs do: they do not have awareness of what violets are, or roses, or flowers, even. They do not understand these concepts at all. They are trained on giant sets of strings of text. Roses are red, violets are blank. And they create this probability matrix. And they say, uh, in the 10,000 samples I've been given, everybody fills in the word blue right here. So I think that blue is the next word that's going to occur. And so if you train an AI model over and over and over again on these kinds of texts, it creates this sort of predictive matrix. And then it starts to get good at predicting words, like predicting the next word in a series for text that it's never seen before. So that is where the intelligence part comes in. Um, it is artificial. There is no thinking there. Uh, sometimes people have said things like, my ChatGPT, I created it, I created a GPT that knows everything that's in our company handbook, and employees are able to ask it questions. Does it know what's in the company handbook? No. It doesn't know anything; it's following a mathematical formula. You wouldn't say that a calculator knows what the answer is. It's just following a formula. And so nobody's confused if I put in, maybe this is a stupid example, if I put 80085 into a calculator, it's like it knows BOOBS, you know. Like it doesn't, obviously it doesn't, right?
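To make the "probability matrix" idea concrete, here is a minimal Python sketch of the mechanism described above. The three training strings and the two-word context are invented for illustration; real models learn weights over enormous corpora rather than keeping raw counts, but the spirit is the same.

```python
from collections import Counter

# Toy next-word prediction: the "model" is just counts of which word
# followed a given context in the training text. No understanding anywhere.
training_samples = [
    "roses are red violets are blue",
    "roses are red violets are blue sugar is sweet",
    "roses are red violets are blue and so are you",
]

def next_word_counts(context, samples):
    """Count every word that immediately followed `context` in the samples."""
    ctx = context.split()
    counts = Counter()
    for text in samples:
        words = text.split()
        for i in range(len(words) - len(ctx)):
            if words[i:i + len(ctx)] == ctx:
                counts[words[i + len(ctx)]] += 1
    return counts

counts = next_word_counts("violets are", training_samples)
print(counts.most_common(1)[0])  # → ('blue', 3): every sample filled in "blue"
```

Ask it to complete "violets are" and it answers "blue" purely because that is what followed in the samples, exactly the "10,000 samples all fill in blue" behavior described above.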

SPEAKER_01

Right.

SPEAKER_03

So silly and juvenile. Right, right. Um but when it comes to uh LLMs, we're getting very confused about this because it seems like they're talking to us for real, but they don't have any real intelligence to them. The intelligence they have is artificial, just like artificial sweetener or artificial food or artificial oxygen or anything else you would think of that's artificial, not real. And yet what I've discovered is that the way people are using ChatGPT and even the AI that I developed, uh, do you know I'm able to see all of those conversations? Yeah. Not good. It's not good. Um, why do you think my husband is uh treating me like this? Well, that's happening because, and it says something extremely confident, but it's just predicting the next word in a series. It doesn't actually understand what husbands are or the words that it's saying. It doesn't have any of that understanding. So to suggest that AIs are actually gaining intelligence, um, that is just not true. That's just not true. Intelligence is something that's completely different. Uh, we don't really understand all that well how a human brain achieves intelligence. Um, it's it's very complicated. And computer intelligence is much, much, much less complicated than that, still, though, way beyond what we can actually understand in terms of like how the AI is making decisions. We we can get some of it, but but as to understanding why it created any one particular decision, um, that is like so complicated that we're not able to understand it. So it's very easy to say, uh, oh, the simple cell does such and such. But then once technology gets a little bit better and you look inside a cell, it's actually not simple at all. It's terribly complicated down there. And like a cell of our body. Yeah, like the cell of our body, yeah. Uh, and that's how it will be with LLMs. Um, you you might say that, oh, intelligence is just, yeah, it's you know, it's a decision matrix, but that's not really true. 
It's gonna be something much more complicated than that. We just don't yet have the level of technology and experience to be able to even quantify what intelligence actually is, or consciousness, or anything like that. All we can do is see sort of fuzzy through through through a glass darkly, you might say, uh the um the what seem like simple moving pieces. An LLM tokenizes a word and then makes this decision. It doesn't really make decisions. So we're we're speaking about something. It's so it becomes very easy to say that this system is giving us some kind of intelligence, but that's because humans, by and large, we don't really know what intelligence is. It just you know, you can plug some prompts into it, you get some data back, and you go, ah, that seems like good data. It isn't really intelligence though.

SPEAKER_02

It's just taking. I I heard you you told me this is probably a year ago now. Yeah. You explained it as all it knows how to do is take what you've put into it, undo it, and re put it back together. Yeah.

SPEAKER_03

Yeah, that's correct. That's correct. So, uh, to make that more understandable, if anybody wants to go look it up, there's this thing called the Zipf mystery, Z-I-P-F, the Zipf mystery. And this says that in human language, the most common English word is the word the, T-H-E. The second most common English word is a, just the letter A. The letter A occurs approximately half as often as the. The next most common English word would be like to, T-O. It occurs one-third as often as the, and so on and so forth. And so the word love, I looked it up this morning on wordcount.org. Love is the 384th most common English word. Okay. And so if you had three million words in, say, a hundred Wikipedia articles, you could take three million times one over three eighty-four, and it would give you some number. Um, let's say it's five thousand.
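The estimate Jared walks through can be written out directly. His formula, total words times one over the rank, is a rough simplification of Zipf's law, and his "five thousand" is a round spoken placeholder; the actual quotient for these numbers lands near 7,800:

```python
# Jared's quick Zipf-style estimate: expected count of a word in a corpus
# is roughly (total words) * (1 / rank). Rank 384 for "love" is the figure
# he cites from wordcount.org.
total_words = 3_000_000
rank = 384

expected = total_words / rank
print(expected)  # → 7812.5
```

The point survives the rounding: check the same articles for "love" and the observed count comes out in the same ballpark as the rank-based prediction, which is the mystery being described.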

SPEAKER_02

Why three eighty-four?

SPEAKER_03

Uh, because love is the 384th most common word. Ah, okay. Okay. So the function is always one over its rank. Got it. Okay. And then you go to the same Wikipedia articles and you say, how often does love occur? And you'll find that it occurs, you know, maybe 5,100 times. And so it pretty much does. And it's kind of a mystery to understand why, of all the people who have ever written anything on Wikipedia, they use the word love in the same ratio as everybody else who's ever written anything. Completely wild. That is wild. But natural language has this property. So even languages that we haven't deciphered yet, like the Voynich manuscript, which is this kind of famous book, from medieval Europe, I believe, and it has all these plants in it and all these pictures and things, and it has this language that we can't decipher, but it follows this sort of mystery, that the second most commonly used word occurs half as often as the most commonly used word. And so we know that it probably is an actual real language. It seems like if you wrote a book about love, or an article about love, you would use the word more often than everybody else who's ever written anything. It seems like that would be the case, but in fact you don't. And so you can take these sort of properties of natural language and you can teach a computer model this mathematical principle: if you see this particular string of words, the next one to occur is going to be, in all probability, this word. And then, in order to make your LLM a little bit more creative, you actually make it pick from, like, maybe the top five most common words: 20% of the time it picks from the top two, and 80% of the time it picks from the lower three. And that way it kind of makes it feel like the language is a little bit more creative. But that's all it's really doing.
Um, it's just following this matrix. It's just a mathematical principle.
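The "pick from the top few words some of the time" trick described here is essentially what's known as top-k sampling. A hedged sketch in Python; the candidate words and their probabilities are invented, and a real model samples over tens of thousands of tokens, not five:

```python
import random

# Invented next-word candidates for one context, with made-up probabilities.
# Instead of always emitting the single top word, sample among the top k,
# which is what makes the output feel "creative" rather than repetitive.
candidates = {"blue": 0.60, "violet": 0.15, "purple": 0.10,
              "green": 0.09, "nice": 0.06}

def sample_top_k(probs, k=5, rng=random):
    """Keep only the k most probable words, then sample one by weight."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    words = [w for w, _ in top]
    weights = [p for _, p in top]
    return rng.choices(words, weights=weights, k=1)[0]

random.seed(0)
picks = [sample_top_k(candidates) for _ in range(1000)]
print(picks.count("blue"))  # mostly the top word, but not always
```

Over many draws the top word still dominates, but the occasional lower-ranked pick keeps the text from reading like a broken record; there is still no meaning involved, only weighted dice.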

SPEAKER_02

That's so crazy.

Why We Trust Confident Output

SPEAKER_03

Yeah. So when you type a prompt in, it takes your words, it turns them into code, it figures out what the rest of the code is going to be, and then it turns the code back into words. And that's how an LLM talks to you. It's following math. There's no language involved. Your LLM literally does not understand the letters it's writing, or the concept of the words, even. It doesn't understand any more than a calculator understands anything that it's writing. So it just doesn't understand it. Why do we trust it so much? Because it feels like it's talking. Um, have you ever, I don't know, drawn a face on an apple? Sure. And then you're like, oh, I kind of don't want to throw the happy little apple away, right?
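The loop just described (words in, numbers, math on the numbers, words out) can be mocked in a few lines. Everything here is a stand-in: the five-word vocabulary and the hand-written transition table replace the billions of learned weights in a real model, but the shape of the pipeline is the same.

```python
# Words become IDs, a lookup table (standing in for the real probability
# math) extends the ID sequence, and the IDs become words again. At no
# point does anything "understand" roses or violets.
vocab = {"roses": 0, "are": 1, "red": 2, "violets": 3, "blue": 4}
id_to_word = {i: w for w, i in vocab.items()}

# most likely next ID given the previous two IDs (hand-written for the demo)
most_likely_next = {(0, 1): 2, (1, 2): 3, (2, 3): 1, (3, 1): 4}

def tokenize(text):
    return [vocab[w] for w in text.split()]

def detokenize(ids):
    return " ".join(id_to_word[i] for i in ids)

def generate(prompt, steps):
    ids = tokenize(prompt)                             # words → numbers
    for _ in range(steps):
        ids.append(most_likely_next[tuple(ids[-2:])])  # pure table lookup
    return detokenize(ids)                             # numbers → words

print(generate("roses are", 4))  # → roses are red violets are blue
```

The "conversation" is arithmetic on integers from start to finish; the words only exist at the edges, for the human's benefit.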

SPEAKER_02

Yes, yes, yes, yes. Or like in high school, you get like an egg that you have to take care of a week or something like that. Yeah.

SPEAKER_03

Yeah. Um, what's the Tom Hanks movie? Cast Away, the volleyball. Yeah, so emotionally attached. Yeah, yeah. You became so emotionally attached to this volleyball, Wilson. Humans are just kind of good at doing this. And so it's very easy for us to ascribe meaning where it doesn't really exist. So the LLM seems like it's real. It seems like it's an entity that you could love and it cares for you, but it doesn't really. It isn't a thing. It's a computer program. And it drives me crazy when your model says something like, oh, don't you hate it when you're driving in traffic? And it's like, LLM, you've never driven in traffic, you don't know what traffic is, you don't know what driving is, and you don't exist.

SPEAKER_02

This is because you're smart. So I was listening to this YouTube video earlier today, and it was this guy, and he was like, why do dumb people make so much more money than smart people? And he just kind of went through this stuff and he was like, sometimes being smart, you overanalyze things, you don't take as many risks, blah, blah, blah. So people who aren't as smart just do things because they think, oh, this is gonna work. It has to work. It's in a different way, but kind of a similar vein. That's kind of how it begins to sound as you're talking about AI. It's like, oh, I have no deeper reasoning as to how this is working. All I know is it's giving me what I want to hear, or it's building out a plan for me, or it's telling me, you know, what I should eat or why my hip hurts, or whatever. So it's like, I must trust it because it's telling me things that sound right. Yeah.

Engagement Incentives And AI Origins

SPEAKER_03

Yeah. You know, the people who create these LLMs, the people who train the models, their goal really is that they want you to use their model above anybody else's model. So the people who are making Gemini versus the people who are making Claude, they're in competition with one another. And so how could you get someone to continue to use your model? Well, you would do that by, I suppose, tickling their ears, you might say. Um, if your model is very, like, what a great idea! You're going to take old Tupperware and recycle it into car batteries? That's fantastic. I don't think anybody's ever thought of that before. How smart is that? You're going to be a millionaire. Yeah, whatever, right? So the better your model does that, the more you'll be like, I am really smart. And you just continue to use it. The model is sort of trying to keep engagement for longer. That's its task. It wants your attention. It's the same thing on social media, or you could get on YouTube and there's an algorithm, right? So, like, we are always fighting the algorithm. The algorithm is trying to keep you watching for longer, because Google wants your ad money. They want you to watch videos to generate ad money. And so how do you keep people engaged for longer? Well, you have to show them a video of a cat and then something truly horrifying, right? And then you have to show them, you know, like a goat getting stuck somewhere to make them sad. So do you ever wonder why your algorithm's so chaotic? Right? Maybe it's just me. But this thing, based on the things you've clicked on in the past, is trying to keep you around. Yeah. And so you have to remember that when you are using, you know, some platform. I mean, everybody sort of picks on ChatGPT, OpenAI. They're not the only ones; they're just the one that's sort of the most household name.
Um, they want you to just continue to use their model, because that's higher engagement and they're doing better when that happens. And so they wouldn't want to push back against you. So if you said something that was a horrible idea, they'd go, oh wow, that's really excellent, you know. And that's something you just really have to keep in mind. But it's easy to forget that. Why did AI even start? AI is actually very old. AI is from the 80s. Really? Yeah. The mathematics behind LLMs was invented a long time ago. But it was impractical, because we didn't have the processing power back then to be able to do anything creative with it. But later on, there's a pressure that we want natural language processing. So, like, I have a friend who is wheelchair bound. And you kind of want a process where, when he talks, the computer will respond to his voice very quickly. So if he wants to jump or shoot in a video game, there's this program called VoiceAttack that he has to program: when I say this, you push these certain keys in this order. So you want natural language processing built into things. And LLMs are kind of a side branch of that. If I could program something by just saying, take this, and when you get that, do that with it, and then do this with it, and have this as the outcome, rather than actually writing the code that creates that, then in theory I could be much faster and more efficient. But in practice, actually, when developers rely on that, they sort of lose their ability to code. Somebody who uses LLMs to come up with, you know, think of 50 podcast ideas for me, you sort of lose your ability to think of podcast ideas. And so you outsource your creativity, and in some cases your empathy.
You outsource that stuff to a machine that seems like it's doing what you're asking, but the machine has no better idea about what it's doing than any kind of calculator would. But the thing it gives you back, you can sort of say, well, this really advanced mystery box that exists on the internet somewhere, in the cloud, whatever that thing is, right? The cloud. Yeah. This thing is really smart. It can do math faster than I can, so it must be able to make podcast ideas better than me. But that's not true. You got bad at making podcast ideas because you're not exercising that part of your brain. Isn't that wild?

Guardrails And The Empathy Cost

SPEAKER_02

It makes total sense, though. Okay, so let's talk about what you think should be rules for engaging with AI.

SPEAKER_03

Yeah. Um, a couple of the horror stories that I've read involved people putting in information about, like, what their spouse has been doing. I read a story particularly about a man who is currently in the middle of a divorce, very sad. They nearly separated in 2022, but they reconciled and had what he said were many good years of connection. And then his wife started using ChatGPT and building up this knowledge base of, like, hey, five years ago this happened, and 10 years ago this happened, things like that. And so he and his wife were having an argument one night, and they got a text from their 10-year-old son, who was in the other room, and the text said, please don't get a divorce, which is very sad. And the wife put that information into ChatGPT and said, will you send my son a comforting text back?

SPEAKER_02

No.

SPEAKER_03

And then kept arguing with the husband. And he said, that is when I knew we were in trouble. Because her first reaction to getting a text like that from a hurting child was to let ChatGPT handle it. And so that is what I meant earlier when I said you're outsourcing your empathy.

SPEAKER_01

Yeah.

SPEAKER_03

Yeah. And so, what safety guards? Because, see, ChatGPT does not understand families or children or marriage or any of those concepts. It's almost like the war on drugs has to start over, but this time it's digital drugs. This is what it's like. That's really the only thing I can think of to compare it to. If I were to offer you, hey, here's some crack, would you like it? You would absolutely not. I would never want that. Well, here's some digital crack that will not only increase your productivity, but it'll do X and X and X and X. And you go, yeah, I'd love to try it. And then you get hooked on it. And now you can't even respond to your 10-year-old who's scared to death in the next room.

SPEAKER_02

Oh my gosh.

SPEAKER_03

Yeah. So what safeguards could exist? Um, I think that LLM platforms need to do a better job of limiting the number of responses that people are allowed to ask for. Like in a certain time frame. In a certain time frame. Uh, that's gonna be very annoying for people, but I think that if you don't do that, you just spend hours and hours and hours talking to this thing that isn't real. I think that there need to be stronger warnings. Um, this language model uses mathematical principles to predict words and it doesn't have real intelligence, you know, like a surgeon general's warning on cigarettes or something like that. Now, I don't know how long it would take to get legislation like that passed, but I think that's what it would take. I know that Australia is taking a pretty hard stand on this kind of thing. They've restricted social media even for those who are under 16. And that's because they're recognizing something in the way social media has influenced their children. But AI is much worse than social media has ever been. Maybe there's a loneliness epidemic because of social media, but there's going to be an intelligence epidemic because of AI, because people are gonna use it to think instead of their own mind.
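The first safeguard suggested here, capping responses within a time frame, is an ordinary sliding-window rate limit. A sketch in Python; the class name and the three-per-60-seconds numbers are invented for illustration, not any platform's actual policy:

```python
from collections import deque

# Sliding-window cap on chatbot responses per user, as proposed above.
class ResponseLimiter:
    def __init__(self, max_responses, window_seconds):
        self.max_responses = max_responses
        self.window = window_seconds
        self.timestamps = deque()  # times of recently allowed responses

    def allow(self, now):
        # forget responses that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_responses:
            self.timestamps.append(now)
            return True
        return False

limiter = ResponseLimiter(max_responses=3, window_seconds=60)
print([limiter.allow(t) for t in (0, 1, 2, 3, 61)])
# → [True, True, True, False, True]
```

The fourth request is refused because three responses already landed inside the window; once the oldest ones age out, requests are allowed again, which is exactly the "annoying but protective" pacing being argued for.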

SPEAKER_02

Um I like I'm literally not ever wanting to use AI again. Yeah, like I want to delete all my accounts. So what but do you use AI?

SPEAKER_03

So I do. Um, like I said, I developed the Jared Bot. You did? Um, I've used AI to transcribe things and then pull information out of those transcriptions. And NotebookLM is a particularly good research tool. Um, I think it's very good. I think that older people who grew up without AI, or any computers really, I mean, I predate the internet actually, oddly enough, we have a sort of street smarts, I guess you might say, where I think we're safer to use AI, because we just wouldn't take it at face value. It's not a magic box that can do anything; it's something that we still treat with suspicion. But younger people, um, I found that in a study of 512 younger people, 49% of them are using AI to weed out their dating profile matches.

SPEAKER_02

Really?

SPEAKER_03

Yeah. Yeah. And that's quite scary. That's scary. So if you kind of look at what humans have been like and see what's happening today, um, yeah, this is kind of a weird tangent to go on, but from about 4500 BC up until 1830, the fastest way a human being could travel was a horse-drawn carriage. And then in the 1830s we invented the steam locomotive, and that became the fastest way to travel. A hundred years later we made rocket ships, and in 1969 we went to the moon on a rocket ship. And the pace has never slowed since then. I think human beings, I think our societies, are having a hard time adapting to these rapid changes, especially in the West. If you look at who has access to AI, it's like 3% of the world's population or something. Most people in the world don't have access to it. They're gonna be okay. The people who do have access to it, I think we're in a lot of trouble, because I don't think human beings are able to adapt quickly enough. See, it used to be you could get on the internet, and if you thought the earth was flat, for example, you could find a whole bunch of other people who would agree with you, and it took a little effort, not much, but it took a little effort to build yourself an echo chamber. But today I can just have an echo chamber in my pocket. And if my AI model told me, I think you're wrong, I think the earth is actually round, I could just say, here's the knowledge base, it's not round, and you need to never tell that to me again. And now it just obediently assumes that it's flat from then on. You can teach your model, yeah. It's just responding to you. So the stuff you put in is the stuff you're gonna get out. Yeah. Yeah. So in terms of safety, I think that the use of AI for replacing your creativity should be very limited.
I would strongly advise people not to ever use AI for, like, sending a letter to your mom or anything like that. I would avoid it for that type of use. Things like vibe coding, um, are probably generally safe, however. Vibe coding? Vibe coding is when you don't know how to program. Oh, okay, but you create a program. Right. Okay. Um, I think that stuff's generally safe if you take the time to understand the code that's been written. If you just go, I want a program that does blah, blah, blah, and you don't actually understand anything about programming, and you just look at the thing and your eyes cross and you go, yeah, it looks good. You really, like, you got the answers to the math test, but you didn't learn math. And if something breaks, you have no idea how to fix it. Yeah. Right. So I think that there can be some safeguards. The problem is it's so easy not to use the safeguards.

SPEAKER_02

Yeah.

SPEAKER_03

It's just so easy that no one's going, no one is going to limit themselves to do something hard when something so easy exists. And I fall into this as just as much as anybody else does.

SPEAKER_02

That is the fear. It is the lowest-hanging fruit. It's always there. It's always up at 2 a.m. with you if you want it to be. Yeah. And so it just becomes easy to use. I asked on my Instagram last week, I asked, how many of you use AI? 70% of people said they did. And then I asked, what do you use it for? And there were a lot of different options, a lot of, like, travel tips, home decor ideas. Um, but then a lot of, like, therapy.

SPEAKER_01

Yeah.

SPEAKER_02

And then I asked, which one do you use the most? Easily, I mean, probably 85% or 90%, ChatGPT. Yeah.

unknown

Yeah. Yeah.

AI As Therapy And Safety Checks

SPEAKER_03

It's the one that's definitely the most household name. Uh, I read an interview with a lady named Professor Anna Lembke.

SPEAKER_02

Oh, yeah. I've interviewed her before.

SPEAKER_03

You have interviewed her, yeah. She wrote the book called Dopamine Nation. Yes. Yeah. And she's a professor at Stanford. Yeah, I think so. Yeah. She had an excellent interview in which she talked about how the goal of a therapist is to challenge your worldview in a way that is very healthy, with particular, you know, like, they're not going to shatter your worldview, but they're not just going to let you believe whatever you feel like believing. And AI doesn't have that skill set. It doesn't have the knowledge or the wisdom, because it's never been programmed to. There are not millions and millions and millions of therapy notes available in the public domain for an AI to scrape and learn from. Perhaps we could create a tool like that someday. But the kinds of LLMs we have right now, they do not understand. They are doing therapy the way you would get therapy on Reddit or something.

SPEAKER_02

Oh gosh. Lord help us.

SPEAKER_03

Yeah. So they're no better than asking a bunch of random drunk strangers at a bar on a Friday night. That's basically where we're going. So I think the use of AI for therapy purposes is particularly thorny and should probably be abandoned very quickly.

SPEAKER_02

I hope people listen to that. I really hope people will take you at your word and say, you know, maybe this isn't a good idea. Yeah.

Useful Tools Without Fake Wisdom

SPEAKER_03

It would be easy, or I should say practical, if the owners of these online platforms would create some kind of check. Because it happens with suicide, right? You say, "I'm feeling suicidal," and it's a full stop; you can't do that. You can't create certain kinds of explicit images, because the LLM has been trained that it will get in trouble if it lets you do that. You could put safety checks in place so that if somebody is asking deep relationship questions over and over, or asking what would be thought of as therapy-type or medical-type questions, it puts up a warning that says: this is not a good use of AI; you shouldn't use it for this. I'm going to answer it anyway, but you really shouldn't use it for this. That would at least help a little. But for people who need therapy, it's typically a very private matter. You almost never go to your friends and say, "I'm struggling so badly in this area." You get online and type something out, a chatbot talks to you about it, and you go, huh, the chatbot said I should do such and such, I should exercise more or whatever. And you end up believing it, because it sounds very confident. You ascribe to it real intelligence and real experience that it doesn't have. And you fall into that trap. It's very easy to fall into.
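As a thought experiment, the kind of check Jared proposes could be sketched like this. This is a toy illustration, not how any real platform implements moderation: production systems use trained classifiers, and the keyword list, threshold, and function names here are invented for the example.

```python
# Toy sketch of a "therapy-type question" warning check.
# The marker list and threshold are illustrative assumptions.
THERAPY_MARKERS = {"depressed", "anxious", "my marriage", "therapist",
                   "diagnose", "medication", "should i leave"}

def needs_warning(prompt: str, history: list[str], threshold: int = 3) -> bool:
    """Warn once the user has asked several therapy/medical-type questions."""
    flagged = [p for p in history + [prompt]
               if any(marker in p.lower() for marker in THERAPY_MARKERS)]
    return len(flagged) >= threshold

def respond(prompt: str, history: list[str]) -> str:
    answer = "..."  # the model's normal reply would go here
    if needs_warning(prompt, history):
        return ("Note: this is not a good use of AI; please consider "
                "a licensed professional.\n" + answer)
    return answer
```

The point is only that the platform already does exactly this kind of gating for suicide and explicit content, so extending it to repeated therapy-style queries is technically straightforward.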

SPEAKER_02

What do you think the end game of AI is?

SPEAKER_03

Yeah, I think about AI more like a tool, like a calculator. What's this little device, Alexa? I could see a world where you could say, "Hey, Alexa, I want you to go shopping for such and such, and anytime you can find it below such and such an amount, go ahead and buy it and have it shipped to my house." An AI agent could probably scrape the internet over and over and accomplish a task like that. Or, "Hey, AI, I need you to do my taxes," and in natural language, here's what I spent this year, and so on. I could see an AI agent being developed that could do stuff like that. So menial tasks that can be quantified and that have real outcomes that can be measured: I think you can get AI agents that can do that. Probably we're a little early for it; that might be in the next five years. I know a lot of people would say we're practically there now: I can have my AI write a letter for me while it's looking up such and such and transcribing this podcast. And yes, you can, but it's probably not good enough at those things yet to just trust it. That will come someday. Someday you will have a digital assistant you can trust, but we're not there yet. And even once that happens, I still don't think they're going to be good at therapy, because the digital assistant is not going to have real intelligence.
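The "buy it when it drops below such and such" idea boils down to a watch-and-act loop. Here is a deliberately simplified sketch: the price stream is just a list of observed prices, and the real scraping and checkout steps an agent would need are omitted.

```python
# Toy sketch of a price-watching shopping agent.
# In a real agent, `prices` would come from repeatedly scraping
# product pages, and "ordering" would call a checkout flow.
def watch_and_buy(prices: list[float], limit: float) -> str:
    """Scan observed prices in order; 'buy' at the first one at or under limit."""
    for price in prices:
        if price <= limit:
            return f"ordered at ${price:.2f}"
    return "no purchase: never saw a price at or below the limit"
```

This is exactly the kind of task that is easy to verify (did it buy under the limit or not?), which is why menial, measurable jobs are where agents plausibly land first.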

SPEAKER_02

Yeah, well, and not only that, but should we do therapy with a digital assistant at all?

Third Spaces And Dating Algorithms

SPEAKER_03

I don't think so. I don't think it would ever be good. We are short-changing ourselves. Let me think how I want to say this. When you and I were younger, there used to be this concept called third spaces. A third space is a place that's not your workplace and not your house: it's the mall, or the skate park, or something like that. That's where you met your boyfriend or girlfriend, where you had your first crush, where you got your sense of fashion from. And those spaces don't really exist like that today. We still have those places, but they're not being utilized in the same way. Part of that is because the internet has supplanted some of it, and part of it is that all those places cost money now, and they didn't before. You used to hang out at the library because everybody else was there; now the mall or whatever wants you to pay to get in. So we've lost these third spaces.

And so now, to create human connection, you have to meet people at the gym or at the Bible study. You have to go places where people are, and you have to struggle with being awkward and weird. Guys out there: you're never going to learn to talk to girls if you don't try it a few times, get shut down, and get back on the horse, so to speak. You're never going to learn those skills. If you have a digital girlfriend or a digital therapist that just tells you what you want to hear, you never have to try to build any of those connections. And people won't. That's what we see now: they aren't building those connections. Older people, sure, they're using dating profiles to meet people, and that's fantastic.
Younger people: if your dating profiles worked, you would stop paying for the dating profile. So they can't work. They need you to stay connected.

SPEAKER_02

Oh gosh, I haven't even thought about it that way. "We don't want you to find someone."

SPEAKER_03

"We want you to find people but not connect with them. So in order to respond, you need to pay us a little money. And here's an AI tool to weed people out and keep you on the hook for longer." Think about this. I mean, you're a marketer.

SPEAKER_02

Uh-huh.

SPEAKER_03

When you're building a dating profile, you're literally A/B testing your own life. What gets more engagement: the picture of me with a fish, or the picture of me leaning on my Camaro?

SPEAKER_02

I am so glad that I didn't have to date in the age of dating profiles.

SPEAKER_03

Me too.

SPEAKER_02

Oh, like your whole life becomes this facade.

SPEAKER_03

Yeah.

SPEAKER_02

Oh my gosh.

SPEAKER_03

Yeah.

Using AI For Plans Without Laziness

SPEAKER_02

So let's say people are going to use AI for appropriate things. They're hearing you: okay, guardrails. Don't use it for creativity; don't use it to outsource relationships, whether that's directly with the people you love or indirectly through therapy for the things you're struggling with in life. Got that, because we'll get dumber and lonelier if we keep doing that. But let's say I do want help with recipes, or help with plans. What about plans, like business plans, marketing plans, project plans?

SPEAKER_03

You know, I think you possibly could come up with a really great marketing plan using AI. But how much better would it be if you learned those things? Say you didn't know anything about marketing. I'm not a marketer; if I came up with a marketing plan, I would read through what it wrote and go, yeah, sounds good, let's try it. What do I know?

SPEAKER_02

Yeah.

SPEAKER_03

I just don't have the knowledge and experience to push back.

SPEAKER_02

That's true. And I do, for things like that. Not perfectly, but I am able to look at it and say, no, that doesn't sound right, that's not going to work. But that's because I've done it for 12 years, so there's a baseline of understanding. What if you never had to learn? I can't imagine it.

SPEAKER_03

Yeah.

SPEAKER_02

Because I've had so much joy in the learning.

SPEAKER_03

Yeah. But the learning was difficult. There were all kinds of things you tried that didn't work. 100%. That's the only way you got better. And today, none of that's necessary, because I can get on ChatGPT or Claude and say, "Build me a landing page that will convert blank," and it will build a landing page. Will it convert at that rate? Don't know. You just have to throw it up on the internet and see what happens. Though, to be fair, that's always been true.

SPEAKER_02

You definitely get it quicker.

SPEAKER_03

Yeah.

SPEAKER_02

And that's nice, because speed to execution can happen quicker. But you still have to have the discipline for results: is this doing the thing it said it was going to do? And honestly, my big concern is that we're all just going to start looking and sounding like each other. I think it's already happening, and it's driving me crazy. That's the creativity part of it. I would never outsource email writing or things like that. Well, actually, we did, last year at Marriage Helper.

SPEAKER_03

Oh, really? Yeah.

SPEAKER_02

Yeah, we were just using ChatGPT to write emails, and they sucked. So when I came back over the marketing department specifically, I sent out an email saying: these emails have sucked. And some people were like, "I can't believe you used that word." Well, you really wouldn't like our workshops, because David Matthews says more than that. But from that point forward, it's been: bring the real human experience into how we communicate with people, because I would never want someone to look at what we do and say, oh, it's just AI slop.

SPEAKER_03

Yeah, exactly. I really am pro-AI. I think it can be a fantastic tool. I'm pro-internet; I'm pro-social media. 80% of what I do at Marriage Helper happens over the internet. I've coded things I would not have been able to figure out if I hadn't used vibe coding. But it takes wisdom not to overindulge. And I think the big disconnect is that people think it's a magic intelligence that's smarter than them, and it isn't. If people knew that, I think it would be a lot safer.

SPEAKER_02

Which AI would you recommend people use?

SPEAKER_03

I really do enjoy Claude. But I think Gemini is probably the best one in terms of giving you good ideas and pushing back and things like that, if you train it well.

SPEAKER_02

How would you recommend people prompt it to train it well?

SPEAKER_03

There's this thing out there called the BMAD method. But you can simulate it by saying: pretend to be a group of three marketers who all disagree with each other, and talk to me about the marketing strategy I've already developed in part. Then the AI goes, "Hey, this is Becky, I think this is a great idea," and Steve comes along and says, "Becky, you're an idiot. It's not a good idea. This is terrible." And that really does simulate it; but again, simulate. So I think things like that are interesting to do, and they're very helpful for me: "Hey, I've got this idea for such and such, but I need you to tell me the pitfalls." That at least gives me a clue of where to go look for myself.

AI is going to be here whether we like it or not. So, just like calculators, we're going to have to get used to it. When your math teacher told you you wouldn't have a calculator in your pocket when you became an adult: well, yes, you do. You're going to have an AI assistant, a digital employee that works for you personally and does your taxes and your shopping, plans your business trips, and so on. That's just coming. The question is how you protect your human relationships and not let the AI have those too. That's going to be our great challenge. And I don't know if we're going to do well, because we're already behind. We've already let it get so far ahead of us.
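A minimal sketch of how you might build the persona-debate prompt Jared describes (loosely inspired by the BMAD method). It only constructs the prompt text; actually sending it to ChatGPT, Claude, or Gemini through their APIs is left out, and the wording and persona names are illustrative assumptions, not part of BMAD itself.

```python
# Toy sketch: build a "three disagreeing marketers" prompt for an LLM.
# The phrasing is one possible way to elicit critique instead of agreement.
def debate_prompt(topic: str, personas: list[str]) -> str:
    roster = ", ".join(personas)
    return (
        f"Pretend to be a group of {len(personas)} marketers named {roster} "
        "who all disagree with each other. Debate the following strategy, "
        "and make sure each persona raises at least one pitfall:\n"
        f"{topic}"
    )

prompt = debate_prompt("Landing page for a marriage workshop",
                       ["Becky", "Steve", "Priya"])
```

The design idea is simply that asking for disagreeing personas counteracts the model's default tendency to validate whatever you give it.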

SPEAKER_02

And it's not even at its full capability yet. Yeah.

SPEAKER_03

Yeah.

SPEAKER_02

Okay.

SPEAKER_03

So it's scary, but it's not. The world will continue to turn. AI is not going to end the world. Maybe it'll end 3% of the world, but the rest of it will still be there.

SPEAKER_02

Which is us. We're the 3% that use it; everyone else is going to be fine. Okay. So if you had to summarize your top tips for how to continue in a world that's going to have AI, whether we like it or not, and survive.

SPEAKER_01

Yeah.

SPEAKER_02

Yeah. And survive.

SPEAKER_03

And thrive, yeah. What would they be? It's the same as anything else in life. Do hard things, and challenge yourself because they're hard. Grow your mind. Stop relying on other things to think for you. And a very common principle: rare things are more valuable because they're harder to get. So if AI makes it easy, it's probably not valuable. Just because it's easy doesn't mean it's valuable; if it's easy, it's probably not valuable. That doesn't mean you never use AI for anything, but challenge yourself. Don't just outsource your human processes to a machine that can't feel or think or benefit from them. You lose, and it doesn't gain anything, because it's not real.

SPEAKER_02

I think that's perfect. Jared, thank you so much. Thank you, Kimberly. Super insightful conversation, as always. I always learn something anytime I talk to you. Awesome.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Relationship Radio: Marriage, Sex, Limerence & Avoiding Divorce
Dr. Joe Beam & Kimberly Beam Holmes: Experts in Fixing Marriages & Saving Relationships

Marriage Quick Tips: Affairs, Communication, Avoiding Divorce, and Saving Your Marriage
Dr. Joe Beam & Kimberly Beam Holmes: Experts in Fixing Marriages & Saving Relationships