Hallucinations: The Hidden Cost of AI Speed
In this episode of Rev & Reach, Lori Jo Vest discusses the hidden risks behind the speed and convenience of AI, focusing on the growing problem of AI hallucinations — when AI confidently generates false or fabricated information.
While AI can be a powerful tool for marketers, Lori explains why relying on it without verification can lead to costly mistakes, poor decision-making, and reputational harm. Through real-world examples and practical guidance, she shares how marketers can use AI strategically while protecting their business by verifying sources, reviewing outputs carefully, and treating AI as a support tool rather than a final authority.
Themes discussed in this episode:
- Why AI hallucinations happen and how they affect marketing decisions
- The risks of trusting AI-generated analysis without verification
- How hallucinations can increase over long conversations with AI tools
- The potential financial, legal, and reputational consequences of bad AI data
- When AI is most effective as a marketing tool — and when it isn’t
- Why human review and oversight are essential when using AI for content and strategy
Episode Highlights
00:00 — Introduction to AI hallucinations and why confidence in AI is rising
01:47 — A real-world example of executives making decisions based on hallucinated AI analysis
04:15 — Why hallucinations are systemic and why verification is no longer optional
05:10 — A firsthand example of AI overstating audience data by 100x
06:45 — The risks of relying on AI-generated copy and why human editing is essential
07:33 — How marketers should use AI safely as a support tool, not a decision-maker
Top Quotes
01:14 — “And when it doesn’t know, it guesses plausibly.”
03:19 — “It’s just that when it can’t generate a concise, fluent answer over admitting uncertainty, it will choose guessing every time. So think about that. AI guesses.”
05:03 — “Verification is no longer optional.”
05:26 — “It actually overstated the specifics of an audience that were really interested by 100%. It said there were 4000 of these people in our list, when, in reality, there were only 40 confirmed.”
08:03 — “But when it comes to hard data and hard advice that you feel you should follow to be successful, always double-check it.”
Episode Transcript - Click to Open
Rev & Reach Episode 24 Transcript
Hallucinations: The Hidden Cost of AI Speed
00:00
Hello and thank you for joining me for this episode of Rev & Reach, where PopSpeed Digital Marketing shares our secrets for getting ROI from digital marketing initiatives. I’m Lori Jo Vest, the co-founder of PopSpeed Digital, and I want to talk to you today about AI. There is so much conversation about AI going on right now. It’s February of 2026 when I’m recording this, and a lot of people are using AI with way too much confidence.
00:36
So what I want to talk about is AI and hallucinations. It’s really common, and it’s probably a lot more common than you think. So here’s what it is: a hallucination is when AI hallucinates and puts out confidently false and fabricated information, not because it’s doing it purposefully or there’s anyone out there trying to mess with your work, but because of how it’s built. They predict what comes next based on patterns. They use patterns in the training data, and so it’s not really using verified knowledge in a lot of cases. And when it doesn’t know, it guesses plausibly. That’s from an article from OpenAI — they literally will have the AI guess instead of saying, “I don’t know.” So I wanted to get into it a little bit, because I’ve seen so many people that are super excited about AI and marketing and how wonderful it is. Yeah, it is really, really wonderful. However, there are a lot of cases where people are overly confident in what it tells them, and when that’s the case, you could get yourself in some hot water.
01:47
My husband was reading something on Reddit this weekend about how a marketing team had been uploading their analytics into AI and asking it to summarize those analytics without going back and checking the AI’s findings against their original document. Well, guess what happened? For 90 days, the C-level executives at this company were making business decisions based on AI hallucinations, and this person had just discovered that it happened, and they were in the middle of this huge upset. It happens. And so what we tell people all the time is, when you do research using an AI tool, ask it to provide the source. And don’t just ask it to provide the source — go to the source and look at the source and make sure that the information that it’s pulled is accurate, because it won’t always be. It’s one of those cases where it’ll see some information in an article, assume it makes that conclusion that it’s telling you in the tool, and give you that link. So you’ve got to be really careful.
02:52
And the other thing that they found is that over long conversations, it tends to hallucinate more. So if you’re using your tool and you’ve got one long chat going on and you really think you’re doing great, you may be getting more and more bad data, bad information, bad recommendations, so be careful about that. And the reason, again, it’s not malicious — it’s not anybody trying to do something negative to the world with AI tools. It’s just that when it can’t generate a concise, fluent answer, over admitting uncertainty, it will choose guessing every time. So think about that. AI guesses. And if you think that in your head when you’re using the tool, you’re much less likely to get caught in a bad situation from using it and having it give you bad data.
04:15
Here’s why this matters for marketers: because when you fabricate data and put it into a proposal or a document or research, or use it to inform your marketing decisions, you could not only lose money, but you could be subject to liability or reputational harm. Things can happen like that. It’s not wrong at random. It is consistently wrong. So it’s not wrong at random. They’re not anomalies. These hallucinations are not anomalies. They’re systemic outcomes of training incentives and the way that AI works with prediction mechanics. So it’s not wrong at random. It’s wrong a lot. Verification is no longer optional. You need to always go back and check what it’s telling you against your original data.
05:10
I was on a call with one of our clients a couple weeks ago, and I got — oh man — I could have gotten my butt handed to me. I had a database that I had downloaded, uploaded into my chat tool, and thought that it would give me accurate information on who was on that list. It actually overstated the specifics of an audience that were really interested by 100%. It said there were 4,000 of these people in our list, when in reality there were only 40 confirmed. And when I went back and asked it why, it said, “Oh well, the patterns — the emails. A lot of people use Gmail for mailing lists, so we assume that some of those are based on patterns.” And it didn’t make any sense. So I asked it to analyze it again, only with the exact data that was in the document, and what came back was completely different. So use the tool if you want to, but be really careful of double-checking your data.
06:09
So a couple things that have happened. Research shows that AI responses degrade, that they give bad, dangerous medical guidance. I would assume, based on that, that they probably give bad guidance on digital marketing as well. So if you’re using ChatGPT as your tool to tell you about Meta and their ad policies and how they’re impacting health-related organizations or something like that, double-check that, because it could give you completely wrong advice. So that’s one of those things that I really want people to understand.
06:45
The other thing people use it a lot for is copy. And what’s happening out there in the ether is people are starting to get really sick of AI copy. It’s very predictable. You can look at it and see the lists of three. It has a tendency to make these long social posts with just way too much information and flowery language. If you’re using AI for your copy, Google actually delists AI-generated copy. They have tools that can tell when it’s AI. Maybe that’ll change eventually, but for right now, if you use AI as a basis for an article, go back in and rewrite it as a human being, because that is what will get you web traffic. That is what will get you good, solid, relatable, authentic content.
07:33
So I guess what this episode is, is more of a warning that, you know, we love AI. We use it a lot for all kinds of things. It’s a great tool for outlining things like articles, podcasts, books. It’s really good for that — for titles, for social content ideas and concepts, for a different way of saying the same thing. One of the things in digital marketing that we all know is we have to learn how to say the exact same thing 27 different ways. AI is a great tool for that. But when it comes to hard data and hard advice that you feel you should follow to be successful, always double-check it.
08:09
So that’s my advice for you today. We’ll be back soon with another episode of Rev & Reach, where we pull back the curtain and show you how we get ROI for our digital marketing clients. Take care. Again, I’m Lori Jo Vest. If you are watching this on YouTube, please subscribe and give us a like. We’re really trying to share this great information about marketing with a much larger audience. We are free with the information. We’d love to hear from you, too. If you have a comment or a suggestion and you’d like to have us answer your question on this podcast, we’d love to hear from you. So take care. We’ll be back again soon.
