Am I Learning, or Is It an Illusion?
How close can AI get you?
Source: Gen AI Won’t Make Your Employees Experts
Published by Harvard Business Review in March 2026
The title of a recent HBR piece made me stop and think: Gen AI Won't Make Your Employees Experts. Whoa... What!? Why? Frankly, I've been using Gen AI tools for about a year, and I feel as though I've been learning much more quickly these past 12 months. I won't summarize the entire article for you, but I do recommend giving it a quick read.
I'll cut to the chase... the article is built around a study from researchers at Stanford University and Harvard Business School's Digital Data Design Institute. It's a working paper, so not yet peer-reviewed, but well-designed. The study introduces what researchers call "the AI wall" — the hard limit on how much generative AI can help people complete tasks outside their area of expertise.
Q: What makes someone an “expert,” anyway?
Basically, the study shows that success depends on how far you are from the expertise the task requires. That's an interesting concept. As I've said before, I'm no professional writer, and I don't have a desire to be one, but I do have a desire to get my ideas organized enough for you to understand them.
Q: If I’m using Gen AI to help organize my thoughts and write them into an article cohesively, am I not learning how to write effectively in the process?
Well, I can see where learning could be encouraged and where it might not, especially if you just accept what’s generated for you without thinking about it, or challenging it. If you’ve come across the recent conversation about “AI workslop,” you’re probably already connecting some dots here. I’m filing that one away for later.
ClearSignals is partly a result of Gen AI usage. I'm asking questions about AI technology research without much formal training or experience in this style of writing. The tool takes my unstructured thoughts, organizes them, and shows me which mechanics I need to apply to achieve my goal. The result feels like an improved version of what I've drafted. Here's where my experience starts to diverge from what the study measured. I've created a feedback loop: I draft something, the tool restructures it, I ask why it made the changes, and I try to do better next time. I'm getting better each time, right? Or am I?
Q: Am I learning, or is the tool giving me the illusion that I am?
Oh, no… is Gen AI creating a crutch I can't live without anymore? If I'm truly learning something, I should be able to apply what I've learned without the tool. The working paper behind this article examined how the partnership affects performance, not what happens when the tool is taken away. That thought sits with me for a minute.
I keep coming back to something though. We’ve always invented tools to advance our capabilities. Is this really any different?
Walking through the evidence might help me think this through. The article explains that foundational knowledge is required: you need enough domain knowledge to judge whether the generated responses are good or bad. The study tested three groups by asking them to write a web article, a task normally done by expert writers. On the conceptual part of the task, all three groups scored close together, and much higher than they had without Gen AI. But when it came to actually completing the writing, the results split. The expert writers improved their scores with Gen AI. The marketing specialists, who were closer to the writing domain, nearly matched the experts. The technology specialists, who were furthest from the expertise, barely improved at all. They couldn't tell the good from the bad, so naturally they hit the wall.
Q: How do we break down the wall?
So, the non-experts didn't completely fail. With AI's help, the marketers scored almost as well as the expert writers, which the study attributes to their smaller distance from the expert capability, while the technology specialists, the most distant group, saw almost no change at all. Here's what it looked like.
Q: What if we just keep at it and use AI more, instead of less?
That question speaks to my nature a little. I've never been one to leave well enough alone. If you tell me not to do something I'm pretty much going to do it anyway. That instinct has served me well sometimes and gotten me in trouble other times. But I like to think of it as challenging the norm to seek something better. Is the intuitive path actually the incorrect path? What if we used these AI tools like an exercise machine? Could we keep using them with a keen mind and healthy skepticism and question what's being put in front of us?
Q: If we kept pushing against the wall, would it eventually move?
One thing that keeps pulling at me is what happened to the group furthest from the expertise. The technologists didn’t just struggle with the writing. They actively discarded parts of the generated content that they should have kept and kept parts they should have thrown out. They didn’t know what they didn’t know. That’s the wall.
But here’s what I keep coming back to. When you don’t know what to ask, why not just ask the tool what you should be asking? Wouldn’t it give you some ideas?
Maybe. But that assumes you know you’re missing something. The technologists in this study weren’t lost. They were confident. They edited the AI’s output through the lens of their own expertise, and for their domain, they were right. For this task, they were wrong. They weren’t ignoring the tool. They were overriding it with expertise that didn’t apply.
But if the technologists had been shown their test scores, would they have asked why they didn't improve? I'm confident that I would have. Receiving feedback that I had discarded the correct information would bother me enough to adjust. Remember, the working paper wasn't designed to measure whether repeated practice would change the outcome, so we don't know whether it would have, but the question that keeps repeating in my head is: what if it did?
Q: What if we used Gen AI as collaborators to check our thinking?
The technologists used Gen AI to generate content. They didn’t use it to challenge their own assumptions. Is this the difference between AI workslop and finding the value in Gen AI tools? It’s a completely different kind of partnership, and it’s one the study didn’t test. Is it helpful to have this kind of thinking partner with you all the time, and does it help you learn more quickly?
I started this article because an HBR title caught my eye and made me a little defensive. Gen AI Won't Make Your Employees Experts. My gut reaction was to push back, because my own experience felt like evidence to the contrary.
But after sitting with the research, I’m less sure. The study shows that the wall is real for people who are far from the expertise. And it shows that even people close to the expertise need foundational knowledge to make the partnership work. I think I have that foundation when it comes to understanding AI and technology. But when it comes to writing? I’m honestly not sure where I stand.
This article was written with Gen AI. And I think I’ll continue to get better as I keep using it. But I’m also the one judging that, which brings me right back to where I started.
Q: Am I learning, or is it an illusion?
I still don't know. But I think it's worth running toward the technology to find out, rather than running the other way.
-Anthony /CS
Something still sitting with you?
The best questions don't arrive with answers.
If one followed you out of this article, let me know about it.