Why I Started Asking Questions About AI...
And Why I Can’t Stop...
I think it’s safe to say I’ve been fascinated by technology my entire life. I was hooked from the moment I got my hands on a computer. As of today, I’ve spent more than twenty years working in technology: I studied it, built a career in it, and it unlocked capabilities in me I didn’t know I had. I’ve genuinely loved how it made me feel. My core belief about technology has always been simple: it enables people to be better at whatever they engage in.
About a year ago, I started noticing something in the people closest to me. Friends. Family. People I’ve known for years. When AI came up in conversation — and it was coming up everywhere — I kept hearing the same themes: fear. Skepticism. Rejection.
I was surprised and a little confused. Then, honestly, I was frustrated.
Here I was, someone who had built their identity around technology and its potential, listening to people I care about recoil from something I saw as fundamentally hopeful. I'd think: don't we want to cure cancer more quickly? My instinct was to debate the other side. To make the case. To prove that this was another step forward in a long line of steps that had already delivered things we never thought possible in our lifetime.
So I started paying closer attention.
And the more I paid attention, the more complicated it got.
I saw things that reinforced my optimism. I also saw incidents that gave me pause. I watched some of the people who built these systems start to publicly raise concerns. Not fringe voices, but researchers and founders who had spent their careers on this technology. I watched the discourse swing wildly between utopia and apocalypse, often within the same news cycle.
I realized I wasn’t frustrated with the people in my circle anymore. I was confused myself.
I’ll be honest about something: I came into this wanting to prove a point. The evidence wouldn’t cooperate. That turned out to be more interesting.
That confusion is what led me here.
The more I paid attention, the more I noticed that the conversation itself had a problem. Whether someone was excited, terrified, or just exhausted by the topic, most people — myself included, at first — were more invested in defending a position than in actually examining one. That realization is what changed my approach. Instead of trying to be right, I decided to try to be honest about what I actually didn’t know.
So instead of arguing in circles, I’m trying to build something different: a place where I can follow the evidence honestly, sit with uncertainty, and keep asking questions past the point where most conversations stop.
I’m not a researcher or an academic, and I’m certainly not a journalist. I’m a technology professional who has spent two decades helping organizations understand and apply technology. I know how to ask hard questions, push past the surface narrative, and look for what's actually true. I’m interested in how AI does what it does under the hood, but far more brilliant technologists are already exploring that territory. The part of AI I’m best positioned to explore, and contribute to, is its impact on people. You, me, and all of our loved ones. The most important people.
ClearSignals is my attempt to do that.
The name comes from the problem. There’s an enormous amount of noise in the AI conversation — hype, fear, marketing dressed as insight, opinion dressed as evidence. I wanted a place to cut through it. To look at what we actually know, what remains genuinely uncertain, and what questions are worth sitting with even when they don’t have clean answers.
The focus here isn’t AI as a tool. It’s AI as a force, one that’s already touching how we work, how we think, how institutions function, and how we understand ourselves as human beings. I want to examine that force seriously, without pretending I have all the answers, and without trying to prove anything. That means asking what AI does to us, and to what we believe we’re capable of, not just what it does to our economics.
You don’t need a technical background to read this. You do need an open mind and some patience for nuance.
A few things worth knowing about how I work:
If you’ve read this far, you already know I have a passion and curiosity for technology. So what would an AI writing project by someone who loves technology be if I didn’t use AI to help me write it? I want to be transparent about that. This kind of writing has always been a weak spot for me, and while I’ll let you decide whether I’m pulling it off, one thing is certain: an AI tool gave me the confidence to do this.
But this is exactly the tension I want to explore. The ideas here are mine. The questions come from my own genuine confusion and curiosity. What’s happening is this: I express my thoughts, then use an AI tool to take those raw thoughts and focus them, put the bloody comma in the right place (maybe), and surface a note or two that lets me come through more clearly for you.
Whether that’s augmentation or something more complicated is one of the questions we’ll wrestle with here together.
The posts here will range: some examining specific findings, others grappling with value systems, some raising questions and leaving them open. I’m following the evidence and the curiosity, trying not to land anywhere prematurely. The hope is that we navigate this together.
If that sounds like something worth following, come along for the ride.
— Anthony /cs


