The Moment That Stopped Me
I’ve been tracking AI developments daily for two years. New models. New releases. New capabilities. I’ve made it a point to stay current—not because it’s required, but because understanding what’s changing helps me have better conversations with distribution leaders about what’s worth paying attention to.
Last week, I came across something in Gemini I hadn’t seen before.
My first reaction was that familiar feeling—I’m falling behind.
That feeling lasted about 30 seconds.
Then I caught myself. I’ve been deep in this space for two years. If I can miss a Gemini update, anyone can. And if that’s true, then the anxiety around keeping up isn’t a personal failure, it’s just the nature of the moment we’re in.
The Pace Isn’t the Problem
OpenAI, Google, Anthropic, Meta—they’re all moving fast, and they’re all moving at the same time. The gap between major releases has gone from months to weeks. What’s state-of-the-art in January is table stakes by March.
For distribution executives already running operations, managing teams, and dealing with margin pressure, keeping up with every model release isn’t realistic. It’s not even valuable.
The people building the models are competing to close capability gaps. That competition benefits you. Whatever tool you start with today will be meaningfully better six months from now. But only if you start.
Waiting for the right model is a way of not starting. It feels responsible. It’s not.
What Model Fluency Actually Looks Like
Here’s what I’ve observed in the distribution leaders who are genuinely making progress with AI. They’re not the ones who’ve assessed every platform. They’re the ones who’ve gotten genuinely good with one or two.
They’ve learned how to give the tool context. They know that a vague prompt returns a vague answer, and that the way you frame a question determines the quality of what you get back. They’ve started treating the tool like a capable colleague who needs to be briefed properly—not like a search engine you throw keywords at.
They also know when to push back on bad output. That skill only comes from repetition. You can’t develop it by reading about AI. You develop it by using AI, getting something wrong, and figuring out why.
That’s model fluency. It’s not knowing which LLM scored highest on a benchmark. It’s knowing what to do when the tool gives you something that doesn’t feel right.
The Distraction Hidden in the Headlines
There’s a real trap in following AI news too closely. Every release comes with claims. Every new version is positioned as a leap. And if you read enough of it, you start to feel like the right move is to wait—to let the dust settle before committing to anything.
The dust is not settling. That’s not how this works.
The distribution leaders I talk to who are furthest ahead didn’t wait for consensus. They picked a tool, started using it for real work, got comfortable with how it responded, and built from there. Some of them started with ChatGPT. Some started with Claude. A few started with Gemini. None of them picked perfectly.
All of them are ahead of the ones who are still deciding.
What This Means for Distribution Operations
In distribution, we understand the difference between a decision that needs to be perfect and a decision that just needs to be made. You don’t wait for a perfect carrier before you build a logistics network. You don’t wait for perfect demand data before you set inventory levels. You make the best call you can with what’s in front of you and you adjust.
AI adoption works the same way.
The model you choose today doesn’t lock you in. The habits you build—how you prompt, how you verify output, how you integrate AI into daily work—those transfer. A team that learns to use one model well is a team that can adapt when a better one comes along. A team that never starts has nothing to transfer.
The competitive gap isn’t between distributors using GPT-4.5 and distributors using Claude 3.7. It’s between distributors using AI and distributors still watching from the sideline.
The Only Question That Matters
I’ve missed updates. I’ll miss more. The space is moving faster than anyone can fully track, including people who spend considerable time tracking it.
That’s not a reason to disengage. It’s a reason to stop treating model selection like a strategic decision and start treating it like the operational one it is.
Pick something. Use it on real work. Get good at it. Then pay attention to what’s changing and upgrade when it makes sense.
The question isn’t which model is best. The question is which one you’re using.
Because two years from now, the leaders who figured that out early are going to look vastly different from the ones who waited for the right answer.
Applied AI for Distributors Conference
If you’re ready to move from watching to doing, join us in Chicago, June 23–25, 2026. The Applied AI for Distributors Conference is built for distribution leaders who want to see what applied AI looks like in real operations—not demos, not theory. Visit appliedaifordistributors.com to learn more.
As Chief Operations Officer of Distribution Strategy Group, I'm in the unique position of having helped transform distribution companies and now collaborating with AI vendors to understand their solutions. My background in industrial distribution operations, sales process management, and continuous improvement provides a different perspective on how distributors can leverage AI to transform margin and productivity challenges into competitive advantages.