(Three Years After the Hype Started)
Back in 2023, the world was captivated by the public launch of generative AI. For professionals in every field, the conversation was a mix of hype, curiosity, and skepticism. The central question driving most early experimentation was, “What can this actually do to make my life better?” That’s certainly what we were asking ourselves here at S3 McMillan.
Fast forward to January 2026, and we’re still asking that question – but we’ve also had enough time to draw some conclusions. It’s not that AI has stopped surprising us with new possibilities (it does, daily). It’s that the novelty has worn off and AI has simply become part of our lives – at work and at home.
A recent strategic workshop at our creative agency revealed a common reality: this technological revolution is as much a human reckoning as a technical one – a reckoning with what these tools are actually useful for, and where they fall short.
Here are the top 5 AI insights that surfaced during our deep dive:
1. AI Became Embedded, Not Revolutionary
The most significant change by 2026 is how utterly normal AI has become. Standalone AI tools that teams once debated using have mostly given way to AI capabilities built directly into the software we’re already using. It’s gotten to the point that if your fill-in-the-blank tool doesn’t have some sort of AI assist, you wonder why.
This shift has fundamentally changed the conversation inside our agency. The 2023 question, “What can this do?” has been replaced by a more practical one: “When and how should we use this best?”
The new dynamic is less about revolutionary change and more about quiet integration.
2. The ‘Human Premium’ Became the Ultimate Differentiator
We expected this one to be true – and we’re glad we were right. As AI made producing average-quality work cheaper and faster, genuinely strong human work became more valuable. Those who were out in front with purely AI-created work had the advantage of novelty, but that advantage quickly faded.
There’s a growing recognition that when the effort behind the work is purely algorithmic, you lose that connection with the audience. Yes, the output may be acceptable…but the connection is missing.
When AI is in the mix, discernment holds more value than execution. Human-led thinking is the differentiator…how ironic.
3. It’s Not a Magic Bullet (and Sometimes It Just Makes More Work)
Three years after the initial hype, the practical limitations of AI are easier to see. While it can (and does) offer efficiencies and reach beyond human capacity, it’s not a shortcut for everything. More than that: in some cases, AI introduces friction rather than removing it.
For certain tasks, AI has added a layer of “review overhead,” forcing humans to spend valuable time verifying and fixing machine-generated work. Checking outputs, fixing errors, and making judgment calls can’t be automated away (at least not yet).
4. The Conversation Shifted from “Is It Real?” to “Do We Trust It?”
The nature of risk has evolved dramatically. In 2023, concerns felt “technical or legal”: copyright infringement, data privacy, and model weaknesses. By 2026, with AI outputs increasingly indistinguishable from human work, we have added “cultural, psychological, and trust-based” risks to the pile.
Authorship and accountability are less clear. Backlash and “AI shaming” are real threats for brands that appear inauthentic or non-transparent in their use of AI. In response, some organizations have leaned into transparency, even labeling work as AI-assisted to manage customer expectations and mitigate risk.
A few years ago, people asked, “Is this real?” Today, the far more important question has become, “Do we trust the source anyway?”
5. Our Relationship With AI Got… Personal
As AI tools have become “a professional on your shoulder,” a strange and surprisingly personal dynamic has emerged. The lines are blurring between user and tool.
There is a peculiar new reality where we are developing relationships with our software, and the way we engage with it directly shapes the output. No matter how sophisticated or sensitive the AI seems, the accountability remains entirely human.
AI mistakes are judged as human failures.
Conclusion: Beyond the Machine
The journey to 2026 shows that the true work of AI integration wasn’t just technical. It was (and still is) cultural. It’s about how we are deliberately shaping our values, workflows, and standards of quality in response.
AI is now an ever-present partner, and that’s unlikely to change. And while we can’t predict the next seismic AI advances, we can confidently say that progress won’t just be about what the machines can do. It will come just as much from the demands we place on ourselves for how we decide to use them.
We believe AI’s impact won’t be defined by how advanced the technology becomes, but by the human-generated standards we set for using it. The real advantage will come from discernment – when to automate and trust vs. when to rely on human judgment, context, and craft. In an AI-infused future, those who master deliberate use – not sheer adoption – will be the winners.