
The AI Bubble Is a Distraction: Here's What Really Matters

Updated: 6 days ago



Executive Summary 


Lately we’ve been hearing a lot of noise speculating about an AI bubble. Concern is rising that AI is an overhyped gimmick that will come and go. But this is an illusion, stemming from misunderstanding and fear. AI is inherently as revolutionary as world-changing technologies like the computer or the internet. It is steadily becoming integrated into most areas of our lives, growing in potential as it becomes more complex and developed.

 

While leaders debate whether we're in an AI bubble or on the cusp of AGI (Artificial General Intelligence)*, their organisations are either rushing in unprepared or freezing in paralysis. Speculating on the hype diverts focus from what's already happening within your organisation.

 

The evidence is clear. Teams that integrate AI into knowledge work complete tasks about 25% faster with roughly 40% higher quality. Yet around 60-70% of organisations fail to scale AI beyond pilots, not because the technology failed them, but because they skipped the foundation.

 

Rushing in or freezing on AI skips the same critical step: building a trust foundation that makes AI work. Trust that your people can use AI safely. Trust that your governance protects without throttling. Trust that your outputs are verifiable. 

 

AI doesn't need to evolve further to transform your business. The technology is already good enough to create incredible efficiency. The gap is readiness, not capability, and that gap is widening daily. 

 

This Insight shows you how to integrate AI with transparency and trust. We introduce the Value Capture Model: a three-step capability sequence (Fluency → Scaffolding → Measurement) that helps you walk before you run.

 

It's not about racing to deploy. It's not about waiting for perfect certainty. It's about building readiness while competitors are still arguing about the bubble. 


 

You Don’t Need to Know Everything About AI. You Just Need to Start Right.


If you feel like you can't keep up, you're not alone. Every headline contradicts the last. Markets are swinging on AI's projected future, vendors are relabelling everything as AI, and half your team are using tools they can't explain. Most leaders can't define what AI actually is, so they do one of two things: they rush in unprepared so as not to be left behind, or they freeze and do nothing at all.


Both responses fail. And they fail for the same reason: they skip the foundation. 

 

AI strategy isn't built on tools or timelines. It's built on trust. Without that foundation, you get the failure rate everyone's talking about. Not because the technology failed, but because the organisation wasn't ready. 

 

Two debates are drowning out signal: 

"AGI is near; everything changes." 

"AI is overblown; the emperor has no clothes." 

 

While those arguments rage, something quieter and more important is happening inside your organisation: your people are already using AI. The question isn't how the market prices AI companies or software. It's whether you can capture the value that's already available in your workflows this quarter. 

 

Key Problem 

Executives are operating in a decision fog. Budgets and attention are limited. Headlines say "no measurable ROI" one week and "40% productivity lift" the next. It's rational to pause. It's also how organisations fall behind. 

 

Three things are true at once: 

  1. Some pilots haven't scaled or delivered material value because tools were rolled out without capability and governance, resulting in lots of policies and little impact. 

  2. Teams that do integrate AI into knowledge work report meaningful time and quality gains. They’re not ahead because of better software, but because of smarter systems and habits.

  3. AI is already here, and through unapproved tool use, adoption is far higher than most leaders realise, creating both risk and opportunity. If you're not surfacing it, you're not governing it.

 

Here's the real problem: most organisations are treating AI like a sprint when it's a foundation-building exercise. They're either rushing to deploy before they're ready (skipping capability, governance and trust) or freezing because they don't know where to start. Both responses lead to the same outcome: most AI initiatives fail to scale. 

 

The gap isn't the technology. It's readiness. You can't run before you can walk. And walking means building trust in your people's capability, in your governance systems and in your ability to verify outputs before they reach customers or boards. 

 

The risk isn't over- or underestimating AI's future capabilities, or whether there is a financial bubble. The real risk is moving without a foundation, or not moving at all, while competitors build capability that compounds.

 

Strategic Context: What Leaders Can Control (and What You Can't) 

Financial markets will do what they do. Your job is to convert capability into outcomes. That requires confronting two realities: 

 

Today's AI is already good enough to change how work gets done if you structure it correctly. 

 

Leaders who treat AI as a thinking partner (not just an automation tool) see faster analysis, richer options and stronger decision confidence. The technology doesn't need to evolve further to be transformative. It just needs to be implemented with the right foundation. 

 

AI still struggles with context. Even if an AI model outperforms a person on isolated tasks, it doesn't hold the whole picture like a team does. That's why the system around the model, or scaffolding, matters: workflows, domain knowledge, guardrails and when not to use AI. 

 

Add the Shadow AI factor: employees are experimenting, sometimes safely, often not. Leaders who ignore it lose visibility and control; leaders who surface it build trust and accelerate learning. 

That's why the next section focuses on the three levers leaders can control today. 

 

Framework / Core Model 

The Value Capture Model: Fluency → Scaffolding → Measurement 


This is how you build the trust foundation that makes AI work. Skip a step, and you're building on sand. 

 

1) Fluency (Leadership & Team). Use it to think, not just to type. 

What it is: Hands-on capability to use AI for analysis, exploration and decision support, not just for drafting emails. 

Why it matters: Research shows that when knowledge workers use AI effectively, they complete tasks faster and with higher quality. That effect shows up when leaders model the behaviour, using AI to stress-test assumptions, simulate stakeholders and expand options. This is where trust begins: leaders who understand the tool can govern it effectively. 

 

How to build it: 

  • Leadership labs: Structured leadership development that deepens capability, where the leadership team explores real decision scenarios, analyses customer or board perspectives, challenges assumptions and develops confidence in using AI as a thinking partner. Leaders must experience AI's strengths and limitations firsthand. 

  • Safe disclosure: Invite teams to show how they're already using AI; recognise wins; surface risks. Build a learning culture, not a compliance culture. Create psychological safety around experimentation: this isn't a witch hunt, it's a capability audit. Name 3–5 internal champions who are already using AI effectively. Learn from them. 

  • When not to use AI: Develop the skill to recognise when AI adds little or no real value and instead creates risk or noise. Understanding when and why not to use AI is a core capability that builds discipline and trust. 

 

2) Scaffolding (Context & Governance). Wrap the model in the business. 

What it is: The structure that aligns AI use with real business goals through trusted guardrails: domain briefs, structured prompts, workflow checkpoints, human review and policy guardrails that enable speed safely.

Why it matters: Scaffolding links AI work to real business objectives, ensuring it delivers measurable value instead of noise or risk. Without it, outputs drift; with it, they compound. Governance is not paperwork; it's how value is created safely. This is where trust is operationalised: clear guardrails that protect without throttling.

How to build it: 

  • Context briefs: For each priority use case, create a one-pager with objectives, definitions of "good", key constraints and the data sources to use or avoid. 

  • Workflow design: Map where AI sits in the process (input → model → review → sign-off). Add pattern-interrupts, for example, a quick ‘devil's advocate pass’ before finalising. 

  • Governance anchors: Keep it simple: Transparency → Accountability → Trust (who used AI, for what, who approved it). Set data-handling rules and clarify "when not to use AI" boundaries so that expectations are transparent and trusted.

 

3) Measurement (Proof & Iteration). Make value visible. 

What it is: A small set of metrics that prove outcomes and guide iteration. 

Why it matters: Without proof, AI stays in the ‘interesting experiment’ box. With proof, adoption scales. This is where trust is validated: you can show stakeholders (board, customers, investors) that AI use is delivering measurable value, not just activity. 

How to build it: 

  • Leading indicators: Decision cycle time, options considered per decision, assumption validation rate, percentage of work reviewed on time. 

  • Lagging indicators: Win rates, turnaround time, cost-to-serve and customer satisfaction for affected journeys. 

  • Simple A/Bs: For one workflow, run with scaffolding and without; compare cycle time and rework. 

  • Share learnings: Publish internal case studies. Show what worked, what didn't, and why. Build confidence across the organisation. 

 



 

Unearned Value

Use this before approving any AI project to keep decisions anchored in outcomes. 

 

Before green-lighting an AI idea, ask:

  1. What business problem are we solving? 

  2. What context will the model need to be reliably useful? 

  3. How will we measure decision quality, not just speed? 

  4. When should a human stop, review or override? 


If you can't answer, you're not ready to ship, regardless of market pressure to adopt AI. Walk before you run. 

 

What Good Looks Like 

What does trusted AI use look like in practice? Leaders who use AI to stress-test strategy before board meetings and can explain their process. Teams who document AI-assisted decisions with rationale and sources. Workflows where cycle time drops but decision confidence rises because more options were explored and weak assumptions were caught early. Governance that enables speed rather than throttling it. Boards and customers who trust your outputs because you've built the verification infrastructure. That's earned capability, and it compounds.

 

Case-in-Point (Composite) 

Two similar mid-market firms faced the same noise. 


Company A oscillated between rushing (deploying tools without governance) and freezing (pausing everything when the market or outcomes turned negative). Shadow AI grew anyway. A year later, they had scattered usage, no guardrails, rising risk and no proof of value. Their board lost confidence. Their best talent left. 


Company B ignored the market speculation and focused on building the foundation. They ran a safe harbour survey to surface Shadow AI. Leadership spent three months building fluency in real decision contexts. They built context briefs and workflow scaffolding for one high-impact use case. They measured cycle time and decision quality. Within six months, approval turnaround dropped by about 35%, decision confidence improved (measured by options explored and assumptions validated), and the model scaled to adjacent workflows. Their board gained confidence. Their talent stayed and recruited others. 

 

Same market conditions. Different approach. Different trajectory. Company B walked before they ran. Company A didn't. 

 

Conclusion 

Bubbles come and go. Capability compounds. If you redirect attention from market noise to fluency, scaffolding and measurement, you'll capture value now and be better positioned for whatever comes next. 

 

The window to lead this shift is narrowing. Your competitors aren't waiting for perfect certainty, but the ones who win aren't rushing either. They're building the foundation first. They're walking before they run. They're earning trust before they scale. 

 

Most AI initiatives that fail do so because the organisation skipped the foundation. Don't be one of them. 

 

Bubble or not, readiness wins. 

 

*AGI: Artificial General Intelligence is the ability of a system to understand, learn and apply intelligence to any intellectual task that a human can.

 

Ready to Walk Before You Run? Start with Visibility. 

Most organisations underestimate their AI exposure by 3x. Before you can build trust, you need to see what's actually happening. Our free Shadow AI Discovery tool helps you map where AI is being used across your organisation, including the tools leadership doesn't know about yet. 

 

In 5 minutes you'll get: 

  • Map of AI tools staff are using and why 

  • Risk overview: data, IP, compliance 

  • Opportunities to turn unauthorised AI into strategic capability 

 

This is Step 1 of the capability sequence. You can't govern what you can't see. 

 

About the Author 

Mark Lucas is Senior Partner at Generation AI, where he works with New Zealand business leaders and boards to build AI capability, governance, and strategic advantage. He specialises in helping organisations adopt AI without sacrificing trust or credibility.


Information provided is general in nature and does not constitute legal advice.

 
 
 


