
The Governance Gap: What Deloitte's $440k Blunder Reveals About Your AI Exposure


Deloitte Tower in Auckland CBD - representing the governance gap exposed when Deloitte delivered AI-generated content with fabricated sources to the Australian Government



Executive Summary 

When one of the world's leading consultancies delivered a government report that contained fabricated quotes and phantom sources, the AI technology wasn't the problem. The governance system was. Deloitte's AU$440,000 blunder for the Australian Government exposed a gap every organisation now faces: oversight frameworks built for traditional work can't keep pace with AI-assisted outputs. 


The lesson isn't ‘don't use AI’. It's that capability and governance must evolve as fast as adoption. Deloitte admitted their quality checks failed, issued a corrected version and refunded part of the fee. But the reputational damage was global and immediate. 


This Insight reveals what went wrong, why traditional oversight fails with AI-assisted work, and how the T-A-C-T framework (Transparency, Accountability, Capability, Trust) closes the governance gap before reputational damage compounds. 


Your governance system can't tell the difference between a real citation and a fabricated one. Neither could Deloitte's. 


One of the world's leading consultancies just delivered a government report riddled with AI hallucinations. AU$440,000. The Australian Department of Employment. A document that couldn't survive a basic fact-check. 


Most people saw it as proof AI can't be trusted. That's the wrong lesson. 


AI didn't fail Deloitte. Their governance did. AI generated plausible-sounding content: court quotes that never existed, academic papers that couldn't be found. The humans didn't catch it before publication. Oversight processes existed but weren't applied rigorously enough. 


Here's the uncomfortable truth: your organisation probably has the same gap. 


Most governance frameworks were designed for human-created work where errors are obvious, authorship is clear and verification happens naturally. AI breaks those assumptions. It produces polished, confident outputs at speed, outputs that can be subtly wrong in ways that aren't immediately visible. 


The question isn't whether your team is using AI. They are. Research shows staff use AI significantly more than leadership realises. Much of this use is Shadow AI: tools and systems staff adopt without formal approval, often using personal accounts and uploading company data to public AI platforms. 


The question is whether your governance, quality assurance, and capability have kept pace. 

For most organisations, they haven't. 

 

Why AI Failures Hide in Plain Sight 

The Deloitte episode isn't an isolated glitch. It's an early signal of what happens when adoption outruns oversight. 


Across both public and private sectors, AI is writing first drafts of everything: policy briefs, board papers, client deliverables and marketing content. In many cases, those drafts are being lightly edited and published without the depth of verification that traditional outputs received. 


Why? Because the outputs look professional. They sound confident. And everyone's moving fast. 


But AI has a critical flaw that humans don't: it can fabricate with the same polish it uses for truth. It doesn't know the difference between a real citation and a plausible-sounding one. It can't tell you when it's guessing. 


That creates a new risk curve. Traditional work fails slowly and obviously. AI-assisted work can fail quickly and invisibly, until someone looks closely. 


When those failures carry your organisation's name, the reputational cost multiplies fast. One unverified claim in a client report. One fabricated statistic in board papers. One AI-generated "fact" that contradicts reality. 


Your competitors aren't slowing down. Neither are your teams. The organisations that win will be the ones who figure out how to move fast without breaking trust. 


That requires closing four interconnected gaps: Transparency, Accountability, Capability and Trust. 

 

How the T-A-C-T Framework Can Close Your Governance Gap 

Effective AI governance isn't about restriction. It's about creating the conditions where speed and safety reinforce each other. 


Four pillars make that possible: 


T-A-C-T Framework: building Trust in AI through Transparency, Accountability and Capability. The framework emphasises informed judgment, human accountability and open usage to establish trust.

Transparency 

Transparency means everyone can see how AI is being used and why. 


Internally, leaders need to know which tools are active across the organisation, what work they're supporting and where high-risk outputs are being created. Not through policy documents, but through actual visibility into who's using what and why. 


Externally, stakeholders need to understand where AI contributes to decisions that affect them. Not legal disclaimers. Genuine clarity about how and why AI is used. 


In practice: Not just an ‘AI activity map’, but a risk register that forces leaders to ask: “Where is AI being used for high-stakes outputs? Who owns those outputs? What verification exists?” This isn't about tracking tools. It's about identifying exposure. 


Accountability 

AI can assist. But it can't be accountable, at least not yet. 


Every AI-assisted output needs a human owner who understands the technology's limits and takes responsibility for the result. That owner needs to know when AI-generated content requires deeper verification, understand the difference between summarisation (lower risk) and creation (higher risk), and follow clear sign-off protocols for high-stakes outputs. 


As organisations mature, elements of verification can be automated through risk registers and quality controls. But ownership never disappears; it just becomes better informed. 


In practice: The person who submits the work owns it. Full stop. Before any client-facing document or board paper is released, someone signs their name, not ‘reviewed by the team’. This person is accountable for accuracy, whether AI-assisted or not. 


Capability 

Here's where most organisations fail. 


You can have perfect transparency. Perfect policies. Clear accountability. But if your people don't understand AI's strengths, weaknesses and failure modes, governance collapses at the point of execution. 


Capability isn't about technical skills. It's about judgment at every level, starting at the top. 


Business leaders can't govern AI effectively if they don't understand its limits. Boards and execs need fluency first: not learning to write prompts, but understanding when AI outputs need independent verification, where risk is highest and what good governance looks like in practice. 

From there, it cascades. Teams need to know when to trust AI outputs and when to verify independently. Not generic training, but role-specific judgment about what good looks like for different types of work. 


The Deloitte failure reveals a verification breakdown. Multiple reviewers missed fabricated content because traditional quality assurance wasn't designed to catch AI-specific errors. Whether that's a capability gap, a process gap, or both, the result is the same: governance systems need to evolve. 


In practice: Structured capability building that starts with executive fluency, then cascades through the organisation based on role and risk exposure. Not one-size-fits-all workshops, but targeted development on verification protocols and quality assurance for high-stakes outputs. 


Trust 

Trust is the outcome when the first three pillars work together. 


We need to trust the AI systems we deploy, through testing, governance and clear audit trails. Our stakeholders need to trust how and why we use them, through transparency, accountability and demonstrated capability. 


Trust isn't built by writing policy. It's built by proving governance actually works: errors get caught, quality doesn't slip and speed doesn't cost credibility. 


In practice: Think like health and safety. How much damage could this tool do if misused? Quarterly reviews aren't about catching people out; they're about proving to your team that risk awareness is valued, that responsibility is clear and that governance is working. Close gaps before they become crises. 

 

What Actually Went Wrong: The Deloitte Breakdown 

Let's be clear about what happened. 


The Australian Department of Employment and Workplace Relations commissioned Deloitte to review parts of the welfare compliance system. The expectation was straightforward: a rigorous, evidence-based report. 


What they received was a polished document with: 

  • A fabricated court quote that never existed 

  • Academic citations to papers that couldn't be found 

  • AI-generated material woven seamlessly into professional analysis 


The technical failure was small. AI generated plausible content. The governance failure was systemic. Multiple people reviewed the document. None caught the fabrications before publication. 

When the errors surfaced publicly, Deloitte accepted responsibility, refunded part of the AU$440,000 fee and issued a corrected version. The department noted the core findings remained sound. 

But the damage was done. 


This wasn't a junior analyst making a mistake. This was an entire quality assurance system failing to adapt to AI-assisted work. The framework assumed human-created content with human-visible errors. AI broke that assumption. 


The uncomfortable question: how many similar errors are sitting in your organisation's recent outputs, undetected? 

 

Close the Gap in 4 Weeks 

Every organisation can learn from Deloitte's mistake. The fix isn't more policy; it's practical governance and capability embedded into daily work. 


Week 1: Map Your AI Exposure 

Most organisations underestimate how widely AI is being used, including Shadow AI that leadership doesn't even know about. Start with visibility by creating an open environment where people can share without fear: 


  • CEO/MD: Commission a rapid ‘AI activity scan’ across all departments. Make it clear this isn't about catching people out; it's about building trust and understanding. Ask: What tools are active? What work do they support? Who's responsible for outputs? 

  • Leadership team: Review the results together. Identify where high-risk outputs (client deliverables, board papers, compliance documents) intersect with AI use, including unofficial tools people are using quietly. 


Outcome: A one-page map showing where AI is active (official and Shadow AI) and where governance gaps exist. The goal is transparency, not punishment. 


Week 2: Assign Accountability

Not all AI use carries the same risk. Differentiate between: 

  • Low-impact: Internal summaries, draft emails, research assistance 

  • Medium-impact: Internal analysis, planning documents, operational reports 

  • High-impact: Client deliverables, board papers, public communications, compliance documents 


For each category, assign clear ownership and verification protocols. This requires open conversations where people can share what they're actually doing, not what they think leadership wants to hear. 


Build Your Risk Register 


  • Strategy lead: Draft a living risk register categorising all active AI uses by risk level. Include owner names, verification requirements and sign-off protocols. Build this through conversation, not audit (a minimal sketch of one possible structure appears at the end of this step). 

  • HR/Operations: Ensure every high-impact output has a named owner who understands they're accountable for accuracy and compliance, particularly where personal information is involved. 


Outcome: Clear accountability for every AI-assisted output that matters, built on trust rather than fear. 
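
The register itself can live in whatever format your teams already use; a shared spreadsheet with consistent columns is usually enough. For illustration only, here is a minimal sketch of how the same register could be held as structured data and scanned for gaps. The field names, risk categories and helper function are assumptions for the example, not a prescribed standard.

```python
# Illustrative sketch only: one possible shape for a living AI risk register.
# Field names and categories are assumptions, not a prescribed standard.
from dataclasses import dataclass
from enum import Enum


class Impact(Enum):
    LOW = "low"        # internal summaries, draft emails, research assistance
    MEDIUM = "medium"  # internal analysis, planning documents, operational reports
    HIGH = "high"      # client deliverables, board papers, public communications


@dataclass
class RegisterEntry:
    use_case: str            # what AI is being used for
    tool: str                # which tool or platform (official or Shadow AI)
    impact: Impact           # risk category from the list above
    owner: str = ""          # named human accountable for accuracy
    verification: str = ""   # checks required before release
    sign_off_required: bool = False


def governance_gaps(register: list[RegisterEntry]) -> list[RegisterEntry]:
    """Return high-impact uses that still lack a named owner or a verification step."""
    return [
        entry for entry in register
        if entry.impact is Impact.HIGH and (not entry.owner or not entry.verification)
    ]


if __name__ == "__main__":
    register = [
        RegisterEntry("Drafting client report sections", "Public chatbot (personal account)",
                      Impact.HIGH),
        RegisterEntry("Summarising internal meeting notes", "Approved assistant tool",
                      Impact.LOW, owner="Team lead"),
    ]
    for entry in governance_gaps(register):
        print(f"Gap: '{entry.use_case}' is high-impact with no owner or verification protocol.")
```

Whatever the format, the test is the same: every high-impact use has a named owner and an explicit verification step before anything carries your organisation's name.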


Week 3: Build Leadership Fluency (Then Cascade Capability) 

Policy without capability fails at the point of execution. But capability building must start with leadership; executives can't govern what they don't understand. 


The Deloitte case is rare; most organisations aren't rushing into AI. They're stuck. Some block it entirely. Others allow tinkering without oversight. Both create governance gaps. 

Here's what we've learned: leadership that blocks AI is leaking potential. Leadership that only tinkers can't govern it. 


  • Leadership team first: Bring your exec team together for hands-on AI immersion. Not governance theory, but actual tool use. Experience the risks (data exposure, hallucinations) and opportunities (decision support, pattern recognition) firsthand. Leaders who have used AI can design governance that enables, not restricts. 

  • Then cascade to teams: For all staff producing high-impact outputs, deliver targeted capability building on verification discipline, covering what checks are needed for different output types, when to verify independently and how to structure quality assurance. 


This isn't generic AI training. It's building judgment from the top down: execs who understand the tools, then teams who know how to use them responsibly. 


Outcome: Leadership fluency that drives governance design. Your team knows how to verify AI outputs appropriately for their role and risk level, because leadership modelled it first. 


Week 4: Establish Your Trust System 

Governance only works if it's actively maintained. Create simple, recurring checks that prove risk awareness is valued: 


  • Quarterly: Leadership samples 5-10 AI-assisted outputs across the organisation. Validate that verification protocols were followed. Identify gaps and adjust. 

  • Board-level: Brief directors on where AI is being used, how governance is working and what capability investments are planned. 


Outcome: Governance that actually works, not just policy docs on a shelf. Trust is built through consistent demonstration that the system works. 

 

The Choice: Speed or Trust? Both. 

The temptation after any AI failure is to slow down. Tighten controls. Limit access. 

That's the wrong response. 


The organisations that win won't be the ones who moved fastest with AI. They'll be the ones who moved fastest with confidence, because their governance, capability and oversight kept pace with adoption. 

Deloitte's mistake reveals what happens when speed eclipses oversight. But the inverse is equally true: organisations that build Transparency, Accountability, Capability and Trust create resilience. They move faster because their governance is strong, not in spite of it. 


AI isn't going away. Neither is the pressure to adopt it. The only question is whether your organisation builds the capability and safeguards to use it well, or whether you're the next cautionary tale. 

Your governance wasn't built for AI. But it can be. 

 

Start with visibility: Use our free Shadow AI Diagnostic Tool 

Most organisations underestimate their AI exposure by 3x. Our diagnostic helps you map where AI is actually being used across your organisation, including the tools leadership doesn't know about yet. 

Get the free tool and start building transparency today. 

 

About the Author 

Mark Lucas, Senior Partner at Generation AI, a business leader with 25+ years' experience scaling companies from startup to NZX listing

Mark Lucas is Senior Partner at Generation AI, where he works with New Zealand business leaders and boards to build AI capability, governance, and strategic advantage. He specialises in helping organisations adopt AI without sacrificing trust or credibility. 

Information provided is general in nature and does not constitute legal advice. 

 
 
 
