When AI Writes the Report: Why Competency Still Matters

Introduction

Artificial intelligence is reshaping how organisations work, yet the recent Deloitte incident in Australia has shown that progress without process can backfire. Deloitte was engaged by the Australian Department of Employment and Workplace Relations to review its Targeted Compliance Framework, which governs welfare compliance systems. The report it delivered contained fabricated references, false citations, and even a non-existent legal quote. Once discovered, Deloitte acknowledged that generative AI had been used in the drafting process, issued an apology, refunded part of its AU$440,000 contract, and released a corrected version (1)(2)(3).

This episode is not merely about an error in a report; it is a signal of a broader challenge. As AI becomes embedded in professional workflows, firms must ask whether they still have the right people in the right seats, and whether those people are trained to recognise when something simply does not make sense.

What Happened and Why It Matters

The Deloitte report was meant to analyse a compliance framework for welfare recipients. Instead, it became an illustration of how easily generative AI can produce plausible but false information. Reviewers, including University of Sydney researcher Dr Christopher Rudge, found that several citations did not exist, and that internal quality assurance procedures had been skipped. Deloitte confirmed the use of Microsoft’s Azure OpenAI service and conceded that standard review processes “were not followed” (3)(4).

The issue was not the technology itself but overreliance on it. AI can generate convincing, polished text with ease, but it cannot confirm factual accuracy or interpret professional nuance. Without experienced reviewers who understand the content in depth, false information can appear credible and slip through even rigorous systems of control.

Competency and the Right Seat Principle

Every organisation depends on who is doing the work. Having the right person in the right seat means more than filling a role; it means ensuring that those responsible for review and sign-off have the expertise to challenge what they see.

Subject matter experts provide the final safeguard between plausibility and truth. They bring the intuition that detects subtle inconsistencies, the judgment that questions assumptions, and the experience that verifies alignment with reality. In Deloitte’s case, that layer of scrutiny faltered.

Automation should never replace human discernment. When firms use AI without reinforcing the competency of their people, they risk producing work that is fast but fragile.

Training a Workforce that Never Lived Without AI

A new generation is entering the workforce having never known a world without AI-based tools. They have grown up with auto-correction, instant summaries, and content generators that produce well-structured text at the touch of a button. The risk is not laziness but the erosion of the deeper skills that underpin professional competence: logical reasoning, evidence checking, and the ability to write and think independently.

Employers now face a dual responsibility: to teach how to use AI effectively, and to ensure their people can function without it. The discipline of constructing arguments, verifying data, and forming opinions based on understanding must remain central. AI can assist, but it cannot think.

The balance lies in preserving critical thinking while embracing technological support. Training should not only teach which tools to use, but also when not to use them.

Maintaining Quality through Oversight and Structure

For industries such as consulting, engineering, mining, oil and gas, and renewable energy, the lesson is simple: quality depends on structure. Key practices include:

  1. Clear accountability: Assign responsibility to qualified reviewers who verify content accuracy and internal consistency.

  2. Defined QA/QC steps: Establish structured checkpoints for factual and methodological verification.

  3. Transparent disclosure: Tell clients when AI tools have been used to assist in drafting or analysis.

  4. Independent reasoning: Encourage exercises that require manual work to maintain the ability to write, analyse, and question without digital aids.

  5. Reward diligence: Recognise accuracy, integrity, and challenge as core professional values, not obstacles to efficiency.

In technical domains such as mineral processing, pipeline design, or renewable energy infrastructure, these practices can mean the difference between sound reporting and costly error. A fabricated citation in a policy paper may be embarrassing, but a false assumption in an energy model, flow analysis, or feasibility study can have real safety, financial, and environmental consequences.

Responsible Use of AI: Our Own Perspective

At Intelligenciia, we use AI as a support tool within our editorial process. Our editor has dyslexia, and AI assists in ensuring that grammar, structure, and readability meet a professional standard. The thought, originality, and research remain entirely human.

This is where AI shows its true potential: as an assistive tool that increases accessibility and allows skilled people to work to their strengths. When used responsibly, AI is not a shortcut to content creation; it is an instrument that removes barriers, improves clarity, and saves time without compromising the human thinking behind it.

Our use of AI reflects what we believe to be the ideal balance: automation should enhance capability, not replace it.

Learning from Other Sectors

The lessons extend beyond consulting. In 2023, two New York lawyers were sanctioned for submitting a legal brief that included fictitious case citations generated by ChatGPT, a case known as Mata v. Avianca. The court fined the lawyers and their firm a total of US$5,000 for a lack of technological competence and professional oversight (5).

In media, CNET found factual errors in over half of its AI-written stories and issued corrections on 41 of 77 published articles (6). Academic publishers such as Nature and The BMJ have since introduced policies requiring AI disclosure to maintain research integrity (7)(8).

Across all these cases, the pattern is the same: AI can accelerate productivity, but only human judgment ensures reliability.

A Future Built on Competency and Accountability

AI will continue to expand its role across professional services, energy, and resource industries. Yet the foundation of every credible organisation remains the same: skilled people, well-trained in their craft, supported by sound process.

The Deloitte incident serves as a reminder that expertise cannot be automated. Firms that treat AI as a partner, guided by ethical oversight and human verification, will strengthen both quality and trust. Those that use it as a replacement for understanding risk eroding both.

The next generation must learn to think with AI and without it. Critical thinking, factual verification, and domain knowledge are still the anchors of professionalism. AI may change how we work, but not why quality matters.



References:

  1. Deloitte to partially refund Australian government for report with apparent AI-generated errors, AP News.

  2. Deloitte to pay money back to Albanese government after using AI in $440,000 report, The Guardian.

  3. Deloitte will refund Australian government for AI hallucination-filled report, Ars Technica.

  4. Deloitte apologises for flawed AI-generated report, Australian Financial Review.

  5. Lawyers sanctioned for submitting fake AI-generated legal citations in Mata v. Avianca, CBS News.

  6. CNET retracts AI-written stories after factual errors discovered, The Verge.

  7. Tools such as ChatGPT threaten transparent science: Nature’s ground rules for use, Nature.

  8. Publishers’ and journals’ instructions to authors on AI-generated content, The BMJ.
