
The technology is genuinely transformative. But knowing where to deploy it—and where not to—is what separates cost savings from costly mistakes.
If you’ve sat through a vendor presentation recently, you’ve probably heard that AI will revolutionise your compliance programme. Fewer people, faster testing, dramatic cost reductions. The pitch is compelling.
Some of it is even true.
I’ve spent the past several years integrating AI and process automation into controls testing programmes for organisations ranging from multinationals to PE-backed growth companies. We’ve seen what works, what doesn’t, and where the technology genuinely delivers on its promise.
Here’s our honest assessment.
Let’s start with what the technology does well—because in the right applications, the impact is substantial.
Traditional controls testing involves significant time spent requesting, chasing, organising, and reviewing documentation. Invoice approvals, journal entry support, reconciliation sign-offs—the administrative burden is considerable.
AI-powered document processing can now extract relevant data points from invoices, contracts, and approval records with high accuracy. What once required a tester to manually review hundreds of documents can be completed in a fraction of the time, with the AI flagging exceptions for human review.
For high-volume, document-heavy controls, this alone can reduce testing time by 40-60%.
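As a simplified illustration of the "extract, then flag exceptions for human review" pattern described above (all field names, values, and thresholds here are hypothetical, not taken from any specific tool):

```python
# Hypothetical output of an AI extraction step: structured fields
# pulled from scanned invoices and approval records.
extracted_invoices = [
    {"invoice_id": "INV-101", "amount": 4200.00, "approver": "J. Smith", "po_match": True},
    {"invoice_id": "INV-102", "amount": 18750.00, "approver": None, "po_match": True},
    {"invoice_id": "INV-103", "amount": 950.00, "approver": "A. Patel", "po_match": False},
]

def flag_for_review(invoices):
    """Route only the exceptions to a human tester; clean items pass through."""
    exceptions = []
    for inv in invoices:
        reasons = []
        if inv["approver"] is None:
            reasons.append("no approver recorded")
        if not inv["po_match"]:
            reasons.append("no matching purchase order")
        if reasons:
            exceptions.append((inv["invoice_id"], reasons))
    return exceptions

for invoice_id, reasons in flag_for_review(extracted_invoices):
    print(invoice_id, "->", "; ".join(reasons))
```

The point of the pattern is the division of labour: the machine reads every document, and the tester's time is spent only on the items that fail a rule.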
Controls testing has historically relied on sampling. You test 25 transactions and extrapolate conclusions about thousands. It’s a defensible methodology, but it’s also a compromise driven by practical constraints.
AI enables a different approach. Rather than sampling, you can analyse entire populations—every journal entry, every payment, every access change—and identify anomalies that sampling might miss. The technology excels at spotting patterns that deviate from established norms: unusual timing, atypical amounts, unexpected combinations of approvers.
This isn’t about replacing professional judgement. It’s about giving testers better information on which to apply that judgement.
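A minimal sketch of full-population screening, using a robust outlier test on journal entry amounts plus a simple timing rule. The entries, thresholds, and business-hours window are all illustrative assumptions, not a production method:

```python
from statistics import median

# Hypothetical journal entry population: (entry id, amount, hour posted).
# In practice this would be the full ledger export, not five rows.
entries = [
    ("JE-001", 1200.00, 10),
    ("JE-002", 980.50, 14),
    ("JE-003", 1105.75, 11),
    ("JE-004", 54000.00, 2),   # large amount, posted at 2 a.m.
    ("JE-005", 1010.00, 15),
]

def flag_anomalies(entries, threshold=3.5, business_hours=range(8, 19)):
    """Screen every entry (no sampling) and flag deviations from the norm
    for human investigation."""
    amounts = [amount for _, amount, _ in entries]
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    flagged = []
    for entry_id, amount, hour in entries:
        reasons = []
        # Modified z-score: median-based, so one extreme value
        # cannot mask itself by inflating the mean and stdev.
        if mad and 0.6745 * abs(amount - med) / mad > threshold:
            reasons.append("atypical amount")
        if hour not in business_hours:
            reasons.append("posted outside business hours")
        if reasons:
            flagged.append((entry_id, reasons))
    return flagged
```

Here a sampled test of 25 entries could easily miss JE-004; a population-level screen cannot, because every row is evaluated against the same rules.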
Much of the cost in compliance programmes isn’t the testing itself—it’s the coordination. Chasing control owners for evidence. Tracking what’s been completed. Managing remediation timelines. Preparing status reports.
Automation handles this efficiently. Evidence requests can be triggered automatically. Dashboards update in real time. Reminders escalate appropriately. The project management overhead that typically consumes 20-30% of programme effort can be reduced significantly.
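The escalation logic behind automated evidence chasing can be very simple. A sketch with made-up controls, owners, and escalation thresholds (any real programme would tune these to its own cycle):

```python
from datetime import date

# Hypothetical open evidence requests: (control id, owner, date requested).
open_requests = [
    ("C-12", "payables team", date(2024, 6, 3)),
    ("C-07", "IT operations", date(2024, 6, 14)),
    ("C-19", "financial reporting", date(2024, 6, 17)),
]

def next_action(requested_on, today):
    """Escalate reminders as a request ages; thresholds are illustrative."""
    days_open = (today - requested_on).days
    if days_open >= 14:
        return "escalate to control owner's manager"
    if days_open >= 7:
        return "second reminder"
    if days_open >= 3:
        return "first reminder"
    return "wait"

today = date(2024, 6, 18)
for control_id, owner, requested_on in open_requests:
    print(control_id, owner, "->", next_action(requested_on, today))
```

Trivial as the rules are, running them automatically every day is exactly the coordination work that otherwise consumes testers' time.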
Human testers, even excellent ones, introduce variability. Two experienced professionals might assess the same control differently based on their individual judgement, their workload that day, or simply how the evidence was presented.
AI applies the same criteria, the same way, every time. For routine controls where the testing approach is well-established, this consistency has value—both for quality and for demonstrating to auditors that your programme operates reliably.
Here’s where we part company with some of the more enthusiastic predictions about AI in assurance.
AI can tell you that a control operated as designed. It cannot tell you whether that control matters.
Determining which controls are material, how testing should be scoped, and where to focus limited resources requires understanding the business context, the regulatory environment, and the risk appetite of the organisation. These are judgement calls that draw on experience and commercial awareness.
When we design testing programmes, we’re not just mapping controls to frameworks. We’re thinking about what could actually go wrong, what the auditors will focus on, and where management attention should be directed. That synthesis isn’t automatable.
Controls testing inevitably surfaces grey areas. The approval was obtained, but via email rather than the system. The reconciliation was completed, but two days late. The segregation-of-duties conflict exists on paper, but compensating controls are in place.
How you assess these situations—and how you communicate them to control owners and auditors—requires nuance. AI can flag the exception. A skilled tester determines what it means and how to address it constructively.
Effective compliance programmes depend on cooperation from control owners who have day jobs beyond responding to testing requests. Getting timely, complete evidence requires relationships built on trust and mutual respect.
When a control fails, delivering that message in a way that prompts remediation rather than defensiveness is a human skill. So is the conversation with external auditors about your methodology, your findings, and your remediation tracking.
Technology cannot conduct these conversations. And the quality of these interactions materially affects programme outcomes.
Organisations evolve. They acquire businesses, implement new systems, restructure teams, enter new markets. Each change has implications for the control environment that may not be immediately apparent.
A tester with genuine business understanding recognises when a control that worked last year may not be fit for purpose today. AI operates on the data it’s given. It doesn’t know that the finance team was reorganised in March or that the new ERP module went live in June. Humans maintain that contextual awareness.
When your compliance programme is examined—by auditors, regulators, or the board—someone needs to stand behind the work. To explain the methodology, defend the conclusions, and take responsibility for the quality of the output.
That accountability rests with people, not algorithms. And it should.
The organisations getting the most value from AI in controls testing aren’t replacing their experienced professionals. They’re redeploying them.
Instead of spending time on administrative tasks and routine document reviews, skilled testers focus on judgement-intensive work: assessing complex controls, investigating anomalies, advising on control design improvements, and managing relationships with stakeholders.
The economics are compelling. You need fewer total hours to complete the programme. The hours you do need are spent on higher-value activities. Quality improves because humans aren’t fatigued by repetitive tasks and AI catches patterns that sampling might miss.
But this only works if the technology is deployed thoughtfully, by people who understand both its capabilities and its limitations.
Our team combines over 70 years of Big 4 assurance experience with dedicated AI and automation capability. We’ve seen enough compliance programmes—successful and otherwise—to know where technology adds value and where it introduces risk.
When we take on a controls testing engagement, we don’t start with the technology. We start with understanding your control environment, your risk profile, and your objectives. Then we design an approach that uses AI and automation where they’re genuinely effective, while ensuring experienced professionals handle the work that requires judgement.
The result, for most organisations, is a programme that costs significantly less than traditional approaches while delivering equal or better quality. Not because we’ve replaced expertise with algorithms, but because we’ve combined them intelligently.
AI is changing controls testing. The organisations that adapt thoughtfully will operate more efficient compliance programmes with better coverage and fewer surprises.
But “thoughtfully” is the key word. The technology is a tool, not a solution. Its value depends entirely on how it’s deployed—and by whom.
If you’re evaluating how AI might improve your compliance programme, we’d welcome a conversation. No sales pitch, just an honest assessment of where the technology might help your specific situation.
Contact us to discuss your compliance programme →
Trust3C helps organisations reduce compliance costs—by up to 50%—while improving quality and confidence in their controls. Our team brings Big 4 expertise with the agility and efficiency of modern technology.
Written by Tim Fairchild
CEO, Bison Grid