Empower Auditors to Think Big Picture on AI

The new white paper, Auditing Artificial Intelligence, provides an overview of what AI is, why auditors need to be aware of AI, and how the COBIT 2019 framework relates to AI auditing.

The guidance addresses the somewhat nebulous definition of AI: there is no agreed-upon definition even in the research community, since AI covers a wide swath of territory, including machine learning, deep learning (a subset of machine learning), and some types of rule-based systems. ISACA wisely takes a neutral stance on the definitions and capabilities of AI, as separating fact from fiction is still a matter of active investigation.

Because AI deployments are still in embryonic stages at the vast majority of companies outside of Silicon Valley, and regulatory requirements for assuring AI are lacking, there is still no definitive and comprehensive set of auditing standards for AI. Research is progressing, however, with ISACA at the forefront, as this white paper and the papers it cites suggest.

The shortage of specialized tech talent for implementations, the “black box” nature of AI, and the lack of research on the holistic impact of AI on organizations are just some of the challenges confronting IT auditors tasked with auditing AI. Approaches for addressing the black-box nature of algorithms exist, such as sensitivity analysis, but they are often time-consuming and best left to modeling specialists for technical evaluation. The paper recommends bifurcating an AI audit between model specialists and IT auditors, with IT auditors examining the holistic process and how the technology stacks integrate. The authors highlight that in small-to-medium-sized enterprises that implement AI, third-party vendor management may be one of the most critical aspects of an audit: vendors allow less technical users to access AI solutions, but keep in mind that many vendor products cannot be customized.
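The sensitivity analysis mentioned above can be illustrated with a simple one-at-a-time perturbation: hold all inputs fixed, nudge one slightly, and see how much the output moves. A minimal sketch, in which `black_box_model` and its feature names are hypothetical stand-ins for any opaque scoring model, not anything from the white paper:

```python
def black_box_model(features):
    # Hypothetical stand-in for an opaque AI model's scoring function.
    income, debt_ratio, tenure = features
    return 0.5 * income - 2.0 * debt_ratio + 0.1 * tenure

def oat_sensitivity(model, baseline, delta=0.01):
    """One-at-a-time sensitivity: perturb each input by a small relative
    amount and record how far the model's output shifts. Larger shifts
    suggest more influential inputs."""
    base_output = model(baseline)
    shifts = []
    for i, value in enumerate(baseline):
        perturbed = list(baseline)
        perturbed[i] = value * (1 + delta)  # nudge only this input
        shifts.append(abs(model(perturbed) - base_output))
    return shifts

baseline = [50.0, 0.4, 12.0]
shifts = oat_sensitivity(black_box_model, baseline)
# Index of the input whose perturbation moved the output the most.
most_sensitive = shifts.index(max(shifts))
```

Even this toy version hints at why the paper steers such work toward modeling specialists: a real audit would need many perturbation sizes, interaction effects, and domain judgment about which shifts matter.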

The paper states that auditors should look at the holistic risks and integration of AI into the organization, approaching AI as they have approached cybersecurity and cloud computing: iteratively and adaptively, with a focus on the implications. Auditors commonly, and mistakenly, believe they need to know the low-level details of how algorithms work before conducting an AI audit. This is not the case; it may actually be beneficial when auditors do not know the intricate details of how AI works, because they can then take a holistic, 40,000-foot view of how AI makes sense in the enterprise instead of getting caught in the weeds.

I believe this holistic view of AI is what enterprises are currently missing most. Technologies now rule the roost, but no matter how impressive the technical capabilities, an AI system needs to make sense in the grand mission of the company. Technological sophistication takes a backseat, and sometimes a less technical system that is more controllable serves an organization better. Let the technologists take care of the technical details, and empower auditors to think big picture, which is where they can provide tremendous value and shine.

The paper concludes by showing how COBIT 2019 can be used to create an AI audit plan, and by enumerating the nine main challenges to an effective AI audit that ISACA has identified, along with best-practice approaches for tackling them.

Auditing Artificial Intelligence is arguably the most comprehensive analysis of the current state of AI auditing, governance and assurance. It is an ideal jumping-off point for beginning your governance analysis and planning an AI audit for your enterprise.


Written by Andrew Clark, Data Economist, Block Science, excerpted from the ISACA Now Blog