Christine LAUT

Auditing AI: the beginner’s guide

Updated: Jul 30, 2022

The practice of auditing AI is still being developed.


For advanced AI technologies such as machine learning, there are no universal standards governing how AI should be audited.





The practice of AI auditing



I remember when I first started wondering how to audit algorithms: I went through different forums and regulatory frameworks trying to understand how AI auditing works. It was confusing and frustrating, because there is no single standard for AI auditing.


Several AI auditing frameworks have been published by international organizations. Among them:

  • The GAO framework developed by the US Government Accountability Office

  • The COBIT Framework

  • The COSO Enterprise Risk Management Framework

  • The Artificial Intelligence Auditing Framework from the Institute of Internal Auditors (IIA)

  • The Model AI Governance Framework from Singapore's Personal Data Protection Commission (PDPC)

  • The AI Auditing Framework published by the UK Information Commissioner’s Office (ICO)

  • Auditing machine learning algorithms, a white paper from the Supreme Audit Institutions of Finland, Germany, the Netherlands, Norway and the UK

This profusion of frameworks makes it difficult to build a comprehensive approach to AI auditing.


The broader outcome of an auditing process is to improve confidence in, and ensure trust of, the underlying system.


The guidance below is not meant to be exhaustive. Instead, I focus on highlighting some audit aspects that should not be missed as a starting point.


Here is how to begin.


1. Start engaging with the stakeholders and the rest of the organization


Very often, the AI audit comes too late. An AI audit is far less effective after the event or incident has already occurred.

Auditors need to be involved from the beginning of AI initiatives and during the AI life cycle.

They should be visible and engage with their stakeholders and the rest of the organisation to assess existing emerging technology risks and develop capabilities to address them during the AI life cycle.


If auditability is not integrated "by design" from the start, it will be difficult to assure these models and the decisions they make after the event.



2. Take these international frameworks as a starting point for education and awareness


AI auditing is a new area for the audit team and all AI teams involved in the organisation.

You can use these AI audit frameworks to develop effective communication and relationships based on a shared terminology, and to begin educating your organisation on the AI audit principles they promote.



3. Build your own AI audit framework


You can select a framework that suits you, but you can also extract the components that are relevant to your own business to build your own framework.



4. Consider your organisation's risks related to the AI initiative


Start by looking at the known and unknown risks that the AI initiative poses to your organisation. The risk area can be summarised through a traditional risk control matrix. To this end, COBIT 2019 provides an effective framework for reviewing the risks of any AI initiative.
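As a minimal illustration of what such a matrix can look like in practice, here is a sketch in Python. The risks, ratings, and controls below are hypothetical examples, not taken from COBIT itself:

```python
# A minimal risk control matrix for an AI initiative.
# All risks, likelihood/impact ratings, and controls are hypothetical examples.
risk_control_matrix = [
    {"risk": "Training data contains unrepresentative samples",
     "likelihood": "high", "impact": "high",
     "control": "Document data provenance and run bias checks before training"},
    {"risk": "Model decisions cannot be explained to stakeholders",
     "likelihood": "medium", "impact": "high",
     "control": "Require an explainability report for each released model"},
    {"risk": "Vendor model is updated without notice",
     "likelihood": "medium", "impact": "medium",
     "control": "Contractual change-notification clause and re-validation step"},
]

# Flag entries combining high likelihood with high impact for priority review.
priority = [entry["risk"] for entry in risk_control_matrix
            if entry["likelihood"] == "high" and entry["impact"] == "high"]
print(priority)
```

Even a simple structure like this forces each risk to be paired with an explicit control and a rating, which is the point of the exercise.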



5. As in any audit exercise, define your own scope of audit


The different parts of the AI environment and their mutual interactions should be taken into consideration. Below are some basic indications, which are not meant to be complete.


Data


  • Data and data preparation: a proper understanding of the data and its potential biases; collecting, storing, extracting, transforming, and loading the data.

  • Structure of the data pipelines; specification, design, and documentation of data and software artifacts
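The data checks above can start very simply. The sketch below, on a toy in-memory dataset with hypothetical field names, computes two first-pass indicators an auditor might ask for: missing-value rates per field, and the label distribution as a crude signal of class imbalance:

```python
# Basic data-audit checks on a toy in-memory dataset.
# Field names and values are hypothetical examples.
from collections import Counter

records = [
    {"age": 34, "income": 52000, "label": "approve"},
    {"age": None, "income": 48000, "label": "approve"},
    {"age": 51, "income": None, "label": "deny"},
    {"age": 29, "income": 61000, "label": "approve"},
]

# 1. Missing-value rate per field: a first indicator of preparation quality.
fields = records[0].keys()
missing = {f: sum(1 for r in records if r[f] is None) / len(records)
           for f in fields}

# 2. Label distribution: a crude check for class imbalance (a possible bias signal).
labels = Counter(r["label"] for r in records)

print(missing)
print(labels)
```

In a real audit the same questions would be asked of the production pipeline, with the thresholds for acceptable missingness and imbalance agreed with stakeholders in advance.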



ML algorithm development & lifecycle

  • Reviewing the documentation available will provide some indication of the development process and performance.

  • Assess performance against defined metrics to ensure the AI system functions as intended and is sufficiently robust.

  • Evaluate whether algorithm choices are consistent, appropriate, and sustainable, and have been implemented correctly

  • Evaluate performance based on different datasets

  • Identify potential biases, inequities, and other societal concerns resulting from the AI system and how they are anticipated and communicated to relevant stakeholders.

  • Evaluate how the solution reached a decision and whether it may therefore be subject to malicious manipulation, whether by humans or by other machines.

  • Evaluate steps to identify and prevent unintended uses and abuse of the model and plans to monitor these once the model is deployed

  • Identify potential privacy considerations and plan to protect and secure data

  • Evaluate safety vulnerabilities

  • Evaluate the processes in place for reporting: presenting results to key stakeholders and evaluating the impact of the algorithmic system on the business

  • Review project management artifacts and processes in place, from IT to business

  • Review the processes and evidence available to continuously monitor and evaluate the AI system, to ensure it addresses programme objectives and remains appropriate in its current operating context
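To make the "evaluate performance based on different datasets" item concrete, here is a minimal sketch. The `predict` function and the datasets are hypothetical stand-ins; the point is the shape of the check, namely comparing one agreed metric across datasets and flagging large gaps:

```python
# Sketch: evaluate a model's accuracy on several datasets and flag gaps.
# The predict function and the datasets are hypothetical stand-ins.

def predict(x):
    # Toy "model": approve when income is at least 50000.
    return "approve" if x["income"] >= 50000 else "deny"

datasets = {
    "validation": [({"income": 52000}, "approve"), ({"income": 30000}, "deny")],
    "holdout":    [({"income": 70000}, "approve"), ({"income": 45000}, "approve")],
}

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

scores = {name: accuracy(data) for name, data in datasets.items()}

# A gap above an agreed threshold warrants investigation: possible
# overfitting, data drift, or bias against a subgroup.
gap = max(scores.values()) - min(scores.values())
print(scores, gap)
```

The same comparison can be run per demographic subgroup rather than per dataset, which turns it into a first-pass bias check for the fairness items listed above.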


AI governance


AI governance aims to promote accountability by establishing processes to develop, manage, and oversee implementation.


The main topics covered will include:

  • The AI initiative is aligned with a strategic vision

  • Accountability, responsibility and oversight are established.

  • Policies and procedures are properly documented and followed.

  • The necessary AI skills and expertise are available to perform the AI responsibilities.

  • The alignment of AI activities and AI-related decisions and actions with the organization’s values, and ethical, social and legal responsibilities.

  • Third-party risk management procedures are performed around vendors (if relevant).


And in relation to governance, the AI audit can be difficult: as underlined by the ICO, AI audits involve making trade-offs between privacy and other competing rights and interests.



Conclusion


Auditing AI systems is not a simple task resting on a few lines of code or on traditional audit expertise alone. It is a socially, organisationally, and technically complex undertaking that requires skilled resources involved from inception.


With this article, you no longer need to search for the main international frameworks and references for AI auditing.


You now have everything you need to start developing your own AI audit plan.



Before you start on your first audit, make sure to leave a quick comment to let me know what you think of ‘Auditing AI: the Beginner’s Guide’.






