
Why should you rethink traditional risk management methods for AI?

Updated: Feb 9


Compared to other sectors, risk control principles are largely formalized in financial services, with a three-lines-of-defense organization and a clear allocation of roles and responsibilities. This provides a solid foundation for managing AI-induced risks.



The upcoming European regulation on artificial intelligence, known as the EU AI Act, is putting pressure on the risk control frameworks currently in place in many organizations.


THE EU AI ACT


This forthcoming regulation, the EU Artificial Intelligence Act, is structured differently from existing risk control frameworks:


  • It defines an “a priori” level of risk based on usage.

  • It covers a set of models beyond the scope of machine learning.

  • For "high-risk" systems, it aims to control overall risk through ex-ante controls and documentation covering seven distinct obligations.

  • It makes no distinction between risk factors and ad hoc measures.


THE TRADITIONAL APPROACH TO RISK MANAGEMENT


Against this backdrop, the traditional approach to managing risk needs to evolve. For example:


  • The assessment of AI impact is still specific to each AI model and each context of use.

  • Impacts are neither quantified in absolute terms nor ranked.

  • The potential intrinsic risks of AI are underestimated.

  • Emphasizing high-risk systems independently of existing risk systems drastically increases the cost of compliance and risk management.


DRIVERS OF CHANGE


Traditional approaches to risk management need to evolve to meet the new challenges of digitalisation and the new types of risk posed by this changing environment.


The main drivers of change are:

AI and data are part of the digital transformation ... and are here to stay!

New regulations on AI will emerge worldwide.

AI risk detection analysis remains largely unindustrialised despite some standardisation efforts


GROW WITH NEW CAPABILITIES


So you need to review and update your risk management practice.


The first step is to learn and implement new capabilities better aligned with this future. These include, for example:


👉 The ex-ante approach to certain risk categories in the regulation implies that you first clarify whether your AI systems fall within the scope of this new regulation.


Based on the latest developments in the regulation, assess whether your AI systems fall under Annex III, which lists high-risk systems. Assess your role and contribution in the AI value chain (provider, user, distributor, importer). Then determine which systems are potentially affected.
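As an illustration, such a scoping triage can be scripted once you maintain an inventory of your AI systems. The annex categories and role names below are simplified stand-ins, not the regulation's exact wording — a sketch, not legal advice:

```python
# Illustrative scoping triage for an AI system inventory.
# The use-case areas and roles below are simplified assumptions for
# demonstration only; always check the current text of the EU AI Act.

# Simplified stand-ins for Annex III high-risk use-case areas
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration",
    "justice",
}

# Roles in the AI value chain mentioned in the regulation
ROLES = {"provider", "user", "distributor", "importer"}

def triage(system: dict) -> str:
    """Return a rough risk bucket for one inventoried AI system."""
    if system["role"] not in ROLES:
        raise ValueError(f"unknown role: {system['role']}")
    if system["use_case_area"] in ANNEX_III_AREAS:
        return "potentially high risk - ex-ante obligations apply"
    return "lower risk - standard controls"

inventory = [
    {"name": "CV screening model", "role": "provider", "use_case_area": "employment"},
    {"name": "Churn prediction model", "role": "user", "use_case_area": "marketing"},
]

for s in inventory:
    print(s["name"], "->", triage(s))
```

Even a crude classifier like this forces the two questions the regulation asks first: what is the use case, and what is your role in the value chain.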



👉 Start designing a comprehensive control process covering the entire AI lifecycle beyond traditional controls.


AI risk assessment needs to be extended to new risk categories (bias, ethics, etc.) and integrated into every phase of the AI lifecycle. This approach makes risks easier for stakeholders to take on board and supports the continuous preparation of an audit position.
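One way to make this concrete is to map each lifecycle phase to the controls checked in that phase, then report the gaps continuously. The phase names and risk categories below are illustrative assumptions, not a prescribed standard:

```python
# Illustrative mapping of AI lifecycle phases to risk controls.
# Phase names and control categories are assumptions for this sketch.

LIFECYCLE_CONTROLS = {
    "design":   ["intended-use definition", "ethics review"],
    "data":     ["bias in training data", "data governance"],
    "build":    ["model robustness", "documentation"],
    "validate": ["performance vs. thresholds", "fairness metrics"],
    "deploy":   ["human oversight", "transparency to users"],
    "monitor":  ["drift detection", "incident logging"],
}

def audit_position(completed: dict) -> list:
    """List controls not yet evidenced, supporting a continuous audit position."""
    gaps = []
    for phase, controls in LIFECYCLE_CONTROLS.items():
        for control in controls:
            if control not in completed.get(phase, []):
                gaps.append(f"{phase}: {control}")
    return gaps

# Example: design phase fully evidenced, data phase partially
done = {
    "design": ["intended-use definition", "ethics review"],
    "data": ["data governance"],
}
for gap in audit_position(done):
    print("missing ->", gap)
```

Running such a check on every release, rather than once a year, is what turns audit preparation into a continuous position.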


👉 Start scaling tools and processes to be compliant at the enterprise level


Identify best practices in your organization, assess what can be replicated from certain use cases or AI initiatives, and evaluate what is missing in terms of policy, processes and tools to ensure more efficient and effective compliance.


Design your enterprise-level target and, right from the start, evaluate and compare the many IT solutions available on the market to automate the control and reporting process by design.



👉 As opportunities arise, develop strong AI governance from the top.


68% of executives agree that AI should be part of their company's executive management. But there is a gap between expectations and reality: only 50% say that responsible AI is high on their priority list.


Top management support may not be needed for 100% of AI systems.


But for risky applications requiring greater oversight, senior management must be aware of AI and its potential benefits and risks to the business. In this case, senior management can link responsible AI to broader corporate and social responsibility (CSR) commitments.


LEVERAGE THE FULL BENEFIT OF AI


The expected benefits to your organisation are significant: you will build cost-effective risk management capabilities, better anticipate risks through continuous monitoring, and save time by industrialising risk detection and monitoring.




Armed with this new mindset, governance, process and tools, you will reap the full benefits of AI while being prepared for future regulation.


Take this into account when developing AI projects in 2023, and build your future risk management capabilities.





Do you want to know more on how to save time and energy in setting up your own risk framework for AI?






