
Has Artificial Intelligence become critical?


Last week I had the chance to attend the Swiss Risk Association's 3 Lines of Defense (3LoD) meeting.

Thanks to the brilliant panel, who shared interesting insights on their journey towards 3LoD.


We also discussed the revised circular from FINMA, the Swiss Financial Market Supervisory Authority, on operational risk management for banks, which takes into consideration, among other things, the latest principles of the Basel standards.



While the revised operational risk circular is expected to be adopted in December 2022, we discussed potential plans for further integration of ICT into operational risk frameworks.


But nothing was said about Artificial Intelligence.


Here is my point of view.


IS AI CRITICAL FOR YOU?


The use of AI applications in the banking sector today spans a broad spectrum: customer and transaction monitoring, portfolio and suitability analysis, trading systems and trading strategies, and process automation across different functions.


In 2021, FINMA recognized that the financial market offers a promising field for the use of AI.


So if you fall into the categories covered by this regulation, here is your first action:

📌 assess how your AI systems are a critical part of your processes and business.



HOW WILL THE NEW CIRCULAR IMPACT YOU?


The regulation materially changes three areas: information and communication technology (ICT), critical data, and cyber risks. These three areas are also key for artificial intelligence.


AI and the revised circular therefore interact in those three areas.


What does this mean for you in 2023?


1. Develop your AI strategy and governance as part of your ICT obligations

  • Integrate AI systems into your ICT strategy and gain approval for your AI strategy

  • Ensure that AI risks related to the institution's critical processes are identified, assessed, mitigated and monitored.

  • Ensure that the senior management body regularly monitors the effectiveness of AI risk management.

  • Ensure that procedures, processes, controls, tasks, and functions related to AI management are clearly documented and implemented at each stage of the life cycle (in AI development and operations).


2. Develop a robust cybersecurity plan for AI systems


AI presents the same opportunities for exploitation and attack as any other technology, but it also offers additional attack surfaces at every stage of the machine learning lifecycle.


Examples include data corruption, poisoning of training data, adversarial inputs that fool a task-specific ML model, and online system manipulation through false inputs, not to mention data confidentiality and privacy risks.


  • Review/update your cyber risk plan

  • Introduce systematic cyber exercises based on AI systems scenarios

  • Report to management on the evolving threat and risk profile, any damage caused by cyber attacks, and the effectiveness of key controls on AI.


3. Review critical data used by the AI systems


The regulation expands the qualitative requirements on managing the risks associated with critical data in terms of confidentiality, integrity and availability.


Data is also a critical component of AI systems.


Here are some ways to leverage this circular to review the framework for critical data used by AI systems:


  • Review the protection of critical data used by AI systems

  • Make sure your data strategy covers the data used by AI systems.

  • Be sure to assess the availability, quantity, and relevance of data sets.

  • Ensure that the data collection processes, preparation processes, hypothesis formulation, and relevant design choices are properly documented and integrated into the ICT framework and processes.

  • Confirm that the governance and organizational framework, data and information architecture, and data security are equally effective for AI systems and cover other AI-related data risks.


2023: AROUND THE CORNER


FINMA provides a transition period prior to implementation in the 1st quarter of 2024.

Use this new regulatory constraint to think about AI risks. Don't create more independent silos disconnected from your operational or ICT risk frameworks.


AI risks have their own specificities, but their risk management framework should not be disconnected from the rest.

Pragmatism is key!


Identifying synergies and interoperability between different functions in banking is crucial for consistency and effectiveness in risk management.


2023 is just around the corner !

If you need help, you can contact me:

Christine Laut

SAFE AI NOW

www.safeainow.com

contact@safeainow.com


Photo by Revisions on Unsplash



Do you want to know more about how to save time and energy in setting up your own risk framework for AI?






