By Christine LAUT

AI Inventory & AI Risk Assessment: ready, set, go!

Updated: Jul 14, 2022




Today, I'm going to show you how a 3-step process can help build the foundation for AI governance.


Consider the case of Sandra, the global head of AI in a large international group. In her role, she is responsible for setting the AI strategic vision and roadmap, overseeing a team of AI data scientists, solving mission-critical problems, and introducing new technologies and paradigms to innovate and accelerate AI/ML across the group.


She can draw on her strong background in big data, data management, and the application of AI, but as a newcomer to the financial services industry, she is not familiar with risk management in the financial sector.



Like companies in other industries, hers began its AI journey with a familiar pattern: find an interesting business case, develop a proof of concept, and then try to put it into production.


AI applications were developed in vertical functional silos (e.g., compliance, market risk, credit risk, marketing), with minimal cross-functional collaboration between the different teams. In addition, the health situation of the last two years has not eased her own integration and has reduced physical interaction within the data and AI teams, located in different cities and countries, to little or none.



One day, after a conversation with a local regulator, Sandra realized she needed to speed up the AI risk management process in the organization. But how to start?


Here is the exact 3-step process she used to lay the foundation for the AI risk management process.



Step one


Her first step was to build a network of trust to help her identify new developments using AI technologies.


To that end, she worked to engage and mobilize a broad population of AI representatives.


She identified three different AI communities in the global organization: the chief data officers and data scientists, the risk representatives (risk managers, auditors, and model risk managers), and the AI "friends" (IT, business, compliance, etc.) from different lines of business, different divisions, and different geographies.


They were invited to join selected key meetings to present their own AI applications and share experience, challenges, and lessons learned.


Step two


From the established forum, she was quickly able to structure an inventory of all AI initiatives across the organization.


And the AI inventory wasn't just a comprehensive, updated list of AI initiatives under development in the organization.


Sandra also collected some key characteristics of each AI application: use case description, goals, domain of application, technology used (NLP, chatbots, time series predictions, recommendation engines, graph machine learning, generative modeling...), data used to train the model (structured, unstructured, internal, external, sources), IT infrastructure, governance, challenges, and potential need for cross-fertilization or support from other AI initiatives.
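To make the idea concrete, here is a minimal sketch of what one inventory record could look like as a data structure. The field names and the sample entry are purely illustrative assumptions, not a standard schema and not Sandra's actual inventory.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical shape of one AI inventory record, mirroring the
# characteristics listed above. Field names are illustrative only.
@dataclass
class AIInventoryEntry:
    name: str
    description: str
    goals: str
    domain: str                                             # e.g. "compliance", "marketing"
    technology: List[str] = field(default_factory=list)     # e.g. ["NLP", "chatbot"]
    training_data: List[str] = field(default_factory=list)  # e.g. ["internal", "structured"]
    infrastructure: str = ""
    governance: str = ""
    challenges: List[str] = field(default_factory=list)

# An invented example entry for illustration:
entry = AIInventoryEntry(
    name="Transaction monitoring assistant",
    description="Flags unusual payment patterns for review",
    goals="Reduce false positives in alerts",
    domain="compliance",
    technology=["NLP", "time series prediction"],
    training_data=["internal", "structured"],
)
```

Even a simple structure like this makes the inventory queryable: you can filter by domain, by technology, or by missing governance fields when deciding where to focus.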


She implemented a process to share and maintain a comprehensive overview of AI applications and their characteristics in the organization in less than 3 months!


And she was able to assess which AI applications were potentially the most critical and needed the most governance.


Step three


Since a comprehensive risk strategy always starts with a risk assessment, she began to identify, on the most critical applications, the different natures of risk, their potential impact and the risk mitigation controls and response in place.


Then, taking a risk-based approach, she began to identify potential gaps (security, data, model evaluation, etc.) in the mitigation of those risks.
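One common way to operationalize such a risk-based approach is a simple impact-times-likelihood score plus a check for risk areas with no mitigating control. The sketch below assumes invented 1-5 scales, invented risk areas, and an invented example application; it is one possible triage heuristic, not a prescribed method.

```python
# Hypothetical risk-based triage: score each application on impact and
# likelihood, then flag areas where no mitigating control exists.
# Scales, categories, and the example values are assumptions.

RISK_AREAS = ["security", "data", "model evaluation"]

def risk_score(impact: int, likelihood: int) -> int:
    """Simple 1-5 x 1-5 risk matrix score."""
    return impact * likelihood

def find_gaps(controls: dict) -> list:
    """Return risk areas with no mitigating control in place."""
    return [area for area in RISK_AREAS if not controls.get(area)]

# Invented example application:
app = {
    "name": "credit scoring model",
    "impact": 5,        # high regulatory and customer impact
    "likelihood": 3,    # e.g. model drift observed in backtests
    "controls": {"security": "access control", "data": "lineage checks"},
}

score = risk_score(app["impact"], app["likelihood"])  # 15
gaps = find_gaps(app["controls"])                      # ["model evaluation"]
```

Sorting applications by such a score gives a defensible order in which to close the identified gaps, starting with the most critical applications.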



Ready to start?


Risk analysis and business inventory are familiar to business leaders and risk managers, and we can apply the same principles to AI.


To start, the AI team needs to be able to inspire and convince a group of people across the organization to get on board with AI governance. To move forward, everyone needs to be connected and aligned. Being able to convince people is the new power of AI teams.


The best part is that you can achieve the same results as Sandra by following this step-by-step process.


I'd also love to hear from you: have you used any of these strategies for AI governance? Leave a comment and let me know.

