For more than two years, government agencies, regulators, corporations, and non-profit organizations have released more than 80 proposals to support and sustain responsible, ethical AI development and maintenance.
What happens when your AI needs to be both compliant and ethical?
Dilemma 1. International standards are not mature and continue to evolve.
AI is developing at a rapid pace, and it is hard for regulators to keep up.
Because regulations are evolving and not yet stabilized, you will need to track new rules as they emerge for each specific topic or area.
Dilemma 2. Standards and recommendations have different scopes and address different issues.
In some cases, the focus of regulators or agencies has been more on personal data privacy (e.g. GDPR), restrictions on AI (biometrics), the categorization of high-risk AI applications, or proposals for a risk-based approach (EU, OECD).
In other cases, national or international bodies (e.g. IEEE, CNIL, the Monetary Authority of Singapore, ACPR, ...) focus more on the intrinsic challenges of AI, particularly machine learning. These reports mostly address issues related to fairness, reliability, and explainability.
Dilemma 3. Despite some significant differences, there is good news: many regulations and recommendations converge on common ethical principles! When these texts are analyzed together, common actions and measures can be identified and leveraged.
This will save you energy and effort in defining and implementing a "package of actions" to comply with homogeneous rules.
Dilemma 4. It's not enough to do a gap analysis and get a plan to close the gap for each specific AI initiative.
You need to develop an action plan not only at the AI-project level but also at the corporate/organizational level, to ensure consistency before regulators.
Dilemma 5. Compliance and ethics apply throughout the lifecycle.
Unlike other technical projects, ethics and compliance are not over once the AI/ML development phase is complete. They start at inception, continue through development, and should extend into the go-live/production phase to analyze any significant deviations introduced as the AI continues to learn.
Dilemma 6. Compliance and ethics are a complex topic that requires strategic thinking.
You won't get there simply by adding IT or data-science resources to your existing team to detect bias, prevent discrimination, or protect data.
You need a holistic, customized approach that spans several strategic dimensions: data analysis, process, organization, communication, training and technology, compliance, risk... with significant involvement from different stakeholders (business, legal, compliance, model validation, risk managers, customers, etc.).
Dilemma 7. You won't achieve compliance and ethics in "one go".
You need to develop your own strategy, taking one step at a time on the road to compliance as regulations change. Structured, step-by-step approaches can save time and money here.
Dilemma 8. Last but not least: compliance and ethics are a matter of culture and corporate ownership.
You need to instill, from the top down, a culture of AI compliance and risk management throughout the organization. And that takes time!
And that goes hand-in-hand with leadership and assigning clear roles, responsibilities and accountabilities throughout the entire organization.
Dilemma 9. Maybe our own dilemma ...
Rome wasn't built in a day !
But AI - ML are everywhere... and so is regulation.
So it's time to take stock of this new environment, assess the ethical and compliance impacts, evaluate your own dilemmas, and build an appropriate strategy with a structured, progressive plan.
You will be in a better position when the regulator rings your doorbell about AI!