
How to deal with the impacts of AI?




Artificial Intelligence (AI) is already transforming many areas of our lives.


As AI deployment continues to grow, the societal and economic impacts are expected to be significant, and regulation will play a key role.


Most companies plan to increase investment in responsible AI and see AI regulation as a priority.


But did you know that only 6% of companies have operationalized their capabilities to be responsible by design?




HERE IS WHERE WE ARE NOW


Private and public institutions are increasingly deploying artificial intelligence (AI) and algorithmic decision-making systems to speed up transactions, improve efficiency, and automate and personalize services and products.


Governing AI is complex because AI systems typically involve an ecosystem of actors operating at different stages of the life cycle.


Blurring lines of accountability and responsibility

The result is a blurring of the lines of accountability and responsibility between the stakeholders involved in deployment. And deployments still often lack robust due diligence, controls, or transparency for end users.


Several policy and regulatory developments are being considered to govern the public and private use of AI systems.


Recent examples of emerging hard and soft laws

The EU's draft AI Act requires providers of high-risk AI systems to evaluate those systems, engage in ongoing risk management, and register their evaluations and documentation in a public database.


In September 2022, the EU proposed harmonizing national liability rules to give claimants access to information and ease the burden of proof for damage caused by AI.

This proposal introduces two main new features: a presumption of causation in circumstances where a relevant fault has been established, and a right of access to evidence from companies and providers where high-risk AI is involved.


For providers, these obligations include requirements on training and testing data sets, system monitoring, and system accuracy and robustness.


The proposed legislation incentivizes organizations to disclose information: where a provider can demonstrate that sufficient evidence and expertise are reasonably available to the claimant, the presumption of causation can be rebutted.


In addition to these regulatory discussions, soft law and multiple guidance frameworks around the world call for accountability and impact assessment.


On October 4, 2022 (last week), the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights (the "AI Bill"), with the goal of protecting the public from adverse outcomes or harmful uses of AI.


Companies are encouraged to follow the five principles of the AI Bill: safe and effective systems; protection from algorithmic discrimination; data privacy; notice and explanation; and human alternatives, consideration, and fallback.


The pace of publication of recommendations and regulatory requirements seems to be accelerating this fall.


I am not sure we will ever achieve a single standard regulation, valid and applicable across all places and all industries, because the risks and impacts are intrinsically linked to context and differ from one sector to another.


However, as regulatory pressure mounts in piecemeal fashion and technology advances even faster, what options are left to companies today to better master the impacts of AI?



SHAPING YOUR RESPONSIBLE AI FUTURE


Even as AI matures quickly, there are actions you can take today to deploy AI responsibly.


Imagine what your future will look like if you are prepared.


  • You have performed an impact analysis on all stakeholders potentially affected by the AI: not only operators, developers, and business stakeholders, but also external stakeholders such as end users, citizens, and government.


  • Your executive management understands the impact of AI on the institution's strategy, missions, and values, and the potential risks affecting the internal and external environment.


  • You systematically perform ethical risk due diligence when evaluating AI.


  • You have identified potential unintended consequences of your AI implementations.


  • You have systematically documented the quality of the data sets used in your AI systems.


  • You have obtained detailed specifications of the decision design embedded in the AI system.


  • You have already conducted tests to assess key considerations related to explainability, reproducibility, safety, and fairness (a minimal sketch of such a test follows this list).


  • You have established clear governance mechanisms for adequate management and oversight.


  • You have evidence of how AI-based decision making is performed and of its consequences (e.g., human in the loop).


  • You have established a robust system to continuously identify and assess AI risks, define mitigation actions, and report on them.


  • You save time on gap analysis whenever new regulations are released.


  • Based on your experience and evidence gathering, you actively contribute to discussions with regulators to shape the future of AI regulations and frameworks.
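
To make the testing checkpoint above concrete, here is a minimal sketch in Python of one kind of fairness test: a demographic parity gap, i.e., the difference in positive-outcome rates between groups defined by a protected attribute. The data, group labels, and threshold below are illustrative assumptions, not requirements drawn from any regulation.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels of the same length, e.g. values
            of a protected attribute such as gender or age band
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy validation data: replace with your model's outputs and the
# protected attribute recorded for each case.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive-outcome rate per group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")

# The threshold here is a hypothetical value set by your own AI policy.
assert gap <= 0.5, "Fairness gap exceeds the policy threshold"

In practice you would likely use a dedicated toolkit such as Fairlearn or AIF360 and evaluate several complementary metrics; the point is that such checks can be automated, compared against thresholds set in your own AI policy, and archived as evidence for audits and gap analyses.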



HOW TO GET THERE?


None of this is an easy journey. Not all stakeholders are fully aware of the risks of AI. And it won't happen overnight. Here's how to get there.


You need a specific plan, with progressive and concrete steps, to articulate your organization's journey, understand the impacts and risks of AI use for your organization, and be prepared in case of liability litigation or new regulation.


An integrated risk management framework such as SAFE AI NOW helps you establish a holistic view of AI risks and impacts, identify critical gaps, define your own key priorities, and set up your action plan to operationalize your journey to robust, legal, and responsible AI.



Do you want to know more about how to save time and energy in setting up your action plan for robust, legal, and responsible AI?



Christine Laüt

claut@safeainow.com

Founder and CEO of SAFE AI NOW



