Risk in AI: Glass Half Full or Half Empty?


 

In the recent development of artificial intelligence (AI) regulation, the concept of "risk" is understood quite differently depending on the regulatory framework or standard at hand.

 

General Risk Definition: Understanding Risk in Regulation

Any comparison of regulatory frameworks therefore has to start with how each one defines "risk".

Generally speaking, risk refers to the likelihood that a hazard will result in harm.

According to Article 3 of the AI Act, risk is defined as "the combination of the probability of harm occurring and the severity of that harm." This definition underlines the Act's commitment to identifying and mitigating risks posed by AI systems, particularly concerning health, safety, and fundamental rights.
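The Act itself does not prescribe a formula for this combination. As a purely illustrative sketch, not anything mandated by the AI Act, the Python below operationalizes the Article 3 definition as a classic risk matrix: an ordinal probability rating multiplied by an ordinal severity rating, mapped to qualitative bands. The scales, thresholds, and the multiplicative rule are all assumptions.

```python
from enum import IntEnum

class Probability(IntEnum):
    """Ordinal likelihood-of-harm scale (illustrative, not from the Act)."""
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3
    ALMOST_CERTAIN = 4

class Severity(IntEnum):
    """Ordinal severity-of-harm scale (illustrative, not from the Act)."""
    NEGLIGIBLE = 1
    MODERATE = 2
    SERIOUS = 3
    CRITICAL = 4

def risk_score(probability: Probability, severity: Severity) -> int:
    # Article 3: risk is the combination of the probability of harm
    # occurring and the severity of that harm. Multiplication is one
    # common convention for "combining" the two ratings.
    return int(probability) * int(severity)

def risk_band(score: int) -> str:
    """Map a score to a qualitative band (thresholds are assumptions)."""
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a likely (3) but serious (3) harm scores 9 -> "medium".
score = risk_score(Probability.LIKELY, Severity.SERIOUS)
print(score, risk_band(score))  # 9 medium
```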

 

AI Act Perspective: Focusing on Preventing Harm

The AI Act places significant emphasis on preventing risks to health, safety, and fundamental rights. This cautious approach reflects a commitment to safeguarding individuals and communities from the potential negative impacts of AI technologies.

Critics argue, however, that this focus on risk prevention overlooks the potential positive impacts of AI and fails to incorporate cost-benefit analyses that could foster innovation and societal progress. In other words, by leaving the technology's benefits out of the regulatory calculus, the Act does little to actively promote AI for good.

Yet the AI Act also calls for a proportionate approach, seeking to balance risk reduction against the cost of compliance.

 

ISO/IEC Perspective: Balancing Risks and Opportunities

In contrast, ISO/IEC standards take a broader view of risk management in AI. Building on the ISO 31000 definition of risk as the "effect of uncertainty on objectives", they cover organizational objectives, including those specific to AI systems, and recognize that this effect can be positive as well as negative. By acknowledging the potential benefits of AI alongside its harms, ISO/IEC standards provide a framework that encourages innovation while managing risk.
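To make the contrast with the harm-only view concrete, here is a minimal sketch of a risk register in the ISO/IEC spirit, where an entry's impact is signed: negative values capture potential harms, positive values capture opportunities. The field names, scales, and prioritization rule are illustrative assumptions, not taken from any specific standard.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of an AI risk register (illustrative structure)."""
    description: str
    likelihood: float  # estimated probability in [0, 1]
    impact: float      # signed: negative = harm, positive = opportunity

    @property
    def exposure(self) -> float:
        # Expected "effect of uncertainty on objectives" (ISO 31000 framing):
        # a single number that can be unfavorable or favorable.
        return self.likelihood * self.impact

register = [
    RiskEntry("Model produces discriminatory outputs", likelihood=0.2, impact=-8.0),
    RiskEntry("Automation frees staff for higher-value work", likelihood=0.6, impact=5.0),
]

# Treat the worst downsides first, while keeping upsides visible in the same view.
for entry in sorted(register, key=lambda e: e.exposure):
    print(f"{entry.exposure:+.1f}  {entry.description}")
```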

They also provide definitions, principles, and structured processes for AI risk management (notably ISO/IEC 23894) that align with the risk management system required by Article 9 of the EU AI Act. However, these specifications are often considered too generic to fully address the Act's specific requirements, particularly concerning health, safety, EU values, and fundamental rights.

 

Impact: Implications for AI Regulation in Europe

The divergence between the AI Act and ISO/IEC standards lies in their fundamental views on risk. The AI Act adopts a cautious, glass-half-empty view, focused primarily on reducing risks to health, safety, and fundamental rights. ISO/IEC standards, in contrast, take a glass-half-full view, weighing the risks of AI systems against their opportunities.

 

Recommendation: Bridging Perspectives for Effective Regulation

Using ISO/IEC standards as a foundational framework can help organizations prepare for AI risk management. However, it is essential to recognize that compliance with ISO/IEC standards alone may not fully satisfy the specific regulatory requirements of the EU AI Act.

As AI continues to evolve, bridging these perspectives will be essential for comprehensive and effective AI regulation in Europe and worldwide.
