By Christine Laüt

Diversity is de-risking AI

Diversity is not only on the rise, it is producing tangible results.

And AI is no exception.

And I appreciate this not only as a professional but also as a citizen.


  • Diversity is one of the fundamental properties for the survival of species, populations and organizations.

Historically, large-scale technological transformations have always led to profound societal, economic, and political changes; but it has always taken time to understand the effects and establish safe and ethical practices.

And that is the case today with AI and machine learning.

  • As we have learned through several AI systems, when certain population groups are underrepresented in training sets, those populations are left out or may be subject to higher error rates. And those most vulnerable to negative impacts are often not able to engage.

  • Overestimating the capabilities of AI is a well-known problem.

  • Technology is only as good, or as bad, as the people who develop it.

  • Ethics dialogue is often confined to the ivory tower.


Addressing diversity is a way to minimize the risks created by AI in our society.

Some concrete actions under this new paradigm for AI include:

  • Establish multi-functional, diverse discussion forums that bring together AI thought leaders, experts on race, gender, and age issues, biologists, policy makers, and ethicists.

  • Include diversity not only at the development stage of the AI model but also throughout the life cycle (from research design to maintenance).

  • Develop a set of validation guidelines and standards for testing the ML solution for racial, gender, age, and ethnic bias.

  • Ensure data diversity in your dataset.

  • Develop guidelines and policies for managing data from an inclusive perspective.

  • Ensure that your model generalizes to a broader set of scenarios once in production.

  • Build inclusive practices into model building (e.g., if your model is accurate 85% of the time, what does that mean for the remaining 15%?).

  • Test with a diverse set of end users at scale.

  • After deployment, re-train your model to ensure that it works the same for each group of users while maintaining performance.

  • Create conditions to gather user feedback and take immediate action.

  • Leverage ML to detect, analyze, prevent and combat human bias and discrimination.
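To make the validation and re-training points above concrete, here is a minimal sketch of what testing for group-level bias can look like in practice: computing the error rate separately for each demographic group and flagging the gap between the best- and worst-served groups. The group labels and data are hypothetical; real audits would use metrics agreed with domain experts and far larger, representative samples.

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute the error rate separately for each demographic group.

    y_true, y_pred, and groups are aligned lists: the i-th entries are
    the true label, predicted label, and group identifier for sample i.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap between any two groups' error rates."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical predictions for two groups, "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
grp    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rates_by_group(y_true, y_pred, grp)
print(rates)                 # per-group error rates
print(max_disparity(rates))  # compare against an agreed threshold
```

A check like this belongs in the validation pipeline before deployment and again after each re-training run, so that an overall accuracy number can never hide a group that the model serves poorly.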


If you approach AI development with an inclusive lens, you'll ideally find other opportunities to make your AI systems safe. And this goes beyond the critical development of an AI system.

For AI to work, it will need to work for operations, users, customers and citizens.

Now is the time to think about how to de-risk AI for the benefit of organizations and society.

Do you want to know more about how to de-risk AI?

Christine Laüt


