Proactivity is key: why waiting for compliance rules is not enough to improve how we use AI

The rapid adoption of ChatGPT shows that AI has become widespread in our society. It is a form of AI commoditization, and a sign that we have entered a new era of data and AI use.
ChatGPT is not a magic tool!
But ChatGPT is no different from other AI systems: it is not a magic tool. It is a data-driven system that can generate impressive results, but it can also produce outputs that are biased, contradictory, or copyright-infringing, or that seem plausible yet are factually incorrect or irrelevant to the given context.
Options on the table
Banning or suspending ChatGPT is an option already in place at many key institutions.
Another common approach is to wait and rely on compliance.
Waiting for compliance rules to dictate how AI should be used leads to a reactive stance that often comes too late. And banning without explanation is not sustainable for the future!
Proactive approach
Instead, you should take a proactive approach by considering the potential implications of using generative AI.
Here are three ways to start!
📌 1. Empower all stakeholders to understand this new technology and its potential impact on your organization, as well as the risks involved.
Even if you decide to ban it or put it on the back burner, this first step of education is essential: it helps stakeholders understand the final decision and makes them aware of their responsibilities.
📌 2. If you want to take advantage of generative AI, build diverse and inclusive teams to ensure that AI systems are designed and implemented in a way that takes a wide range of perspectives into account.
Examples include social impact, new tasks, new roles and responsibilities, new products, new recommended services, new customer relationships, business model transformation, etc.
Then, assess the investment costs and capabilities required (infrastructure, security, computing costs, new skills, new data and new suppliers) to make an opt-in or opt-out decision.
📌 3. Define an ongoing process to manage the critical risks associated with generative AI (e.g. bias, intellectual property infringement, fraud, cyber risk, factually incorrect answers, data privacy, etc.).
Start assessing potential risks at every stage of the AI journey and ensure that any problems are detected and addressed in a proportionate and timely manner.
Instead of waiting for compliance regulations to dictate how AI is used, or simply banning it without education:
🥁 Take a proactive approach to building trust with customers, employees and other stakeholders. This will ultimately result in greater long-term success.
What do you think? How do you deal with ChatGPT and generative AI?
SAFE AI NOW
Do you want to know more about how to save time and energy in setting up your own AI risk framework?