
Navigating the Regulatory Challenges of Generative AI - What You Need to Know

Updated: Feb 9


Two months after its launch, ChatGPT had reached more than 100 million users, making it the "fastest-growing application in history".


Generative AI, with ChatGPT (OpenAI / Microsoft) and more recently Bard (Google), has created a buzz in the media and on social networks.


Generative AI is a technology capable of creating content on its own, such as text, images, videos, art, music, and computer code.


According to Gartner,

  • "By 2025, 30% of outbound marketing messages from large organizations will be synthetically generated, up from less than 2% in 2022.

  • By 2030, a major blockbuster film will be released with 90% of the film generated by AI (from text to video), from 0% of such in 2022".



SOME RISKS AND ETHICAL ISSUES


However, let's remember that generative AI involves some ethical issues or risks, for example:

  • Deepfakes: AI-generated content (text, images, video, or audio) can be difficult to distinguish from real information.

  • Inaccurate results: despite rigorous data testing, generative AI can produce incorrect or erroneous results.

  • Bias: the output reflects the biases of the underlying training data (in ChatGPT's case, data available only up to 2021).

  • Misuse: including malware generation and content abuse, despite ambitious content controls

  • Black-box effect: lack of transparency

  • Impact on organizations and ecosystems traditionally associated with "knowledge work"

  • Concentration of power and political risks

  • Intellectual property: it is not yet clear how intellectual property rights apply to AI-generated content


REGULATING GENERATIVE AI IN EUROPE


Given the risks just discussed, it seems appropriate to regulate generative AI.


At the very least, as suggested in the initial version of the EU AI Act, users should be informed that they are interacting with a chatbot.


But given the potential breadth of applications, that won't be enough!


In the EU AI Act, generative AI is also considered a high-risk system under Article 4b due to its flexibility. Its general-purpose nature also makes it, a priori, part of the list of high-risk applications in Annex III of the 2021 proposal.


In the compromise version of Annex III of the EU AI Act released on February 6, 2023, a category was introduced to cover generative AI in the list of high-risk AI systems.


"Any AI-generated text that might be mistaken for human-generated is considered at risk unless it undergoes human review and a person or organisation is legally liable for it.


For AI-generated deepfakes (audio-visual content representing a person doing or saying something that never happened), the high-risk category applies unless it is an obvious artistic work."


Therefore, a comprehensive risk management system, along with the more stringent obligations required for high-risk systems (data governance, transparency, etc.), must be established for the system's possible uses.


This task will prove difficult to put into practice given the number of potential applications.


OBLIGATIONS TAILORED TO SPECIFIC CHALLENGES


The EU regulator wants to introduce tougher rules on generative AI (see Thierry Breton's statement in footnote (1)).


Instead of applying these requirements directly and uniformly, it would be better to specify how generative AI should be regulated according to its specific (and not only general) purpose. This should rest on a detailed risk impact assessment that takes into account the particular characteristics of these systems: their contribution to the final outcome (as a component of an AI system), human oversight, the need for transparency, and the safeguards required for content moderation.


A one-size-fits-all regulatory approach would be both costly and very difficult to implement!


----------------------------------------------------------------------------------

(1) https://www.reuters.com/technology/eus-breton-warns-chatgpt-risks-ai-rules-seek-tackle-concerns-2023-02-03/


If you would like to discuss this topic further, feel free to contact:

Christine Laut

SAFE AI NOW

www.safeainow.com

contact@safeainow.com


Photo by Revisions on Unsplash

Do you want to know more about how to save time and energy when implementing your risk management framework for AI?



