Regulating AI: Navigating Old Laws in the Face of Technological Advancements

Monitor screen with OpenAI logo on black background

Europe Takes the Lead in Drafting New AI Rules, but Enforcing Them Will Take Time

BRUSSELS – As the development of powerful artificial intelligence (AI) services, such as ChatGPT, gains momentum, regulators find themselves relying on outdated laws to govern a technology that has the potential to revolutionize societies and businesses.

At the forefront of this regulatory effort is the European Union (EU), which is actively drafting new AI rules to address privacy and safety concerns associated with the rapid advances in generative AI technology, the backbone of OpenAI’s ChatGPT.

However, the enforcement of these new regulations will likely take several years.

“In the absence of regulations, the only recourse for governments is to apply existing rules,” says Massimiliano Cimnaghi, a European data governance expert at consultancy BIP. In practice, that means using data protection laws to safeguard personal data and applying general safety regulations to protect individuals, even though those rules were not designed with AI in mind.

Europe’s privacy watchdogs established a task force in April to tackle issues related to ChatGPT. Italy’s regulator, Garante, went further and took the service offline, accusing OpenAI of violating the EU’s General Data Protection Regulation (GDPR), the comprehensive privacy regime that took effect in 2018. ChatGPT was reinstated after OpenAI added age verification features and allowed European users to block their information from being used to train the AI model.

Furthermore, data protection authorities in France and Spain launched probes into OpenAI’s compliance with privacy laws.

The reliability of generative AI models, like ChatGPT, has come under scrutiny due to their tendency to produce errors or “hallucinations,” which often involve disseminating misinformation with unwarranted certainty.

These errors can have severe consequences: if banks or government departments use AI to make decisions, they could result in unfair loan rejections or wrongly denied benefit payments. Several major tech companies, including Google and Microsoft, have stopped using AI products deemed ethically questionable in areas like finance.

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken February 23, 2023. REUTERS/Dado Ruvic/Illustration

Regulators and Experts Seek to Address Privacy, Safety, and Copyright Concerns Surrounding Generative AI

Regulators aim to apply existing rules covering copyright and data privacy to two key concerns: the data used to train AI models and the content they produce. This approach has regulators and experts in the United States and Europe interpreting, and sometimes reinterpreting, their mandates. The U.S. Federal Trade Commission, for instance, has investigated algorithms for discriminatory practices under its existing regulatory powers.

The EU’s proposed AI Act will require companies like OpenAI to disclose any copyrighted material used to train their models, potentially leaving them vulnerable to legal challenges. However, proving copyright infringement may not be straightforward, as it would involve assessing the extent to which an AI model directly copies and publishes someone else’s material.


Regulators are also exploring creative ways to apply existing laws to AI. The French data regulator, CNIL, is taking the lead on addressing AI bias, an area traditionally handled by the Défenseur des Droits. CNIL is considering using provisions of the GDPR that protect individuals from automated decision-making, although the legal sufficiency of these measures remains uncertain.

A response by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. REUTERS/Florence Lo/Illustration

The pace of technological advancement requires regulators to adapt quickly. Industry insiders, however, have called for greater engagement between regulators and corporate leaders: dialogue between the two has so far been limited, and the right balance still needs to be struck between consumer protection and business growth.

As the race to regulate AI intensifies, it becomes imperative to navigate the intersection of old laws and new technology to ensure privacy, safety, and responsible use of AI systems. – Reuters