Wed. May 15th, 2024
The European Union (Image Credits: Vincent Kessler/Reuters)

European Union policymakers clinched a landmark deal on December 8th, on the world’s first comprehensive artificial intelligence rules, setting the stage for legal supervision of AI technology, which holds the promise of reshaping everyday life but has also prompted concerns about existential threats to humanity.

The “AI Act” emerged from marathon negotiations lasting approximately 38 hours between lawmakers and policymakers. Both sides are scheduled to finalize the details in the coming days, which could still influence the final form of the legislation.

European Commission President Ursula von der Leyen said: “The AI Act is a global first. A unique legal framework for the development of AI you can trust.”

“And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement,” she remarked.

First proposed by the EU’s executive branch in 2021, the legislation gained momentum after the launch of OpenAI’s ChatGPT last year, which brought the swiftly evolving field of AI to widespread public attention.

Why does it need legislation?

Recognized as a potential global benchmark, the act reflects governments’ efforts to leverage AI’s advantages while mitigating risks such as disinformation, job displacement, and copyright infringement.

AI systems such as ChatGPT have burst into global awareness, impressing users with their capacity to generate human-like text, images, and music. However, concerns have arisen about the risks associated with this rapidly advancing technology, including its impact on jobs, privacy, and even the potential threats it poses to human life.

Originally designed to address the perils posed by specific AI functions graded according to their risk levels, ranging from low to unacceptable, the act underwent expansion at the insistence of lawmakers. This extension now encompasses foundation models, the sophisticated systems that form the fundamental backbone of general-purpose AI services, such as ChatGPT and Google’s Bard chatbot.

Furthermore, foundation models appeared to be a major point of contention for Europe, but negotiators successfully achieved a tentative compromise in the early stages of the talks.

DigitalEurope Director General Cecilia Bonefeld-Dahl remarked: “We have a deal, but at what cost? We fully support a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head.”

Governments across the globe are seeking to balance the advantages of the ever-evolving technology, which can engage in human-like conversation, answer questions, and write computer code, against the need to put guardrails in place.

Changes it puts forth

The state can use real-time biometric surveillance in public spaces only in cases involving victims of certain crimes, the prevention of genuine, present, or foreseeable threats such as terrorist attacks, and searches for people suspected of the most serious crimes.

The agreement prohibits behavioral manipulation, the indiscriminate extraction of facial images from the internet or CCTV footage, social scoring, and biometric categorization systems used to deduce political, religious and philosophical beliefs, sexual orientation, and race.

Consumers would retain the right to submit complaints and receive meaningful explanations. Penalties for breaches would range from 7.5 million euros or 1.5 percent of turnover to 35 million euros or 7 percent of global turnover.
