On March 13, 2024, the European Parliament announced the passage of the "AI Act," which will take effect in phases beginning in mid-2025.
With the EU taking the lead in AI legislation, there is optimism that the Act could repeat the success of the General Data Protection Regulation (GDPR) and serve as a model for lawmakers around the world as they craft their own regulatory frameworks for the technology.
However, the AI Act also underscores how difficult the technology is to regulate: even the fundamental question of what constitutes an AI system still sparks lengthy debate.
Varying degrees of risk
The EU categorizes AI systems into four risk levels: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. The higher the risk level, the heavier the obligations on companies, with some uses banned outright; lower-risk AI systems carry correspondingly lighter responsibilities.
AI systems deemed an unacceptable risk are prohibited entirely. AI may not, for example, be used to harm specific groups on the basis of gender, religion, or race, nor to conduct social credit scoring. Biometric identification is also prohibited in certain circumstances.
In October 2023, US President Joe Biden's executive order on AI likewise highlighted the risks of biometric recognition and directed the development of guidelines to prevent its misuse.
High-risk AI systems are those that touch on health, safety, or other fundamental rights in areas critical to people's daily lives. AI tied to basic infrastructure affects public safety, for instance, while AI used to scan and screen resumes during recruitment could produce unfair results.
Banks using AI to determine individuals' loan eligibility, where mistakes could infringe on personal rights, also fall into this category. High-risk AI systems must pass rigorous risk assessments and testing before they can operate in the EU.
The transparency of GenAI
Generative AI chatbots fall into the Limited Risk category, along with image generation and deepfake technologies. Limited Risk AI emphasizes transparency: companies must disclose to users when they are interacting with AI and when an image is AI-generated, to prevent deception.
OpenAI and Google have begun supporting watermarking features for AI-generated images to make them easier to identify. OpenAI's DALL-E 3 will support the C2PA standard, producing watermarked images with metadata marking them as AI-created. The C2PA (Coalition for Content Provenance and Authenticity), which certifies the source and history of media content, counts Microsoft, Google, Adobe, Arm, Intel, Sony, the BBC, and Publicis Groupe among its members.
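For readers curious what that metadata looks like in practice, the C2PA project's open-source c2pa-python bindings can read a manifest embedded in an image. The snippet below is a minimal sketch rather than an official workflow: the Reader calls follow recent releases of the bindings but should be treated as an assumption to check against the library's current documentation, and the file name is purely hypothetical.

```python
# Sketch: inspect an image for embedded C2PA Content Credentials.
# Assumes `pip install c2pa-python`; the Reader API shown here follows recent
# releases of the bindings and may differ in other versions.
import json
import c2pa

def describe_credentials(path: str) -> None:
    try:
        reader = c2pa.Reader.from_file(path)  # parse the embedded manifest store
    except Exception as err:
        print(f"No readable C2PA manifest: {err}")
        return
    store = json.loads(reader.json())          # manifest store as a JSON document
    active = store.get("active_manifest", "")
    manifest = store.get("manifests", {}).get(active, {})
    # claim_generator records which tool produced the manifest (e.g. an image generator)
    print("Claim generator:", manifest.get("claim_generator", "unknown"))

describe_credentials("dalle3_output.png")  # hypothetical file name
```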
Google DeepMind's SynthID tool, developed in collaboration with Google Cloud, embeds a subtle watermark directly in an image's pixels, invisible to the naked eye but detectable with software, which sidesteps aesthetic concerns.
With 2024 being an election year in many countries, tech companies are gearing up to avoid accusations of influencing election results. While watermarking tools can help identify AI-generated images, users still have to check for and verify them on their own.
The Minimal Risk category includes recommendation systems and spam filters, which need only comply with general EU law; the AI Act does not regulate them specifically. While search-engine and e-commerce ranking systems are largely off the hook, targeted advertising systems may still sit close to the line.
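Taken together, the four tiers amount to a simple mapping from risk level to obligation. The sketch below is purely illustrative: the tier names, example uses, and obligations are drawn from the descriptions above, while the data structure and helper function are our own shorthand, not anything defined in the Act.

```python
# Illustrative summary of the AI Act's four risk tiers as described above.
# The dictionary layout is schematic; only the tier names, example uses,
# and obligations reflect the Act itself.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social credit scoring", "harming protected groups"],
        "obligation": "prohibited outright",
    },
    "high": {
        "examples": ["critical infrastructure", "resume screening", "loan eligibility"],
        "obligation": "rigorous risk assessment and testing before deployment",
    },
    "limited": {
        "examples": ["chatbots", "image generation", "deepfakes"],
        "obligation": "transparency: disclose AI interaction and AI-generated content",
    },
    "minimal": {
        "examples": ["recommendation systems", "spam filters"],
        "obligation": "no AI-specific rules; general EU law applies",
    },
}

def obligations_for(use_case: str) -> str:
    """Return the obligation of the first tier whose examples mention the use case."""
    for tier, info in RISK_TIERS.items():
        if any(use_case in example for example in info["examples"]):
            return f"{tier}: {info['obligation']}"
    return "unclassified: assess against the Act's criteria"

print(obligations_for("resume screening"))
# -> high: rigorous risk assessment and testing before deployment
```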
The AI Act also defines "General Purpose AI Models" (GPAI), whose providers must meet transparency requirements, complete risk assessments, and comply with copyright rules.
Regulatory difficulties
Violating the AI Act can result in fines of up to EUR 35 million or 7% of a company's global annual turnover for the most serious breaches. However, enforcing the law across the board may prove more challenging than anticipated: EU member states must each designate supervisory authorities, which could strain smaller states financially and operationally.
Moreover, critics in both industry and academia point to the broad and ambiguous definition of AI systems, which creates regulatory uncertainty and makes investors and developers wary of the EU market because of potential compliance costs. Industry concerns also center on the law hindering the growth of the EU's AI industry, and on unresolved questions about who bears culpability when open-source models are used unlawfully somewhere along the AI supply chain.
Consequently, there is growing demand for the EU to develop effective, measurable regulatory methods that focus on the consequences of AI rather than solely on the technology itself. Academic opinion is divided: some echo industry concerns about overregulation, while others find the current rules insufficient, warning that treating AI systems as "technically neutral" risks overlooking the harms the technology's development could bring.
On a different note, the post-Brexit UK is shaping its own approach to AI policy, with an emphasis on LLM development. Unlike the EU, the UK government's national AI strategy explicitly aims to "encourage innovation," and Prime Minister Rishi Sunak has stressed the need to avoid hasty AI regulation. In late 2023, the UK hosted an AI summit that drew more than 20 countries and nearly a hundred major companies, declaring its ambition to become Europe's AI hub.
By embracing innovation, the UK government has already attracted OpenAI to set up an office in London. Amid US-China competition, the EU must work out how to support its own businesses, catch up quickly, and ensure that regulation fosters rather than stifles the industry.