Regulating the Buzzword – What the AI Act Means for Your Business
Client Alert | 3 min read | 12.12.23
First Mover
On Friday 8 December 2023, after a period of lengthy and intense negotiations, European legislators reached a political agreement on the AI Act. Proposed by the European Commission in April 2021, the Act positions the EU as a trailblazer in regulating the groundbreaking technology of Artificial Intelligence. Although the final text is not yet available, the agreement settles the content of the regulation.
The AI Act is the first significant, all-encompassing regulation focused on the development and use of Artificial Intelligence. It establishes a uniform legal framework across the EU, with the explicit goals of ensuring that AI used in the European market is legal, safe, and trustworthy.
While the legislation does not regulate the technology as such and is in that sense technology-neutral, it sets rules for the development and use of AI in specific cases.
Importantly, the AI Act is intended to have an extraterritorial scope of application. In some cases, its rules will also apply to providers and users established outside of the EU.
Risk-Based Approach
The EU legislators have adopted a risk-based approach: AI systems posing minimal risks are not heavily regulated, “high-risk” AI systems are in principle permitted but will carry a more significant regulatory burden, and those considered to pose an unacceptable risk for the health, safety and fundamental rights of individuals will be banned.
Organizations will need to carry out a thorough mapping of all AI systems to assess whether obligations apply, such as mandatory fundamental rights impact assessments. In doing so, organizations can build on much of the data mapping work that should have been done for GDPR compliance. Consequently, privacy professionals will play a pivotal role in compliance efforts related to AI.
General Purpose AI / Foundation Models
In the past, AI-based applications were designed for specific tasks. But recent years have seen the development of AI systems that can be employed for a wide array of tasks (including those previously unforeseen) with minimal modifications, thus serving a general purpose.
This has led to the creation of “foundation models”, which serve as the basis for a multitude of different applications. This type of development presents a so-called “single point of failure” risk: if there's a flaw in the model, it can affect all downstream applications built on it.
Regulation of these models proved to be a significant point of contention throughout the interinstitutional legislative process. Now that a political agreement has been reached, reports indicate that one way in which these models will be regulated is through enhanced transparency requirements. Even stricter obligations will apply to high-impact, general purpose AI models that pose a systemic risk. The AI Office, a supervisory body to be created within the European Commission, will play a key role in the enforcement of these provisions.
Enforcement
Similar to the General Data Protection Regulation and other recent EU legislation, fines for non-compliance are high: they range from 7.5 million euro or 1.5% of global turnover up to 35 million euro or 7% of global turnover, depending on the infringement and the annual worldwide revenue of the organization.
Next Steps
The text agreed on Friday 8 December 2023 will have to be formally adopted by both Parliament and Council. The AI Act will enter into force 20 days after publication in the Official Journal of the European Union and, with exceptions for some specific provisions which apply earlier, become applicable two years after its entry into force.
Certain provisions will apply earlier during this transitional period: the prohibition on AI systems posing an unacceptable risk will apply 6 months after entry into force, and the rules on general purpose AI will apply 12 months after entry into force. To bridge the transitional period, the Commission will launch the “AI Pact”, convening developers from around the world to commit voluntarily to the AI Act’s obligations ahead of its application.
Crowell continuously and actively monitors developments regarding the regulation of AI in the EU. Feel free to reach out if you have any questions.