The EU AI Act and Obligations for Companies Operating in the European Union
Client Alert | 4 min read | 03.20.24
In an era where regulatory landscapes are rapidly evolving, companies with a footprint in the European Union must stay vigilant and adaptable. The EU has recently unveiled a comprehensive set of rules that impose fresh obligations on both EU and non-EU based companies operating within its borders. This client alert is the first in a series designed to decode the complexities of the new EU regulation and provide actionable insights for businesses to ensure full compliance[1]. Stay tuned as we unravel the details of these pivotal changes and guide you through the steps your business needs to take to align with the EU's heightened regulatory standards.
Scope of the AI Act
The AI Act casts a wide net, encompassing companies that design, develop, or deploy AI systems within the EU. This includes both EU-based entities and non-EU companies that place AI systems on the EU market or whose systems' output is used in the EU.
Prohibited AI Practices
The Act identifies practices that are off-limits, aiming to prevent misuse of AI that could harm individuals or society:
- Manipulation and Deception: AI may not be used to push people into decisions they would not otherwise make.
- Exploitation of Vulnerabilities: AI may not exploit the vulnerabilities of particular groups, for example based on age or disability.
- Social Scoring: social scoring systems that could result in discrimination or unjust treatment are banned.
- Profiling Restrictions: profiling may not be used to determine a person's likelihood of engaging in criminal activity.
- Facial Recognition Databases: compiling facial recognition databases through indiscriminate data scraping is strictly forbidden.
- Emotion Recognition: emotion recognition is subject to contextual limits and is prohibited in workplaces and educational institutions.
- Biometric Categorization: using biometric data to deduce or infer sensitive characteristics, such as race or religion, is largely prohibited.
- Real-Time Biometric Identification: the use of real-time biometric identification in public spaces is generally banned, subject to narrow exceptions.
Mandatory Obligations for High-Risk AI Systems
For AI systems identified as high-risk, the AI Act prescribes a series of stringent requirements aimed at ensuring these technologies are safe and transparent:
- Risk Management System: Providers must implement robust systems to identify, assess, and mitigate risks throughout an AI system's life cycle.
- Data Governance: The quality, representativeness, and security of data used in AI systems must be maintained, for example to avoid biases.
- Human Oversight: There must be mechanisms in place allowing human intervention in AI decision-making, ensuring that technology remains under control and accountable.
- Technical Documentation: Detailed documentation is required to demonstrate compliance with the Act.
- Transparency and Instructions for Use: Deployers must be provided with clear, accessible information about how the AI system works and its limitations, enhancing understanding and trust.
In addition, the AI Act mandates strict transparency requirements for General-Purpose AI (GPAI) models and systems. These entail complying with EU copyright law and providing clear summaries of the content used for training, to ensure the ethical use of data. For GPAI models posing potential systemic risks, additional safeguards apply, including comprehensive model evaluations, systemic risk assessments, and incident reporting, to proactively manage and mitigate risks.
Furthermore, the Act addresses concerns around "deepfakes" by requiring that all artificially generated or manipulated multimedia content be explicitly labeled. This initiative aims to foster an environment where users can readily distinguish between authentic and altered content, reinforcing accountability and trust in the digital ecosystem.
Penalties
The stakes are high. Violations of the AI Act can result in significant penalties. For severe violations related to prohibited AI practices, fines can reach €35 million or 7% of annual global turnover, whichever is higher. Companies cannot afford to take compliance lightly.
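To illustrate how that ceiling scales with company size, the short sketch below applies the "whichever is higher" rule. It is purely illustrative and not legal advice; the function name and the example turnover figure are our own.

```python
# Illustrative sketch only: the theoretical upper limit of a fine for prohibited
# AI practices is the higher of EUR 35 million or 7% of annual global turnover.
# Function name and example figures are hypothetical, for illustration only.
def prohibited_practice_fine_cap(annual_global_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# Example: a company with EUR 1 billion in annual global turnover faces a cap of
# EUR 70 million (7% of turnover), since that exceeds EUR 35 million.
print(prohibited_practice_fine_cap(1_000_000_000))  # 70000000.0
```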
Timeline for AI Act Enforcement
The AI Act is in its final review stages and is expected to be adopted before the current legislative term ends. After formal approval by the Council and publication in the Official Journal, it will enter into force twenty days later. Application of the Act is staggered: prohibitions will apply after six months, codes of practice after nine, general-purpose AI rules after twelve, and obligations for high-risk systems after thirty-six months. This phased approach gives stakeholders time to understand, prepare for, and comply with the new regulatory framework, ensuring a smooth transition into this new era of AI governance.
Note: Our lawyers leveraged AI in creating this client alert, including using a transcript summary created by generative AI. As we explore the potential of generative AI in the legal space, it is our intention and our practice to be transparent with our readers and to showcase the results we are achieving using generative AI with publicly available resources. Crowell’s AI group comprises lawyers and professionals across our global offices, including from Crowell & Moring International (CMI), our international public policy entity, with decades of sector-specific experience. We intend to lead by example in our own responsible use of AI, as it pertains to both the risks and benefits. Should you have questions about the use of generative AI in the legal sector or Crowell’s use of AI, please contact innovation@crowell.com. For this particular client alert, all text was generated by generative AI tools on the basis of the text of the voted AI Act and the EP press release. The human contribution was limited to selecting relevant paragraphs and correcting mistakes. No human authorship is claimed given the limited input. The Brussels AI team can, however, be contacted if you have further questions. Please reach out to Sari Depreeuw.
[1] Disclaimer: the AI system promised a series of alerts to which we (humans with limited time) cannot commit.