The AI Act
Overview
The AI Act establishes a comprehensive legal framework governing the placing on the market and use of AI systems in the EU. The regulation aims to harmonize the rules for the responsible development, deployment, and use of AI systems, balancing the promotion of innovation against the safeguarding of fundamental rights, while facilitating the free flow of AI-based goods and services throughout the EU.
Following several draft iterations with unusually significant differences between them, the EU reached a provisional agreement on a final draft text of the Artificial Intelligence Act (AI Act) on December 8, 2023.
The EU aims to formally adopt the regulation in early 2024, and it will most likely take effect two years after its adoption, possibly with an additional transition period of up to two years depending on the AI system's risk classification.
The AI Act will almost certainly be deemed EEA-relevant and therefore be incorporated into the EEA Agreement. Norway will presumably aim to implement the AI Act on a timeline similar to the EU's, although EEA ratification procedures may cause delays.
Risk-based approach
The AI Act adopts a risk-based approach, classifying AI systems into four risk tiers depending on their potential impact on the fundamental rights of individuals and on societal values.
- Unacceptable-risk AI: AI systems that significantly conflict with fundamental rights are prohibited, save for certain narrowly defined exceptions, primarily for law enforcement purposes. Examples of such systems include social scoring systems and real-time remote biometric identification of individuals in publicly accessible spaces.
- High-risk AI: High-risk AI systems, typically those used in critical societal domains such as transport, healthcare, and employment, are permitted but will be subject to detailed risk-management and quality-control requirements. Providers must conduct conformity assessments, register their systems, and implement human oversight mechanisms, and must also comply with a number of further requirements, including data governance and transparency obligations and the obligation to carry out a fundamental rights impact assessment.
- Limited- and minimal-risk AI: AI systems posing limited or minimal risk, which include many systems used in general business applications, chatbots, and deepfakes, face only limited regulatory requirements under the AI Act.
The AI Act also introduces specific requirements for so-called foundation models and general-purpose AI systems, including generative AI solutions such as ChatGPT. These systems must undergo thorough risk assessments and adhere to certain ethical guidelines, in addition to transparency requirements relating to, among other things, their training data. High-impact general-purpose AI models posing systemic risk will be subject to additional obligations, including model evaluations, the assessment and mitigation of systemic risks, adversarial testing, reporting of serious incidents, cybersecurity requirements, and reporting on energy efficiency.
It should be noted that the AI Act's risk classification addresses certain societal risks and will not necessarily indicate the commercial and legal business risks that the use of an AI system may entail for an enterprise. An AI system classified as low risk under the AI Act may still pose a high business risk to a company, or trigger significant compliance obligations under regulations other than the AI Act, and vice versa. For example, an enterprise's use of an AI system to process personal data could be considered high risk and trigger several obligations under the GDPR, even if the AI Act classifies the system as limited or minimal risk. This highlights that the legal risks associated with an enterprise's use of AI must be assessed in a wider and more enterprise-specific context than the AI Act mandates.
Duty bearers under the AI Act
The AI Act is predominantly a product safety regulation that imposes obligations on providers of AI systems, focusing on AI systems deemed to pose a significant risk to fundamental values such as health, safety, and the fundamental rights of individuals.
Providers of high-risk AI systems will bear the primary compliance burden under the AI Act and must adapt their products, working processes, and compliance frameworks to meet its obligations, including a range of risk-management and quality-control requirements.
Enterprises that merely use AI systems, without being providers or having a similar role in the AI system's supply chain, face limited direct obligations under the AI Act. However, the use of high-risk AI systems will still require risk assessments and human oversight, and a fundamental rights impact assessment, mirroring the DPIA under the GDPR, may also be required for the use of such systems.
Governance
A governance framework with supervisory authorities and market-monitoring mechanisms will ensure the effective enforcement of the AI Act. Violations can result in substantial fines, under the provisional agreement up to EUR 35 million or 7% of global annual turnover for the most serious infringements, mirroring the enforcement regime of the GDPR.