Legal Issues with the Use of Artificial Intelligence in Businesses

Artificial intelligence (AI) presents both opportunities and challenges for businesses. While AI can offer significant benefits, and many perceive the risk of not adopting AI as equally high, it is crucial to understand and address the associated legal risks to ensure compliant and ethical AI development and use. Failure to adhere to legal requirements, or to take proper account of them when planning an AI strategy, can also limit an enterprise’s ability to harness the potential of AI.

The convergence of generative AI’s disruptive capabilities, the widespread availability of AI solutions, and the growing integration of AI into ordinary business applications has accelerated the adoption of AI in businesses. Enterprises across most sectors are now actively exploring the incorporation of AI into their operations, and devising AI strategies, to enhance their services and stay competitive.

Regulators have also increasingly directed their attention toward AI in recent years. Their focus primarily revolves around ensuring ethical use, data privacy, accountability, and transparency in AI systems, aiming to strike a balance between fostering innovation and safeguarding against the potential risks associated with AI.

The EU recently reached a provisional agreement on the final draft text of the Artificial Intelligence Act (AI Act), a pivotal milestone as the first comprehensive regulation specifically targeting AI. The AI Act will, however, be only one of several sets of legal rules relevant to the use of AI for business purposes.

The AI Act

Overview

The AI Act establishes a comprehensive legal framework designed to govern the sale and use of AI systems within the EU market. The regulation aims to create a harmonized framework for the responsible development, deployment, and use of AI systems, prioritizing both innovation and the safeguarding of fundamental rights, while facilitating the free flow of AI-based goods and services throughout the EU.

Following several iterations with unusually significant differences between drafts, the EU reached a provisional agreement on the final draft text of the AI Act on December 8, 2023.

The EU aims to formally adopt the regulation in early 2024, and it will most likely come into force in the EU two years after its adoption, possibly with an additional transition period of up to two years depending on the AI system’s risk classification.

The AI Act will almost certainly be deemed EEA-relevant and will therefore also be incorporated into the EEA Agreement. Norway will presumably aspire to implement the AI Act on a timeline similar to the EU’s, although the EEA ratification procedures could cause delays.

Risk-based approach

The AI Act adopts a risk-based approach, classifying AI systems into four risk-based tiers depending on their potential impact on fundamental rights of individuals and societal values.

  • Unacceptable-risk AI: AI systems that significantly conflict with fundamental rights are prohibited, except in certain narrowly defined cases, primarily relating to law enforcement. Examples of such systems include social scoring systems and real-time remote biometric identification of individuals in public spaces.
  • High-risk AI: High-risk AI systems, typically those used in critical societal applications such as transport, healthcare, and employment, are permitted but will be subject to detailed requirements regarding risk and quality control. Providers must conduct conformity assessments, register their systems, and implement human oversight mechanisms, in addition to meeting a number of other requirements, such as data governance and data management obligations, transparency obligations, and the obligation to carry out a fundamental rights impact assessment.
  • Limited- and minimal-risk AI: AI systems posing limited or minimal risk, which include those typically used in general business applications as well as chatbots and deepfakes, face only very limited regulatory requirements under the AI Act.

The AI Act also introduces specific requirements for so-called foundation models and general-purpose AI systems, which include generative AI solutions such as ChatGPT. These systems must undergo thorough risk assessments and meet certain ethical guidelines, in addition to transparency requirements relating to their training data. High-impact general-purpose AI models with systemic risk will be subject to additional obligations, including model evaluations, an obligation to assess and mitigate systemic risks, adversarial testing, reporting on serious incidents, cybersecurity-related requirements, and reporting on their energy efficiency.

It should be noted that the risk classification of the AI Act relates to certain societal risks and will not necessarily give any relevant indication of the commercial and legal business risks that the use of an AI system may entail for an enterprise. An AI system with a low-risk classification under the AI Act may still pose a high business risk to a company, or trigger significant compliance obligations under regulations other than the AI Act, and vice versa. For example, the use of an AI system by an enterprise for processing personal data could be considered high risk and trigger several obligations under the GDPR, even if the AI Act classifies the system as limited or minimal risk. This also highlights that the legal risks associated with the use of AI by an enterprise must be seen in a wider and more enterprise-specific context than what the AI Act mandates.

Duty bearers under the AI Act

The AI Act is predominantly a product safety regulation that imposes obligations on providers of AI systems, with a focus on AI systems deemed to entail a significant risk to fundamental values such as health, safety, and the fundamental rights of individuals.

Providers of high-risk AI systems will bear the primary burdens of compliance under the AI Act, and must adapt their products, working processes and compliance frameworks to meet the obligations of the AI Act. This includes several requirements related to various risk management and quality control efforts.

Enterprises that merely utilize AI systems, without being providers or having a similar role in the supply chain of the AI system, face limited direct obligations under the AI Act. However, the use of high-risk AI systems will still require risk assessments and human oversight, and a fundamental rights impact assessment, mirroring the DPIA under the GDPR, may also be required for the use of such systems.

Governance

A governance framework with supervisory authorities and market monitoring mechanisms will ensure the effective enforcement of the AI Act. Violations can result in substantial fines, mirroring the enforcement regime of the GDPR.

Other Legal Considerations

The AI Act complements, but does not limit, obligations arising from other regulations relating to the use of AI. For ordinary users of AI, regulations other than the AI Act will often be more relevant to their use of AI, and impose more legal obligations on them, than the AI Act itself.

As mentioned above, the GDPR is likely to remain significant for any processing of personal data, and the use of AI for personal data processing purposes can trigger additional obligations under the GDPR, such as an obligation to perform a data protection impact assessment (DPIA), regardless of how the system is classified under the AI Act.

The use of AI systems could also affect information security, and the use of AI systems by enterprises that are subject to regulatory information security requirements could be restricted under such legislation.

Sector-specific regulations may also apply to the use of AI by certain enterprises, especially within regulated sectors such as health, finance, and transport. Many of the sectors that are subject to sector-specific legislation will also fall within the high-risk AI categories, but additional obligations and restrictions may follow from sector-specific law.

Additionally, contracts could impose obligations affecting a business’s use of AI. For example, confidentiality obligations could limit a company’s right to use certain data for AI-related purposes. Companies in the AI supply chain should also expect that any AI-related obligations imposed on their customers may be mirrored back onto them.

Determining liability for damages resulting from AI remains a complex and partly unresolved matter. While accountability for AI-induced harm is essential, the allocation of risks and liabilities in this area is unclear. Applying traditional principles of torts and damages to this scenario is not entirely straightforward, and relevant case law is still lacking. Where a contractual relationship exists between the liable party and the affected party, liability is frequently governed by the terms of the contract, a point the parties should bear in mind when entering into contracts relating to AI. Outside of contract, the EU has proposed extending product liability regulations to encompass AI systems, and introducing rules that shift the burden of proof for damages related to AI, in an attempt to address these emerging challenges in liability attribution.

The use of AI also raises concerns pertaining to intellectual property law, especially regarding the use of copyrighted training data and the protection of AI-generated content.

Recommendations for Use of AI

Businesses can leverage AI’s potential while navigating the legal landscape by taking proactive steps to address the regulatory and commercial concerns raised by use of AI.

We recommend that all companies considering the use of AI focus on the following:

  1. AI mapping: Businesses should map which AI systems are in use in the organization (both through its own computer facilities and through online services), and which legal responsibilities and restrictions apply to such use.
  2. Risk assessment: Businesses should conduct a risk assessment to identify and address potential risks associated with their AI usage, including both legal and commercial risks.
  3. Strategy: A clear AI strategy and guidelines, aligned with the company’s overall business objectives and risk management framework, should be established.
  4. Technical, organizational, and contractual measures: Businesses should implement appropriate technical, organizational, and contractual measures to mitigate identified risks. AI approaches should be aligned with existing privacy and security measures where appropriate. Contracts relating to AI systems should contain appropriate clauses to ensure legal compliance and risk management.
  5. AI guidelines: Businesses should adopt guidelines for AI development and usage within their organization to encourage responsible and value-adding use of AI, ensure compliance with legal and ethical principles, and restrict undesired use of AI by employees.
  6. Board oversight: While day-to-day AI management should be left to the organization, the company’s use of AI is ultimately the responsibility of the board. The company’s AI usage should be on the board’s agenda to align AI strategy with overall business goals and risk management.
