AI Watch: Global regulatory tracker - OECD

White & Case LLP

The OECD's AI recommendations encourage Member States to uphold principles of trustworthy AI.


Laws/Regulations directly regulating AI (the “AI Regulations”)

The OECD's Recommendation of the Council on Artificial Intelligence1 (the "Recommendation"), adopted by 46 governments2 as of July 2021 (the "Adherents"), contains:

  • The OECD's AI Principles (the "Principles"), which were the first intergovernmental standard on AI and formed the basis for the G20's AI Principles3
  • Five recommendations to be implemented in the Adherents' national policies and international cooperation for trustworthy AI (the "Five Recommendations")

The OECD's policy paper on "Assessing future AI risks, benefits, and policy imperatives",4 published on November 14, 2024, emphasizes the need for proactive AI governance.

On February 28, 2025, the OECD published a policy paper entitled "Towards a common reporting framework for AI incidents", which proposes a common framework for AI incident reporting. The proposed framework consists of a detailed set of criteria for reporting AI incidents (e.g., "description of incident", "date of first known occurrence", "severity", and "harm type"). The OECD considers that these criteria "summarise the information needed to understand an AI incident", while recognising that additional criteria may be necessary to align with specific reporting frameworks.5 It remains to be seen whether governments will adopt these criteria.
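
For illustration only, the sketch below (in Python) shows how an organization might structure an internal record around the reporting criteria quoted above. The field names, severity scale, and example values are assumptions made for this illustration; they do not reproduce the OECD's proposed framework.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class Severity(Enum):
        # Illustrative severity scale; the OECD paper does not prescribe these values
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass
    class AIIncidentReport:
        # Hypothetical internal record loosely mirroring the criteria quoted above:
        # "description of incident", "date of first known occurrence", "severity", "harm type"
        description: str
        date_of_first_known_occurrence: date
        severity: Severity
        harm_types: list[str] = field(default_factory=list)

    # Example usage with illustrative values
    incident = AIIncidentReport(
        description="Chatbot generated misleading medical advice",
        date_of_first_known_occurrence=date(2025, 3, 1),
        severity=Severity.MEDIUM,
        harm_types=["physical or psychological harm"],
    )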

On May 2, 2025, the OECD published a report entitled "The Adoption of Artificial Intelligence in Firms", which offers a detailed analysis of AI uptake across businesses in the G7 and Brazil. The report indicates that 83% of responding businesses expressed a desire to receive more information regarding current and forthcoming regulations around data or AI, or on the expected returns from investment in AI. Many businesses also indicated that specific policies, including tax incentives, partnerships with educational institutions, and public sector initiatives, would help strengthen AI uptake. The report concludes with recommendations for governments and policymakers to facilitate the continued adoption of AI.6

The OECD's Expert Group on AI Futures explores potential AI impacts, guiding policymakers in crafting forward-looking policies. Key benefits identified include accelerating scientific progress, improving economic growth, reducing inequality, enhancing decision-making, and empowering citizens. However, significant risks such as cyber threats, disinformation, AI safety lapses, concentration of power, and privacy violations are also highlighted. The group's report suggests ten policy priorities, including establishing clear AI liability rules, restricting harmful AI uses, ensuring AI transparency, and promoting international cooperation to manage competitive race dynamics. Governments are urged to implement these strategies to maximize AI benefits and mitigate risks; while ongoing initiatives indicate progress, the report emphasizes the need for more concrete action. Nevertheless, it remains unclear whether such urgings will be sufficient to stem the divergence of AI regulatory approaches that has arisen from one jurisdiction to the next.


Status of the AI Regulations

The Adherents have agreed to promote, implement, and adhere to the Recommendation. The Principles contribute to other AI initiatives, such as the G7's Hiroshima AI Process Comprehensive Policy Framework (including the International Guiding Principles on AI for Organizations Developing Advanced AI Systems and the International Code of Conduct for Organizations Developing Advanced AI Systems).

Other laws affecting AI

While certain OECD instruments can be legally binding on members, most are not. However, OECD recommendations represent a political commitment to the principles they contain and entail an expectation that Adherents will endeavor to implement them.7 Notwithstanding the foregoing, a non-exhaustive list of OECD guidance that does not directly seek to regulate AI, but may affect the development or use of AI, includes:

  • The Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data
  • The OECD Guidelines for Multinational Enterprises
  • The Recommendation of the Council on Consumer Protection in E-commerce

Definition of “AI”

The OECD's definition of "AI system" was revised on November 8, 2023, to ensure that it continues to accurately reflect technological developments, including with respect to generative AI.8 AI is defined in the Recommendation using the following terms:

  • "AI actors" means "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI."
  • "AI knowledge" means "the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices required to understand and participate in the AI system lifecycle."
  • "AI system" means "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
  • "AI system lifecycle" involves the following phases: "i) ‘design, data and models'; which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building; ii) ‘verification and validation'; iii) ‘deployment'; and iv) ‘operation and monitoring'. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase."

Territorial scope

The Adherents (who are expected to promote and implement the Recommendation – see above) comprise 46 OECD members and non-members.9

Specific obligations would be placed on AI actors by Adherents implementing the Recommendation. However, the term "AI actors" is not defined in the Recommendation by reference to territory.

Sectoral scope

The Recommendation is not sector-specific. As discussed above, Adherents are expected to promote and implement the Recommendation and, in doing so, to place specific obligations on AI actors. However, the term "AI actors" is not defined in the Recommendation by reference to sector.

Compliance roles

Adherents are expected to comply with the Recommendation, although the Recommendation does not explicitly govern compliance or regulatory oversight. Certain Principles relating to human-centered values and fairness, transparency and accountability are applicable to AI actors. Whether and to what extent AI actors have to comply with the Principles depends on the relevant Adherent state's approach to implementation.

Core issues that the AI Regulations seek to address

The OECD's AI Regulations are intended to help shape a stable policy environment at the international level that promotes a human-centric approach to trustworthy AI, fosters research, and preserves economic incentives to innovate.10

Risk categorization

AI is not categorized according to risk in the Recommendation.

In order to promote a stable policy environment with regard to AI risk frameworks, the OECD has stated that it intends to analyze the criteria that should be included in a risk assessment and how best to aggregate such criteria, taking into account that different criteria may be interdependent.11

Key compliance requirements

The Adherents are expected to promote and implement the following Principles:12

  1. AI should pursue inclusive growth, sustainable development and well-being: This includes reducing economic, social, gender and other inequalities, and protecting natural environments.
  2. AI should incorporate human-centered values and fairness: AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle, and implement appropriate safeguards to that end.
  3. AI should be transparent and explainable: AI actors should provide information to foster a general understanding of AI systems, make stakeholders aware of their interactions with AI systems, and enable those affected by an AI system to understand and challenge the outcome.
  4. AI systems should be robust, secure, and safe so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they do not pose an unreasonable safety risk. To this end, AI actors should ensure traceability to enable analysis of the AI systems' output and apply a systematic risk management approach.
  5. Accountability: AI actors should be accountable for the proper functioning of AI systems and for the respect of the Principles.

The Adherents are also expected to promote and implement the Five Recommendations:13

  1. Investing in AI research and development. Governments should consider long-term public investment and encourage private investment in research, development, and open datasets that are representative and respect data privacy and data protection in order to spur innovation in trustworthy AI and support an environment for AI that is free of inappropriate bias.
  2. Fostering a digital ecosystem for AI. Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI by promoting mechanisms, such as data trusts, to ensure the safe, fair, legal and ethical sharing of data.
  3. Shaping an enabling policy environment for AI. Governments should: (i) promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems; and (ii) review and adapt policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
  4. Building human capacity and preparing for labor market transformation. Governments should: (i) collaborate with stakeholders to ensure people are prepared for AI-related changes in society and work by equipping them with necessary skills; (ii) take steps to ensure a fair transition for workers affected by AI, by offering training and support; and (iii) promote the responsible use of AI at work to enhance worker safety and the quality of jobs.
  5. International co-operation for trustworthy AI. Governments should: (i) actively co-operate to advance the Principles and progress the responsible stewardship of AI; (ii) work together in the OECD and other forums to foster the sharing of AI knowledge; (iii) promote the development of multi-stakeholder, consensus-driven global technical standards; and (iv) encourage the development, and their own use, of internationally comparable metrics to measure AI research, development, and deployment, using the evidence to assess progress in the implementation of the Principles.

Regulators

The OECD does not regulate the implementation of the Recommendation, although it does monitor and analyze information relating to AI initiatives through its AI Policy Observatory. The AI Policy Observatory includes a live database of AI strategies, policies and initiatives that countries and other stakeholders can share and update, enabling the comparison of their key elements in an interactive manner. It is continuously updated with AI metrics, measurements, policies and good practices, which in turn inform updates to the practical guidance for implementation.14

The Recommendation does not stipulate how Adherents should regulate the implementation of the Principles in their own jurisdictions.

Enforcement powers and penalties

As the Recommendation is not legally binding, it does not confer enforcement powers or give rise to any penalties for non-compliance. The OECD relies on Adherents to implement the Recommendation and enforce the Principles in their own jurisdictions.

1 Read the OECD's "Recommendation of the Council on Artificial Intelligence" here.
2 OECD Members: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Republic of Türkiye, United Kingdom, United States, and Non-Members: Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore, and Ukraine.
3 Read about the Principles here.
4 See OECD policy paper "Assessing future AI risks, benefits, and policy imperatives" here.
5 See OECD policy paper "Towards a common reporting framework for AI incidents" here.
6 See the OECD report "The Adoption of Artificial Intelligence in Firms" here.
7 "Decisions are adopted by the Council and are legally binding on all Members except those which abstain [whereas] Recommendations are adopted by the Council and are not legally binding [but do] represent a political commitment to the principles they contain and entail an expectation that Adherents will do their best to implement them." See the OECD Legal Framework here.
8 Read the Recommendation here.
9 OECD Members: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Republic of Türkiye, United Kingdom, United States, and Non-Members: Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore, and Ukraine.
10 "RECOGNISING that given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context." See the Recommendation, 'Introduction', here.
11 "The OECD Experts Working Group, with members from across sectors and professions, plans to conduct further analysis of the criteria to include in a risk assessment and how best to aggregate these criteria, taking into account that different criteria may be interdependent." See the "OECD Framework for the Classification of AI systems" here, pg.67.
12 See the Recommendation, Section 1 (1.1 – 1.5), here.
13 See the Recommendation, Section 2 (2.1 – 2.5), here.
14 See the OECD's AI Policy Observatory here.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© White & Case LLP
