June 12, 2024

OECD updates its AI Principles to keep pace with novel and increased risks from general-purpose and generative AI

On May 3, 2024, the OECD released a significant update to its AI Principles, expanding them to address the risks posed by generative AI and large language models. This article breaks down what changed from the 2019 version, why those changes matter for regulated industries, and how organizations can take practical steps toward compliance -- particularly around privacy, bias, and risk management.

Kathrin Gardhouse

On May 3, 2024, the Organisation for Economic Co-operation and Development (OECD) released an updated version of its foundational AI Principles, the first major revision since the original framework was adopted in 2019. The update was not cosmetic. It reflects five years of hard-won lessons about what AI systems actually do in the real world -- and how quickly the risk landscape has evolved with the rise of general-purpose and generative AI.

For organizations operating in regulated industries, this revision carries real weight. The OECD AI Principles have long served as a reference framework for national AI strategies, regulatory development, and corporate governance standards across OECD member countries and beyond. When the OECD updates these principles, it signals a direction of travel for regulators, procurement officers, and compliance teams worldwide.

This article provides a detailed breakdown of what changed in the 2024 update, why each change matters, and what practical steps organizations should take in response -- particularly around privacy protection, bias mitigation, and AI risk management.

What Are the OECD AI Principles?

The OECD AI Principles were first adopted in May 2019, making them one of the earliest intergovernmental standards for responsible AI. They were subsequently incorporated into the G20 AI Principles, giving them an even broader reach. The framework is structured around five core principles for trustworthy AI, alongside a set of recommendations directed at governments and other stakeholders.

The five principles address: inclusive growth and well-being; human-centred values and fairness; transparency and explainability; robustness, security, and safety; and accountability. Together, they are intended to guide the design, development, deployment, and operation of AI systems in ways that benefit people while minimizing harm.

What made the 2019 version significant was that it established a shared international vocabulary for AI governance at a moment when very little such vocabulary existed. What makes the 2024 update significant is that it demonstrates those principles are living documents -- ones that must evolve alongside the technology they govern.

Why Did the OECD Update Its AI Principles in 2024?

The short answer is generative AI. Between 2019 and 2024, the AI landscape shifted dramatically. Large language models (LLMs) became commercially available at scale. General-purpose AI systems capable of performing across a wide range of tasks -- rather than narrow, single-purpose applications -- moved from research environments into enterprise workflows, consumer products, and critical infrastructure.

These developments introduced categories of risk that the 2019 Principles did not fully anticipate. The environmental cost of training and running large models grew substantially. The potential for AI systems to generate and amplify misinformation at scale became a documented concern. Bias embedded in training data was shown to produce discriminatory outputs in high-stakes decisions. And the complexity of multi-actor AI supply chains made it harder to assign clear accountability when things went wrong.

The 2024 update responds directly to these developments. Rather than starting from scratch, the OECD revised the existing five principles with targeted additions and clarifications, preserving the structure of the original framework while filling in the gaps that five years of real-world AI deployment had revealed.

What Are the Key Changes in the 2024 OECD AI Principles?

Principle 1.1: Environmental Sustainability Is Now Explicit

The 2024 revision adds environmental sustainability to the first principle, which addresses AI's potential to contribute to inclusive growth and sustainable development. This is a net-new addition to the framework -- one that did not appear in the 2019 version at all.

The inclusion reflects a growing body of concern around the energy and resource demands of large-scale AI systems. Training frontier models requires enormous computational power, which translates directly into significant carbon emissions and water consumption. As organizations scale their AI operations, the environmental footprint of those systems is increasingly a factor in both regulatory conversations and corporate sustainability reporting.
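
For a sense of how compute translates into emissions, the back-of-envelope estimate below multiplies accelerator-hours by power draw, datacentre overhead, and grid carbon intensity. Every number in it is an assumed placeholder for illustration, not a measurement of any real system.

```python
# Back-of-envelope training-emissions estimate. Every figure below is an
# assumption for illustration, not a measurement of any real model.
gpu_hours = 100_000      # total accelerator-hours for a training run
power_per_gpu_kw = 0.4   # average draw per accelerator, in kW
pue = 1.2                # datacentre power usage effectiveness (overhead)
grid_intensity = 0.4     # kg CO2e emitted per kWh on the local grid

energy_kwh = gpu_hours * power_per_gpu_kw * pue
emissions_tonnes = energy_kwh * grid_intensity / 1000
print(f"{energy_kwh:,.0f} kWh ~= {emissions_tonnes:,.1f} t CO2e")
# -> 48,000 kWh ~= 19.2 t CO2e
```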

For organizations subject to ESG disclosure requirements or public sector sustainability mandates, this addition signals that AI governance and environmental governance are becoming linked concerns.

Principle 1.2: Rule of Law, Human Rights, and the Misinformation Problem

Principle 1.2, which addresses human-centred values and fairness, received significant revisions in 2024. The heading was expanded and now explicitly includes respect for the rule of law, human rights, and democratic values, alongside the privacy and fairness commitments already present in the original.

Most notably, the 2024 version specifically calls out the need to address misinformation and disinformation amplified by AI systems -- while at the same time respecting freedom of expression. This is a careful and deliberate balance. Generative AI systems can produce false or misleading content at scale, and the OECD is making clear that organizations deploying such systems have a responsibility to address this risk. At the same time, the principle makes clear that addressing misinformation cannot become a pretext for suppressing legitimate speech.

The 2024 revision also strengthens the emphasis on human agency and oversight, adding more detail about the safeguards required to ensure that AI systems remain under meaningful human control. It also explicitly acknowledges the risks that arise from misuse -- recognizing that even well-designed systems can be weaponized in ways their developers did not intend.

Principle 1.3: Capabilities and Limitations Must Now Be Disclosed

Principle 1.3 covers transparency and explainability. The 2024 update introduces a specific clarification about what information must be provided to enable meaningful understanding of an AI system. The revised text explicitly requires disclosure of an AI system's capabilities and limitations -- a small but significant addition.

This matters because one of the documented failure modes of deployed AI systems is overconfidence: both on the part of the system itself and on the part of users who do not understand what the system can and cannot reliably do. By requiring that capabilities and limitations be disclosed, the OECD is pushing toward a model of AI transparency that is substantive rather than performative.

For organizations deploying AI systems internally or selling them to customers, this has direct implications for documentation, user interface design, and the way AI-assisted outputs are communicated.
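
What such a disclosure might look like in practice is sketched below as a machine-readable record, loosely modeled on the "model card" documentation pattern. The schema and field names are illustrative assumptions; the OECD Principles require that capabilities and limitations be disclosed but do not prescribe a format.

```python
import json

# Illustrative "model card"-style disclosure record. All field names and
# values are assumptions for this sketch, not an OECD-mandated schema.
disclosure = {
    "system_name": "claims-triage-assistant",   # hypothetical system
    "version": "2.3.1",
    "intended_use": "Prioritise incoming insurance claims for human review.",
    "capabilities": [
        "Ranks claims by estimated complexity",
        "Flags missing documentation",
    ],
    "limitations": [
        "Not validated for claims filed in languages other than English",
        "Rankings are advisory; final triage decisions rest with adjusters",
    ],
    "human_oversight": "High-impact routings require adjuster sign-off.",
}

# Publish alongside the system's user-facing documentation.
print(json.dumps(disclosure, indent=2))
```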

Principle 1.4: Robustness, Security, and Safety Now Include Override and Decommissioning

The 2024 revision made structural changes to Principle 1.4, moving several elements into Principle 1.5 on accountability. In their place, the revised Principle 1.4 adds new requirements around override, repair, decommissioning, and information integrity.

The addition of override and decommissioning requirements reflects a maturation in thinking about the AI system lifecycle. It is no longer sufficient to consider how an AI system behaves when it is working well. Organizations must now also plan for how to correct a system that is behaving incorrectly, how to shut it down safely if necessary, and how to maintain the integrity of the information it produces and consumes throughout its operational life.

These are not abstract concerns. As AI systems become more deeply embedded in organizational workflows, the ability to override or decommission them without causing downstream harm is a genuine operational challenge.
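
One way to make override and decommissioning operational is to gate every model response behind explicit serving controls. The sketch below is a minimal illustration of that pattern; the flag names, routing labels, and fallback behaviour are all assumptions rather than prescribed practice.

```python
from dataclasses import dataclass

@dataclass
class OverrideState:
    """Operational controls for a deployed model.

    In practice these flags would come from a config service or
    feature-flag system; the defaults here are purely illustrative.
    """
    serving_enabled: bool = True        # master kill switch
    require_human_review: bool = False  # degrade to human-in-the-loop

def handle_request(state: OverrideState, model_fn, request):
    # Decommission path: refuse model output entirely, route to humans.
    if not state.serving_enabled:
        return {"decision": None, "route": "manual_queue",
                "reason": "model paused or decommissioned"}
    prediction = model_fn(request)
    # Override path: the model still runs, but a human confirms the output.
    if state.require_human_review:
        return {"decision": prediction, "route": "human_review"}
    return {"decision": prediction, "route": "automated"}

# Example: serving stays on, but every decision is held for human review.
state = OverrideState(serving_enabled=True, require_human_review=True)
print(handle_request(state, lambda r: "approve", {"claim_id": 17}))
```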

Principle 1.5: Systematic Risk Management and Responsible Business Conduct

Principle 1.5 received the most substantial expansion of any of the five principles. The 2024 update adds a large new section covering ongoing, systematic risk management across the full AI system lifecycle, with a specific emphasis on responsible business conduct.

The revised principle calls on AI actors to apply risk management at every phase of the AI system lifecycle on a continuous basis -- not as a one-time assessment at deployment, but as an ongoing discipline. It also specifically calls out the need for cooperation between different AI actors, including suppliers of AI knowledge and resources, system users, and other stakeholders. This reflects the reality that modern AI systems are rarely built or deployed by a single organization; they exist within complex supply chains involving model developers, infrastructure providers, application builders, and end users.

The specific risks called out in the revised principle include harmful bias, human rights (encompassing safety, security, and privacy), labour rights, and intellectual property rights. Bias is particularly notable here -- it did not appear anywhere in the 2019 version of the Principles. Its explicit inclusion in 2024 signals that bias mitigation is now considered a core obligation of responsible AI governance, not a secondary concern.

What the 2024 Updates Mean for Regulated Industries

For organizations in healthcare, financial services, insurance, life sciences, and other regulated sectors, the 2024 OECD AI Principles update is worth reading carefully -- not because it creates new legal obligations directly, but because it shapes the regulatory environment in which legal obligations are developed.

National AI strategies across OECD member countries are built in dialogue with the Principles. The EU AI Act, for example, reflects many of the same values and concepts. Regulators in the United States, Canada, Japan, and Australia have also drawn on OECD frameworks in developing their own guidance. When the OECD adds environmental sustainability, bias mitigation, and enhanced privacy requirements to its principles, those additions are likely to appear in subsequent national and sectoral regulations.

The three new additions -- environmental sustainability, misinformation, and bias -- are especially relevant for organizations that train or fine-tune AI models on sensitive data. In healthcare, for example, training data frequently contains patient records, clinical notes, and other documents rich in personal and demographic information. If that data is used without proper de-identification, the resulting model may not only expose personal information but also encode demographic biases that affect care recommendations.

The same dynamic applies in financial services, where training data drawn from loan applications, customer records, or call transcripts may carry embedded biases related to race, gender, or socioeconomic status. In insurance, claims data and underwriting records present similar risks.

This is where privacy-preserving data practices become a genuine compliance strategy, not just an ethical aspiration.

How Limina Supports Compliance with the OECD AI Principles

The most effective way to protect individual privacy in AI development is to ensure that personal identifiers are not present in training or fine-tuning data to begin with. Where personal data redaction or replacement with synthetic data is feasible from a data utility standpoint, modern technology can handle this at scale -- even across large volumes of unstructured text.

Limina's data de-identification solution is purpose-built for this task. Unlike pattern-matching approaches that rely on rigid rules and regular expressions, Limina's technology is built by linguists and trained to understand language contextually. It identifies personal information not just by recognizing specific formats -- like a phone number or a date -- but by understanding the role that a piece of information plays within a document. This context-aware approach produces meaningfully higher accuracy, particularly in the complex, domain-specific language common in healthcare records, financial documents, and legal filings.
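
As a rough illustration of entity-based redaction -- using an open-source NER model rather than Limina's engine -- the sketch below replaces detected identifiers with typed placeholders before text enters a training corpus. The entity labels chosen and the placeholder format are assumptions for the example.

```python
import spacy

# Illustrative only: an open-source NER model, not Limina's engine.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Entity labels treated as personal identifiers in this sketch.
REDACT_LABELS = {"PERSON", "GPE", "ORG", "DATE"}

def redact(text: str) -> str:
    """Replace detected personal identifiers with typed placeholders."""
    doc = nlp(text)
    pieces, cursor = [], 0
    for ent in doc.ents:
        if ent.label_ in REDACT_LABELS:
            pieces.append(text[cursor:ent.start_char])
            pieces.append(f"[{ent.label_}]")
            cursor = ent.end_char
    pieces.append(text[cursor:])
    return "".join(pieces)

print(redact("Maria Gonzalez visited Toronto General Hospital on May 3, 2024."))
# e.g. "[PERSON] visited [ORG] on [DATE]." (exact spans depend on the model)
```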

Limina detects over 50 entity types across more than 52 languages, processing text at speeds exceeding 70,000 words per second with greater than 99.5% accuracy. The solution is available as an on-premises deployment or via API, and works across a wide range of file formats and document types.

If your organization is preparing AI training datasets, handling sensitive customer data, or building AI-assisted workflows in regulated environments, get in touch with Limina's team to understand how de-identification can be integrated into your data pipeline.

Privacy: From Background Obligation to Core Requirement

One of the clearest signals in the 2024 OECD update is the elevated status of privacy. The word "privacy" appears three times in the revised Principles, compared to once in the 2019 version. This is not a coincidence. As AI systems have become more capable of processing personal information at scale -- and as high-profile data incidents involving AI have become more common -- the OECD is placing privacy at the center of responsible AI governance rather than treating it as a downstream compliance concern.

For organizations in healthcare and life sciences, this alignment between AI governance and privacy protection is familiar territory. HIPAA, GDPR, and other data protection regimes have long required robust handling of personal health information. The 2024 OECD Principles make clear that these expectations now apply throughout the AI development lifecycle, not just in the final deployment environment.

Limina's healthcare data de-identification and pharma and life sciences solutions are designed to address exactly this challenge -- enabling organizations to build and deploy AI systems on sensitive data without exposing individuals to privacy risk.

Bias: A New Core Obligation

The explicit inclusion of bias mitigation in the revised Principle 1.5 is significant. It represents the OECD's formal acknowledgment that bias in AI systems is not merely a technical imperfection but a governance failure that must be actively managed.

One concrete mechanism for reducing bias in AI outputs is removing the demographic and identity-related information that bias most commonly attaches to. When indirect personal identifiers -- including origin, race, gender, and physical attributes -- are removed from training data, the resulting model has less material from which to learn and reproduce biased patterns.
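
In structured data, the same idea can be as simple as dropping identifier columns before training. The sketch below uses a hypothetical loan-application table; the column names and values are assumptions for illustration.

```python
import pandas as pd

# Hypothetical loan-application records. Direct identifiers (name) and
# indirect, bias-prone attributes (gender, ethnicity, postal code) are
# removed before the data is used for model training.
applications = pd.DataFrame({
    "name": ["A. Rivera", "B. Chen"],
    "gender": ["F", "M"],
    "ethnicity": ["Hispanic", "Asian"],
    "postal_code": ["M5V 2T6", "10001"],
    "income": [68000, 71000],
    "loan_amount": [250000, 240000],
    "approved": [1, 1],
})

IDENTIFIER_COLUMNS = ["name", "gender", "ethnicity", "postal_code"]
training_data = applications.drop(columns=IDENTIFIER_COLUMNS)
print(training_data.columns.tolist())  # ['income', 'loan_amount', 'approved']
```

Note that dropping columns addresses only explicit identifiers in structured data; correlated proxy variables can still carry demographic signal, and unstructured text requires entity-level redaction of the kind sketched earlier.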

This approach to bias mitigation is technically feasible today, and it is directly supported by the kind of comprehensive de-identification that Limina provides. For organizations operating in financial services, insurance, and contact centers -- where AI-assisted decisions can have significant consequences for individuals -- this is a practical step that supports both the letter and the spirit of the updated OECD Principles.

2024 OECD AI Principles: Line-by-Line Comparison

The table below compares the original 2019 OECD AI Principles with the revised 2024 text, indicating for each principle what was newly added, what was reorganized or relocated, and what was removed.

| Principle | 2019 Version | 2024 Changes |
| --- | --- | --- |
| 1.1 Inclusive growth, sustainable development, and well-being | AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being. | Added: explicit reference to environmental sustainability. |
| 1.2 Human-centred values and fairness | AI actors should respect the rule of law, human rights, and democratic values. Includes privacy, non-discrimination, and fairness. | Heading expanded; explicit mention of mis- and disinformation and the obligation to address it while respecting freedom of expression. Additional detail on human agency, oversight, and misuse risks. |
| 1.3 Transparency and explainability | AI actors should commit to transparency and responsible disclosure. | Added: specific requirement to disclose capabilities and limitations of AI systems. |
| 1.4 Robustness, security, and safety | AI systems should be robust, secure, and safe throughout their lifecycle, with mechanisms to manage risks. | Elements moved to 1.5; added: override, repair, and decommissioning requirements, plus information integrity obligations. |
| 1.5 Accountability | AI actors should be accountable for the proper functioning of AI systems. | Major expansion: systematic risk management across the full lifecycle; responsible business conduct; cooperation between AI actors; explicit risk categories including harmful bias, human rights, safety, security, privacy, labour rights, and intellectual property rights. Bias is a net-new addition not present in 2019. |

What Organizations Should Do Now

The 2024 OECD AI Principles update does not require organizations to rebuild their AI programs from scratch. It does, however, point clearly to a set of practices that responsible AI governance now requires. Organizations should assess their current approach against each of the five principles as revised, with particular attention to the areas that are genuinely new: environmental impact tracking, misinformation risk assessment, bias documentation, override and decommission planning, and continuous lifecycle risk management.

For organizations that develop or fine-tune AI models on proprietary or sensitive data, the privacy and bias implications of the revised Principles deserve immediate attention. Data de-identification should be considered not as an optional privacy enhancement but as a foundational component of a responsible AI data pipeline.

The regulatory environment will continue to evolve. Organizations that build robust data governance practices now -- including the technical capacity to de-identify sensitive information before it enters AI workflows -- will be better positioned to adapt as national regulations catch up with the OECD's updated framework.

If you are building AI systems on sensitive data and want to understand how Limina's de-identification technology can support your compliance posture, speak with our team today.
