Artificial intelligence (AI) is moving from experimental use to operational deployment across the drug development lifecycle. As regulators converge on principles rather than prescriptive rules, organizations that fail to invest in AI governance risk being unable to scale AI, even when the technology performs as intended. In regulated environments, sustainable value depends on trust among regulators, sponsors, investigators, and patients.

That imperative underpins the Guiding Principles of Good AI Practice in Drug Development, jointly issued in January 2026 by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA).[i],[ii] Rather than mandating specific technologies, the principles articulate a shared regulatory philosophy: AI systems that influence decisions affecting drug quality, safety, or efficacy must be governed with the same rigor as any other component of regulated evidence generation.


For sponsors, contract research organizations (CROs), and investigators, AI readiness is the capability to deploy AI responsibly, aligning ethical principles, scientific rigor, and operational excellence to support regulatory confidence.

[i] U.S. Food and Drug Administration. (2026). Guiding principles of good AI practice in drug development. FDA.

[ii] European Medicines Agency. (2026). EMA and FDA set common principles for AI in medicine development.

Why Regulators Led with Principles, Not Prescriptions

The FDA–EMA collaboration is notable for its content and its intent. By aligning early on a common set of principles, regulators are signaling convergence across jurisdictions even as formal guidance, standards, and inspection practices continue to evolve. This approach mirrors earlier regulatory responses to cloud computing and electronic data capture: establish guardrails first, then allow innovation to mature within them.

The FDA–EMA document frames AI governance as a systems-level challenge rather than a software problem. The focus is on how AI-enabled systems are designed, validated, monitored, and embedded into regulated decision-making processes.

The FDA–EMA 10 Guiding Principles as a Common Language for AI

The FDA and EMA defined 10 guiding principles that serve as an integrated framework of expectations for AI in drug development. The principles are:

  • Human-centric by design
  • Risk-based approach
  • Transparency
  • Data quality and integrity
  • Continuous monitoring
  • Lifecycle management
  • Accountability
  • Defined context of use
  • Benefit–risk orientation
  • Documentation and traceability

Taken together, these principles define a governance philosophy. Viewed through an execution lens, they group into four operating imperatives that leaders can use to evaluate AI investments, vendors, and capabilities across the drug development lifecycle.

From Principles to Execution Through Four Imperatives for AI Governance

Table 1 groups the 10 principles into four execution-oriented imperatives that reflect how AI is deployed, governed, and inspected in practice.

Table 1. Translating AI Principles into Imperatives

  Imperative                                            Guiding principles
  1. Anchor AI in people and purpose                    Human-centric by design; defined context of use; benefit–risk orientation
  2. Build quality, security, and integrity by design   Risk-based approach; data quality and integrity
  3. Operationalize transparency and accountability     Transparency; accountability; documentation and traceability
  4. Continuously monitor and adapt                     Continuous monitoring; lifecycle management

The sections below discuss each imperative in more detail.


  1. Anchor AI in people and purpose

The first imperative reflects several foundational principles, including human-centric design, defined context of use, and benefit–risk orientation. Together, they establish a clear expectation that AI does not replace human accountability. In practice, anchoring AI in people and purpose requires clarity on three levels:

  • Who is affected by AI-informed decisions, including trial participants, patients, investigators, and safety reviewers
  • Where human oversight applies across the workflow, from data inputs to final decision-making
  • What decisions are influenced, and the consequences if outputs are incorrect or misinterpreted

Regulators expect organizations to define why an AI system is used, what role it plays, and what it is not intended to do. Industry experience reinforces this point. Many sponsors are cautious about AI adoption due to uncertainty about data ownership, intellectual property, and loss of control. An articulated context of use, paired with transparent data protection and governance practices, is a prerequisite for success.
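
One way to make this expectation concrete is to capture the context of use as a structured, version-controlled artifact rather than prose scattered across a validation plan. The minimal Python sketch below illustrates the idea; the class, field names, and example values are illustrative assumptions, not a regulatory template.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ContextOfUse:
        """A reviewable declaration of what an AI system is for, and not for."""
        system_name: str
        intended_use: str            # why the system is used and what role it plays
        out_of_scope: list[str]      # what the system is explicitly not intended to do
        affected_parties: list[str]  # who is affected by AI-informed decisions
        oversight_points: list[str]  # where human review applies in the workflow
        decision_impact: str         # consequence if outputs are wrong or misread

    # Hypothetical example for an adverse-event triage assistant.
    cou = ContextOfUse(
        system_name="ae-intake-triage",
        intended_use="Suggest a priority ranking for incoming adverse-event cases",
        out_of_scope=["Autonomous case closure", "Causality assessment"],
        affected_parties=["Trial participants", "Safety reviewers", "Investigators"],
        oversight_points=["A safety reviewer confirms every priority before action"],
        decision_impact="A missed high-priority case could delay safety follow-up",
    )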


  2. Build quality, security, and integrity by design

Not all AI applications carry the same regulatory or patient impact. FDA and EMA emphasize a risk-based approach, calling for proportional validation, oversight, and mitigation based on context of use and risk.

This approach aligns with broader regulatory trends, including ICH E6(R3), which reinforces proportionality in clinical quality management.[i] Low-risk applications may warrant lighter controls, while systems that influence safety reporting, dose decisions, or benefit–risk assessments demand heightened scrutiny.
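
As a simple illustration of proportionality, the sketch below maps two context-of-use questions to an oversight tier and its associated controls. The tiers, criteria, and control sets are assumptions made for illustration; they are not categories defined by the FDA or EMA.

    def risk_tier(influences_safety: bool, human_confirms_output: bool) -> str:
        """Assign an oversight tier from context of use, not model complexity."""
        if influences_safety and not human_confirms_output:
            return "high"    # e.g., autonomous influence on safety reporting
        if influences_safety:
            return "medium"  # safety-relevant, but a human confirms each output
        return "low"         # operational convenience with no regulated impact

    CONTROLS = {
        "high": ["full validation", "dual human review", "continuous monitoring"],
        "medium": ["targeted validation", "human-in-the-loop review", "periodic audit"],
        "low": ["baseline testing", "spot checks"],
    }

    tier = risk_tier(influences_safety=True, human_confirms_output=True)
    print(tier, CONTROLS[tier])  # -> medium [...]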

Also important is risk-based performance assessment. Regulators expect organizations to evaluate the full socio-technical system, including how humans interact with AI outputs in real-world conditions. Vendor benchmarks and laboratory metrics are insufficient on their own, and performance must be demonstrated in the environments where regulated decisions are made.


  3. Operationalize transparency and accountability

Opacity is the enemy of credibility. The guiding principles place strong emphasis on transparency, accountability, and documentation, requiring traceable records of data provenance, processing steps, and analytical decisions, consistent with GxP expectations and supported by clear explainability.

Many AI initiatives falter because organizations cannot reconstruct how outputs were generated or justify their reliability over time. Robust documentation, auditability, and version control are therefore enablers of scale.
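
To illustrate what such traceability can look like in practice, the Python sketch below chains each audit record to its predecessor with a hash, so later alteration of any record is detectable. The field names and chaining scheme are illustrative assumptions, not a GxP-mandated format.

    import hashlib
    import json
    from datetime import datetime, timezone

    def audit_record(prev_hash: str, model_version: str, input_ref: str,
                     output_summary: str) -> dict:
        """Create one traceable, hash-chained record of how an output was produced."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,    # exact model and version that ran
            "input_ref": input_ref,            # provenance pointer to source data
            "output_summary": output_summary,  # what the system produced
            "prev_hash": prev_hash,            # links records into an ordered chain
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        return record

    first = audit_record("GENESIS", "triage-model v1.4.2", "case-batch-0097",
                         "12 cases ranked; 2 flagged high priority")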

The principles also call for coherence across legal, ethical, technical, cybersecurity, and regulatory domains. While AI-specific standards continue to mature, organizations can align with established frameworks to create an integrated and defensible governance posture.[ii],[iii]


  4. Continuously monitor and adapt

Perhaps the most consequential insight from the FDA–EMA principles is that AI is a managed capability whose risk profile evolves over time. AI systems must therefore be governed across their full lifecycle.

Effective lifecycle management spans:

  • Model design and development using fit-for-purpose data and sound software engineering
  • Ongoing monitoring and change control, including detection of data drift and performance degradation (a minimal drift check is sketched after this list)
  • Transparent communication using clear, plain language to explain performance, limitations, and appropriate use
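
As one concrete example, the sketch below implements a population stability index (PSI) check that compares production inputs against the validation baseline. The bin count and the 0.2 alert threshold are widely used conventions, applied here as illustrative assumptions rather than regulatory requirements.

    import numpy as np

    def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
        """Population stability index: how far current data drifted from baseline."""
        edges = np.histogram_bin_edges(reference, bins=bins)
        ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
        cur_pct = np.histogram(current, bins=edges)[0] / len(current)
        ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log-of-zero on empty bins
        cur_pct = np.clip(cur_pct, 1e-6, None)
        return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 5000)  # scores seen during validation
    current = rng.normal(0.3, 1.0, 5000)   # scores seen in production this month
    if psi(baseline, current) > 0.2:       # a widely cited "significant shift" cutoff
        print("Drift detected: route the model to change control for review")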

Global frameworks reinforce this lifecycle view, emphasizing sustained stewardship, explainability, and human oversight as conditions for long-term trust.[iv],[v]


[i] International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. (2025). ICH harmonised guideline: Guideline for Good Clinical Practice E6(R3).

[ii] International Organization for Standardization. (2023). ISO/IEC 23894: Artificial intelligence—Guidance on risk management.

[iii] National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce.

[iv] Organisation for Economic Co-operation and Development. (2019). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449).

[v] World Health Organization. (2021). Ethics and governance of artificial intelligence for health.

A Useful Analogy: AI Today Is Where Cloud Computing Once Was

Industry hesitation around technology has precedent. A decade ago, sponsors expressed similar concerns about cloud computing: loss of control over data, uncertainty around security and compliance, and unclear accountability and auditability, all feeding a fear that regulators would not accept cloud-based systems for regulated use.

In practice, the opposite occurred. As governance frameworks matured, cloud providers implemented controls that often exceeded those of on-premises infrastructure, and regulators adapted inspection models as needed.

AI appears to be following a comparable trajectory in regulated life sciences. Initial resistance is driven more by unfamiliarity with new control mechanisms than by evidence of harm. As with cloud adoption, competitive advantage will accrue to organizations that invest in governance, vendor oversight, and audit readiness.


What This Means for Sponsors and CROs

Consider a common pharmacovigilance example: AI systems are used to support case intake. When outputs influence prioritization, follow-up timing, or escalation decisions, however, the downstream implications for patient safety elevate validation and oversight expectations. This dynamic illustrates why context of use, not algorithmic complexity, must anchor risk assessment, as the sketch below makes concrete.
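
A simplified sketch of that gating logic follows: the AI may suggest a priority, but any safety-relevant suggestion is routed to a human reviewer rather than acted on automatically. The threshold and routing labels are illustrative assumptions.

    def route_case(ai_priority_score: float, review_threshold: float = 0.8) -> str:
        """Decide how an AI triage suggestion is handled downstream."""
        if ai_priority_score >= review_threshold:
            # A high suggested priority influences escalation timing,
            # so a human reviewer must confirm it before anything happens.
            return "queue_for_human_review"
        # Lower-impact suggestions stay advisory and auditable, never final.
        return "suggest_only"

    print(route_case(0.91))  # -> queue_for_human_review
    print(route_case(0.35))  # -> suggest_only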

Taken together, the guiding principles suggest a pragmatic readiness pathway:

  • Define context of use and decision impact
  • Classify AI risk and align oversight proportionately
  • Establish robust data governance and documentation
  • Validate performance in human-in-the-loop environments
  • Implement lifecycle monitoring and change management
  • Communicate transparently with regulators and clients


Key Takeaway

The FDA–EMA guiding principles provide a durable framework for aligning AI-driven innovation with the regulatory mandate to protect patients while advancing scientific progress. Grounded in these principles, the Ergomed Group partners with sponsors to translate AI governance into operational advantage, helping ensure AI-enabled processes are efficient, trusted, and regulator-ready.

Contact Ergomed to learn how disciplined governance can accelerate AI adoption across your clinical development programs.