iSpeak Blog

AI, Quality, and Regulatory Insights: A Preview of ISPE’s Latest Podcast Episode with Tina Kiang, PhD, US Food and Drug Administration (US FDA)

ISPE

Artificial intelligence is no longer a future concept for pharmaceutical development and manufacturing—it is an active, rapidly evolving capability with real implications for quality, safety, and regulatory expectations. The latest ISPE podcast episode examines AI at the intersection of regulatory guidance and pharmaceutical innovation, featuring insights from two seasoned leaders: David Churchward, Head of Operations Quality Compliance and External Affairs at AstraZeneca, and Tina Kiang, PhD, Director of the Division of Regulations and Guidance in OPQ/OPPQ at the US FDA.

This conversation offers practical direction on how AI can deliver value across the product lifecycle while remaining firmly anchored in quality and compliance. It also clarifies how regulators are approaching AI-enabled processes today—and how guidance is evolving. Below is a preview of the themes covered in the episode.

Why This Conversation Matters

The dialogue between industry and regulators is essential as organizations evaluate where and how to deploy AI responsibly. The episode highlights that the promise of AI is significant, but so are the responsibilities that come with its use. The discussion underscores the importance of early and open engagement between sponsors and regulators, the need for human oversight, and the role of clear documentation and risk-based controls within established quality frameworks.

Meet the Guests

  • Tina Kiang (US FDA) brings more than two decades of regulatory experience across medical devices, software, AI policy development, and pharmaceutical quality. She currently contributes to CDER’s AI Council and OPQ’s AI-focused efforts, shaping cross-center perspectives on AI.
  • David Churchward (AstraZeneca) facilitates the conversation from an industry vantage point, focusing on the practicalities of integrating AI in development, manufacturing, and quality systems to benefit patients safely and efficiently.

How AI Has the Potential to Reshape the Drug Lifecycle

The episode walks through concrete use cases for AI across the entire product lifecycle, including:

  • Drug discovery and development: molecule selection, clinical trial design, and analysis of complex clinical datasets.
  • Manufacturing and quality: real-time process monitoring, parameter control, and faster investigation of deviations through pattern recognition across large datasets.
  • Post-market surveillance: analyzing signals, discovering patterns in adverse event data, and identifying potential drug–drug interactions that traditional approaches might miss.

The conversation frames AI as an advanced software tool—capable of speed and scale—but not a substitute for human judgment. Humans remain accountable for decisions informed by AI outputs.

US FDA’s Current Posture and Guidance Trajectory

The discussion emphasizes that existing regulations (e.g., 21 CFR Parts 210/211 for cGMP) are generally flexible enough to accommodate AI-enabled systems when supported by appropriate validation, oversight, and documentation. In parallel, US FDA has issued new guidance on AI in drug development (January 2025) and signaled upcoming guidance on AI/machine learning in quality and manufacturing from CDER.

Equally important is global convergence: the episode points to ongoing collaboration with regulatory partners such as the European Medicines Agency and the value of aligning expectations to support global supply chains and consistent quality outcomes.

What Reviewers Look For in AI-Enabled CMC Submissions

For CMC submissions that incorporate AI, the conversation highlights the familiar—but essential—elements that regulators expect to see:

  • Validation and credibility of the model relative to its context of use
  • Risk assessment and evidence that risks are adequately controlled
  • Data sufficiency to support reliability and performance claims

Sponsors are encouraged to engage early and often with US FDA through pathways such as the Emerging Technology Program (and with CBER’s CATT program when applicable). Early dialogue helps both sides clarify expectations and ensure efficient, well-documented integration.

Inspections and GMP: AI Output, Human Decisions

A key reframing discussed in the episode: AI does not make GMP decisions—people do. AI produces outputs that inform human decision-making within a quality management system. As a result, inspection expectations remain grounded in familiar principles:

  • Clear documentation of how AI outputs are generated, reviewed, and used
  • Defined human oversight and accountability within the quality unit
  • Alignment to existing quality frameworks, including established controls, records, and review processes

With this framing, AI-enabled systems can be examined using the same core GMP principles that apply to more traditional software or process models.

Lifecycle Management of AI Models

The episode outlines a pragmatic approach to AI model lifecycle management:

  • Define boundary conditions early and establish routine check-ins (e.g., time-based cadence or performance-based triggers).
  • Implement drift detection and pre-specified reassessment triggers for when outputs approach limits or deviate from expectations.
  • Apply risk-based design and control aligned with the principles in ICH Q8–Q10, and manage changes under ICH Q12, determining which changes require internal approval versus regulatory notification.
  • Do not wait for product failures to initiate review; proactive monitoring is central to responsible AI use.
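The monitoring approach described above can be sketched in a few lines of code. The following is a minimal illustration, not anything discussed in the episode: it assumes a simple mean-shift test as the drift check and a 90-day cadence as the time-based trigger, both of which are illustrative assumptions a real quality system would replace with pre-specified, validated criteria.

```python
from statistics import mean, stdev

# Illustrative trigger values only; a real system would pre-specify
# and justify these within its quality framework.
DRIFT_Z_THRESHOLD = 3.0
REVIEW_CADENCE_DAYS = 90

def check_drift(reference: list[float], recent: list[float],
                z_threshold: float = DRIFT_Z_THRESHOLD) -> bool:
    """Flag drift when the mean of recent model outputs shifts more
    than z_threshold standard errors from the reference-period mean.
    Deliberately simple; richer statistical tests exist."""
    ref_mean = mean(reference)
    std_err = stdev(reference) / (len(recent) ** 0.5)
    z_score = abs(mean(recent) - ref_mean) / std_err
    return z_score > z_threshold

def needs_reassessment(days_since_review: int,
                       reference: list[float],
                       recent: list[float],
                       cadence_days: int = REVIEW_CADENCE_DAYS) -> bool:
    """Pre-specified trigger: time-based cadence OR detected drift."""
    return (days_since_review >= cadence_days
            or check_drift(reference, recent))
```

The key design point mirrors the episode's guidance: the triggers are defined up front, and review happens proactively on a cadence or a drift signal, not only after a product failure.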

Data Quality, Bias, and the Human-in-the-Loop

The conversation reiterates that AI model performance depends on data quality. Training and validation datasets must be:

  • Representative of the intended context
  • Managed under robust governance and lineage controls
  • Designed to mitigate bias and reduce the potential for hallucinations or misinterpretation

Maintaining a human-in-the-loop is emphasized to ensure outputs remain fit for purpose, with roles, frequencies, and responsibilities predefined for ongoing oversight.

How US FDA Is Using AI—With Guardrails

The episode provides a look inside US FDA’s own experimentation with AI tools, such as Elsa, built with retrieval-augmented generation libraries and guardrails to reduce hallucinations. Even with such safeguards, human expertise remains decisive: staff are trained to evaluate AI outputs, refine prompts, and ensure that any use is consistent with policy, quality, and confidentiality requirements.

Looking Ahead: Progress, Practicality, and Responsible Adoption

The future of AI in regulatory science spans advanced and continuous manufacturing, signal detection, real-world evidence, and new ways to visualize and interpret complex datasets. The episode takes a balanced view: while AI can unlock earlier insights and more efficient operations, it also brings computational costs, environmental considerations, and the need for benefit-risk assessments that separate meaningful gains from “shiny object” deployments.

Ultimately, the path forward blends innovation with discipline: strong quality culture, transparent validation, appropriate human oversight, and ongoing collaboration between industry and regulators.

Who Will Benefit from This Episode

  • Development and clinical teams exploring model-based design and data analysis
  • Manufacturing and quality leaders integrating AI into process monitoring and deviation investigations
  • Regulatory and CMC professionals preparing AI-enabled submissions and inspection readiness
  • Pharmacovigilance and safety teams interested in post-market signal detection and real-world data

Listen to the Full Conversation

This episode demystifies how regulators view AI today, clarifies where guidance is headed, and offers practical signals for responsible adoption. For professionals across development, manufacturing, quality, and regulatory affairs, it provides a timely, grounded perspective on how to realize AI’s value while managing risk.

Listen to the full podcast episode

