iSpeak Blog

Highlights from the Validation and GAMP® Track at ISPE AI in Life Sciences Summit – Powered by GAMP

Frank Henrichmann
Brandi Stockton

The life sciences industry stands at a pivotal moment. Artificial intelligence (AI) and generative models are rapidly moving from experimental environments into GxP-regulated workflows. At the same time, regulatory expectations, including those reflected in the EU Annex 22 Draft and recent US Food and Drug Administration guidance, are evolving, with increased scrutiny on data integrity, model transparency, lifecycle governance, and ongoing state of control.

As part of the 2026 ISPE AI in Life Sciences Summit – Powered by GAMP®, the Validation and GAMP track, led by the Chair of the GAMP Global Steering Committee, Frank Henrichmann (QFINITY), and the Secretary of the GAMP Global Steering Committee, Brandi Stockton (The Triality Group), provides a structured, practical pathway for organizations navigating this transition. Grounded in the principles of the ISPE GAMP framework and aligned with the ISPE GAMP® Guide: Artificial Intelligence, released in July 2025, this track focuses on the validation of AI-enabled computerized systems, translating innovation into inspection-ready, compliant implementation.

The track opens with Gourav Pandey from Takeda, who will explore “From Generative AI to Validation Feasibility.” His session introduces an “Audit Intelligence” architecture that stitches together standard operating procedure changes, corrective and preventive action trends, regulatory updates, and supplier histories using retrieval-augmented generation (RAG)-powered agents. He maps these AI agents to GAMP 5 controls—user requirements specifications, guardrails, and synthetic validation artifacts—showing a path to evaluating AI in audit preparation without breaking GxP rules.

Next, Luca Zanotti Fragonara, PhD, and Robert Stoop, PhD, both from PQE Group, follow with “Evaluation Framework for AI Models in Pharmacovigilance.” They lay out a GAMP-aligned rubric covering accuracy, recall, hallucination rate, and explainability (using SHAP and attention tracing). Attendees receive practical templates for model drift monitoring, bias detection, and audit trail generation—tools that make AI-driven safety signal detection defensible to regulators.
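To make the rubric concrete, here is a minimal, hypothetical sketch (not taken from the session materials) of how metrics like accuracy, recall, and hallucination rate might be scored against an adjudicated gold-standard set; all function and variable names are illustrative assumptions.

```python
# Hypothetical sketch: scoring an AI safety-signal classifier against an
# adjudicated "gold" label set, using the kinds of metrics the rubric names.
# Thresholds, names, and data are illustrative, not from the session.

def evaluate(predictions, gold, unsupported_flags):
    """predictions / gold: lists of 0/1 safety-signal labels;
    unsupported_flags: 1 where the model's rationale cited no source."""
    tp = sum(1 for p, g in zip(predictions, gold) if p == 1 and g == 1)
    fn = sum(1 for p, g in zip(predictions, gold) if p == 0 and g == 1)
    correct = sum(1 for p, g in zip(predictions, gold) if p == g)
    return {
        "accuracy": correct / len(gold),
        "recall": tp / (tp + fn) if (tp + fn) else None,
        "hallucination_rate": sum(unsupported_flags) / len(unsupported_flags),
    }

# Toy run: 5 cases, one missed signal, one unsupported rationale.
metrics = evaluate([1, 1, 0, 1, 0], [1, 0, 0, 1, 1], [0, 1, 0, 0, 0])
```

In a validated setting, each metric would carry a pre-approved acceptance criterion and the results would feed the audit trail rather than a console printout.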

Elliot Abreu of Catalyx shares “The AI Advantage in Regulated Manufacturing.” He explains how to meet 21 CFR Part 11 requirements in AI-enabled systems, balances performance gains with risk-based controls, and provides an adoption checklist for production lines that ensures compliance while unlocking efficiency.

Rose Mary Aversa of ProQuality Network, Kathy Zielinskis of Roche, and Matthew McMenamin of The Sentinel Consulting Group will discuss “Building Inspection Readiness for AI Driven Systems.” The presentation and discussion yield a structured playbook for inspection readiness when deploying AI in good manufacturing practice environments, supported by short case studies that illustrate how these concepts are applied. Participants learn the language of auditors and how to embed AI oversight into the quality management system.

Srividya Narayanan from Freyr Solutions presents a six-step methodology in her presentation, “Step by Step AI Risk Assessment: A Practical Guide.” Her methodology fuses the International Medical Device Regulators Forum’s risk classification with GAMP 5 lifecycle principles, covering scope definition; risk categorization; granular analysis of data, algorithm, and process components; targeted mitigation controls; continuous monitoring; and drift management. Real-world case studies illustrate how the framework addresses model drift, algorithmic bias, and evolving regulatory expectations.

Nikolai Makaranka, Founder and CEO of Daikon, follows with “Practical Guide to Measuring Large Language Models (LLMs).” He explains why LLMs pose a challenge to traditional computer system validation and introduces metrics that matter for GxP (e.g., factual consistency, hallucination rate, prompt sensitivity, and robustness). He then outlines an evidence-driven framework for generative AI validation that aligns with the proposed Annex 22 provisions, showing how to bring scientific rigor to stochastic AI outputs.
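One of those metrics, prompt sensitivity, can be illustrated with a small sketch: ask semantically equivalent prompts and measure how often answers deviate from the majority. This is an assumed illustration, not the speaker's framework; `ask_model` is a stand-in for a real LLM call, and the canned prompts and answers are invented.

```python
# Hypothetical prompt-sensitivity check: semantically equivalent prompts
# should yield the same answer. Score 0.0 = fully robust, 1.0 = unstable.
from collections import Counter

def prompt_sensitivity(ask_model, paraphrases):
    """Fraction of paraphrased prompts whose answer deviates from the
    majority answer across the set."""
    answers = [ask_model(p) for p in paraphrases]
    _, majority_count = Counter(answers).most_common(1)[0]
    return 1 - majority_count / len(answers)

# Stub "model": a lookup table that answers two phrasings consistently
# and drifts on the third.
canned = {
    "What is the maximum batch size?": "500 L",
    "State the batch size limit.": "500 L",
    "How large can a batch be?": "520 L",
}
score = prompt_sensitivity(canned.get, list(canned))
```

The same pattern generalizes: fix the inputs, repeat or paraphrase them, and turn the spread of outputs into a number with a pre-agreed acceptance limit, which is how stochastic behavior becomes auditable evidence.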

The day closes with a brief wrap-up and networking session, giving participants a chance to discuss ideas with the speakers and fellow attendees.

Attendees will gain actionable approaches and real-world case studies designed to bridge the gap between visionary AI concepts and the controls required to demonstrate fitness for intended use. From risk assessment through lifecycle management and continuous monitoring, the sessions reflect the latest GAMP guidance and emerging AI validation expectations—equipping participants to move forward with clarity, confidence, and compliance.

This track is essential for:

  • Validation engineers and computer system validation specialists who need to extend traditional validation practices to AI/LLM-based systems
  • Quality and compliance leaders seeking a risk-based view of AI adoption
  • Pharmacovigilance teams looking for a GAMP-aligned framework for AI-driven safety signal detection
  • Manufacturing managers wanting practical guidance on integrating AI into production lines bound by 21 CFR Part 11
  • Regulatory affairs professionals needing up-to-date insight into agency interpretations of AI under existing GxP guidelines
  • Data scientists and AI engineers who want clear expectations for model qualification and documentation
  • Senior executives who need solid business case evidence of return on investment, risk mitigation, and competitive advantages

Register now for the 2026 ISPE AI in Life Sciences Summit – Powered by GAMP, secure your seat in the Validation and GAMP track, and join the conversation that will shape the next decade of regulated AI in life sciences.

Learn more and register today
