iSpeak Blog

Pharma 4.0™ and Artificial Intelligence (AI): Unlocking Innovation and Driving Transformation

Alice Redmond, PhD

The pharmaceutical industry is undergoing a technological revolution, driven by digital transformation and emerging technologies such as AI/machine learning (ML). These innovations are being applied across the entire product lifecycle, from research and clinical development to manufacturing, quality assurance, supply chain, and patient support. A critical focus area is the GMP-regulated segment of the supply chain. An industry-regulator panel at the 2025 ISPE Pharma 4.0™ Conference discussed the opportunities and limitations of AI/ML in this context, addressed key challenges, and explored how to manage them without compromising the safety and efficacy of medicines, while still fostering innovation in this rapidly evolving sector.

The panelists included:

  • Ib Alstrup, Medicines Inspector, GxP IT, Danish Medicines Agency
  • Ronald Bauer, PhD, Head of Institute Surveillance, Austrian Agency for Health and Food Safety
  • Michelangelo Canzoneri, PhD, Global Head of Group Smart Manufacturing, Merck KGaA Darmstadt, Germany
  • Jakob Joachim, Digital Strategy Delivery Lead, Roche
  • Manuel Ibarra Lorente, Head of Pharmaceutical Inspection and Enforcement Department, Spanish Agency of Medicines and Healthcare Products
  • Lineke Pelleboer, Director of Business Development, Batavia Biosciences
  • Markus Zeitz, PhD, Head of Digital Quality, Takeda

The discussion was moderated by Teresa Minero, Strategic Advisor, LifeBee, a ProductLifeGroup Company.

Regulatory Landscape, Expectations, and AI Governance

The panel opened with a deep dive into how regulators are approaching AI within the pharmaceutical ecosystem, informed by a recent joint European Medicines Agency (EMA) and Heads of Medicines Agencies meeting. Regulatory agencies confirmed that AI is already in routine operational use, supporting regulatory intelligence, visual inspection, and early signal detection, underscoring the technology's growing footprint across the product lifecycle. At the same time, regulators reiterated that the forthcoming AI Act will function as an additional layer of compliance, sitting atop existing frameworks. Across agencies, a consistent set of expectations emerged: interpretability, explainability, traceability, accountability, and transparency. Yet regulators emphasized that they are "followers, not leaders" in digital transformation and have not yet inspected high-risk AI used in batch-release decisions, illustrating slow uptake and industry uncertainty. The overarching message remained clear: regulators do not regulate technology itself, but rather the risks it introduces.

AI Maturity, Organizational Readiness, and Digital Culture

Despite enthusiasm for digital transformation, the ISPE Pharma 4.0™ survey shows that nearly 30 percent of companies remain stuck in pilot mode, reflecting a maturity gap across the industry. A lack of compelling business cases emerged as a critical barrier; many organizations attempt AI initiatives without grounding them in clear value drivers. Companies also frequently deploy technology without involving business leaders, resulting in underused systems and stalled progress. Successful digital transformation, the panel argued, requires coordinated investment in purpose, leadership sponsorship, cross-functional integration, and new digital skillsets. Cultural barriers, fear of failure, unclear ownership, and fragmented data continue to limit scalability. Yet organizations further along the maturity curve are beginning to see benefits in predictability, speed, and reusability of digital assets.

AI Use Cases, Intended Use, and Model Lifecycle Management

When discussing practical AI use cases, panelists stressed the need for a well-defined intended use, human-in-the-loop roles, acceptance criteria, and version governance. AI models must be validated and monitored like process equipment, complete with independent test sets and robust performance metrics. High-risk AI applications require strict controls for sensitivity, specificity, and accuracy. While companies are observing significant operational improvements, AI also introduces challenges, such as maintaining explainability in probabilistic models and managing black-box vendor tools that complicate compliance for Marketing Authorization Holders (MAHs). Generative AI, meanwhile, is finding fast adoption in documentation support, deviation summarization, and communication efficiency, delivering practical benefits with lower regulatory risk.

Industry Challenges: Data, Validation, Resources & Cybersecurity

Data integrity and data management surfaced as foundational yet weak points for most organizations. High-maturity companies face resource shortages and complex validation demands, while low-maturity organizations struggle with fragmented sources of truth, poor master data quality, and inadequate feedback loops. Cybersecurity emerged as a growing threat, with panelists citing real-world examples of manufacturing sites disrupted by cyberattacks, often unreported due to reputational concerns. As pharma increasingly becomes a target, regulators urged the industry to treat manufacturing as critical infrastructure. Misalignment between IT, quality, and regulatory expectations further complicates digital advancement. Regulators also emphasized the need to learn together with industry as digital transformation progresses.

What Regulators Need from Industry

Regulators favored earlier dialogue, shared sandboxes, and stronger collaboration with industry to shape governance frameworks. Supplier qualification remains non-negotiable: regulators expect full transparency and validated capabilities regardless of vendor size. They also emphasized the need for a shared language across regulators and industry, as well as stronger internal competence to evaluate AI outcomes. Importantly, accountability for AI-enabled decisions cannot be outsourced; MAHs remain responsible for proper oversight.

Closing Reflections

The panel concluded with a call for collective action. Digital transformation must engage the whole organization, not just digital pioneers. Both regulators and industry representatives highlighted the importance of transparency, collaboration, and continuous dialogue. The message was unanimous: progress depends on shared learning, openness, and the courage to move beyond pilots and scale AI responsibly for patients' benefit.

DISCLAIMER

This is an informal summary of a panel discussion held on 10 December at the 2025 ISPE Pharma 4.0™ Conference in Barcelona, Spain. It has not been vetted by any of the regulators or agencies mentioned in this article, nor should it be considered the official positions of any of the agencies mentioned.




