iSpeak Blog

AI in Pharma Regulation: Key Takeaways from the ISPE 2025 Global Regulatory Town Hall

Christopher Potter, PhD
ISPE 2025 Global Regulatory Town Hall

On the final day of the 2025 ISPE Annual Meeting & Expo, regulators and senior industry leaders convened for the Global Regulatory Town Hall—a long-standing and popular feature of the annual event. The discussion focused on the growing impact of digitalization and the increasing use of artificial intelligence (AI) by both regulators and industry. The session featured a mix of pre-prepared questions and live questions from the large audience.

Members of the panel were:

  • Tala Fakhouri, PhD, Vice President, Regulatory Consulting: AI and Digital Policy, Real-World Research, Parexel
  • Tina Kiang, PhD, Director of the Division of Regulation, Guidance and Standards, US Food and Drug Administration (US FDA)/OPQ/OPPQ (attended remotely)
  • Jeong Yeon Kim, PhD, Pharmaceutical Quality Management, Ministry of Food and Drug Safety, Republic of Korea
  • Ian Rees, Unit Manager Inspectorate Strategy and Innovation, Medicines and Healthcare products Regulatory Agency (MHRA)
  • Roger Nosal, Principal at Roger Nosal Pharma CMC Regulatory Consultants, LLC; Senior Vice President and Head, Regulatory Affairs and Quality Assurance, Vaxcyte
  • Kevin O'Donnell, PhD, Market Compliance Manager and Senior GMP Inspector, Health Products Regulatory Authority (HPRA) (attended remotely)

The session was moderated by Shanshan Liu, Technical Director, No Deviation Pte Ltd., and a member of the ISPE International Board of Directors; and Sarah Pope Miksinski, PhD, Executive Director, CMC Regulatory Affairs, Gilead Sciences, Inc., and a member of the ISPE International Board of Directors from 2022-2025.

To start the session and allow the panel members to introduce themselves, Miksinski asked, “Can you share a pivotal moment in your career?” Each panel member gave examples from their experience of topics important to them: decentralized manufacturing (Rees), collaboration (Nosal), communication and mutual understanding (Kim), innovation (Fakhouri), and taking a multi-disciplinary approach (Kiang).

Miksinski continued by asking Kim and Nosal, “What do you think are the next steps for global harmonization?” Kim responded that South Korea is a member of the Pharmaceutical Inspection Co-operation Scheme (PIC/S), whose mission is a single inspection. Sometimes, however, there are differences in interpretation, especially for high-tech products. More communication and understanding between agencies are required, with more use of mutual reliance. Nosal simply stated that the goal is mutual reliance between agencies for both review and inspection. Kiang also stressed the need for more communication and understanding, citing PIC/S and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) as examples. She indicated that the US FDA has shared draft guidances with the European Union (EU), and also pointed to recent EU GMP guides as another example. Kiang emphasized that one size does not fit all, requiring communication and collaboration with partners.

O’Donnell agreed that communication and engagement are essential. Harmonization should be at the level of key principles, with more use of mutual reliance built on trust and communication. Rees agreed, stressing that it is better if guidances are written at the principle level without too much detail. Speed is important since the technology is moving so quickly. Nosal recalled a recent workshop on AI and machine learning organized by the ISPE Germany/Austria/Switzerland (D/A/CH) Affiliate with senior regulators and industry leaders present. There was great opportunity for engagement, including on the increasing use of AI, for which case studies were shared, and the recently issued draft EU GMP Guide Annex 22 was discussed. This discussion led senior EU regulators to consider that the Annex 22 draft should perhaps look a little different. Industry appreciated the concerns of regulators, such as on data source transparency. The investment of time and energy in these types of workshops is of great value, and ISPE has a role in arranging similar workshops and engagements.

Transparency is important, with Fakhouri encouraging industry to publish its experiences and case studies, since regulators read these papers. In answer to a question on how to ensure transparency, Fakhouri continued that trust with patients must be built by explaining how AI tools and patient data are being used. Regulators and patients would like AI tools to be explained; however, there are no requirements regarding how this should be achieved.

In response to the question, “How can legislation match the pace of AI use?” Kim commented that all governments are investing in AI, so, again, communication is the key. South Korea hosted a World Health Organization (WHO) meeting on AI last September, at which many regulators, including Fakhouri, were present. PIC/S is working with the EU on Annex 22, which covers GMP inspection elements of AI. Legislation should be at the principles level, ensuring the safety and quality of medicines. Although the technology is moving quickly, regulators should work at their own pace, neither too slowly nor too quickly, to maintain patient trust. Industry and regulators should continue to communicate and exchange experiences as AI moves forward.

Building on this learning, Kiang explained that the US FDA could consider using AI in several regulatory applications, such as dossier submission review. For industry, Nosal gave the example of predictive modelling, which could be applied to quality by design (QbD) approaches. QbD studies using experimentation are very hard to design to achieve “edge of failure”; however, predictive modelling could be used to investigate response surfaces to the edge of failure.

In answer to a question regarding the use of a risk-based approach to AI applications, O’Donnell explained that there are considerations in the updated ICH Guideline on Quality Risk Management, Q9(R1), which are particularly applicable to evaluating the risks of AI. The risk assessment step in a quality risk management (QRM) process is important; however, the whole QRM process should be performed with consideration of different degrees of formality, and the degree of formality depends on:

  • Importance: the level of importance of the AI application, i.e., its intended use,
  • Uncertainty: the understandability, interpretability, and transparency of the application,
  • Complexity: consideration should be given both to the complexity of the model itself and to the architecture of development and routine application of a model; for example, there could be third-party developers and use of the cloud.

Using these concepts should assist, for example, with helping a team decide how much validation and verification a model requires, how to explain the model, and what level of human involvement is required. These activities are part of risk control and communication.

Management of subjectivity during all QRM steps is also important so that, for example, multi-disciplinary teams are involved, input from any opinionated expert is managed, and consideration is given to how risk-based decisions are made.

Nosal agreed with O’Donnell, explaining further that risk assessments should look holistically at risk to the patient in terms of safety and quality and not be restricted to risk to the next step. O’Donnell responded with reference to the EMA concept paper on AI from 2024, which referenced risk to the patient and risk to the regulatory process, i.e., how much information should be included in dossiers and how much in the pharmaceutical quality system (PQS).

An audience member asked, “Does anybody on the panel know a forum where government, regulators, and industry could come together to discuss AI, and if not, would it be beneficial to have one?” Fakhouri started by referring to a recent workshop where the US FDA and industry discussed AI. She also mentioned the US FDA’s listening sessions. The audience member asked whether lawmakers were present. There are difficulties with implementation of many ICH guidelines, where regulators agree on a global guideline and implementation then proves challenging due to local legislation, practice, and interpretation; ICH Q12, Lifecycle Management, is one good example. Fakhouri acknowledged that there were challenges in guidance development between vertical (within-region) discussions and horizontal (between-region) discussions.

Rees commented that the UK was careful in drafting legislation, since changing it is extremely difficult, whereas guidances can be changed relatively quickly. Hopefully, regulators can articulate what is required in guidances within the footprint of the legislation, and industry and regulators can align and move forward together. There is related legislation on, for example, data privacy, which could impact AI. Fakhouri added that complexity was even higher in the US, where individual states are legislating on AI transparency, and she did not know how to fix this. The audience member suggested that perhaps a discussion forum would be worthwhile to consider.

A further question from the audience concerned the impact on the pharma industry of the EU AI Act, signed off in August 2025, particularly its transparency requirements. Nosal acknowledged he was not a lawyer, so he could not give a definitive answer; however, he surmised that industry would have to adapt as more becomes known and understood. Fortunately, industry is relatively early in AI implementation and still learning and talking.

In answer to a question concerning the impact of legacy systems when making changes, such as adopting AI, O’Donnell indicated that a risk-based approach should be taken. The question had also referred to major regulatory change initiatives such as the update of the common technical dossier (M4Q(R2)), the International Coalition of Medicines Regulatory Authorities (ICMRA) Pharmaceutical Quality Knowledge Management System (PQKMS) project, and the potential introduction of structured product quality data. O’Donnell reminded the audience that the goal of ICMRA is one global dossier and that regulators were working diligently within these initiatives to facilitate innovation, with AI as an example. Similarly, EU GMP Annexes and guidelines are being updated to facilitate innovation, such as Annex 11, Computerised Systems, and Annex 22, AI.

Having guidances that are flexible and can be changed was a consistent message.

Nosal expressed a fear regarding the application of AI by regulators: there could be AI-generated responses and new regulatory requirements whose source data industry cannot verify, because companies do not see each other’s company-specific data. Discussion was ongoing regarding industry sharing pre-competitive data that could be used to develop tools that would help everybody; however, nothing has been agreed upon yet.

Liu asked, “How do we see AI shaping pharmaceutical manufacturing and our sustainability efforts?” Rees stated that AI has been used for some time by clinical colleagues, such as in the analysis of pharmacovigilance issues, some of which have had GMP implications. Different areas in agencies are, however, moving at different rates.

From an industry perspective, Nosal referred to the ISPE work on Enabling Global Pharmaceutical Innovation, in which AI is one opportunity, and stressed that for companies, return on investment (ROI) is paramount; an ISPE survey conducted over the last two years confirmed this. So, what are the barriers to implementation of innovation? Many innovations, including AI, are applicable to many products, which increases value. The key issues are:

  • What are the regulators going to accept?
  • Will the company implementation proposal be acceptable?
  • Can regulatory acceptance be predictable?
  • Can the implementation be globally acceptable?
  • What goes in a dossier and what goes in the pharmaceutical quality system (PQS)?

Nosal noted that this is the perfect time to have engagement and conversations to work through these issues.

Liu continued, “What are the challenges to validation of AI applications?”

Using her experience in the device (combination products) area of the US FDA, where software has been used for many years, Kiang stressed that the principles are the same, i.e., the application must meet its purpose under all conditions of its use. This requires exploration of boundaries. If there are anomalies during validation, the root cause must be established and removed or mitigated. Fakhouri pointed out that “context of use” is also important, since a checklist approach is not applicable to AI validation; one must consider how AI is being used and what is required of it. The principles, however, remain the same. From a GMP perspective, O’Donnell stressed that model performance over time is important, which requires long-term monitoring, because changes happen. Metrics for model performance, suitable for the purpose, are required. Model performance needs a little more guidance, and he thought the US FDA draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,” was very useful. Change management procedures within a company will almost certainly require modification to support changes to the AI model and associated procedures.

A final question was, “What role, do you think, should the patient and public play in shaping the regulatory approach to AI in pharma?” Fakhouri started by emphasizing that the patient is always at the core. Trust is paramount, which requires transparency, for example, as part of patient consent for clinical studies. Kim added that communication with patients is needed, with the patient feeling safe and protected. Nosal confirmed that transparency is important for maintaining the integrity of the industry; when there has been a lack of transparency, issues have occurred, which have led to more regulation and requirements. Rees said that the MHRA, in common with other agencies, talks to patients and patient groups to seek their feedback. There have been issues such as over-hyping and data being stolen; patients and the public need to be reassured.

Kiang stated that the US FDA’s mission is to promote and protect patient and public health. Risk has been mentioned constantly; however, the patient benefit should always be emphasized. One should constantly ask, “How is the use of AI producing benefit for the patient?”

To wrap up, panelists were asked to summarize their perspectives on AI:

  • Kiang: AI is a tool, and industry and regulators must think about where the human is required. Humans make decisions; even if an AI tool makes some form of decision, a human decision is required to assure how that process is managed.
  • O’Donnell: Caution is required: caution in how regulatory guidelines are developed and in how we assess fitness for use, with a considered risk-based approach to implementation of AI. We do need to keep moving forward, however.
  • Fakhouri: Use it! Don’t be scared.
  • Kim: AI will not replace human expertise but should amplify it.
  • Nosal: Ensure compatibility with how we work currently.
  • Rees: Engage with regulators. Keep AI moving and draft guidelines which can be easily updated. Do not hinder industry.

Miksinski thanked the panel, particularly O’Donnell, who joined remotely from Japan where it was late at night, and Kiang, who overcame some technical issues joining remotely from FDA headquarters.

The session was extremely informative, thanks to a transparent and engaged panel, and was well received by a large and enthusiastic audience.

Disclaimer

This is an informal summary of a panel discussion held on 29 October 2025 at the 2025 ISPE Annual Meeting & Expo in Charlotte, North Carolina, USA. It has not been vetted by any of the regulators or agencies mentioned in this article, nor should it be considered the official positions of any of the agencies mentioned.

