Features
September / October 2023

New EU AI Regulation and GAMP® 5

Anders Vidstrup

This article describes how ISPE GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition) and related GAMP Good Practice Guides can be effectively applied to help meet the requirements of the proposed European Union (EU) artificial intelligence (AI) regulation for qualifying GxP-regulated systems employing AI and machine learning (ML).

On 21 April 2021, the EU Commission presented the long-awaited draft on the regulation of AI. The document is based on a number of reports from the EU Commission and aims to ensure citizens’ trust in AI systems. The regulation is the first targeted legal regulation of AI. As such, it will have great significance in Europe and the rest of the world in relation to the development and use of AI. The AI regulation applies alongside the General Data Protection Regulation (GDPR), as systems must comply with both, e.g., when using personal data for training algorithms or when using AI systems for automatic decisions with legal effect for the data subjects1.

At the discretion of the organizations involved, the GAMP guidance may also prove useful for other areas and industries in supporting the quality assurance activities and methods described in the draft regulation. GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition) covers AI/ML components and their life cycles2, and the GAMP® RDI Good Practice Guide: Data Integrity by Design covers the data life cycle aspects of such systems, which can help ensure data integrity, a key requirement for these types of applications3.

Description of the AI Regulation1

The AI regulation contains four types of regulations:

  1. Prohibition of the use of certain AI systems (Article 5)
  2. Special requirements for the use of AI systems that are considered to present a high risk (Articles 6–51)
  3. Transparency requirements for AI systems interacting with humans (Article 52)
  4. A framework for voluntary “codes of conduct” for AI systems that are not high-risk systems (Article 69)

Prohibited AI systems are those that harm people physically or psychologically through subliminal techniques or by exploiting vulnerabilities, that implement “social scoring” by monitoring citizens, and that use certain forms of facial recognition/personal recognition.

High-Risk Systems

The focus of the AI regulation is high-risk systems, which are defined as AI systems used within eight areas:

  1. Biometric identification and categorization of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management, and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum, and border control management
  8. Administration of justice and democratic processes

For management and operation of critical infrastructure, this includes AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating, and electricity. For employment and worker management, this includes AI systems intended to be used for the recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests. For access to private and public services, this includes AI systems intended to be used to dispatch or to establish priority in the dispatching of emergency first response services, including by firefighters and those administering medical aid.

For these systems, for example, a risk management system and a quality assurance system must be established, along with requirements for human involvement, transparency, robustness, cybersecurity, and correctness. This is where GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition)2 and the GAMP® RDI Good Practice Guide: Data Integrity by Design3 can be useful. The GAMP® 5 framework and other GAMP guides already contain strong and mature guidance on establishing quality assurance systems and risk management systems, and on ensuring the integrity of data, which is essential for robustness and correctness.

Appendices

Appendix D11 in GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition) focuses on AI/ML2. It provides a basic understanding of AI, the use of static and dynamic ML subsystems in industry, and guidance on how to ensure compliant integration and fitness for use in a regulated environment. It also presents an overview of a risk-based, regulatory-compliant AI/ML life cycle framework that aligns with GAMP® 5 principles and phases (concept, project, and operation).

It describes the importance of data integrity to the overall quality of AI/ML, presents an understanding of the inherent risks, and acknowledges the iterative nature of developing AI/ML as a subsystem within the overarching IT application and/or business solution, all in conjunction with, and in support of, good software quality engineering practices.
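To make the data integrity point concrete, the following is a minimal sketch of the kind of automated gate that might be applied to a training dataset before it is used to build an ML subsystem. It is illustrative only: the column names, plausibility range, file name, and pandas-based approach are assumptions, not content taken from the GAMP guidance or the draft regulation.

```python
# Illustrative pre-training data integrity gate (not from GAMP 5 or the EU AI regulation).
# Column names, ranges, and the CSV source are hypothetical assumptions for this sketch.
import pandas as pd

REQUIRED_COLUMNS = {"sample_id", "measurement", "label"}   # hypothetical schema
MEASUREMENT_RANGE = (0.0, 100.0)                           # hypothetical plausible range

def check_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data integrity findings; an empty list means the dataset passes this gate."""
    findings = []
    missing_cols = REQUIRED_COLUMNS - set(df.columns)
    if missing_cols:
        findings.append(f"missing columns: {sorted(missing_cols)}")
        return findings  # cannot check further without the expected schema
    if df["sample_id"].duplicated().any():
        findings.append("duplicate sample_id values (possible double entry)")
    if df[["measurement", "label"]].isna().any().any():
        findings.append("missing values in measurement or label")
    out_of_range = int((~df["measurement"].between(*MEASUREMENT_RANGE)).sum())
    if out_of_range:
        findings.append(f"{out_of_range} measurements outside plausible range {MEASUREMENT_RANGE}")
    return findings

if __name__ == "__main__":
    data = pd.read_csv("training_data.csv")   # hypothetical dataset
    issues = check_training_data(data)
    if issues:
        raise SystemExit("Data integrity gate failed: " + "; ".join(issues))
    print("Data integrity gate passed; dataset may proceed to model training.")
```

In practice such a check would be specified, reviewed, and its outcome recorded as part of the documented data life cycle rather than run ad hoc.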

Appendix S1 in GAMP® RDI Good Practice Guide: Data Integrity by Design3 examines the area of ML and the importance and implications of data integrity on the outcomes of what “machines” are able to process and/or learn from the data made available to them. Both Appendix D11 and Appendix S1 describe a life cycle approach, from concept to project (i.e., data modeling and evaluation) and operation, including deployment and continuous monitoring.
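As an illustration of the continuous monitoring step in the operation phase, the sketch below compares a deployed static model’s recent accuracy against the level demonstrated during validation and flags when it falls below an agreed limit. This is only a hedged example; the baseline value, alert margin, and review-window mechanics are assumptions rather than anything prescribed by Appendix D11 or Appendix S1.

```python
# Illustrative post-deployment performance monitor for a static ML subsystem.
# The baseline accuracy, alert margin, and data feed are assumptions for this sketch.
from dataclasses import dataclass

VALIDATION_ACCURACY = 0.95   # hypothetical accuracy demonstrated during validation
ALERT_MARGIN = 0.05          # hypothetical allowed degradation before escalation

@dataclass
class MonitoringResult:
    window_accuracy: float
    breached: bool

def monitor_window(predictions: list[int], confirmed_labels: list[int]) -> MonitoringResult:
    """Compare accuracy over a review window with the validated baseline."""
    if not predictions or len(predictions) != len(confirmed_labels):
        raise ValueError("predictions and confirmed labels must be non-empty and aligned")
    correct = sum(p == y for p, y in zip(predictions, confirmed_labels))
    accuracy = correct / len(predictions)
    return MonitoringResult(accuracy, accuracy < VALIDATION_ACCURACY - ALERT_MARGIN)

# Example usage: a periodic review job would feed in the latest confirmed outcomes
# and trigger the deviation/change process if the agreed limit is breached.
result = monitor_window([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
if result.breached:
    print(f"Accuracy {result.window_accuracy:.2f} below agreed limit; escalate per periodic review process.")
else:
    print(f"Accuracy {result.window_accuracy:.2f} within agreed limits.")
```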

AI Technical Documentation and GAMP® 5

Article 11 of the proposed AI regulation1 describes the technical documentation required for high-risk AI systems and outlines that it shall contain at least the following information, as applicable to the relevant AI system. The following tables map these requirements to the sections and appendices of GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition)2 that support them.


Table 1: Regulation of AI and corresponding GAMP® 5 guidance that covers a general description of the AI system.

A general description of the AI system, including:

Requirement: Its intended purpose, the person(s) developing the system, the date, and the system version
GAMP® 5 guidance: D6–System Descriptions

Requirement: How the AI system interacts or can be used to interact with hardware or software not part of the AI system, where applicable
GAMP® 5 guidance:
  • D6–System Descriptions
  • D1–Specifying Requirements

Requirement: Versions of relevant software or firmware and any requirement related to version update
GAMP® 5 guidance: D6–System Descriptions

Requirement: Description of all forms in which the AI system is placed on the market or put into service
GAMP® 5 guidance:
  • Not supported by GAMP® 5
  • Partially covered in D11–Artificial Intelligence and Machine Learning (as part of the concept phase)

Requirement: Description of the hardware on which the AI system is intended to run
GAMP® 5 guidance:
  • D6–System Descriptions
  • D1–Specifying Requirements

Requirement: Where the AI system is a component of products, photographs or illustrations showing the external features, marking, and internal layout of those products
GAMP® 5 guidance: Not supported by GAMP® 5

Requirement: Use and installation instructions
GAMP® 5 guidance:
  • Main section Chapter 6.1.3
  • Main section Chapter 7.12

Table 2: Regulation of AI and corresponding GAMP® 5 guidance that covers a detailed description of the elements of the AI system and the process for its development.

A detailed description of the elements of the AI system and the process for its development, including:

Requirement:
  • Methods and steps performed for the AI system development
  • Third-party pretrained systems and tools
  • How third-party systems and tools have been used, integrated, or modified by the provider
GAMP® 5 guidance:
  • Main section Chapters 3 and 4 describe the activities in general
  • D11–Artificial Intelligence and Machine Learning partially covers the pretrained system
  • The pretrained system could be described in a functional specification (Appendix D1) or partly in a validation plan (Appendix M1)

Requirement:
  • System design specifications (general logic of the system and algorithms)
  • Key design choices (including rationale and assumptions)
  • Key design choices with regard to persons or groups of persons on which the system is intended to be used
  • Main classification choices
  • What the system is designed to optimize for and the relevance of the different parameters
  • Decisions about any possible tradeoff made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2
GAMP® 5 guidance:
  • D1–Specifying Requirements (partly covered)
  • D11–Artificial Intelligence and Machine Learning covers main classification choices
  • S1–Artificial Intelligence: Machine Learning covers main classification choices

Requirement:
  • Description of the system architecture, explaining how software components build on or feed into each other and integrate into the overall processing
  • Computational resources used to develop, train, test, and validate the AI system
GAMP® 5 guidance:
  • D11–Artificial Intelligence and Machine Learning
  • S1–Artificial Intelligence: Machine Learning

Requirement:
  • Assessment of the human oversight measures needed in accordance with Article 14
  • Assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Article 13(3)(d)
GAMP® 5 guidance:
  • Not directly covered by GAMP® 5
  • D11–Artificial Intelligence and Machine Learning covers this to some extent

Requirement:
  • Detailed description of predetermined changes to the AI system
  • Detailed description of the AI system’s performance
  • All relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Title III, Chapter 2
GAMP® 5 guidance:
  • O8–Periodic Review
  • D6–System Descriptions
  • D11–Artificial Intelligence and Machine Learning

Requirement:
  • Validation and testing procedures used
  • Information about the validation and testing data used and their main characteristics
  • Metrics used to measure accuracy, robustness, cybersecurity, and compliance with other relevant requirements set out in Title III, Chapter 2
  • Potentially discriminatory impacts
  • Test logs and all test reports, dated and signed by the responsible persons, including with regard to the predetermined changes referred to in the row above
GAMP® 5 guidance:
  • Main section Chapter 7.10 (Testing)
  • D5–Testing of Computerized Systems

Requirement: Metrics used to measure accuracy, robustness, cybersecurity, and compliance with other relevant requirements set out in Title III, Chapter 2
GAMP® 5 guidance: The main body section and M3–Science-Based Quality Risk Management should be considered for challenge testing

Table 3: Regulation of AI and corresponding GAMP® 5 guidance that covers detailed information about monitoring, functioning, and controlling the AI system.

Detailed information about monitoring, functioning, and controlling the AI system, including:

Requirement:
  • Its capabilities and limitations in performance
  • Degrees of accuracy for the specific persons or groups of persons on which the system is intended to be used
  • Overall expected level of accuracy in relation to its intended purpose
  • Foreseeable unintended outcomes and sources of risks to health and safety
  • Fundamental rights and discrimination in view of the system’s intended purpose
  • Human oversight measures needed in accordance with Article 14
  • Technical measures to facilitate interpretation of the outputs
  • Specifications on input data, as appropriate
GAMP® 5 guidance: D11–Artificial Intelligence and Machine Learning

Requirement: Detailed description of the risk management system in accordance with Article 9
GAMP® 5 guidance:
  • GAMP® 5 main body Section 5 (Quality Risk Management)
  • D11–Artificial Intelligence and Machine Learning (partially covered)

Requirement: Description of any change made to the system through its life cycle
GAMP® 5 guidance:
  • O6–Operational Change and Configuration Management
  • O8–Periodic Review

Requirement:
  • List of the harmonized standards applied in full or in part (the references of which have been published in the Official Journal of the European Union)
  • Where no such harmonized standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of other relevant standards and technical specifications applied
GAMP® 5 guidance: Not covered or supported in GAMP® 5

Requirement: Copy of the EU declaration of conformity
GAMP® 5 guidance: Not covered or supported in GAMP® 5

Requirement: Detailed description of the system in place to evaluate the AI system’s performance in the postmarketing phase in accordance with Article 61, including the postmarketing monitoring plan referred to in Article 61(3)
GAMP® 5 guidance:
  • O8–Periodic Review
  • D11–Artificial Intelligence and Machine Learning (partly supported)
  • S1–Artificial Intelligence: Machine Learning (partly supported)

Conclusion

AI and ML are transforming the way industry does business and processes data. The pharmaceutical industry is increasingly relying on such innovative technologies to automate many functions previously performed by humans. As computer systems become more integrated and datasets become more extensive, computer science is advancing our ability to learn from that data and draw conclusions: the underlying algorithms are now sophisticated enough to begin making robust decisions in the form of AI. The requirements listed in the draft regulation for developing and operating high-risk AI systems are all based on good engineering practice. Many activities in GAMP® 5 and supporting guidance, such as the GAMP® RDI Good Practice Guide: Data Integrity by Design, are likewise based on good engineering practice and can therefore serve as the basis for fulfilling those requirements.

Even when high-risk AI systems are not themselves evaluated as GxP systems, it will be beneficial to apply the GAMP-based quality activities from the company’s quality management system. GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition) and related GAMP Good Practice Guides can be effectively applied to help meet the requirements of the proposed EU AI regulation for GxP-regulated systems employing AI/ML that fall within the scope of that regulation. GAMP guidance may also prove useful for any organization wishing to meet the quality assurance requirements of the draft regulation for other AI/ML systems.
