March / April 2017

Achieving & Maintaining GAMP 5 Compliance: Risk-Based Approach to Software Development & Verification

Diana Bagnini
Barbara De Franceschi
Margherita Forciniti

Given the growing level of automation, validation of computerized systems must be an integral part of projects to guarantee the quality of products and process controls. This article focuses on the software verification of two machine models. A multidisciplinary group performed a software risk assessment and control to identify the level of risk of each software module and to carry out a corresponding series of activities and tests. This process increased software quality and improved maintenance.

The reference standards and methods used to validate the systems are those set by GAMP® 5, which follows a risk-based approach.1

Introduction

Automatic machine suppliers must always be conscious of the regulatory requirements placed upon their customers to ensure that all critical equipment is capable of being implemented to meet validation requirements and ensure patient safety, product quality, and data integrity. GAMP® (Good Automated Manufacturing Practice) guidelines are designed to interpret validation requirements and apply them to all aspects linked either directly or indirectly to pharmaceutical product quality.

GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems introduces the concept of risk management for automated and computerized systems, focusing validation and control only where necessary, and identifying the functions and processes that pose the most risk for the pharmaceutical product.1

Given growing levels of automation, the functions and processes previously managed by mechanical devices are now carried out with the aid of software, giving increasing importance to these components. As an automatic machine supplier operating in the pharmaceutical sector, our aim is to guarantee the quality of products and process control, even where a computerized system takes the place of a manual operation. In the development of our latest machine models, therefore, we have paid particular attention to software design, using GAMP 5 principles and framework.

The multiphase risk-assessment and control project described in this article involved the collaboration of various professionals, including members of the quality assurance department, software engineers, validation and risk assessment experts, as well as coworkers from research laboratories and the University of Bologna (Figure 1).

1. International Society for Pharmaceutical Engineering. GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems. ISPE, 2008. https://ispe.org
2. International Council for Harmonisation. ICH Q9: Quality Risk Management. 2005.
Figure 1: The professionals involved in the multiphase risk-assessment and control project.

    Case Study

The project focused on the software of two machine models: the first is an automatic rotary tablet press for producing single-layer tablets, capable of processing all production volumes; the second is a laboratory system for granulation and/or core coating processes. To ensure that the software used in these machines could be classified as GAMP Category 3 (non-configured), we had to carry out a series of activities, as follows:

    1. Study

Before conducting the risk assessment, it was necessary to understand the context of application and to carry out an in-depth study of the software architecture of the two machine models under consideration.

The software architecture of each machine is made up of two macro systems: machine control and user interface. Both were designed and implemented in-house.

    The machine control software processes all machine movements, phases, data, and functions. It can be installed on a PC or on a programmable logic controller.

    The user interface, on the other hand, is the means of communication between the machine and the operator. It manages data flow to and from the machine control and displays information on the monitor; it also collects data and statistics regarding product quality. It is installed on a PC.

    Figure 2 shows a schematic representation of the two macro systems. Both consist of three software layers: operating system (present only if the machine control is installed on a PC), libraries, and application. The libraries implement the functions shared by all machine models. The application layer is specific to the machine model; through the libraries, it implements the main functions for which machine control and user interface are responsible.

    The risk assessment covered the libraries and application layer only; no further action was envisaged for the operating system, as it was accepted by virtue of its successful widespread use.

Figure 2: Schematic representation of the machine control and user interface macro systems.

      2. Method

The method used for risk assessment and risk control is failure mode, effects, and criticality analysis (FMECA), one of the standard methodologies put forward by ICH Q9.2 Following the FMECA steps, the critical issues of a process or product are analyzed as follows:

      • Risk identification/risk analysis: Various failure modes (hazards) are assessed, as are the severity of their effects on the system, their probability of occurrence, and whether or not controls are in place to detect the failure modes.
      • Quality risk evaluation: Quantitative values are assigned to the severity, occurrence, and detection ratings to calculate the risk priority of each failure mode.
      • Risk control: Depending on the risk priority number generated, activities and corrective actions are established to make the risk level acceptable.
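
As a minimal sketch of what one row of such an analysis can look like in code (the data model below is illustrative; the article does not prescribe one):

```python
from dataclasses import dataclass, field

@dataclass
class HazardEntry:
    """One hazard-effect pair in a software FMECA worksheet (illustrative)."""
    module: str              # software module under analysis
    function: str            # functionality the module implements
    hazard: str              # unforeseen software behavior (bug)
    effect: str              # potential effect on product, data, or patient
    severity: str            # "low" | "medium" | "high"
    probability: str         # "low" | "medium" | "high"
    detection: str           # "low" | "medium" | "high"
    risk_priority: str = ""  # derived from the three ratings
    actions: list[str] = field(default_factory=list)  # risk-control actions
```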

Since the object of risk assessment and control in this project is software, the hazards considered do not concern broken or damaged components, but rather software behaviors (bugs) unforeseen or not assessed by the designer.

The goal of this phase was to define objective evaluation scales and avoid the arbitrary attribution of values. From this phase onward, collaboration with various professionals was essential: software designers with the required expertise on the machine models' software, validation experts who understood the documentation required by the customer, coworkers from research laboratories and the University of Bologna armed with the latest software-design methodologies, and representatives of the quality assurance department with expertise in the risk assessment methodology.

      Given the different characteristics of the machine control and user interface, it was necessary to define the evaluation scales and assign values to the three parameters:

• Severity: The impact of a hazard on patient health and/or on data, with particular focus on data consistency and integrity.
• Probability: A preliminary study of the probability of software bugs identified two determining factors: maturity and complexity. The more mature the software, the lower the probability of bugs, since the software has been widely used over time by numerous users. Complexity depends on factors such as the programming language and the foundations on which it is based. Complexity scales from the literature were studied and then adapted to the automation context. For the machine control, for example, only cyclomatic complexity was considered; the user interface, being object-oriented, required several additional language-specific indicators, such as coupling between object classes and depth of inheritance (see the complexity sketch after this list).
• Detection: This parameter required examination of each instrument capable of providing evidence of a hazard or its underlying cause, from both the developer and end user perspectives (e.g., an error trace or an error message visible on the user interface).
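
To illustrate how a complexity indicator can be computed, here is a minimal cyclomatic-complexity estimator for Python source. It is a sketch of the idea only: the machine software described in this article is not written in Python, and the team adapted published scales rather than this exact count.

```python
import ast

# Decision points counted for a minimal cyclomatic-complexity estimate.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Estimate McCabe complexity: 1 + number of decision points."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand of 'and'/'or' adds one branch
            complexity += len(node.values) - 1
    return complexity

sample = (
    "def f(x):\n"
    "    if x > 0 and x < 10:\n"
    "        return x\n"
    "    return 0\n"
)
print(cyclomatic_complexity(sample))  # 3: base 1 + one 'if' + one 'and'
```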

      The scale for each parameter listed above has three levels: high, medium, and low.

      3. Execution

Software modules implementing the various functionalities were analyzed individually. For each one, all possible hazards were listed, along with the effects they could have on the product, the integrity of the data, and the patient’s health. The main hazards encountered for the machine control affected patient health, while hazards relating to data integrity and consistency were more common for the user interface.

Based on the assessment scales, severity, probability, and detection values were assigned to each hazard-effect pair and the risk priority was calculated, as suggested by the tables in chapter 5.4 of GAMP 5. Depending on the risk priority that emerged for each pair, appropriate actions were identified. Table A shows an example of the analysis carried out on three different modules with different resulting risk priorities; for simplicity, a single hazard is shown for each module.
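
As a sketch of that two-step lookup, the snippet below assumes illustrative matrices in the spirit of the GAMP 5 tables: severity and probability combine into a risk class, which then combines with detection into the final risk priority. The matrices here are assumptions chosen so that the three Table A examples reproduce the priorities reported there; the authoritative tables remain those of GAMP 5.

```python
# Risk class (1 = highest), from (severity, probability).
RISK_CLASS = {
    ("high", "high"): 1,   ("high", "medium"): 1,   ("high", "low"): 2,
    ("medium", "high"): 1, ("medium", "medium"): 2, ("medium", "low"): 3,
    ("low", "high"): 2,    ("low", "medium"): 3,    ("low", "low"): 3,
}
# Risk priority, from (risk class, detection): a hazard that is
# hard to detect keeps a higher priority.
RISK_PRIORITY = {
    (1, "low"): "high",   (1, "medium"): "high",   (1, "high"): "medium",
    (2, "low"): "high",   (2, "medium"): "medium", (2, "high"): "low",
    (3, "low"): "medium", (3, "medium"): "low",    (3, "high"): "low",
}

def risk_priority(severity: str, probability: str, detection: str) -> str:
    """Map three-level ratings to a risk priority (low/medium/high)."""
    return RISK_PRIORITY[(RISK_CLASS[(severity, probability)], detection)]

# The three Table A examples:
print(risk_priority("low", "low", "medium"))   # lamps handler     -> low
print(risk_priority("high", "low", "medium"))  # lubrication pump  -> medium
print(risk_priority("high", "low", "low"))     # recipe OCX        -> high
```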

      4. Actions

Depending on the risk priority levels detected, activities were identified for each module. For low-risk-priority modules, no action was taken beyond the tests normally performed during internal testing on the machine (e.g., checking report generation, verifying production charts). Modules with a medium risk priority were evaluated on a case-by-case basis, with corrective actions such as code reviews or targeted tests performed as needed. For high-risk-priority modules, activities were aimed at lowering the risk priority: an assessment was made as to whether to carry out dedicated tests or to intervene directly on the software, for example to lower the probability of a hazard occurring or to increase detection by adding specific controls.

      For the dedicated tests, the goal was to reproduce the system status by simulating the hazard presumed in the risk assessment, checking the response, and correcting any errors found in the software. All tests were fully documented and the results obtained were noted.
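
For illustration, a dedicated test for the recipe hazard of Table A might look like the sketch below. FakeRecipeSystem is a hypothetical stand-in, since the article does not describe the real machine interfaces.

```python
import unittest

class FakeRecipeSystem:
    """Hypothetical stand-in for the recipe archive and audit trail."""
    def __init__(self):
        self.archive = {}      # recipe name -> stored values
        self.audit_trail = []  # exported (recipe name, values) records

    def write_recipe(self, name, values):
        # the hazard under test: values must reach both destinations intact
        self.archive[name] = dict(values)
        self.audit_trail.append((name, dict(values)))

class RecipeAuditTrailTest(unittest.TestCase):
    def test_recipe_values_consistent_with_audit_trail(self):
        system = FakeRecipeSystem()
        entered = {"compression_force_kN": 12.5, "turret_speed_rpm": 40}
        system.write_recipe("batch_42", entered)
        # values in the archives and audit trails must match those entered
        self.assertEqual(system.archive["batch_42"], entered)
        self.assertEqual(system.audit_trail[-1], ("batch_42", entered))

if __name__ == "__main__":
    unittest.main()
```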

If a particularly large module was found to be high risk and to require a very long series of activities, it was separated during the analysis phase into its underlying functions, to isolate those that were most critical and focus activities solely on them.

      Choosing the granularity (module or function) with which software was separated did not follow a fixed rule; the developer in the analysis team decided based on context and type of software.

Table A: Sample analysis of three modules

| Module | Lamps handler (machine control) | Lubrication pump handler (machine control) | Recipe OCX handler (user interface) |
|---|---|---|---|
| Function | Signaling column management | Lubrication oil pump activation | Writing values in the recipe archives that are then exported to the audit trails |
| Hazard | Wrong configuration of color-machine status couplings | Wrong lubrication pump deactivation | Incorrect recipe values recording |
| Potential effect | Signaling column’s color not consistent with machine status | Overabundance of oil in the machine | Wrong recipe values in the audit trails |
| Severity | Low: impact only on visualization | High: possible product contamination | High: incorrect values |
| Probability | Low: very mature function with low complexity | Low: very mature function with low complexity | Low: very mature function |
| Detection | Medium: visual feedback | Medium: visual feedback | Low: detectable only with a targeted verification of audit trail data |
| Risk priority | Low | Medium | High |
| Actions | Functional tests already performed on the machine | Test: turn off the pump and make sure it does not pump oil | Test: enter recipe data and check that the values in the archives and audit trails are consistent with those entered |

      5. Maintenance

By its inherent nature, software undergoes continual evolution: to implement improvements, to accommodate mechanical and electrical modifications, or to adapt to new machine functions. This means that risk assessment and control must also evolve and be updated at the same pace as the software itself. For every software version issued after the first one analyzed, it is therefore necessary to also update the risk assessment and control table, reviewing the modules that have been modified or added as well as those affected by the modifications or additions.

      This produces a risk assessment and control table for each software version.
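
One way such a per-version review can be mechanized is sketched below: modules modified or added in the new version are flagged, and the flag is propagated to every module that depends on a flagged one. The module names and dependency map are hypothetical, not from the article.

```python
def modules_to_review(changed, depends_on):
    """Flag changed modules plus everything that (transitively) uses them."""
    to_review = set(changed)
    grew = True
    while grew:
        grew = False
        for module, deps in depends_on.items():
            if module not in to_review and deps & to_review:
                to_review.add(module)
                grew = True
    return to_review

deps = {
    "recipe_ocx_handler": {"recipe_library"},
    "report_generator": {"recipe_library"},
    "lamps_handler": set(),
}
print(sorted(modules_to_review({"recipe_library"}, deps)))
# -> ['recipe_library', 'recipe_ocx_handler', 'report_generator']
```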

      Conclusions

      The project described here had a variety of benefits.

Resources were optimized: Following the risk assessment, resources focused on the most at-risk modules and functions, and corrective actions were directed solely where necessary. Of all the modules analyzed, approximately 20% were high risk, almost entirely application-layer modules developed from scratch for the new machine models. Approximately 30% were medium risk; of these, 80% were application-layer modules and the remaining 20% were library modules shared by all machines. Consequently, the greatest efforts were focused on approximately 50% of the software, instead of the software in its entirety.

      Another benefit was an improvement in software quality. Following the risk assessment, corrective actions included software changes to eliminate errors, add further controls (thus increasing hazard detection), or simplify software functionality.

In addition, the analysis team was directed to evaluate not only software functionality hazards, but also those affecting product quality and data integrity and consistency. Finally, the risk priority resulting from the assessment is objective, thanks to the method used to construct the evaluation scales for the severity, probability, and detection parameters.

Given that risk assessment and control tables are always aligned to software revisions, they also serve as key documentation tools. If software changes need to be implemented, even by a different designer than the one who wrote the original software, it will be possible to implement them more quickly and accurately.