Special Reports
November / December 2017

Lessons Learned from the Sarbanes-Oxley Act

James Canterbury
Chris Jacobson
ISPE Pharmaceutical Engineering

Fifteen years ago, corporations embarked on a journey toward SOX compliance; along the way they have learned a tremendous amount about data integrity as it relates to financial systems. Those lessons learned are directly applicable to many of the data-integrity challenges facing the pharmaceutical industry today.

In 2002, the US Congress passed the Sarbanes-Oxley Act (SOX) to protect investors, creditors, and employees from harm due to fraudulent financial reporting and accounting activities by public corporations. The law was a reaction to front-page news of the direct impact of financial reporting scandals and the accompanying overall decline of trust in financial reports and the institutions that produced them.

SOX focused on four key areas: auditor oversight and independence, restrictions and ethical expectations of analysts, executive responsibility for financial reporting, and internal control reporting (section 404), which outlined requirements for information technology (IT) departments regarding electronic records and the need to establish internal controls regarding the completeness and accuracy of information.

The bipartisan SOX legislation, enacted in July 2002, included the creation of the Public Company Accounting Oversight Board (PCAOB) to regulate the auditors of public companies, a profession that previously had been self-regulated. Since then thousands of companies of different sizes across diverse industries have journeyed through SOX compliance, each working to apply the related regulations to their unique situations and implement a system of internal controls that met the requirements. Supporting technology has evolved and companies have been able to optimize their control environments, allowing them to more efficiently and effectively know that their financial data is materially accurate.

When we reflect on data integrity as it relates to GxP (e.g., good manufacturing practice or good laboratory practice), the same issue of trust applies. In the GxP scenario, however, the US Food and Drug Administration (FDA) replaces the PCAOB, and plays the roles of both auditor and regulator. By not requiring that an independent third-party attest to the completeness and accuracy of the reports, the onus of data integrity for GxP is placed squarely on the shoulders of the owning organization.
There is also a fundamental element common to the control of GxP processes and financial processes: reports. Financial statements are reports that represent a formalized, consolidated view of real-world transactions that the public relies on (and that the US Securities and Exchange Commission [SEC] regulates). This is similar to how batch release approval is obtained through reports, which are consolidated views of individual test results and raw data: The principles of data integrity must be embedded throughout the process in order to elicit confidence that the final statements are true.

Hierarchy of SOX data

HOW DID COMPANIES START THEIR SOX JOURNEYS?

Asking “What can go wrong?” is where most SOX programs began. Given the objective to prove that a business is creating accurate financial reports, four types of risks are typically identified in any given process:

  1. Process and/or supporting system is not designed correctly
  2. Systems do not function as intended
  3. Human error—accidental (some people make mistakes)
  4. Human error—not accidental (some people cheat)

Companies identified business processes, the risks associated with them, and controls to address those risks. Controls could be designed to prevent an error or to detect and correct one in a timely manner. Some companies seized the opportunity to challenge and enhance their processes, while others sought solutions that complied with the law and avoided foundational change. In time, industries and service providers began to join together through professional organizations to develop standards governing what a control framework should include and how it should work.

One of the most recognized of these organizations is the Committee of Sponsoring Organizations of the Treadway Commission (COSO), a joint initiative whose mission is to provide thought leadership through the development of comprehensive frameworks and guidance on enterprise risk management, internal control, and fraud deterrence designed to improve organizational performance and governance and to reduce the extent of fraud in organizations.1 COSO issued its initial “Internal Control—Integrated Framework” in 1992, and the framework and its subsequent updates became the main standard that companies follow to assess internal controls.

The PCAOB’s initial Auditing Standard No. 2 was widely criticized for being unwieldy and prescriptive.2 In 2007, the SEC unanimously approved its replacement, the much shorter Auditing Standard No. 5, which was intended to make the standards principles-based, flexible, and scalable. Companies shifted their focus to a smaller set of “key” controls, and much of the SOX testing effort went into making sure these were designed and operating effectively.

Organizations put a lot of thought into specific testing approaches for these controls, requiring them to be tested by people who were both competent and objective, and where possible implementing automated controls to prevent issues from happening. A classic example is a three-way match between a vendor invoice, a purchase order, and goods receipt before payment is distributed—most enterprise resource planning (ERP) systems now handle this as core functionality. As awareness of controls increased, companies put pressure on software vendors to bake controls (and configurations for controls) into their systems. This has led to more software-driven compliance and systems that are designed with controls in mind.
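As a minimal illustration of the three-way match described above (a sketch, not how any particular ERP system implements it; the `Document` class and field names are hypothetical), the control reduces to comparing the three documents and blocking payment on any disagreement:

```python
from dataclasses import dataclass


@dataclass
class Document:
    """Simplified stand-in for a PO, goods receipt, or invoice line."""
    vendor: str
    item: str
    quantity: int
    unit_price: float


def three_way_match(purchase_order: Document,
                    goods_receipt: Document,
                    invoice: Document,
                    price_tolerance: float = 0.01) -> bool:
    """Approve payment only if the PO, goods receipt, and invoice agree."""
    same_vendor = purchase_order.vendor == invoice.vendor
    same_item = purchase_order.item == goods_receipt.item == invoice.item
    # Never pay for more than was actually received.
    qty_covered = goods_receipt.quantity >= invoice.quantity
    price_ok = abs(invoice.unit_price - purchase_order.unit_price) <= price_tolerance
    return same_vendor and same_item and qty_covered and price_ok
```

The value of automating such a control is that it is *preventive*: the mismatched payment never happens, rather than being caught by a reviewer after the fact.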

Out of this grew a new breed of software for governance of risks and controls (GRC). These software programs were geared initially toward managing an entity’s risk and controls framework while automating or at least better organizing much of the routine testing of controls. They have expanded into software administration platforms that help manage access, design role-based security, and monitor changes to the environment. The prevalence of GRC and its ability to drive business value beyond compliance suggests that it is an approach that might have a positive impact on data integrity initiatives within GxP environments as well.

The main lesson learned with determining key controls is: Pick your controls carefully. There needs to be a balance between preventing and detecting, and not every control needs to be tested. Understanding the risk that the control addresses is a critical aspect of picking the right controls. It is also helpful to ask “What must go right?” when establishing a controls framework. If SOX is any indicator of the direction that GxP software vendors might take in response to increased scrutiny on data integrity, we could find an increased level of configurable security controls and audit trails within standard software packages.

RELIANCE ON REPORTS

In the world of SOX, reports are everywhere and are intended to instill confidence in a public company’s overall consolidated financial reports. To achieve that goal, a company must rely on hundreds of individual reports and sources of data. Many controls are considered IT-dependent manual (ITDM) controls, meaning that a system generates some data in the form of a report or data extract but a person is responsible for reviewing that report to execute the control. The control is only as good as the quality of the data in the report. The reliance on system output in ITDM controls is similar to how pharmaceutical companies rely on reports within GxP processes. It is in these reports that we can glean many of the lessons learned about data integrity.
The testing approach for ITDM controls provides a good parallel to efforts currently underway in many data integrity initiatives. Understanding the source, of which there are usually four categories, is a good starting point:

  1. Standard system report (comes with system functionality; can’t be configured)
  2. Custom system report (developed for a specific need; configurable)
  3. Data extracts/queries (user-defined parameters)
  4. Spreadsheets

After understanding the source, we focus on the logic. Each report performs the following sequence:
Input ➞ Transform/Aggregate ➞ Output

Let’s look at these in reverse order:

Output: When dealing with reports, completeness and accuracy are two sides of the same coin. It is no coincidence that completeness and accuracy are two components of ALCOA+.* “Completeness” means that a given report represents everything that it was designed to represent and meets the criteria (filters) specified in the report. In other words, the data was not cherry-picked to tell a particular story. “Accuracy” means that the data is true. The majority of the data-integrity issues that life sciences companies face today fall under the context of accuracy. Verifying accuracy can be a much more difficult task than verifying completeness.
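A common way to test completeness is to reconcile the report back to its source with record counts and control totals. The sketch below assumes rows are represented as dictionaries and a single numeric field serves as the control total; both names are illustrative, not drawn from any specific system:

```python
def completeness_check(source_rows: list[dict],
                       report_rows: list[dict],
                       value_field: str) -> bool:
    """Reconcile a report against its source data.

    Both the row count and a control total over `value_field` must
    agree; a mismatch suggests records were dropped or filtered
    beyond the report's stated criteria.
    """
    if len(source_rows) != len(report_rows):
        return False
    source_total = sum(row[value_field] for row in source_rows)
    report_total = sum(row[value_field] for row in report_rows)
    return abs(source_total - report_total) < 1e-9
```

A check like this catches cherry-picking by omission; it says nothing about whether the individual values are true, which is the harder accuracy question.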

It is worth noting that inaccurate or false data is not necessarily aberrant. Tests that focus on identifying outliers (control limits, standard deviations, etc.) may help identify human error or data generated by a system acting abnormally, but they are not sufficient to detect fraudulent data: most of the time that data appears legitimate and requires more sophisticated testing to detect.
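To make the limitation concrete, here is a minimal control-limit test of the kind described above, flagging values outside the mean ± k standard deviations. It will catch an aberrant point, but a fabricated value planted inside the limits passes untouched:

```python
from statistics import mean, stdev


def flag_outliers(values: list[float], k: float = 3.0) -> list[float]:
    """Return values falling outside mean ± k standard deviations.

    Useful for catching human error or system malfunction, but
    fraudulent data crafted to look normal will sit inside the
    control limits and will NOT be flagged.
    """
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) > k * sigma]
```

This is why data-integrity programs cannot stop at statistical screening: detecting plausible-looking fabrication requires cross-checks against independent sources (raw instrument data, audit trails), not just the reported numbers themselves.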


* The FDA introduced the acronym “ALCOA” (attributable, legible, contemporaneous, original, accurate) to provide attributes of integrity; the term “ALCOA+” adds four additional attributes: complete, consistent, enduring, available. Source: Data Quality and Data Integrity: What's the Difference

Transform/Aggregate: This is the processing logic of the report, a combination of the configuration and the computer code that applies programmed functions under given conditions. Processing may be as simple as displaying raw data that meets certain conditions or providing simple sums of defined data sets. Or it may be very complex, requiring statistical calculations, time series, or even advanced processing such as artificial intelligence. And if the report includes a graphical interface (e.g., a dashboard), then the charts and graphs in that dashboard may also perform some calculations.

The transform functions of a report should be treated the same way as a system; in most cases, in fact, they are systems. For SOX, we might require a review of the source code, an understanding of how the report was tested during development, a parallel calculation, or evidence that the report logic has not changed since the last time a full review was performed. In GxP, this could fall under the computer system validation approach.

Input: This is the source data for the report. Data sources can come in many different shapes and forms, with some reports having multiple data sources. For companies that are required to be SOX-compliant, that source is typically an ERP system used to support their financial processes. Identifying and understanding underlying systems are critical components of SOX scoping and testing, because a company needs to determine if it can rely on these systems throughout the time period under review to produce complete and accurate reports.

Service organization control reports

SOX placed the onus on management to demonstrate controls over their entire organization, including any third parties to whom processes may have been outsourced (e.g., payroll or IT services). And it didn’t make sense to have a service organization supporting several customers get their controls evaluated by each customer and their auditors separately.

The American Institute of Certified Public Accountants (AICPA) and its international counterparts developed what are known as “SOC 1” reports that create a framework for an organization to publish a report on their controls relevant to financial reporting and the results of independent testing. SOC 2 and SOC 3 reports also exist that may cover areas of internal control broader than financial reporting, such as security, privacy, confidentiality, availability, and processing integrity. The explosive growth of cloud-based services has made SOC 1, 2 and 3 reporting increasingly common. The reports also set expectations regarding the controls a company using a service organization should have in place (a company outsourcing its payroll, for example, must provide accurate timesheet records to its payroll firm).

While there is no industry standard (yet) that guides an attestation approach for GxP services and systems provided by third parties for pharma companies, many service providers (specifically those that offer cloud-based services) do provide guidance on how to apply their services within a corporate environment. This guidance often references the service provider’s IT controls documented in its available SOC reports (Amazon Web Services is an example). As the overlap between the data integrity controls from a finance and GxP perspective becomes more obvious, we expect that third-party pharma service providers (contract research organizations, contract manufacturing organizations, etc.) may take an approach similar to SOC reporting to provide evidence of consistent data integrity controls across their customer base.

RELIANCE ON SYSTEMS

Developing confidence in the systems that generate the reports or enforce other SOX-related controls at a business-process level has been, and still is, a focus of many SOX programs. It can be quite difficult to get comfortable with a process output if one does not have confidence in the systems that support it. To address this, companies perform IT general controls (ITGC) testing, which is designed to confirm that the system has been operating as intended over a specified period of time. ITGC testing covers three general areas:

Access controls

Who has access to the system, and what can they do? Most systems today have some sort of role-based access control (RBAC) that limits what system users are able to do based on their role in the organization. RBAC design typically incorporates organizational functions, training requirements, and segregation of duties (such as designing permissions to prevent a single individual from having too much control over a process, similar to checks and balances in government).
Along with enforcing sufficient password parameters, access controls also need to account for “super users” and system administrators (who might have the ability to grant themselves permissions and erase audit trails). By following the principle of least required access and by checking the level of access enjoyed by active employees, companies are often able to identify a large number of employees and contractors who have far more access in the system than they need to perform their jobs, or to uncover access that could allow someone to circumvent an internal control (e.g., by logging in using a terminated employee’s ID).
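The access review described above can be sketched as a comparison of each user’s actual permissions against the baseline for their role, plus a check for active credentials belonging to leavers. This is a simplified illustration under assumed data shapes (dictionaries keyed by user ID), not the interface of any real GRC tool:

```python
def access_review(user_permissions: dict[str, tuple[str, list[str]]],
                  role_baseline: dict[str, list[str]],
                  terminated: set[str]) -> dict[str, str]:
    """Flag users whose access exceeds their role, and leavers with
    active credentials.

    user_permissions maps user ID -> (role, granted permissions);
    role_baseline maps role -> the permissions that role should have.
    """
    findings = {}
    for user, (role, perms) in user_permissions.items():
        if user in terminated:
            findings[user] = "terminated employee still has active access"
            continue
        extra = set(perms) - set(role_baseline.get(role, ()))
        if extra:
            findings[user] = f"permissions beyond role baseline: {sorted(extra)}"
    return findings
```

Run periodically (and with segregation-of-duties rules layered on top), a review like this is a detective control that complements the preventive effect of role-based provisioning.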

Change control

How the system is modified and kept in accordance with approved design requirements is more than just good development practice. The change control process has become the cornerstone of trust that a system continues to work as intended over time. In the absence of continuous monitoring or evidence that no changes have been made, change control must be effective.

IT operations

Backup/data retention, processing of scheduled jobs, interfaces, reliability and incident management, and physical and network security: these sometimes-underappreciated factors are foundational to the overall IT control environment. A company must have confidence that data flows as intended between systems, that data is backed up and recoverable in a timely manner in the event of an IT incident, and that the IT environment has protection against, and an ability to recover from, cyberattacks. It’s important to note here that because the application, operating system, and database layers can each affect controls and access to underlying data, the concepts described above need to be applied to all three of these elements, which together comprise the “application stack.”

LESSONS LEARNED

From the start, following the passage of Sarbanes-Oxley, it was apparent that programs would need to stabilize and evolve to become more efficient. Industry expected this to happen fairly quickly; it didn’t quite work out that way. SOX programs have become more efficient, and audit findings have been the impetus for process and IT changes that drive value. Yet many organizations still see SOX as an onerous process and adopt a “just get it over with” mentality with regard to audit and controls testing. Temporary solutions are preferred over investment in robust process improvements. And an overall lack of dialogue between departments or entities within an organization leads to redundancy in controls and inconsistent processes.

Lesson 1: SOX costs

The cost of SOX compliance was expected to drop drastically in the first few years and then continue to modestly decline as programs matured. In most cases, there has been a drop from the Year 1 stand-up costs, but year-over-year SOX compliance costs have been sustained.

Lesson 2: Spreadsheets still rule

It seemed that the role of GRC technology and the integration of controls into standard ERP software would drive continuous automated testing, and control logs would provide all the evidence auditors would need. While GRC and analytics have come a long way in improving audit techniques, there are still a lot of Excel spreadsheets in use and manual controls testing being performed.

Lesson 3: Control deficiencies persist

Fifteen years after the enactment of Sarbanes-Oxley, control environments should be mature, there should be a very low volume of errors, and the few false positives detected should be used for training in audit programs. As it turns out, there are still many persistent control deficiencies. Instead of fixing root issues, companies are spending energy to prove that deficiencies do not result in any significant errors or trying to argue that the control is inconsequential.

Lesson 4: Each company is unique

“SOX in a box” was touted as a canned suite of standard controls and leading practices that could be implemented in nearly any company as it became public. But most companies still struggle significantly (and spend accordingly) with their first year of SOX compliance; every company is unique.

Lesson 5: Continuous improvement needed

It was thought that the regulatory environment would stabilize and attention would turn to improving specific areas, and encourage leading practice behavior. Yet regulatory tension still exists, and requirements often change under the guise of “continuous improvement.” The PCAOB continues to see significant findings when reviewing an auditor’s work, which in turn drives changes to the audit approach.

Lesson 6: Outsourcing not necessarily the answer

The industry envisioned a world where remote testing would be performed continuously using offshore resources and then summarized in annual audit reports. This has proven to be difficult to achieve. Many of the audits are being performed onshore by auditors who understand the organization and have long-standing relationships with process owners.

CONCLUSION

There is much to be learned from the SOX journey that is directly applicable to data integrity within a GxP environment. An organization that does not consult with its internal audit team when designing a data integrity program is potentially missing a wealth of knowledge and may be setting itself up to repeat mistakes.