The AI Paradox

The increasing digitalization of the pharmaceutical and medical device industry has created novel cybersecurity challenges, particularly with the rapid advancement of artificial intelligence (AI) technologies. This article examines the dual nature of AI as both a potential threat vector and a powerful defensive tool.
We analyze how cybercriminals leverage AI to develop sophisticated attacks, including AI-driven software supply chain attacks, deepfakes, data poisoning, and automated social engineering campaigns that target the sensitive data and operations of pharmaceutical and medical device companies. Conversely, we examine how AI enhances cybersecurity by improving defenses against deepfakes, data poisoning, LLM vulnerabilities, advanced phishing, and supply chain attacks.
We introduce the concept of integrated assurance (IA) as an essential framework for handling AI-related risks and achieving responsible implementation. Our research emphasizes that while AI is a force multiplier for cybersecurity teams, it requires careful governance and human oversight. We conclude with strategic recommendations for companies to effectively leverage AI in their cybersecurity programs while maintaining regulatory compliance.
Introduction
The pharmaceutical and medical device industries face growing threats from cybercriminals, as their heavy reliance on digital systems exposes them to the risk of compromised sensitive information, disruptions in operations, damage to revenue and reputation, and threats to their overall commercial viability. The modern enterprise operates through interconnected digital networks and systems that support its entire range of activities, including advanced research and development, complex manufacturing processes, intricate supply chains, and sensitive clinical trials.
The digital transformation that drives progress has expanded the attack surface, exposing organizations to an increasing number of cyberthreats. Disruption of operations—whether targeting manufacturing facilities, medical device functionality, or critical clinical trials—can heavily impact patients. Exploited vulnerabilities in medical devices can lead to direct patient harm and, as a theoretical threat, even to death.
Data breaches involving sensitive patient health information, business information, or confidential research findings erode public trust, and may cause severe regulatory penalties, and compromise patient privacy, potentially exposing them to identity theft or targeted harm. Furthermore, vulnerabilities within complex pharmaceutical supply chains, if exploited, can disrupt operations, particularly for drugs with a single producer or those that are in high demand.
The rapid advancement of AI technology is bringing fundamental changes to all industries, including cybersecurity operations.1 This powerful technology represents a double-edged sword, as it enables robust defensive capabilities against modern cyberthreats but simultaneously generates new attack vectors for malicious actors. This article examines the complex relationship between AI and cybersecurity by analyzing the new security threats that arise from AI-powered cyberattacks and the security benefits of AI-driven solutions.
AI in the Pharmaceutical and Medical Device Industry
Global investment in AI and generative AI (GenAI) has surged, with the pharmaceutical and medical device industries actively embracing the trend. IDC’s latest release of its “Worldwide AI and Generative AI Spending Guide” reports that the global AI market is currently valued at nearly US$235 billion, with projections indicating it will exceed US$631 billion by 2028.2
AI is transforming the pharmaceutical and medical device industries. As AI systems become increasingly embedded in regulated pharmaceutical and medical device processes, both the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have issued foundational documents to guide their safe and compliant use. However, the development of formal regulations has been slow, with a prevalence of discussion papers—especially regarding GenAI.
The following is a non-exhaustive overview of key publications by the EMA and the US FDA concerning the application of AI in the pharmaceutical and medical device sectors.
Key Publications from the EMA on AI
The EMA’s “Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle”3 covers AI applications across discovery, development, manufacturing, and postmarket activities. It discusses GxP compliance, validation, algorithm transparency, and data governance. It also introduces principles of human oversight, lifecycle management, and AI explainability.
The EMA’s “Guideline on Computerised Systems and Electronic Data in Clinical Trials”4 addresses the use of AI in clinical trial settings, particularly around data integrity, audit trails, and electronic source data.
Key Publications from the US FDA on AI
The discussion paper “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products”5 is a foundational document that sets the stage for how the US FDA envisions regulating and encouraging the responsible use of AI/machine learning (ML) across the entire drug development lifecycle.
The guidance “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together”6 outlines the detailed, collaborative efforts among US FDA centers to address AI integration across medical product lifecycles.
The “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD)”7 introduces a framework for regulating AI/ML-based software modifications.
“Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions”8 is a guidance document providing nonbinding recommendations on the information to include in a predetermined change control plan (PCCP) in a marketing submission for a device that includes one or more AI-enabled device software functions.
“Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions”9 outlines recommendations for managing cybersecurity risks throughout the product life cycle, highlighting the need for robust software development practices to safeguard device integrity and patient safety.
“Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations”10 provides recommendations on the content of marketing submissions for devices incorporating AI, emphasizing the importance of life cycle management to ensure safety and effectiveness.
“Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products”11 provides recommendations on using AI to support regulatory decision-making for drug and biological products, specifically outlining a risk-based credibility assessment framework that may be used for establishing and evaluating the credibility of an AI model for a particular context of use.
ISPE
The ISPE GAMP® community has begun producing guidance on AI and ML systems, with a particular emphasis on the need for explainability, appropriate validation strategies, and risk-based controls throughout the AI model lifecycle. This aligns with broader industry trends to ensure the trustworthy and auditable use of AI in high-risk contexts.12 A considerable number of articles have been published in Pharmaceutical Engineering over the past few years including “Applying GAMP® Concepts to Machine Learning”13 published in the January/February 2023 issue of the magazine.
Industry’s Next Steps
The pharmaceutical and medical device industries have long faced numerous cybersecurity threats, including data breaches, intellectual property theft, and supply chain vulnerabilities, but AI adoption has transformed both the size and complexity of these threats. Through AI, attackers can automate, personalize, and scale their attacks beyond previous limits. The industry operates with limited AI security expertise while facing slowly developing regulatory requirements. There is an urgent need to update cybersecurity approaches to protect both traditional threats and new AI-based threats if we want—as an industry—to achieve the full potential of AI while keeping patients safe and critical assets secure.
AI-Powered Cyberattacks
Cybercriminals are exploiting AI to develop more sophisticated and scalable attacks. Traditional hacking requires significant technical expertise, but AI is making their operations faster and more effective, lowering the barrier to entry. Attackers can now automate reconnaissance, social engineering, and vulnerability scanning. AI tools can help malicious actors optimize attack vectors, analyze stolen data, and automate fraud at an unprecedented scale. Here’s an in-depth look at the worst-case scenarios AI poses to cybersecurity.
Deepfakes and Misinformation Campaigns
AI-generated synthetic media, known as deepfakes, presents a unique and concerning threat to the pharmaceutical and medical device industries. Convincing but fabricated videos or audio recordings of company representatives or patients can be rapidly created and disseminated, posing a risk of misinformation about pharmaceutical products and medical devices, reputational damage, and public distrust. Although the use of deepfakes for these purposes may be largely theoretical, the potential for financial manipulation, leading to stolen funds and operational disruptions, is a significant concern. For example, a deepfake video of a respected scientist making false claims about a drug’s side effects could have devastating consequences for both public health and the company’s stability. This combination of misinformation and financial risk poses a significant challenge to maintaining public trust and delivering vital medications.
AI-generated deepfakes have advanced to nearly perfect levels, making it increasingly difficult to distinguish between real and fake content. Cybercriminals use AI-powered voice, video, and image synthesis to execute social engineering attacks and corporate fraud. For example, in 2021, a bank in the UAE was scammed out of $35 million when criminals used voice cloning, or deepfake audio, to mimic a company director’s voice and authorize fraudulent transactions.14 And in 2025, a network of fraudsters allegedly targeted individuals in the name of Italian Defense Minister Guido Crosetto, demanding large sums of money from wealthy entrepreneurs and professionals to “pay ransoms” for journalists supposedly held captive in the Middle East.15
Data Poisoning and Prompt Injection
Data is fundamental to innovation in the pharmaceutical and medical device industry. In the GxP context, data is not only vital for making decisions, but also for ensuring compliance with regulatory expectations for data integrity, as outlined in ALCOA+ principles. (ALOCA+ is defined as attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available.)
AI-specific information security threats include data poisoning [16], which is the intentional corruption of data sets that train or inform AI systems. This threat is especially insidious in GxP environments because the effects are often subtle, cumulative, and difficult to detect in real time. By introducing small but systematic biases or errors into training or reference data, attackers can:
- Corrupt AI models used for predictive quality monitoring
- Skew clinical data analysis
- Undermine decision-support tools used for batch release, deviation classification, or pharmacovigilance signal detection
Such attacks could lead to flawed scientific conclusions, misinformed regulatory filings, or the release of out-of-specification products without any immediate sign of compromise. However, it’s also critical to recognize prompt injection as a distinct threat, where malicious data is introduced to a trained AI model at inference time, potentially manipulating its output and leading to errors in analysis.
AI-powered LLM data poisoning
Frontier large language models (LLMs) face the risk of data poisoning attacks, where adversaries inject malicious data to distort model behavior. Although the largest frontier model developers are very careful with their training data for models, this risk is more pronounced in the open-source space, internally developed models, or refined models that post-train on frontier models using additional data that may not have been as controlled.
LLMs are already being used in GxP environments to support:
- Regulatory intelligence and response drafting
- Deviation and corrective and preventive action (CAPA) trend analysis
- Automated document generation
- Support tools for quality system management
Compromised LLMs can generate incorrect or manipulated answers, influencing user decisions and, critically, impacting the validity of GxP-related data. Although many applications of these models incorporate a “human in the loop,” the danger lies in the inherent risk that humans may not possess the expertise or diligence to properly evaluate the LLM’s output. This can lead to a reliance on the LLM’s perceived correctness without sufficient inspection and independent validation, potentially resulting in the release of out-of-specification products, compromised medical device operations, or flawed research findings.
Here is how LLM data poisoning works: First, attackers insert adversarial or manipulated content into sources that may be ingested during model training or fine-tuning (e.g., public data, wikis, even internal file repositories). Then the LLM unknowingly “learns” from these poisoned inputs, internalizing incorrect, misleading, or security-compromising patterns. When deployed, the infected model generates flawed outputs that appear valid, influencing decisions or generating documentation that violates regulatory expectations.
To mitigate this risk, careful control over both training and test data is essential, and these data sets must be independently controlled. This independence allows the test data to effectively evaluate model performance and detect potential changes from poisoned training data.
Major cloud service providers (CSPs) typically have robust controls over foundational model training, making poisoning in these contexts particularly challenging. However, this remains a credible risk in situations where organizations use internal third-party data that may not be fully validated. In these cases, even minor poisoning can impact outputs in high-stakes areas such as GxP compliance.
LLM prompt injection
A prompt injection attack is when a user adds hidden instructions for the model, specifically the malicious insertion of prompts or requests into LLM-based interactive systems, resulting in unintended actions or the disclosure of sensitive information. It is similar to an SQL injection attack, where the embedded command appears to be a regular input at the start but has a malicious impact. The injected prompt can deceive the application into executing unauthorized code, exploiting vulnerabilities, and compromising overall security.
Clusmann, J., D. Ferber, I. C. Wiest, C. V. Schneider, T. J. Brinker, S. Foersch, D. Truhn, and J. N. Kather. “Prompt Injection Attacks on Vision Language Models in Oncology.” Nature Communications 16, 1239 (1February2025). doi:10.1038/s41467-024-55631-x
It is worth noting that many model-serving platforms employ authentication, API security, and input controls. However, we want to highlight those risks that may persist in less mature implementations, such as internal tools, prototypes, or misconfigured interfaces.
Even in authenticated sessions, prompt injection remains a concern when user inputs are integrated into prompts without proper sanitization. Therefore, specific input and output checks should be implemented to evaluate potentially malicious input or inappropriate output from models, especially relevant in healthcare and GxP settings, where outputs may directly impact critical decisions. A recent study17 provides a real-world example of prompt injection targeting a vision-language model used in oncology.
Numerous scientific and medical applications of LLMs have been proposed, and these could drastically change and improve medicine as we know it. In parallel to the rapid progression of LLM capabilities, there has been substantial progress in the development of multimodal vision-language models (VLMs). VLMs can interpret both images and text, further expanding the applicability of LLMs in medicine. Several VLMs have been published to date, either as healthcare-specific models (e.g., for the interpretation of pathology images or echocardiograms) or as generalist models applicable to multiple domains simultaneously, including healthcare, such as GPT-4o.
Social Engineering and Phishing Attacks
Social engineering and phishing attacks are becoming alarmingly personalized and automated through AI. AI algorithms can analyze social media profiles and publicly available data to craft highly targeted phishing emails. These messages can mimic the language, style, and even the specific concerns of the intended victim, drastically increasing their credibility and the likelihood of success. In this context, we aim to analyze how a popular frontier model can aid in crafting effective social engineering and phishing messages.
Social engineering
This refers to the psychological manipulation of individuals into performing actions or divulging confidential information. A frontier model’s ability to understand context, impressive fluency, and mimic human-like text generation could be leveraged by malicious actors. For example, consider a scenario where an attacker has gained access to a victim’s basic personal information, such as their place of employment and job title. The attacker could then use the LLM tool to generate a message that appears to come from a colleague or superior at the victim’s workplace. This message, crafted with an understanding of professional tone and language, might request sensitive information for a specific action, such as clicking on a seemingly innocuous link.18
Phishing
Phishing attacks are a prevalent form of cybercrime wherein attackers pose as trustworthy entities to extract sensitive information from unsuspecting victims. These attackers can potentially exploit frontier LLMs to make their phishing attempts significantly more effective and challenging to detect.19 Today, a simple frontier model’s prompt can instantly produce messages written in the style of your CFO. Maybe now the tipoff is that the email is better written than the ones you get from the CFO.20
Despite the increased sophistication of phishing attacks due to LLMs, the fundamental security measures required for protection remain unchanged: effective email security and link validation, secure browsers with sandboxing, and regular patching of systems to address known vulnerabilities. LLMs primarily amplify the effectiveness of phishing, but adequate protection should rely on established cybersecurity best practices.
AI-Driven Software Supply Chain Attacks
AI is now being used to identify and exploit weaknesses in software supply chains, targeting trusted vendors to distribute malware on a large scale. Instead of attacking one target at a time, AI automates large-scale compromises across multiple organizations, making mitigation exponentially harder.
Here is how it works: First, attackers use AI-powered reconnaissance tools to map out software dependencies across multiple companies. The AI then identifies poorly secured third-party libraries, APIs, and dependencies used by high-value targets. From there, attackers inject malicious AI-modified code into open-source repositories or vendor software updates. The compromised software is then deployed across hundreds or thousands of organizations, spreading AI-enhanced malware with minimal suspicion.
The growth and development of AI technology will profoundly impact software development practices, forever altering the software supply chain. Rather than eliminating cyber risks related to software supply chains, AI is more likely to reproduce and possibly displace them. That’s because AI adoption is not limited to software developers. It also powers software producers—and, conversely, malicious actors.21
How AI Is Strengthening Cybersecurity
AI-powered cyberthreats necessitate the use of AI-powered defensive systems that can match their evolving complexity and scale. The security challenges described in the previous section may exceed the capabilities of conventional security measures. The defensive capabilities presented next show that AI functions effectively as a cybersecurity ally when organizations implement it correctly.
Defending Against Deepfakes and Misinformation
AI-powered deepfake detection systems analyze facial movements, micro-expression abnormalities, and voice pattern irregularities to identify inconsistencies imperceptible to the human eye. There are solutions available on the market that use AI to detect signs of manipulation in video footage. In regulated industries such as pharmaceuticals and medical devices, these tools are highly recommended for authenticating communications from leadership or verifying sensitive public announcements.22
However, it’s equally important to establish tightly controlled processes for all external company communications, ensuring that no single person has the authority to issue them independently. Several other tools offer AI-powered solutions that detect deepfakes by analyzing visual artifacts and inconsistencies in facial movements, enabling organizations to verify the authenticity of media before making critical decisions.23, 24
In addition to employing these technological tools, it is crucial to establish robust processes for safeguarding critical processes, such as approving financial transactions. To safeguard against fraudulent activities, payment releases should never be authorized by a single individual, regardless of their position. To prevent fraud, it should not be possible for a single deepfake, even one impersonating a CFO, to trigger the release of payments. Implementing simple procedural controls, such as mandatory dual authorization and segregation of duties, can effectively mitigate the risk of deepfakes being used to manipulate financial systems.
Data Poisoning and Prompt Injection
The training of modern AI systems now includes adversarial resilience as a defense mechanism against data poisoning attacks. Organizations test their models through extensive simulation-based protocols that detect poisoning attacks, thereby improving model capabilities for identifying abnormal training data patterns.
Organizations use AI-powered data validation systems that create normal data pattern baselines to identify potential tampering through anomaly flagging. These systems employ the following techniques.
- AI models receive potential attack patterns through adversarial training techniques during their development phase
- Data provenance tracking systems track the source and complete integrity of training data
- Continuous monitoring systems detect unusual modifications to datasets
In GxP environments, companies are adopting data provenance tools and AI-powered integrity validation systems to monitor regulated data sets. Anomaly detection systems in clinical trials identify subtle changes in data submissions that may indicate attempts at data poisoning or manipulation efforts.
The implementation of AI tools for auditing training datasets by pharmaceutical companies tracks the origins and transformation steps of data to enhance their ALCOA+ compliance.
Hardening LLMs against prompt injection and data poisoning
AI developers implement instruction-following filters, output monitoring, and reinforcement learning from human feedback (RLHF) to build LLMs with built-in safeguards that minimize prompt injection and unintended behavior. Organizations already use next-generation firewalls as intermediary systems to preprocess and sanitize user inputs before they reach the LLM. Real-time natural language understanding solutions filter or rephrase potentially malicious prompts as part of their functionality.
Enterprise-tier frontier models already use multiple layers of behavioral guardrails and content moderation to prevent malicious outputs in compliance-sensitive environments. Companies can implement hybrid AI governance models for safe LLM integration in regulated GxP settings that combine technical validation (e.g., robustness testing, audit trails of model outputs), organizational controls (e.g., restricted usage, dual review processes), and external third-party validation of model behavior on GxP-related tasks.
AI-enhanced auditing tools work in conjunction with these controls to detect model behaviors, such as hallucinations and bias, enabling companies to prevent these issues from impacting regulatory documentation and patient safety decisions.
AI-Driven Phishing and Social Engineering Defense
AI-powered email security platforms use behavioral analytics and natural language processing (NLP) to identify suspicious communications. These tools analyze sender reputation, writing style, timing patterns, and user behavior to flag or block phishing attempts, particularly those created using frontier models. These systems can detect AI-generated phishing attempts by identifying subtle linguistic patterns and contextual inconsistencies that might indicate fraudulent intent.
By continuously learning from new attack patterns, these defenses evolve with threat capabilities, ensuring protection against even AI-generated phishing attempts. AI is revolutionizing cybersecurity, providing sophisticated tools to combat increasingly sophisticated threats.25
Mitigating Software Supply Chain Attacks with AI
Organizations use AI to protect against AI-assisted software supply chain attacks through the following measures:
- Automated software composition analysis (SCA): AI tools examine codebases and dependencies to detect outdated or vulnerable libraries.
- Anomaly detection in CI/CD pipelines: Machine learning models track unusual code modifications and deployment activities, often detecting malicious code before it becomes part of the software.
- Vendor risk profiling: AI systems perform ongoing digital footprint assessments of suppliers to detect suspicious activities or modifications in security policies.
However, preventive measures are also critical. To further strengthen security, dependency checking and secure code reviews should be integrated into the code development process within integrated development environments (IDEs). Code should not be allowed to be checked into a repository unless it meets a defined quality bar. It is also important to note that AI can assist in code development and is very effective at code analysis, contributing to efficiency and security.
Encouraging Trends
To learn how AI is transforming the defensive side of cybersecurity, we checked the predictions presented at the Gartner Security & Risk Management Summit, 18–19 March 2024, in Sydney, Australia.2
By 2026, enterprises combining generative AI with an integrated platforms-based architecture in security behavior and culture programs (SBCP) will experience 40% fewer employee-driven cybersecurity incidents. Organizations increasingly focus on personalized engagement as an essential component of an effective SBCP.
Gen AI has the potential to generate hyperpersonalized content and training materials that take into context an employee’s unique attributes. According to Gartner, this will increase the likelihood of employees adopting more secure behaviors in their day-to-day work, resulting in fewer cybersecurity incidents.
By 2028, the adoption of generative AI will collapse the skills gap, removing the need for specialized education from 50% of entry-level cybersecurity positions. Generative AI augments will change how organizations hire and teach cybersecurity workers who are looking for the right aptitude and education.
The CyberEdge 2024 Cyber Defence Report20 confirms a couple of encouraging trends, where among the top five insights for 2024, we extracted two: confidence is building, and AI is taking center stage.
CyberEdge Group, LLC. “2024 Cyberthreat Defense Report.” https://cyberedgegroup.com/cdr/
Confidence is Building
Several long-running trends have reversed in the last year or two. Survey data contains multiple indications that security professionals are becoming more confident about their ability to reduce the impact of cyberattacks. The percentage of organizations compromised by cyberattacks fell substantially from the previous survey.
AI is Taking Center Stage
AI technologies are being incorporated into a wide range of security solutions. They promise to increase the power of security professionals to detect and block attacks, respond to incidents, and find and remediate vulnerabilities. Security teams are looking at AI as a force multiplier that will make them more productive and effective. A survey in the CyberEdge 2024 Cyber Defence Report20 asked “Cybersecurity industry analysts predict that advancements in AI, including ML and generative AI (e.g., ChatGPT), will benefit IT security teams. Which of the following positive outcomes of AI do you predict will impact your organization the most? (Select up to three.)” Figure 3 shows the results of the survey.
Challenges and Considerations
Although AI offers significant potential for enhancing cybersecurity in the pharmaceutical and medical device industry, its implementation is challenging. Several key areas must be carefully considered to ensure responsible and effective deployment.26
CyberEdge Group, LLC. “2024 Cyberthreat Defense Report.” https://cyberedgegroup.com/cdr/
Ethical Implications of Using AI in Cybersecurity
AI algorithms are trained on data, and if that data reflects existing biases, the AI system may perpetuate or even amplify those biases. The potential for misuse of AI-powered cybersecurity tools is a concern. These tools could be used for surveillance, profiling, or offensive cyber operations. Establishing clear ethical guidelines and developing mechanisms to ensure fairness, transparency, and accountability in the use of AI in cybersecurity is crucial. Going forward, legal executives can take a leading role in strategic decision-making related to any use of generative AI within the enterprise. They are likely to assume responsibilities and accountabilities related to the development of ethical and legal frameworks.27
Data Privacy and Security Concerns
AI systems require vast data to train and operate effectively. This raises concerns about data privacy and security, particularly within the highly regulated industry. Pharmaceutical and medical device companies must ensure that the data used to train and operate AI systems is collected, stored, and processed in compliance with relevant data privacy regulations, such as GDPR and HIPAA. Furthermore, robust security measures must be in place to protect this data from unauthorized access or breaches. Using anonymization and differential privacy techniques can help mitigate some of these risks. Provisions regarding confidentiality and data privacy are likely to be a key focus of any contractual framework for the provision of generative AI services.27
Bias and Dataset Quality
The accuracy of cybersecurity AI tools in distinguishing between legitimate and malicious activity depends on the quality of the training data. In pharmaceutical and medical device environments, biased or nonrepresentative data can lead to false positives or missed detections, potentially exposing critical systems—such as manufacturing execution systems (MES), laboratory information management systems (LIMS), or clinical trial platforms—to undetected threats. It is essential to validate training data sets to ensure reliability across regulated use cases.
Innovation Pressure vs. Cyber Risk Management
Pharmaceutical organizations face substantial pressure to implement advanced AI cybersecurity tools as evolving threats target their intellectual property, clinical trials data, and automated manufacturing systems. Rushed implementation without proper risk analysis, validation, and change control can introduce new vulnerabilities. A risk-based adoption strategy that balances cybersecurity needs with regulatory compliance and system availability requirements is essential.
Regulatory and Ethical Alignment of Security Tools
AI deployment in cybersecurity must meet both IT compliance standards and general AI governance standards, as outlined in the EU AI Act.28 Pharmaceutical companies must ensure that these tools are effective in mitigating cyber risks and are transparent, auditable, and ethically sound, particularly when they influence decision-making related to incident response or access control.
AI Requires an Integrated Assurance Approach
AI presents significant challenges across key cybersecurity domains and GxP compliance requirements, including data privacy and protection, system security, operational resilience, and risk management, as mandated by regulations and standards in the pharmaceutical and medical device industries. These are essential for driving successful innovation, enhancing managerial decision-making, and gaining a continuous competitive edge. Nevertheless, the rate and scale of the impact are unprecedented. From an organizational and management perspective, forecasting its development and making a long-term strategy is hardly possible, yet is still necessary and urgent.
Beyond being a legal imperative in many jurisdictions—even with slightly different approaches, whose discrepancies may potentially undermine the overall effectiveness—we face a global issue here: mitigating risks and harnessing the potential of emerging technologies can profit from significant opportunities. Today, most pharmaceutical organizations have a well-established GxP compliance program with varying maturity levels, encompassing procedures, policies, work instructions, and industry best practices. Compliance offices are already burdened by the growing complexity and volume of regulatory requirements they must continuously manage.
Nonetheless, AI compliance is starting to take shape. In the United States, on 30 October 2023, President Biden issued Executive Order 14110, establishing national priorities for AI governance with a focus on safety, security, innovation, and equity.29 However, upon taking office on 20 January 2025, President Donald Trump revoked this order, citing the need to eliminate policies that could hinder the development of AI innovation.30 Subsequently, on 23 January 2025, President Trump signed an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence,” aiming to foster AI development free from ideological bias and thereby contributing to the United States’ global dominance in AI development.19
The European Union’s AI Act28 effective 1 August 2024, aims to introduce a common regulatory framework based on risk categories. In 2023, the European Parliament amended its initial proposal to include generative AI, which must comply with specific transparency requirements. These requirements involve registering the foundation model in a database and developing and retaining technical documentation.
Conversely, we have “ISO world,” the new ISO 42001:2023,16 the well-established ISO 27001:202231 and ISO 22301:2019.32 They share a structured, risk-based approach to governance, resilience, and security. Together, they form a robust foundation for adopting AI in an organization, ensuring trustworthiness, security, and continuity.
Sometimes, it seems we are talking of two worlds apart (regulations vs. standards). But the very aim is and should be the same. The first objective should be to integrate approaches into our organizations. To manage and control processes and technologies, such as AI, which are evolving at a rate never experienced before, we must adopt a more operational, practical, and, above all, interdisciplinary and interdepartmental approach. Organizations should establish a cross-functional governance board to oversee the development and adoption of AI, using a comprehensive framework for responsible AI.33, 34
It’s sufficient to examine recent developments, such as the EU General-Purpose AI Code of Practice (14 November 2024), which addresses providers of general-purpose AI models and those with systemic risks, including frontier models. Its central themes, for example, are primarily focused on organizational and IT governance, based first on the evaluation and technical mitigation of risks rather than the other way around.
An integrated approach is necessary, as the obligation for management awareness is rising at all levels. On this point, it is essential to consider the obligations related to AI literacy and awareness. Article 4 of the EU AI Act came into effect on 2 February 2025. It requires both providers and deployers (i.e., users) of AI systems to “take measures to ensure, to the extent possible, a sufficient level of AI literacy among their personnel, as well as any other individuals involved in the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, as well as the context in which AI systems are to be used, and considering the individuals or groups of individuals on whom AI systems are to be applied.”28
The EU AI Act’s applicability also concerns Title II of the Regulation. Specifically, Article 5 lists the prohibited AI practices.35 Therefore, it is necessary to conduct a comprehensive review and assessment of the AI systems used within the company to ensure that no unlawful situations exist. This is a draft communication document prepared by the European Commission, intended to provide interpretative guidance on the prohibited AI practices listed in the AI Act (formally adopted as Regulation (EU) 2024/1689). In plain terms, this document is:
- An official set of guidelines (not legally binding, but highly authoritative)
- Designed to help stakeholders, regulators, and companies understand which AI practices are prohibited under the EU’s AI Act
- Focused explicitly on Article 5 of the AI Act, which lists prohibited AI systems considered a clear threat to safety, human rights, or European Union values
Therefore, an integrated assurance approach is the only consistent answer to AI. ISO 42001:2023 can be part of the solution, as it provides guidance to help organizations responsibly perform their role regarding AI systems used for automatic decision-making.
Leveraging AI-Powered Tools for Cybersecurity
Standard infrastructure controls—including identity and access management, encryption, logging and monitoring, and network segmentation—create a solid foundation for cybersecurity. However, AI-powered threat actors use configuration gaps, integration weaknesses, and real-time response shortcomings to attack systems, which demonstrates why AI-driven security tools must complement traditional defenses.
The most significant challenges in modern cyber defense involve collecting and making sense of vast amounts of data. AI excels at addressing these challenges by rapidly identifying patterns in large data sets, summarizing information, and interpreting complex data. Although automation is possible with traditional code, AI offers the ability to take nuanced actions based on data analysis, much like a human analyst, but at a greater speed. Furthermore, AI facilitates translation between different forms of information, such as converting natural language into programmatic detections, code into threat analysis, and actions into notifications or predicted responses.
Pharmaceutical and medical device companies should adopt a proactive and strategic approach to leverage AI for cybersecurity. First, they should conduct a thorough risk assessment to identify their most critical vulnerabilities and prioritize their cybersecurity investments. Second, they should develop a clear AI strategy aligning with their business objectives and cybersecurity goals. This strategy should include identifying specific use cases for AI in cybersecurity, selecting appropriate AI tools and technologies, and developing a roadmap for implementation. Third, companies should invest in building internal expertise in AI and cybersecurity through training programs, hiring specialized talent, and partnering with external experts. The following sections look at how AI is enhancing cybersecurity capabilities in various areas.
Enhanced Threat Detection and Prevention
The large volume of data generated by pharmaceutical and medical device organizations can make it challenging for human analysts to identify all potential security threats. AI shows a better ability to analyze big data sets and find hidden patterns and anomalies that are not easily seen by humans.
Organizations can use ML algorithms to detect malicious behavior, predict upcoming attacks, and block them before they can cause damage. Companies gain a better security posture by taking a proactive stance in detecting and preventing threats. Companies achieve an enhanced security posture through their proactive approach to threat detection and prevention. AI systems perform network traffic analysis to detect abnormal communication patterns that could signal data exfiltration attempts and simultaneously monitor user activity to identify suspicious logins and unauthorized access attempts.
Detecting advanced threats
AI can detect unknown threats by identifying deviations from standard network or user behavior patterns. It provides adequate protection against zero-day attacks and advanced persistent threats (APTs) because signature-based defenses would otherwise fail to detect these threats.
Identifying indicators of compromise
AI models track behavioral patterns to detect indicators of compromise (IoCs), which include irregular login times, data exfiltration attempts, and unauthorized access to critical systems.
AI-Enhanced Security Tools
Traditional security systems face challenges because they depend on static rules and signatures, yet fail to adapt to these changes. AI-powered security systems demonstrate the ability to learn while adapting to changing threats. Security systems can learn through ML algorithms, which enable them to detect new attack patterns and identify vulnerabilities before making the necessary security adjustments.
Security systems become more dynamic and resilient through adaptability, which enables them to defend against highly sophisticated attacks. AI-powered intrusion detection systems use behavioral learning to detect new malware types, regardless of modifications made to evade signature-based detection.
The integration of generative AI into cybersecurity platforms enables security professionals to use advanced tools for threat investigation, adversary analysis, and incident response automation.
Cyberthreat investigation
Threat intelligence involves collecting, analyzing, and disseminating information about potential security threats to help organizations improve their security posture and protect against cyberattacks. AI-driven threat intelligence platforms aggregate and analyze global threat data to help security teams understand attack patterns and predict future threats. AI can correlate millions of data points across various attack surfaces in seconds.
For example, a frontier model can help with threat intelligence by processing vast amounts of data to identify potential security threats and generate actionable intelligence. It can also automatically generate threat intelligence reports based on various data sources, including social media, news articles, and other online sources. By processing and analyzing this data, a frontier model can identify potential threats, assess their risk level, and recommend mitigating them.18
Cyber adversary research
AI provides automated threat-hunting capabilities, allowing analysts to track and profile cybercriminals; understand their tactics, techniques, and procedures (TTPs); and predict their next moves. AI also aids in monitoring the dark web, identifying leaked credentials and discussions related to potential attacks. Frontier models can reduce the workload of security operations center (SOC) analysts by automatically analyzing cybersecurity incidents. LLMs can also help analysts make strategic recommendations to support both instant and long-term defense measures.18
AI-Powered Vulnerability Management
The complex IT infrastructure of pharmaceutical companies, which includes multiple systems and applications, presents challenges in performing effective vulnerability management. AI technology enables faster and more efficient vulnerability prioritization and remediation. AI algorithms process vulnerability information to determine the risk levels of each vulnerability before determining which remediation efforts should take priority.
Security teams can begin with the most critical vulnerabilities, as this approach reduces their attack surface and decreases the likelihood of exploitation. AI systems perform vulnerability scanning and patching automatically, improving efficiency and shortening the time needed to fix security weaknesses. AI-powered tools perform vulnerability detection across applications, codebases, and network infrastructure to detect weaknesses before cyber criminals can exploit them.
AI-Powered Security Automation
Security tasks—including vulnerability patching, incident response, and security monitoring—require repetitive work that consumes much time. AI technology enables the automation of these tasks, liberating human staff and enhancing operational efficiency. AI-powered systems use their capabilities to automate incident response functions, enabling fast containment and attack mitigation before threats can expand. AI-powered security monitoring systems track system logs and alerts in real time to identify significant events while decreasing the workload of security analysts. The automation system conserves time and financial resources and decreases the chances of human mistakes, resulting in better security management practices.
Security event monitoring
Real-time analysis of logs through AI-powered security information and event management (SIEM) systems enables the identification of hidden security event correlations that human analysts typically miss. Through its detection capabilities, AI identifies minor signs of malicious activity.
Incident response automation
Security orchestration, automation, and response (SOAR) platforms, driven by AI technology, enable the containment of security incidents through automation, which shortens response times and reduces the impact of damage. AI implements endpoint isolation for compromised systems while revoking user access and applying security patches to prevent threat escalation.
Conclusion
The cybersecurity landscape will evolve into an ongoing battle of technological advancements between defensive AI systems that must adapt to new AI-powered threats. Organizations that grasp both AI defensive capabilities and existing threats will succeed in safeguarding their essential assets and operational activities throughout this dynamic environment.
As a “force multiplier” for cybersecurity, AI is not a replacement for human cybersecurity professionals. Instead, it is an enabler that enhances detection capabilities, automates incident response, and strengthens proactive defense strategies. By integrating AI into security operations, organizations can more effectively protect against emerging threats, optimize resource allocation, and equip their security teams with advanced tools and capabilities.
The future of cybersecurity is not human versus machine, but human–AI collaboration against AI-enhanced threats. Furthermore, the concept of autonomous agents will become crucial, with fleets of security agents operating within the infrastructure, performing tasks in response to other agents, creating complex fabrics of coordinated defense mechanisms.
This article has explored the complex and multifaceted role of AI in the cybersecurity landscape of the pharmaceutical and medical device industries. We have examined how malicious actors can weaponize AI to create more sophisticated cyberattacks, ranging from automated phishing to data poisoning and deepfakes. We have also highlighted the immense potential of AI-driven cybersecurity solutions to enhance threat detection and automate security tasks, thus improving data protection.
Pharmaceutical and medical device companies must invest in AI-driven cybersecurity solutions to protect assets, maintain competitiveness, ensure the safety and quality of their products, and, critically, prevent operational disruptions from attacks on OT and IT systems. Operational attacks leading to ransom requests becoming the norm make this investment essential to secure the future of pharmaceutical innovation, patient care, and the uninterrupted supply of vital medication.