Integration of Artificial Intelligence in the Pharmaceutical Industry: A US FDA Perspective


March 2026

This episode features insights from Tina Kiang, PhD, Director of the Division of Regulations and Guidance in the Office of Pharmaceutical Quality (OPQ)/Office of Policy for Pharmaceutical Quality (OPPQ) at the US Food and Drug Administration (US FDA). In a discussion facilitated by David Churchward, Head of Operations Quality Compliance and External Affairs at AstraZeneca, Kiang examines AI at the intersection of regulatory guidance and pharmaceutical innovation, sharing how AI has the potential to reshape the drug lifecycle, the US FDA’s current posture and guidance trajectory, how US FDA is currently using AI, and more.

  • Guest

    Tina Kiang, PhD
    Director of the Division of Regulations and Guidance in OPQ/OPPQ
    FDA

  • Moderator

    David Churchward
    Head of Operations Quality Compliance and External Affairs
    AstraZeneca
  • References

    1. FDA/EMA Guiding Principles of Good AI Practice in Drug Development
    2. FDA Artificial Intelligence for Drug Development page
    3. CDER AI Guidance: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products
    4. CDER Guidance Agenda
  • Transcript

    Download Transcript 


     

    1

    00:00:00,080 --> 00:00:10,080

    Welcome to the ISPE podcast, Shaping the Future of Pharma, where ISPE supports you on your journey, fueling innovation, sharing insights, thought

    2

    00:00:10,080 --> 00:00:14,240

    leadership, and empowering a global community to reimagine what's possible.

    3

    00:00:15,335 --> 00:00:20,614

    Hello, and welcome to the ISPE podcast, Shaping the Future of Pharma.

    4

    00:00:21,094 --> 00:00:23,094

    I'm Bob Chew, your host.

    5

    00:00:23,255 --> 00:00:33,670

    And today, we have another episode where we'll be sharing the latest insights and thought leadership on manufacturing, technology, supply chains, and regulatory

    6

    00:00:33,670 --> 00:00:36,949

    trends impacting the pharmaceutical industry.

    7

    00:00:37,590 --> 00:00:44,905

    You will hear directly from the innovators, experts, and professionals driving progress and shaping the future.

    8

    00:00:45,225 --> 00:00:46,905

    Thank you again for joining us.

    9

    00:00:46,984 --> 00:00:49,625

    And now let's dive into this episode.

    10

    00:00:50,905 --> 00:00:58,184

    Our topic today is integration of artificial intelligence in the pharmaceutical industry: an FDA perspective.

    11

    00:00:58,759 --> 00:01:03,559

    Our guests today are David Churchward and Doctor Tina Kiang.

    12

    00:01:04,119 --> 00:01:09,959

    David is head of operations, quality, compliance, and external affairs at AstraZeneca.

    13

    00:01:10,855 --> 00:01:20,935

    Previously, he spent seventeen years at the UK MHRA, where he led the inspectorate's expert circle, engaging with global

    14

    00:01:20,935 --> 00:01:27,120

    regulators and industry in the assessment, regulation, and adoption of new technologies.

    15

    00:01:27,840 --> 00:01:28,240

    Doctor.

    16

    00:01:28,240 --> 00:01:35,680

    Kiang is Director of the Division of Regulations and Guidance within the Office of Pharmaceutical Quality, CDER, FDA.

    17

    00:01:36,115 --> 00:01:46,594

    She began her FDA career as a lead reviewer at CDRH and over the years has been involved in assessing new technologies, especially software.

    18

    00:01:47,234 --> 00:01:52,650

    I will now turn the microphone over to David, who will moderate this discussion with Doctor

    19

    00:01:52,650 --> 00:01:53,370

    Kiang.

    20

    00:01:53,689 --> 00:01:55,209

    Hello, and welcome.

    21

    00:01:55,930 --> 00:02:06,329

    One of the great benefits of our ISPE community is the ability to share conversations and perspectives with other professionals across a wide range of roles and career

    22

    00:02:06,329 --> 00:02:06,890

    experiences.

    23

    00:02:07,694 --> 00:02:13,135

    And a really important part of that is the dialogue between industry and regulators.

    24

    00:02:13,854 --> 00:02:24,360

    So I'm delighted to welcome Doctor Tina Kiang to this podcast episode and for the opportunity to explore one of the most significant topics of recent times,

    25

    00:02:24,599 --> 00:02:26,680

    that being artificial intelligence.

    26

    00:02:27,479 --> 00:02:27,879

    Doctor.

    27

    00:02:27,879 --> 00:02:34,280

    Kiang joins us from FDA, where she's held a wide range of roles during her twenty years with the agency.

    28

    00:02:35,145 --> 00:02:42,745

    So, Tina, I think your career overview and its relevance to your expertise in AI is best described in your own words.

    29

    00:02:42,745 --> 00:02:53,020

    So, please, could you tell us a little bit about your career and what brought you to your current position as a thought leader influencing FDA's AI and machine

    30

    00:02:53,020 --> 00:02:53,980

    learning efforts?

    31

    00:02:54,060 --> 00:02:56,939

    Thank you, and thank you for having me on this podcast.

    32

    00:02:57,419 --> 00:03:02,860

    I started my career at FDA almost twenty one years ago now.

    33

    00:03:03,235 --> 00:03:05,314

    I was a reviewer in medical devices.

    34

    00:03:05,555 --> 00:03:14,115

    I started my career reviewing ophthalmic raw materials and neurological devices.

    35

    00:03:14,675 --> 00:03:19,770

    And, you know, throughout my career, I've always sought different opportunities.

    36

    00:03:20,489 --> 00:03:30,694

    So when the time came for me to move into and advance in leadership positions, I went to what was called the Division of

    37

    00:03:30,694 --> 00:03:34,455

    Anesthesiology, Respiratory, General Hospital, Infection Control, and Dental Devices.

    38

    00:03:34,455 --> 00:03:36,135

    So a very big mouthful.

    39

    00:03:36,854 --> 00:03:40,134

    That was my first exposure to software products.

    40

    00:03:40,134 --> 00:03:45,840

    And so, you know, there were many, many products which, I jokingly said, had a battery or plugged into a wall.

    41

    00:03:46,159 --> 00:03:54,000

    And, therefore, I gained exposure to software products and learned more about how to regulate software.

    42

    00:03:54,000 --> 00:04:04,305

    And so I took those many years of experience with me when I came over to CDER. You know, there aren't that many people with device experience or software experience here in CDER,

    43

    00:04:04,305 --> 00:04:07,425

    and certainly not in the area of policymaking.

    44

    00:04:08,064 --> 00:04:18,199

    And so when documents or products came across our desk regarding software or, you

    45

    00:04:18,199 --> 00:04:23,479

    know, now artificial intelligence, there was a very small number of people who could look at it.

    46

    00:04:23,479 --> 00:04:33,295

    And so myself and a couple of other people in our group would look at these documents or these policy statements.

    47

    00:04:33,455 --> 00:04:35,775

    And so it became a natural fit.

    48

    00:04:35,855 --> 00:04:39,055

    You know, I knew a little bit about software and validation.

    49

    00:04:39,055 --> 00:04:47,170

    And once you learn about artificial intelligence, you extend those criteria a little bit more.

    50

    00:04:47,170 --> 00:04:53,649

    It's slightly different, but what is artificial intelligence but very advanced software?

    51

    00:04:55,455 --> 00:05:02,175

    And so those kinds of ideas and the knowledge that I had were transferable.

    52

    00:05:02,175 --> 00:05:08,895

    And so now I'm part of the CDER AI Council's policy and review subcommittee.

    53

    00:05:09,579 --> 00:05:19,740

    You know, we have a subcommittee in OPQ, which is my home office, which talks about not only policy with regard to industry, but also how to use AI

    54

    00:05:19,740 --> 00:05:20,939

    within the agency.

    55

    00:05:20,939 --> 00:05:27,514

    So it's been an exciting journey here to artificial intelligence.

    56

    00:05:28,875 --> 00:05:29,595

    That's great.

    57

    00:05:29,595 --> 00:05:38,314

    And what a great opportunity for us to tap into some of that experience as we look towards the use of AI in our pharmaceutical industry.

    58

    00:05:39,240 --> 00:05:43,959

    So let's dive into the questions and see what comes out of the discussion.

    59

    00:05:44,120 --> 00:05:54,345

    So if we're looking ahead five years, how do you see the integration of advanced AI techniques transforming drug development, regulatory

    60

    00:05:54,345 --> 00:05:57,225

    submissions, and also manufacturing oversight?

    61

    00:05:58,345 --> 00:06:08,389

    So I think there are many different ways that AI could be used, some of which would be actively regulated by FDA, and some of which are within

    62

    00:06:08,389 --> 00:06:12,149

    industry's own control systems to moderate and regulate.

    63

    00:06:12,550 --> 00:06:23,485

    And so, again, in drug development and advanced manufacturing in particular: molecule

    64

    00:06:23,485 --> 00:06:33,724

    selection, designing clinical trials, and analysis of clinical data, for example, on the drug development and drug testing side; and in advanced manufacturing,

    65

    00:06:33,724 --> 00:06:42,139

    being able to control process parameters in an effective way using real-time data, for example.

    66

    00:06:42,460 --> 00:06:50,694

    And so I think there are many aspects and different parts of the drug life cycle where it can be used.

    67

    00:06:50,694 --> 00:07:00,295

    It can effectively be used, for example, in CAPA investigations if you see a deviation in manufacturing or a deviation in the final drug product.

    68

    00:07:00,295 --> 00:07:08,860

    Looking back at data, artificial intelligence could be used to analyze the data to see what those failure points are.

    69

    00:07:08,939 --> 00:07:19,214

    Because quite frankly, given the right boundary conditions, and this is always the important part, given the right boundary conditions and asked the right questions, the AI

    70

    00:07:19,214 --> 00:07:22,975

    can analyze the data far quicker than a human being can.

    71

    00:07:23,454 --> 00:07:23,935

    You know?

    72

    00:07:23,935 --> 00:07:26,814

    But ultimately, you know, a human being has to check it.

    73

    00:07:26,814 --> 00:07:36,649

    A human being has to make sure the boundary conditions are set correctly, that the parameters are set correctly, and that, you know, whatever comes out makes sense within the context of what you're using.

    74

    00:07:36,889 --> 00:07:39,610

    So I think there are great opportunities.

    75

    00:07:39,689 --> 00:07:43,129

    With opportunities, there are risks, and you have to control for those risks.

    76

    00:07:43,435 --> 00:07:53,754

    But I don't think that we should shy away from those opportunities, in the same way that we haven't shied away from adopting or using newer technologies in,

    77

    00:07:53,754 --> 00:07:57,514

    for example, manufacturing or molecule selection in the past.

    78

    00:07:57,514 --> 00:07:59,769

    We're starting to use process models.

    79

    00:07:59,769 --> 00:08:03,209

    We've used computational modeling in order to do molecule selection.

    80

    00:08:03,289 --> 00:08:06,810

    I think of this as the next logical step.

    81

    00:08:06,970 --> 00:08:14,444

    But use caution, and understand that you as the human being have to set the boundary conditions for the use.

    82

    00:08:15,245 --> 00:08:16,045

    That's great.

    83

    00:08:16,045 --> 00:08:18,525

    I mean, those opportunities are really exciting.

    84

    00:08:18,605 --> 00:08:27,110

    I guess, with any new technology, there's always the question of guardrails that protect the patient without impeding that innovation.

    85

    00:08:27,110 --> 00:08:29,590

    Some of that maybe you touched on a little bit there.

    86

    00:08:30,230 --> 00:08:40,470

    What do you think are the opportunities to update regulatory frameworks to keep pace with the rapid AI innovations that we're seeing in the pharmaceutical sector?

    87

    00:08:41,164 --> 00:08:41,725

    Yeah.

    88

    00:08:41,725 --> 00:08:44,924

    So, you know, I'm of the opinion, and this may change.

    89

    00:08:44,924 --> 00:08:47,644

    This may change depending on, you know, what we see.

    90

    00:08:47,644 --> 00:08:57,690

    But currently, I'm of the opinion that the regulatory framework, meaning statute and regulation, is flexible enough to allow for the integration

    91

    00:08:57,690 --> 00:08:58,409

    of AI.

    92

    00:08:58,409 --> 00:09:08,644

    You know, we went from human beings looking at and inspecting products on a process line, for example, to machinery with a human intervening, to

    93

    00:09:08,644 --> 00:09:10,804

    now software with human checks.

    94

    00:09:10,804 --> 00:09:16,085

    And then software without human intervention that moves product across the line.

    95

    00:09:16,085 --> 00:09:21,365

    And now AI, again, as I stated before, which is just a much more sophisticated type of software.

    96

    00:09:21,649 --> 00:09:32,129

    And so I think because we have been able to use our quality system regulations, for example, the 210 and 211 CGMP

    97

    00:09:32,129 --> 00:09:32,690

    regulations.

    98

    00:09:32,690 --> 00:09:33,009

    I'm sorry.

    99

    00:09:33,009 --> 00:09:34,529

    I said quality system regulations.

    100

    00:09:34,529 --> 00:09:37,034

    That was old device terminology.

    101

    00:09:37,034 --> 00:09:47,115

    The CGMP regulations have accommodated these incremental advances throughout the years, so this should be

    102

    00:09:47,115 --> 00:09:48,394

    able to fit in.

    103

    00:09:48,769 --> 00:09:51,730

    You know, of course, we'll need additional guidance.

    104

    00:09:52,049 --> 00:10:02,289

    You know, FDA has already published a guidance on the use of AI in drug development, in January 2025.

    105

    00:10:04,165 --> 00:10:14,404

    CDER has published its guidance agenda, and at the top of the CMC quality list is an AI/ML

    106

    00:10:14,404 --> 00:10:17,205

    guidance for pharmaceutical quality and manufacturing.

    107

    00:10:17,205 --> 00:10:27,299

    And when that publishes, it will hopefully provide some frameworks on how to think about integration into pharmaceutical manufacturing and advanced

    108

    00:10:27,299 --> 00:10:28,660

    manufacturing in general.

    109

    00:10:28,899 --> 00:10:33,315

    So, you know, we always have to look for opportunities for convergence.

    110

    00:10:33,315 --> 00:10:42,274

    We have to have conversations with our fellow regulators across the globe to make sure that there's, you know, as much of a singular voice as possible.

    111

    00:10:42,274 --> 00:10:52,460

    You know, there are the points to consider, or general guiding principles, that we published with EMA in January.

    112

    00:10:52,460 --> 00:10:55,820

    You know, that I think was received really well.

    113

    00:10:55,820 --> 00:11:05,654

    And so, I think, again, we have to look across, not just within the scope of what the FDA regulates.

    114

    00:11:05,654 --> 00:11:15,779

    You know, we have to use our regulatory partners to make sure that we are giving the industry a consistent message on how to integrate

    115

    00:11:15,779 --> 00:11:17,059

    this new technology.

    116

    00:11:17,299 --> 00:11:17,620

    Yeah.

    117

    00:11:17,620 --> 00:11:19,059

    That sounds great.

    118

    00:11:19,059 --> 00:11:29,455

    I mean, with global manufacturing supply chains, that convergence of regulatory expectations is really important, and I totally recognize that it takes time

    119

    00:11:29,455 --> 00:11:31,054

    to build some of that confidence.

    120

    00:11:31,774 --> 00:11:35,615

    And the obvious challenge in the AI space is speed of development.

    121

    00:11:36,095 --> 00:11:46,339

    So I really wish you and your colleagues success in those discussions towards alignment, because for industry, and for getting some of these technologies to patients, that alignment really matters.

    122

    00:11:48,740 --> 00:11:58,584

    We've heard a bit about the different phases of the drug life cycle where AI can have relevance. Perhaps we could explore that a little bit.

    123

    00:11:59,225 --> 00:12:09,544

    So thinking about the regulatory filing, what are the most common challenges that reviewers might encounter when evaluating AI-driven evidence or

    124

    00:12:09,544 --> 00:12:11,464

    models in those CMC submissions?

    125

    00:12:11,840 --> 00:12:12,399

    Right.

    126

    00:12:12,480 --> 00:12:19,519

    I think, from that point of view, we've been dealing with process models for years in the CMC space.

    127

    00:12:19,519 --> 00:12:29,615

    There was a paper published a few years ago by FDA, by our members in OPQ, along

    128

    00:12:29,615 --> 00:12:34,014

    with some partners in EMA, about how to think about process models.

    129

    00:12:34,014 --> 00:12:44,490

    And in that process model framework, there was in fact an AI example, where it goes through how to think about credibility

    130

    00:12:44,490 --> 00:12:48,410

    assessment and risk assessment of AI models.

    131

    00:12:48,490 --> 00:12:53,529

    I think, as with anything new, it's challenging.

    132

    00:12:53,929 --> 00:12:54,250

    You know?

    133

    00:12:55,274 --> 00:12:58,954

    There are always challenges of: what do we need to look at?

    134

    00:12:58,954 --> 00:13:00,634

    How deeply do we need to look?

    135

    00:13:00,634 --> 00:13:11,000

    But when we're looking at the credibility framework and the risk framework, I think we need to make sure that products are well validated, that they're fit for use, that they're correct

    136

    00:13:11,000 --> 00:13:15,879

    for the context of use, that the risk is appropriate for that part of the system.

    137

    00:13:16,759 --> 00:13:24,075

    And, when looking at it, that there's enough data to provide that assurance that those risks are properly controlled.

    138

    00:13:25,274 --> 00:13:35,595

    I'm not on the assessment side, so I couldn't comment on how much or how little they're receiving in terms of what's come in in an application

    139

    00:13:35,159 --> 00:13:39,000

    versus what's come in in, say, a pre-submission meeting.

    140

    00:13:39,079 --> 00:13:49,754

    But we highly encourage anyone who wants to file, or wants to integrate an AI model into their manufacturing, to come to our

    141

    00:13:49,754 --> 00:13:54,235

    ETP, the Emerging Technology Program, and have those conversations.

    142

    00:13:54,235 --> 00:14:04,235

    Or if it's on the CBER side, the CATT program, to have those conversations early and often

    143

    00:14:04,379 --> 00:14:10,620

    to make sure that, you know, everyone is on the right track and that we're thinking about the integration and the risk in the right way.

    144

    00:14:12,539 --> 00:14:12,940

    Okay.

    145

    00:14:12,940 --> 00:14:13,740

    That's good to hear.

    146

    00:14:13,740 --> 00:14:20,475

    And good to know there are those routes that we can use in the AI space to adopt that technology.

    147

    00:14:21,434 --> 00:14:28,475

    So an area that's particularly close to my career experience is compliance in manufacturing operations.

    148

    00:14:28,554 --> 00:14:28,794

    Mhmm.

    149

    00:14:29,115 --> 00:14:38,419

    And we're already seeing industry working on AI integration into a wide range of of quality operations and supply chain activities.

    150

    00:14:38,659 --> 00:14:41,940

    And, you know, I'm sure that those are gonna come under scrutiny during inspections.

    151

    00:14:41,940 --> 00:14:52,195

    So from your perspective, how should companies prepare for inspections where AI-supported decision making is integral to GMP

    152

    00:14:52,195 --> 00:14:53,075

    compliance?

    153

    00:14:53,235 --> 00:15:02,090

    You know, are there new expectations for human oversight, for failure mode analysis, or for real-time monitoring?

    154

    00:15:02,649 --> 00:15:08,889

    So I'm going to try to fix some nomenclature.

    155

    00:15:10,554 --> 00:15:19,035

    Even though we use the terminology of AI decision making, we have to remember that by definition, AI is not making a decision.

    156

    00:15:19,035 --> 00:15:20,154

    It's providing output.

    157

    00:15:20,795 --> 00:15:22,154

    Human beings make decisions.

    158

    00:15:22,554 --> 00:15:32,940

    And so when we look at our regulations: whenever software had an output and that output was used without human intervention,

    159

    00:15:32,940 --> 00:15:34,540

    we would still call it output.

    160

    00:15:34,700 --> 00:15:41,634

    Just because AI behaves more human-like, we kind of start using this nomenclature of decision making, but it's still output.

    161

    00:15:42,034 --> 00:15:45,314

    And so, you know, ultimately, AI is a tool.

    162

    00:15:45,314 --> 00:15:48,834

    It provides output on which decisions by people are made.

    163

    00:15:49,314 --> 00:15:59,370

    And whether or not the output that is given by the AI is used without any additional human intervention or a human in the loop, that

    164

    00:15:59,370 --> 00:16:00,089

    is a decision.

    165

    00:16:00,089 --> 00:16:04,169

    That's the decision, not the output that was given by the AI.

    166

    00:16:04,169 --> 00:16:07,154

    And so I think that distinction needs to be clearly made.

    167

    00:16:07,154 --> 00:16:12,995

    And if that distinction is clearly made, then the thought process about inspections becomes easier, actually.

    168

    00:16:13,394 --> 00:16:16,355

    Because then you're still thinking about human beings.

    169

    00:16:16,355 --> 00:16:17,875

    You're still thinking about record keeping.

    170

    00:16:17,875 --> 00:16:28,250

    You're still thinking: is the output being given by this very advanced software still appropriate to maintain the quality of the product that comes out on the other end?

    171

    00:16:28,250 --> 00:16:38,375

    The quality unit is ultimately responsible for the end product and for making sure that along the way, all the parts of the processes are operating the way

    172

    00:16:38,375 --> 00:16:43,894

    they should be, whether it's validation, whether it's, you know, the specifications, whether it's the output, etcetera.

    173

    00:16:43,975 --> 00:16:54,079

    And so I think, once we frame the use of the AI in the appropriate way: yes, AI decision making,

    174

    00:16:54,079 --> 00:16:57,919

    quote unquote, it seems like it's making a decision, but it's really AI output.

    175

    00:16:58,254 --> 00:17:03,615

    It's AI output, and human decision making on how to use that output.

    176

    00:17:03,615 --> 00:17:14,210

    And, again, if we frame it that way, I think how to prepare for inspection, what materials to prepare, and how to think about inspection becomes a lot easier, because it still fits within

    177

    00:17:14,210 --> 00:17:15,970

    the framework that we have now.

    178

    00:17:16,289 --> 00:17:25,009

    You know, we wouldn't expect there to be any additional difficulty when we're using a process model that is a traditionally programmed process model.

    179

    00:17:25,089 --> 00:17:27,250

    It shouldn't be any different just because it's AI.

    180

    00:17:29,144 --> 00:17:30,025

    That's great.

    181

    00:17:30,025 --> 00:17:34,025

    A bit of demystifying there and stripping it back to its bare essentials, I guess.

    182

    00:17:34,025 --> 00:17:44,349

    So, yeah, it's a really interesting way to put that reality back into how some of those models are being viewed.

    183

    00:17:44,669 --> 00:17:45,309

    Yeah.

    184

    00:17:45,309 --> 00:17:55,309

    So, of course, we can't just implement AI across the drug life cycle without thinking about maintaining and developing the AI

    185

    00:17:55,309 --> 00:17:56,029

    model itself.

    186

    00:17:56,934 --> 00:18:06,934

    So from a regulatory perspective, what are considered to be best practices for lifecycle management of the AI model, particularly regarding things like

    187

    00:18:06,934 --> 00:18:10,934

    change control and revalidation, some of which you touched on a little bit previously?

    188

    00:18:11,589 --> 00:18:12,149

    Yeah.

    189

    00:18:12,389 --> 00:18:22,389

    You know, I think the big thing is to understand when you're gonna check in with the AI, and establishing those boundary conditions very early

    190

    00:18:22,389 --> 00:18:22,789

    on.

    191

    00:18:22,789 --> 00:18:28,994

    And I think that goes for anything that you're talking about with regard to change control.

    192

    00:18:28,994 --> 00:18:33,634

    It's just that, you know, depending on the model, changes could be happening.

    193

    00:18:33,714 --> 00:18:40,169

    If it's an open model, for example, changes could be happening day to day.

    194

    00:18:40,490 --> 00:18:44,490

    And so the question is, how do you know when to check in with the model?

    195

    00:18:44,490 --> 00:18:46,009

    How do you know when to check in?

    196

    00:18:46,569 --> 00:18:56,005

    And it shouldn't be: oh, something happened that's bad, and we had a bad result, or we have a whole lot of product that doesn't meet our quality standards.

    197

    00:18:56,005 --> 00:18:57,044

    It can't be that.

    198

    00:18:57,205 --> 00:19:00,484

    You know, that's way too late and way too far down the line.

    199

    00:19:00,644 --> 00:19:05,525

    And so I think, you know, wherever AI is implemented, you have to think about, okay, what are the boundary conditions?

    200

    00:19:05,525 --> 00:19:07,924

    What are the signals?

    201

    00:19:07,924 --> 00:19:12,089

    What are the triggers that you have in place to say, okay.

    202

    00:19:12,089 --> 00:19:13,130

    It's time to check in.

    203

    00:19:13,130 --> 00:19:23,369

    It could be as simple as: we're gonna check in every six months to make sure that the output is still appropriate for that

    204

    00:19:23,035 --> 00:19:24,714

    unit operation, for example.

    205

    00:19:24,795 --> 00:19:33,355

    It could be: product is coming out from a specific unit operation, and there's testing further down the line.

    206

    00:19:33,595 --> 00:19:43,330

    And you have certain triggers: once it gets too close to a boundary condition or specification, something becomes too high or too low, okay.

    207

    00:19:43,330 --> 00:19:45,890

    Maybe we need to see if the model is drifting.

    208

    00:19:45,970 --> 00:19:50,369

    You know, it could be any number of those things so long as they are well defined.

    209

    00:19:50,769 --> 00:19:51,090

    You know?
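
    To make that concrete, here is a minimal sketch in Python of a predefined check-in trigger of the kind described here: a scheduled cadence plus a guard band inside specification limits. The limits, cadence, and names are hypothetical and purely illustrative, not a regulatory recommendation.

        import statistics
        from datetime import date, timedelta

        SPEC_LOW, SPEC_HIGH = 4.5, 5.5           # hypothetical specification limits
        GUARD_BAND = 0.2                         # act before output reaches a spec edge
        CHECK_IN_INTERVAL = timedelta(days=182)  # the "every six months" cadence

        def needs_check_in(recent_outputs, last_check_in, today):
            """Flag a model check-in on schedule, or when output drifts toward a limit."""
            if today - last_check_in >= CHECK_IN_INTERVAL:
                return True, "scheduled check-in due"
            mean = statistics.fmean(recent_outputs)
            if mean <= SPEC_LOW + GUARD_BAND or mean >= SPEC_HIGH - GUARD_BAND:
                return True, f"output mean {mean:.2f} is inside the guard band"
            return False, "within defined boundaries"

        # Example: outputs creeping toward the upper limit trigger a check-in.
        print(needs_check_in([5.25, 5.3, 5.35, 5.4], date(2025, 9, 1), date(2025, 11, 1)))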

    210

    00:19:51,724 --> 00:20:01,805

    And when changes happen and you do need to change something, we have to tweak the model in some way or make

    211

    00:20:01,805 --> 00:20:04,605

    a minor change to the model or even a major change to the model.

    212

    00:20:04,605 --> 00:20:06,779

    What reporting category does it come in?

    213

    00:20:06,779 --> 00:20:08,539

    When do you have to come into FDA?

    214

    00:20:09,259 --> 00:20:17,499

    And so we have to think about it again using ICH Q8, Q9, and Q10 principles on good manufacturing and then Q12 principles on change control.

    215

    00:20:17,819 --> 00:20:19,819

    You know, what's essential?

    216

    00:20:20,345 --> 00:20:22,505

    And what's essential to report?

    217

    00:20:22,505 --> 00:20:24,105

    What's essential to look at?

    218

    00:20:24,184 --> 00:20:32,585

    What's essential to make sure that the output, in terms of the drug product that you are getting at the end, is meeting quality standards?

    219

    00:20:33,049 --> 00:20:43,130

    And having those guardrails in place along the way, just as you do now when you're looking at unit operations or the process as a whole. Again,

    220

    00:20:43,130 --> 00:20:52,004

    I think if you think about it in the same way along the line as you do now, the principles still hold.

    221

    00:20:52,085 --> 00:20:58,644

    You just have to understand: can the AI, in its context of use, change?

    222

    00:20:59,019 --> 00:21:03,659

    Can it change on its own, or will it only change because you made a change to it?

    223

    00:21:03,819 --> 00:21:11,179

    And that will help you define the framework and define the boundary conditions and define the triggers for when you have to go back in.

    224

    00:21:11,179 --> 00:21:13,674

    But they have to be defined early.

    225

    00:21:13,835 --> 00:21:16,555

    You can't say, oh, wait.

    226

    00:21:16,555 --> 00:21:23,994

    There's a lot of product that doesn't meet our standards, and then that's the point where you have to go check in.

    227

    00:21:23,994 --> 00:21:25,515

    That's not the way to do it.

    228

    00:21:25,515 --> 00:21:35,639

    So I think, again, if we look at good principles, the principles that we've utilized all along, and apply them in the same way with the same

    229

    00:21:35,639 --> 00:21:38,440

    rigor and not just say, oh, well, it's AI.

    230

    00:21:38,440 --> 00:21:40,999

    It'll fix itself because we know that's not true.

    231

    00:21:41,559 --> 00:21:48,065

    I think people will be able to process it and think about it in the appropriate way.

    232

    00:21:48,065 --> 00:21:53,184

    Again, and always, you know, if you need advice from FDA, come talk to FDA.

    233

    00:21:53,664 --> 00:21:53,984

    You know?

    234

    00:21:53,984 --> 00:21:59,579

    You know: for this part of the process, we're intending on using AI.

    235

    00:21:59,660 --> 00:22:03,820

    We have, say, a temperature control further down the line.

    236

    00:22:03,820 --> 00:22:07,579

    Well, this trigger may be appropriate, maybe not.

    237

    00:22:07,579 --> 00:22:09,180

    There's another test down the line.

    238

    00:22:09,180 --> 00:22:10,140

    Is this appropriate?

    239

    00:22:10,140 --> 00:22:10,619

    Is it not?

    240

    00:22:11,125 --> 00:22:18,164

    And it's going to be a learning process as people are starting to integrate and learn more.

    241

    00:22:18,484 --> 00:22:21,525

    So, I guess, building that into a control strategy.

    242

    00:22:21,525 --> 00:22:21,765

    Yeah.

    243

    00:22:21,765 --> 00:22:22,325

    Exactly.

    244

    00:22:22,325 --> 00:22:22,884

    Exactly.

    245

    00:22:22,964 --> 00:22:30,359

    Build it into your control strategy from the start and, you know, be able to modify that as you learn.

    246

    00:22:30,519 --> 00:22:30,920

    You know?

    247

    00:22:30,920 --> 00:22:34,599

    And, again, a control strategy is not a be-all and end-all.

    248

    00:22:34,599 --> 00:22:35,400

    This is it.

    249

    00:22:35,400 --> 00:22:42,964

    You know, you have to look at it, reassess, come back to it, reassess risk, and change the control strategy as you need to.

    250

    00:22:43,924 --> 00:22:44,484

    Yeah.

    251

    00:22:44,565 --> 00:22:47,684

    So, just thinking about those risk-based frameworks.

    252

    00:22:47,684 --> 00:22:57,970

    Now, as we develop those frameworks for model change management and also the datasets that they use, what level of evidence is required to demonstrate robust

    253

    00:22:57,970 --> 00:23:06,129

    data lineage, governance, and controls when we're actually training or validating the AI models that are used in regulated contexts?

    254

    00:23:06,289 --> 00:23:06,929

    Yeah.

    255

    00:23:07,009 --> 00:23:17,024

    I think, as with everything, when we're looking at data for validation, and especially with an AI model, it's not a human

    256

    00:23:17,024 --> 00:23:19,424

    programming it and then debugging it.

    257

    00:23:19,904 --> 00:23:23,904

    It learns from data, and then it is validated with data.

    258

    00:23:24,460 --> 00:23:33,259

    And once it's deployed, it's using that body of data in order to do what it's functionally meant and trained to do.

    259

    00:23:33,660 --> 00:23:44,044

    So, and this is where I will make that human analogy, as with training a human being, if you give it bad data, it's going to produce bad data.

    260

    00:23:44,365 --> 00:23:44,924

    You know?

    261

    00:23:45,325 --> 00:23:49,404

    And, you know, we've all heard the phrase garbage in, garbage out.

    262

    00:23:50,284 --> 00:23:58,240

    And so I think you have to make sure the datasets that are being used for training are appropriate for the context of use.

    263

    00:23:58,640 --> 00:24:06,160

    You have to make sure, and this may or may not be easy, that there's no bias in that data.

    264

    00:24:06,515 --> 00:24:12,595

    You have to make sure that the data is representative of what you want it to be.

    265

    00:24:12,595 --> 00:24:19,795

    I think in manufacturing you may have less of a chance of that than in clinical, but it's still a possibility.

    266

    00:24:19,795 --> 00:24:29,369

    And you have to make sure that the data is such that you're minimizing the potential for hallucination, if you're talking about something that's generative or

    267

    00:24:29,369 --> 00:24:34,329

    a learning model, so that there isn't a chance to hallucinate.

    268

    00:24:34,329 --> 00:24:34,490

    You know?

    269

    00:24:34,865 --> 00:24:39,345

    You want the data to be tight and as clean as possible.

    270

    00:24:39,744 --> 00:24:40,065

    You know?

    271

    00:24:40,065 --> 00:24:41,424

    And sometimes that's difficult.

    272

    00:24:41,424 --> 00:24:42,545

    It can be difficult.

    273

    00:24:42,545 --> 00:24:43,025

    You know?

    274

    00:24:43,025 --> 00:24:54,210

    Historical data, as it is, can have discrepancies in it. You have to make sure that the model,

    275

    00:24:54,369 --> 00:25:04,450

    when it's trained, can not only see what it needs to see and do what it's supposed to do, but also be able to identify: oh, this is a discrepancy.

    276

    00:25:04,450 --> 00:25:05,329

    This is not good.

    277

    00:25:05,329 --> 00:25:14,904

    And so, again, it's about being able to train a model on the data to perform its duties, but also to be able to identify: no.

    278

    00:25:14,904 --> 00:25:16,184

    That's a discrepancy or, no.

    279

    00:25:16,184 --> 00:25:17,464

    That's out of specification.

    280

    00:25:17,464 --> 00:25:18,024

    No.

    281

    00:25:18,265 --> 00:25:18,904

    You know?
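
    As a small illustration of that kind of data screening, here is a Python sketch (the field names and limits are hypothetical) that separates usable training records from discrepant ones before a model ever sees them:

        import math

        def screen_training_records(records, expected_range, required_fields):
            """Split records into clean and discrepant, with a reason for each rejection."""
            clean, discrepant = [], []
            low, high = expected_range
            for rec in records:
                value = rec.get("value")
                if any(rec.get(field) is None for field in required_fields):
                    discrepant.append((rec, "missing required field"))
                elif value is None or math.isnan(value):
                    discrepant.append((rec, "missing measurement"))
                elif not (low <= value <= high):
                    discrepant.append((rec, "outside expected range"))
                else:
                    clean.append(rec)
            return clean, discrepant

        records = [
            {"batch": "A1", "value": 5.1},
            {"batch": "A2", "value": 9.7},   # out of the expected range
            {"batch": None, "value": 5.0},   # missing a required field
        ]
        clean, flagged = screen_training_records(records, (4.5, 5.5), ["batch"])
        print(len(clean), [reason for _, reason in flagged])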

    282

    00:25:18,904 --> 00:25:28,679

    So I think it's important also to go back to what I said in our last question about the guardrails.

    283

    00:25:28,759 --> 00:25:31,639

    You know, having a human in the loop at some point.

    284

    00:25:31,720 --> 00:25:41,774

    It may not be at that particular unit operation, but having a human in the loop at some point will help make sure that

    285

    00:25:41,774 --> 00:25:45,534

    the AI continues to operate the way it's intended.

    286

    00:25:45,934 --> 00:25:47,694

    You know, when is that check-in?

    287

    00:25:47,694 --> 00:25:48,815

    How often is that check-in?

    288

    00:25:48,815 --> 00:25:50,014

    Who is responsible?

    289

    00:25:50,174 --> 00:25:51,854

    And having that predefined.

    290

    00:25:52,815 --> 00:25:58,380

    So datasets that are designed to mitigate risks of bias. Yes.

    291

    00:25:58,460 --> 00:26:03,900

    Making sure that we can have interpretability of the outputs, the periodic check-ins.

    292

    00:26:03,900 --> 00:26:08,859

    All of those things are common themes I'm hearing that form part of those guardrails.

    293

    00:26:09,345 --> 00:26:09,744

    Yeah.

    294

    00:26:09,744 --> 00:26:19,825

    And I think it's not very much different than what we do now, in terms of how we monitor,

    295

    00:26:19,825 --> 00:26:30,200

    or I hope it's not very different from what we do now: how manufacturing is monitored, and how the processes are monitored to ensure that,

    296

    00:26:30,200 --> 00:26:40,295

    from unit operation to unit operation, risks are controlled and the quality of product is maintained throughout the manufacturing cycle, so that you have

    297

    00:26:40,295 --> 00:26:45,254

    a quality product that meets your specifications and meets your needs for the intended population.

    298

    00:26:45,734 --> 00:26:56,079

    And so I think if we continue to think about that as the end goal, then where to insert a human being,

    299

    00:26:56,079 --> 00:26:59,919

    where to find those boundaries, becomes easier.

    300

    00:26:59,919 --> 00:27:05,679

    And understanding, at least right now, that AI is not the be-all and end-all,

    301

    00:27:05,679 --> 00:27:10,955

    where you can just plug it in and wave your hands and say, I don't need to look at this.

    302

    00:27:10,955 --> 00:27:15,115

    I don't believe that's where we are right now.

    303

    00:27:15,115 --> 00:27:16,954

    We may be there in the future.

    304

    00:27:17,434 --> 00:27:25,730

    But, again, even if we got there in the future, it's still a human being that needs to sign on the dotted line and take responsibility for everything that's going on.

    305

    00:27:26,609 --> 00:27:27,009

    Yeah.

    306

    00:27:27,009 --> 00:27:27,410

    Okay.

    307

    00:27:27,410 --> 00:27:28,289

    That's yeah.

    308

    00:27:28,289 --> 00:27:34,849

    And wouldn't that be an interesting development if we got to the point of something truly autonomous?

    309

    00:27:36,535 --> 00:27:45,894

    How is the FDA addressing concerns about things like bias, transparency, and explainability in the models that are used for regulatory decision making?

    310

    00:27:46,375 --> 00:27:47,095

    Yes.

    311

    00:27:47,575 --> 00:27:57,719

    You know, one of the things we have is an internal document: points to consider, or just good practices.

    312

    00:27:57,720 --> 00:27:58,279

    You know?

    313

    00:27:58,679 --> 00:28:07,914

    And the big thing is making sure the people are still key in the whole process.

    314

    00:28:07,914 --> 00:28:18,269

    Meaning, we wanna make sure that for the people who are doing the assessments or writing the policy

    315

    00:28:18,269 --> 00:28:28,190

    or conducting reviews or conducting inspections, their training and their skill sets are still essential.

    316

    00:28:28,509 --> 00:28:33,274

    Because as with anything, if you're not trained well, you don't know what you don't know.

    317

    00:28:33,755 --> 00:28:34,315

    Right?

    318

    00:28:34,474 --> 00:28:44,690

    And so, while we do use some tools, I think everyone knows that Elsa is the tool that FDA

    319

    00:28:44,690 --> 00:28:54,769

    has built, our AI chatbot, and there are plug-ins that can be utilized with Elsa to

    320

    00:28:54,769 --> 00:28:59,265

    help with certain parts of the review, or with looking at policy documents.

    321

    00:28:59,265 --> 00:29:09,424

    We have a number of RAG libraries with documents where you can focus your inquiries so that there isn't this hallucination from

    322

    00:29:09,424 --> 00:29:10,144

    all of the Internet.

    323

    00:29:10,829 --> 00:29:13,869

    Although I'm assured that Elsa is blocked off from the Internet.

    324

    00:29:13,869 --> 00:29:16,910

    So if we upload a document, it's not going out into the wide world.

    325

    00:29:16,910 --> 00:29:19,470

    So I wanna make sure that people do understand that as well.

    326

    00:29:19,869 --> 00:29:24,349

    But, you know, it was trained on the Internet.

    327

    00:29:24,349 --> 00:29:29,414

    So there is information that, you know, could potentially cause hallucinations.

    328

    00:29:29,494 --> 00:29:34,375

    And so we have other tools where we can focus it on a RAG library.

    329

    00:29:34,375 --> 00:29:37,734

    Like, this is our RAG library of quality policy.

    330

    00:29:37,975 --> 00:29:41,015

    We want to make a new policy statement about x.

    331

    00:29:41,080 --> 00:29:51,240

    We can point to that RAG library so that the output that we get is based on the information that's there and won't contain any

    332

    00:29:51,240 --> 00:29:57,585

    of the noise from, possibly, opinion pieces on the Internet about how certain things should be regulated sneaking their way through.
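
    Mechanically, this is retrieval-augmented generation: rank a curated library against the query and constrain the generative step to the retrieved context. A toy Python sketch of the retrieval step (illustrative only, not Elsa or its actual plug-ins):

        def retrieve(query, library, k=2):
            """Rank curated documents by simple term overlap with the query."""
            terms = set(query.lower().split())
            ranked = sorted(
                library,
                key=lambda doc: len(terms & set(doc["text"].lower().split())),
                reverse=True,
            )
            return ranked[:k]

        library = [
            {"id": "policy-001", "text": "change control expectations for process models"},
            {"id": "policy-002", "text": "validation and data integrity expectations"},
        ]
        context = retrieve("change control for AI process models", library, k=1)
        print([doc["id"] for doc in context])
        # The generative step would then be constrained to this retrieved
        # context rather than drawing on open-internet text.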

    333

    00:29:58,065 --> 00:30:01,264

    But that output still needs to be checked by a person.

    334

    00:30:01,424 --> 00:30:04,625

    And, ultimately, that person who is doing the work is the one that's responsible.

    335

    00:30:04,625 --> 00:30:06,224

    And so we have to say, okay.

    336

    00:30:06,224 --> 00:30:16,419

    Do you still have the knowledge and skills to be able to look at this statement that comes out, or this analysis that comes out if you're looking at data,

    337

    00:30:16,419 --> 00:30:24,019

    or this graph or table that comes out?

    338

    00:30:24,944 --> 00:30:29,505

    Do you still have the skills and knowledge to look at it and say, yes.

    339

    00:30:29,505 --> 00:30:31,984

    This makes sense, and this is what I want.

    340

    00:30:31,984 --> 00:30:32,464

    Or, no.

    341

    00:30:32,464 --> 00:30:34,464

    This actually doesn't make sense.

    342

    00:30:34,784 --> 00:30:39,664

    And perhaps we need to change it, or maybe I have to change my prompt.

    343

    00:30:40,009 --> 00:30:42,809

    And so there's a lot of prompt engineering going on.

    344

    00:30:43,049 --> 00:30:53,129

    We have work groups working on prompt engineering to make sure that the output is appropriately worded or formatted in the appropriate way

    345

    00:30:53,129 --> 00:30:54,409

    for the work that we're doing.

    346

    00:30:54,694 --> 00:31:01,575

    There are different tools for different assessors, for different stages of assessment that are currently being developed and tested.

    347

    00:31:01,815 --> 00:31:03,335

    Some of them have been deployed.

    348

    00:31:03,335 --> 00:31:05,494

    Some of them, you know, are still being tested.

    349

    00:31:05,654 --> 00:31:15,990

    And so I think that we have to keep continuing to improve, and

    350

    00:31:15,990 --> 00:31:26,205

    continuing to validate, in the same way that we want you to validate, to make sure that the outputs that we use, that we could potentially use, are still appropriate

    351

    00:31:26,205 --> 00:31:27,724

    for the work that we do.

    352

    00:31:27,725 --> 00:31:37,890

    But it comes down to: whoever is utilizing that AI still needs to have that background, that historical knowledge, the training, in

    353

    00:31:37,890 --> 00:31:41,329

    order to make sure that the output is appropriate for the work.

    354

    00:31:42,450 --> 00:31:43,089

    Yep.

    355

    00:31:43,089 --> 00:31:43,409

    Okay.

    356

    00:31:43,409 --> 00:31:43,970

    That's great.

    357

    00:31:43,970 --> 00:31:44,609

    Thank you.

    358

    00:31:45,009 --> 00:31:55,195

    So maybe as we start to think about bringing our discussion to a close, what future directions or innovations in AI do you believe will have the

    359

    00:31:55,195 --> 00:31:58,875

    greatest impact on regulatory science and patient outcomes?

    360

    00:32:00,154 --> 00:32:03,355

    You know, I think, again, we're only seeing the beginning.

    361

    00:32:03,480 --> 00:32:06,200

    Like I said earlier, I think we're only seeing the beginning.

    362

    00:32:06,200 --> 00:32:14,519

    I think there are many, many ways throughout the drug development cycle and post marketing.

    363

    00:32:14,519 --> 00:32:22,335

    You know, once the product is out there, AI could be utilized.

    364

    00:32:22,335 --> 00:32:30,015

    We already talked about drug development and molecule selection and clinical trials and adverse events.

    365

    00:32:31,930 --> 00:32:37,690

    And we've talked about process throughout the manufacturing.

    366

    00:32:37,690 --> 00:32:46,585

    You know, in addition, with advanced manufacturing and continuous manufacturing, can we utilize AI to make them more robust?

    367

    00:32:46,585 --> 00:32:56,585

    You mentioned maybe one day we might get to the point where we have a fully autonomous line where, you

    368

    00:32:56,585 --> 00:33:03,220

    know, hopefully from beginning to end with, obviously, some monitoring along the way, check-ins along the way.

    369

    00:33:03,220 --> 00:33:09,299

    You go from beginning to end, and everything that comes out on the other end is fantastic.

    370

    00:33:09,779 --> 00:33:16,734

    But I think we also have opportunities post marketing, looking at patient populations.

    371

    00:33:16,815 --> 00:33:23,534

    Because once a medication, a drug, goes out into the general population, you never know what's going to happen.

    372

    00:33:23,534 --> 00:33:30,349

    It's going to touch people, and people who were not within that well-controlled trial are gonna be utilizing it.

    373

    00:33:30,429 --> 00:33:40,744

    And so maybe you see, for example, additional benefits. You could look at data, with people reporting additional benefits, and

    374

    00:33:40,744 --> 00:33:49,625

    possibly look into that even further, refine that further, and possibly get another indication based on that kind of real-world evidence, for example.

    375

    00:33:49,865 --> 00:33:50,184

    You know?

    376

    00:33:51,019 --> 00:33:52,619

    You know?

    377

    00:33:52,619 --> 00:34:02,700

    And looking at post market monitoring, looking at potential side effects, you might be able to see drug-drug interactions that you would not

    378

    00:34:02,700 --> 00:34:12,875

    have anticipated in development, where, again, an AI can analyze tranches of data much faster than a human being can

    379

    00:34:12,875 --> 00:34:14,235

    and find the patterns.

    380

    00:34:14,235 --> 00:34:15,914

    And that's the big thing about AI.

    381

    00:34:15,914 --> 00:34:26,039

    The strength of AI is being able to recognize patterns, and perhaps recognize patterns more rapidly than a human can, because it's able to access all

    382

    00:34:26,039 --> 00:34:28,760

    of the data much, much more quickly.

    383

    00:34:28,920 --> 00:34:39,034

    And so perhaps identifying: oh, there are drug-drug interactions that we did not anticipate, but we found this cardiovascular product and this

    384

    00:34:39,034 --> 00:34:44,819

    renal product, when these two things are used together, cause x.

    385

    00:34:44,819 --> 00:34:47,139

    And we've seen it over and over and over again.

    386

    00:34:47,380 --> 00:34:57,380

    But someone just analyzing individual adverse event reports may not put those two things together, or a company analyzing reports

    387

    00:34:57,005 --> 00:34:59,085

    may not put those two things together.
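
    The pattern-finding being described can be as simple as counting co-occurrences at scale. A toy Python sketch (the report structure and drug names are hypothetical) that surfaces drug pairs repeatedly co-reported in adverse event data:

        from collections import Counter
        from itertools import combinations

        def frequent_drug_pairs(reports, min_count=3):
            """Count drug pairs co-reported across adverse event reports."""
            pairs = Counter()
            for report in reports:
                for pair in combinations(sorted(set(report["drugs"])), 2):
                    pairs[pair] += 1
            return [(pair, n) for pair, n in pairs.most_common() if n >= min_count]

        reports = [{"drugs": ["cardio-x", "renal-y"]}] * 4 + [{"drugs": ["cardio-x", "statin-z"]}]
        print(frequent_drug_pairs(reports))  # [(('cardio-x', 'renal-y'), 4)]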

    388

    00:34:59,085 --> 00:35:03,885

    And so I think there are great opportunities to utilize the power of AI.

    389

    00:35:04,204 --> 00:35:07,085

    I think we also have to look at it just generally.

    390

    00:35:07,085 --> 00:35:08,925

    I mean, we talk about risk all the time.

    391

    00:35:09,885 --> 00:35:13,829

    You know, when we utilize AI, we have to be judicious.

    392

    00:35:14,230 --> 00:35:18,789

    If we utilize AI in this particular situation, what are we getting back?

    393

    00:35:18,789 --> 00:35:22,230

    Are we benefiting, you know, in a meaningful way?

    394

    00:35:22,230 --> 00:35:27,715

    If the benefit is minimal, is it really a good idea to use it there at that point?

    395

    00:35:27,715 --> 00:35:34,515

    Or is what you were using previously still appropriate?

    396

    00:35:34,675 --> 00:35:40,519

    And maybe the technology has to advance a little bit more before it's, you know, good to use there.

    397

    00:35:40,519 --> 00:35:50,840

    And because AI takes a lot of computing power, there's an environmental impact; there's a whole bunch of other things about AI, besides just, oh, does it give me the right answer,

    398

    00:35:50,840 --> 00:35:52,119

    that have to be considered.

    399

    00:35:52,505 --> 00:35:55,945

    And so we have to balance that as well with risk and benefit.

    400

    00:35:55,945 --> 00:35:58,744

    Is the juice worth the squeeze, in other words?

    401

    00:35:58,744 --> 00:36:01,144

    Is this the appropriate use of AI?

    402

    00:36:01,144 --> 00:36:06,105

    Or is it just because it's a shiny new object and we wanna say that we're using AI for this?

    403

    00:36:06,265 --> 00:36:14,559

    And, again, we can't deny the other impacts that it has besides how it affects our work.

    404

    00:36:15,119 --> 00:36:15,760

    Yeah.

    405

    00:36:16,000 --> 00:36:24,554

    I mean, certainly, what a fantastic time for science and technology to really start pushing the boundaries of what we can do in health care generally.

    406

    00:36:24,554 --> 00:36:31,755

    So it's certainly super interesting just to see where this is gonna end up taking us, really.

    407

    00:36:32,210 --> 00:36:32,690

    Yeah.

    408

    00:36:32,690 --> 00:36:41,329

    And I think, as with every technology, once there's adoption, it tends to move pretty quickly.

    409

    00:36:42,210 --> 00:36:45,409

    I think that it improves more quickly once there's adoption.

    410

    00:36:45,894 --> 00:36:49,894

    But I think there are always early adopters and later adopters.

    411

    00:36:49,894 --> 00:36:52,215

    And I'm not saying one is better than the other.

    412

    00:36:52,375 --> 00:36:52,934

    You know?

    413

    00:36:52,934 --> 00:36:59,335

    Typically, for technologies, I tend to wait till the second version comes around because I don't wanna be a beta tester.

    414

    00:36:59,539 --> 00:37:00,900

    That's not my thing.

    415

    00:37:01,139 --> 00:37:01,699

    You know?

    416

    00:37:01,699 --> 00:37:10,260

    But there are others who wanna be the first out of the gate, and that's okay so long as you control the risks that come with being the first out of the gate.

    417

    00:37:10,900 --> 00:37:11,460

    Yep.

    418

    00:37:11,460 --> 00:37:12,019

    Absolutely.

    419

    00:37:13,125 --> 00:37:14,644

    What a great conversation.

    420

    00:37:14,885 --> 00:37:16,324

    I think we're at time.

    421

    00:37:16,644 --> 00:37:20,965

    It's been an absolutely fascinating discussion, Tina.

    422

    00:37:20,965 --> 00:37:31,210

    Thank you for giving up your time to join me and giving us some insight into this technology that brings so much opportunity and yet right now still presents a

    423

    00:37:31,210 --> 00:37:35,130

    number of questions that I think we're all still trying to get to grips with.

    424

    00:37:35,289 --> 00:37:45,335

    I'm really looking forward to continuing the discussion between regulators and industry as we develop those guardrails that really help us to enable the delivery of innovation for

    425

    00:37:45,335 --> 00:37:45,815

    patients.

    426

    00:37:45,815 --> 00:37:47,494

    So thank you again very much indeed.

    427

    00:37:47,494 --> 00:37:49,015

    Thank you so much for having me.

    428

    00:37:49,015 --> 00:37:50,534

    I really enjoyed this conversation.

    429

    00:37:50,534 --> 00:38:00,059

    In summary, I was excited to hear the innovative ways in which regulators are viewing quality system oversight of these new technologies.

    430

    00:38:00,780 --> 00:38:10,954

    My three key takeaways are the strength of AI is in its ability to recognize patterns and provide outputs based on those

    431

    00:38:10,954 --> 00:38:11,755

    patterns.

    432

    00:38:12,155 --> 00:38:18,875

    It is important to recognize that AI is not making decisions, it is providing outputs.

    433

    00:38:19,275 --> 00:38:20,875

    Only humans make decisions.

    434

    00:38:22,260 --> 00:38:22,660

    Doctor.

    435

    00:38:22,660 --> 00:38:32,820

    Kiang's thoughts on approaching change control from an overall perspective, using current thinking and methods, really shine a light on the way

    436

    00:38:32,820 --> 00:38:38,865

    forward for adoption and innovative application of AI and related technologies.

    437

    00:38:39,744 --> 00:38:48,625

    And lastly, first movers just need to think through how these technologies will be used and apply appropriate risk controls.

    438

    00:38:49,760 --> 00:38:54,800

    I'd like to thank David and Tina for their engaging and thought provoking conversation.

    439

    00:38:55,280 --> 00:38:59,440

    I am really excited about the potential these new technologies offer.

    440

    00:39:00,800 --> 00:39:07,644

    That brings us to the end of another episode of the ISPE podcast, Shaping the Future of Pharma.

    441

    00:39:08,284 --> 00:39:17,724

    Please be sure to subscribe so you don't miss future conversations with the innovators, experts, and change makers driving our industry forward.

    442

    00:39:18,909 --> 00:39:22,989

    On behalf of all of us at ISPE, thank you for listening.

    443

    00:39:23,309 --> 00:39:30,850

    And we'll see you next time as we continue to explore the ideas, trends, and people shaping the future of pharma.

     
