Exploring AI with the newest ISPE Community of Practice

June 2025

ISPE AI Community of Practice Steering Committee members Ben Stevens and Eric Staib join the podcast to share more about the new CoP, the upcoming ISPE GAMP® Guide: Artificial Intelligence, and their thoughts on the current state of AI in the pharmaceutical industry.

  • Guests

    Benjamin Stevens
    Director of CMC Policy & Advocacy
    GlaxoSmithKline
    Eric Staib
    Vice President, Corporate Quality
    Syneos Health
  • Transcript


    1
    00:00:00,000 --> 00:00:07,000
    Welcome to the ISPE podcast, Shaping the Future of Pharma, where ISPE supports you on your journey,

    2
    00:00:07,000 --> 00:00:10,000
    fueling innovation, sharing insights, thought leadership,

    3
    00:00:10,000 --> 00:00:14,000
    and empowering a global community to reimagine what's possible.

    4
    00:00:15,000 --> 00:00:23,000
    Hello, and welcome to the ISPE podcast, Shaping the Future of Pharma. I'm Bob Chew, your host.

    5
    00:00:24,000 --> 00:00:29,000
    And today, we have another episode where we'll be sharing the latest insights and thought

    6
    00:00:29,000 --> 00:00:37,000
    leadership on manufacturing, technology, supply chains, and regulatory trends impacting the

    7
    00:00:37,000 --> 00:00:44,000
    pharmaceutical industry. You will hear directly from the innovators, experts, and professionals

    8
    00:00:44,000 --> 00:00:50,000
    driving progress and shaping the future. Thank you again for joining us. And now,

    9
    00:00:50,000 --> 00:00:58,000
    let's dive into this episode. Our topic today is the new ISPE Artificial Intelligence

    10
    00:00:58,000 --> 00:01:06,000
    Community of Practice and the associated upcoming ISPE GAMP Guide, Artificial Intelligence.

    11
    00:01:07,000 --> 00:01:12,000
    To share more about these exciting initiatives, I would like to welcome Ben Stevens,

    12
    00:01:13,000 --> 00:01:22,000
    Director of CMC Policy and Advocacy at GSK. Ben is the ISPE AI Community of Practice

    13
    00:01:22,000 --> 00:01:26,000
    Steering Committee Chair. I also want to welcome Eric Staib,

    14
    00:01:27,000 --> 00:01:36,000
    Vice President of Corporate Quality at Syneos Health. He is the ISPE AI Community of Practice

    15
    00:01:36,000 --> 00:01:43,000
    Steering Committee Secretary and the GAMP AI Guide Co-Lead. Ben and Eric,

    16
    00:01:43,000 --> 00:01:47,000
    welcome to the podcast. We're so glad to have you both with us.

    17
    00:01:47,000 --> 00:01:51,000
    It's a pleasure. Thank you. Appreciate it.

    18
    00:01:51,000 --> 00:01:58,000
    All right. Well, let's dive into this topic. The ISPE Artificial Intelligence Community of Practice,

    19
    00:01:59,000 --> 00:02:05,000
    also known as the ISPE AI COP, was launched last year.

    20
    00:02:06,000 --> 00:02:10,000
    Taking a step back, what is an ISPE Community of Practice?

    21
    00:02:11,000 --> 00:02:17,000
    Yeah, thanks, Bob. Maybe I can start with this and Eric can jump in. So, the COPs are actually a

    22
    00:02:17,000 --> 00:02:24,000
    pretty common thing within ISPE. And in fact, there's quite a few of them beyond the AI COP.

    23
    00:02:24,000 --> 00:02:31,000
    Really, what the COPs are, the Communities of Practice: it's a way for ISPE members to come

    24
    00:02:31,000 --> 00:02:39,000
    and be a part of essentially a group that shares a common interest, has a passion for that area of

    25
    00:02:39,000 --> 00:02:47,000
    focus, and is able to get involved in ways around discussing key topics, especially as it interfaces

    26
    00:02:47,000 --> 00:02:54,000
    with regulatory aspects of that topic, but also technical best practices and other considerations

    27
    00:02:54,000 --> 00:03:01,000
    that are more broadly covering the industry interests. It's a benefit for ISPE members

    28
    00:03:01,000 --> 00:03:05,000
    because it's not limited to any specific group. It's really open to any ISPE member. And so,

    29
    00:03:06,000 --> 00:03:11,000
    really, when we went out to establish this COP, the idea was that we were trying to cast a very

    30
    00:03:11,000 --> 00:03:15,000
    wide net and get in a lot of folks who had a very broad interest in this really critical topic.

    31
    00:03:16,000 --> 00:03:19,000
    So, that's kind of, in a nutshell, what the COPs are all about.

    32
    00:03:19,000 --> 00:03:24,000
    I think that's a good description, Ben. And as you can imagine, with regards to the topic of

    33
    00:03:25,000 --> 00:03:30,000
    artificial intelligence, that broad net brought a lot of people across industry, Bob. So,

    34
    00:03:31,000 --> 00:03:38,000
    whether it be large pharma, biotech, clinical research, contract manufacturing organizations,

    35
    00:03:38,000 --> 00:03:42,000
    we've had a lot of interest since the inception of the Community of Practice,

    36
    00:03:42,000 --> 00:03:47,000
    and a lot of good work that has already been done within the COP that we can talk more about today.

    37
    00:03:47,000 --> 00:03:52,000
    But I think, Ben, it really covered it well with respect to what we represent and what we're trying

    38
    00:03:52,000 --> 00:03:59,000
    to do with regards to bringing people together on a like topic with similar interests and really

    39
    00:03:59,000 --> 00:04:08,000
    helping to shape and form that community with regards to education within ISPE, as well as

    40
    00:04:08,000 --> 00:04:13,000
    industry itself. So, there's many ways we do that within the Community of Practice, whether it is

    41
    00:04:13,000 --> 00:04:20,000
    through guides, such as the GAMP AI Guide we'll talk a little more about, or specifically blogs,

    42
    00:04:20,000 --> 00:04:26,000
    Pharmaceutical Engineering articles, podcasts such as this, and webinars as well.

    43
    00:04:26,000 --> 00:04:33,000
    So, with respect to this specific artificial intelligence COP, what are your goals,

    44
    00:04:33,000 --> 00:04:40,000
    what are your intentions, and what specific challenges and opportunities is the COP looking

    45
    00:04:40,000 --> 00:04:46,000
    to address? Yeah, I mean, we can talk a little bit more in detail on this, but, you know, right now,

    46
    00:04:46,000 --> 00:04:51,000
    I think one of the things we've been trying to do, you know, myself with Eric and Nick as well and

    47
    00:04:51,000 --> 00:04:57,000
    others, trying to kind of focus on building out some of those key topic areas that we know that

    48
    00:04:57,000 --> 00:05:01,000
    the community is going to be interested in. And, you know, as time goes on, I think that will

    49
    00:05:01,000 --> 00:05:07,000
    continue to grow. But right now, we've been focusing a lot on establishing subcommittees

    50
    00:05:07,000 --> 00:05:12,000
    that are actually, you know, driving some actual efforts in establishing content and also

    51
    00:05:13,000 --> 00:05:19,000
    plans going forward. So, for example, we have three subcommittees right now that are actively

    52
    00:05:19,000 --> 00:05:28,000
    engaged in areas around applications, model, and data; preparedness and workforce; and regulatory

    53
    00:05:28,000 --> 00:05:33,000
    more broadly. But, you know, as you can imagine, that's a relatively small

    54
    00:05:33,000 --> 00:05:37,000
    piece of the overall puzzle. And so, we expect that there's going to be a lot more of the

    55
    00:05:37,000 --> 00:05:41,000
    subcommittees that will grow out and become pretty engaged. And I think the other thing is,

    56
    00:05:41,000 --> 00:05:45,000
    and, you know, Eric obviously mentioned the connection already to GAMP, but we do have

    57
    00:05:45,000 --> 00:05:50,000
    larger groups that are already doing a lot of work that dovetails with this, you know, for example,

    58
    00:05:50,000 --> 00:05:55,000
    GAMP, but also the Pharma 4.0 group. And so, we're establishing a more, I'd say, you know,

    59
    00:05:55,000 --> 00:06:00,000
    direct and regular connection between all that existing infrastructure in ISPE and making sure

    60
    00:06:00,000 --> 00:06:04,000
    that it's all kind of tied into a broader, you know, movement forward for AI as a whole.

    61
    00:06:11,000 --> 00:06:16,000
    I would add to what Ben has already said: because we're in such a highly

    62
    00:06:16,000 --> 00:06:22,000
    regulated industry, a lot of people like to jump directly to the compliance. And how do we ensure

    63
    00:06:22,000 --> 00:06:28,000
    that regulators and industry are going to be accepting of such innovative technology? And

    64
    00:06:28,000 --> 00:06:34,000
    that's, you know, the big question, you know, the elephant in the room most of the time. But this

    65
    00:06:34,000 --> 00:06:39,000
    community of practice is much broader than that. As Ben mentioned, we started with really three

    66
    00:06:39,000 --> 00:06:46,000
    subcommittees, and it's beyond just the compliance aspects of AI and machine learning itself as well.

    67
    00:06:47,000 --> 00:06:53,000
    So, really focusing on larger type concepts, use cases within industry, you know, whether it be

    68
    00:06:53,000 --> 00:06:59,000
    regulated or non-regulated, GXP or non-GXP, and things such as Ben talked about in one of our

    69
    00:06:59,000 --> 00:07:04,000
    subcommittees around workforce preparedness. So, there's a lot of companies now that are really

    70
    00:07:04,000 --> 00:07:12,000
    facing this, and I know my own company is: how do you get people to understand how to use AI? What's the

    71
    00:07:12,000 --> 00:07:17,000
    best approach? There's a learning curve, especially when it comes to generative AI, large language

    72
    00:07:17,000 --> 00:07:23,000
    models. Prompting is a big thing, right? I'm sure all of us have experienced it with ChatGPT with

    73
    00:07:23,000 --> 00:07:31,000
    respect to how do you give the large language model the right prompts to return the information

    74
    00:07:31,000 --> 00:07:37,000
    you're looking for, right? It can be challenging, just like we've experienced with Siri and other things

    75
    00:07:37,000 --> 00:07:42,000
    like that. You have to prompt it correctly. So, getting people to understand within your company

    76
    00:07:42,000 --> 00:07:47,000
    how to use it, how not to use it, and how to be most effective in using it is extremely important as well.
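
    To make Eric's point about prompting concrete, here is a minimal, hypothetical sketch of a structured prompt that states a role, context, task, and output format rather than asking a bare question. The build_prompt helper and the GMP example text are invented purely for illustration, and no real LLM API is called.

    ```python
    # Hypothetical sketch: assemble a structured prompt for a large language model.
    # Explicit role, context, task, and output-format sections tend to return more
    # useful answers than an unstructured question.

    def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
        """Assemble a structured prompt from labeled sections."""
        return (
            f"Role: {role}\n"
            f"Context: {context}\n"
            f"Task: {task}\n"
            f"Output format: {output_format}"
        )

    if __name__ == "__main__":
        prompt = build_prompt(
            role="You are a GMP quality reviewer.",
            context="A batch record shows a two-hour hold-time deviation at step 4.",
            task="List the follow-up questions an investigator should ask.",
            output_format="A numbered list, one question per line.",
        )
        print(prompt)  # This string would then be sent to the model of your choice.
    ```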

    77
    00:07:47,000 --> 00:07:55,000
    So, going back to individuals and their interest level, you mentioned subcommittees.

    78
    00:07:56,000 --> 00:08:02,000
    What does it take for a person who, let's say, is listening to this podcast for the first time,

    79
    00:08:03,000 --> 00:08:09,000
    is interested in this community of practice? What are the steps that person needs to go through

    80
    00:08:10,000 --> 00:08:16,000
    to really get engaged with, say, a subcommittee? It's actually pretty straightforward if you're an

    81
    00:08:16,000 --> 00:08:21,000
    ISPE member. So, there are obviously a number of folks who are directly, you know, leading those

    82
    00:08:21,000 --> 00:08:26,000
    subcommittees: Peter Gordon and Prem Iyengar for the applications, model, and data subcommittee;

    83
    00:08:28,000 --> 00:08:34,000
    Robert Jacinich, Richard Jacinich, Robert Parks, and Jason Schneider for the preparedness and

    84
    00:08:34,000 --> 00:08:42,000
    workforce subcommittee, and Kabir Aiwal and Gert Throe for regulatory. But if you need some

    85
    00:08:42,000 --> 00:08:46,000
    direction to actually get, you know, set up into the groups that you're interested in, you can

    86
    00:08:46,000 --> 00:08:52,000
    essentially reach out to the contact information that's provided on the ISPE website for the COP,

    87
    00:08:52,000 --> 00:08:57,000
    and they will guide you through the process of making sure that you're involved in all the areas

    88
    00:08:57,000 --> 00:09:02,000
    that, you know, you might have interest in. So, as I mentioned before, as a member, you can be on

    89
    00:09:02,000 --> 00:09:07,000
    all the subcommittees. You can actually propose, if you're interested, to start up a new subcommittee.

    90
    00:09:07,000 --> 00:09:13,000
    There's lots of ways to get involved. So, you know, that direct link is, you know, straightforward to

    91
    00:09:13,000 --> 00:09:17,000
    make. And, of course, you know, obviously, myself and Eric can also be, you know, directly approached

    92
    00:09:17,000 --> 00:09:23,000
    if you need some assistance with that. Absolutely. Nick Armstrong is also the advisor, or

    93
    00:09:23,000 --> 00:09:29,000
    co-chair, if you will, of the steering committee as part of the three leaders. So, Ben and myself

    94
    00:09:29,000 --> 00:09:38,000
    and Nick. But also, with respect to that, thinking about timing, now is a great time to get involved

    95
    00:09:38,000 --> 00:09:44,000
    in this COP. It has been established for a little while now, but we're really kind of storming and

    96
    00:09:44,000 --> 00:09:49,000
    norming within these subcommittees, really looking at what are some of their own objectives and what

    97
    00:09:49,000 --> 00:09:55,000
    do they want to do to add to this community of practice? What are they doing in order to bring

    98
    00:09:55,000 --> 00:10:01,000
    knowledge and understanding to industry, to the ISPE community itself? So, there's no time like

    99
    00:10:01,000 --> 00:10:08,000
    the present to reach out to one of us, to go to the ISPE website, select the AI COP as one you're

    100
    00:10:08,000 --> 00:10:15,000
    interested in, and get involved. And really find your way and find your place that fits the

    101
    00:10:15,000 --> 00:10:21,000
    need and the interest that you have. And as Ben also said, we're willing to expand, right? We started

    102
    00:10:21,000 --> 00:10:26,000
    with three subcommittees because that's where we kind of coalesced around general interest with the

    103
    00:10:26,000 --> 00:10:32,000
    people we have. But it doesn't mean that we can't have additional subcommittees or people that spin

    104
    00:10:32,000 --> 00:10:37,000
    out of one subcommittee into another subcommittee. So, we're very open and dynamic at this point in

    105
    00:10:37,000 --> 00:10:41,000
    time. Again, storming and norming and really figuring out what some of those deliverables

    106
    00:10:41,000 --> 00:10:45,000
    and things are going to be beyond just what we started with already.

    107
    00:10:45,000 --> 00:10:53,000
    Great. Well, hopefully people now have a better understanding of how easy it is to get involved in

    108
    00:10:53,000 --> 00:11:00,000
    the AI or any community of practice with ISPE. So, getting technical for a minute,

    109
    00:11:01,000 --> 00:11:08,000
    artificial intelligence, machine learning, and digital twins each involve sophisticated

    110
    00:11:08,000 --> 00:11:15,000
    statistical techniques as their foundations. What's the difference between these technologies?

    111
    00:11:16,000 --> 00:11:21,000
    Yeah, I can take this one. But just to say right up front, as we talked about, the COP is a very

    112
    00:11:21,000 --> 00:11:26,000
    diverse group. And so, I am certainly not a deep subject matter expert on this. But I know from some

    113
    00:11:26,000 --> 00:11:30,000
    of the discussions with some of the really brilliant folks that work very deeply in the area,

    114
    00:11:30,000 --> 00:11:37,000
    some general aspects of it. So, I can touch on it a little. So, AI is really, I'd say, the broadest,

    115
    00:11:37,000 --> 00:11:43,000
    you know, delineation of the technology, right? It's essentially the computer science field that's

    116
    00:11:43,000 --> 00:11:48,000
    focused on generating solutions that will actually perform tasks that are very much like

    117
    00:11:48,000 --> 00:11:54,000
    human intelligence, right? And so, when we talk about AI, it's incredibly broad, right? You can

    118
    00:11:54,000 --> 00:11:58,000
    talk about, you know, for example, large language models, but you can also talk about, you know,

    119
    00:11:58,000 --> 00:12:04,000
    self-driving cars or even things that are relatively mundane. In some cases,

    120
    00:12:04,000 --> 00:12:10,000
    when we get into specific applications of that technology, for example, when we talk about

    121
    00:12:10,000 --> 00:12:16,000
    machine learning, we're talking about really a subset of that AI, actually a quite narrow subset

    122
    00:12:16,000 --> 00:12:23,000
    of it, where that is a set of algorithms that's actually using data to train the model to perform,

    123
    00:12:24,000 --> 00:12:31,000
    to learn like a human, right? And actually to start to be able to improve its performance over

    124
    00:12:31,000 --> 00:12:35,000
    time. And that's, I think, a key attribute of the machine learning-based models, because

    125
    00:12:36,000 --> 00:12:42,000
    many of the models which we used historically, particularly in manufacturing, were oftentimes

    126
    00:12:42,000 --> 00:12:49,000
    incapable of updating themselves over time or evolving, right? They were usually static models,

    127
    00:12:49,000 --> 00:12:53,000
    and you had to do quite a lot of work to actually update them. Or they were based on, you know,

    128
    00:12:53,000 --> 00:12:58,000
    physics and mechanistic-based models, where it was really locked into your understanding of a particular natural phenomenon.
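
    As a minimal sketch of this contrast (assuming scikit-learn; the data and coefficients are synthetic and purely illustrative), an online learner can update itself incrementally as new batches of process data arrive, rather than being refit from scratch like a static model:

    ```python
    # Minimal sketch of an ML model that improves as new process data arrives.
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    rng = np.random.default_rng(0)
    model = SGDRegressor(learning_rate="constant", eta0=0.01)

    for batch in range(20):  # each loop mimics a newly collected batch of data
        X = rng.normal(size=(50, 3))             # e.g., temperature, pH, feed rate
        y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
        model.partial_fit(X, y)                  # incremental update, not a refit

    print("learned coefficients:", model.coef_)  # approaches [1.5, -2.0, 0.5]
    ```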

    129
    00:12:58,000 --> 00:13:05,000
    Digital twins are actually not so much AI directly; it's really

    130
    00:13:05,000 --> 00:13:10,000
    just a digital representation of something physical in its broadest form. But where it

    131
    00:13:10,000 --> 00:13:16,000
    dovetails a little bit with AI and machine learning is that in a lot of cases now, companies like mine

    132
    00:13:16,000 --> 00:13:23,000
    and many others are using digital representations of manufacturing processes or, you know,

    133
    00:13:23,000 --> 00:13:29,000
    components of that manufacturing process, and coupling those to machine learning

    134
    00:13:29,000 --> 00:13:35,000
    models. You take that digital representation and pair it with a

    135
    00:13:35,000 --> 00:13:39,000
    machine learning model that's able to take in data from the process and actually

    136
    00:13:39,000 --> 00:13:45,000
    update that representation. And then using that data, it can make real-time projections on how

    137
    00:13:45,000 --> 00:13:50,000
    certain things that are happening based on a digital representation are likely to have

    138
    00:13:50,000 --> 00:13:55,000
    downstream effects on, for example, product quality, right? So in the most, I'd say,

    139
    00:13:56,000 --> 00:14:02,000
    direct and obvious case for where you could see benefit here, you can directly couple that

    140
    00:14:02,000 --> 00:14:07,000
    prediction to an active control loop. So that information that's coming in from your process

    141
    00:14:07,000 --> 00:14:12,000
    real-time is informing the digital twin, which is coupled with that machine learning model, which is

    142
    00:14:12,000 --> 00:14:18,000
    providing essentially direction to the process, and is able to move that process to make sure that

    143
    00:14:18,000 --> 00:14:23,000
    the predictions are going to give you the optimal outputs at the end. So it's more of, I'd

    144
    00:14:23,000 --> 00:14:30,000
    say, a part of how AI models are being used. It's not actually artificial intelligence itself.
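
    As a heavily simplified, hypothetical sketch of the pattern Ben outlines (all function names and numbers here are invented; a real system would use far more sophisticated models): live measurements update the twin's state, a predictive model projects a quality attribute from that state, and the projection feeds back into the control setpoint:

    ```python
    # Hypothetical sketch: digital twin + ML prediction closing a control loop.
    import numpy as np

    rng = np.random.default_rng(1)

    def read_sensor(setpoint: float) -> float:
        """Stand-in for a real-time process measurement (e.g., temperature)."""
        return setpoint + rng.normal(scale=0.2)

    def predict_quality(twin_temp: float) -> float:
        """Stand-in predictive model: quality peaks when the twin sits at 37.0."""
        return 100.0 - 2.0 * abs(twin_temp - 37.0)

    setpoint, target_quality = 35.0, 99.0
    twin_temp = setpoint  # the twin's current estimate of the process state

    for step in range(10):
        measurement = read_sensor(setpoint)
        twin_temp = 0.8 * twin_temp + 0.2 * measurement  # update the twin's state
        quality = predict_quality(twin_temp)             # real-time projection
        if quality < target_quality:                     # close the control loop
            setpoint += 0.5 if twin_temp < 37.0 else -0.5
        print(f"step {step}: twin={twin_temp:.2f}, predicted quality={quality:.1f}")
    ```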

    145
    00:14:31,000 --> 00:14:35,000
    Hopefully that's, you know, somewhat helpful to folks out there. I'm sure some of the subject

    146
    00:14:35,000 --> 00:14:40,000
    matter experts can give a much deeper dive and be more informative on the subject.

    147
    00:14:40,000 --> 00:14:44,000
    And if folks are interested, you know, come to the COP, and I'm sure we can do some deeper

    148
    00:14:44,000 --> 00:14:51,000
    discussions on the topic. Well, I know that the aircraft industry has used digital twins for a

    149
    00:14:51,000 --> 00:15:01,000
    long time to model jet engines, failure modes and effects, reliability, and predictive maintenance.

    150
    00:15:02,000 --> 00:15:10,000
    And over the last several years at various ISPE conferences, I've certainly heard case studies

    151
    00:15:10,000 --> 00:15:18,000
    of digital twins being used in pharma. Do you know of any significant early adopters

    152
    00:15:19,000 --> 00:15:26,000
    of digital twins or AI? Yeah. I mean, I'd say actually quite a large percentage of the

    153
    00:15:26,000 --> 00:15:36,000
    companies that I know, including my own, GSK, are actively using these models in various aspects of

    154
    00:15:36,000 --> 00:15:44,000
    both their development process and their actual manufacturing as well. So for example,

    155
    00:15:44,000 --> 00:15:48,000
    you know, we touched on a little bit before about, you know, the part that becomes

    156
    00:15:48,000 --> 00:15:55,000
    critical for sort of addressing regulatory considerations. There's a whole, you know,

    157
    00:15:55,000 --> 00:16:00,000
    big component of this that can happen and is happening right now that really, I'd say,

    158
    00:16:00,000 --> 00:16:08,000
    to some extent, is not sort of within the scope of, you know, regulated processes or

    159
    00:16:08,000 --> 00:16:13,000
    needs to be, you know, a concern from the regulatory standpoint. So in particular,

    160
    00:16:13,000 --> 00:16:20,000
    when we talk about things like process development, you know, or early R&D development activities,

    161
    00:16:21,000 --> 00:16:26,000
    we can use, and are using, a lot of these models to gain insight from our

    162
    00:16:26,000 --> 00:16:33,000
    previous data sets and to help us design, you know, new manufacturing processes. There are also

    163
    00:16:33,000 --> 00:16:39,000
    some cases, for example, right now that are active even for manufacturing processes at my company,

    164
    00:16:39,000 --> 00:16:45,000
    for example, where we're using it more for the purposes of monitoring and not so much to actually,

    165
    00:16:46,000 --> 00:16:50,000
    you know, itself directly impact an ongoing process. So a good example: we call it

    166
    00:16:51,000 --> 00:16:58,000
    multivariate statistical process monitoring or MSPM. And so you can use essentially a twin of

    167
    00:16:58,000 --> 00:17:05,000
    your manufacturing process, have that model predict where certain elements of that process

    168
    00:17:05,000 --> 00:17:11,000
    are maybe going to lead to, for example, an excursion in a certain critical process parameter

    169
    00:17:11,000 --> 00:17:16,000
    or a critical quality attribute. And you don't have to have that model do anything itself to

    170
    00:17:17,000 --> 00:17:21,000
    the actual control of the process. It can be completely separate from that. And so it's not

    171
    00:17:21,000 --> 00:17:26,000
    actually a part of your GMP process, but it can be a nice tool for, for example, operators who

    172
    00:17:26,000 --> 00:17:31,000
    are working on the line who have, you know, set protocols and things they need to do.

    173
    00:17:31,000 --> 00:17:35,000
    But they can also get an early warning sign from these models to let them know something may be

    174
    00:17:35,000 --> 00:17:40,000
    going in the wrong direction. And so it's a really great tool to have.
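
    A minimal sketch of that MSPM idea (assuming scikit-learn; the data is synthetic and the control limit is invented for illustration): fit a multivariate model on batches from normal operation, then flag new data whose Hotelling's T-squared statistic exceeds the limit. Nothing here touches process control; it is an early-warning monitor only.

    ```python
    # Minimal MSPM sketch: PCA model of normal operation + Hotelling's T-squared
    # as an early-warning statistic. Monitoring only; no process control.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    normal = rng.normal(size=(200, 6))  # 6 process variables under normal operation

    pca = PCA(n_components=3).fit(normal)

    def t_squared(X: np.ndarray) -> np.ndarray:
        """Hotelling's T-squared of each row in the PCA score space."""
        scores = pca.transform(X)
        return np.sum(scores**2 / pca.explained_variance_, axis=1)

    limit = np.percentile(t_squared(normal), 99)  # empirical 99% control limit

    new_batch = rng.normal(size=(1, 6)) + np.array([[0.0, 0.0, 4.0, 0.0, 0.0, 0.0]])
    t2 = t_squared(new_batch)[0]
    print(f"T2 of new batch: {t2:.1f} (limit {limit:.1f})")
    if t2 > limit:
        print("early warning: batch is outside the normal operating region")
    ```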

    175
    00:17:40,000 --> 00:17:44,000
    I think a lot of companies are using it already, even though it may not be something that they're, you know, submitting,

    176
    00:17:44,000 --> 00:17:49,000
    for example, to regulators. There are also companies, you know, who are, you know, very

    177
    00:17:49,000 --> 00:17:54,000
    much also working with some of these models and actually actively engaging with regulators on

    178
    00:17:54,000 --> 00:18:00,000
    more advanced and more, I'd say, what you can call higher impact applications as well.

    179
    00:18:00,000 --> 00:18:06,000
    So the whole scope, I think, is being explored. And again, GSK and many other companies,

    180
    00:18:06,000 --> 00:18:10,000
    Roche, for example, is leading one of the sub-teams. They're actively involved in this;

    181
    00:18:10,000 --> 00:18:14,000
    there are a lot of players in the area. You know, you mentioned multivariate

    182
    00:18:14,000 --> 00:18:21,000
    process control or statistical analysis. I have a close family member who 30 years ago,

    183
    00:18:21,000 --> 00:18:32,000
    working for Unilever Foods, modeled a pizza sauce process using multivariate statistical techniques

    184
    00:18:32,000 --> 00:18:39,000
    to predict how that process was behaving. So we've come a long way in 30 years.

    185
    00:18:39,000 --> 00:18:44,000
    And I think you've already given two good examples. Yeah. I mean, it's been getting

    186
    00:18:44,000 --> 00:18:49,000
    used for a long time, right? Very early on, you saw things like automation

    187
    00:18:49,000 --> 00:18:54,000
    with statistical process controls and those sorts of things, and bots. And that was more

    188
    00:18:54,000 --> 00:18:58,000
    kind of static systems that were programmed, right? So that was some of the early stages

    189
    00:18:59,000 --> 00:19:04,000
    leading into where we are today with true AI and more dynamic models. And I think

    190
    00:19:04,000 --> 00:19:09,000
    a lot of the early adopters kind of used that crawl, walk, run mentality where they

    191
    00:19:09,000 --> 00:19:16,000
    were more controlled, with a lot more human oversight and more static-type systems, then moving into more

    192
    00:19:16,000 --> 00:19:21,000
    dynamic type systems that are really thinking and making some decisions and changes on their own.

    193
    00:19:21,000 --> 00:19:25,000
    So I always encourage people to use that approach, right? To start small,

    194
    00:19:26,000 --> 00:19:33,000
    begin first with areas that are non-regulated, to understand what your scope is and what some

    195
    00:19:33,000 --> 00:19:41,000
    of the outcomes may be before you start using that analysis to make regulated or regulatory

    196
    00:19:41,000 --> 00:19:46,000
    type decisions. But you've seen a lot now. I come from a clinical background

    197
    00:19:46,000 --> 00:19:53,000
    and laboratory background, so GCP and GLP: a lot of the generative AI and large language models

    198
    00:19:54,000 --> 00:20:00,000
    are being used to establish study protocols for clinical research, and for patient enrollment

    199
    00:20:00,000 --> 00:20:06,000
    and recruitment at sites. But you've also seen it on the other side as well

    200
    00:20:06,000 --> 00:20:10,000
    from a machine learning perspective. And the big difference here is really the data: the data to train the

    201
    00:20:10,000 --> 00:20:16,000
    algorithms and then the data that provides the output or the intended use of the system. And

    202
    00:20:16,000 --> 00:20:22,000
    you're seeing that with regards to pharmacovigilance. So areas of greater efficiency

    203
    00:20:22,000 --> 00:20:28,000
    to make the industry more efficient and cost-effective, as well as safety from the perspective

    204
    00:20:28,000 --> 00:20:35,000
    of processing things like adverse event reports much quicker and drawing comparisons between what

    205
    00:20:35,000 --> 00:20:43,000
    might be minor adverse events that pile up over time, right? So it creates a dynamic where you

    206
    00:20:43,000 --> 00:20:49,000
    can process them more easily and more quickly, meet the regulated timelines for reporting of serious

    207
    00:20:49,000 --> 00:20:56,000
    adverse events, as well as looking at them across the board. So improving

    208
    00:20:56,000 --> 00:21:01,000
    efficiency and improving safety of the subject or of the patient as well.
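
    As a hypothetical illustration of that pharmacovigilance use (assuming scikit-learn; the tiny training set is invented, and a real system would require far more data and rigor), a text classifier can triage adverse event narratives by likely seriousness so that serious cases are routed for rapid reporting:

    ```python
    # Hypothetical sketch: triaging adverse event narratives by seriousness.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reports = [
        "patient hospitalized after severe anaphylactic reaction",
        "mild headache resolved without intervention",
        "life-threatening arrhythmia requiring emergency care",
        "transient nausea after first dose",
    ]
    labels = ["serious", "non-serious", "serious", "non-serious"]

    triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
    triage.fit(reports, labels)  # toy training set, illustration only

    new_report = "patient admitted to hospital with respiratory distress"
    print(triage.predict([new_report])[0])  # routes the case for priority review
    ```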

    209
    00:21:02,000 --> 00:21:09,000
    So what's holding industry back from faster and more broad industry adoption of AI?

    210
    00:21:10,000 --> 00:21:17,000
    Are validation and change management a concern? Or is it regulatory acceptance?

    211
    00:21:17,000 --> 00:21:23,000
    Yeah, there's been real fear in industry with respect to, hey, is this going to be accepted, right? Are

    212
    00:21:23,000 --> 00:21:28,000
    people going to really flock to this and are they going to embrace it? And I think it's building

    213
    00:21:28,000 --> 00:21:35,000
    that confidence. There first needed to be, again, a slow approach to building confidence and knowing

    214
    00:21:35,000 --> 00:21:40,000
    what you're being provided and that it's accurate and precise. And then moving from there and being

    215
    00:21:40,000 --> 00:21:46,000
    able to demonstrate that the intended use is indeed being fulfilled and you have the appropriate

    216
    00:21:47,000 --> 00:21:53,000
    evidence and documentation for regulators or in my case for a sponsor company, right?

    217
    00:21:53,000 --> 00:22:00,000
    So I think there was an initial reluctance to do some of this stuff, but it's much more

    218
    00:22:01,000 --> 00:22:07,000
    pronounced today. And it really started in what I feel is probably one of the highest-risk areas,

    219
    00:22:07,000 --> 00:22:13,000
    around medical devices that are treating patients and helping deliver medications

    220
    00:22:13,000 --> 00:22:21,000
    and so forth, and supporting diagnosis for patients in a medical setting. But that's coming around, and

    221
    00:22:21,000 --> 00:22:26,000
    people are starting to grasp and get more comfortable and understand the technology better

    222
    00:22:26,000 --> 00:22:32,000
    and things that we're doing within the COP and the new GAMP guide on AI that's coming out, where

    223
    00:22:32,000 --> 00:22:36,000
    we address some of those things for people, I think to give them a better level of comfort.

    224
    00:22:36,000 --> 00:22:44,000
    Well, great. Sort of coming to a wrap-up here, what key initiatives and topic areas

    225
    00:22:45,000 --> 00:22:51,000
    is the community of practice focusing on and also any further topics you want to discuss

    226
    00:22:51,000 --> 00:22:57,000
    from the GAMP guide on artificial intelligence? Yeah, maybe Eric, do you want to talk

    227
    00:22:57,000 --> 00:23:03,000
    a little bit more about GAMP? Sure. I'm happy to talk a little more about the AI guide that

    228
    00:23:03,000 --> 00:23:13,000
    is coming out. So it's really focused around addressing the GXP-compliant design, development,

    229
    00:23:13,000 --> 00:23:21,000
    operation, and use of AI in industry, in particular machine learning as a subset of AI.

    230
    00:23:22,000 --> 00:23:29,000
    But it really ties together and pulls together concepts that have been within

    231
    00:23:29,000 --> 00:23:36,000
    industry itself from a regulatory perspective, as well as within ISPE as a community and its

    232
    00:23:36,000 --> 00:23:43,000
    communities of practice. So GAMP, looking at risk-based computer systems validation,

    233
    00:23:43,000 --> 00:23:52,000
    or today CSA, to ensure that the systems that either use AI or are AI-based systems themselves

    234
    00:23:53,000 --> 00:23:58,000
    meet and are fit for their intended use. So verification from that perspective.
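
    As a minimal, hypothetical sketch of that kind of intended-use verification (this is not the GAMP guide's prescribed method; the thresholds and data below are invented for illustration), pre-declared acceptance criteria can be checked against a held-out test set before an ML component is released:

    ```python
    # Hypothetical sketch: pre-declared acceptance criteria checked against a
    # held-out test set as part of risk-based verification of an ML component.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, recall_score
    from sklearn.model_selection import train_test_split

    ACCEPTANCE = {"accuracy": 0.90, "recall": 0.85}  # from the intended-use spec

    X, y = make_classification(n_samples=500, random_state=3)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    pred = model.predict(X_test)

    results = {
        "accuracy": accuracy_score(y_test, pred),
        "recall": recall_score(y_test, pred),
    }
    for metric, threshold in ACCEPTANCE.items():
        status = "PASS" if results[metric] >= threshold else "FAIL"
        print(f"{metric}: {results[metric]:.3f} (limit {threshold}) -> {status}")
    ```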

    235
    00:23:59,000 --> 00:24:03,000
    It also touches on and pulls in concepts of knowledge management,

    236
    00:24:04,000 --> 00:24:09,000
    critical thinking, data integrity, all of those things that have been out there for quite a while,

    237
    00:24:09,000 --> 00:24:15,000
    but haven't been essentially geared towards AI specifically. So that's what the guide is really

    238
    00:24:15,000 --> 00:24:21,000
    pulling together, things that have been out there from a lifecycle perspective, how to validate,

    239
    00:24:21,000 --> 00:24:25,000
    how to test, how to do verification of some of these systems that have an AI or machine learning

    240
    00:24:25,000 --> 00:24:32,000
    component to, again, build that trust and that confidence within your organization and with the

    241
    00:24:32,000 --> 00:24:37,000
    regulators as well. But really taking what we've already known and applying it to now this

    242
    00:24:37,000 --> 00:24:44,000
    innovative technology in a manner in which it meets, again, those industry and regulatory expectations.

    243
    00:24:45,000 --> 00:24:51,000
    All right, crystal ball moment. You know, new technologies, there's early adopters,

    244
    00:24:51,000 --> 00:24:58,000
    and then the dam breaks, and all of a sudden everybody's doing it, and then you kind of reach

    245
    00:24:58,000 --> 00:25:03,000
    a mature stage where there's continuous innovation and improvement, but you've gone through that

    246
    00:25:05,000 --> 00:25:12,000
    change phase. Do you see that same sort of adoption curve happening with

    247
    00:25:13,000 --> 00:25:21,000
    AI and related technologies, or will it be very cautious, tepid, crawling for quite a while yet?

    248
    00:25:21,000 --> 00:25:27,000
    What do you think? I'm pretty optimistic, actually. I mean, I think, you know,

    249
    00:25:27,000 --> 00:25:32,000
    it's been pretty clear, particularly from the FDA side, but also from, you know,

    250
    00:25:32,000 --> 00:25:38,000
    EMA and outside of the U.S., that they're very supportive of the technology. And I think,

    251
    00:25:38,000 --> 00:25:43,000
    you know, to some extent, as Eric was kind of mentioning before,

    252
    00:25:43,000 --> 00:25:49,000
    the barriers that people are sort of seeing are often, I'd say, more anticipated barriers. And

    253
    00:25:49,000 --> 00:25:53,000
    we just are trying to kind of make sure that we're kind of proactively working with some of these

    254
    00:25:53,000 --> 00:25:58,000
    folks on, you know, the regulators' side, because they want the same as us, which is to,

    255
    00:25:58,000 --> 00:26:03,000
    you know, responsibly deploy the technology in such a way that we can really speed up,

    256
    00:26:03,000 --> 00:26:06,000
    you know, the ability to benefit patients, but, you know, ultimately,

    257
    00:26:06,000 --> 00:26:10,000
    you know, get the most value out of it, too. And so a lot of what I think, you know,

    258
    00:26:11,000 --> 00:26:15,000
    would be somewhat of an initial slowdown phase, because we're all just kind of trying to

    259
    00:26:15,000 --> 00:26:20,000
    figure out that initial ground game, will, I think, to some extent, you know,

    260
    00:26:20,000 --> 00:26:25,000
    get sorted out as long as we keep having those types of dialogue. You know, there are things

    261
    00:26:25,000 --> 00:26:28,000
    that are coming out of left field, you know, legislation is something like that, right? You

    262
    00:26:28,000 --> 00:26:33,000
    can't always anticipate, you know, the way legislation will work. And we obviously will

    263
    00:26:33,000 --> 00:26:38,000
    continue to sort of try to work, you know, to understand how that is going to develop or impact

    264
    00:26:38,000 --> 00:26:42,000
    the field more generally. But even in those situations, I think, you know, the regulators

    265
    00:26:42,000 --> 00:26:48,000
    often are very good at, you know, helping us to kind of find paths to, you know, both continue

    266
    00:26:48,000 --> 00:26:52,000
    to be compliant, like Eric was saying earlier, but also to try to, you know, minimize the amount

    267
    00:26:52,000 --> 00:26:56,000
    of impact that we get, you know, on a day-to-day basis. Frankly, I think a lot of,

    268
    00:26:56,000 --> 00:27:02,000
    you know, the sort of trajectory may largely be due to, you know, to some extent, trying to apply

    269
    00:27:02,000 --> 00:27:07,000
    it to lots of things that it ultimately may not be a great application for and sort of figuring

    270
    00:27:07,000 --> 00:27:13,000
    out where you get the most value out of it from the industry standpoint over time. And then seeing

    271
    00:27:13,000 --> 00:27:18,000
    where some of these models continue to evolve to be really, really helpful and get better. Other

    272
    00:27:18,000 --> 00:27:22,000
    ones maybe have some limitations that ultimately don't carry them through to more general use,

    273
    00:27:22,000 --> 00:27:25,000
    but I'm pretty optimistic about where things are going to go with it in general.

    274
    00:27:25,000 --> 00:27:27,000
    Eric, what's your prediction?

    275
    00:27:28,000 --> 00:27:33,000
    I think we're going to continue to be in this growth phase for quite a while in a very exponential

    276
    00:27:33,000 --> 00:27:39,000
    type of environment. I think it really got kicked off with generative AI and large language models.

    277
    00:27:39,000 --> 00:27:46,000
    Some of these barriers are sort of falling, especially with more regulatory guidance

    278
    00:27:46,000 --> 00:27:51,000
    coming out as of late, as well as the experience and the learnings that are taking

    279
    00:27:51,000 --> 00:27:57,000
    place in industry and things like the GAMP AI guide that's coming and more that will come out

    280
    00:27:57,000 --> 00:28:03,000
    of this AI COP itself within ISPE. So I think we're going to be there for a while. It's not

    281
    00:28:03,000 --> 00:28:12,000
    going to slow down anytime soon. There's a large motivation to continue to use AI to help industry

    282
    00:28:13,000 --> 00:28:20,000
    again to deliver products faster to market as well as safer, more effective products. And I think,

    283
    00:28:20,000 --> 00:28:24,000
    you know, that's going to continue for a while. I'm not seeing any slow up.

    284
    00:28:24,000 --> 00:28:27,000
    I'm seeing things continuing to accelerate for quite a while yet.

    285
    00:28:28,000 --> 00:28:35,000
    Well, it's a very exciting time that we live in, and I'm really appreciative of the fact that we've

    286
    00:28:35,000 --> 00:28:44,000
    got this ISPE AI community of practice and the new GAMP guide on artificial intelligence.

    287
    00:28:44,000 --> 00:28:50,000
    It's great to see industry really moving forward with these technologies.

    288
    00:28:52,000 --> 00:28:58,000
    We heard that it's pretty easy for individuals to get involved in this community of practice,

    289
    00:28:58,000 --> 00:29:03,000
    and so I encourage everyone to take a look at either this one or some other

    290
    00:29:04,000 --> 00:29:12,000
    community of practice that relates to a topic of interest for you. Eric and Ben, thank you.

    291
    00:29:14,000 --> 00:29:22,000
    So thank you both for your insight and your leadership on this topic of artificial intelligence.

    292
    00:29:24,000 --> 00:29:30,000
    That brings us to the end of another episode of the ISPE podcast, Shaping the Future of Pharma.

    293
    00:29:31,000 --> 00:29:37,000
    A big thank you to our guests, Ben Stevens and Eric Staib, for joining us and sharing their

    294
    00:29:37,000 --> 00:29:43,000
    thoughts about the new ISPE Artificial Intelligence Community of Practice and the

    295
    00:29:43,000 --> 00:29:50,000
    upcoming ISPE GAMP guide, Artificial Intelligence. Please be sure to subscribe so you don't miss

    296
    00:29:50,000 --> 00:29:57,000
    future conversations with the innovators, experts, and change makers driving our industry forward.

    297
    00:29:58,000 --> 00:30:05,000
    On behalf of all of us at ISPE, thank you for listening, and we'll see you next time as we

    298
    00:30:05,000 --> 00:30:12,000
    continue to explore the ideas, trends, and people who are shaping the future of pharma.
