January 2026
In this episode, host Bob Chew shares insights on accelerating technology adoption in the pharmaceutical industry. Dive into this discussion to explore the future of pharma innovation, including transitioning from document-centric to data-driven quality systems, leveraging AI and digital twins for enhanced manufacturing processes, and reimagining risk management with statistical tools.
1
00:00:00,000 --> 00:00:10,080
Welcome to the ISPE podcast, Shaping the Future of Pharma, where ISPE supports you on your journey, fueling innovation, sharing insights, thought
2
00:00:10,080 --> 00:00:14,240
leadership, and empowering a global community to reimagine what's possible.
3
00:00:15,734 --> 00:00:24,934
As you can imagine with regards to the topic of artificial intelligence, that broad net brought a lot of people across industry, Bob.
4
00:00:24,934 --> 00:00:35,039
So whether it be large pharma, biotech, clinical research, contract manufacturing organizations, we've had a lot of interest since the inception
5
00:00:35,039 --> 00:00:41,679
of the community of practice, and a lot of good work that has already been done within the COP that we can talk more about today.
6
00:00:41,679 --> 00:00:51,975
But I think Ben really covered it well with respect to what we represent and what we're trying to do with regards to bringing people together on a like topic with
7
00:00:51,975 --> 00:01:02,070
similar interest and really helping to shape and form that community with regards to education, within ISPE as well as
8
00:01:02,070 --> 00:01:03,109
industry itself.
9
00:01:03,109 --> 00:01:13,365
We have a deviation AI assistant, which facilitates drafting of investigation reports based on input entered by the
10
00:01:13,365 --> 00:01:14,484
investigator.
11
00:01:15,045 --> 00:01:24,725
It has a series of questions per investigation element, such as problem statement, initial impact assessment, scope, etcetera.
12
00:01:26,099 --> 00:01:37,140
The investigator then responds to each question, and then at the end, the AI assistant generates a structured narrative, saving,
13
00:01:37,140 --> 00:01:47,834
of course, time from repetitive manual drafting, increasing readability and quality of the final report.
14
00:01:48,395 --> 00:01:53,674
It also has multilingual capabilities, which is actually pretty cool.
15
00:01:54,520 --> 00:02:03,319
And right now, we have it in four languages, including English, German, French, and Japanese.
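The question-then-draft flow described above can be sketched as a small helper that collects the investigator's answers, one set of inputs per investigation element, and assembles them into a single structured drafting prompt for a language model. This is a hypothetical illustration, not the actual assistant: the element list, function name, and prompt wording are all assumptions.

```python
# Hypothetical sketch of the question-per-element drafting flow.
# The element names and prompt wording are illustrative assumptions.
ELEMENTS = ["Problem statement", "Initial impact assessment", "Scope"]

def build_draft_prompt(answers):
    """Assemble investigator answers (element name -> free text) into one
    structured prompt an LLM could turn into a report narrative."""
    sections = []
    for element in ELEMENTS:
        response = answers.get(element, "").strip()
        if not response:
            raise ValueError(f"Missing investigator input for: {element}")
        sections.append(f"## {element}\n{response}")
    return ("Draft a structured deviation investigation narrative "
            "from the following investigator inputs:\n\n"
            + "\n\n".join(sections))
```

In a real assistant, this prompt would then be passed to the language model (in whichever of the supported languages the investigator chose) and the generated narrative returned for human review.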
16
00:02:03,960 --> 00:02:14,074
You know, right now, I think one of the things we've been trying to do, you know, myself with Eric and Nick as well, and others, trying to kind of focus on building out some of those
17
00:02:14,074 --> 00:02:18,794
key topic areas that we know that the community is gonna be interested in.
18
00:02:18,794 --> 00:02:21,275
And, you know, as time goes on, I think that will continue to grow.
19
00:02:21,275 --> 00:02:31,330
But right now, we've been focusing a lot on establishing subcommittees that are actually, you know, driving some actual efforts in establishing content and also,
20
00:02:31,810 --> 00:02:33,409
plans going forward.
21
00:02:33,409 --> 00:02:43,685
So for example, we have three subcommittees right now that are actively engaged in areas around applications, model and data preparedness,
22
00:02:43,685 --> 00:02:47,444
and workforce, and regulatory more broadly.
23
00:02:47,444 --> 00:02:57,189
But, you know, as you can imagine, that's a relatively small piece of the overall puzzle, and so we expect that there's gonna be a lot more of these subcommittees that will grow out and
24
00:02:57,189 --> 00:02:58,550
become engaged.
25
00:02:58,550 --> 00:03:08,305
And I think the other thing is and, you know, Eric, obviously, you mentioned the connection already with GAMP, but we do have larger groups that are already doing a lot of work that dovetails with this.
26
00:03:08,305 --> 00:03:10,944
You know, for example, GAMP, but also the Pharma 4.0 group.
27
00:03:10,944 --> 00:03:21,025
And so we're establishing a more, I'd say, you know, direct and regular connection between all that existing infrastructure and ISPE and making sure that it's all kinda tied into a broader, you know,
28
00:03:21,025 --> 00:03:23,460
movement forward in AI as a whole.
29
00:03:23,780 --> 00:03:28,980
What I do know is that there's a focus everywhere because there's value.
30
00:03:28,980 --> 00:03:29,460
Right?
31
00:03:30,260 --> 00:03:34,740
I can talk about Takeda, and Takeda has been very intentional.
32
00:03:35,365 --> 00:03:44,885
One of our objectives, obviously, is to become a digital biopharmaceutical company.
33
00:03:45,205 --> 00:03:55,340
So we have been very intentional in creating an engine that will drive this digital transformation that we're striving,
34
00:03:55,580 --> 00:04:02,780
to achieve, including a lot of, obviously, training.
35
00:04:02,780 --> 00:04:12,805
We have many different courses and certifications that allow many of our employees
36
00:04:13,284 --> 00:04:18,689
to gain understanding up to the level that they want to get.
37
00:04:18,689 --> 00:04:19,009
Right?
38
00:04:19,009 --> 00:04:29,009
There's some training that allows some of our employees to even develop their own AI companion tools.
39
00:04:29,009 --> 00:04:35,855
Because we're in such a highly regulated industry, a lot of people like to jump directly to the compliance.
40
00:04:35,855 --> 00:04:42,654
And how do we ensure that regulators, that industry is gonna be accepting of such innovative technology?
41
00:04:42,654 --> 00:04:47,535
And that's, you know, the big question, you know, the elephant in the room most of the time.
42
00:04:48,330 --> 00:04:51,449
But this community of practice is much broader than that.
43
00:04:51,449 --> 00:05:00,410
As Ben mentioned, we started with really three subcommittees, and it's beyond just the compliance aspects of AI and machine learning itself as well.
44
00:05:01,145 --> 00:05:11,225
So really focusing on larger type concepts, use cases within industry, you know, whether it be regulated or non regulated, GxP or non-GxP,
45
00:05:11,305 --> 00:05:16,425
and things such as what he talked about in one of our subcommittees around workforce preparedness.
46
00:05:16,425 --> 00:05:26,000
So there's a lot of companies now that are really facing it, and I know my own company is: how do you get people to understand how to use AI?
47
00:05:26,560 --> 00:05:27,839
What's the best approach?
48
00:05:27,839 --> 00:05:32,399
There's a learning curve, especially when it comes to generative AI and large language models.
49
00:05:32,704 --> 00:05:34,305
Prompting is a big thing.
50
00:05:34,305 --> 00:05:34,704
Right?
51
00:05:34,704 --> 00:05:44,944
I'm sure all of us have experienced it with ChatGPT with respect to how do you give the large language model the right prompts to return the
52
00:05:44,944 --> 00:05:46,545
information you're looking for.
53
00:05:46,545 --> 00:05:46,785
Right?
54
00:05:46,920 --> 00:05:51,800
It can be challenging, just like we've seen with Siri and other things like that.
55
00:05:51,800 --> 00:05:53,400
You have to prompt it correctly.
56
00:05:53,400 --> 00:06:02,120
So getting people within your company to understand how to use it, how not to use it, and how to be most effective in using it is extremely important as well.
57
00:06:02,144 --> 00:06:06,144
Especially if you provide the environment for it.
58
00:06:06,144 --> 00:06:06,384
Right?
59
00:06:06,384 --> 00:06:16,500
And when we talk about the AI labs, for people to experiment, I think that also helps people feel more comfortable and
60
00:06:16,500 --> 00:06:25,860
helps us demystify AI and GenAI in a way, so it actually helps us augment capabilities and helps people feel more comfortable.
61
00:06:26,100 --> 00:06:29,699
But to your point, yes, we do have governance.
62
00:06:29,699 --> 00:06:29,939
Right?
63
00:06:30,394 --> 00:06:36,235
So we might have solutions that could be developed.
64
00:06:36,475 --> 00:06:41,995
However, not every solution is approved for wider implementation.
65
00:06:42,490 --> 00:06:48,729
First of all, we need to understand the problem that we are, you know, solving.
66
00:06:49,529 --> 00:06:53,689
We are also evaluating for scalability.
67
00:06:53,930 --> 00:06:54,490
Right?
68
00:06:54,569 --> 00:07:00,655
Is this a solution that is scalable across the entire organization?
69
00:07:01,774 --> 00:07:04,095
And what's the level of effort?
70
00:07:04,095 --> 00:07:06,735
Also, what's the value creation?
71
00:07:07,535 --> 00:07:13,949
Because everything takes time, and time also represents cost.
72
00:07:14,350 --> 00:07:16,350
So what is the value creation?
73
00:07:16,350 --> 00:07:26,625
How do these digital tools that are being created contribute to
74
00:07:26,625 --> 00:07:33,985
our vision of value creation of being better, faster, more efficient.
75
00:07:34,305 --> 00:07:38,865
So, we have governance in place to determine that.
76
00:07:39,029 --> 00:07:49,910
Artificial intelligence, machine learning, and digital twins each involve sophisticated statistical techniques as their foundations.
77
00:07:50,230 --> 00:07:53,235
What's the difference between these technologies?
78
00:07:53,634 --> 00:07:54,035
Yeah.
79
00:07:54,035 --> 00:07:55,235
I can take this one.
80
00:07:55,235 --> 00:08:00,194
But just to say right up front, as we've talked about, the COP is a very diverse group.
81
00:08:00,194 --> 00:08:09,310
And so I am certainly not a deep subject matter expert on this, but I know from some of the discussions with some of the really brilliant folks that work deeply in the area some general aspects
82
00:08:09,310 --> 00:08:09,789
of it.
83
00:08:09,789 --> 00:08:11,870
So I can touch on it a little.
84
00:08:11,870 --> 00:08:17,229
So AI is really, I'd say, the broadest, you know, delineation of the technology.
85
00:08:17,229 --> 00:08:17,470
Right?
86
00:08:17,470 --> 00:08:27,564
It's essentially the computer science field that's focused on actually generating solutions that will actually perform tasks that are very much like human intelligence.
87
00:08:27,564 --> 00:08:27,805
Right?
88
00:08:27,805 --> 00:08:31,165
And so when we talk about AI, it's incredibly broad.
89
00:08:31,165 --> 00:08:31,404
Right?
90
00:08:31,404 --> 00:08:40,629
You can talk about, you know, for example, large language models, but you can also talk about, you know, self driving cars or even things that are relatively mundane.
91
00:08:40,710 --> 00:08:50,945
We have an enterprise DD&T organization that establishes the
92
00:08:50,945 --> 00:09:01,105
framework for the enterprise and the strategy that will enable Takeda to identify and address the unmet needs across the different
93
00:09:01,105 --> 00:09:05,899
business units that we have through shared investments.
94
00:09:06,379 --> 00:09:12,220
They steer the DD&T spend across Takeda and drive enterprise innovation.
95
00:09:12,940 --> 00:09:23,975
As part of that enterprise DD&T organization, we have business partners who sit on the DD&T leadership team, and they represent,
96
00:09:24,695 --> 00:09:31,815
many TET organizations across Takeda, and TET is an acronym for Takeda Executive Team.
97
00:09:32,190 --> 00:09:42,190
And what I mean by that, to your question, is these organizations are like global manufacturing and supply and
98
00:09:42,190 --> 00:09:49,595
quality, R&D, Japan, PDT or plasma derived therapies, among others.
99
00:09:49,914 --> 00:09:59,980
So we have the enterprise DD&T, or Data, Digital and Technology, organization, and then we have DD&T heads in all
100
00:09:59,980 --> 00:10:06,059
of these sister organizations or business units across the organization.
101
00:10:06,379 --> 00:10:14,485
The structure makes us more agile to drive value for our users and customers.
102
00:10:15,284 --> 00:10:25,860
And based on the aspiration I mentioned earlier of becoming a digital biopharmaceutical company, yes, we are working with AI, GenAI,
103
00:10:26,659 --> 00:10:32,980
virtual reality, augmented reality, in all parts of the organization.
104
00:10:32,980 --> 00:10:42,235
And, yes, we are also piloting and using digital twins, not only in manufacturing, but even in R&D.
105
00:10:43,834 --> 00:10:48,154
So that's just some of the examples that come to mind.
106
00:10:48,315 --> 00:10:58,639
Quite a large percentage of the companies that I know, including my own, GSK, are actively using these models in
107
00:10:58,639 --> 00:11:07,774
various aspects of both their development process, but also their actual manufacturing as well.
108
00:11:07,774 --> 00:11:18,014
So for example, you know, we touched on a little bit before, you know, the part that becomes critical for sort of addressing regulatory considerations.
109
00:11:18,014 --> 00:11:28,179
There's a whole, you know, big component of this that can happen and is happening right now that really, I'd say, to some extent, is not sort of within
110
00:11:28,179 --> 00:11:36,355
the scope of, you know, regulated processes or needs to be, you know, a concern from the regulatory standpoint.
111
00:11:36,355 --> 00:11:46,370
So in particular, when we talk about things like process development, you know, or early R&D development activities, we can use a lot
112
00:11:46,370 --> 00:11:56,769
of these models to gain insight from our previous datasets and to help us design, you know, new manufacturing processes.
113
00:11:57,345 --> 00:12:07,504
There are also some cases right now that are active even for manufacturing processes at my company, for example, where we're using it more for the purposes of
114
00:12:07,504 --> 00:12:13,629
monitoring and not so much to actually, you know, itself directly impact an ongoing process.
115
00:12:13,629 --> 00:12:19,709
So a good example, we call it multivariate statistical process monitoring, or MSPM.
116
00:12:19,709 --> 00:12:29,934
And so you can use essentially a twin of your manufacturing process, have that model predict where certain elements of that process
117
00:12:29,934 --> 00:12:37,855
are maybe gonna lead to, for example, an excursion in a certain critical process parameter or a critical quality attribute.
118
00:12:38,335 --> 00:12:43,360
And you don't have to have that model do anything itself to the actual control of the process.
119
00:12:43,360 --> 00:12:47,440
It can be completely separate from that, and so it's not actually a part of your GMP process.
120
00:12:47,440 --> 00:12:55,440
But it can be a nice tool for, for example, operators who are working on the line who have, you know, set protocols and things they need to do.
121
00:12:55,565 --> 00:13:01,164
But they can also get an early warning sign from these models to let them know something may be going in the wrong direction.
122
00:13:01,164 --> 00:13:03,325
And so it's a really great tool to have.
123
00:13:03,404 --> 00:13:09,725
I think a lot of companies are using it already even though it may not be something that they're, you know, submitting, for example, to regulators.
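The MSPM idea described above, a statistical twin that watches the process and raises an early warning without touching GMP control, can be sketched with a small PCA model and a Hotelling T² score. This is a minimal illustration under stated assumptions: the data are synthetic, the dimensions are arbitrary, and the 99th-percentile control limit is a made-up convention, not a validated one.

```python
import numpy as np

# Minimal MSPM sketch: fit a PCA model on in-control batch data, then flag
# new observations whose Hotelling T^2 exceeds an empirical control limit.
# All numbers here are illustrative, not from a real process.

def fit_mspm(X, n_components=2):
    mu, sd = X.mean(axis=0), X.std(axis=0)      # autoscaling parameters
    Z = (X - mu) / sd
    _, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                     # PCA loadings
    lam = S[:n_components] ** 2 / (len(X) - 1)  # latent score variances
    return mu, sd, P, lam

def t2_score(x, model):
    mu, sd, P, lam = model
    t = ((x - mu) / sd) @ P                     # project into latent space
    return float(np.sum(t ** 2 / lam))          # Hotelling T^2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # in-control training batches
model = fit_mspm(X)
limit = np.percentile([t2_score(x, model) for x in X], 99)

# A batch drifting far along the dominant process direction trips the alarm.
mu, sd, P, _ = model
excursion = mu + 8 * sd * P[:, 0]
```

Note that nothing here feeds back into the process: the model only scores observations, which matches the monitoring-not-control use described in the conversation.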
124
00:13:10,389 --> 00:13:20,629
From the manufacturing perspective, if we talk about digital twins, we're piloting that, like I said, in R&D
125
00:13:20,629 --> 00:13:23,350
and in manufacturing.
126
00:13:24,845 --> 00:13:34,845
From the quality perspective, one of the stumbling areas that I can share has to
127
00:13:34,845 --> 00:13:45,190
do with data quality, right, and perhaps the inconsistency that we
128
00:13:45,190 --> 00:13:55,685
have found in data structure, particularly when you are dealing with different data sources.
129
00:13:56,165 --> 00:14:07,220
So having data that is not standardized or from different systems has been a challenge, at least for the digital tools that we are working
130
00:14:07,220 --> 00:14:08,580
on in quality.
131
00:14:09,059 --> 00:14:19,139
To that extent, Takeda has put in place a data governance organization that is working on
132
00:14:19,139 --> 00:14:21,220
different aspects of data.
133
00:14:22,274 --> 00:14:28,595
Because, obviously, that's the raw material, right, for all of these initiatives.
134
00:14:28,914 --> 00:14:38,970
So master data, data quality to ensure that the requirements for new systems are in place. And for tools that are being developed that need to
135
00:14:38,970 --> 00:14:49,115
leverage whatever systems we have right now, that we have better guidance in terms of what we will need in order to make those
136
00:14:49,434 --> 00:14:56,235
initiatives or tools successful in achieving what we want to achieve.
137
00:14:57,595 --> 00:15:02,154
In some cases, that might be, you know, data cleanup and standardization.
138
00:15:02,154 --> 00:15:09,269
So, yeah, I understand what you mean about some of this taking years.
139
00:15:10,070 --> 00:15:20,105
We have had some situations where we have tried some tools sometimes, and we have to recall that, you know,
140
00:15:20,105 --> 00:15:24,024
AI, GenAI, we have kind of learned as we go.
141
00:15:24,024 --> 00:15:24,424
Yeah.
142
00:15:24,424 --> 00:15:34,519
And so in many cases, we have designed some solutions with the output and
143
00:15:34,519 --> 00:15:38,200
not the input in mind, and that has represented some challenges.
144
00:15:38,200 --> 00:15:43,000
AI is really, I'd say, the broadest, you know, delineation of the technology.
145
00:15:43,000 --> 00:15:43,159
Right?
146
00:15:43,159 --> 00:15:53,414
It's essentially the computer science field that's focused on actually generating solutions that will actually perform tasks that are very much like human intelligence.
147
00:15:54,134 --> 00:15:57,095
When we talk about AI, it's incredibly broad.
148
00:15:57,174 --> 00:16:06,289
You can talk about, for example, large language models, but you can also talk about self driving cars or even things that are relatively mundane.
149
00:16:06,289 --> 00:16:16,434
In some cases, when we get into specific applications of that technology, for example, when we talk about machine learning, we're talking about really a subset of
150
00:16:16,434 --> 00:16:27,300
that AI, actually quite a narrow subset of it, where a set of algorithms is actually using data to train the model to perform,
151
00:16:27,779 --> 00:16:34,980
to learn like a human, right, and actually to start to be able to improve its performance over time.
152
00:16:34,980 --> 00:16:45,985
And that's, I think, a key attribute of the machine learning based models because many of the models which we used historically, particularly in manufacturing, were oftentimes
153
00:16:45,985 --> 00:16:50,865
incapable of updating themselves over time or evolving.
154
00:16:50,865 --> 00:16:51,105
Right?
155
00:16:51,105 --> 00:16:55,399
They were usually static models, and you had to do quite a lot of work to actually update them.
156
00:16:55,399 --> 00:17:04,039
Or they were based on, you know, physics and mechanistic based models, where it was really locked into your understanding of a particular natural phenomenon.
157
00:17:04,795 --> 00:17:13,115
Digital twins are actually not so much AI directly, but it's really just a digital representation of something physical in its broadest form.
158
00:17:13,115 --> 00:17:23,329
But where it dovetails a little bit with AI and machine learning is that in a lot of cases now, companies like mine and many others are using digital representations
159
00:17:23,329 --> 00:17:33,650
of manufacturing processes or, you know, components of that manufacturing process to then essentially couple that to machine learning models where
160
00:17:33,805 --> 00:17:44,525
you're able to use that digital representation and then couple it with a machine learning model that's able to take in data from that digital representation and actually update that representation.
161
00:17:44,765 --> 00:17:54,830
And then using that data, it can make real time projections on how certain things that are happening, based on the digital representation, are likely to have downstream
162
00:17:54,830 --> 00:17:56,990
effects on, for example, product quality.
163
00:17:56,990 --> 00:17:57,230
Right?
164
00:17:57,230 --> 00:18:07,444
So in the most, I'd say, direct and obvious case for where you could see benefit here, you can directly couple that prediction to an active
165
00:18:07,444 --> 00:18:08,404
control loop.
166
00:18:08,404 --> 00:18:18,509
So that information that's coming in from your process real time is informing the digital twin, which is coupled with that machine learning model, which is providing essentially direction to the
167
00:18:18,509 --> 00:18:25,229
process and is able to move that process to make sure that the predictions are gonna give you the most optimal outputs at the end.
168
00:18:25,630 --> 00:18:31,805
So it's more of a, I'd say, part of how digital, or how AI, models are being used.
169
00:18:31,805 --> 00:18:34,125
It's not actually artificial intelligence itself.
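The predict-then-correct loop just described can be illustrated with a deliberately tiny toy: the "twin" here is nothing but a one-line function predicting quality from temperature, and the controller nudges the setpoint whenever the predicted quality drops below target. Every number (the 37 °C optimum, the target of 98, the gain of 0.5) is a made-up assumption for illustration, not a real process model.

```python
# Toy digital-twin control loop. The quality model, setpoints, and gain are
# all illustrative assumptions, not a real pharmaceutical process.

def twin_predict_quality(temp_c):
    """Stand-in 'twin': predicted quality falls as temperature drifts from 37 C."""
    return 100.0 - 2.0 * abs(temp_c - 37.0)

def control_step(temp_c, target_quality=98.0, gain=0.5):
    """If the twin predicts sub-target quality, nudge temperature back toward 37 C."""
    if twin_predict_quality(temp_c) < target_quality:
        temp_c += gain * (37.0 - temp_c)
    return temp_c

temp = 41.0                # process has drifted 4 C high
for _ in range(10):        # closed loop: predict, then correct
    temp = control_step(temp)
```

In a real deployment, the one-line predictor would be replaced by the ML-updated process model fed by live data, and the simple nudge by a proper controller; the loop structure (measure, predict via the twin, adjust) is the part this sketch is meant to show.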
170
00:18:34,125 --> 00:18:35,805
Is this gonna be accepted?
171
00:18:35,805 --> 00:18:36,125
Right?
172
00:18:36,125 --> 00:18:39,484
Are people really gonna flock to this, and are they gonna embrace it?
173
00:18:39,859 --> 00:18:42,019
And I think it's building that confidence.
174
00:18:42,019 --> 00:18:52,019
There first needed to be, again, a slow approach to building confidence and knowing what you're being provided and that it's accurate and precise, and
175
00:18:52,019 --> 00:19:02,305
then moving from there and being able to demonstrate that the intended use is indeed being fulfilled, and you have the appropriate evidence and documentation
176
00:19:02,305 --> 00:19:05,744
for regulators or, in my case, for a sponsor company.
177
00:19:05,744 --> 00:19:06,065
Right?
178
00:19:06,440 --> 00:19:16,759
So I think there was an initial reluctance to do some of this stuff, but it's much more pronounced today. And really, the area
179
00:19:16,759 --> 00:19:21,255
that I feel is probably one of the highest risk areas is around medical devices.
180
00:19:21,255 --> 00:19:31,654
Medical devices that are treating and helping deliver medications, and assisting in diagnosis for patients, in a medical setting.
181
00:19:32,375 --> 00:19:38,980
But that's coming around, and people are starting to grasp and get more comfortable and understand the technology better.
182
00:19:39,220 --> 00:19:49,059
And things that we're doing within the COP, and the new GAMP guide on AI that's coming out where we address some of those things, I think, give people a better level of comfort.
183
00:19:49,304 --> 00:19:58,504
I think we all see the value of applying these technologies internally.
184
00:19:59,944 --> 00:20:05,399
For one, you know, it allows us to be more predictive.
185
00:20:05,880 --> 00:20:16,679
And I think it changes the mindset of going from firefighting to more risk awareness and being
186
00:20:16,679 --> 00:20:27,595
more predictive on everything that you do. There's definitely a need to augment understanding
187
00:20:27,914 --> 00:20:30,714
because you don't really validate AI.
188
00:20:30,714 --> 00:20:31,034
Right?
189
00:20:31,034 --> 00:20:31,835
It learns.
190
00:20:32,329 --> 00:20:42,970
So it's something that is still kind of an area that we all, particularly regulators, need to get
191
00:20:43,289 --> 00:20:49,884
into understanding these models a little bit more because it does require a shift in mindset.
192
00:20:50,284 --> 00:20:52,204
I've been pretty optimistic, actually.
193
00:20:52,204 --> 00:21:00,684
I mean, I think, you know, it's been pretty clear, at least, particularly from the FDA side, but also from, you know, EMA and outside of the U.
194
00:21:00,684 --> 00:21:00,765
S.
195
00:21:00,765 --> 00:21:04,710
that they're very supportive of the technology.
196
00:21:04,710 --> 00:21:15,465
I think, you know, to some extent, as Eric was kind of mentioning before, the barriers that people are sort of seeing are often, I'd say, anticipated
197
00:21:15,465 --> 00:21:16,105
barriers.
198
00:21:16,105 --> 00:21:26,984
And we're just trying to make sure that we're proactively working with some of these folks on the side of the regulators, because they want the same as us, which is to responsibly
199
00:21:26,984 --> 00:21:35,769
deploy the technology in such a way that we can really speed up, you know, the ability to benefit patients, but, you know, ultimately, you know, get the most value out of it too.
200
00:21:35,769 --> 00:21:44,569
And so a lot of what I think would be somewhat of an initial slowdown phase, because we're all just kinda trying to figure out that initial, you know, ground game.
201
00:21:45,095 --> 00:21:50,694
That will, I think, to some extent, you know, get sorted out as long as we keep having those types of dialogue.
202
00:21:51,174 --> 00:21:53,095
You know, there are things that come out of left field.
203
00:21:53,414 --> 00:21:55,335
You know, legislation is something like that.
204
00:21:55,335 --> 00:21:55,654
Right?
205
00:21:55,654 --> 00:22:05,960
You can't always anticipate, you know, the way legislation will work, and we obviously will continue to sort of try to work, you know, to understand how that is gonna develop or impact the field
206
00:22:05,960 --> 00:22:06,759
more generally.
207
00:22:06,759 --> 00:22:16,044
But even in those situations, I think, you know, the regulators often are very good at, you know, helping us to kinda find paths to, you know, both continue to be compliant, like Eric was saying
208
00:22:16,044 --> 00:22:21,884
earlier, but also to try to, you know, minimize the amount of impact that we get, you know, on a day to day basis.
209
00:22:21,884 --> 00:22:32,210
Frankly, I think a lot of the, you know, the sort of trajectory may largely be due to, you know, to some extent, trying to apply it to lots of things that it ultimately may not be
210
00:22:32,210 --> 00:22:39,410
a great application for and sort of figuring out where you get the most value out of it from the industry standpoint over time.
211
00:22:39,410 --> 00:22:49,224
And then seeing where some of these models continue to evolve to be really, really helpful and get better. Other ones may have some limitations that ultimately don't carry them through to more general
212
00:22:49,224 --> 00:22:49,704
use.
213
00:22:49,704 --> 00:22:52,585
But I'm pretty optimistic about where things are gonna go with it in general.
214
00:22:52,585 --> 00:22:59,859
I think we're gonna continue to be in this growth phase for quite a while in a very exponential type of environment.
215
00:22:59,859 --> 00:23:04,019
I think it really got kicked off with generative AI, large language models.
216
00:23:04,259 --> 00:23:14,575
Some of these barriers are sort of falling, especially with more regulatory guidance coming out recently, as well as the experience
217
00:23:14,575 --> 00:23:24,815
and the learnings that are taking place in the industry, and things like the GAMP AI guide that's coming out and more that will come out of this AI COP itself within ISPE.
218
00:23:25,309 --> 00:23:27,549
So I think we're gonna be there for a while.
219
00:23:27,549 --> 00:23:29,789
It's not gonna slow down anytime soon.
220
00:23:29,789 --> 00:23:40,024
There's a large motivation to continue to use AI, to help industry, again, to deliver products
221
00:23:40,024 --> 00:23:44,105
faster to market as well as safer, more effective products.
222
00:23:44,105 --> 00:23:46,744
And I think, you know, that's gonna continue for a while.
223
00:23:46,744 --> 00:23:48,904
I'm not seeing any slow up.
224
00:23:48,904 --> 00:23:52,690
I'm seeing things continuing to accelerate for quite a while yet.
225
00:23:52,690 --> 00:23:58,450
This brings us to the end of another episode of the ISPE podcast, Shaping the Future of Pharma.
226
00:23:58,450 --> 00:24:06,450
Please be sure to subscribe so you don't miss future conversations with the innovators, experts, and change makers driving our industry forward.
227
00:24:07,085 --> 00:24:17,565
On behalf of all of us at ISPE, thank you for listening, and we'll see you next time as we continue to explore the ideas, trends and people shaping
228
00:24:17,565 --> 00:24:18,924
the future of pharma.
