
00:00 Defining innovation
01:00 Incremental innovation vs big changes
01:45 On designing back from the unmet need, and introducing innovation
(Interruption by a phone call)
02:54 Problem backwards vs solution forwards
03:55 On the ‘guided random walk’ and adoption of agility/ serendipity (low validity environments in pharma)
04:45 Prediction, hubris and certainty in process
06:50 The stopping rule in drug development (07:30 interruption by a phone call)
07:50 Zombie projects
08:00 The ‘Keytruda story’ as ‘the biggest poison in our industry’
08:50 On ‘busters’ vs blockbusters
09:20 On breadth of exploration in Discovery/ ‘pick the winners’/ ‘kill the losers’
11:05 The misaligned incentives that lead to decisions to continue - the ‘legions of zombies’
11:50 Spreading resource too broadly without good filters
12:20 On the development of better filters, and too much resource in the ecosystem
13:00 Does constraining resource lead to better outcomes?
15:00 On ‘Follow the Science’
16:30 On giving people the benefit of the doubt… Now what…?
(17:40 One more phone interruption - sorry! Leads to some audio spiking from here…)
19:00 On a disease like Alzheimer’s - pinning a tail on a large donkey
19:50 On ‘value signals’ in development
20:45 On hubris in selection of models
22:15 On allowing ‘the whole market’ to distort clinical development
25:00 How important are measures of innovation? The role of the incentive structure
25:30 On decision quality (and the distraction of ‘resources’)
26:30 How does more data improve decision quality?
27:00 On being successful or not being blamed for failure
29:00 On the feedback loop and its utility in pharma
30:20 What would a better incentive structure look like?
31:00 What do we mean by failure?
32:00 On the misattribution of error
33:30 The way we misuse language, biases, and the impact of language on ‘failure’
34:40 What are the most important lessons you’ve learned over time?
34:55 On the power of dissociating asset from infrastructure, idea from process
37:20 On the ‘organisation’ problem - separate nodes with a ‘project pilot’
38:20 On the translation of success in one therapeutic area into another - ‘process structures are not transferable’
39:15 On ‘retrenchment’ in major pharma, into fewer therapeutic areas
40:50 On the ‘nonsense’ of product profiling too early
42:30 ‘Instead of recognising you’re pinning the tail on a donkey, you think you’re aiming’
42:50 What drives David Grainger?
44:30 What is the role of tech and AI in early development?
45:00 What problem is AI solving?
45:30 Better predictions in a low validity environment
46:30 What kind of ‘training data’ would we use?
47:00 Unknown vs unknowable data
48:30 On which books David would recommend
50:30 What are David’s ambitions?
52:30 Does 2019 look very different from 1999?