Rob Sutton is an incoming junior doctor in Wales and a former Parliamentary staffer. He is a recent graduate of the University of Oxford Medical School.
The combined effect of emergency measures which allow legislation to bypass parliamentary scrutiny, and a viral pandemic which requires the rapid interpretation of ever-changing and highly technical data, has exposed a troubling weakness at the heart of government: that our expert advisers, however talented and hard-working they might be as individuals, have left much to be desired.
This is not entirely the fault of the advisers themselves, but of the internal structures and incentives which Number 10 relies upon to provide advice. The organisations which Johnson’s administration turns to for expert opinion (SAGE, Public Health England, the Department of Health, and the Government Office for Science) hold effective monopolies within their own niches.
Despite the breadth of talent these groups pull from, and the impression of depth of available opinion, there is relatively little overlap of their briefs, and they are ultimately machines of consensus: built to produce a unified position, rather than competing proposals.
From a political communications perspective, this is ideal. Presenting a position drawn from the interpretation of ambiguous (and unstable) data as being scientific consensus gives some degree of protection from criticism.
Yet this is hardly a good approach to building policy. These problems, though longstanding, were dramatically exposed by the Covid-19 pandemic and the corresponding increase in the government’s reliance on expert advice. The natural monopoly of ideas held by these bodies of experts has led to a predictably narrow scope for policy debate.
This is a concern which has riled many in Parliament, who feel increasingly marginalised in favour of unelected experts who face no public scrutiny or internal competition. Steve Baker, ever the prolific organiser and influencer, has been among those leading calls for reform of expert advice in government, arguing that this should be addressed as a matter of priority in a letter to the Prime Minister.
A government which retreats from parliamentary scrutiny and has been defined by a vision of centralised control hardly encourages open discussion. Yet the importance of balancing contrasting advice has become, more than ever before, a critical requirement for effective policymaking. At the root of the problem is the question of what expert advisers should be doing. Is it their job to dispassionately report the available evidence? Or to interpret it in a broader societal and political context?
This uncertainty has been, in part, a problem of the Government’s own making. The unwavering fixation on “following the science” assumes “the science” to be an immutable corpus of knowledge.
This is an untrue and unhelpful representation. The scientific method demands a narrow and well-defined hypothesis, from which it follows that any interpretation should have a narrow and well-defined applicability. To test that hypothesis, metrics will be proposed to observe and quantify the phenomenon under investigation.
These metrics, being a representation of the phenomenon rather than the phenomenon itself, move us a degree away from reality (for instance, the number of positive test results is not the number of infections; it is a proxy). Data analysis and statistical methods move us a further degree away, as does one’s ultimate interpretation of what, if anything, that analysis tells us.
The power of the scientific method is therefore also its weakness – that we get results with narrow applicability, have to apply human biases to interpret them and then apply those findings to real world situations, with all their intractable messiness.
Add predictive methods such as modelling, which are extremely sensitive to both initial parameters and the specific model used, and the problems are compounded. To assume that there is a single fountain of scientific knowledge from which the answers to all our policy queries must unambiguously flow is a political fiction. And it is designed, rather cynically, to place those answers beyond reproach from the scientific laity.
We therefore have two issues which combine to limit the effectiveness of expert advice in government: an exclusive inner circle of advisers who hold an effective monopoly on policy proposals (even to the exclusion of parliament itself), relying on research data which inevitably has a narrow scope of applicability and is subject to differing interpretations. Science can tell us much about the world as it is; it is a powerful means of answering “what,” but on questions of “should” it is silent.
Under normal conditions, Parliamentary scrutiny would serve as a means of tempering the most extreme of Government policy suggestions. But under the emergency legislation enacted in March, we no longer enjoy this luxury. This has exposed the fragility of the Government’s market in expert policy advice.
Without internally competitive processes to broaden the conversation and provide alternative options, there is a worrisome absence of incentives to encourage policymakers to stray from the consensus. With an effective monopoly on advice, there is little reason for ideas to be good, or even workable, as long as they are presented with an air of agreement.
This is the reason why interdisciplinary and intradisciplinary competition for policy proposals is so vital. Interdisciplinary competition would allow us to balance the public health implications of Covid-19 against broader considerations of, for instance, the economy and mental health. Intradisciplinary competition would allow conflicting interpretations of data to be debated in a rigorous manner.
Yet capturing this kind of competition, which comes so naturally to the private sector, is notoriously difficult to embed within the public sector. There are ways it might be built into the current organisational structure. “Red teams” — groups whose primary purpose is to play devil’s advocate, thereby exposing weaknesses and unforeseen complications — would be a step in the right direction.
Baker and, ironically, Dominic Cummings (who has frequently been a source of frustration amongst those lamenting the Government’s overreliance on a small number of expert voices) are among those who have argued for their implementation.
There are few who would, I suspect, attempt to make the case that the expert advice this Government has so heavily relied upon during the Coronavirus pandemic has been an overwhelming success. But the current parliamentary term is young, and if reforms in the procurement of expert advice were implemented with determination, we should quickly see them paying off.