CHAPTER X: EVIDENCE-BASED POLICY MAKING
This chapter examines the ‘Evidence-Based Policy Making’ (EBPM) account of the use of evidence in policy-making. I start by providing two examples of EBPM in practice. I then review the main features of the EBPM account, consider its historical roots and increased popularity in the early 2000s, and finally explore some critical responses to the account and its application. Although widely criticised, the EBPM account remains influential in government thinking.
To illustrate the impact of EBPM, I will discuss two contrasting examples. My first example comes from the field of criminal justice, in the form of a popular and ostensibly plausible intervention that evidence has shown actually has a negative impact. ‘Scared Straight’ involves organised visits to prison facilities by young offenders, or children at risk of becoming offenders. The approach was popularised in a 1978 US documentary following a group of juvenile delinquents through a three-hour session with convicts at Rahway State Prison. In the film, a group of inmates known as the ‘lifers’ scream at and terrify the young visitors in an attempt to ‘scare them straight’, that is, to frighten the teenagers into avoiding prison life. Programmes using this approach include confrontational ‘rap’ sessions in which adult inmates share graphic stories about prison life with the juveniles. Less confrontational, more educational variants include inmates sharing their life stories and describing the choices that led to their imprisonment. The aim in each case is to deter those at risk by showing them the reality of incarceration. The programme appears intuitively rational and has achieved some popularity in the US and UK.
There have been nine randomised controlled trials of the ‘Scared Straight’ approach across eight US states (two of them in Michigan). A Cochrane review of all the trials concludes:
‘Scared Straight and other “juvenile awareness” programs are not effective as a stand-alone crime prevention strategy. More importantly… these programs likely increase the odds that children exposed to them will commit offenses in future.’ (Petrosino et al. (2013), p.14)
In other words, there is strong research evidence suggesting that the intervention has increased crime. After accounting for bias, reoffending was estimated to be 68% higher amongst juveniles who participated in the programme than amongst those who did not. Participant reoffending was higher than that of offenders who did not receive the intervention in seven of the nine studies considered in a recent review (WWCR, 2015). Vulnerable young people continue to be subjected to an intervention which repeated, robust research has shown, if anything, increases the risk of their committing future offences.
Moving to the field of medicine, my second example concerns a policy and practice choice which is important for every parent: Sudden Infant Death Syndrome (SIDS). SIDS is the sudden, unexpected and unexplained death of an infant under 1 year of age, with the onset of the lethal episode apparently occurring during sleep. SIDS remains the leading cause of unexpected death in infants with over 200 deaths annually in the UK alone.
The most common advice to parents from health professionals in the 1980s was for babies to sleep face down, as this was viewed as likely to reduce the risk of choking compared with sleeping on the back. In the 1980s, however, large case-control studies in several countries identified prone (face-down) sleeping as the major risk factor. Compared with back sleeping, stomach sleeping carries up to 12.9 times the risk of SIDS. Addressing the concerns of parents and health professionals, careful study of the infant airway has shown that healthy infants placed on their back (supine) for sleep are less likely to choke on vomit than infants sleeping prone. Fortunately, in this example the research evidence has radically affected healthcare practice. Information for parents and healthcare professionals has enabled a one-third reduction in SIDS over the last decade. Encouraging parents to place their infants in a supine position for sleep was associated with a fall in the US SIDS rate from 1.2 to 0.53 per 1000 live births between 1992 and 2000 (Rasinski et al., 2003). Repeated saturation of infant-care guidelines with the ‘Back to Sleep’ message resulted in a very low national prevalence of prone infant sleep wherever it was implemented (Colson et al. (2005), Hackett (2007), Von Kohorn et al. (2010)).
These examples illustrate that whether policy makers listen to evidence can have a major effect on individuals and society.
Definition of evidence-based policy
There is a broad range of definitions of ‘evidence-based policy’, ranging from the highly restrictive to the almost all-encompassing. In the early 1990s, thinking on evidence-based policy, often emanating from the evidence-based medicine movement, focused on a particular methodology producing a particular type of evidence: systematic reviews of randomised controlled trials aimed at assessing the effectiveness of health and social policy initiatives. By the late 1990s, definitions often took a broader perspective; for example, Huw Davies defined EBPM as ‘an approach that helps people make well informed decisions about policies, programmes and projects by putting the best available evidence from research at the heart of policy development and implementation’ (Davies (1999), Nutley (2007), Pawson (2006)).
The question of what counts as ‘evidence’ is critical in understanding the EBPM perspective. The EBPM account conventionally privileged an empirical ‘scientific’ epistemology developed in the natural sciences in deciding how the quality of evidence is assessed. Proponents argued decisions and policies should be based on ‘facts’ acquired by means of accepted methods of gathering and analysing information. In particular, they focussed on the study designs used to produce this evidence – suggesting verifiable systematic reviews of randomised controlled trials as a ‘gold standard’ of evidential validity (Sedlačko and Staroňová, 2015).
EBPM seeks to apply a rationalist paradigm by linking policy analysis to policy action whereby ‘policymakers seek to manage economic and social affairs “rationally” in an apolitical, scientized manner such that social policy is more or less an exercise in social technology’ (Schwandt (1997):74). This thinking fits well with an ideal type of rational policy development involving a process which evidence could influence at various stages. The sequential policy-making account has appeared in various guises, such as the policy-making system (Smith, 1976) and the policy cycle (May and Wildavsky, 1979), which Sabatier and Jenkins-Smith labelled ‘the stages heuristic’ (Sabatier and Jenkins-Smith (1993): 1).
Diagram 1 (below) illustrates a typical policy-making cycle model consisting of the following stages (Cairney (2013):1):
- Agenda setting. Identifying problems that require government attention, deciding which issues deserve the most attention and defining the nature of the problem.
- Policy formulation. Setting objectives, identifying the cost and estimating the effect of solutions, choosing from a list of solutions and selecting policy instruments.
- Legitimation. Ensuring that the chosen policy instruments have support. It can involve one or a combination of: legislative approval, executive approval, seeking consent through consultation with interest groups, and referenda.
- Implementation. Establishing or employing an organization to take responsibility for implementation, ensuring that the organization has the resources (such as staffing, money and legal authority) to do so, and making sure that policy decisions are carried out as planned.
- Evaluation. Assessing the extent to which the policy was successful or the policy decision was the correct one; if it was implemented correctly and, if so, had the desired effect.
- Policy maintenance, succession or termination. Considering if the policy should be continued, modified or discontinued.
The use of ‘evidence’ is conceived at various stages, for example in agenda setting (such as analysis of the strategic context and environment), policy formulation (such as cost-benefit modelling of options) and evaluation.
Diagram 1: Typical ‘policy making cycle’ (Cairney, 2013)
EBPM in historical perspective
EBPM builds on Enlightenment-period thinkers such as Francis Bacon (1561–1626), who promoted a critical, empirical and inductive system built on careful experimentation. The twentieth-century philosopher and reformer John Dewey argued for a philosophy of ‘pragmatism’ or ‘instrumentalism’ which is mirrored in EBPM thinking. Interestingly, given the concerns that EBPM’s ambition to ‘de-politicise’ policy making has raised, Dewey himself clearly situated this thinking in the context of his advocacy for democracy (Sanderson, 2004). In the late 19th century the Social Survey Movement emerged, with Booth’s ‘Life and Labour of the People of London’ and Rowntree’s ‘Poverty’ studies prompting a raft of social reform in England (Gordon, 1973).
Twentieth-century developments included scientific management (Taylor, 1939), the development of performance and productivity measurement building on cost accounting (Sargiacomo and Gomes, 2011), and the extension of economic indicators into the social indicator movement (Cobb and Rixford, 1998) in the 1960s and 70s. Donald Campbell’s influential 1969 article ‘Reforms as Experiments’ argued that social reforms should be routinely linked to rigorous experimental evaluation. ‘Social engineering’ built on ‘social experiments’ became a popular concept in the USA and in social science. Examples include American social experiments conducted in response to concern that providing even modest income subsidies to the poor would reduce motivation to find and keep jobs; Rossi and Lyall (1976) showed that work disincentives were in fact smaller than anticipated. In the field of prison rehabilitation, Langley et al. (1972) tested whether group therapy reduced re-offending rates; the results suggested that this approach to group therapy did not affect re-offending (Campbell and Russo (1999), Nutley (2007), Berk et al. (1985)). More generally, however, meaningful experiments proved more difficult than anticipated to deliver in the field, and even robust experiments were often ignored by policy makers.
Both the US and UK governments in the 1980s questioned the value of academic research in social policy and proposed significant cuts to its funding. In the UK, social science research council funding was reduced by £6 million between 1983 and 1986 (ESRC (2005), Smith (2013a), Bulmer (1982), Flather (1982)).
The influence of the rational account continued to expand in the field of health and medicine. The term ‘evidence-based medicine’ appears to have been first used by investigators from a US university in the 1990s, where it was defined as ‘a systemic approach to analyze published research as the basis of clinical decision making’. The term was more formally defined by David Sackett et al. (1996) as ‘the conscientious and judicious use of current best evidence from clinical care research in the management of individual patients’. In the UK, the House of Lords Select Committee on Science and Technology concluded in 1988 that too little good quality, policy-relevant research was being completed; this led to the first NHS research strategy in 1991. The ‘Evidence-Based Medicine’ (EBM) movement mirrored changes in expectations that health care should deliver ‘value for money’, and societal changes in how clinical decisions should be justified, from ‘autonomy based on professional status to shared knowledge based on clinical trials’ (Coote, 2004:5). (Nutley et al. (2000), Coote et al. (2004), Claridge and Fabian (2005))
Since the late 1990s there has been a revival of interest in linking research with policy in the UK and many other countries, with evidence of the transfer of EBPM discourses, for example from the UK to Australia and New Zealand. More recently, randomised controlled trials (which have an important role in the EBPM approach in evaluating policy options) have been used in policy areas well beyond their traditional field of health (Stoker (2019):69). Particularly for health issues, there has also been significant interest in evidence-based approaches in Canada and the Netherlands (Smith (2013a), Banks (2009), Lavis (2006), Bekker et al. (2010), Nutley et al. (2002)). Why did the disillusionment of the 1980s turn to renewed interest from the late 1990s? In the next section I explore the underlying social, governmental and public management changes facilitating this transition.
Reasons for the expansion of EBPM (1997-2015)
In this section I outline how social, governmental and public management changes in the UK and elsewhere set up a context which was highly favourable to the concepts and promises of evidence-based policy. In response to funding pressures, social science adopted a ‘utilitarian turn’ and generated a large volume of new policy-relevant knowledge.
It has been suggested that social changes, including an increasingly dynamic and complex society and a reduction in public deference to governments, created a context supportive of EBPM. The evidence for declining trust in politicians is mixed: although surveys reported declines from the 1960s to the 1990s, Bowler and Karp (2004) point out that there is a lack of earlier data against which to test whether the levels of trust in the 1950s and 1960s were unusually high. In apparent contradiction to this narrative, survey evidence from the European Union suggests increasing satisfaction with ‘democracy’ in member states through the 1990s. For UK citizens, satisfaction in 1992 was 48% (and dissatisfaction 47%), improving to 62% satisfaction (28% dissatisfaction) in 1999 (European Commission, 2000). However, it is possible that trust in politicians was affected by specific ‘scandals’. Using data from the US and the UK, Bowler and Karp (2004) show that scandals involving legislators can have a negative influence on their constituents’ attitudes toward institutions and the political process. UK examples include the ‘cash for questions’ affair involving five Conservative politicians, which led to the Gordon Downey report in 1997, and concerns about several MPs making inappropriate claims for expenses in 2009.
New Labour’s manifesto for the 1997 election advocated a ‘post-ideological’ approach to government: ‘New Labour is a party of ideas and ideals but not of outdated ideology. What counts is what works. The objectives are radical. The means will be modern’ (Labour (1997):1). Once in government, New Labour’s Modernising Government white paper continued this theme stating that
‘… policy decisions should be based on sound evidence. The raw ingredient of evidence is information. Good quality policy making depends on high quality information, derived from a variety of sources – expert knowledge; existing domestic and international research; existing statistics; stakeholder consultation; evaluation of previous policies …’ (Cabinet Office (1999):31).
New Labour in government supported the idea that government should actively intervene to address social problems, which helped to set the scene for renewed interest in EBPM in the UK. Solesbury (2001) argued that EBPM reflected a change in the nature of politics; specifically through less ideology, less class-based party politics and empowered consumers (Coote, 2004, Powell (1999):23, Davies (1999):3, Nutley et al. (2000), Sanderson (2009), Smith (2013a)).
Paradoxically, it has been argued that EBPM is itself ideological, in that it supports particular values compatible with the dominant cultural paradigms that define how society functions. Angela Packwood (2002) suggested that these values defined ‘effectiveness as a quantitative measure, professionalism as performativity, teaching as technicist delivery, research as randomised controlled trials, and ‘credible’ evidence as statistical meta-analysis’ (Packwood (2002):267).
Public management changes providing a supportive context for EBPM ideas included the development of a managerial agenda in public services involving the extensive use of performance indicators, audit and evaluation. This was the case both within the UK (with the expansion of the role of the Audit Commission) and internationally (with increased accountability for policy effectiveness through organisations such as the OECD). Greenhalgh and Russell (2006) argue that ‘the normative goals of evidence-based practice (finding out what works and then implementing it) are closely aligned with some of the new public management principles (such as defining explicit performance outputs and promoting efficiency and cost-effectiveness)’ (p.37). (Hood (1991), Nutley and Webb (2000), Hammersley (2005)).
In terms of social science, the late 1990s saw a new mood in the funding of social research, which Bill Solesbury termed the ‘utilitarian turn’ and which Ken Young suggested showed ‘even a return to an expectation that social science should be useful’. Whereas research in the 1980s focused on understanding society, during the 1990s there was an increased expectation from funders (both government and philanthropic) that social science would also produce guidance on how to improve society. The election of the New Labour government in 1997 led to a major expansion in interest and investment in evidence-based policy in the UK; for example, the ESRC saw its budget rise from under £70 million to over £110 million between 2000 and 2002. Investment continued under the Conservative-led governments from 2010, with the ESRC grant reaching £159 million in 2014-15. Increased funding enabled an expansion in social science research and evaluation activity (Solesbury (2001), Young et al. (2002), ESRC (2015)).
The UK coalition government from 2010 to 2015 continued an explicit commitment to EBPM, including the formation of a network of seven (sector-based) ‘What Works Centres’ to advise policy makers and practitioners. According to the Cabinet Office, this initiative aims to improve the way government and other organisations create, share and use (or ‘generate, transmit and adopt’) high quality evidence for decision-making. It supports more effective and efficient services across the public sector at national and local levels. The network is made up of seven independent What Works Centres and two affiliate members. Most Centres receive a combination of Government and Research Council funding, but the size of their budgets varies considerably. The Education Endowment Foundation (EEF), NICE and the Centre for Ageing Better benefit from very substantial grants/endowments (£125m endowment, £66m grant and £50m grant respectively). Other Centres receive much smaller grants ranging from £2.3m to £4m over three years (Cabinet Office (2013), Bristow et al. (2015)).
Criticism of EBPM
The rational account of evidence-based policy has been subject to significant criticism both in theory and in practice. Theoretical critiques have included those from constructivist and post-modern perspectives. From the constructivist perspective, policy learning is viewed as a socially conditioned argumentative process which questions the ends and assumptions of policies. Policy development is seen as a ‘process of deliberation which weighs beliefs, principles and actions under conditions of multiple frames for the interpretation and evaluation of the world’ (Dryzek (1990) quoted in Van der Knaap (1995), p. 202). However, there is a real problem in reconciling the postmodernist position with the practical requirements of collective decision making and action, which rely on assumptions of ‘grounded knowledge’ (Sanderson, 2002).
In terms of practical impacts, government officials such as members of the Prime Minister’s Strategy Unit argued strongly in 2004 that the EBPM account was delivering effectively, albeit with ‘minor tweaks’ to the original model. Philip Davies, the government’s chief social researcher in the Unit, argued that the Labour government had delivered many examples of evidence-based government and policy (‘far too many to chronicle in one paper’ – Davies (2004):22). Examples quoted include the Sure Start programme and the New Deal for Communities. Sure Start was an area-based initiative, announced in 1998 with the aim of ‘giving children the best possible start in life’ through improvement of childcare, early education, health and family support, with an emphasis on outreach and community development. It had similarities to the Head Start programme in the US. Evaluation of mature schemes concluded that ‘Children in Sure Start areas showed better social development, exhibiting more positive social behaviour and greater independence/self-regulation than their counterparts in non-Sure Start areas’ (Melhuish et al. (2010):160). The New Deal for Communities was a regeneration programme in some of England’s most deprived neighbourhoods. The Programme was designed to transform 39 deprived neighbourhoods (each of around 10,000 residents) in England over a decade. The 39 NDC partnerships implemented local regeneration schemes, each funded by on average £50m of Programme spend (a total of £1,700m), and successfully narrowed some targeted gaps in outcomes compared with other parts of the country (Batty et al., 2010).
However, the assertion that these programmes were designed and delivered in line with the evidence base has been challenged by other researchers. For example, Anna Coote (2004) reviewed the operation of five major government programmes of the time (including Sure Start and the New Deal for Communities) and concluded that the programmes were not strongly evidence-based: they ‘have been designed, by and large, on the basis of informed guesswork and expert hunches, enriched by some evidence and driven by political and other imperatives’ (p. xi).
Empirical studies of the connection between research evidence and the policy process include case studies and comparative studies. Case study research has focussed on particular policy areas, for example exploring how far a policy area (such as sustainability) has been influenced by identifiable research, or following the impact of a particular piece of research (such as work on repeat victimisation) on policy. Comparative studies have looked at institutional arrangements that seem to encourage research use in policy making, and at the relative receptivity of different nations to research. Much of the focus has been on national-level policy, especially in the health field. Most of the research has been based on interviews with practitioners and (particularly) researchers, and most showed no evidence of being informed by policy theory (Daniels and Solesbury (2002), Wilensky (1997), Weiss (1999), Marinetto (1999), Hanney et al. (2000), Laycock (2001), James (2002), Embrett and Randall (2014), Oliver et al. (2014)).
Evidence of the impact of attempts to implement EBPM on policy is limited. A review of EBPM in health services found that the account worked best for ‘practice policy’, that the relation between ‘service policies’ and research evidence was generally weak, and that the direct influence of research on ‘governance policies’ was negligible (Packwood, 2002). Assessments of the government’s use of evidence, and of whether particular public policy outcomes reflect available evidence, overall show limited and highly selective use of evidence by government. For example, an independent review found that only 2% of projects in a major ‘evidence based programme’ around crime prevention were well targeted in the light of the evidence. Another review found that even well-established National Institute for Health and Care Excellence (NICE) guidelines on caesarean interventions, which were known to over 93% of medical respondents, had changed the behaviour of barely half of them (Nutley and Homel (2006):18, Shepherd (2014), Smith (2013a)). Reflecting on over a decade of EBPM, Cameron et al. (2011) conclude that the assumption that the use of evidence would improve the outcome of the policy process ‘remains relatively untested empirically’, and Smith identified ‘a growing sense of disappointment in the relationship between research and policy’ (Smith (2013a):19).
The finding of limited direct impact of evidence on policy and practice was in fact not at all surprising: it reflected what had been known about the impact of research for at least twenty years. Two decades earlier, Carol Weiss had reviewed the extant literature and interviewed American mental health officials on their use of research in reaching decisions. Her primary research found a ‘remarkable receptivity to research’ in broad terms: to gain a general direction and background, to keep up with developments in the field, and to reduce uncertainties about policies and programs. She identified a number of different ways in which research is used in practice (Box 1) (Weiss, 1979).
Box 1: The many ways research is used (Weiss, 1979)
- The knowledge-driven model: where insights from basic research are applied directly to develop new policy
- The problem-driven model: where policy makers turn to research to provide insights to identified social issues
- The interactive model: where researchers are one of many parties who engage in discussions with policy makers working on a given topic
- The political model: where research is used as ‘ammunition’ to support a pre-existing viewpoint
- The tactical model: where the act of conducting research is useful to policy makers, rather than any content of the research
- The enlightenment model: where research-derived concepts and understandings permeate policy making. Weiss viewed this as the most frequently occurring model.
- Research as part of the intellectual enterprise of society: Social science and policy interact, influencing each other and being influenced by the larger fashions of social thought.
However, past literature and Weiss’s own research suggested that concrete, direct use of research was rare. Nevertheless, government proclamations have predominantly adopted the rational EBPM account and promoted ‘instrumental’ use of evidence in policy development. Alternative theories of policy making highlighting the many factors shaping policy decisions, and research showing that policy makers are unlikely to use research directly, have largely been absent from the discussion in official documents (Weiss (1979), Smith (2013b)). In later chapters I will explore the implications for the EBPM approach of taking seriously the evidence of Weiss and of wider policy research.
Critics have focussed on the ‘dual follies’ of assuming that evidence provides objective answers to policy questions, and that policy making can become rational and depoliticised (Clarence (2002), Parsons (2002)). In the following sections I review the ‘dual follies’ in more detail and then summarise the response to these criticisms from EBPM proponents.
Criticism 1: the folly of rational policy making
The first ‘folly’ concerns the nature of the policy-making process itself, and the sequential account’s concept of a series of stages into which technical evidence can be ‘fed’. Researchers have found that this seriously underestimates the messiness of policy making in practice, and the importance of argumentation in the policy process. This is particularly the case for the complex, cross-cutting issues which policy makers frequently have to deal with (Majone (1989), Sanderson (2009), Pawson (2006), Greenhalgh and Russell (2006), Mitton et al. (2007), Head (2008)).
Sanderson (2004) argued that EBPM’s instrumental rationality ignores the political, normative and organisational context of policy making. In terms of politics, evidence-based policy appealed to those working to impose rationality on increasingly complex policy issues but failed to acknowledge the political context in which policy is made (Boaz et al. (2008), Cameron et al. (2011)). In practice, evidence tends to lack power to impact the policy process compared to political interests (Heineman, 2002). Walker (2000) argued that research is but one influence on the policy process and is not always influential, supplanted by ‘the powerful political forces of inertia, expediency, ideology and finance’ (pp. 162-3). Pawson (2006:viii) memorably refers to evidence as ‘the six-stone weakling of the policy world’ confronted by the ‘four-hundred pound brute called politics’.
In terms of the normative and organisational context, by accepting the agenda of powerful political elites the EBPM account could be accused of excluding those with alternative views. Relying on specific types of allowable evidence could devalue democratic debate and underplay the ethical, moral and political perspectives of less powerful citizens (Hammersley (2005), Schwandt (1997)). Sanderson (2009) supports Westbrook’s (1993) argument that normalising the domination of powerful elites undermines any alternative normative vision. He highlights the danger that the ‘reality’ of evidence-informed policy making becomes the new ideal, legitimising the status quo and undermining ideals of increasing the influence of knowledge in the guidance of human affairs.
One of the reasons policy making is rarely strictly ‘rational’ is that policy makers operate in an information-rich environment and are therefore subject to information overload. Simon’s (1957) concept of ‘bounded rationality’ suggests that humans experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information. Social science is only one of multiple sources of information and ideas for policy makers (Weiss, 1980). Lindblom (1959) suggested that in practice decision makers respond to information overload through ‘the science of muddling through’, an approach which Etzioni (1967) saw as applying particularly to less fundamental, more routine (incremental) decisions (Heineman, 2002).
But even if policy makers are able to process the available evidence, it may not affect their central values or beliefs. Nancy Shulock (1999) identified the ‘paradox of policy analysis’, in which existing research provides pointers that policy makers rarely follow because of the messiness and uncertainties of political reality. More fundamentally, policy makers may not be open to evidence changing their core beliefs at all, leaving open only the possibility of it affecting more peripheral aspects, a point I return to in the next chapter (Young et al., 2002).
Criticism 2: the folly of evidence providing all the answers
Social science may have over-promised, or governments’ expectations may have been unrealistic. Experience shows that human behaviour does not follow simple universal laws, social problems have contested definitions and solutions, interventions cannot be standardised, outcomes cannot be robustly measured, and researchers’ values bear on objective investigation. In summary, scientific evidence hardly ever directly solves policy problems in the short term (Sarason (1978), Howard (1985), Prus (1992), Bogenschneider et al. (2000), Sabatier (1987), Hammersley (2005), Contandriopoulos et al. (2010), Stevens (2011)).
The focus of EBPM on particular types of ‘evidence’ or knowledge, for example through classifications of the degree of evidential support for practices, raises questions about the criteria used to judge the rigour of ‘evidence’. Study design has generally been used as the key marker of the strength of evidence, moderated by critical appraisal of the quality with which a study was undertaken. EBPM advocates often place different study designs in a hierarchy to determine the standard of evidence in support of a particular practice or programme. Almost always, systematic reviews of randomised controlled experiments are placed at or near the top of the hierarchy (Table x).
Table x: A simplified hierarchy of evidence based on study design (Nutley et al., 2013)
Level 1: Well-conducted, suitably powered randomised controlled trial
Level 2: Well-conducted, but small and under-powered randomised controlled trial
Level 3: Non-randomised observational studies
Level 4: Non-randomised study with historical controls
Level 5: Case studies without controls
Critics argue that it would be more appropriate and productive to focus on the methodological ‘aptness’ of research to the questions it seeks to address. What counts as good evidence depends on what we want to know, for what purposes, and in what contexts we envisage that evidence being used. Nutley et al. (2013) argue that hierarchies neglect many important issues, undervalue good observational studies, lose useful evidence and provide an insufficient basis for making policy recommendations (see also Petticrew and Roberts, 2003; Greenhalgh and Russell, 2006).
Responses to criticisms of EBPM
In response to these extensive criticisms, some proponents of EBPM have adopted more pragmatic and flexible epistemologies. Others have sought to develop new areas of scientific evidence in the hope of influencing policy makers.
Faced with the epistemological challenges, including the fundamental constructivist critiques, Sanderson (2009) rejects what he labels a ‘postmodernist counsel of despair’, retaining faith that policy making can promote increased social welfare through the application of reason. He calls this a ‘neo-modernist’ position founded on twin pillars: complexity and pragmatist philosophy. More generally, proponents of EBPM have acknowledged the importance of the experience, expertise and judgement of decision makers; the need to look at cost-effectiveness as well as ‘what works?’; and tensions with political ideologies. EBPM proponents have also agreed on the need for careful management of expectations of evidence, and that evidence is only one of many factors influencing policy and competing for the attention of decision makers (Nutley and Webb, 2000; Mulgan, 2005; Davies, 2004).
For example, Geoff Mulgan, head of the Prime Minister’s Strategy Unit from 2002 to 2004, argued that for many areas of policy the aim should be for policy to be ‘evidence-informed’ rather than ‘evidence-based’. Mulgan conceded that the latter could probably only be meaningfully applied to ‘stable’ policy fields where ‘governments broadly know what works, there is a strong evidence base and the most that can be expected is incremental improvement’. He recognised three limits to EBPM: democracy (stating that ‘people, and representative politicians have every right to ignore evidence’), ambiguity (particularly where different groups have diametrically opposed views or interests) and the differing time horizons of researchers and decision makers (Mulgan, 2005). In retrospect it is ironic that one of the fields he suggested as ‘stable’ was macroeconomics, three years before the global economic crash.
EBPM proponents have also attempted to address some of the critiques by adopting and promoting ‘scientific’ approaches in new ways. The use of randomised controlled trials has been promoted beyond its traditional dominance in medicine to other policy domains. A linked development has been the promotion of behavioural science as a source of new behavioural tools for government (such as ‘nudges’), which are often amenable to formal trials (John, in Stoker, 2019).
The EBPM account assumes a natural-science empirical epistemology and a rationalist paradigm that instrumentally links policy development to research findings. Whilst apparently aligned with less ideological approaches to government, in practice it reinforces dominant social and political values. It promotes a simplistic version of policy development that over-estimates the guidance available from social research and under-plays the importance of politics and democracy in public policy development. I will explore in later chapters what the implications for EBPM might be if these issues were afforded the consideration they deserve. Nevertheless, the account has proved popular with UK governments of all political persuasions over the last two decades and remains the dominant account promoted in official documents. It is therefore important that this thesis considers the potential value of the EBPM account in understanding research use in Combined Authorities, whilst maintaining awareness of the various criticisms of the account.
References
Banks, G. (2009) ‘Evidence-based policy making: What is it? How do we get it?’.
Batty, E., Beatty, C., Foden, M., Lawless, P., Pearson, S. and Wilson, I. (2010) ‘The New Deal for Communities experience: A final Assessment (The New Deal for Communities Evaluation: Final Report–Volume 7)’.
Bekker, M., van Egmond, S., Wehrens, R., Putters, K. and Bal, R. (2010) ‘Linking research and policy in Dutch healthcare: infrastructure, innovations and impacts’, Evidence & Policy: A Journal of Research, Debate and Practice, 6(2), pp. 237-253.
Berk, R. A., Boruch, R. F., Chambers, D. L., Rossi, P. H. and Witte, A. D. (1985) ‘Social policy experimentation a position paper’, Evaluation Review, 9(4), pp. 387-429.
Boaz, A., Grayson, L., Levitt, R. and Solesbury, W. (2008) ‘Does evidence-based policy work? Learning from the UK experience’, Evidence & Policy: A Journal of Research, Debate and Practice, 4(2), pp. 233-253.
Bogenschneider, K., Olson, J. R., Linney, K. D. and Mills, J. (2000) ‘Connecting research and policymaking: Implications for theory and practice from the family impact seminars’, Family Relations, 49(3), pp. 327-339.
Bowler, S. and Karp, J. A. (2004) ‘Politicians, scandals, and trust in government’, Political Behavior, 26(3), pp. 271-287.
Bristow, D., Carter, L. and Martin, S. (2015) ‘Using evidence to improve policy and practice: the UK what works centres’, Contemporary social science, 10(2), pp. 126-137.
Bulmer, M. (1982) ‘Models of the Relationship between Knowledge and Policy’, in M. Bulmer (ed.).
Cabinet Office (1999) ‘Modernising Government’.
Cabinet Office (2013) What Works Network. Available at: https://www.gov.uk/guidance/what-works-network (Accessed: 19 May 2017).
Cairney, P. (2013) ‘Policy concepts in 1000 words: The policy cycle and its stages’, Paul Cairney: Politics and Policy.
Cameron, A., Salisbury, C., Lart, R., Stewart, K., Peckham, S., Calnan, M., Purdy, S. and Thorp, H. (2011) ‘Policy makers’ perceptions on the use of evidence from evaluations’, Evidence & Policy: A Journal of Research, Debate and Practice, 7(4), pp. 429-447.
Campbell, D. T. and Russo, M. J. (1999) Social experimentation. Sage Publications, Inc.
Clarence, E. (2002) ‘Technocracy reinvented: the new evidence based policy movement’, Public Policy and Administration, 17(3), pp. 1-11.
Claridge, J. A. and Fabian, T. C. (2005) ‘History and development of evidence-based medicine’, World journal of surgery, 29(5), pp. 547-553.
Cobb, C. W. and Rixford, C. (1998) Lessons learned from the history of social indicators. Redefining Progress San Francisco.
Colson, E. R., McCabe, L. K., Fox, K., Levenson, S., Colton, T., Lister, G. and Corwin, M. J. (2005) ‘Barriers to following the back-to-sleep recommendations: insights from focus groups with inner-city caregivers’, Ambulatory Pediatrics, 5(6), pp. 349-354.
Contandriopoulos, D., Lemire, M., Denis, J.-L. and Tremblay, É. (2010) ‘Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature’, Milbank Quarterly, 88(4), pp. 444-483.
Coote, A., Allen, J. and Woodhead, D. (2004) ‘Finding out what works’, Building knowledge about complex, community-based initiatives. London: Kings Fund.
Daniels, D. and Solesbury, W. (2002) ‘Sustainable Livelihoods Approach: tracing the influence of research on policy and practice’, London: Department for International Development.
Davies, P. (1999) ‘What is evidence‐based education?’, British journal of educational studies, 47(2), pp. 108-121.
Davies, P. (2004) ‘Is evidence-based government possible?’, paper presented at the Fourth Annual Campbell Collaboration Colloquium.
Embrett, M. G. and Randall, G. (2014) ‘Social determinants of health and health equity policy research: exploring the use, misuse, and nonuse of policy analysis theory’, Social Science & Medicine, 108, pp. 147-155.
ESRC (2005) ‘SSRC and ESRC: the first forty years’.
ESRC (2015) ‘ESRC Annual Report and Accounts 2014-15’.
Etzioni, A. (1967) ‘Mixed-scanning: a “third” approach to decision-making’, Public Administration Review, 27(5), pp. 385-392.
European Commission (2000) ‘Eurobarometer: Public Opinion in the European Union’.
Flather, P. (1982) ‘Pulling through: conspiracies, counterplots, and how the SSRC escaped the axe’, pp. 353-72.
Gordon, M. (1973) ‘The social survey movement and sociology in the United States’, Social Problems, 21(2), pp. 284-298.
Greenhalgh, T. and Russell, J. (2006) ‘Reframing evidence synthesis as rhetorical action in the policy making drama’, Healthcare Policy, 1(2), pp. 34-42.
Hackett, M. (2007) Unsettled sleep: The construction and consequences of a public health media campaign. ProQuest.
Hammersley, M. (2005) ‘Is the Evidence-Based Practice Movement Doing More Good than Harm? Reflections on Iain Chalmers’ Case for Research-Based Policy Making and Practice’, Evidence and Policy, 1(1), pp. 85-100.
Hanney, S., Packwood, T. and Buxton, M. (2000) ‘Evaluating the Benefits from Health Research and Development Centres A Categorization, a Model and Examples of Application’, Evaluation, 6(2), pp. 137-160.
Head, B. W. (2008) ‘Three lenses of Evidence‐Based policy’, Australian Journal of Public Administration, 67(1), pp. 1-11.
Heineman, R. A. (2002) The world of the policy analyst: Rationality, values, and politics. CQ Press.
Hood, C. (1991) ‘A public management for all seasons?’, Public administration, 69(1), pp. 3-19.
Howard, G. S. (1985) ‘The role of values in the science of psychology’, American Psychologist, 40(3), pp. 255.
James, S. (2002) British Government: a reader in policy making. Routledge.
Labour Party (1997) ‘New Labour: Because Britain Deserves Better’, The Labour Party Manifesto.
Langley, M., Kassebaum, G., Ward, D. A. and Wilner, D. M. (1972) Prison Treatment and Parole Survival.
Lavis, J. N. (2006) ‘Research, public policymaking, and knowledge‐translation processes: Canadian efforts to build bridges’, Journal of Continuing Education in the Health Professions, 26(1), pp. 37-45.
Laycock, G. (2001) ‘Hypothesis-based research: the repeat victimization story’, Criminology and Criminal Justice, 1(1), pp. 59-82.
Lindblom, C. E. (1959) ‘The science of “muddling through”’, Public Administration Review, 19(2), pp. 79-88.
Majone, G. (1989) Evidence, Argument and Persuasion in the Policy Process. New Haven; London: Yale University Press.
Marinetto, M. (1999) Studies of the policy process: A case analysis. Prentice Hall Europe.
May, J. V. and Wildavsky, A. B. (1979) The policy cycle. SAGE Publications, Incorporated.
Melhuish, E., Belsky, J. and Barnes, J. (2010) ‘Evaluation and value of Sure Start’, Archives of disease in childhood, 95(3), pp. 159-161.
Mitton, C., Adair, C. E., McKenzie, E., Patten, S. B. and Perry, B. W. (2007) ‘Knowledge transfer and exchange: review and synthesis of the literature’, Milbank Quarterly, 85(4), pp. 729-768.
Mulgan, G. (2005) ‘Government, knowledge and the business of policy making: the potential and limits of evidence-based policy’, Evidence & Policy: A Journal of Research, Debate and Practice, 1(2), pp. 215-226.
Nutley, S., Davies, H. and Walter, I. (2002) ‘Evidence based policy and practice: cross sector lessons from the UK’, ESRC UK Centre for Evidence Based Policy and Practice: working paper, 9.
Nutley, S. and Homel, P. (2006) ‘Delivering evidence-based policy and practice: Lessons from the implementation of the UK Crime Reduction Programme’, Evidence & Policy: A Journal of Research, Debate and Practice, 2(1), pp. 5-26.
Nutley, S. and Webb, J. (2000) ‘Evidence and the policy process’, What works, pp. 13-41.
Nutley, S. M., Walter, I. and Davies, H. T. O. (2007) Using Evidence: How Research Can Inform Public Services. Bristol: Policy Press.
Nutley, S. M., Davies, H. T. and Smith, P. C. (2000) What works?: Evidence-based policy and practice in public services. MIT Press.
Nutley, S. M., Powell, A. E. and Davies, H. T. O. (2013) What counts as good evidence. Available at: http://www.alliance4usefulevidence.org/publication/what-counts-as-good-evidence-february-2013/.
Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC health services research, 14(1), pp. 1.
Packwood, A. (2002) ‘Evidence-based policy: rhetoric and reality’, Social Policy and Society, 1(03), pp. 267-272.
Parsons, W. (2002) ‘From muddling through to muddling up-evidence based policy making and the modernisation of British Government’, Public policy and administration, 17(3), pp. 43-60.
Pawson, R. (2006) Evidence-based policy: a realist perspective. Sage publications.
Petrosino, A., Turpin-Petrosino, C., Hollis-Peel, M. E. and Lavenberg, J. G. (2013) ‘“Scared Straight” and other juvenile awareness programs for preventing juvenile delinquency’, The Cochrane Library.
Petticrew, M. and Roberts, H. (2003) ‘Evidence, hierarchies, and typologies: horses for courses’, Journal of epidemiology and community health, 57(7), pp. 527-529.
Powell, M. A. (1999) New Labour, New Welfare State? The “Third Way” in British Social Policy. MIT Press.
Prus, R. (1992) ‘Producing social science: Knowledge as a social problem in academia’, Perspectives in social problems, 3, pp. 57-78.
Rasinski, K. A., Kuby, A., Bzdusek, S. A., Silvestri, J. M. and Weese-Mayer, D. E. (2003) ‘Effect of a sudden infant death syndrome risk reduction education program on risk factor compliance and information sources in primarily black urban communities’, Pediatrics, 111(4), pp. e347-e354.
Rossi, P. H. and Lyall, K. (1976) ‘Reforming public welfare’, New York: Russell Sage.
Sabatier, P. A. (1987) ‘Knowledge, policy-oriented learning, and policy change an advocacy coalition framework’, Science Communication, 8(4), pp. 649-692.
Sabatier, P. A. and Jenkins-Smith, H. C. (1993) Policy change and learning: An advocacy coalition approach. Westview Pr.
Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B. and Richardson, W. S. (1996) ‘Evidence based medicine: what it is and what it isn’t’, BMJ, 312(7023), pp. 71-72.
Sanderson, I. (2002) ‘Evaluation, policy learning and evidence‐based policy making’, Public administration, 80(1), pp. 1-22.
Sanderson, I. (2004) ‘Getting evidence into practice’, Evaluation, 10(3), pp. 366-379.
Sanderson, I. (2009) ‘Intelligent policy making for a complex world: Pragmatism, evidence and learning’, Political Studies, 57(4), pp. 699-719.
Sarason, S. B. (1978) ‘The nature of problem solving in social action’, American Psychologist, 33(4), pp. 370.
Sargiacomo, M. and Gomes, D. (2011) ‘Accounting and accountability in local government: contributions from accounting history research’, Accounting History, 16(3), pp. 253-290.
Schwandt, T. A. (1997) ‘Evaluation as practical hermeneutics’, Evaluation, 3(1), pp. 69-83.
Sedlačko, M. and Staroňová, K. (2015) ‘An Overview of Discourses on Knowledge in Policy: Thinking Knowledge, Policy and Conflict Together’, Central European Journal of Public Policy, 9(2), pp. 10-31.
Shepherd, J. (2014) How to achieve more effective services: the evidence ecosystem. Available at: http://www.vrg.cf.ac.uk/Files/2014_JPS_What_Works.pdf.
Shulock, N. (1999) ‘The paradox of policy analysis: If it is not used, why do we produce so much of it?’, Journal of Policy Analysis and Management, pp. 226-244.
Simon, H. A. (1957) Models of Man: Social and Rational. New York: Wiley.
Smith, B. C. (1976) Policy-making in British government: An analysis of power and rationality. Rowman & Littlefield Publishers, Incorporated.
Smith, K. (2013a) Beyond evidence based policy in public health: The interplay of ideas. Springer.
Smith, K. (2013b) ‘Institutional filters: the translation and re-circulation of ideas about health inequalities within policy’, Policy & Politics, 41(1), pp. 81-100.
Solesbury, W. (2001) Evidence Based Policy: Whence It Came and Where It’s Going. London: ESRC UK Centre for Evidence Based Policy and Practice.
Stevens, A. (2011) ‘Telling policy stories: an ethnographic study of the use of evidence in policy-making in the UK’, Journal of Social Policy, 40(02), pp. 237-255.
Stoker, G. (2019) Evidence-based Policy Making in the Social Sciences: Methods that Matter. Policy Press.
Taylor, F. W. (1939) ‘Scientific management’, Critical studies in organization and bureaucracy, pp. 44-54.
Van der Knaap, P. (1995) ‘Policy evaluation and learning: feedback, enlightenment or argumentation?’, Evaluation, 1(2), pp. 189-216.
Von Kohorn, I., Corwin, M. J., Rybin, D. V., Heeren, T. C., Lister, G. and Colson, E. R. (2010) ‘Influence of prior advice and beliefs of mothers on infant sleep position’, Archives of pediatrics & adolescent medicine, 164(4), pp. 363-369.
Walker, R. (2000) ‘Welfare policy: tendering for evidence’, What works, pp. 141-166.
Weiss, C. H. (1979) ‘The Many Meanings of Research Utilization’, Public Administration Review, 39(5), pp. 426-431.
Weiss, C. H. (1980) ‘Knowledge creep and decision accretion’, Science Communication, 1(3), pp. 381-404.
Weiss, C. H. (1999) ‘The interface between evaluation and public policy’, Evaluation, 5(4), pp. 468-486.
Westbrook, R. B. (1993) John Dewey and american democracy. Cornell University Press.
Wilensky, H. L. (1997) ‘Social science and the public agenda: Reflections on the relation of knowledge to policy in the United States and abroad’, Journal of Health Politics, Policy and Law, 22(5), pp. 1241-1265.
What Works Centre for Crime Reduction (2015) ‘“Scared Straight” programmes’.
Young, K., Ashby, D., Boaz, A. and Grayson, L. (2002) ‘Social Science and the Evidence-based Policy Movement’, Social Policy and Society, 1(3), pp. 215-224.