A Very Public Deception

On the manufacture of mortality statistics in gambling

7th June 2024
Dan Waugh

In recent years, the claim that up to 496 deaths a year in England are associated with problem gambling has become a staple of the debate on gambling market reform. The estimates originate from a 2023 report by the British Government’s Office for Health Improvement and Disparities (‘OHID’) and have been used to support demands for a wide range of additional controls on consumers and the market. There is just one problem – they are based on junk science. While it has long been recognised that people with gambling disorder are at elevated risk of self-harm, the specific estimates produced by OHID – accepted uncritically by many in Parliament and the news media – rely on a number of ‘flat-Earth’ assumptions.

Here, we examine the methods used (and errors made) in calculating these figures and consider the conduct of those who have propagated them. First, we consider why estimates prepared by Public Health England (‘PHE’) and OHID are unsound. Then, we examine the conduct of PHE and OHID, including attempts to mislead and misdirect, before exploring the role played by the Gambling Commission, the Advisory Board for Safer Gambling and others in propagating the PHE-OHID claims while suppressing concerns about their reliability. Finally, we address the wisdom of attempts to boil down a matter as complex as suicide to any single factor.

The first state-sponsored estimate of gambling-related suicides in Britain appeared in September 2021 with the release of Public Health England’s report, ‘Gambling-related harms evidence review: the economic and social cost of harms’. It contended that, in England, 409 suicides a year were “associated with problem gambling only”. In January 2023, the PHE report was replaced (following the identification of errors) by an update from OHID. It offered a choice of either 117 or 496 suicides ‘associated with problem gambling’.

Both the PHE and OHID estimates were based on a 2018 study of the medical records of patients treated in Swedish hospitals between 2006 and 2016. Dr Anna Karlsson and Professor Anders Håkansson from Lund University found that patients in the dataset with a clinical diagnosis of ICD-10 “pathological gambling” (renamed gambling disorder in the ICD-11) were, on average, 15.1 times more likely to die by suicide than the general population. PHE applied suicide mortality ratios from this study to NHS Health Survey estimates of the prevalence of PGSI “problem gambling” in England to produce a figure of 409 deaths a year.

In 2023, OHID repeated the exercise, using precisely the same information, and produced figures of either 117 or 496 deaths (the lower figure based on the application of the Swedish mortality ratios to the population prevalence of DSM-IV “pathological gambling”). In doing so, they ignored critical information and clear warnings that their methods were unsound. The hospital patients whose records were analysed in the ‘Swedish study’ suffered from a wide range of diagnosed mental and physical health conditions (see charts 1 and 2, below). As a group, they were at elevated risk of self-harm, regardless of the presence or absence of gambling disorder. PHE-OHID thought otherwise – assuming that health risks for hospital patients in Sweden with a wide range of illnesses were the same as for people in England with no diagnosed health disorders whatsoever. In other words, they made the ‘flat-Earth’ assumption that there is no association between mental and physical ill-health and risk of suicide.
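To see why that assumption matters, consider the following minimal sketch of the arithmetic involved in this kind of estimate. It is illustrative only: the population, prevalence and baseline suicide-rate figures below are placeholders rather than PHE’s or OHID’s actual inputs, and the calculation is not the published PHE-OHID model. The only values taken from the Swedish studies are the 15.1 mortality ratio and the finding (discussed below) that risk was roughly halved for patients without alcohol or drug use disorders.

  # Illustrative sketch only: not the published PHE-OHID model. The population,
  # prevalence and baseline-rate values are placeholders for demonstration.
  adult_population = 45_000_000   # assumed adult population of England
  prevalence = 0.004              # assumed 'problem gambling' prevalence (0.4%)
  baseline_rate = 0.0001          # assumed annual suicide rate in the comparison group

  def excess_deaths(mortality_ratio: float) -> float:
      """Excess deaths implied by applying a mortality ratio to the group."""
      group_size = adult_population * prevalence
      return group_size * baseline_rate * (mortality_ratio - 1)

  # Unadjusted ratio from the Swedish hospital sample (Karlsson & Håkansson, 2018)
  print(f"Unadjusted ratio (15.1): {excess_deaths(15.1):.0f} excess deaths a year")

  # Ratio roughly halved, as reported for patients without comorbid alcohol
  # or drug use disorders (Håkansson & Karlsson, 2020)
  print(f"Halved ratio (7.55): {excess_deaths(7.55):.0f} excess deaths a year")

Whatever the exact inputs, the estimate scales almost linearly with the assumed mortality ratio, which is why the choice of comparison group, and the comorbidities it carries, dominates the result.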

In making this assumption, PHE and OHID ignored a clear warning from Karlsson & Håkansson. Their paper advised that the hospital patients whose records they had studied were likely to suffer from particularly severe and complex disorders:

It is therefore likely that results may be skewed toward a population of individuals with more severe forms of GD [gambling disorder]. It is likely that this once again implies that this study sample might contain patients with higher mental health comorbidity, as well as individuals with more severe forms of GD, since these individuals are more likely to receive specialized psychiatry care.

The PHE-OHID researchers also ignored findings from the follow-up to this study (the second in a series of five undertaken by the researchers from Lund University). Håkansson & Karlsson (2020) showed that rates of comorbid health conditions were even higher within the group of patients who had attempted or completed suicide (see chart 3).

Professor Håkansson and Dr Karlsson showed that risk of suicide attempt was five times higher for patients with gambling disorder if they also had diagnoses of alcohol use disorder and drug use disorder. Of those patients who had made a suicide attempt, 70% had a diagnosis of alcohol use disorder or drug use disorder or both. The researchers at Lund University provided a range of adjusted odds ratios based on the presence of other diagnosed mental health conditions (see table 1). This study – which was published ten months prior to the PHE report – indicated that suicide risk for patients with gambling disorder was halved where no alcohol use or drug use disorders were diagnosed. Even before adjusting for other risk factors, these findings clearly demonstrated the inappropriateness of PHE’s approach.

A third study assessed the effect of socioeconomic factors on risk of suicide attempt. In the fourth study, a control group was used to identify discrete risks associated with gambling disorder. It concluded that:

gambling disorder did not appear to be a significant risk factor for the increase in suicide and general mortality when controlling for previously known risk factors.

This finding creates a dilemma for OHID and those who have propagated its claims. If one believes that analysis of the Swedish National Patient Register by Karlsson & Håkansson provides a reliable basis for assessing suicide risk in England, then one must conclude that – contrary to PHE-OHID assertions – gambling disorder is not “a significant risk factor”. If, on the other hand, one does not believe this is a suitable approach, then the PHE-OHID claims also cannot stand, because they rely entirely on the mortality ratios from the first of the Swedish studies.

The fact that PHE and OHID got things wrong does not mean that underlying concerns about gambling disorder and self-harm are misplaced – or that gambling operators, treatment providers and policy-makers should ignore the issue. It has long been recognised that people with the disorder are at elevated risk of suicide, even if the precise nature of the relationship is complex. A number of recent inquests in England have determined that excessive gambling contributed to loss of life. Operators should do more to promote positive mental health and to address risk of self-harm among their customers and employees – whether gambling is involved or not. The PHE-OHID claims are, however, irretrievably flawed and should be disregarded by policy-makers. There is simply no coherent logic that allows them to stand.

The Tobacco Road: why did PHE make such unsound claims?

In May 2018, at the conclusion of its review into gaming machines and social responsibility, the British Government’s Department for Culture, Media and Sport asked PHE to “conduct an evidence review of health aspects of gambling-related harm to inform action on prevention and treatment”. More than three years later, in September 2021, PHE responded with the publication of five reports on the subject. One of these reports (‘The economic and social cost of harms’) claimed costs of £1.27bn a year associated with ‘problem gambling’ – with roughly 50% attributable to deaths by suicide.

It was this speculative document, rather than PHE’s more robust quantitative review of evidence from NHS Health Surveys, that officials chose to emphasise – prompting Britain’s Gambling Commission to surmise that PHE’s goal was “to ensure gambling is considered as a public health issue.”

The Gambling Commission had already been given a glimpse of what “a public health issue” would entail. In a draft press release (seen by the Commission), PHE officials called for:

a public health approach to gambling…similar to how we tackle tobacco consumption or unhealthy food consumption…

In the summer of 2022, the PHE researchers (now transferred to OHID) spelt out what this tobacco-style offensive would involve. Their paper, published in the Lancet Public Health, contained 81 measures for state intervention in the gambling market. The list included prohibitions on: all gambling advertising and marketing (including at racecourses); all in-play betting; and the sale of wine, beer and spirits in bingo clubs and casinos. It also included limits on the number of people permitted on a website at any one time, annual tax increases above the rate of inflation and even ‘plain packaging’ for all gambling products (no colours, logos or images permitted on playing cards, gaming machines, National Lottery tickets and so on).

There were other indications that PHE’s endeavours were not entirely objective – or morally neutral. In 2020, for example, its project leader stated that “more research is required to support advocacy and action” against gambling – hardly a statement of impartiality or scientific rigour. Meanwhile, documents made available under the Freedom of Information Act (‘FOIA’) reveal that PHE had agreed to be part of a research group set up by the activist charity, Gambling With Lives (‘GwL’) during the review period – an engagement it failed to disclose within its report.

Why did OHID publish its report…and did officials mislead?

In January 2023, the Department of Health and Social Care (‘DHSC’) withdrew the PHE report and published an updated set of cost estimates – this time in the range of £1.05bn to £1.77bn a year (underpinned by a choice of 117 or 496 deaths). OHID described the decision to review PHE’s work as “a standard approach for previously published reports”; but this seems to be untrue. The decision to re-examine the PHE cost estimates alone (none of the other four reports was reviewed – despite the presence of errors) was taken in July 2022 and announced to Parliament shortly afterwards. We have found no evidence that reviewing state agency reports within ten months of publication is a “standard approach” or that any such policy exists.

Disclosures made under FOIA reveal the true reason for review. On 26th July 2022, an unnamed DHSC official circulated a memorandum, stating:

We are going to need to make changes to two of the evidence review reports as an error has been spotted, and as it’s a change to results, its [sic.] probably what you would classify as a major change.

Given that the PHE report contained quite a few errors, it is difficult to know which particular mistake prompted re-examination; but the decision was certainly not part of a “standard approach”. This raises the possibility that OHID may have deliberately misrepresented the grounds for review.

The Gambling Commission and the Advisory Board for Safer Gambling were both told by OHID researchers that “nothing in the report has changed substantially”; but this is incorrect. In fact, every single line item in the OHID cost estimate differed from the PHE version – in some cases substantially. Its estimate of direct costs to the Government was £234.1m lower than PHE’s – a reduction of more than one-third. This was masked by the introduction of a new area of intangible costs relating to depression and by several revisions to the suicide calculation. OHID’s estimates were also based on a ‘harmed population’ 59% smaller than PHE’s. As the chart below shows, the claim that ‘nothing changed substantially’ appears misleading.

In August 2022, the then Health Minister, Maggie Throup MP, advised Parliament that the PHE report would be reviewed and that the calculations underpinning its estimates would be published. The review, however, has never been made public and – according to disclosures made under FOIA – no such document is held by the DHSC. Contrary to the minister’s pledge, the PHE calculations have still not been released. To do so would reveal a number of errors, such as the fact that PHE’s suicide figure was based on a 21% over-statement of the population prevalence of ‘problem gambling’.

The mystery of the OHID expert panel

OHID was at least prepared to admit – with a heavy dose of understatement – that its estimates were “uncertain”. It relied on a study of hospital patients in Sweden with a clinical diagnosis of gambling disorder (among many other health issues) to estimate the health risks for people in England with no diagnosed mental or physical health conditions whatsoever. In consequence, OHID leaned heavily on the opinion of its expert panel of health economists and academics who, it is claimed, approved the approach.

There are, however, two problems where this opinion is concerned. The first is that one member of the expert panel, Dr Henrietta Bowden-Jones of the NHS, had publicly criticised the PHE-OHID methodology. At a fringe meeting of the Conservative Party Conference in September 2022, Dr Bowden-Jones stated: “we cannot extrapolate from Swedish studies, from Norwegian studies – it doesn’t work”.

The second issue is that the meeting of the expert panel – to discuss the most significant matter in the OHID report – is entirely undocumented. In February 2023, the DHSC admitted that:

there was no agenda or papers shared before the meeting or minutes circulated afterwards.

It is difficult to understand how this panel of experts might have been expected to review OHID’s work without access to any documents; and why officials did not consider it necessary to record the panel’s deliberations on this critical point.

Inappropriate behaviour?

The task attempted by PHE-OHID was always going to be challenging, given the dearth of actual data available. This does not explain or excuse the large number of errors and omissions made by researchers and officials:

  • PHE and OHID ignored warnings by Karlsson & Håkansson about the representativeness of the sample in the 2018 Swedish study (upon which they relied).
  • PHE and OHID ignored findings in the 2018 study of high rates of mental and physical health comorbidities.
  • PHE and OHID ignored the follow-up study by the Swedish researchers (Håkansson & Karlsson, 2020), which found that risk of suicide attempt was significantly mediated by the presence of other disorders.
  • PHE and OHID ignored the opinion of Dr Anna van der Gaag, chair of the Gambling Commission’s Advisory Board for Safer Gambling, that the PHE calculation was likely to be inaccurate.

A large number of issues with the PHE-OHID reports were brought to the attention of OHID’s Director-General, Jonathan Marron, in July 2022 and again in September 2023. On both occasions, Mr Marron promised to investigate. Last year, he wrote that he would provide “a proper explanation” for the errors and methodological flaws; but more than seven months later, none has been forthcoming. In what may well be a breach of the Civil Service Code, OHID officials resorted to ad hominem disparagement of their critics – including one national news media outlet – rather than engage constructively.

What is particularly disturbing about the PHE-OHID scandal is not the fact that researchers (presented with an unenviable task) made so many mistakes; but that state officials proved so unwilling to confront those mistakes – responding with hostility to legitimate scrutiny.

Why did many who knew that gambling suicide statistics were unsound keep quiet?

By April 2022, Britain’s Gambling Commission knew that estimates of suicide mortality published by PHE were “unreliable” and based on “inaccurate” assumptions. This may have been a somewhat uncomfortable discovery, given that the regulator had previously described the review as “important and independent” – an opinion based on a reading of nothing more than the executive summary (it had not been party to even this much when it agreed to provide PHE with “a supportive quote”). It also knew that PHE was far from “independent” or impartial, having been made aware of its desire to apply tobacco-style controls to participation in betting and gaming.

At a meeting in March 2022, Gambling Commission officials admitted that they did not understand how PHE had arrived at some of its estimates (no-one could have been expected to – the calculations were mathematically incorrect). In April, these officials circulated a highly critical review of the PHE report, in which they noted that the suicide claims were not based on “reliable data”. The Commission, however, elected not to take up the matter with OHID (which had subsumed PHE upon the latter’s disbandment) or to inform the Secretary of State. The market regulator – which counts “doing the right thing” among its corporate values – elected to suppress its own critique. In one rather sinister coda, an official speculated that PHE’s claim of more than 400 suicides might be rescued, if only future prevalence surveys could show a higher rate of ‘problem gambling’ in the population. At this point, the Commission had already started work on a new Gambling Survey for Great Britain in the expectation that – through methodological artefact alone – it would produce a higher rate of ‘problem gambling’ than reported by NHS Health Surveys.

When asked by journalists whether it considered the PHE claims to be reliable, the Gambling Commission responded that it was not its role to review the work of other state agencies; but failed to mention that this is precisely what it had done. As late as 2023, its chief executive, Andrew Rhodes, continued to defend the OHID estimates, despite being aware of problems; and it seems likely that the market regulator has been involved in promoting the PHE-OHID claims via approval of regulatory settlement funds (to Gambling With Lives and the Association of Directors of Public Health, among others).

The ABSG and the irrelevance of accuracy

In the summer of 2022, OHID wrote to the Gambling Commission’s Advisory Board for Safer Gambling (‘ABSG’) to ask for its opinion. In her response, the ABSG’s chair, Dr Anna van der Gaag, agreed with critics of the PHE report, writing: “I see their point about basing calculations on the Swedish hospital study leading to an over estimation of the numbers”. She suggested, however, that accuracy in such matters was unimportant and that any attempt to apply scrutiny was “a distraction from what matters to people and families harmed by gambling”. In other correspondence, Dr van der Gaag disparaged the efforts of researchers examining PHE’s claims, comparing them without foundation to ‘Big Oil’. Dr van der Gaag’s apparent indifference to accuracy represented a change of heart from three months earlier, when the ABSG had described PHE’s highly exact (but false) estimate of 409 suicides associated with problem gambling as a “catalyst towards action”. The Gambling Commission, for its part, allowed the ABSG to publish this statement in the full knowledge that it was based on unreliable data.

The following year, Dr van der Gaag was one of two co-adjudicators responsible for allocating around £1m in regulatory settlement funds for research into suicide and gambling. Applicants were specifically directed towards the OHID analysis (i.e. estimates that the ABSG knew were flawed) as well as claims by GwL – despite the fact that even OHID had criticised the basis of one GwL claim. One of the successful bids (a £582,599 award to a consortium led by the University of Lincoln) included GwL as an active member of the research team.

The Silence of the ‘Independents’

Among those who have supported the claims of PHE-OHID are a number of self-styled ‘independent’ researchers. These include academics from the universities of Cambridge, Hong Kong, Lincoln, Manchester, Nottingham and Southampton, as well as King’s College, London, who have cited the estimates uncritically in their work. Perhaps they considered (naively, if so) that research produced by the Government is unimpeachable; yet the errors made by PHE-OHID are so glaring that no researcher of any calibre could have failed to notice them. An unwillingness to subject such serious claims to critical analysis before repeating them indicates – at the very least – an absence of intellectual curiosity. Much is made of the need for research independence (typically defined solely by an absence of industry funding, regardless of ideology or other affiliations); but independence has little value if it is not accompanied by intelligence and integrity.

Breaking ground

A small number of groups and individuals have been prepared to apply scrutiny and challenge, despite the circumstances. In addition to the current series, Cieo has published a number of articles on the problems with PHE-OHID (as well as other issues with activist research). The Racing Post and a handful of journalists, including Christopher Snowdon, Steve Hoare and Scott Longley, have been prepared to challenge the PHE-OHID claims. Figures from the trade groups bacta and the Gambling Business Group have spoken out publicly on issues with PHE-OHID.

Officials at the Department for Culture, Media and Sport have displayed a capacity for critical analysis, notable by its absence elsewhere in Whitehall. Their White Paper on reform of the betting and gaming market acknowledged valid concerns about self-harm but conspicuously omitted the OHID figures. Lord Foster of Bath, a stern critic of the gambling industry, has acknowledged that the PHE-OHID claims are not reliable and – in a display of honesty and humility rare in the gambling debate – apologised for using the figures himself. He continues to make the case for self-harm to be treated seriously in a gambling context, but without recourse to spurious statistics. Philip Davies, the Conservative Member of Parliament for Shipley, has challenged unsound statistics in parliamentary debates; and Dame Caroline Dinenage’s select committee for Culture, Media and Sport noted concerns about reliability in its report on gambling regulation. The Gambling Commission’s executive director for research and statistics, Tim Miller, has been prepared to discuss and acknowledge problems with PHE-OHID where his senior management colleagues have not. Doing the right thing can sometimes be a lonely endeavour.

Time to come clean

The PHE-OHID deception happened because people in positions of authority considered it acceptable to publish inaccurate mortality statistics. One even suggested that scrutiny of misinformation – rather than its manufacture – is unethical. It is reasonable therefore to ask how far the organisations involved in the cover-up might be trusted. In July this year, the Gambling Commission intends to publish statistics on the prevalence of suicidal behaviour amongst gamblers. It has also (as noted above) sponsored Gambling Research Exchange Ontario’s (‘GREO’) programme of research into wagering and self-harm – a programme explicitly grounded in the PHE-OHID fabrications. If the Gambling Commission, the ABSG, GREO and others are unwilling to come clean about the problems with PHE-OHID, any further research is likely to be viewed with suspicion. Contrary to what some appear to believe, it is the production of unreliable research – rather than its scrutiny – that undermines public trust in authority. Attempts to address health harms in any domain will be ineffective if they are based on inaccurate evidence – and manufacturing mortality statistics should never be acceptable.

An independent and open review should now be carried out into the PHE-OHID deception; but it is difficult to see how this will happen. The Department of Health and Social Care and the Gambling Commission are unlikely to embrace scrutiny; and the DCMS will not wish to embarrass either its regulator or another government department. There are too many people in Parliament and the media who have been complicit; and too few prepared to break ranks. The gambling industry meanwhile (with a number of notable exceptions) has shown little appetite for challenging misinformation. There is one hope – that the Office for Statistics Regulation will be prepared to take an interest in the integrity of public health estimates. Such an intervention would go some way towards restoring trust in public bodies.

Is it ever ethical to invent mortality statistics?

It has long been accepted that people with gambling disorder are at elevated risk of death by suicide. The DSM-5 (the American Psychiatric Association’s ‘bible’) observes that people in treatment for gambling disorder are at elevated risk of self-harm (something that is true of a range of other mental health conditions). This warrants concern. It is also widely accepted that suicide is a complex matter. In their 2016 meta-analysis of 50 years of suicide research, Franklin et al. made the following observation:

…any individual with nearly any type of mental illness (i.e. internalizing, externalizing, psychotic, or personality disorder symptoms), serious or chronic physical illness, life stress (e.g. social, occupational, or legal problem), special population status (e.g. migrant, prisoner, non-heterosexual), or access to lethal means (e.g. firearms, drugs, high places) may be at risk for [suicidal behaviours and thoughts]. A large proportion of the population possess at least one of these risk factors at any given time, with many people possessing multiple factors.

Gambling disorder is a risk factor for suicide – but one that demands context. Understanding this can be helpful when it comes to devising self-harm prevention strategies. For example, Håkansson & Karlsson (the Swedish researchers whose analysis was misused by PHE-OHID) conclude their 2020 study with the following recommendation:

The findings call for improved screening and treatment interventions for patients with gambling disorder and other mental health comorbidity.

It is questionable, however, whether discrete associations between any single activity or human characteristic and death by suicide should – by themselves – be used to justify state controls on that activity. By way of illustration, a 2021 study on the prevalence of suicidal behaviour in a group of patients with behavioural addictions (Valenciano-Mendoza et al., 2021) found:

the highest prevalence of suicide attempts was registered for sex addiction (9.1%), followed by buying–shopping disorder (7.6%), gambling disorder (6.7%), and gaming disorder (3.0%).

Such findings are useful for addressing risk of self-harm within population groups suffering from these conditions. They do not – by themselves – justify state controls on sex, shopping or playing video games. A 2017 study of young adults in England (aged 20-24 years, n=106) by Appleby et al. found that four deaths by suicide were linked to ‘gambling problems’; and this has been used by activists to claim that 250 deaths by suicide each year are ‘gambling-related’ (i.e. 4% of all such deaths). The same study, however, also found that 44 of those who had died “had a reported history of excessive alcohol use. Illicit drug use was reported in 54 (51%)”; and that seven “were reported as experiencing problems related to being a student” (including five experiencing “academic pressures”). Those who have used this study to allege there are 250 ‘gambling-related suicides’ every year must therefore believe that 3,200 suicides are related to illicit drug use; 2,625 to excessive alcohol use; and 440 to academia. The findings in Appleby et al. should prompt concern; but it is questionable whether they should be used to demand bans on advertisements for betting, beer or university degree courses.
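The arithmetic behind that comparison can be made explicit. The short sketch below simply applies the same extrapolation method, scaling the proportion of cases linked to each factor up to the total number of suicides implied by the ‘250 deaths = 4%’ claim, to the other factors reported in the same study; small differences from the round figures quoted above reflect rounding in the original claims.

  # Sketch of the extrapolation logic criticised above, using the Appleby et al.
  # (2017) case counts quoted in the text. Not an endorsement of the method.
  total_cases = 106                    # young-adult suicide cases reviewed
  implied_total_suicides = 250 / 0.04  # total implied by the '250 = 4%' claim (6,250)

  factors = {
      "gambling problems": 4,
      "excessive alcohol use": 44,
      "illicit drug use": 54,
      "student-related problems": 7,
  }

  for factor, cases in factors.items():
      implied = cases / total_cases * implied_total_suicides
      print(f"{factor}: roughly {implied:.0f} 'related' suicides a year")

The point is not that these larger figures are meaningful; it is that the method which produces the 250 figure produces them just as readily.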

Some activists have called for coroners to assess, as a matter of routine, the possible involvement of gambling in suicide cases. The Bishop of St Albans has doggedly pursued a Private Member’s Bill to mandate this. While understanding the causes of suicide is an important endeavour, this requirement would impose impossible expectations on coroners and create distortion if other factors were not investigated with the same degree of rigour. The presence of Adverse Childhood Experiences (‘ACEs’), for example, is a well-documented antecedent of suicide. One study (Dube et al., 2001) found that as many as 80% of suicide cases analysed had a history of ACEs. There are also well-documented associations between relationship breakdown and self-harm. Franklin et al. (2016) found that “accurate STB [Suicidal Thoughts and Behaviour] prediction will likely require a complex combination of a large number of factors (i.e., > 50), many of which are time varying”. The practicality and wisdom of asking coroners to probe into every corner of the deceased’s life should be carefully considered.

Those determined to produce figures on the prevalence of gambling-related suicide should first set out a clear operationalised definition of what this term means. How is the relationship to be characterised? Does the individual need to have gambled in the prior 12 months? Does he or she need to have a diagnosis of gambling disorder? To what extent is evidence of causality necessary? Finally, they should be required to contextualise their findings by reference to other risk factors.

Running through some of the institutional responses to PHE-OHID is the idea that unreliable estimates of mortality serve a valid purpose pending the production of more robust statistics – a case of ‘fake it till you make it’. The ABSG’s chair, Dr Anna van der Gaag, defended PHE-OHID’s manipulations by writing:

Good research, especially if it is on an under-researched area like this one, tends to begin and end in a different place, prompting challenge, replication, debate, and the research in this important area is no different.

It is a view that overlooks four important points. First, the PHE-OHID work on the cost of gambling harms – riven with basic errors, deceptions and indications of bias – cannot be considered “good research”. Second, Dr van der Gaag and the ABSG have shown little interest in “replication and debate”, demanding instead that demonstrably bad research should prompt “action”. Third, rather than welcoming challenge, the ABSG and the OHID have reacted to scrutiny with evasion, hostility and ad hominem disparagement. Dr van der Gaag herself has likened critics of PHE-OHID – without any substantiation – to ‘Big Oil’. Fourth, it is questionable how far we should trust ‘better research’ if those responsible for it have propagated or tolerated misinformation in the past.

The fabrication of statistics about gambling and suicide is not simply an academic matter. PHE-OHID’s claims provided the justification for the inclusion of gambling in the Department of Health and Social Care’s Suicide Prevention Strategy for England; and are also cited in National Institute for Health and Care Excellence draft guidelines for treating harmful gambling. They formed the backdrop for the introduction of a regulatory requirement that all licensed operators (with the exception of the National Lottery) report any customer death by suicide to the Gambling Commission, regardless of how recently, frequently or intensely the individual had gambled. These may prove to be positive developments – but policy should not be based on misinformation; and the consequences of doing so can be harmful.

Suicide risk among people with a gambling disorder is a legitimate issue and warrants an intelligent response; but this is unlikely to be achieved through the publication of spurious prevalence estimates. As the US economist Professor Douglas Walker has observed:

If researchers continue to offer social cost estimates, they should estimate costs that are measurable. But for other costs such as psychic costs that cannot be measured…let us identify them without providing spurious empirical estimates. Offering methodologically flawed cost estimates does not improve our understanding nor does it promote sound policy…In areas where research is still quite primitive, perhaps no data would be better than flawed data.

Coda

We are aware that some people may resent this series of articles on PHE-OHID (not least the OHID officials who have displayed such aversion to scrutiny). Our intention in writing them has not been to hurt, insult or distract – but to shine a light on the way that statistics are created and the distortive effect that junk science can have on regulatory policies. The application of scrutiny to research is an important part of the scientific process; and where state bodies are concerned, an important part of the democratic process too. It is entirely consistent to be concerned about a particular issue (e.g. risk of self-harm in a gambling context) and at the same time to believe that policy should not be based on misinformation.

The involvement of PHE, OHID and the Gambling Commission in the manipulation of statistics appears to fit a wider pattern of behaviour by public bodies in Great Britain, involving the abuse of authority and cover-up. It is time for these organisations to set aside their agendas, acknowledge and take responsibility for past mis-steps, and start to engage with a wide range of stakeholders (including licensees) on addressing an important and complex issue with intelligence and sensitivity.

Dan Waugh is a partner at the global strategic sports and leisure advisory firm, Regulus Partners.

List of abbreviations

DSM-III: The third edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders.

DSM-IV: A screening questionnaire based on the diagnostic criteria for pathological gambling in the fourth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders.

OHID: the Office for Health Improvement and Disparities. Part of the Department of Health and Social Care.

PGSI: The Problem Gambling Severity Index. A screening instrument developed by Ferris & Wynne (2001).

PHE: Public Health England. A state agency, reporting to the Department of Health and Social Care. It was disbanded in 2021.
