The COVID-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions unemployed, has been slammed by a series of experts. The Daily Telegraph reports that Professor Neil Ferguson’s computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on”.
The model, credited with forcing the UK government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco. “In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
The report says the comments are likely to reignite a row over whether the UK was right to send the public into lockdown, with conflicting scientific models having suggested people may have already acquired substantial herd immunity and that COVID-19 may have hit Britain earlier than first thought. Scientists have also been split over COVID-19’s fatality rate, which has resulted in vastly different models.
Up until now, though, significant weight has been attached to Imperial’s model, which placed the fatality rate higher than others and predicted that 510,000 people in the UK could die without a lockdown. It was said to have prompted a dramatic change in policy from the government, causing businesses, schools and restaurants to be shuttered immediately in March. The Bank of England has predicted that the economy could take a year to return to normal, after facing its worst recession for more than three centuries.
The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.
In its initial form, developers claimed the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, a US developer who helped clean up the code before it was published online. Yet the problems appear to go much deeper than messy coding.
Many have claimed that it is almost impossible to reproduce the same results from the same data, using the same code. Scientists from the University of Edinburgh reported such an issue, saying they got different results when they used different machines, and even in some cases, when they used the same machines.
“There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote on the project’s GitHub page.
A fix was later provided following a discussion with one of the GitHub developers. This is said to be one of a number of bugs discovered within the system. The developers explained it by saying that the model is “stochastic”, and that “multiple runs with different seeds should be undertaken to see average behaviour”.
However, it has prompted questions from specialists, who say “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters…otherwise, there is simply no way of knowing whether they will be reliable.”
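The distinction at issue here can be illustrated with a minimal sketch. The toy model below is hypothetical and is not Imperial’s code; it simply shows the property the specialists are demanding: a stochastic simulation driven by an explicit seed returns identical results when re-run with identical parameters and seed, while averaging across many seeds characterises its mean behaviour, as the GitHub developers describe.

```python
import random

def run_epidemic(seed, population=1000, infected=10, beta=0.3, gamma=0.1, days=60):
    """Toy stochastic SIR-style simulation -- illustrative only."""
    rng = random.Random(seed)  # all randomness flows from this one seed
    s, i, r = population - infected, infected, 0
    for _ in range(days):
        # each susceptible has a small daily chance of infection,
        # proportional to the number currently infected
        new_infections = sum(1 for _ in range(s) if rng.random() < beta * i / population)
        new_recoveries = sum(1 for _ in range(i) if rng.random() < gamma)
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return r  # total ever infected (final epidemic size)

# Reproducibility test: identical seed and parameters must give identical output.
assert run_epidemic(seed=42) == run_epidemic(seed=42)

# "Stochastic" behaviour: different seeds give different trajectories,
# so mean behaviour is estimated by averaging over many runs.
sizes = [run_epidemic(seed=k) for k in range(20)]
mean_size = sum(sizes) / len(sizes)
```

A run that fails the first check (same seed, different output) would indicate the kind of non-determinism the Edinburgh researchers reported, rather than legitimate stochastic variation.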
It comes amid a wider debate over whether the government should have relied more heavily on numerous models before making policy decisions.
Sir Nigel Shadbolt, Principal of Jesus College, Oxford, said that “having a diverse variety of models, particularly those that enable policymakers to explore predictions under different assumptions, and with different interventions, is incredibly powerful”. “We’d be up in arms if weather forecasting was based on a single set of results from a single model and missed taking that umbrella when it rained,” says Michael Bonsall, professor of mathematical biology at Oxford University.
Particular concerns have been raised over Ferguson’s model, with Konstantin Boudnik, vice-president of architecture at WANdisco, saying his track record in modelling does not inspire confidence.
A spokesperson for the Imperial College COVID-19 Response Team said: “The UK government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
“Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.
“Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5% in the UK.”
David Richards, founder and CEO of WANdisco, and Dr Konstantin Boudnik, the company’s vice-president of architecture, comment in The Daily Telegraph. Full report in The Daily Telegraph.