The editors of more than two dozen cardiology-related scientific journals worldwide have published a joint editorial to sound the alarm that medical misinformation is putting lives at risk. Meanwhile, NHS England’s national medical director has urged social media companies to ban “irresponsible and unsafe” adverts in which celebrities such as Kim Kardashian and Katie Price promote health products of dubious value.
In their editorial, the doctors say the mainstream media bears some responsibility for resolving the problem. “It is easy to find a rogue voice but inappropriate to suggest that voice carries the same weight as that emerging from mainstream science,” they write.
These physicians describe regularly encountering patients hesitant to take potentially lifesaving medications or adhere to other prescribed treatments because of something they read online. Or heard from friends. Or saw on television.
“There is a flood of bad information on the internet and social media that is hurting human beings,” said Dr Joseph Hill, the architect of the essay and editor-in-chief of the American Heart Association journal Circulation. “It’s not just an annoyance, this actually puts people in harm’s way.”
The primary example illustrated in the editorial is the use of statins, a cholesterol-lowering medicine that can reduce heart attack and stroke risk in certain people. But doctors say too many of their patients shun taking statins because of bad information they picked up – often from politicians, celebrities and others who lack medical expertise.
“We trust aeronautical science when we board an airplane; we trust the science buried within our cellphones; we trust mechanical engineering science when we cross a bridge; yet, many are uniquely sceptical of biological science,” the doctors write in their essay.
Another example highlighted is “the entirely unfounded” concern that vaccines cause autism, a claim that has been debunked by 17 major studies.
Some patients think doctors are motivated by financial gain, even going as far as to suggest physicians get kickbacks for prescribing certain medications, said Hill, the cardiology chief at UT Southwestern Medical Centre in Dallas. “My practice over the years has been to say, ‘Your insurance company is paying me to give you the best advice I can based on modern science. You are free to accept it or reject it. I hope you accept it, and I’m happy to provide you with evidence that backs my recommendations, but in the end, it’s your life and your body,'” Hill said.
Dr Haider Warraich, a cardiology fellow who recently wrote about the dangers of medical misinformation in a New York Times essay, said he had to get personal to convince one heart attack patient to take his advice. The patient, a young woman who was healthy aside from sky-high cholesterol, had refused to take a statin that another doctor had previously prescribed. She agreed to try the statin only after Warraich shared that, after his own father had a heart attack, he urged doctors to start his father on the highest statin dose possible.
Because the benefits of medication and proper advice are often invisible, it can be difficult for patients to see how they help, said Warraich, a fellow in advanced heart failure and transplantation at Duke University Medical Centre in North Carolina. “If you have a fracture, or something that happens to you that’s visceral, you’re not going to take an herb to fix it. You’re going to go to a surgeon to get it fixed and your pain will get better,” he said. “But when it comes to something that prevents the risk of a future stroke or heart attack, or something like a vaccine which doesn’t make you feel better immediately but is backed by very robust data, it’s not as clear-cut in front of you.”
Hill said leaders from Facebook and other social media platforms also need to help find a solution. “We’re trying to highlight the realities of the harm that social media also can do to our patients,” he said. “I’m hoping that we can make it clear that this is a matter of life and death for many patients.”
The editorial’s signatories: Joseph A Hill, Stefan Agewall, Adrian Baranchuk, George W Booz, Jeffrey S Borer, Paolo G Camici, Peng‐Sheng Chen, Anna F Dominiczak, Çetin Erol, Cindy L Grines, Robert Gropler, Tomasz J Guzik, Markus K Heinemann, Ami E Iskandrian, Bradley P Knight, Barry London, Thomas F Lüscher, Marco Metra, Kiran Musunuru, Brahmajee K Nallamothu, Andrea Natale, Sanjeev Saksena, Michael H Picard, Sunil V Rao, Willem J Remme, Robert S Rosenson, Nancy K Sweitzer, Adam Timmis, Christiaan Vrints
“The most frustrating part of my job as a public health scientist is the spread of false information – usually online – that overrides years of empirical research,” writes Junaid Nabi, a public health researcher at Brigham and Women’s Hospital and Harvard Medical School in Boston, in a report carried on the Bhekisisa site. Nabi writes: “It is difficult enough for doctors to counter medical falsehoods in face-to-face conversations with patients. It becomes even harder to do so when such fakery is transmitted via the Internet.”
He writes: “I recently witnessed this pattern first hand in Kashmir, where I was raised. There, parents of young children trusted videos and messages on Facebook, YouTube, or WhatsApp that spread false rumours that modern medications and vaccines were harmful, or even that they were funded by foreigners with ulterior motives. Discussions with local colleagues in paediatrics revealed how a single video or instant message with false information was enough to dissuade parents from believing in medical therapies.
“Physicians in other parts of India and Pakistan have reported numerous cases in which parents, many of them well educated, refuse polio vaccinations for their children. Reports that the US Central Intelligence Agency once organised a fake vaccination drive to spy on militants in Pakistan have added to mistrust in the region. Given the high stakes involved, Pakistani states sometimes resort to extreme measures, such as arresting uncooperative parents, to ensure that vulnerable communities are vaccinated.
“This is just one regional example of the global threat that online misinformation poses to public health.
“In the US, a recent study reported how Twitter bots and Russian trolls have skewed the public debate on vaccine effectiveness. Having examined 1.8m tweets over a three-year period from 2014 to 2017, the research concluded that the purpose of these automated accounts was to create enough anti-vaccine content online to develop a false equivalence in the vaccination debate.
(Editor’s Note: The term “false equivalence” is often used to describe a logical fallacy in which two opposing arguments appear to be logically equivalent when in fact they are not. This happens, for instance, when journalists – in trying to abide by traditional media ethics – present “both sides” of the vaccine debate. But doing this gives equal weight in articles to scientifically-proven facts and anti-vaxxer hoaxes. This, in turn, can inadvertently yet falsely present vaccination myths as credible rebuttals to scientific evidence, a 2014 study published in the Journal of Vaccines and Vaccination found.)
“Such misinformation programmes succeed for a reason. In March 2018, researchers from the US Massachusetts Institute of Technology reported that false stories on Twitter spread significantly faster than true ones. Their analysis revealed how the human need for novelty, and the information’s ability to evoke an emotional response, are vital in spreading false stories.”
Nabi writes: “The Internet amplifies the damage caused by these ‘alternative facts,’ because it can disseminate them at massive scale and speed – a few fake or troll accounts are enough to spread misinformation to millions. And once it spreads, it is virtually impossible to retract.
“The role of Twitter bots and trolls in the 2016 US elections and the UK’s Brexit vote to leave the EU is clear. Now they have affected global health as well. If we don’t take robust and coordinated steps to address this alarming trend, we may lose out on a century’s worth of successes in health communication and vaccination, both of which depend on public trust.”
But, Nabi says, we can take several steps to start reversing the damage.
He writes: “For starters, health officials and experts in both developed and developing countries need to understand how this online misinformation is eroding public trust in health programmes. They also need to engage actively with global social media giants such as Facebook, Twitter, and Google, as well as major regional players including WeChat and Viber. This means working in tandem to create guidelines and protocols for how information of public interest can be disseminated safely.
“In addition, social media companies can work with scientists to identify patterns and behaviours of spam accounts that try to disseminate false information on important public-health issues. Twitter, for example, has already started using machine-learning technology to limit activity from spam accounts, bots, and trolls.
“More rigorous verification of accounts, from the moment of signing up, will also be a powerful deterrent to the further expansion of automated accounts. Two-factor authentication, using an email address or phone number when signing up, is a prudent start. CAPTCHA technology requiring users to identify images of cars or street signs – something humans can do better than machines (for now, at least) – can also limit automated sign-ups and bot activity.
“These precautions are unlikely to infringe upon any individual’s right to voice an opinion.
“Public health officials must err on the side of caution when weighing free-speech rights against outright falsehoods that endanger public welfare. Abusing the anonymity provided by the Internet, spam accounts, bots, and trolls serve to disrupt and pollute available information and confuse people. Taking prudent action to avert situations where lives are at stake is a moral imperative.
“Global public health took huge strides forward during the 20th century. Further progress in the 21st will come not only through ground-breaking research and community work but also through online engagement.
“The next battle for global health may be fought on the Internet. And by acting quickly enough to defeat the trolls, we can prevent avoidable illnesses and deaths around the world.”
When Ashleigh Johnson’s small son developed a fever, she went online to try to work out what was wrong, says a Sunday Times report. The family had been camping in an area where there were ticks, and Johnson had read on a CBS News site that a little girl in the US had died in 2017 from organ failure caused by tick-bite fever. It turned out that Johnson’s child had a slight case of tonsillitis. He missed just one day of school.
The report says according to Cape Town doctor Saville Furman, who spoke to fellow GPs at a University of Cape Town conference, Johnson had fallen prey to “cyberchondria”, a growing phenomenon that is causing havoc for patients and doctors alike. The condition involves googling symptoms, jumping to conclusions about diagnosis and prognosis, and in some cases even deciding on treatment.
The report says Furman, who has a mug that reads “Don’t confuse your internet search with my medical degree”, said: “The term ‘cyberchondria’ started cropping up in newspapers around 2001, referring to anecdotal reports of patients bringing their doctors printouts from the internet.” Computer scientists at Microsoft discovered the full extent of it by analysing the internet searches of hundreds of thousands of people.
“About one-third of people searching for medical-related terms tended to ‘escalate’ their search – ratcheting up to more and more dire outcomes,” Furman is quoted in the report as saying. An initial search for “headache”, for example, might be followed by “headache tumour” then “brain tumour treatment”.
The report says another study, at St John’s University in New York City, found that more than two-thirds of patients who googled their symptoms went on to see a doctor, but the rest were so anxious about what they had discovered online that they were too scared to hear what a doctor might say. Of those who were seen by a physician, 71% were told they had worried excessively.
William Bird, the head of Media Monitoring Africa, said the challenge with searching for medical information online “isn’t so much about the internet itself but the search engines and the results that come up first. Unless you know how to search and are very cautious in your approach, you may well find information that isn’t trustworthy but seeks to scare you and sell you products.”
According to the report, Furman said an example of a reliable site was the one run and developed by the Mayo Clinic in the US, and Bird said “government-sanctioned sites from countries with well-functioning health systems” were also valuable.
Another problem is the spread of simplistic or inaccurate medical and health information on social media. The report says an example of this is the fraudulent research carried out by Andrew Wakefield 20 years ago, which falsely claimed a link between autism and the vaccine for measles, mumps and rubella. Wakefield had his research published in The Lancet, but the medical journal later retracted the study, and Wakefield was struck off the UK medical register. Yet two decades later, with social media amplifying his claims, there are still parents who don’t vaccinate their children because of Wakefield’s research.
On the flipside, said Bird, social media does have benefits. “It can offer huge potential,” he says, “if we direct resources and energy to ensuring we use it to offer good medical advice.” According to the report, he says people should rather familiarise themselves with reputable resources. “An example is MomConnect, where moms get accurate information about their babies through their mobile phones.”
Unfortunately, the limitless nature of the internet and social media means that “people will have to wade through the reams of content from the loopy, mad and dangerous to those trying to sell things”.
NHS England’s national medical director is, meanwhile, urging social media companies to ban “irresponsible and unsafe” adverts in which celebrities such as Kim Kardashian and Katie Price promote health products of dubious value.
The Guardian reports that Professor Steve Powis wants the likes of Instagram, Facebook and Twitter to stop running ads in which famous people are paid to market diet pills, detox teas and appetite-suppressing sweets. “Highly influential celebrities are letting down the very people who look up to them by peddling products which are at best ineffective and at worst harmful,” he said.
“Social media firms have a duty to stamp out the practice of individuals and companies using their platforms to target young people with products known to risk ill health.”
The report says NHS chiefs are worried endorsements by celebrities will lead to some of their many followers on social media using products that can pose a risk to their physical or mental wellbeing, despite claiming to improve health. For example, weight-loss supplements often contain ingredients that can lead to an irritated stomach or diarrhoea and make contraception less effective. They can also offer false hope that someone unhappy with their body shape will be able to change their appearance by using them.
The report says Vicky Pattison, who appeared in Geordie Shore and Ex on the Beach, has used Instagram several times to advertise Boombod weight-loss products to her 4.2m followers. The product promises a “simple, quick and tasty way to lose weight without feeling hungry. Each 10-calorie shot contains a unique vitamin blend and a natural fibre that works to reduce your appetite and is clinically proven to aid weight loss.”
The report says Price, who won Celebrity Big Brother in 2015, promoted Boombod to her 1.9m Instagram followers twice last year. She has also advertised Bootea, a “natural” tea marketed as an aid to detox and losing weight. However, it contains senna, a laxative.
The report says Powis’s intervention comes as pressure grows on social media operators to be more responsible about deciding what content to allow. There are fears that content about self-harm, eating disorders and suicide can normalise such behaviours and even encourage some people to act. On Thursday, the science and technology select committee said companies should have a legal duty of care imposed on them.
Lauren Goodger from The Only Way is Essex has used Instagram, which is owned by Facebook, to promote Boombod as well as three other health-focused products: the Skinny Coffee Club, Nutribuddy UK and Protein World. Gemma Collins, who also appeared in Towie and has 1.2m Instagram followers, has used the platform to endorse Boombod.
Powis is quoted in the report as saying: “If a product sounds like it is too good to be true then it probably is. The risks of quick-fix weight loss far outweigh the benefits, and advertising these products without a health warning is damaging and misleading.
“Promoting potentially damaging products with no clinical advice or health warning can be really detrimental to someone’s physical and mental health.
“With pressure on young people to live up to idealised images greater than they have ever been, it’s too often families and the health service who are left to pick up the pieces.” He demanded social media platforms remove posts they are hosting that advertise potentially harmful products, as well as banning promotions of them in future.
The report says Kardashian, who has 126m Instagram followers, used the platform to promote Flat Tummy meal replacement shakes last month. She said she had used them for two days and already felt better. She has also previously advertised lollipops that suppress appetite. Her sister Kourtney has also recently promoted Flat Tummy Tea to her 72.9m Instagram followers.
The Royal Society for Public Health accused celebrities and social media platforms of putting profits before consumers’ health. “There are so many bogus and snake oil weight loss products on the market today, which either have dubious evidence to back them up or are a waste of money,” said Duncan Stephenson, the RSPH’s director of external affairs, in the report.
“It is shameful that major advertisers, leading celebrities – many of whom are role models for young people – together with advertisers and social media platforms are complicit in exploiting and potentially putting people’s health at risk, simply to further line their pockets.”
Kitty Wallace, a trustee of the Body Dysmorphic Disorder Foundation, said: “The bombardment of idealised body images is fuelling a mental health and anxiety epidemic in young people. If celebrities will not step up to protect their young fans then companies such as Instagram, Facebook and Twitter should be compelled to take down these damaging posts.”
Daniel Dyball, the executive director of the Internet Association, which represents companies including Facebook and Google, said in the report: “All advertising in the UK is regulated under the Advertising Standards Authority’s rules, including internet advertising. Internet companies rigorously observe the ASA’s rules and have strong responsible advertising policies. Our members are focused on their responsibilities around advertising aimed at children and the advertising of health products, as well as the recently published ASA guidance on influencer advertising.”