Putting Science into Science Journalism
How to improve media communications about health and science
Most people learn about developments in science through popular media, both traditional news outlets like newspapers, radio, and television, and the variety of social media. It is therefore critically important that what gets reported about science is timely and accurate.
Unfortunately, that is not always the case. There has been a marked reduction in recent years in the number of newspapers that have dedicated science sections and the cadre of dedicated science journalists is dwindling. According to David Scales MD PhD, who researches and writes extensively about science and health communication, “the lack of science journalists means scientific issues are now often covered by other beat reporters – like on legal or politics. This means they’re well versed in the controversy but less well versed in the science, which can cause problems for complex scientific issues.” Thus, ensuring that the public gets timely and accurate updates on scientific developments is in jeopardy.
In terms of accuracy, there are also challenges. Without a strong group of journalists who have advanced training in science and keep themselves up to date, stories about science can fall far off the mark in conveying the meaning and importance of any purported new development.
A major problem in science journalism today is the so-called “n of one” issue. Journalists like to tell stories, not burden readers with technical details and data. They often select a single individual’s story and use it to illustrate a problem with a medical test or treatment. An example of this was a recent New York Times article about people taking antidepressants for long periods of time who supposedly cannot stop them because of debilitating withdrawal symptoms when they try to do so. The story capitalized on a handful of individual accounts of people who experienced trouble getting off antidepressants. Of course, it did not include stories from individuals who had no trouble discontinuing their medication. Although data may not be as compelling as anecdotes, the story cried out for some details on the numerator and denominator of the problem. That is, of the millions of people who take antidepressant medication (the denominator), how many have experienced withdrawal symptoms so serious that they are unable to discontinue their use (the numerator)? Is that ratio one in one million or one in ten? Clearly, the difference is critical in deciding whether antidepressant withdrawal is a serious problem or an uncommon event. By avoiding this question, the reporter gives readers the impression that such withdrawal problems are common; in fact, they could easily be rare, in which case the story probably should not have been published in the first place.
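The numerator-and-denominator point above is simple arithmetic, and a short sketch makes it concrete. The figures below are hypothetical, chosen only to illustrate how different the two scenarios are; they are not estimates from the article or from any study.

```python
# Hypothetical numbers, for illustration only -- not from the article.
def rate_per_million(numerator, denominator):
    """Express how common an adverse event is per million users."""
    return numerator / denominator * 1_000_000

users = 10_000_000            # assumed number of people taking antidepressants
severe_cases_rare = 10        # scenario A: roughly one in a million
severe_cases_common = 1_000_000  # scenario B: roughly one in ten

print(rate_per_million(severe_cases_rare, users))    # about 1 per million
print(rate_per_million(severe_cases_common, users))  # about 100,000 per million
```

A story built on a handful of vivid anecdotes reads the same in either scenario, which is exactly why the denominator matters.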
Furthermore, although the New York Times story correctly noted that there have been relatively few rigorous studies investigating withdrawal problems from antidepressants, it gave only passing mention to the fact that there are many high-quality studies clearly showing the very high rate of relapse following antidepressant discontinuation, and that antidepressant medication effectively reduces this risk of relapse. Thus, for many patients with recurrent or very severe depression, what is thought to be “withdrawal” is actually a return of anxiety and depression symptoms. Such patients are generally best served by staying on their medication for long periods of time.
The New York Times story is a classic case of pandering to our emotions by uncritically rehearsing individual stories and neglecting to present the less gripping but more germane science. We know psychiatrists who faced a wave of patients inquiring about the safety of staying on their antidepressants after reading the story. A story about our health in the New York Times gets our attention. There is of course nothing wrong with raising such concerns in people’s minds and motivating them to ask questions of their physicians. But there is something amiss about sensationalizing a problem and not responsibly reviewing the evidence.
A second major problem plaguing science journalism today arises when scientifically unskilled journalists receive press releases from medical schools, academic hospitals, or scientific journals touting a “breakthrough.” Some busy journalists under deadline pressure may simply transcribe these press releases into their story, thus transmitting the idea that something important has been discovered. In reality, the “breakthrough” can be something that has not yet undergone peer review, has been the subject of animal but not human research, or comes from an observational study that confounds cause and effect. Under pressure to increase their “market share,” academic institutions frequently exaggerate the importance of findings by their own faculty members or obscure the fact that what has been “found” is preliminary and may not hold up when subjected to replication attempts.
A case in point is a press release from the Yale Cancer Center on February 28, 2018 titled “Nut consumption may aid colon cancer survival.” The release begins with the statement “People with stage III colon cancer who regularly eat nuts are at significantly lower risk of cancer recurrence and mortality than those who don’t, according to a new, large study led by researchers at Yale Cancer Center.” The study in question was published in the Journal of Clinical Oncology and found that patients with colon cancer who ate tree nuts, but not peanuts, had a 57% improvement in overall survival.
The story was picked up by a number of mainstream media outlets with headlines declaring that eating tree nuts could help colon cancer patients survive longer. A story in the New York Daily News about the Yale study, for example, begins with the sentence “Nuts may hold the key to beating colon cancer.” Of course, the Yale press release does not use the phrase “beating colon cancer” and the study itself suggests no such thing. It was an observational study, a type of study design that can find an association between two things, like eating tree nuts and having longer survival time, but not establish cause and effect. In other words, the study does not tell us whether it is actually eating tree nuts that prolongs colon cancer survival or something else that people who eat tree nuts also eat and/or do, like exercise more or adhere better to medical regimen recommendations. The press release also doesn’t help journalists understand the critical difference between relative and absolute risk reduction. Saying that there is a 57% relative improvement in survival does not actually tell us how large the benefit is for any individual cancer patient in absolute terms. But the technicalities of risk assessment are unlikely to be obvious to many journalists and are therefore unknown by their readers.
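The relative-versus-absolute distinction can be shown with a short worked sketch. The mortality rates below are hypothetical, invented purely to demonstrate that the same 57% relative reduction can correspond to very different absolute benefits; they are not figures from the Yale study.

```python
# Hypothetical mortality rates, for illustration only -- not from the study.
def risk_reductions(rate_control, rate_treated):
    """Return (relative, absolute) risk reduction for a bad outcome."""
    absolute = rate_control - rate_treated
    relative = absolute / rate_control
    return relative, absolute

# Scenario 1: a common outcome -- 40% vs 17.2% mortality.
rel1, abs1 = risk_reductions(0.40, 0.172)   # relative ~57%, absolute ~22.8 points

# Scenario 2: a rare outcome -- 0.4% vs 0.172% mortality.
rel2, abs2 = risk_reductions(0.004, 0.00172)  # relative ~57%, absolute ~0.23 points

# Both scenarios produce the same ~57% relative risk reduction,
# yet the absolute benefit to any individual patient differs 100-fold.
```

A headline citing only the relative number sounds equally dramatic in both scenarios, which is why responsible reporting states the absolute risk as well.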
We at Critica are keenly interested in identifying and making contact with people and organizations that work to improve the quality of science journalism. In the case of uncritical reporting of press releases, an outstanding example of one such organization is HealthNewsReview. Affiliated with the University of Minnesota School of Public Health, HealthNewsReview provides critical reviews of press releases and news stories about health-related topics and rates them with a one- to five-star rating system. It publishes a weekly newsletter and offers to review press releases before they are sent to the media in order to ensure they are accurate.
A fine example of their work is their review of the tree nut and colon cancer story, in which they point out that “…like all observational studies, this one has significant pitfalls. For one, there is no way of knowing if it is nuts only that lead to health benefits—it could be healthier eating in general. There hasn’t been a definitive mechanism discovered that links nuts to cancer survival, meaning that it can be interpreted as a correlation, but not a causation. The authors of this study also used the same cohort of patients in 2015 to claim that coffee boosted colon cancer survival rates (more on that below), leading us to wonder if they are simply fishing in the data for associations.” The review also notes that the study was partially funded by a tree nut industry organization, raising conflict of interest concerns.
Another excellent resource is the work of Lauren F. Friedman, deputy health editor of Consumer Reports. Ms. Friedman, who also teaches science journalism at the CUNY Graduate Center, describes her approach as follows: “I favor evidence over anecdote and aim to find clarity amidst the confusion.” Indeed, we have found her stories in Consumer Reports on health issues to reflect exactly that approach. For example, her story titled “Take Charge of Your Heart Health” clearly explains things like the evidence for taking statins, the importance of knowing a hospital’s cardiac surgery results before having an operation there, and when more heart testing is not going to help in making treatment decisions and therefore can be avoided.
We are also pleased to have made contact recently with the six-month-old SciLine, a non-profit affiliated with the American Association for the Advancement of Science (the publisher of the prestigious journal Science) that links reporters to scientists. SciLine has a database of 3,000 scientists and just fielded its 100th request for help from a journalist. In addition to helping journalists understand the meaning and significance of what they read in journals and press releases, SciLine issues fact sheets for reporters on scientific findings that are likely to attract attention and is planning a series of bus tours to U.S. cities to meet with local news organizations and review with them how their reporters cover science stories.
In speaking recently with SciLine’s director, Rick Weiss, who worked for many years as a science journalist himself, we were pleased to learn that SciLine is following scientific principles with its own work by evaluating its impact and outcomes. SciLine follows up with reporters who have asked for its help to see if the consultation with a SciLine-recommended scientist affected the reporters’ story, if the scientists consulted are actually quoted in their stories, and if the story is of acceptable or better quality. So far, indications are that when reporters get help from scientists recommended by SciLine, their grasp of the evidence and its implications improves.
The communication of science to the public is an area of vital importance to our work at Critica. In that vein, we, like many others, often bemoan the quality of current science journalism. Among other things, we fear that misleading or sensationalized reporting about scientific findings that ultimately don’t hold up will foment more public mistrust of science and the scientific method. So it is important to also highlight the great efforts being made by organizations like HealthNewsReview, Consumer Reports, and SciLine to help those who write press releases and those who write about press releases develop greater skills and sensitivity to science’s nuances. We urge Critica’s followers to point out other such organizations to us.