Brian Wansink’s research almost always made the news. The social psychologist is the head of Cornell’s Food and Brand Lab, known for turning out eye-catching studies on everything from behavior at pizza buffets to inspiring children to eat vegetables.
But a recent Buzzfeed investigation revealed the lab was massaging data and squeezing results in order to draw conclusions, most of which slid neatly into a buzzy narrative—appealing to prestigious research journals, and easily sold to the press and to the public.
Chasing viral fame was, as the investigation showed, one of Wansink’s major goals. Nudging that priority to the top of the list is outside the norm for most scientists. But in a lot of ways, Wansink’s case is just an extreme illustration of a wider trend in research. Whether explicitly or implicitly, scientists are more and more often encouraged to broadcast the popular appeal of each study they publish. They’re pushed to generate the type of flashy results that will draw attention, both from the research community and from the public.
“There’s an idea that you have to sell your work, and sell its sexy side,” says Michael Eisen, professor of genetics at the University of California at Berkeley, and co-founder of the open access publisher Public Library of Science (PLOS).
This isn’t to say that you shouldn’t believe the results of scientific studies when you see them. The vast majority of scientists would never willingly manipulate their data, and those who would are likely to get caught. But there are systems in place that play a role in determining which research gets done in the first place, which results you end up seeing, and how they’re presented. To some in the scientific community, those systems appear to affect the quality and breadth of the science produced.
The pressure generated by the structures of science can incentivize researchers to chase prestige. It pushes some scientists to do research in areas that appear trendy or prestigious, rather than following their interests more organically. At its most egregious, it might cause some scientists to artificially inflate or falsify results in an effort to make them seem more significant.
There’s a constellation of factors—professional and financial, internally and externally created, and all intertwined—that produce this type of scientific culture.
Success and fame in science
Funding to support scientific research is limited, and increasingly, there are more junior researchers and students than there are full-time positions available for scientists in the academic world. That increases a sense of competition, says Ottoline Leyser, a plant biologist at the University of Cambridge.
“There’s a tendency for that to drive efforts to big up what you’re doing, to argue that it is earth-shattering even when it just may be important,” she says.
In 2014, Leyser worked with the Nuffield Council on Bioethics to conduct a survey of scientists in the United Kingdom. “We found that there is a pressure cooker feeling,” she says. “The survey was focused in the United Kingdom, but it’s clear this is a worldwide phenomenon.”
Despite the efforts of many scientists, this pressure can also perpetuate the idea that prestige is measured by the journal a particular bit of research is published in. Finishing up a study is only the first step of entering work into the scientific record: scientists then take their findings and submit them to journals, which decide whether to publish the work by having outside experts review each study.
Every scientific journal has an impact factor, a metric that measures how often, on average, the articles a journal published over the previous two years were cited during a given year. In other words, how often does the average study from PLOS go on to inspire or otherwise support research by other scientists? It wasn’t originally intended to be, but impact factor has grown into a surrogate measure of the quality of scholarship, says Randy Schekman, a cell biologist at the University of California at Berkeley. Publishing in journals like Nature or the New England Journal of Medicine is seen as an indication that a piece of research is particularly excellent. Getting multiple papers into those journals is seen as one of the best ways to build a reputation as a scientist, and to get grant funding and academic jobs.
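For readers who want to see the arithmetic, here is a minimal sketch of how the conventional two-year impact factor is calculated; the journal and all of the numbers below are hypothetical, used only for illustration.

```python
# Minimal sketch of the conventional two-year impact factor.
# The journal and the citation counts are made up for illustration.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Citations received this year by articles the journal published in the
    previous two years, divided by the number of citable articles it
    published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical example: a journal published 210 citable articles in 2015-2016,
# and those articles were cited 630 times during 2017.
print(impact_factor(630, 210))  # 3.0
```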
Most scientists—particularly young, up-and-coming scientists—are frustrated by the focus on impact factor. But they still feel like they have to play that game, says Schekman. “They understand the problem, but they feel powerless.”
In the United Kingdom, where the scientific success of each university is measured through the Research Excellence Framework, there is a perception that the impact factor of the journals professors publish in will drive the results of that assessment, even though the guidelines specifically exclude it, Leyser says.
“If you go out into the community a lot of people are utterly convinced that’s what’s used,” Leyser says. “So a lot of it is the community doing it to themselves. An anxious group of people will happily convince themselves of it.”
But real or not, the specter of impact factor and prestige can push scientists to pursue one area of research over another.
“Those areas, like stem cell biology, or CRISPR, that are perceived to be hot will attract attention from young people, who feel they have to work in these areas and generate papers that will attract high impact journals,” Schekman says.
Journals may also want to publish research that has more public appeal. “There are lots of things that hit the headlines that aren’t published in fancy papers,” Leyser says. “But it’s kind of well known that the really high profile journals like stories that can be sold in the popular press.”
That happens at the expense of studies that may not appear glamorous, but could go on to prove foundational to scientific understanding, Schekman says. He points to the 2016 Nobel Prize in Physiology or Medicine, which went to Japanese scientist Yoshinori Ohsumi, who identified the genes involved in autophagy (how cells digest and recycle their internal bits). The work didn’t initially make a huge splash, Schekman says. “It became more important on more reflection,” he says. Understanding autophagy led to realizations, around a decade later, of its involvement in everything from cancer to Parkinson’s disease. But, Schekman says, findings that lead to slow-build acclaim wouldn’t cut it today. “That’s the kind of work that I don’t think would even be reviewed at high impact journals today,” he says.
The perception that journals want to publish new, groundbreaking and original research may also hold up the response to the so-called ‘reproducibility crisis’ in science. Since around 2010, various fields of science, particularly psychology, have found that many published studies and apparent fact-based conclusions didn’t hold up when other researchers tried to replicate them.
But despite calls for more reproductions and more adherence to the ideal of a self-correcting science, few repeat studies make it into the pages of journals. Scientists don’t have incentives to reproduce studies, and that work can take away from time spent on projects that look at something new. Journals also have the reputation of being uninterested in replication. Even if that’s unfounded, it certainly keeps scientists from trying: In a survey conducted by Nature, only a small proportion of scientists who did replication studies bothered sending them to journals.
The push into the public eye
When a big scientific finding hits the news, that’s often because the journal (or the university that the researcher works at) put out a press release. Even if the study itself didn’t come to a flashy conclusion, the press release often makes it seem that way. This hype often bleeds over into the news coverage, especially if reporters lack the training to read and evaluate scientific studies for themselves. Chris Chambers, a cognitive neuroscientist at Cardiff University in the United Kingdom, combed through major press releases about health-related topics from 2011, and found that around 40 percent of them contained exaggerated claims.
Exaggerations happen because universities (and, to a slightly lesser extent, journals) are under pressure to generate media impact, says Chambers. Scientists typically sign off on the press releases that their universities write, but they often distance themselves from the process. They’re also vulnerable to participating in the hype themselves.
“People want to be seen as doing important things, and it’s easy to slip into a trap of believing your own spin,” he says.
Though it’s hard to say where in the process the hype crept in, some recently hyped research includes a study, done in mice, of a potential new Alzheimer’s treatment, covered by ABC News as a potential cure (even though similar therapies have proved ineffective in the past); and an investigation into an anxiety drug’s ability to reverse alcohol-induced brain cell death, headlined by the International Business Times as something that could “treat alcoholics,” even though the study was done in mice.
For big funding institutions, like the National Institutes of Health, buzzy projects and big results may help justify their budgets. Until the 1990s, biology didn’t have a tradition of large, collaborative, and centrally funded research projects, says Eisen. Then came the Human Genome Project: an international effort to sequence the entirety of the human genome. The project cost just under $3 billion, and was completed in 2003.
“The genome project was incredibly successful, both in a PR sense—it was a very headline-grabbing scientific achievement—but also successful scientifically,” Eisen says.
The consequence of success is that funding agencies want to repeat it. But the Human Genome Project had a well-defined, singular goal. That isn’t the case for many of the big data projects that came after, like the ENCODE project, or the human brain mapping project, says Eisen. The motivation to create something that will catalyze future research isn’t a bad one, but those later efforts lacked the kind of well-defined goal that would fit the criteria for a successful result.
“There’s a positive feedback loop, with a combination of the agencies, Congress, and the media. The media love these things,” Eisen says. “The big data projects are good PR machines. Funding agencies have incentives to do these projects because they get good attention and lots of money.”
But big projects can concentrate money in one particular area, and funnel scientists into using the resources that the projects generate, even if they’re not actually the best fit for the type of question a scientist is trying to answer, Eisen says. “It turns data collection into something generic.”
All together, the pressures that push so-called impactful work, from grant money to journal publication to institutional priorities, are related and work in tandem, Chambers says.
“It’s all part of the same incentives structure,” he says. “As a scientist, I’m rewarded for high impact papers with positive, striking results. And then rewarded for generating more external impact of that work. The whole system pushes me toward selling research as hard as possible.”
It’s a problem, Eisen says, because that leads an individual result to be taken as an independent product, and judged on its own. “Science is very rarely advanced in an obvious way by one work,” he says. “That’s the whole point of science. It’s advanced collectively.”