Study tried to replicate 50 cancer experiments and fell short more than half the time

An eight-year effort to re-create 50 influential cancer experiments ended in disappointment — about half could not be reproduced. A researcher holds a mouse, above, a type often used in cancer research.
(Robert F. Bukaty / Associated Press)
Eight years ago, a team of researchers launched a project to carefully repeat early but influential lab experiments in cancer research.

They re-created 50 experiments, the type of preliminary research with mice and test tubes that sets the stage for new cancer drugs. The disappointing results were reported Tuesday: About half the scientific claims didn’t hold up.

“The truth is we fool ourselves. Most of what we claim is novel or significant is no such thing,” said Dr. Vinay Prasad, a cancer doctor and researcher at UC San Francisco, who was not involved in the project.

It’s a pillar of science that the strongest findings come from experiments that can be repeated with similar results.

In reality, there’s little incentive for researchers to share methods and data so others can verify the work, said Marcia McNutt, president of the National Academy of Sciences. Researchers lose prestige if their results don’t hold up to scrutiny, she said.

And there are built-in rewards for publishing discoveries.

But for cancer patients, it can raise false hopes to read headlines of a mouse study that seems to promise a cure “just around the corner,” Prasad said. “Progress in cancer is always slower than we hope.”

The new study reflects on shortcomings early in the scientific process, not with established treatments. By the time cancer drugs reach the market, they’ve been tested rigorously in large numbers of people to make sure they are safe and effective.

Too many landmark studies can’t be replicated in independent labs, and the consequences for medicine, public policy and how we see the world can’t be overstated.

For the project, the researchers tried to repeat experiments from cancer biology papers published from 2010 to 2012 in major journals such as Cell, Science and Nature.

Overall, 54% of the original findings failed to measure up to statistical criteria set ahead of time by the Reproducibility Project, according to the team’s study published online Tuesday by eLife.

Among the studies that did not hold up was one that found certain gut bacteria were tied to colon cancer in humans. Another was for a type of drug that shrank breast tumors in mice. A third was a mouse study of a potential prostate cancer drug.

A co-author of the prostate cancer study said the research done at Sanford Burnham Prebys research institute has held up to other scrutiny.

“There’s plenty of reproduction in the [scientific] literature of our results,” said Erkki Ruoslahti, who started a company now running human trials on the same compound for metastatic pancreatic cancer.

This is the second major analysis by the Reproducibility Project. In 2015, it found similar problems when it tried to repeat experiments in psychology.

Study co-author Brian Nosek of the Center for Open Science said it can be wasteful to plow ahead without first doing the work to repeat findings.

“We start a clinical trial, or we spin up a startup company, or we trumpet to the world, ‘We have a solution,’ before we’ve done the follow-on work to verify it,” Nosek said.

“The first principle is that you must not fool yourself — and you are the easiest person to fool,” physicist Richard Feynman famously told the young scientists graduating from Caltech in 1974.

The researchers tried to minimize differences in how the cancer experiments were conducted. Often, they couldn’t get help from the scientists who did the original work when they had questions about which strain of mice to use or where to find specially engineered tumor cells.

“I wasn’t surprised, but it is concerning that about a third of scientists were not helpful, and, in some cases, were beyond not helpful,” said Michael Lauer, deputy director of extramural research at the National Institutes of Health.

NIH will try to improve data sharing among scientists by requiring it of grant-funded institutions in 2023, Lauer said.

“Science, when it’s done right, can yield amazing things,” Lauer said.

For now, skepticism is the right approach, said Dr. Glenn Begley, a biotechnology consultant and former head of cancer research at drugmaker Amgen. A decade ago, he and other in-house scientists at Amgen reported even lower confirmation rates when they tried to repeat published cancer experiments.

Cancer research is difficult, Begley said, and “it is very easy for researchers to be attracted to results that look exciting and provocative, results that appear to further support their favorite idea as to how cancer should work, but that are just wrong.”
