Over at Cafe Hayek, Russ Roberts points to an article from Reuters, “In cancer science, many ‘discoveries’ don’t hold up.” This is from the article:
A former researcher at Amgen Inc has found that many basic studies on cancer — a high proportion of them from university labs — are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
That’s not science, folks. That’s randomness or noise.
No surprise. Scientists, after all, are human and, like all of us, they respond to incentives. They want to get their studies published, get their names out, and prove their theories. This satisfies their egos and keeps them employed. They aren’t monks. They’ll report noise if it gets them attention.
This was confirmed later in the article:
“If you can write it up and get it published you’re not even thinking of reproducibility,” said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. “You make an observation and move on. There is no incentive to find out it was wrong.”
Begley’s experience reminds me of what Gary Taubes, author of Good Calories, Bad Calories, discovered as he reviewed landmark diet and health studies from the past century. He found that the “scientific foundation” on which much of the conventional wisdom and government dietary recommendations are based is shaky.
Here’s one of the best stories from the article. It demonstrates the kind of deception we too often call science:
“We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
That seems like an important fact to have mentioned before publishing the results.
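To see why a one-in-six result is “randomness or noise” rather than evidence, here’s a quick simulation. This is my own sketch, not data from the article, and every number in it is made up: even when the true effect is exactly zero, reporting only the best of six noisy runs yields a seemingly positive result almost every time.

```python
import random

random.seed(42)

def run_experiment(n_samples=20):
    # One fake experiment: the true effect is exactly zero, so every
    # measurement is pure noise (mean 0, standard deviation 1).
    return sum(random.gauss(0, 1) for _ in range(n_samples)) / n_samples

def best_of(k=6):
    # Run the experiment k times and report only the largest effect,
    # i.e., the run that "made the best story."
    return max(run_experiment() for _ in range(k))

trials = 1000

# Honest reporting: average the measured effect across every run.
honest = sum(run_experiment() for _ in range(trials)) / trials

# Cherry-picked reporting: each "lab" reports its best of six runs.
picked = sum(best_of(6) for _ in range(trials)) / trials

print(f"report every run:   average effect {honest:+.3f}")  # hovers near zero
print(f"report best of six: average effect {picked:+.3f}")  # reliably positive
```

The first line hovers near zero; the second comes out reliably positive, a “result” manufactured entirely out of noise. That is exactly the kind of finding a replication attempt like Begley’s will fail to reproduce.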
I deal with this in the business world too. Many folks play fast and loose with the facts. I often hear people support their position by saying, “Research shows that this is the best way” or “this is what customers prefer.”
Refer to “research” and most people will accept on blind faith that the research must be correct and correctly interpreted. Much to the chagrin of some of my business partners, I don’t.
I ask, “Would you mind if I took a look at that research?”
So far, only about 10% of the folks I’ve asked have shown me the research they referred to. The other 90% backed off their point quickly. And in every case where someone did share the research, I was able to point out issues with the method or the interpretation that dramatically lowered confidence in their conclusions.
I call that “pushing on the putty.” Backing your conclusion with flimsy research is like building a wall with putty. Push on it just a little bit and the wall crumbles.