Over at Cafe Hayek, Russ Roberts points to an article from Reuters, In cancer science, many ‘discoveries’ don’t hold up. This is from the article:
A former researcher at Amgen Inc has found that many basic studies on cancer — a high proportion of them from university labs — are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
That’s not science, folks. That’s randomness or noise.
No surprise. Scientists, after all, are humans and like all of us they respond to incentives. They want to get their studies published, they want to get their names out, they want to prove their theories. This satisfies their egos and keeps them employed. They aren’t monks. They’ll report noise if it gets them attention.
This was confirmed later in the article:
“If you can write it up and get it published you’re not even thinking of reproducibility,” said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. “You make an observation and move on. There is no incentive to find out it was wrong.”
Begley’s experience reminds me of what Gary Taubes, author of Good Calories, Bad Calories, discovered as he reviewed landmark diet and health studies from the past century. He found that the “scientific foundation” on which much of the conventional wisdom and government dietary recommendations rest is shaky.
Here’s one of the best stories from the article that demonstrates the deception that we often call science:
“We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
That should have been an important fact to mention before publishing the results.
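To see why reporting the best of several runs is so misleading, here’s a rough simulation. It’s a sketch under simple assumptions (each run of a null experiment “finds” an effect 5% of the time by chance, the conventional false-positive rate); the numbers are illustrative, not a model of the actual Amgen experiments.

```python
import random

random.seed(42)

ALPHA = 0.05  # assumed chance a single null experiment looks positive

def experiment():
    """One run of an experiment where no real effect exists.
    Crudely modeled: it comes up 'positive' with probability ALPHA."""
    return random.random() < ALPHA

def best_of(k):
    """Report success if ANY of k repeats comes up positive --
    the 'ran it six times, published the one that worked' strategy."""
    return any(experiment() for _ in range(k))

trials = 100_000
honest = sum(experiment() for _ in range(trials)) / trials
cherry = sum(best_of(6) for _ in range(trials)) / trials

print(f"Honest single-run false-positive rate: {honest:.3f}")
print(f"Best-of-six false-positive rate:       {cherry:.3f}")
```

Running this, the best-of-six rate lands near 1 − 0.95⁶ ≈ 0.26: cherry-picking one run out of six turns a 1-in-20 fluke into roughly a 1-in-4 “finding,” with no real effect anywhere.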
I deal with this often in the business world too. Many folks play fast and loose with the facts. I often hear people support their position by saying, “Research shows that this is the best way” or “this is what customers prefer.”
Mention “research” and most people will accept on blind faith that the research must be correct and correctly interpreted. Much to the chagrin of some of my business partners, I don’t.
I ask, “Would you mind if I took a look at that research?”
So far, only about 10% of the folks I’ve asked have shown me the research they referred to. The other 90% backed off their point quickly. And in every case where someone did show me the research, I was able to point out issues with the method and interpretation that dramatically lowered my confidence in their conclusions.
I call that “pushing on the putty.” Backing your conclusion with flimsy research is like building a wall with putty. Push on it just a little bit and the wall crumbles.
Hopefully finding out what it is won’t unravel reality.
…does light have a speed?
Seems strange. I’m sure Einstein or someone answered that, but I’m not sure I’ve ever heard the reason.
Update: The auto “related post” generator reminded me that I asked this question at least once before back in ’08.
Russ Roberts, econ professor at George Mason University, host of the EconTalk podcast, blogger at Cafe Hayek, writer of the Keynes vs. Hayek rap and skeptic of econometric models, wrote about the “science” of economics in Saturday’s Wall Street Journal. Here are some key paragraphs posted on Cafe Hayek.
If economics is a science, it is more like biology than physics. Biologists try to understand the relationships in a complex system. That’s hard enough. But they can’t tell you what will happen with any precision to the population of a particular species of frog if rainfall goes up this year in a particular rain forest. They might not even be able to count the number of frogs right now with any exactness.
We have the same problems in economics. The economy is a complex system, our data are imperfect and our models inevitably fail to account for all the interactions.
The bottom line is that we should expect less of economists. Economics is a powerful tool, a lens for organizing one’s thinking about the complexity of the world around us. That should be enough. We should be honest about what we know, what we don’t know and what we may never know. Admitting that publicly is the first step toward respectability.
I also believe other fields that have adopted rigorous math recently (in the last 50 to 100 years) fall into the trap of thinking they are much more scientific than is really the case. I call this being seduced by sophistication.
Math provides a veneer of science to non-scientific things. Such math is purveyed by economists, business consultants, investment managers, statisticians, psychologists, educators, medical researchers, nutritionists, climatologists and more to sell their services and peddle their influence.
I’ve witnessed this seduction firsthand in my career. While there are some benefits to be gained from modeling, the danger comes in not understanding its limitations. This mistake is made by people who should know better – the people running the models.
They confuse the models with the real world, rather than realizing the models are simplistic representations of the real world that lack effective treatment of some very important real world factors and relationships.
Nassim Taleb writes about such mistakes in his books The Black Swan and Fooled by Randomness.
Read up on Professor Roberts’s and Taleb’s writings. Don’t let yourself be seduced by sophistication.
According to this website, major network news has still not reported on climategate.
And this guy asks people “to just say what they actually know.” Fair request. I’ll add that they should separate what they know from what they feel.
When a scientist says he knows such-and-such is happening, when he really means he believes it’s happening, that’s irresponsible. Yet nobody calls the scientist out on his misuse of “know.” Based on the studies of global warming I’ve seen, no scientist should be able to say he “knows what’s happening.” If you’ve seen a study that would allow a scientist to correctly say that, please let me know. I’d like to take a look at it.