Perhaps you’ve heard that a large proportion of published scientific research findings are false. If anyone qualified took the time to validate them, half or more could be exposed as erroneous at the time of publication. Most of the errors stem from bad research methodology, and most of the bad methodology stems from insufficient training, incentives that conflict with good science, or both. The primary gap in training is statistics. Researchers are incentivized to be productive and innovative above all else: it’s all about getting published and then getting cited, not about getting it right.
A recent commentary in The Lancet, medicine’s foremost journal, titled “Offline: What is medicine’s 5 sigma?” (Volume 385, April 11, 2015), by Editor-in-Chief Richard Horton, describes his concern that “something has gone fundamentally wrong with one of our greatest human creations.”
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.
If poorly conducted research exists to this extent in mature fields of study such as medicine and physics, it isn’t surprising that it’s even more prevalent in the fledgling field of information visualization. Horton ended his commentary with some good news and some bad news:
The good news is that science is beginning to take some of its worst failings very seriously. The bad news is that nobody is ready to take the first step to clean up the system.
So far in the realm of information visualization research, only the bad news applies.