Tuesday, January 18, 2011

Statistics Meets the Decline Effect

A week ago, I read an extremely perceptive piece by Jonah Lehrer in The New Yorker addressing the "decline effect" in scientific studies. The claim is quite strange, because one of the hallmarks of a valid scientific finding is that the results should hold when a well-designed study is replicated. Yet the author argues that for many published findings, the strength of the effect being measured erodes with time as it is re-measured. To my mind, this should concern scholars of all kinds, because recent policy debates rely increasingly on replicable and scientifically sound studies in both the social and physical sciences.

The author explores a number of possible explanations for this surprisingly persistent phenomenon but reaches no firm conclusion, save that scientific journals show a discernible bias towards positive results, and that researchers tend to chase findings that clear the significance threshold. The first possibility that came to my mind was that the systematic weakening of initially robust results is itself a form of regression towards the mean, but the author discounts this too. What remains is the recognition that there is a great deal of randomness, and that proving an idea one way or the other is, at some level, never a matter of certainty.
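The interplay of publication bias and regression towards the mean can be made concrete with a small simulation. The sketch below is purely illustrative and not from the New Yorker piece: all the numbers (the true effect size, the noise level, the "publication" cutoff) are assumptions chosen for the example. The point is that if only the strongest-looking results get published, unfiltered replications will cluster back around the true effect, producing an apparent decline.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # assumed real underlying effect size
NOISE_SD = 0.3      # assumed sampling noise per study
THRESHOLD = 0.5     # assumed cutoff: a study must look this strong to get published

# Simulate 10,000 initial studies; each observes the true effect plus noise.
initial = [TRUE_EFFECT + random.gauss(0, NOISE_SD) for _ in range(10_000)]

# Publication filter: only impressive-looking results appear in journals.
published = [e for e in initial if e > THRESHOLD]

# Replications of published results face no such filter, so they fall
# back towards the true effect -- the apparent "decline".
replications = [TRUE_EFFECT + random.gauss(0, NOISE_SD) for _ in published]

print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

Run as-is, the mean published effect exceeds the cutoff by construction, while the mean replication effect sits near the assumed true value of 0.2: nothing about the world changed between the original studies and the replications, only the selection.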
