Much of economics is a sham science…

Prof. Julie Nelson slams much of economics as a sham:

Good science can be described as a process of systematic and open-minded investigation. Results should be carefully and intelligently compared to evidence brought forth from a wide and diverse community of investigators before being accepted as reliable. Models should be presented as what they really are: devices that some particular group of humans have found to be useful for examining some particular set of issues.

Examined in light of these standards, much of economics is a sham science. Instead of being open-minded about our core models, assumptions, and methods, we have made narrow selections and then allowed these to harden into dogma. There is a clear “macho” bias in preferring explanations based on self-interest to consideration of community interest, preferring mathematical analysis to qualitative analysis, preferring consideration of rational motivations to inclusion of emotional ones, and so on. In our textbooks, we teach our narrow models as revealed truth, rather than as limited tools. Instead of seriously evaluating the reliability of our knowledge, we follow established habits of claiming “rigor,” based on the mathematics of our models and on econometric “tests.” The recent popularity of Randomized Control Trials has tended to revitalize a belief that objectivity can be achieved by simply following formulaic rules, with little attention to context or to the possibility of implicit biases.

She says that relying so heavily on p-values amounts to little more than p-hacking:

I could point out how these biases compromise economic practice with many examples, but for brevity let me focus on just one. Recently, there has been a growing awareness in many fields, particularly in the biomedical sciences and in psychology, about the dangers of using “statistical significance” to decide which results are worthy of dissemination. The notion of rejecting null hypotheses based on p-values had, for a long time, been taken as the definition of “rigor” in empirical practice. Yet, as is now being shown, following such simplistic, mindless rules can actually cause severe distortions to arise in a literature.

With many variations in data samples and model specifications open to most researchers, “p-hacking” to create publishable results has become rife. In a recent meta-analysis I undertook of the economics literature on preferences for risk-taking, I found not only publication bias (a preference for statistically significant results), but also confirmation bias (a preference for results which confirm an author’s own stereotypes about gendered behavior). Yet I have seen little action within the economics profession, much less within economics education, to honestly and ethically face up to the fact that our customary beliefs about “rigor” are seriously flawed.
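The mechanism she describes is easy to demonstrate. Here is a minimal simulation sketch in Python, under an entirely hypothetical setup (this is not Nelson's meta-analysis): a researcher has several candidate treatment definitions and outcome measures, all pure noise with no true effect anywhere, and reports the best p-value found across the combinations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def specification_search(n=200, n_treatments=5, n_outcomes=4):
    """Return the smallest p-value across all treatment-outcome pairings.

    Every variable is pure noise, so any "significant" result is spurious.
    """
    treatments = rng.normal(size=(n, n_treatments))
    outcomes = rng.normal(size=(n, n_outcomes))
    best_p = 1.0
    for i in range(n_treatments):
        for j in range(n_outcomes):
            # One "specification": test treatment i against outcome j.
            _, p = stats.pearsonr(treatments[:, i], outcomes[:, j])
            best_p = min(best_p, p)
    return best_p

trials = 1000
significant = sum(specification_search() < 0.05 for _ in range(trials))
print(f"Studies with a 'significant' result: {significant / trials:.2%}")
# Roughly 1 - 0.95**20, i.e. around 64%, despite zero true effects.
```

With twenty roughly independent tests per study, the chance that at least one clears the p &lt; 0.05 bar is about 64 percent, not 5 percent, which is why statistical significance alone says so little about reliability.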

There is little doubt that far too much effort has gone into making economics a science, which it isn’t and can never be. What was essentially a way to understand the evolution of society through an economic lens has been reduced to mathematical modelling and equations. What a loss, really…
