Monday, November 28, 2005

Freaky Economics revisited

A salient point in my encounter with Lord's paradox is that multiple regression can easily mislead if the data are not represented properly. It seems that our favorite economist Steve Levitt has been rebutted, at least in part, because he failed to adjust certain inputs in his regression. The Wall Street Journal reports that two economists from the Federal Reserve recalculated Levitt's regressions, this time adding a few additional variables to account for variations in crime. They also normalized arrests to be per capita. The Journal reports that Levitt's finding that abortion reduces crime is greatly exaggerated. Are we surprised? I think the whole thing is a post-hoc fallacy.

While we're on the topic of education statistics

There is a "paradox," discovered by a fellow named Lord, concerning the proper way to analyze pre/post test score data. Generally, a comparison of difference scores (i.e., post - pre) finds no statistically significant difference between groups, but the same data analyzed using regression shows a statistically significant difference. Lord demonstrated this paradox using weights of college freshmen recorded at the beginning and end of the school year. An analysis of the difference in weights for each student reveals that neither the females nor the males gain weight on average: the school "diet" does not act differently on each sex. On the other hand, a multiple regression analysis of the weight change using starting weight and sex reveals a statistically significant difference between the sexes. Two reasonable methods, two reasonable but opposite conclusions. On the one hand, the difference scores imply that the "freshman diet" acts no differently for men than for women. On the other hand, the regression implies that a woman tends to gain less weight than a man at the same starting weight.

I have created a simple dataset that shows Lord's Paradox in all its glory. The average value of GAIN is zero for both groups. On the other hand, a multiple regression to predict GAIN using SEX and SEPT weight fits a line for both men and women with a common slope but with a statistically significant difference in intercept. That is, at a given September weight the expected GAIN is less for women than for men. What's the truth? In this example, we randomly generated a TRUE weight for each student, with men having a higher mean than women and a higher SD as well. The September weight for both men and women is the true weight plus random normal noise (SD around 5 lbs). The June weight has the same distribution. Thus, for each student the pair of weights is bivariate normal with correlation around .9.
So under this model, there is NO DIFFERENCE between men and women with respect to the average weight change. So how do we explain the multiple regression? In the simulation it is clear that the multiple regression uses the wrong variable. A man and a woman who weigh the same are not the same: the man is more likely to be under his true weight, and the woman is more likely to be above hers. Both regress to the mean, but the man regresses upwards and the woman downwards. If we repeat the regression, this time using SCALED starting weights (subtracting the group mean from every individual), the sex effect disappears! That is, there is no difference in expected weight gain for a man and a woman who are at the same September weight RELATIVE to their group mean (180 for men, 130 for women). Is there a conclusion here? Yes: multiple regression can easily lead to terrible error. Is Lord's Paradox not really a paradox at all? Perhaps it's just the regression fallacy.
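The simulation described above can be sketched in a few lines of Python. The group means of 180 and 130 lbs and the ~5 lb measurement noise come from the post; the SDs of the true weights and the sample size are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # students per sex (illustrative sample size)

# TRUE weights: men heavier on average and more variable.
# Means of 180 and 130 are from the post; SDs are assumed.
true_m = rng.normal(180, 20, n)
true_w = rng.normal(130, 15, n)
true = np.concatenate([true_m, true_w])
sex = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = male, 1 = female

# Each weighing is the true weight plus ~5 lbs of random fluctuation.
sept = true + rng.normal(0, 5, 2 * n)
june = true + rng.normal(0, 5, 2 * n)
gain = june - sept

# Difference scores: mean GAIN is near zero for both sexes.
print("mean gain, men:  ", round(gain[sex == 0].mean(), 2))
print("mean gain, women:", round(gain[sex == 1].mean(), 2))

def sex_coef(start):
    """OLS of GAIN on intercept, starting weight, and SEX;
    returns the coefficient on SEX."""
    X = np.column_stack([np.ones(2 * n), start, sex])
    beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
    return beta[2]

# Raw September weight: a spurious sex effect of several pounds appears.
raw_effect = sex_coef(sept)
print("sex effect, raw SEPT:     ", round(raw_effect, 2))

# Center September weight within each sex: the effect vanishes.
centered = sept - np.where(sex == 0,
                           sept[sex == 0].mean(),
                           sept[sex == 1].mean())
centered_effect = sex_coef(centered)
print("sex effect, centered SEPT:", round(centered_effect, 2))
```

Running this, the difference-score means hover around zero, the regression on raw September weight shows a sex coefficient of a few pounds, and centering within group wipes it out: the "paradox" in miniature.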

Friday, November 25, 2005

Greenhouse gases increasing--but still no evidence of warming

One of the motivators for this blog was the battle over greenhouse gases. Well, it appears that the greenhouse worry warts finally have some convincing evidence on their side. (Slashdot on greenhouse gases)

Still to be addressed are

  • what the implications will be
  • what the economic choices are
  • whether the benefit of doing something is larger than the cost
This last question is the line that I personally draw in the sand. If the benefit is 100 years away, we need almost a 100-fold return before it is a positive-value project. There are so many other things that sound more important that I truly have trouble getting excited about this one.
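The arithmetic behind that "almost a 100-fold return" is just compound discounting. A quick sketch, assuming a 5% annual real discount rate (the rate is my assumption; the post does not state one):

```python
# Present-value arithmetic behind the "almost 100-fold return" claim.
# A dollar of benefit delivered in `years` years is worth
# 1 / (1 + rate)**years dollars today, so the benefit must exceed
# the cost by the reciprocal of that factor to break even.
rate, years = 0.05, 100  # 5% real discount rate is an assumption
required_multiple = (1 + rate) ** years
print(f"a benefit {years} years out must be ~{required_multiple:.0f}x today's cost")
```

At 5% the required multiple comes out around 130x; at lower discount rates the hurdle shrinks dramatically, which is exactly why the choice of rate dominates these debates.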

Even physics has its faith based science

Science is defined not by where the ideas come from but by the fact that the ideas are testable. In this day and age, where we don't like failing any students, we seem to have encouraged tests that everyone can pass. If this attitude gets extended to science, it will kill it. What makes science special is that it is defined by tests that can be flunked. In a word: falsifiable.

So the problem with religion-based science is not that religion might be used as a source of ideas, but that falsifiability might be dropped. The primary battle that gets time on the news is Intelligent Design. But in physics, there is another theory that equally qualifies: string theory.

An article in Slate convincingly makes the argument that string theory is no better than ID. OK, so if we did build a particle collider the size of the Milky Way, it could be falsified. But somehow that is a distinction without a difference.

Is theory better than a controlled experiment?

Hirsch argues in a lovely but disconcerting article that controlled experiments in education are a waste of time. When I started reading this, my laptop was at serious risk of flying out the window. But by the end of the article, I was hooked.

I think we could restate his main claim as, "Know your area of application." He argues that we can learn more about education by thinking hard about research from cognitive science and developing theories.

Friday, November 18, 2005

Journal of Obnoxious Statistics

I'm not sure how much intersection there is between Politically Incorrect and Obnoxious, but it is worth checking out the recent one-shot production of the "Journal of Obnoxious Statistics." I haven't pored through all 101 pages myself, but I may some day.

Monday, November 14, 2005

No such thing as Western medicine

I read last night, in David McCullough's excellent biography of John Adams, that the technique of smallpox inoculation was brought to America by African slaves. Just an important reminder that the defining characteristic of medicine is testing, and that statistics is the ingredient that makes this distinction.

Alternative Medicine

I glanced at the Inquirer this morning and spotted a headline suggesting that certain vitamins can alleviate pain associated with gout. The dull statistician will teach the experiment: the treatment, the controls, the placebo, blah, blah, blah, and finish with a t-test and a p-value. Important? Absolutely. Dull? Devastatingly so. The politically incorrect statistician turns the problem around to address the topic of alternative/non-Western medicine. Alternative medicine is a HUGE industry supported intellectually by a politically correct ideology that holds that all cultures are created equal. Ergo, all medicinal cultures are equal. Perhaps a starting point for the discussion is Richard Dawkins' essay "Snake Oil," which appeared in his collection of essays titled A Devil's Chaplain. Dawkins introduces a snappy definition of "alternative medicine": any remedy that has never been tested, has been blocked from testing, or failed when tested. Teach this in class and someone who swears that her headaches were remedied by the shaman's balm will walk out insulted.

Friday, November 11, 2005

Does Statistics have to be boring?

We had a visit to our department recently by Professor Joel Best. Unfortunately, I missed the lecture because of (yet another!) Jewish holiday, but I did not miss the ensuing talk in the play pen we call the Wharton School Statistics Department. Joel's essential point is that while statistical literacy is essential and irrefutably important, it is nevertheless unloved and disrespected. Furthermore, Joel speculates that this is unlikely to change, since there is no obvious constituency that can take ownership of the problem. I aim to remedy this. To do so, we must confront the problem head on: statistics is boring. Come on, fellows, come clean. Compare our world to other mathematical disciplines. Not looking good, eh? How about computer science or electrical engineering? Do we measure up? Those worlds are sparkling with originality, with important problems solved by complex and elegant techniques. Yet statistics has a leg up on all these fields because it is not relegated to esoteric domains. Statistics is RELEVANT, and not just to farmers measuring crop yield, scientists studying worms, or doctors administering treatments. Statistics is at the heart of some of the most interesting intellectual debates of our age. Yet in our classes, textbooks, and conversations, we assiduously avoid these topics. Why? Because, we argue, as professors of statistics it is not our place to teach global warming or sex discrimination. Our place is the t-test and the central limit theorem. So we limit ourselves to the straightforward and the dull. I'm falling asleep just thinking about it.