Tuesday, September 30, 2014

What is a p value and why doesn't anyone understand it?

 I feel like I've written this too many times, but here we go again.
There was a splendid article in the New York Times today concerning Bayesian statistics, except that, as usual, it had some errors.

Lest you think me overly pedantic, I will note that Andrew Gelman, the Columbia professor profiled in much of the article, has already posted his own blog entry highlighting a bunch of the errors (including the one I focus on) here.

Concerning p-values the article states:
"accepting everything with a p-value of 5 percent means that one in 20 “statistically significant” results are nothing but random noise."  This is nonsense.  I found this nonsense particularly interesting because I recently read almost line in a work written by an MIT professor.

P-value explained in brief

Before I get to explaining why the Times is wrong, I need to explain what a p-value is.  A p-value is a probability calculation, first of all.  Second of all, it has an inherent assumption behind it (technically speaking, it is a conditional probability calculation).  Thus, it calculates a probability assuming a certain state of the world.  If that state of the world does not exist, then the probability is inapplicable.

An example: I declare: "The probability you will drown if you fall into water is 99%."  "Not true," you say, "I am a great swimmer."  "I forgot to mention," I explain, "that you fall from a boat, which continues without you to the nearest land 25 miles away...and the water is 40 degrees."  The p-value is a probability like that -- it is totally rigged.

The assumption behind the p-value is often called a Null Hypothesis. The p-value is the chance of obtaining your particular favorable research result, under the "Null Hypothesis" assumption that the research is garbage.  It is the chance that, given your research is useless, you obtained a result at least as positive as the one you did.  But, you say, "my research may not be totally useless!"  The p-value doesn't care about that one bit.

More detail using an SAT prep course example
Suppose we are trying to determine whether an SAT prep course results in a better score for the SAT. The Null Hypothesis would be characterized as follows:
H0: the average change in score after the course is 0 points or even negative.  In shorthand, we could call the average change in score D (for difference) and write H0: D<=0.  Of course, we are hoping the course results in a higher score, so there is also a research (alternative) hypothesis, HA: D>0.  For the purposes of this example, we will assume the change that occurs is wholly due to the course and not to other factors, such as the students becoming more mature with or without the course, the later test being easier, etc.

Now suppose we have an experiment where we randomly selected 100 students who took the SAT and gave them the course before they re-took the exam.  We measure each student's change and thus calculate the average d for the sample (I am using a small d to denote the sample average, while the large D is the average if we were to measure it across the universe of all students who ever existed or will exist).  Suppose that this average for the 100 students is a score increase of 40 points.  We would like to know: given the average difference, d, in the sample, is the universe average D greater than 0?  Classical statistics neither tells us the answer to this question nor does it even give the probability that the answer to this question is "yes."

Instead, classical statistics allows us only to calculate the p-value: P(d>=40 | D<=0).  In words, the p-value for this example is the probability that the average difference in our sample is 40 or more, given that the universe average difference is 0 or less (Null Hypothesis is true).  If this probability is 5% or less, we usually conclude the Null Hypothesis is FALSE, and if the Null Hypothesis were in fact true, we would be incorrectly concluding statistical significance.  This incorrect conclusion is often called a false positive.  The chance of a false positive can be written in shorthand as P(FP|H0), where FP is false positive, "|" means given, and H0 means Null Hypothesis.  (Technically, but not important here, we calculate the probability at D=0 even though the Null Hypothesis covers values less than zero, because that gives the highest (most conservative) value.)  If the p-value cutoff is set at 5% for statistical significance, that means P(FP|H0)=5%.
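To make this concrete, here is a minimal sketch of the calculation in R.  Nothing above says how much individual score changes vary, so the 100-point standard deviation below is purely an assumption for illustration.

    sd_individual <- 100             # assumed spread of individual score changes (hypothetical)
    n <- 100
    se <- sd_individual / sqrt(n)    # standard error of the sample average d

    # p-value = P(d >= 40 | D = 0), using the normal sampling distribution of d
    p_value <- pnorm(40, mean = 0, sd = se, lower.tail = FALSE)
    p_value                          # about 3.2e-05, far below 5%, so we would reject H0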

More generally, the p-value is the chance of obtaining a result at least as extreme as our sample result, under the condition/assumption that the Null Hypothesis is true.  If the Null Hypothesis is false (in our example, if the universe difference is more than 0), the p-value is meaningless.

So why do we even use the p-value?  The idea is that if the p-value is extremely small, it indicates that our underlying Null Hypothesis is false.  In fact, it says either we got really lucky or we were just assuming the wrong thing.  Thus, if it is low enough, we assume we couldn't have been that lucky and instead decide that the Null Hypothesis must have been false.  BINGO--we then have a statistically significant result.

If we set the level for statistical significance at 5% (sometimes it is set at 1% or 10%), p-values at or below 5% result in rejection of the Null Hypothesis and a declaration of a statistically significant difference.   This mode of analysis leads to four possibilities:
False Positive (FP), False Negative (FN), True Positive (TP), and True Negative(TN).
False Positives occur when the research is useless but we nonetheless get a result that leads us to conclude it is useful.
False Negatives occur when the research is useful but we nonetheless get a result that leads us to conclude that it is useless.
True Positives occur when the research is useful and we get a result that leads us to conclude that it is useful.
True Negatives occur when the research is useless and we get a result that leads us to conclude that it is useless.
We only know if the result was positive (statistically significant) or negative (not statistically significant)--we never know if the result was TRUE (correct)  or FALSE (incorrect).  The p-value limits the *chance* of a false positive to 5%.  It does not explicitly deal with FN, TP, or TN.

Back to the Question of how many published studies are garbage, but it gets a little technical
Now, back to the quote in the article: "accepting everything with a p-value of 5 percent means that one in 20 “statistically significant” results are nothing but random noise."
Let's consider a journal that publishes 100 statistically significant results regarding SAT courses that improve scores, where statistical significance is based on p-values of 5% or below.  In other words, this journal published 100 articles with research showing that 100 different courses were helpful.  How many of these courses actually are helpful?
Given what we have just learned about the p-value, I hope your answer is 'we have no idea.' There is no way to answer this question without more information.  It may be that all 100 courses are helpful and it may be that none of them are.  Why?  Because we do not know if these are all FPs or all TPs or something in-between--we only know that they are positive, statistically significant results.

To figure out the breakdown, let's do some math.  First, let's write an equation using some of the terminology from earlier in the post:
The number of statistically significant results = false positives (FP) plus true positives (TP).  This is simple enough.

We can go one step further and define the probability of a false positive given the Null Hypothesis is true and the probability of a true positive given the alternative hypothesis is true -- P(FP|H0) and P(TP|HA).  We know that P(FP|H0) is 5% -- we set this by only considering a result statistically significant when the p-value is 5% or less.  However, we do not know P(TP|HA), the chance of getting a true positive when the alternative hypothesis is true.  The absolute best case scenario is that it is 100% -- that is, any time a course is useful, we get a statistically significant result.

Suppose that we know that a fraction B of courses are bad and a fraction (1-B) are helpful.  Bad courses do not improve scores and helpful courses do.  Further, let's suppose that N courses in total were studied in order to get the 100 with statistically significant results.  In other words, a total of N studies were performed on courses, and those with statistically significant results were published by the journal.  Let's further assume the extreme best case above, that ALL good courses will be found to be good (no False Negatives), so that P(TP|HA)=100%.  Now we have the components to figure out how many bad courses are among the 100 publications regarding helpful courses.

The number of statistically significant results is :
100= B*N*P(FP|H0) + (1-B)*N*P(TP|HA)
The first term just multiplies the (unknown) fraction of courses that are bad by the total number of studies performed by the chance that a study of a bad course produces a false positive result saying the course is good.  The second term is analogous, but for good courses that achieve true positive results.  These reduce to:
    100 = N(B*5% + (1-B)*100%)  [because the FP chances are 5% and TP chances are 100% ]
           = N(.05B +1 - B)      [algebra]
          = N(1-.95B)              [more algebra]
        ==> B =  (20/19)*(1- 100/N) [more algebra]
The number of garbage courses among the published courses equals B*N*P(FP|H0), which in turn equals (1/19)*(N-100) [using more algebra].

If you skipped the algebra, what this comes down to is that the number of bad courses published depends on N, the total number of different courses that were researched.
If N were 100, then 0 of the publications were garbage and all 100 were useful.
If N were 1,000, then about 947 courses were garbage, about 47 of which were FPs and thus among the 100 publications.  So 47 garbage courses were among the 100 published.
If the total courses reviewed were 500, then about 421 were garbage, about 21 of which were FPs and thus among the 100 publications.
You might notice that, given our assumptions, N cannot be below 100, the point at which none of the published studies are garbage.
Also, N cannot be above 2000, the point at which all studies published are garbage.
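For those who want to play with the numbers, here is a small R sketch that reproduces the figures above for any N.  It simply solves the equation from the previous paragraphs under the same assumptions: P(FP|H0) = 5% and P(TP|HA) = 100%.

    garbage_published <- function(N, published = 100, alpha = 0.05, power = 1) {
      B  <- (power - published / N) / (power - alpha)   # solves: published = N*(B*alpha + (1-B)*power)
      fp <- B * N * alpha                               # false positives among the published results
      c(fraction_bad = round(B, 3), garbage_published = round(fp))
    }
    garbage_published(100)    # 0 bad courses studied, 0 garbage publications
    garbage_published(500)    # ~84% of studied courses bad, ~21 of the 100 publications are garbage
    garbage_published(1000)   # ~95% bad, ~47 of the 100 publications are garbage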

You might be thinking--we have no idea how many studies are done for each journal article accepted for publication, and thus knowing that 100 studies were published tells us nothing about how many are garbage--it could be anywhere from 0% to 100% of the published studies! Correct.  We need more information to crack this problem.  However, even 5% garbage among published results may not be so terrible anyway.

While it might seem obvious that 0 FPs is the goal, such a stringent goal, even if possible, would almost certainly lead to many more FNs, meaning good and important research would be ignored because its statistical significance did not meet a more stringent standard.  In other words, if standards were raised to 1% or 0.1%, then some TPs under the 5% standard would become FNs under the more stringent standard, important research--thought to be garbage--would be ignored, and scientific progress would be delayed.






Monday, May 5, 2014

Another perspective on the admissions game--early admission

One thing I failed to consider in my previous blog is early admissions.

By admitting many or most of their students early, a college can appear to be very selective when, in fact, it is only selective for people who do not apply early.  Applying early decision is the equivalent of ranking a school first, and schools know that admitting such students early will improve their matriculation (yield) rate.  Also, students who really wanted to attend a particular school may turn out to be better students than those who would have ranked the school 2nd or 3rd or worse.

A summary of actual acceptance rates at Ivy League schools, early and otherwise, appears here.  To understand what is happening, take Harvard, with the lowest overall acceptance rate of 5.8%.  If you apply there through regular admissions, you have a 3.8% chance (less than 1 in 25) of being admitted.  However, if you apply early, your chances increase to 18.4% (about 1 in 5 or 6).  Of course, the quality of the students is likely different between the group that applies for regular admission and the group that applies early, so the gap in admission chances between two equally qualified applicants is likely smaller than these raw numbers suggest.  However, it seems doubtful that the entire difference is in the quality of the application pool.

At a recent presentation by an admissions officer at a local college, the officer stated outright that the standards change between early and later admissions, even for "rolling" admissions schools.  Put simply, early applicants get priority and are more likely to be accepted.

So what's the strategy?  Apply early, but you typically get only one shot at early decision (you can usually apply early to just one school).  Therefore, apply to a top choice, but one you have a decent chance of getting into, judging by that school's average SATs, grades, etc.  If you reach too high, you will be rejected and relegated to the regular application pool, where the chances of getting into top schools are far lower.

Monday, April 28, 2014

Getting into College

Now that I have a 9th-grader, I am starting to think about college admissions.  The urban myth is: "If you were applying to college now, you'd never get into the (great) college you went to (in the 1980s or 1990s)."
This belief is driven by lower acceptance rates at many elite colleges, as well as by the parents and peers of those who went to elite schools.  This Washington Post article debunks this myth.  It refers to an article about a study at the Center for Public Education, which has more detail.  On the other hand, this paper shows that while overall selectivity fell, the top schools are more difficult to get into, at least as measured by SAT/ACT scores.

Here are some factors that could be at play:
1) Regression to the mean.  People who went to great schools are, on average, high achievers compared to the general public.  However, if you take the cohort who were accepted to these schools, some fraction will have gotten in partly by chance, scoring or performing better than their underlying ability just by luck.  The next generation will regress to the mean, and this means it will appear as if colleges have become more selective among those who went to more selective colleges (by the same token, among those who went to the least selective schools, there will be the opposite effect)...all else being equal of course.  This is the same effect that results in the children of the tallest people being shorter than their parents, even though they still may be taller than the average person.  (A small simulation of this effect appears after this list.)

2) People apply to more schools.  When your average person applies to 10 schools, whereas the average person used to apply to 3, acceptance rates can go down, resulting in higher perceived selectivity.  This article shows the number of people applying to four or more schools more than doubling since the 70s.  The increase in applications might also imply that students that never would have applied to, say, Harvard, are now applying.  This is why a lower acceptance rate doesn't actually mean it is more difficult to get admitted, once you adjust for the quality of the student.

3) Slight increase in actual selectivity at a few schools.  The New York Times had an interesting article regarding the changes in selectivity, which focused on the number of spots per 100,000 population (rather than the number admitted).  Harvard, with the greatest increase in selectivity by this measure, saw a 27% drop in spots per 100,000 (the article focused only on US student rates) over the last 20 years.  While this might seem large, keep in mind that its admissions rate has dropped by about two-thirds, from 18% to 6%, a much larger change.

4) Student quality improved.  There is certainly room in the equation for a true increase in student quality.  As the article above implies, the top schools did have moderate increases in test scores.
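To make factor 1 concrete (the simulation promised above), here is a rough R sketch with purely made-up numbers: achievement is modeled as ability plus luck, and the children share their parents' ability but draw fresh luck.

    set.seed(1)
    n <- 100000
    ability      <- rnorm(n)               # parents' underlying ability
    parent_score <- ability + rnorm(n)     # parents' achievement = ability + luck
    child_score  <- ability + rnorm(n)     # children keep the ability but get fresh luck

    elite <- parent_score > quantile(parent_score, 0.95)   # parents who made an "elite" school
    mean(parent_score[elite])   # roughly 2.9
    mean(child_score[elite])    # roughly 1.5 -- still above average (0), but well below the parents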

No matter whether college is the same or more difficult to get into, it certainly appears that it is more stressful.  One solution for this is the med school solution (and NYC schools solution): a ranking and matching program.  This is fairly simple and goes as follows: each student ranks each school he/she applies to in order of preference.  Colleges rank the students that apply in order.  Colleges are then matched with the students that are highest on their list, beginning with students who ranked them first.  Students are required to go to the college they are matched with, or enter a second consolation round.
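Here is a toy sketch in R of a single matching round along those lines.  The names, rankings, and capacities are hypothetical, and the real medical residency match uses a more elaborate deferred-acceptance algorithm; this only illustrates the simplified procedure described above.

    first_choice <- c(Ann = "X", Bob = "X", Cai = "Y", Dee = "Y")   # each student's top-ranked college
    college_rank <- list(X = c("Bob", "Ann", "Dee", "Cai"),         # each college's ranking of applicants
                         Y = c("Ann", "Cai", "Dee", "Bob"))
    capacity <- c(X = 1, Y = 2)

    match_round <- function(first_choice, college_rank, capacity) {
      matches <- character(0)
      for (col in names(college_rank)) {
        applicants <- names(first_choice)[first_choice == col]        # students who ranked this college first
        ranked     <- college_rank[[col]][college_rank[[col]] %in% applicants]
        admitted   <- head(ranked, capacity[[col]])                   # fill seats from the top of the college's list
        matches[admitted] <- col
      }
      matches
    }
    match_round(first_choice, college_rank, capacity)   # Bob -> X; Cai, Dee -> Y; Ann waits for the next round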


Sunday, December 29, 2013

CitiBike share--what are the chances?

I have been working with Joe Jansen on the Citibike data in the R language.  Citibike is New York's bike sharing program, which started in May and currently has more than 80,000 annual members.  R is a freely available, object-oriented programming language for doing statistics, descended from the S language originally developed at Bell Labs.

Joe has downloaded all the data and done an extensive analysis, which you can find here.  I did a simpler analysis predicting trips using a statistical regression model and graphed it using the ggplot2 package in R.  I found maximum temperature, humidity, wind, and amount of sunshine to be significant factors in predicting the number of trips that will be taken on any given day.  While rain was not a significant factor, it is likely confounded with sunshine, so it only drops out after the amount of sunshine is accounted for.  Also, keep in mind that a number of days with rain, especially in the summer, are generally sunny days with an hour or two of rain or thunderstorms.  The day of the week, surprisingly, was not an important factor influencing the number of trips.  The R-squared, which is a typical measure of predictive power and is on a scale from 0 to 100%, was more than 70%.
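For the curious, the model was along these lines (a sketch with hypothetical column names -- the actual data frame and variable names from the analysis are not shown here).

    library(ggplot2)

    fit <- lm(trips_per_1000 ~ max_temp + humidity + wind + sunshine, data = daily)
    summary(fit)                      # coefficient table and the R-squared

    daily$predicted <- predict(fit)   # predicted trips per 1,000 members for each day
    ggplot(daily, aes(x = predicted, y = trips_per_1000, colour = weekday)) +
      geom_point()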

Here is a graph of the results that shows the predicted number of trips per 1,000 members versus the actual number of trips.  The day of the week is indicated by the color of the point.
I am an amateur with ggplot, and so the legend for day of the week lists the days in alphabetical order rather than Monday, Tuesday, etc.  Help on that and other aspects of ggplot for this graph would be welcome (please comment accordingly).
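One approach I have not tried yet (a sketch that assumes the day-of-week column is called weekday) would be to set the factor levels explicitly before plotting, since ggplot orders a colour legend by factor level rather than alphabetically.

    daily$weekday <- factor(daily$weekday,
                            levels = c("Monday", "Tuesday", "Wednesday", "Thursday",
                                       "Friday", "Saturday", "Sunday"))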

If the day of the week made a difference, then for any given point on the x-axis (predicted trips), points of certain colors would sit higher on the y-axis than others.  For example, if more trips occurred on weekends, you would see more of the green colors (Saturday and Sunday) on top.  However, no such effect seems to exist.  I guess people are enjoying Citibike every day of the week, or casual riders on the weekends are roughly making up for weekday commuting riders.

Monday, November 25, 2013

Highest property taxes in America?

I read on CNN's Money website today that Westchester County, NY, has the highest property taxes in America (see Nov 25 Money website). Moreover, the New York area in general seems to have the highest taxes.  That surprises me because, as an owner of a co-op in Brooklyn, I know that my property taxes, and property taxes in general in the city, are extremely low.

So what's the problem?  If you click on the "interactive graph" you find that you can display results in two ways.  The headline and accompanying map refer to taxes in dollars.  This type of information is little more than a map of housing prices in the US, because expensive houses have higher taxes than cheaper houses.  Sure, the tax rate comes into play, but the owners of a $10 million mansion in a low tax district still generally pay more property taxes per year than the owners of a $200,000 house in a high tax district.

Here's an example.  Click on Brooklyn on the interactive map and you will see taxes of $3,050.  Click on Richland County, South Carolina (where my parents live), and you will see that taxes average $1,129, roughly one-third the "high" taxes of Brooklyn.  Yet this comparison obscures the fact that housing prices are much higher in Brooklyn.

How much higher?  Well, to see this, go to the interactive map that shows taxes as a percentage of home prices.  This map accounts for different housing costs and shows taxes in the familiar manner, as a rate.  In this map, you can see that Brooklyn property taxes are 0.53% of housing prices and Richland County's are 0.75%.  (By the way Westchester County is 1.76%, which is high but certainly not the highest).

Thus, while taxes on the map shown in the headline are nearly three times higher in Brooklyn than in Richland County, S.C., taxes are actually 30% lower in Brooklyn, when looked at as a percentage of home prices.
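A quick back-of-the-envelope check in R, using the figures quoted above, shows how much of the dollar difference is simply home prices: dividing the average tax bill by the tax rate gives the implied average home value.

    3050 / 0.0053   # Brooklyn: implied average home value of roughly $575,000
    1129 / 0.0075   # Richland County: roughly $150,000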

Monday, August 12, 2013

What are the chances of different "splits" in bridge?

If you know how to play bridge, skip to the fourth paragraph!
In bridge, 13 cards are dealt to each of 4 players (so all 52 cards are dealt).  Players sitting across from each other are partners, so we can think of the two partnerships' positions as North-South and East-West on a compass.  A process of "bidding" ensues, in which the team with the highest bid selects a "trump" suit and a number of rounds, or "tricks," that they have contracted to take.

Suppose North-South had the highest bid and North is playing the hand.  Then East "leads" a card, meaning East places a card (any card he/she wants) face up on the table.  The play goes clockwise, East -> South -> West -> North.  South, West, and North must each play a card of the same suit that East led, if they have one.  When four cards are down, the highest one wins the "trick," and that winner puts any card of his/hers down in order to begin a new trick.  Play continues until 13 rounds of 4 cards each have been played.

Suppose that West wins a trick and thus gets to lead a card.  He plays the Ace of Hearts.  North, who is next and otherwise required to play hearts, is out of hearts.  North can play any other suit, but if he chooses to play the "trump" suit (say Spades are trump), then he automatically wins the trick unless East or South is also out of hearts and plays a higher card in Spades (the trump suit).  In other words, trumps are very valuable.  In the bidding process, the teams try to bid in such a way that the trump suit is one in which they have a lot of cards.  Generally, the team with the winning bid (the "contract") will have at least 7 of the 13 trumps between the two of them, meaning the other team will have 6 or fewer.  Whatever number the opponents have, it is generally advantageous to the contract winners if the opponents' trumps are split evenly rather than skewed toward one opponent.

Bridge players begin here:
So here is the probability piece.  Suppose you and your partner hold 7 trumps between you.  What are the chances the opponents each have 3?  That they have 4 and 2?  5 and 1?  6 and 0?  To solve this sort of problem, we use combinations.  See my earlier post for some detail (and more odds of bridge hands).

The opponents have 26 cards altogether, and we want to know the number of different groups of six among those 26 cards.  Think of this as a process of picking six cards from the 26.  You have 26 choices for the first card, 25 for the second, and so on, and thus there are 26*25*24*23*22*21 total 'permutations' of size 6.  However, we do not care what order the six cards are in: each group of six can be arranged in 6*5*4*3*2*1 different orders, so we divide the permutations by 6*5*4*3*2*1 to get the number of unique sets when order does not matter.  Again, see my earlier post for a more detailed explanation of this concept.

The R language allows for calculation of this combination of 6 out of 26 with the command "choose(26,6)."  This is the denominator when we calculate probabilities, because it gives the total number of equally likely combinations of 6 cards.  For the numerator, the six trumps must be split between the two bridge hands of 13 cards each.  The number of combinations with an even 3-3 split is "13 choose 3" for one hand multiplied by "13 choose 3" for the other.
To calculate that probability in R, we write:   choose(13,3)*choose(13,3)/choose(26,6) and get 35.5%

How about hands with a 4-2 split?  That is the chance that Opponent 1's hand has 4 trumps multiplied by the chance that Opponent 2's hand has 2 trumps, PLUS the chance that Opponent 2's hand has 4 trumps multiplied by the chance that Opponent 1's hand has 2 trumps.  Since the chance of either opponent holding the 4 trumps is the same, we can just double the probability of Opponent 1 having 4 and Opponent 2 having 2.  We get: choose(13,4)*choose(13,2)*2/choose(26,6) = 48.4% for one opponent having 4 and the other having 2 trumps.

Continuing this calculation, we get the following chances for hands with 6 trumps in the opponents' hands (6 trumps "out"):
3-3 split : 35.5%
4-2 split: 48.4%
5-1 split: 14.5%
6-0 split:  1.5%

For hands with 5 trumps out, we get:
3-2 split: 67.8%
4-1 split: 28.3%
5-0 split: 3.9%

For hands with 4 trumps out:
2-2 split: 40.7%
3-1 split: 49.7%
4-0 split: 9.6%

For hands with 3 trumps out:
2-1 split: 78%
3-0 split: 22%

For hands with 2 trumps out:
1-1 split: 52%
2-0 split: 48%
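For anyone who wants to check or extend these numbers, here is a short R function that reproduces all of the percentages above for any number of trumps out.

    split_probs <- function(m) {
      splits <- ceiling(m / 2):m                           # trumps held by the opponent with the longer holding
      p <- choose(13, splits) * choose(13, m - splits) / choose(26, m)
      p <- ifelse(splits == m - splits, p, 2 * p)          # an uneven split can fall either way, so double it
      names(p) <- paste(splits, m - splits, sep = "-")
      round(100 * p, 1)
    }
    split_probs(6)   #  3-3: 35.5   4-2: 48.4   5-1: 14.5   6-0: 1.5
    split_probs(4)   #  2-2: 40.7   3-1: 49.7   4-0: 9.6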

I find it interesting that the even split (for 2, 4, or 6 trumps out) is only the most likely scenario when 2 trumps are out.  When 4 trumps are out, a 3-1 split is more likely.  When 6 are out, a 4-2 split is more likely.


Monday, April 29, 2013

Simpson's Paradox

A North Slope real estate broker (named North) is trying to convince you that North Slope is a more affluent neighborhood than South Slope.  To prove it, he explains that professionals in North Slope earn a median income of $150,000, versus only $100,000 in South Slope.  Working class folks fare better in North Slope also, with hourly workers making $30,000 a year to South Slope's $25,000.

The South Slope real estate broker (named South) explains that North is crazy.  South Slope is much more affluent.  The median income in South Slope is $80,000 versus the North Slope median of $40,000.

Question: Who is lying, North or South?
Answer: It could be neither.
Consider the breakdown of income shown below.


We can see that North is not lying.  Half the hourly South Slope workers earn $20K and half $30K, for a median of 25K.  A similar calculation for the North Slope workers yields an hourly median of 30K.  For professionals in the South Slope, the median is $100K, with half earning $80K and half earning $120K.  In the North Slope, a similar calculation yields the median of $150,000.

South is not lying either.  For the South Slope, the median is $80,000, since at least half of the workers make $80,000 or less and at least half make $80,000 or more (by the definition of the median, at least half must be at or below it and at least half at or above it).  For the North Slope, the median is $40,000.

What happened here?  The problem, and the reason for the conflict between the wages by type of work and the overall wages, is that the mix of residents in each category differs between the two neighborhoods.  Though both professionals and hourly workers make more in the North Slope, there are far more hourly workers in the North Slope than in the South Slope.  As a result, the overall median (or mean) income is lower in the North Slope.
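The breakdown table from the original post is not reproduced here, but here is a small R sketch with made-up head counts consistent with the medians quoted above, showing how the reversal happens.

    # Hypothetical counts: South Slope is mostly professionals, North Slope mostly hourly workers
    south <- 1000 * c(20, 30,                              # 2 hourly workers   (median 25K)
                      80, 80, 80, 80, 120, 120, 120, 120)  # 8 professionals    (median 100K)
    north <- 1000 * c(20, 20, 20, 20, 40, 40, 40, 40,      # 8 hourly workers   (median 30K)
                      120, 180)                            # 2 professionals    (median 150K)

    median(south)   # 80,000 -- higher overall in the South Slope...
    median(north)   # 40,000 -- ...even though each group separately earns more in the North Slope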

While Wikipedia has an entry for Simpson's Paradox, a specific example of which I described above, it seems that most people are unaware of it.  My motivation for writing about it is not the made-up example I present above but the fact that I encounter it so much in my everyday work.  I either make my clients very happy by explaining that the 'bad' effect they have found may well be spurious, or anger them when I explain that the interesting relationship they have found is a mere statistical anomaly.