Unless you’ve been living under a rock, you’ve probably heard of the “Harvard Speaker Point Scandal” *cue ominous music.* However, #harvardgate is being dramatically misinterpreted by a few loud and predictable voices on the NDTCEDA group. The fact that average speaker points at the Harvard tournament were higher is indisputable; the claim that they were higher because a group of ‘policy’ judges conspired to take back debate from minorities and reinstate white hegemony as backlash against the Kentucky tournament is indefensible.
There is no evidence presented thus far that teams that did not read or defend a plan in the style of policy debate were disadvantaged. Instead, we have one anecdotal story about a judge who watched two teams attempt to adapt to him by doing something they were not comfortable with (behaving in a manner that, by their own admission, would destroy their debate program if school administrators saw it) and then lose the round. The people making the somewhat dramatic assertion that this person, who has dedicated a substantial portion of their finite lifetime to policy debate for no discernible personal gain, is actually spending their time looking for ways to oppress public-speaking-proficient college students have missed the forest for the trees. Regardless of whether Nate Cohn is biased against particular arguments (bias is not exactly a new phenomenon in judging; that is why judges post their preferences publicly), there is zero evidence that the debate community as a whole shares this bias, or that this bias is related in any way to the inflation of speaker points at the Harvard tournament, which was more likely caused by judges seeing teams’ previous speaks posted publicly.
Factual evidence indicates that this supposed correlation between reading an affirmative that defends federal government action and earning higher speaker points does not exist. Here you can find a link to the data I used to reach the conclusions below. Some of the information may be old or recorded prior to Harvard, so if several teams changed their approach to debate dramatically in the past two or three weeks, my data does not reflect that. Some of the data may also be mistyped. If you make a copy of the document with corrected data and reply with a link in the comments below, I will gladly re-run my regressions and see whether a mistake on my part influenced the results in a statistically relevant way.
From the Harvard results packet we have information on the speaker points of 154 different debaters. From the wiki we can make general conclusions about whether or not they defend a plan text in the traditional sense. This is somewhat difficult to determine for a few teams, but I did my best to be balanced and representative of what these teams’ 2ARs likely sound like based on the 1AC. Note that teams with a plan text and ‘kritikal’ advantages are considered to be engaged in plan debate; only teams that make an explicit role-of-the-ballot-type argument, critique the resolution, or do not read any form of “[government agent should] topical action” in the 1AC are designated as non-plan teams. *This data is very tedious to look up and enter; at times it felt like my hand would fall off from clicking through and downloading endless open-source documents. If I made mistakes, they will be here. If you see a mistake in this part of the data, please correct it and send it to me to calculate again.*
My results are pretty straightforward. For the 154 observed data points:
The Mean High-Low speaker point calculation was 171.67 with a standard deviation of 2.06. The Minimum High-Low score was 166.8 and the maximum 177.7.
The Mean Total speaker point calculation was 229.08 with a standard deviation of 2.68 (it’s pretty interesting that this number is larger for entirely unrelated reasons). The Minimum was 222.1, the Maximum 236.
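For anyone who wants to reproduce these summary statistics, the calculation is trivial in any stats package. Here is a minimal Python sketch; the handful of scores below are made up and stand in for the real 154:

```python
from statistics import mean, stdev

# Hypothetical High-Low scores, NOT the actual Harvard data.
high_low = [171.2, 173.5, 169.8, 174.1, 170.9]

# stdev() is the sample standard deviation (n - 1 denominator).
print(f"mean={mean(high_low):.2f}  sd={stdev(high_low):.2f}  "
      f"min={min(high_low)}  max={max(high_low)}")
```

Swapping in the real column from the spreadsheet should recover the 171.67 / 2.06 figures above.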
Of the teams which had wikis uploaded, I determined that 67.53% defended USFG action for critical or policy reasons. This was coded into the data above as a dummy variable: 1 if they did read a plan, 0 if they did not.
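The coding scheme can be illustrated with a small Python sketch; the debater names and numbers here are invented, not rows from the actual spreadsheet:

```python
# Hypothetical rows mirroring the coding described above: each debater gets a
# plan dummy (1 = the 1AC defends topical USFG action, 0 = it does not).
rows = [
    # (debater, high_low_points, total_points, wins, reads_plan)
    ("Debater A", 173.5, 231.0, 6, 1),
    ("Debater B", 170.1, 227.4, 3, 0),
    ("Debater C", 171.8, 229.2, 5, 1),
]

share_plan = sum(r[4] for r in rows) / len(rows)
print(f"{share_plan:.2%} of this toy sample defended USFG action")
```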
I also included win-loss records to control for the fact that teams that lost more debates would, on average, likely have lower speaker points than teams that won. It’s possible that reading a plan increases the chances of having a good record (although only in a trivial way – see below), but that is irrelevant to the claim that policy debaters who had preffed policy-only judges were receiving anomalously high speaker points.
Running a simple regression on High-Low speaker points in STATA outputs the following chart:

This proves pretty conclusively that there is no statistically significant correlation between defending USFG action and gaining speaker points, but there is a statistically significant correlation between winning rounds and getting higher speaker points. Both of these claims make intuitive sense. This means that, even if policy judges were awarding higher speaker points to affirmatives that reflected their preferences, non-policy judges were either doing so to the same degree or these preffed policy-favoring judges had no impact on the overall outcome of the tournament.
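For readers who want to see the mechanics rather than trust the STATA table, here is a Python sketch of the same kind of OLS regression. The data is synthetic and built to match the null result above (points depend on wins but not on the plan dummy); none of these numbers come from the actual Harvard packet:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 154
plan = rng.integers(0, 2, n).astype(float)   # dummy: 1 = reads a plan
wins = rng.integers(0, 9, n).astype(float)   # ballots won out of 8
# Synthetic High-Low points: driven by wins, with NO plan effect built in.
points = 168.0 + 0.8 * wins + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), plan, wins])
beta, *_ = np.linalg.lstsq(X, points, rcond=None)
resid = points - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t = beta / se
# Two-sided p-values via a normal approximation (fine with 151 df).
p = [1 - erf(abs(ti) / sqrt(2)) for ti in t]
for name, b, s, ti, pi in zip(["const", "plan", "wins"], beta, se, t, p):
    print(f"{name:5s}  coef={b:8.3f}  se={s:.3f}  t={ti:7.2f}  p={pi:.3f}")
```

On data generated this way, the wins coefficient comes out highly significant while the plan coefficient hovers near zero, which is the pattern the real regression table shows.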
For those of you who aren’t familiar with statistics / have forgotten them over time: basically, if the value in the P>|t| column is less than .01, there is a statistically significant correlation at the 99% confidence level; if it is less than .05, there is a statistically significant correlation at the 95% confidence level. If the P>|t| value is greater than .05, that is generally considered evidence that there is no statistically significant correlation.
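In code, the threshold check described above is just a comparison against the conventional cutoffs (.05 or .01):

```python
def significant(p_value, alpha=0.05):
    """Conventional cutoff check: p < alpha rejects the null at that level."""
    return p_value < alpha

print(significant(0.003))        # True: significant at the 95% level
print(significant(0.003, 0.01))  # True: also significant at the 99% level
print(significant(0.27))         # False: no evidence of a correlation
```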
I also ran a regression on Total speaks with similar results:

Looking at total speaks, there is still no statistically significant relationship between defending USFG action and getting higher speaker points at the 2013 Harvard Debate Tournament. There is a correlation between winning rounds and getting higher speaker points, as expected.
So what about reading a plan and winning debate rounds? Here’s that regression:

In this case you end up with a marginally significant correlation between defending a plan text and winning debate rounds. On average this amounts to .69 ballots over the course of 8 rounds. However, the fact that the p value is already this close to the .05 cutoff without controlling for any other factors makes it dubious that this correlation is robust. For example, I would not be surprised if the statistical significance vanished entirely if one were to control for factors like “years in debate”; as this data is not available, I could not do so. This warrants further investigation on its own merits, but it does not undermine the conclusion that the people making the following posts are leveraging anecdotal assertions to support a particular ideological agenda instead of actually describing what happened at the Harvard tournament:
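The worry about omitted controls can be illustrated with a quick simulation: if some hypothetical factor like years of experience drove both plan-reading and winning, the raw plan coefficient would shrink once that factor enters the regression. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 154
# Hypothetical confounder: experience drives both plan-reading and winning.
years = rng.integers(1, 5, n).astype(float)
plan = (years + rng.normal(0, 1, n) > 2.5).astype(float)
wins = 2.0 + 0.7 * years + rng.normal(0, 1, n)  # plan has NO direct effect

def plan_coef(X, y):
    """OLS coefficient on the second column (the plan dummy)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1]

ones = np.ones(n)
b_raw = plan_coef(np.column_stack([ones, plan]), wins)
b_ctl = plan_coef(np.column_stack([ones, plan, years]), wins)
print(f"plan coefficient, no controls:        {b_raw:.3f}")
print(f"plan coefficient, controlling years:  {b_ctl:.3f}")
```

Because wins here depend only on experience, the apparent plan effect in the uncontrolled regression is pure confounding and collapses toward zero once the control is added.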
