
Preseason Consensus Notes


Frequently Asked Questions

What is the preseason consensus?

The preseason consensus is a Usenet posting made to the newsgroup rec.sport.football.college each year. The posting lists the national top 25 and conference predictions from all of the college football preseason magazines that I can collect. It also contains a "consensus" computed by combining all of the individual predictions.

Which magazines get included?

As a general rule, only publications which give both national rankings and conference predictions are used. There are a zillion different national top 25 lists without conference predictions; it would be impossible to collect them all. Also, an effort is made to select "national" magazines (i.e., those which are available to football fans throughout the country). Predictions from web sites are welcome, as long as they publish both conference and national predictions.

Don't you get in trouble for reporting their predictions?

I have discussed the matter with publishers of a few of the preseason magazines. None had any objection to this site. If any organization did not want their predictions on this web site, I'd remove them. I would expect that most publishers wouldn't mind the preseason consensus, for two reasons:

  1. During the few months leading up to football season, the consensus page gets thousands of hits per day. That's a fair amount of free publicity for the magazines that are listed.
  2. I report only the conference/national rankings, without comment. Any person could pick up the magazine on a newsstand and read that much without buying a copy. The real "meat" of these magazines is the per-team discussion; I do not include any of that here. (I do report on the quantity and quality of that information in the review section.)

Notes on the conference predictions

Dash "-" in the conference predictions means that the publication did not rank that conference. This only happened frequently to the Big West and MAC. However, there are a few cases where data for other conferences is missing.

Notes on the national top 25

Each magazine's rankings of the teams appear under its column. A dash represents a team that was unranked (or ranked below #25, for magazines that rank more than 25 teams). The individual rankings are totalled, and the teams are listed in order of that total.

The scoring system is similar to that used in the A.P. poll, except: (1) there is a 5-point bonus for being mentioned (i.e., the difference between "25" and "unranked" is six points, not one point); and (2) I've had to make adjustments for national rankings which don't include a full 25 teams.

For magazines that give a top 25, 30 points are awarded for a 1st-place vote, 29 for 2nd place, down to 6 points for 25th place (and zero for unranked). For magazines that rank fewer teams, the point total for a first-place vote is kept the same, but the value of "unranked" is increased to preserve the six-point drop between the lowest ranking and unranked -- e.g., "unranked" is fifteen points for a magazine that only lists a top ten ("10th" = 21 points... minus six = 15 points).
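
As a concrete illustration, here is a minimal Python sketch of that point scale (the function name and structure are mine, not the actual scoring script):

    def points(rank, list_length=25):
        """Points for a ranking in a magazine that ranks list_length teams.

        A #1 vote is always worth 30 points, each lower rank is worth one
        point less, and "unranked" sits six points below the lowest rank.
        """
        if rank is None:                       # unranked ("-") in this magazine
            lowest_ranked_points = 30 - (list_length - 1)
            return lowest_ranked_points - 6
        return 30 - (rank - 1)

    # Full top-25 list: #1 = 30, #25 = 6, unranked = 0.
    assert (points(1), points(25), points(None)) == (30, 6, 0)
    # Top-ten list: #10 = 21, so unranked = 21 - 6 = 15.
    assert (points(10, 10), points(None, 10)) == (21, 15)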

Why the five-point bonus?

If there were only a one-point difference between 25th and "unranked," then a team ranked #20 in one magazine (six points) would be ahead of a team ranked #25 in five magazines (five points).
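
In code form (my own labels, hypothetical vote counts), the anomaly and its fix look like this:

    # AP-style scoring: #1 = 25 ... #25 = 1, unranked = 0 (one-point gap).
    ap_style = lambda rank: 26 - rank
    assert ap_style(20) > 5 * ap_style(25)        # 6 > 5: one vote beats five

    # With the five-point bonus: #1 = 30 ... #25 = 6, unranked = 0.
    with_bonus = lambda rank: 31 - rank
    assert with_bonus(20) < 5 * with_bonus(25)    # 11 < 30: consensus team wins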

I felt that a team which is "consensus" ranked (i.e., appears in the most magazines) should have an advantage over a team which is ranked relatively high by very few publications. The AP poll has enough voters that such anomalous entries are lost in the noise; this list only has about ¼ as many.

This bonus doesn't affect the top 10-15 teams, because they are all mentioned in every magazine. It helps the lower-ranked teams that are mentioned many times, at the expense of the lower-ranked teams that are mentioned in only a few magazines.

Note that the consensus top 25 isn't significantly impacted by the five-point bonus. A comparison of different scoring methods (on 1996 consensus top 25 data) is given here.

Why compensate for shorter lists?

If I did not do this, the teams mentioned near the bottom of a short list would have a huge advantage over those that just missed the list (i.e., the short list would have undue influence on the rankings of teams that just made or just missed the cut).

For example: Suppose Syracuse were #11 in nine publications and #10 in a magazine that only made a top ten list. Suppose Southern Cal were #10 in those first nine publications, but just missed the top ten list in the last one. If I didn't make an adjustment for the shorter list, the huge drop between the last ranking on that list (#10 = 21 points) and unranked (0 points) would cause Syracuse (201 points) to be rated way ahead of Southern Cal (189 points)... even though nine of ten magazines had rated the Trojans higher! I fixed this problem by deciding that being left out of a top ten list shouldn't be as "bad" as being left out of a top 25 list.
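
The arithmetic from that example, checked in Python (the point values follow the scoring rule given above):

    syracuse = 9 * 20 + 21     # nine #11 votes (20 pts each) + #10 on the top-ten list
    usc_raw  = 9 * 21 + 0      # nine #10 votes (21 pts each); unranked worth zero
    assert (syracuse, usc_raw) == (201, 189)       # the short list flips the order

    usc_adj = 9 * 21 + 15      # with the adjustment, "unranked" = 15 on a top-ten list
    assert usc_adj == 204 and usc_adj > syracuse   # the nine-of-ten majority wins again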

This adjustment ends up awarding points to teams that are unranked, but it prevents the unfair advantage that short lists would give some teams. Alternatively, I could have made the adjustment at the other end and lowered the point values for rankings in a short list (instead of raising the value of "unranked"). For example, a magazine that only rated ten teams could be scored 15...6 points for the #1-#10 teams, and still zero for unranked. However, that would have greatly complicated the scoring script, and resulted in strange totals at the upper end of the scale.
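
Worth noting (my own observation, not from the original write-up): the two approaches would produce the same order, since every team gets some value from every magazine, and the alternative just shifts each short-list value down by a constant. A quick check:

    def raise_unranked(rank):    # method used here: top-ten "unranked" = 15
        return 31 - rank if rank is not None else 15

    def lower_ranked(rank):      # rejected alternative: 15...6, unranked = 0
        return 16 - rank if rank is not None else 0

    # The two schemes differ by a constant 15 points per team per magazine,
    # so every total shifts uniformly and the relative order is unchanged.
    for rank in (1, 5, 10, None):
        assert raise_unranked(rank) - lower_ranked(rank) == 15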

Note that the consensus top 25 isn't significantly impacted by the adjustment for shorter lists. A comparison of different scoring methods (on 1996 consensus top 25 data) is given here.


Data and programs

In case you care, you can look at Programs and raw data used in the consensus.

