NCAA Seeding — A Lot of Noise

The 2013 NCAA tournament has afforded another test of whether the seeding process incorporates much more noise than signal.  For those following the tournament, it should come as no surprise that this year offers more evidence that the seeding process is suspect.

I collected scores and seed differences for all 48 games from the first two rounds.  Using a common statistical technique (regression analysis), I examined the relationship between seed differences and score differences.  Seed differences explain a paltry 6 percent of the variation in score differences.  When the 1-16 and 2-15 games are dropped, this figure falls to only 3 percent.  Even as a predictor of win-loss outcomes rather than scores, seed differences fare poorly.  For many of the games, plucking scores from a lottery hopper would provide nearly as much information about outcomes.
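For readers who want to replicate the calculation, here is a minimal sketch of the kind of regression described above.  The file name, column names, and data are hypothetical; only the general technique (a least squares fit of score margin on seed difference, judged by its R-squared) mirrors the analysis in the post.

```python
# Minimal sketch, not the author's actual script. The file and column names
# ("round1_round2_games.csv", "score_margin", "seed_diff") are hypothetical;
# the technique is an OLS regression whose R-squared is reported.
import pandas as pd
import statsmodels.api as sm

def variance_explained(df, predictor, outcome="score_margin"):
    """Fit OLS of `outcome` on a single `predictor` and return R-squared."""
    X = sm.add_constant(df[predictor])            # intercept + predictor
    return sm.OLS(df[outcome], X).fit().rsquared

# One row per game: the higher seed's winning margin (negative for an upset)
# and the difference between the two teams' seeds.
games = pd.read_csv("round1_round2_games.csv")

print("Seed difference R^2:", variance_explained(games, "seed_diff"))

# Dropping the 1-16 and 2-15 games (seed differences of 15 and 13):
subset = games[games["seed_diff"] < 13]
print("Without 1-16 / 2-15 games:", variance_explained(subset, "seed_diff"))
```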

Is seeding really intended to predict score differential?  Obviously, not directly.  However, seeding reflects a gauge of teams’ in-season performance quality – wins and losses adjusted for quality of competition.  If this gauge means much, with better seeds indicating higher-quality teams, differences between seeds should show up in score differences.  Instead, games with large seed differences wind up close or end in upsets, while games with narrow seed differences produce some blowouts.

It’s not just the NCAA Selection Committee that struggles to find meaningful differences between teams.  Vegas point spreads, which are direct estimates of score differences, explained only about 17 percent of the variation in score differences.  That’s a stunningly low number given that spreads incorporate information such as injuries and best guesses about team-specific matchups.
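The spread comparison is a one-line extension of the hypothetical sketch above, assuming the same data file also carries the closing Vegas line for each game:

```python
# Continuation of the sketch above; "vegas_spread" is an assumed column
# holding the closing point spread for each game.
print("Point spread R^2:", variance_explained(games, "vegas_spread"))
```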

The 1 and 2 seeds are still very likely to win their opening-round games, but even these games have become much more contested.  Based on this year’s tourney, I might amend my original suggestion (select seeds 1-4, with everyone else placed randomly) to just selecting the 1 seeds and randomizing from there.  Yes, that would lead to some seemingly strong teams facing other strong teams in the first round, and weak versus weak, but that’s already happening.  The setup just puts a different face on it.

Author: Brian Goff


3 thoughts on “NCAA Seeding — A Lot of Noise”

  1. I’d offer this up: I think you’re confusing seeding as a resume-based reward system with seeding as a prediction mechanism. “Better” teams lose all the time to inferior ones — I’ll define the better team as the one that would win more than 50 of 100 playings of the matchup — yet seeding “better” teams (and thus ones more likely to perform in relation to their seed) isn’t the point of seeding. That kind of system would be incredibly unfair and defeat the point of playing the games anyway, unless you went full-blown MLB.

    Instead, you’re rewarding teams based on outcomes, which is the fun in all of this anyway. You find this with football poll arguments all the time, and it drives me a little mad. The point is who wins, not who “would” win.

    So that’s my 2 cents.

  2. I think one reason the selection and seeding of teams appears so random is the use of pseudo-indicators like RPI and “wins vs. RPI top 50” when evaluating teams.
    Because of the number of teams and the variability in scheduling, everyone agrees that W-L alone is a poor indicator of team strength. RPI attempts to correct for the bias of opponent strength by blending a team’s winning percentage with the winning percentages of its opponents and its opponents’ opponents (a minimal sketch of that weighting appears after the comments). This is nonsensical: we started from the premise that W-L is a bad indicator, so we can’t use W-L as the only ingredient for adjusting W-L for its own bias.
    Any reasonable rating system at least takes each game into account individually, adjusting for the strength of both teams, the location (home/away), and the outcome (which team won). The BCS uses many systems of this kind; the RPI does not do this.
    Even better systems incorporate total score and margin of victory to produce better statistical predictions. As one would expect, they do not always agree with RPI.
    To compare a few examples:
    New Mexico St – RPI #2, but #11 by a W-L-based rating system (Dolphin Std), and #25 (Dolphin Pred) or #37 (BBRef SRS) by score-based systems. The NCAA awarded a #3 seed.
    Middle Tenn St – RPI #28, but #41 (Std), #41 (Pred), #49 (SRS). Earned a bid and a #11-ish seed (Last Four) to play the statistically superior St Mary’s.
    Ole Miss – RPI #48, but #34 (Std), #28 (Pred), #27 (SRS). The SEC champ was bumped down to a dismal #12, exactly where RPI would predict.
    Boise St – RPI #44, but #54 (Std), #57 (Pred), #60 (SRS). Similarly snuck into the tourney through rosy RPI glasses and faced a more deserving La Salle (#46 RPI, #44, #50, #55).

    I could go on, but RPI will consistently produce these differences. Simply playing two fewer conference games boosts RPI, because the ratio of soft non-conference games is higher. There is a notion that RPI is biased against major conferences, but which conferences, why, and how is lost on the NCAA and the selection process.
    http://www.ncaa.com/rankings/basketball-men/d1/ncaa_mens_basketball_rpi
    http://www.sports-reference.com/cbb/seasons/2013-standings.html
    http://www.dolphinsim.com/ratings/ncaa_mbb/index_pred.html

    Even worse, the NCAA ranks conferences using RPI, an even more flawed measure at that level because its primary component is non-conference win-loss. This gives the shocking result that the Mountain West (87-26 non-conference, 36-1 vs. teams ranked outside the top 50) is the best conference:
    1) MWC 2) B10 3) BEast 4) ACC 5) B12 6) P12 7) A-10 8) SEC 9) MVC 10) WCC
    The NCAA seemed to agree here, giving this 9-school league 5 bids, versus only 4 for the 12-team ACC that was hurt by RPI measures.

    Both Dolphin and SRS agree exactly on the order of top conferences:
    1) B10 2) BEast 3) ACC 4) P12 5) B12 6) MWC 7) SEC 8) A-10 9) MVC 10) WCC
    The Mountain West drops five places to #6, and suddenly their 2-5 NCAA tourney performance looks more appropriate. As more teams from mid-level conferences become competitive, the RPI formula breaks down further, because those teams’ schedules still remain relatively isolated.

    http://www.cbssports.com/collegebasketball/bracketology/conference

    Sometimes RPI mimics a reasonable ranking, but its deviations are non-random and seem to particularly skew towards or against a whole conference at once. I would love to see that statistical regression with those 7 MWC games removed (as they might have had higher seeds/bids than they deserved) or possibly remove results of any team where seed and rating differed by more than a threshold.
    My conclusion is that more precision in the selection and seeding process is necessary, not less. Those slight probability differences in the initial bracket seeds add up to significant cumulative changes in a team’s chances of advancing deeper in the tourney. The sloppy RPI method makes the results appear more random, and upsets more “likely”, but that is more a symptom of the poor system than a truth about parity in basketball. As much as the gap between #1 and #16 has narrowed, any consecutive-seed game (8/9, 4/5, 2/3, etc.) should reliably be competitive, and we should rarely see Vegas favoring the higher-numbered seed.

  3. I think this rates a “who cares?” Seeding is about mean values/history, while games are won or lost due to day-to-day variance (at least in part – there are obviously other factors). The game-to-game variance for any given matchup is fairly large and getting larger season to season. The only matchup with enough difference in means to overcome the variance is 1 vs. 16, and considering that there have been one-point wins in the past, an upset there is only a matter of time.

    The seeding process has well-known issues, especially the 3-14 and 4-13 games. There doesn’t seem to be any incentive to “correct” them by seeding the 13 and 14 seeds higher. They don’t usually last long if they win anyway, which gets them out of the way of the “real” teams.

    The NCAA (and CBS) have a vested interest in reducing the effect of variance in the tournament – ad revenue – so they try hard to give the higher-seeded teams every advantage that they can. That this doesn’t work like it “should” is an indication that there is more at work here.

    And if the seedings were perfect, who’d want to watch? This year, if everything went perfectly, you could hand Louisville the trophy and not bother with the tournament. (Yes, I know–strawman argument–but that’s what “perfect seeding” would do.)
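For concreteness, here is a minimal sketch of the standard RPI weighting referred to in the second comment: 25 percent own winning percentage, 50 percent opponents’ winning percentage, and 25 percent opponents’ opponents’ winning percentage.  The schedule data below is made up, and the NCAA’s home/road weighting of wins (along with other refinements) is omitted.

```python
# Simplified RPI sketch with made-up schedules; the NCAA's home/road win
# weighting and other refinements are omitted.
schedules = {
    "Team A": [("Team B", True), ("Team C", True)],
    "Team B": [("Team A", False), ("Team C", True)],
    "Team C": [("Team A", False), ("Team B", False)],
}

def wp(team, exclude=None):
    """Winning percentage, optionally excluding games against `exclude`."""
    results = [won for opp, won in schedules[team] if opp != exclude]
    return sum(results) / len(results) if results else 0.0

def owp(team):
    """Opponents' winning percentage, excluding their games against `team`."""
    opponents = [opp for opp, _ in schedules[team]]
    return sum(wp(o, exclude=team) for o in opponents) / len(opponents)

def oowp(team):
    """Opponents' opponents' winning percentage (simplified)."""
    opponents = [opp for opp, _ in schedules[team]]
    return sum(owp(o) for o in opponents) / len(opponents)

def rpi(team):
    # Standard 25/50/25 weighting of WP, OWP, and OOWP.
    return 0.25 * wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

for t in schedules:
    print(t, round(rpi(t), 3))
```

Because half the weight sits on opponents’ records, scheduling quirks alone can move a team’s RPI substantially, which is the distortion the commenter describes.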
