In Defense of NFL Passer Ratings

Browsing around some of my favorite sites, I came across a torrid rant on NFL QB Passer Ratings (QBPR) by Brian Billick posted by Advanced NFL Stats:

“We can admit quarterback rating is useless…I’ve never seen a more useless number than quarterback rating.”

My first reaction — Billick hasn’t seen many stats if he thinks this is the most useless. However, the usually reliable ANS praises Billick and dismisses a “confused” defense of QBPR offered at ColdHardFootballFacts.com:

Of course passer rating will correlate with success on the field. It counts touchdowns and interceptions! That doesn’t mean a stat is useful. Why don’t we just count those instead of adding all kinds of arbitrary lard to create unit-less, non-predictive nonsense? This is the kind of stuff that gives stats a bad name. Billick is right.

While conceding that the CHFF defense is convoluted, I’m willing to serve as QBPR’s public defender and assert that my client is wrongfully accused. Until a few years back, I harbored my own skepticism about QBPR, but then I investigated further. Do more meaningful measures exist? Very likely. Are there weaknesses in this measure? Certainly. Is the construction of “unit-less” composite indicators bad statistical practice? That’s where I enter a plea of not guilty for QBPR.

ANS accuses QBPR from both ends — it contains “lard” and also obviously correlated items. QBPR combines Completions as a Percent of Attempts, Yards per Attempt, Touchdowns per Attempt, and Interceptions per Attempt. Lard? At least on their face, all of these relate to performance in non-trivial ways, and QBs exert substantial influence over them. What’s an indicator supposed to do — include only things that correlate poorly? Why not use these or other structural components of performance rather than a composite indicator? Structural, underlying influences can be useful, but so are composite indicators such as the CPI. I really don’t understand this criticism.
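
For readers who want to see how those four components get combined, here is a minimal sketch of the standard NFL formula as I understand it: each component is rescaled, capped between 0 and 2.375, averaged, and multiplied by 100 (maximum rating 158.3). The example numbers at the bottom are made up for illustration.

```python
def passer_rating(comps, atts, yards, tds, ints):
    """NFL passer rating from the four per-attempt components.

    Each component is scaled and capped between 0 and 2.375,
    then the sum is averaged and multiplied by 100.
    """
    clamp = lambda x: max(0.0, min(2.375, x))

    a = clamp((comps / atts - 0.3) * 5)    # completion percentage
    b = clamp((yards / atts - 3) * 0.25)   # yards per attempt
    c = clamp((tds / atts) * 20)           # touchdowns per attempt
    d = clamp(2.375 - (ints / atts) * 25)  # interceptions per attempt

    return (a + b + c + d) / 6 * 100


# Example: 25 of 35 for 300 yards, 3 TD, 1 INT -> roughly 114
print(round(passer_rating(25, 35, 300, 3, 1), 1))
```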

The accusations grow more venomous. Billick charges QBPR with uselessness in the extreme. ANS calls it non-predictive nonsense. Ouch! Evidence from a sample of 64 games from the 2010 season suggests otherwise (two games from each week, using the first Sunday and last Sunday afternoon games listed on the Yahoo scoreboard; if a game had more than just a few throws by someone other than the starting QB, I used the adjacent game). QBPR explains 60 percent of the variation in offensive points among the teams. The difference in Passer Rating between the two QBs in a game explains 65 percent of the variation in the score differential. No subset of the structural components (including allowance for non-linear effects) improved these results, nor did experimenting with the inclusion of sacks or a few other variables.
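
The calculation behind percentages like those is nothing exotic: regress points on passer rating and read off the R-squared. The sketch below uses made-up numbers rather than the actual 2010 sample, just to show the shape of the exercise.

```python
import numpy as np

# Hypothetical stand-in data: one row per team-game, with the starting
# QB's passer rating and the offense's points scored (not the real sample).
rating = np.array([110.2, 68.5, 95.1, 82.4, 131.0, 55.7, 101.3, 74.9])
points = np.array([31, 13, 24, 17, 38, 10, 27, 20])

# Simple OLS of points on passer rating; for a one-variable regression,
# R^2 is just the squared correlation coefficient.
slope, intercept = np.polyfit(rating, points, 1)
r_squared = np.corrcoef(rating, points)[0, 1] ** 2

print(f"points = {intercept:.1f} + {slope:.2f} * rating, R^2 = {r_squared:.2f}")
```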

Is QBPR the best composite indicator? I’m not at all making that claim. QBPR centers on passing only, ignoring the value of a QB’s legs. Dave Berri at Wages of Wins has developed his own measure (WP100 = wins per 100 plays) that goes further, incorporating QB impacts on wins (2010 WP100 Ratings).

Does QBPR produce results very similar to a more carefully designed indicator like WP100? For the 2010 season data, and using only “qualifiers” — QBs with a minimum number of pass attempts — the correlation between the two is 91 percent. QBPR correlates slightly higher (0.53) with points per game than WP100 does (0.50). I’m not claiming that QBPR is ultimately a more informative and better measure. Instead, I’m only showing that it is, in fact, a highly informative, useful indicator of a QB’s contribution, closely related to a more sophisticated measure. The larger differences between QBPR and WP100 appear for QBs with few plays. As the number of plays increases, the two measures seem to converge, which isn’t all that surprising.
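
Anyone wanting to check that comparison can do so with a qualifier filter and a correlation. The sketch below uses invented ratings and an assumed attempts cutoff, not the actual 2010 data or Berri’s published WP100 values.

```python
import numpy as np

# Hypothetical season-level data: (pass attempts, passer rating, WP100)
# per QB.  The WP100 values are invented placeholders.
qbs = [
    ("QB A", 570, 104.3, 0.52),
    ("QB B", 492,  91.0, 0.31),
    ("QB C", 179,  77.8, 0.12),   # below the attempts cutoff
    ("QB D", 539,  95.6, 0.44),
    ("QB E", 450,  82.1, 0.20),
]

MIN_ATTEMPTS = 224  # assumed "qualifier" threshold for a 16-game season

qualifiers = [(r, wp) for _, att, r, wp in qbs if att >= MIN_ATTEMPTS]
ratings, wp100 = map(np.array, zip(*qualifiers))

corr = np.corrcoef(ratings, wp100)[0, 1]
print(f"Correlation between passer rating and WP100 (qualifiers only): {corr:.2f}")
```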

Author: Brian Goff