Mike Giberson at Knowledge Problem unearthed an article on bunting in today's Washington Post, "Too Much of A Sacrifice?" and suggests that I "will be commenting shortly." Indeed, and thanks Mike.
The article, by Dave Sheinin, discusses the differing views of baseball managers and some sabermetricians on the wisdom of employing a sacrifice bunt. The problem with the sacrifice bunt is that it intentionally gives up an offense's most precious and scarce resource, an out, and thereby intentionally truncates the distribution of runs scored in an inning. Sheinin discusses a 2004 article by James Click of Baseball Prospectus, which shows that bunting results in fewer runs scored in an inning (yeahh...), and argues that bunting is an "archaic, outdated strategy" (hmm...).
Using data from the 2003 season, Click found that a team with a runner on first base and no outs subsequently averaged 0.919 of a run per inning. But with a runner on second and one out -- which is to say, following a hypothetical sacrifice bunt -- a team averaged 0.706 of a run per inning. That means a bunt in that situation actually "costs" a team 0.213 of a run each time it is deployed.
Similarly, with a runner on second and nobody out -- another potential bunt situation -- teams averaged 1.177 runs per inning, while a situation with a runner on third and one out yielded only 1.032 runs.
However, Click realized those numbers did not tell the full story, because they relied on an "average" player on an "average" team, with no regard to whether a team was playing for one run -- i.e., in the late innings of a close game.
So Click ran simulations using actual players to determine the thresholds for which specific hitters should and should not bunt. His conclusion: With a runner on first base and no outs, any hitter with an on-base percentage (OBP) of at least .206 and/or a slugging percentage (SLG) of at least .182 -- numbers that would encompass practically every hitter in the majors, including many pitchers -- should swing away. The only exception is when a team is playing specifically for one run, in which case the thresholds are a .282 OBP and/or .322 SLG.
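The arithmetic in the excerpt is just a comparison of run-expectancy values before and after the bunt. Here is a minimal sketch of that comparison using the 2003 figures quoted above; the state labels, dictionary layout, and helper function are my own illustration, not Click's actual data or code.

```python
# Run expectancy (average runs scored in the rest of the inning),
# keyed by (base state, outs), using the 2003 figures quoted above.
run_expectancy = {
    ("runner on 1st", 0): 0.919,
    ("runner on 2nd", 1): 0.706,
    ("runner on 2nd", 0): 1.177,
    ("runner on 3rd", 1): 1.032,
}

def bunt_cost(before, after):
    """Expected runs given up by trading the 'before' state for 'after'."""
    return run_expectancy[before] - run_expectancy[after]

# Runner on 1st, no outs -> runner on 2nd, one out:
print(round(bunt_cost(("runner on 1st", 0), ("runner on 2nd", 1)), 3))  # 0.213

# Runner on 2nd, no outs -> runner on 3rd, one out:
print(round(bunt_cost(("runner on 2nd", 0), ("runner on 3rd", 1)), 3))  # 0.145
```

Both differences are positive, which is the whole of Click's first-pass case against the bunt; the caveat in the excerpt is that these are averages over average players with no regard for the game situation.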
Sheinin goes on to discuss the issue with Bill James, Frank Robinson, and others, most of whom, including James, aren't willing to buy Click's conclusion. I've done similar calculations of this sort with Jahn Hakes (in a sense to be described in a moment, more extensive ones), and I think Click is half right.
Some managers clearly understand the tradeoff between runner advancement and giving up an out perfectly well, but others do not. The recently fired Astros managers are a good example of the contrast. Pick up a copy of Larry Dierker's This Ain't Brain Surgery, and you'll find the expected runs table he worked with reproduced in one of the chapters. But he perhaps went a bit far in employing sabermetric and unconventional thinking, and lost command of his old-school clubhouse as a result (something not factored into Click's calculations). The next guy, Jimy Williams, was a players' coach who apparently had no clue what the numbers said. Every time Adam Everett came up with a man on first last season, Jimy would call for the bunt. Being an Astros fan, this drove me nuts. I was sorry to see Dierker go, but said hallelujah! at Jimy's departure. So managers differ.
In our paper, Hakes and I did not take a stand on whether individual bunts were cases of erroneous decision-making. Although we are working on a method to document strategic errors by managers, as in the anecdotal case of Jimy Williams, this requires some sense of what the optimal frequency of bunting might be in the aggregate. As a start on that, we used thresholds similar to those mentioned above, and asked what would happen to those thresholds, and thus to the frequency of bunts, as game conditions change. Without knowing all of the constraints and objectives of the decision-maker, it is not possible to determine whether a given bunt is the right choice or not (even absent strategic considerations). But it's relatively easy to determine that bunting is a better choice in some conditions than others, and thus how changes in conditions affect the frequency of (rational) bunting.
This is a standard comparative statics exercise, long a staple of economics. And baseball managers, as a group over many seasons, largely conform to the economic model. They bunt more when conditions support that decision (fewer outs, tighter game, poorer batter) and bunt less when conditions favor swinging away (see Table 6 in Hakes and Sauer). In this limited sense, as a group they are rational decision-makers, if not sabermetricians.
But the real challenge - and this is where Click is half right - is to use economics or sabermetrics to detect Jimy Williams in the data. If statistical modeling is as powerful as some of us think, we will soon quantify the cost of strategic errors, such as excessive use of the sacrifice bunt, in a convincing manner. How many games did Jimy cost the Astros through excessive use of the sacrifice? If it was as many as one per season, that's an expensive managerial error.