Friday, December 09, 2011

A "Grantland" article on Moneyball effects

Here's a baseball salary article at Grantland, by economists Tyler Cowen and Kevin Grier. It’s a strange one ... the impression I get is that the authors are just going by the basics of the "Moneyball" story, but don’t really follow baseball discussions very much. And so some of their arguments are obviously behind the curve.

For instance, they talk about how closers used to be paid inefficiently, but aren't any more, except by free-spending teams like New York:

"This year, the Yankees' Mariano Rivera was ranked fifth in total saves with 44. At a salary of $14.9 million, that works out to be a hefty $338,600 per save. The four closers ranked ahead of him averaged 46.5 saves and a salary of $2.9 million, or $63,771 per save — quite the bargain."

The problem here is obvious to almost any serious baseball fan: closers aren’t normally evaluated by the number of saves, which is mostly a function of the opportunities the team provides. Rather, and like any other member of the roster, the closer is paid according to how many wins he can contribute to the team's record, as compared to a replacement player. For Rivera to be worth $15 million, he has to contribute about three extra wins (at a going rate of $4.5 million per win). Which means, basically, he has to blow three fewer saves, given his opportunities. Or, rather, he has to be *expected* to blow three fewer saves; there's still a lot of randomness there.
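Just to make the arithmetic explicit, here it is as a few lines of Python. (A minimal sketch: the $4.5 million per win, and the one-blown-save-equals-one-win conversion, are my round numbers, not anything from the Grantland piece.)

```python
DOLLARS_PER_WIN = 4_500_000  # rough free-agent price of one marginal win (my assumption)

def wins_needed(salary):
    """Marginal wins a player must add, over a replacement player, to be worth his salary."""
    return salary / DOLLARS_PER_WIN

# Rivera at $14.9 million:
print(f"{wins_needed(14_900_000):.1f} wins")
# -> about 3.3 wins: roughly three fewer *expected* blown saves
#    than a replacement-level closer, given the same opportunities
```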

But Cowen and Grier don't mention randomness at all. And their only reference to blown saves is in one sentence that mentions the Twins' Joe Nathan and Matt Capps, who blew 12 saves out of 41 opportunities.

Another thing: the article doesn't mention one big difference between Rivera and the others. Rivera is a free agent, while young players like Neftali Feliz can be paid whatever the team wants. The Yankees might prefer Feliz to Rivera, but that’s not a choice open to them.

It's not a new "Moneyball" discovery that "slaves" make less money than established free-agent stars ... but the article seems to imply that teams don’t realize that the $400,000 stopper can be just as valuable, for the money, as the $15,000,000 stopper.

To me, it looks like the problem is that if you don’t know baseball that well, you tend to overrate the “Moneyball” possibilities, because that’s the story that you’ve heard the most.

-----

The authors then go on to say:

"The best-known Moneyball theory was that on-base percentage was an undervalued asset and sluggers were overvalued. At the time, protagonist Billy Beane was correct. Jahn Hakes and Skip Sauer showed this in a very good economics paper. From 1999 to 2003, on-base percentage was a significant predictor of wins, but not a very significant predictor of individual player salaries. That means players who draw a lot of walks were really cheap on the market, just as the movie narrates."

The authors imply that “walks were really cheap on the market” means that the A’s had a huge hole to exploit.

But ... even if walks were indeed “really cheap,” it would still be a small hole. Walks are a significant part of a player’s value, but mispriced walks give you a small edge, not a huge one. Suppose teams valued walks at only half their actual value. If you can pick up a player with 60 walks for the price of 30, you gain about 10 runs, or one win. Not a big deal.

Of course, if you can do that nine times, that’s nine free wins. But the A’s didn’t. In 2002, they walked 609 times, third in the league. But that was only 157 more walks than Baltimore, second-worst in the league. If 157 was the number of walks they got at half-price, that’s still only two or three wins.

You could choose, instead, to compare the A’s to the 2002 Tigers, who walked only 363 times. In my view, it would be completely unrealistic to assume the A’s would otherwise have been as bad as one of the worst recent teams. But even if you do, you *still* only gain four wins.
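Here's the whole walk calculation in one place, as a little Python sketch. The run value of a walk, the ten-runs-per-win rule of thumb, and the 50 percent discount are my assumptions for illustration:

```python
RUNS_PER_WALK = 0.33   # rough linear-weights value of a walk (my assumption)
RUNS_PER_WIN = 10      # rule of thumb: about ten runs buy one win
DISCOUNT = 0.5         # suppose the market pays only half a walk's true value

def free_wins(extra_walks):
    """Wins gained by acquiring extra_walks at the discounted price."""
    surplus_runs = extra_walks * RUNS_PER_WALK * DISCOUNT
    return surplus_runs / RUNS_PER_WIN

print(round(free_wins(60), 1))    # one 60-walk player: ~1 win
print(round(free_wins(157), 1))   # 2002 A's over Baltimore: ~2.6 wins
print(round(free_wins(246), 1))   # 2002 A's over the Tigers: ~4 wins
```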

----

The authors also put too much faith in the Hakes/Sauer paper. As I wrote a few years ago, it seems to me that the paper has a few problems, and I don’t think it shows what it purports to show.

The study found a huge increase in the correlation between salary and OBP between 2003 (when the "Moneyball" book was released) and 2004. The numbers for 2004 almost exactly matched the actual value of a walk, so the authors concluded that the market became efficient over that off-season -- teams wised up after reading the book.

But that conclusion doesn’t make sense. Since only a small percentage of players got new contracts between 2003 and 2004, for the overall average to move that much, those new contracts would have had to overpay for walks at double or triple their real value! That doesn’t sound like a reasonable possibility, and it’s certainly not consistent with GMs suddenly having learned to be efficient.
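Here's the weighted-average logic as a sketch. For simplicity, it assumes walks were priced at zero before the off-season and at exactly full value after; the contract-turnover fractions are hypothetical:

```python
def new_contract_price(share_new, old_price=0.0, target=1.0):
    """
    Prices are fractions of a walk's true value. If the league-wide
    average price must land on `target`, and only the `share_new` of
    players with new contracts can move it, solve:
        target = share_new * new_price + (1 - share_new) * old_price
    """
    return (target - (1 - share_new) * old_price) / share_new

for share in (0.50, 0.33, 0.25):
    print(f"{share:.0%} new contracts -> pay {new_contract_price(share):.1f}x true value")
# 50% -> 2.0x, 33% -> 3.0x, 25% -> 4.0x
```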

-----

Finally, on the subject of correlation:

"Here's something funny about the Moneyball strategy: It is bringing us a world where payroll matters more and more. Spotting undervalued players boosts their salaries and makes money more important for the general manager; little did Billy Beane know that in the long run he would be strengthening the hand of the large home-market teams, such as the Yankees. From 1986 to 1993, payroll explained 2.2 percent of the variation in team winning percentage, and that meant spending more money yielded little return in terms of quality on the field. In the 2004 to 2006 seasons, after the Moneyball revolution was under way, payroll explained 27.1 percent of the variation in team winning percentage, which means a stronger reason to spend more."

I've written about this before, and Tango’s written about it several times: a higher r-squared does NOT necessarily mean money is more important in buying wins. Rather, the r-squared is a combination of:

1. the extent to which money can actually buy wins;
2. the extent to which teams differ in spending, in real life.

When the authors say, "spending more money yielded little return," they seem to be assuming it’s all the first thing, when it might be all the second thing.

As an example, take dueling, where two people go out at dawn, draw weapons, and one of them kills the other. Back when it was legal, dueling would explain a lot of the variation in death rates of people who didn’t like each other. Now that it’s illegal, it explains zero.

However, the fact that the r-squared dropped doesn’t mean that dueling is any less dangerous than it used to be (point 1) -- it just means that people no longer vary in how often they get killed in duels (point 2).
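If you like, here's a quick simulation of the same point: hold constant how many wins a payroll dollar buys, change only the spread in payrolls, and watch the r-squared move on its own. (The slope and noise numbers are arbitrary, picked just for illustration.)

```python
import random

def r_squared(spread, n=2000, slope=1.0, noise=6.0):
    """r-squared of payroll vs. wins when payrolls have the given spread."""
    xs = [random.gauss(0, spread) for _ in range(n)]
    ys = [slope * x + random.gauss(0, noise) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

random.seed(1)
print(round(r_squared(spread=1), 3))  # small payroll spread: r-squared near .03
print(round(r_squared(spread=4), 3))  # big payroll spread:   r-squared near .30
# Same slope both times -- a dollar "buys" exactly as much -- but the
# r-squared swings from Cowen/Grier's 1986-93 range to their 2004-06 range.
```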

The same thing could be happening here. I did a Google search and found an article (.pdf) with some team payroll data for the period in question. Its Table 1 shows that, from 1985 to 1990, fourth-quartile teams (the 25% of teams with the highest payrolls) outspent first-quartile teams by only about 2 to 1. From 1998 to 2002, the ratio jumped to 3 to 1. The paper covers only up to 2002, but a glance at later numbers seems to show around 2.5 to 1 (rising to 3.1 to 1 for the 2011 season).

This is evidence that at least *some* of the difference probably comes from teams varying more in how much they’re willing to spend.

I may be unfair to the authors here ... that might be partly what they’re saying. If I read them right, they’re saying that, armed with "Moneyball" concepts, teams are realizing they can buy wins cheaper by evaluating players more accurately (1) -- and, that teams are therefore more likely to vary in how much they pay when they know it’s money well spent (2).

But ... well, I think these effects are pretty small. As I argued, walks are a small part of the overall equation, even if they were undervalued by half (which itself is probably an overestimate). It’s not like, in 1990, teams were paying Jose Oquendo as much as Wade Boggs. To be sure, teams weren’t perfect in evaluating players -- but they were still reasonably good. Any improvement since then has to be relatively small, at the margins.

So, the idea that teams would say, "hey, we can now evaluate players slightly more accurately, so let’s go on a spending spree" doesn’t seem all that plausible.

------

What actually *did* happen to tighten the relationship between payroll and wins? As usual, you guys probably know better than I do. I’ll give you my guess anyway, which is that it’s a combination of a bunch of things:

1. It became more "socially acceptable" for teams to pay big money to free agents. Remember, 1985 to 1990 includes the collusion years, and there was probably significant pressure to keep spending down. That pressure probably did more to discourage headline-grabbing salaries than routine signings, so maybe a player who was twice as valuable wouldn’t have been able to sign for twice as much. That would help keep the correlation between salary and success low.

2. When baseball revenues exploded, they grew more in some cities than others. That meant that marginal wins would be extremely valuable to the Yankees, but not so much to the Pirates. That increased the variation in team spending, which pushed up the r-squared.

3. Teams got smarter, in line with Cowen and Grier’s theory. But I think that was a small part of what happened. Also, I’d guess that a lot of improvement in that regard would have happened well before Moneyball, as Bill James’ discoveries got around a bit. Conventional wisdom denies that baseball executives put any faith in what Bill James had to say, but ... I dunno, good ideas tend to get noticed, even if people say they don’t believe in them. Also, Bill James’ ideas showed up early in arbitration hearings, which affected the teams’ bottom lines pretty much immediately.

4. Randomness. In a team payroll-to-wins regression, Cowen and Grier give an r-squared of .022 for 1986 to 1993.

(By the way, I assume Cowen and Grier's regression adjusted for payroll inflation ... salaries more than doubled between 1986 and 1993. If they didn't adjust, that might explain the low correlation.)

I wonder if that .022 might just be an outlier. Here are equivalent numbers from Berri/Schmidt/Brook in "The Wages of Wins," page 40:

Wages of Wins:

1988 to 1994: r-squared = .062, r = .25
1995 to 1999: r-squared = .325, r = .57
2000 to 2005: r-squared = .176, r = .42

Cowen/Grier:

1986 to 1993: r-squared = .022, r = .15

The numbers sure do move around a lot! It probably doesn’t take much to knock the correlation down: you just need a few teams to get lucky and outplay their talent, and a few others to luck into some good slaves and arbs. Maybe I’ll try a simulation and see how common a .022 might actually be.
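The simulation might look something like this sketch. All the settings -- a 26-team league, eight seasons, two wins of talent per standard deviation of payroll, binomial luck over 162 games -- are just my guesses at a plausible 1986-93 setup:

```python
import random

def sample_r_squared(n_teams=26, seasons=8, wins_per_sd=2.0):
    """r-squared of payroll vs. wins for one simulated multi-season sample."""
    xs, ys = [], []
    for _ in range(seasons * n_teams):
        payroll = random.gauss(0, 1)                 # standardized payroll
        talent = (81 + wins_per_sd * payroll) / 162  # per-game win probability
        wins = sum(random.random() < talent for _ in range(162))  # binomial luck
        xs.append(payroll)
        ys.append(wins)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

random.seed(1)
trials = sorted(sample_r_squared() for _ in range(200))
print("10th percentile:", round(trials[20], 3))
print("median:         ", round(trials[100], 3))
print("share below .022:", sum(r < 0.022 for r in trials) / len(trials))
```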




2 Comments:

At Friday, December 09, 2011 3:43:00 PM, Anonymous EvanZ said...

Do teams pay "Moneyball" players more now? I would expect that by now, you can't really play Moneyball. It's just "winning" now. And just like in any sport, you pay more for the guys who help you do that. Am I right? Or are there still a lot of players who are vastly undervalued?

 
At Saturday, December 10, 2011 12:41:00 PM, Blogger bigmouth said...

EvanZ: The quest is always on for new metrics and resulting inefficiencies to exploit, which is the heart of Moneyball. It's pretty hard to do that on the offensive side anymore, so I think teams are trying hard to develop reliable defensive metrics.

Anyway, I agree with the author that the Grantland article suffers from an over-reliance on anecdote and a lack of systematic analysis of the claims being made.

 
