Tuesday, August 08, 2006

How much is a slap shot worth? An empirical study

Back in 1986, Jeff Z. Klein and Karl-Eric Reif released a book called “The Klein and Reif Hockey Compendium.” Obviously modeled after “The Bill James Baseball Abstract,” it was entertaining and had a lot of numbers, but was not all that sabermetrically informative.

In that book, and again in the 2001 update (now simply called “The Hockey Compendium”), they introduced a goaltender rating stat called “perseverance.” The idea was that save percentage isn’t good enough – goalies who face many more shots per game will also face a higher caliber of shots, and their rating should be adjusted upward to account for their more difficult task.

It sounded reasonable, but the authors didn’t do any testing on it -- they chose their formula because it looked good to them. With the proper data, it would be pretty simple to actually investigate how save percentages change with number of shots, and create a formula that matches the empirical evidence. I’ve always thought that would be a good study to do.

But now, Alan Ryder, at hockeyanalytics.com, has an awesome study that goes many steps further and makes the Klein/Reif method obsolete. It uses (extremely useful!) NHL play-by-play data (sample) to investigate shot stoppability not just by number of shots, but by type of shot and distance from the net.

Here’s what Ryder did. First, he found that there are five special kinds of shots where type and distance don’t matter much – empty net goals, penalty shots, very long shots, rebounds, and scramble shots (shots from less than six feet away that weren’t rebounds). For those, the shot is rated by the overall probability of a goal from that type – so an even-strength rebound, which went in 34.8 percent of the time, counts as 34.8 percent regardless of the details of the shot.

For all other shots – “normal” shots -- distance matters. At even strength, the chance of scoring on a 10-foot shot was 15% -- but from 20 feet out, the chance dropped to 10%. (All figures are for even strength – on the power play, “normal” shots are uniformly about 50% more effective.)
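To make the scheme concrete, here’s a minimal sketch in Python of how those per-shot probabilities might be looked up. Only the 34.8% even-strength rebound figure and the rough 15%-at-10-feet and 10%-at-20-feet points come from the study; every other number, and the straight-line interpolation, is my own placeholder.

```python
# A sketch of Ryder-style per-shot goal probabilities. Only the rebound
# figure (34.8% at even strength) and the rough 10-foot (15%) and
# 20-foot (10%) points come from the study; the rest are placeholders.

SPECIAL_SHOT_PROB = {
    "empty_net": 0.90,     # placeholder
    "penalty_shot": 0.33,  # placeholder
    "very_long": 0.02,     # placeholder
    "rebound": 0.348,      # from the study (even strength)
    "scramble": 0.25,      # placeholder: under six feet, not a rebound
}

def goal_probability(shot_type, distance_ft, power_play=False):
    """Rough probability that a shot becomes a goal."""
    if shot_type in SPECIAL_SHOT_PROB:
        # Special shots: type alone sets the probability.
        return SPECIAL_SHOT_PROB[shot_type]
    # "Normal" shots: probability falls with distance. A crude straight
    # line through the two published points (15% at 10 ft, 10% at 20 ft).
    prob = max(0.01, 0.15 - 0.005 * (distance_ft - 10))
    if power_play:
        prob *= 1.5  # power-play shots are about 50% more effective
    return prob
```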

He also found that the probabilities varied for different types of shots. Only 6.7% of slapshots were goals, but over 20% of tip-ins went in.

(Ryder is quick to note that the data does not mean that players should change their shot selection based on these findings, implicitly acknowledging that players are likely choosing the shot most appropriate for the situation.)

Combining types and distances, Ryder came up with a graph of the chance of scoring on any combination of shot and distance, and then smoothed out the results. Unfortunately, we don’t get the full set of data, but we get a graph of relative probabilities. For instance, a slapshot is a bit above average in effectiveness (relative to other shot types at that distance) anywhere from 5 to 50 feet, but drops after that.
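Ryder doesn’t publish the full table or his smoothing method, but the shape of the thing is easy to sketch: a grid of raw goal probabilities by shot type and distance, smoothed along the distance axis. All the numbers below are invented for illustration; the moving average is just one plausible way to smooth.

```python
# Toy (shot type x distance) grid of raw goal probabilities, then a
# centered moving-average smooth along the distance axis. All numbers
# here are invented; Ryder's actual table isn't published.
raw_grid = {
    "slapshot": [0.18, 0.12, 0.09, 0.07, 0.05, 0.03],  # 5,15,25,35,45,55 ft
    "wrist":    [0.20, 0.13, 0.08, 0.05, 0.03, 0.02],
    "tip_in":   [0.28, 0.20, 0.12, 0.06, 0.03, 0.01],
}

def smooth_row(row, window=3):
    """Centered moving average; damps noise in sparsely sampled bins."""
    half = window // 2
    out = []
    for i in range(len(row)):
        chunk = row[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = {shot_type: smooth_row(row) for shot_type, row in raw_grid.items()}
```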

Having done all that groundwork, Ryder is now in a position to easily evaluate defenses and goaltenders. Basically, the best defense is one that keeps the opposition from taking more dangerous shots. It’s now possible to add up the probabilities of all shots taken, to see how many “expected goals” the defense allowed. For instance, if a team allows six shots, each with a 15% chance of scoring, it’s effectively yielded 0.9 of a statistical goal.

And, of course, you can now evaluate the goalies, too. If the offense’s shot probabilities added up to 4.4, but only four goals were scored, you can credit the goalie with 0.4 goals saved. Or, as Ryder chooses to do it, you’d give him a “goaltending index” of 0.909, which is 4 divided by 4.4.
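The bookkeeping is nothing more than summing probabilities. Here’s a minimal sketch reproducing the two examples above (the 4.4 expected goals are built here from 22 hypothetical 20% shots):

```python
def expected_goals(shot_probs):
    """Expected goals allowed: the sum of each shot's goal probability."""
    return sum(shot_probs)

def goaltending_index(goals_allowed, shot_probs):
    """Ryder's index: actual goals divided by expected goals.
    Below 1.0 means the goalie gave up fewer goals than expected."""
    return goals_allowed / expected_goals(shot_probs)

print(expected_goals([0.15] * 6))        # six 15% shots -> about 0.9
print(goaltending_index(4, [0.2] * 22))  # 4 goals on 4.4 expected -> 0.909...
```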

I’ll leave it to you to check out the study – which is very easy to read and understand – to find out who the best and worst goalies and defenses are. I’ll mention only one of Ryder’s examples. In 2002-03, the Rangers allowed 21 more goals than the Lightning. But, after adjusting for the types and distances of shots, it turns out that the Rangers’ goaltending was actually significantly better – an advantage more than wiped out by a defense that allowed many more quality chances.

Ryder’s study is by far the best hockey study I’ve seen (subject to the disclaimer that I haven’t seen that many). My only concern is that, just as Ryder points out that not all shots are equal, it’s probably also true that not all 30-foot wrist shots are equal. There could be many other factors that affect that kind of shot – who the shooter is, whether it was a one-timer, whether the goalie is screened, whether the defense is out of position, and so forth.

This doesn’t affect a team’s overall rating (which is simply goals allowed), but it would affect the proportion of credit or blame to assign to the goaltender. If the goalie is faced with a lot of difficult 30-foot wrist shots, he will be underrated by this system. If the 30-foot wrist shots are easy, he will be overrated.

Is this a big factor? One way to find out would be to see if a goalie’s rating is reliable and consistent from year to year, especially when he changes teams. If it’s not – and to the extent it’s not – that would be evidence that defenses vary in ways that aren’t captured by shot type and distance alone.
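That test is just a year-over-year correlation of the ratings. A sketch with made-up indexes for six goalies:

```python
from scipy.stats import pearsonr

# Hypothetical goaltending indexes for the same six goalies in
# consecutive seasons (made-up numbers).
year1 = [0.95, 1.02, 0.88, 1.10, 0.99, 0.93]
year2 = [0.97, 1.05, 0.91, 1.04, 1.01, 0.96]

r, p = pearsonr(year1, year2)
print(f"year-to-year correlation: r = {r:.2f} (p = {p:.3f})")
# High r: the index is mostly measuring the goalie. Low r: unmeasured
# shot quality (or plain luck) is swamping the goalie's contribution.
```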

6 Comments:

At Wednesday, August 09, 2006 2:14:00 PM, Blogger JavaGeek said...

“It sounded reasonable, but the authors didn’t do any testing on it -- they chose their formula because it looked good to them. With the proper data, it would be pretty simple to actually investigate how save percentages change with number of shots, and create a formula that matches the empirical evidence. I’ve always thought that would be a good study to do.”

I did a (poor) regression (expected scoring percentage against shots for and team expected shooting percentage, on a per-game basis) and found that each extra shot increases the difficulty of all shots by 0.0002, or 0.02%. So ten extra shots increase shot difficulty by 0.2%: shooting percentage goes from about 10.1% to 10.3%, and save percentage drops from .899 to .897.
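For the curious, the regression described would look something like this (the per-game numbers below are placeholders, not the actual data):

```python
import numpy as np

# Placeholder per-game data: shots faced, and the expected scoring
# percentage on those shots. Not the data behind the regression above.
shots = np.array([22.0, 35.0, 28.0, 31.0, 40.0, 25.0, 33.0, 27.0])
exp_pct = np.array([0.099, 0.102, 0.100, 0.101, 0.104, 0.099, 0.102, 0.100])

# Ordinary least squares fit: exp_pct = slope * shots + intercept
slope, intercept = np.polyfit(shots, exp_pct, 1)
print(f"each extra shot changes shot difficulty by {slope:.5f}")
# The fit described above came out to roughly +0.0002 per shot.
```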

 
At Wednesday, August 09, 2006 2:18:00 PM, Blogger Phil Birnbaum said...

Hey, thanks, that took a bit of effort ... much appreciated. I will dig out a copy of Klein and Reif and see how that compares to their perseverance formula.

 
At Friday, August 11, 2006 11:39:00 AM, Blogger Phil Birnbaum said...

Hi, javageek,

Did you really want to include team expected shooting percentage? I think Klein and Reif's idea was that lots of shots means both (a) better opposition and (b) better shots from that particular opposition. Your regression gives only (b), doesn't it, because it adjusts for (a)?

 
At Sunday, August 13, 2006 1:49:00 AM, Blogger JavaGeek said...

Fair enough:
A 0.00027 increase in difficulty per extra shot, or 0.027% (not the biggest difference; about one standard deviation).

It explains 0.9% of the variability (R-squared of 0.009). There are a lot of teams at both ends (few shots -> tough shots, and lots of shots -> easy shots), so on average it works out to almost nothing.

 
At Sunday, August 13, 2006 9:26:00 AM, Blogger Phil Birnbaum said...

Thanks again, javageek.

Klein and Reif say an extra shot is worth 1.6666 points of save percentage. So a .900 goalie facing three extra shots per game (that is, three more than average) would get an increase of five points, for a perseverance rating of .905.

Your regression says that three extra shots should be worth only 0.81 of a point, from .900 to (almost) .901.

So Klein and Reif overestimate the effects of shots by a factor of six.

Did I get that right?
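Spelling out that arithmetic:

```python
kr_per_shot = 1.6666  # Klein/Reif: save-percentage points per extra shot
jg_per_shot = 0.27    # JavaGeek's regression: 0.00027 = 0.27 points

extra_shots = 3
print(kr_per_shot * extra_shots)  # ~5.0 points: .900 -> .905
print(jg_per_shot * extra_shots)  # ~0.81 points: .900 -> (almost) .901
print(kr_per_shot / jg_per_shot)  # ~6.2: the "factor of six"
```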

 
At Sunday, August 13, 2006 3:35:00 PM, Blogger JavaGeek said...

Sounds right to me

 
