
Overrated and Underrated via RAPM (Part 2)

November 10, 2012

A week ago, I discussed overrated and underrated players over the past 12 years, evaluated by comparing popular box-score stats with Jeremias Engelmann’s 12-year average Regularized Adjusted Plus/Minus (RAPM) dataset. For a primer on the virtues of RAPM (in large samples), see my article reviewing the state-of-the-art of adjusted plus/minus and stabilization.

This time, I am looking at offensive and defensive RAPM over the same 12 years, and comparing them to Offensive and Defensive Win Shares and PER. Again, I am limiting this to very large sample sizes (20,000+ possessions) where RAPM should be both accurate and stable.
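For readers unfamiliar with how RAPM is actually estimated: it is a ridge regression of stint-level point margin on player on/off indicators, with a large penalty that shrinks low-sample players toward average. Here is a minimal sketch with a made-up four-player, four-stint toy matrix — the player columns, margins, and penalty value are all hypothetical, purely to show the mechanics:

```python
import numpy as np

# Hypothetical toy stint data: each row is one stint, each column one player.
# +1 if the player was on the court for the "home" side, -1 for the "away"
# side, 0 if off the floor. Real RAPM matrices have one column per player
# and tens of thousands of stints.
X = np.array([
    [ 1,  1, -1, -1],
    [ 1, -1,  1, -1],
    [-1,  1,  1, -1],
    [ 1, -1, -1,  1],
], dtype=float)

# Point margin per 100 possessions for each stint (made-up numbers).
y = np.array([4.0, -2.0, 1.0, 3.0])

# Ridge penalty. In practice lambda is large and chosen by cross-validation;
# this value is arbitrary and only illustrates the shrinkage.
lam = 2000.0

# Closed-form ridge solution: beta = (X'X + lam*I)^-1 X'y.
# Each coefficient is that player's estimated impact per 100 possessions.
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
```

With a penalty this heavy relative to four stints, every rating is shrunk nearly to zero — which is exactly why RAPM only becomes informative at the 20,000+ possession samples used here.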



First of all: the R^2 of PER on ORAPM is 0.38, while the R^2 of OWS/48 on ORAPM is 0.48. This indicates that Offensive Win Shares do a better job of measuring offensive value than does PER.
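For anyone who wants to reproduce this kind of comparison, the R^2 values come from a simple linear regression of each player's ORAPM on his box-score metric. A sketch with hypothetical per-player numbers (the actual study used only 20,000+ possession players):

```python
import numpy as np

# Hypothetical per-player values: a box-score metric (OWS/48-style)
# and an ORAPM-style rating. These six data points are invented.
box_stat = np.array([0.10, 0.15, 0.08, 0.20, 0.12, 0.18])
orapm    = np.array([1.5, 3.0, -0.5, 5.5, 2.0, 4.0])

# Regress ORAPM on the box-score stat, then compute R^2 from residuals.
slope, intercept = np.polyfit(box_stat, orapm, 1)
pred = slope * box_stat + intercept
ss_res = np.sum((orapm - pred) ** 2)   # unexplained variance
ss_tot = np.sum((orapm - orapm.mean()) ** 2)  # total variance
r_squared = 1 - ss_res / ss_tot
```

For a one-variable regression like this, R^2 is just the squared correlation between the two metrics, so an R^2 of 0.48 corresponds to a correlation of about 0.69.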

However, both box score stats underrate a similar set of offensive players: excellent point guards and creators. The most underrated players by both stats are Steve Nash and Baron Davis, both excellent passers/creators. Steve Nash is rated by ORAPM as the best offensive player of the past 12 years by a significant margin, rating out at +7.0, or 7 points per hundred possessions better than average. Second is D-Wade at +6.2; third is Kobe at +5.9.

Looking at the players most underrated by both box-score stats: Nash, Davis, Mike Conley, Ray Allen, Toni Kukoc, Damon Jones, Paul Pierce, Vladimir Radmanovic, Gilbert Arenas, and … Shaq.

I guess the exception proves the rule?


Conversely, most of the overrated offensive players are post players. The most overrated player on offense by both systems is Andrew Bynum. Both box score stats rated him well above average, but ORAPM says he has been pretty bad. I am not totally sure what to make of this; it could reflect issues with ORAPM, or he may simply be overrated.

The most overrated players on offense: Andrew Bynum, Hakim Warrick, Yao Ming, Rajon Rondo (!), Marcin Gortat, Aaron Williams, Craig Smith, Dikembe Mutombo, Tony Battie, Eddy Curry, and Michael Curry. Outside of Rondo, that is a who’s who of “not creators” on offense! It appears that assists (at least as these implementations use them) are not enough to capture the effect of great creators, or of poor ones, on offense.


Okay, let’s look at the other side of the ball.

First of all: note the confirmation of something John Hollinger has said often, but most people ignore: PER does not measure defense. At all. So I won’t even discuss PER for rating defensive players. It doesn’t.


That leaves us to look at Defensive Win Shares. It is well known that the box score doesn’t really measure defense well. All we’ve got to go on are blocks, steals, rebounds, and the overall points allowed to the opposing team. That said, DWS/48 still manages an R^2 of 0.48 on DRAPM for this sample, oddly enough the same as was found for OWS/48 on ORAPM! When I previously looked at similar R^2 numbers with a different data set, I found OWS on ORAPM of 0.59, and DWS on DRAPM of 0.45. Some more investigation is needed on these subjects…

The most underrated defensive players of the last 12 years: Andrew Bogut, Brendan Haywood, Jason Collins, Amir Johnson, LaMarcus Aldridge, Dikembe Mutombo, Kevin Garnett, Shawn Bradley, Quinton Ross, Casey Jacobsen, and Nick Young (!!). It looks like Defensive Win Shares underrates the excellent rim protectors and mobile defensive bigs.


On the other hand, let’s look at who is overrated by Defensive Win Shares: Carlos Boozer, Karl Malone, Andres Nocioni, Troy Murphy, Drew Gooden, J.J. Hickson, Chris Webber, Malik Rose, Steve Smith, and Jeff McInnis.

So the most overrated by Defensive Win Shares are again bigs: offensive-minded bigs who are known for their poor defense. In other words, bigs MATTER on defense. Both the underrated and overrated lists for defense are populated mostly by bigs. Box score stats are not going to tell you how good your bigs are on defense with any certainty, but your bigs will definitely determine how good your team’s defense is.


6 Responses to Overrated and Underrated via RAPM (Part 2)

  1. Chris on November 18, 2012 at 7:09 am

I have never seen nor understood Carlos Boozer being rated. He is an average regular-season guy who cannot produce in the playoffs; poor Chicago. Similarly, Jeff McInnis?! “LOL!”

  2. durvasa on November 28, 2012 at 11:23 am

    According to this, Yao Ming’s offensive RAPM over his career was -0.5. But from Jeremias Engelmann’s page, Yao’s ORAPM season-by-season was: 0.3, 0.9, -0.3, 0.9, 2.7, 1.3, 1.7, (out for year), -1.3 (only 5 games).

    It looks like there is a discrepancy here. Are you guys maybe using different methods for calculating RAPM, or does computing over multiple seasons do something funny to the ratings?

    • DanielM on November 28, 2012 at 12:53 pm

      J.E. has just moved from pure RAPM to box-score informed RAPM, which is a different animal and not independent anymore. In addition, it uses height in the regression, a variable in which, if I mistake not, Yao was an outlier.

You are correct, though: true single-year RAPM will not sum to multi-year RAPM. That was not the (biggest) issue here, though.

  3. Underrated and Overrated Via RAPM | DStats on February 14, 2013 at 8:53 am

    [...] Overrated and Underrated via RAPM (Part 2) [...]

  4. DanielM on November 17, 2012 at 4:07 pm

    Possibly. Could be an issue with the RAPM related to the averaging.

    Remember, though, this only is picking up from the 2000 season on, well after Karl’s peak.

