Posted: Tue Sep 18, 2007 1:30 am Post subject: Questions related to adjusted +/- ratings
I have a few questions about adjusted +/- ratings.
In Dan Rosenbaum's earlier adjusted +/- work, statistical +/- ratings were incorporated. Some of my past caution / critique about adjusted +/- stems from not keying in on this feature adequately; I now see in it perhaps a partial answer to my concerns/preferences. If I understand it right, it helps blend in the individual performance of the measured player while on the court, based on statistical/boxscore data, with the weights determined by the +/- regression study. Given the stat list it was built from in the original article at 82games, I wonder whether it introduces a modest offensive bias, since more of offense than defense is measured and used in the statistical +/- rating. (Or does it at least weight up the available defensive stats, implicitly accounting for shot defense, though not in a player-specific, data-driven way that captures true individual performance on both the counted and the uncounted?)
Maybe it is there and I missed it, but I am wondering about the relative weights of the statistical and the pure +/- components, and how big the statistical share is.
Second, is it correct or incorrect that the adjusted +/- ratings David Lewin presented for 04-05 and 06-07 are "pure" adjusted +/- ratings, i.e. without statistical +/- ratings blended in? I didn't see a clear indication of this. If correct, that seems notable and especially worth keeping in mind for anyone moving between Dan's earlier numbers for players and David's later ones.
In general I think I would lean toward an adjusted +/- approach with statistical +/- ratings blended in. But I'd be interested in hearing more debate from those who know the issues far better, and other users' opinions as well.
With a conference coming up and a presentation to be made, perhaps some further discussion of adjusted +/- might be timely. If others have questions, comments or ideas, feel free to add them here too.
Last edited by Mountain on Sun Sep 23, 2007 4:50 pm; edited 3 times in total
I'll leave it to Dan and Dave to talk about the specifics of their work, but I believe that both of them have blended in box score statistics to effectively smooth the results and help them pass the sanity test.
I'm doing work to revamp basketballvalue.com for the upcoming season (e.g. tracking offensive and defensive possessions), and as part of that I expect to show the adjusted +/- without the blending, at least initially. That will probably highlight why people have introduced processing to deal with the noisiness of the results.
Adjusted +/- ratings are an average across all court appearances, though the same player does not deliver the same level of impact in every teammate, role, and opponent context. You can acknowledge that and set it aside, or you can acknowledge it and use a player's adjusted +/- in a specific context, trying to optimize your best lineup for that situation or for the overall playing-time mix you expect to see.
If you took the time to slice adjusted +/- ratings for players into splits by minutes played, you could add to the evidence obtained by looking at per-minute stats or PER performance under different levels of minutes, and to the debate going on about that. Perhaps adjusted +/- moves with minutes in a different way than individual performance does.
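As a minimal sketch of the minutes-split idea, assuming you had a table of (minutes, adjusted +/-) values to hand; all the names, minutes, and ratings below are invented placeholders, not real data:

```python
# Bucket players' adjusted +/- by minutes played, to set alongside
# per-minute stats or PER at different workloads.
players = [
    ("A", 2900, 5.0), ("B", 2400, 2.5), ("C", 1900, 1.0),
    ("D", 1400, -1.5), ("E", 900, -0.5), ("F", 400, -3.0),
]  # (name, minutes_played, adjusted_plus_minus)

buckets = {"2000+": [], "1000-1999": [], "under 1000": []}
for _name, minutes, apm in players:
    if minutes >= 2000:
        buckets["2000+"].append(apm)
    elif minutes >= 1000:
        buckets["1000-1999"].append(apm)
    else:
        buckets["under 1000"].append(apm)

# Mean adjusted +/- per minutes bucket.
bucket_means = {k: sum(v) / len(v) for k, v in buckets.items() if v}
# {'2000+': 3.75, '1000-1999': -0.25, 'under 1000': -1.75}
```

With real data you would compare these bucket means against the corresponding per-minute or PER splits to see whether the two move together.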
In his original article Dan mentioned interest in studying adjusted +/- rankings by size and age/experience. I would also be interested in seeing research get to that level.
There were just 17 players at +4 or better on offensive statistical +/- last season, and 11 on defense. About 50 were +2 or better on offense, but about 70 on defense. (With shot defense included, the defensive numbers might increase considerably.) 16 were +4 or better combined, and just over 50 were +2 or better combined. These are fairly modest top-player results (none topping +6) compared to both the pure and overall adjusted +/- top-player lists, which show a dozen or so players near or over +10.
Thinking about all this eventually led me back to how to interpret adjusted +/-. In Dan's original paper there is this quote:
"Using data from the 2002-03 and 2003-04 seasons (with the latter season being weighted twice as heavily), I find that Kevin Garnett, Tracy McGrady, Andrei Kirilenko, Tim Duncan, and Shaquille O’Neal are the five most effective players in the NBA. Replacing an average player with one of these five players would result in a team improving by about 14 points per 100 possessions or a little over 10 points per game. In other words, in 2003-04 replacing one of the average players on the Orlando Magic with one of these five players likely would have made them a bit better than the New Jersey Nets and Memphis Grizzlies."
The 03-04 Magic were at a -7 differential; moving up 10 would make them a +3, as described. But have there been any formal investigations of the transferability of adjusted +/-?
Does adjusted +/- really mean this, or does it just mean that the player's current team is 10 points better with him on the court than off, with transferability an uncharted topic (at least publicly, to my knowledge) that might yield a different answer?
Should adjusted +/- be considered "proven" only as a team-specific impact (even a dependency measure) rather than a transferable leadership impact? Where is the data for this latter possibility? How strong is it? Has the introductory descriptive explanation above of what adjusted +/- is been casually accepted as part of the methodology's conclusion?
For a specific player to have a fully transferable impact, the new team would seem to need the same mix of his statistical contributions, and even given a similar need, the mechanics of achieving it are not a sure thing.
I am not sure how much people assume adjusted +/- transferability today. Maybe that passage long ago is just an artifact to let go. There is some talk about transferability of Wins Produced, but it faces similar doubts. I don't think transferability gets brought up with win shares. Any such claims would need more proof.
I see adjusted +/- as dividing up the credit pie, a particular historical pie and anything more to me is fairly speculative.
Mountain, don't take the magnitudes of the statistical plus/minus I posted literally. I haven't looked into it closely, but for whatever reason using Dan's coefficients doesn't yield numbers with magnitudes matching the statistical plus/minus ratings he has occasionally posted. So the numbers I posted shouldn't be interpreted as saying that player X makes his team Y points per 40 minutes better offensively. However, even if the magnitudes are off, I think the figures I posted are pretty on target in terms of rank ordering players from best to worst in statistical plus/minus. _________________ Eli W. (formerly John Quincy)
CountTheBasket.com
Looking at the 05-06 pure adjusted data, I see 21 teams with 6 or more players listed as negative. Do two-thirds of the teams in the league employ 40% or more players whose performance is harmful to team average performance? About half of the total population is negative on the two-year average. It might be useful to check the statistical +/- on these guys and try to learn more about the demographics of the population too.
I also see that on about 80% of teams, the worst negative player is about as far from zero as, or farther than, the best positive player. Coincidence, or is this stretching in both directions, possibly overdoing it and not telling us enough about roles? Just asking.
Has anyone run a correlation on the two years of pure +/- available? I find a set of 261 players who played at least 500 minutes in both years. I haven't done correlation in a long while, but simply using Excel I find a correlation of 0.325. That seems low for year to year, with roughly 75% of the players on the same team.
I also found 64 players who met this condition and changed teams. The pure adjusted +/- rating changed by 4 points or more for 56% of them, by 6 or more for 33%, and by 8 or more for 13%. And a small-sample correlation of 0.346 (not much different than for all players).
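For anyone who wants to reproduce this sort of check outside Excel, here is a minimal sketch of the 500-minute filter and the Pearson correlation; the player names, minutes, and ratings are invented placeholders, not the actual data:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# (minutes_y1, pure_y1, minutes_y2, pure_y2) -- all numbers invented.
ratings = {
    "Player A": (2400, 5.1, 2500, 3.2),
    "Player B": (1800, -2.0, 900, -4.5),
    "Player C": (600, 0.5, 700, 2.0),
    "Player D": (300, 6.0, 2000, 1.0),  # under 500 minutes in year 1: excluded
}

# Keep only players with at least 500 minutes in both years.
qualified = [(p1, p2) for m1, p1, m2, p2 in ratings.values()
             if m1 >= 500 and m2 >= 500]
r = pearson_r([y1 for y1, _ in qualified], [y2 for _, y2 in qualified])
```

The same filter-then-correlate shape works for the team-changers subset: just add a same-team flag to the tuples and filter on it.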
The relative weakness of these correlations reduces the strength of the team-to-team transferability argument for pure +/-, even year to year. More research findings and discussion from strong statisticians would help. A similar test could be done for overall +/- with statistical +/- involved, to see if the strength of correlation is significantly different.
I calculated the minute weighted average of the overall rating for some teams and got some strange results. For example, the Wizards had a weighted average of 0.54 while the Spurs had an average of 0.44. Other than a mistake in my calculations, any explanation for this?
Well, Dan said that when he calculated his statistical plus/minus ratings, he didn't simply apply the coefficients to the various box score stats and leave it at that - he then had to go team by team and adjust each player's rating so that they matched up to team point differential. I did not take that extra step, though it wouldn't be that difficult to do.
Statistical plus/minus is an estimator of adjusted plus/minus, and unlike raw plus/minus, adjusted plus/minus does not have the handy property of allowing one to weight each player's rating by his minutes played and arrive at a team total that matches perfectly with point differential. One reason is that it adjusts for the strength of the opposing lineups faced, and teams don't play the exact same quality of opponents over their seasons.
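To make the calculation being discussed concrete, here is a minimal sketch of a minute-weighted team average; the rotation is invented, not actual Wizards or Spurs figures, and as noted above this average need not match the team's point differential once opponent-strength adjustments are involved:

```python
def team_weighted_rating(players):
    """Minute-weighted average rating for one team.

    players: list of (minutes, overall_rating) pairs.
    """
    total_minutes = sum(m for m, _ in players)
    return sum(m * r for m, r in players) / total_minutes

# Hypothetical eight-man rotation -- minutes and ratings are made up.
roster = [(3000, 4.0), (2800, 1.5), (2500, -0.5), (2200, 0.0),
          (1800, -2.0), (1500, 1.0), (1200, -3.0), (1000, 2.0)]
avg = team_weighted_rating(roster)  # about +0.70 for this fake roster
```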
Beyond that, there's the problem with the magnitudes that I mentioned previously. I'm not sure why those are off by so much, but that's surely part of the reason the team totals don't look right. (To see some actual magnitudes for statistical plus/minus that Dan found, see here - http://www.uncg.edu/bae/people/rosenbaum/NBA/wv2t3.txt ) _________________ Eli W. (formerly John Quincy)
CountTheBasket.com
The pure adjusted +/- rating changed year to year by 4 points or more in 56% of cases. You might be tempted to say that isn't much, but it would move you 150-250 places in the rankings. A change of 6 or more generally moves you over 300 places in a field of 487; 8 or more moves you 350-400 spots.
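Here is a rough sketch of how one might translate a rating change into rank movement, using a synthetic field of 487 roughly normal ratings; the standard deviation of 4 is an assumption for illustration, and real data would come from the published tables:

```python
import bisect
import random

# Synthetic field of 487 player ratings, roughly normal around zero.
random.seed(0)
field = sorted(random.gauss(0, 4.0) for _ in range(487))

def rank_shift(rating, delta):
    """Number of places in the sorted field spanned by a change of delta."""
    lo = bisect.bisect_left(field, rating)
    hi = bisect.bisect_left(field, rating + delta)
    return abs(hi - lo)

shift = rank_shift(0.0, 4.0)  # a 4-point move from mid-field
```

Because the field is densest near the middle, the same 4-point change spans far fewer places out in the tails, which is consistent with the ranges quoted above.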
I hadn't digested this detail before from Dan's original article:
"3) OVERALL = a * PURE + (1 – a) * STATS, where
OVERALL is the overall plus/minus rating
PURE is the pure adjusted plus/minus rating from Table 1
STATS is the statistical plus/minus rating from Table 3
a is the share of the overall rating due to the pure rating (it is chosen to minimize the standard error of the overall rating with the restriction that it fall between 10% and 90%, note that this will result in the pure rating counting less when it is especially noisy)"
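As one way to see how such a weighting could work in practice, here is a minimal sketch assuming "a" is set by precision-weighting (inverse error variance) and then clamped to the 10%-90% band; the weighting scheme and all numbers are my assumptions for illustration, not Dan's actual procedure:

```python
def overall_rating(pure, stats, pure_se, stats_se):
    """Blend pure adjusted +/- with statistical +/-.

    The share "a" here is a precision weight (inverse error variance),
    clamped to [0.10, 0.90] as in the quoted formula, so a noisy pure
    rating counts for less.
    """
    a = (1 / pure_se ** 2) / (1 / pure_se ** 2 + 1 / stats_se ** 2)
    a = min(0.90, max(0.10, a))
    return a * pure + (1 - a) * stats, a

# A very noisy pure rating gets clamped to the 10% floor:
rating, a = overall_rating(pure=8.0, stats=3.0, pure_se=6.0, stats_se=2.0)
# a = 0.10, rating = 0.10 * 8.0 + 0.90 * 3.0 = 3.5
```

This matches the stated intent that the pure rating counts less when it is especially noisy, whatever the exact error-minimization Dan used.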
In Table 4, for the top 20 players, pure +/- represented 10-61% of the rating; by inference, statistical +/- represented 39-90% of the total rating. Looking at the all-players page for Table 4 cited in the article, I find the average "a" (the pure +/- share used in the overall +/- formula) is only 18.9%.
Dan's original "overall +/-" is a true hybrid, significantly different from pure +/-: on average it is 81.1% statistical +/-, with pure +/- being the tail rather than the dog.
The pure +/- results brought to the public by David Lewin were a significant advance in what was widely known, and Aaron's intention to provide such data for the upcoming season is certainly quite welcome as well.
If overall +/- is considered better, then a next stage would be to get a 06-07 version of it into the public domain too, if anyone able to tackle it is willing to undertake the task and share it. We have the statistical +/- for 06-07, and we are told to expect the 06-07 pure +/- in the near future; the task is properly combining them. If it can't be expertly done, a last-resort hack would be to use statistical +/- plus some patch-job percentage of pure +/-, following the text and mimicking the pattern of the previous study as best you can.
It also looks possible to take adjusted +/- down to the level of the four factors, at least for the large statistical (or individual) share. The global or team-impact four-factor components could potentially be done too, if the public authors wanted to split pure +/- up that way.