I wrote pretty much these same criticisms (team vs. individual attribution, value above average vs. above zero) in my review of The Wages of Wins for 82games.com shortly after the book came out.
Dave, I'm sure I read that when it came out, but I wanted to re-read it and now I can't find it on 82games. Do you have a link?
Joined: 13 Oct 2005 Posts: 374 Location: Atlanta, GA
Posted: Fri Sep 21, 2007 1:14 pm Post subject:
John Quincy wrote:
Why don't you re-post your criticisms as a comment to Berri's post?
Because, like David says, there's really no point in even confronting Berri with the flaws in his system... He simply won't acknowledge that there are flaws. He literally refuses to engage in meaningful discourse about his method. I mean, on the one hand it makes sense -- he, Schmidt, and Brook are primarily in the business of selling books, so why on earth would he openly acknowledge that said book is based on a faulty premise? But, on the other hand, that's really not in the spirit of what we're trying to do here. There's a reason for the divide between Berri-ites and APBRmetrics.
Anyway, I'm very much looking forward to a summary of Dan and David's presentation, and I want to wish them good luck at the Symposium next week. Go get 'em, guys!
Well, I respect your decision. I guess I just have a little more hope for positive dialogue and debate. Maybe at least some of the commenters would respond to your points. And some of the readers of Berri's blog might be swayed by them.
Joined: 03 Jan 2005 Posts: 671 Location: Washington, DC
Posted: Fri Sep 21, 2007 3:26 pm Post subject:
For what it's worth, I've commented on Berri's blog and exchanged emails with him, and I've found him to be reasonably responsive. Now, I haven't been challenging Wins Produced -- I've primarily been asking questions about his methods. Like anyone else, I think he's going to pick and choose which criticisms/comments he wants to answer. The thing to do is demonstrate where his system breaks down, and it sounds like that's what Dan and Dave will be doing in their paper/presentation. _________________ My blog
I'm a fan of the WP48 metric and I like the discussion here. I think there are some flaws in how Wins Produced is calculated. I've looked at the data, and it usually comes close to explaining team wins, but there is room for improvement. I'm just not sure how to correct these problems.
That's the nature of science, though. Right now Wins Produced is the best (maybe only) system of explaining how a player contributes to his team's wins. I think people should spend less time looking for logic holes in wins produced and spend more time coming up with a superior system.
Joined: 30 Dec 2004 Posts: 534 Location: Near Philadelphia, PA
Posted: Sun Sep 23, 2007 10:27 am Post subject:
mateo82 wrote:
That's the nature of science, though. Right now Wins Produced is the best (maybe only) system of explaining how a player contributes to his team's wins. I think people should spend less time looking for logic holes in wins produced and spend more time coming up with a superior system.
Or the individual win-loss system in Basketball on Paper.
There are a lot of ways to generate individual wins. We have listed 4 in 4 posts. As I pointed out in BoP, there isn't a great way to determine what is best. Sooo, what is superior? _________________ Dean Oliver
Author, Basketball on Paper
http://www.basketballonpaper.com
Joined: 03 Jan 2005 Posts: 497 Location: Greensboro, North Carolina
Posted: Sun Sep 23, 2007 11:22 am Post subject:
mateo82 wrote:
That's the nature of science, though. Right now Wins Produced is the best (maybe only) system of explaining how a player contributes to his team's wins. I think people should spend less time looking for logic holes in wins produced and spend more time coming up with a superior system.
If you add in a team adjustment, pretty much any metric, including really problematic ones like points per game and NBA Efficiency, predicts future wins (and future adjusted plus/minus) better than Wins Produced does. And remember, ANY metric with the right team adjustment can explain current team wins just as well.
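To make the "team adjustment" point concrete, here is a minimal illustrative sketch (my own construction, not Berri's actual formula): rescale any per-player rating so that each team's player ratings sum exactly to that team's actual wins. By construction, every metric then "explains" current team wins perfectly, which is why that fit alone tells you nothing about metric quality. Player names and numbers are made up.

```python
def team_adjust(player_ratings, team_wins):
    """Generic team adjustment (illustrative, not any published method).

    player_ratings: {team: {player: raw_rating}}
    team_wins:      {team: actual_wins}
    Returns ratings shifted so each team's ratings sum to its wins.
    """
    adjusted = {}
    for team, ratings in player_ratings.items():
        # residual = wins the raw metric fails to account for on this team
        residual = team_wins[team] - sum(ratings.values())
        share = residual / len(ratings)  # spread residual evenly
        adjusted[team] = {p: r + share for p, r in ratings.items()}
    return adjusted

# Hypothetical two-team example
raw = {"ATL": {"A": 20.0, "B": 10.0}, "PHI": {"C": 30.0, "D": 25.0}}
wins = {"ATL": 40, "PHI": 50}
adj = team_adjust(raw, wins)
# Each team's adjusted ratings now sum exactly to its win total,
# regardless of how good the raw metric was.
```

Because any raw metric passes this in-sample check after adjustment, out-of-sample tests (future wins, future adjusted plus/minus) are the meaningful comparison.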
Dave Lewin and I will be presenting a paper on this issue next weekend at the NESSIS conference at Harvard. The paper is not written yet (and may not be completely written by next weekend), but we will make it available when it is done. And if the paper is not done next weekend, we will make the PowerPoint presentation available.
Berri has done a fabulous job of self-promotion with Wins Produced, but it is not good science. I hope that after this paper comes out, we can just put this whole issue to bed. Let Berri do his thing, but his thing is not about advancing the science. If it was, he would have made a working paper version of his Wins Produced paper (that he has been citing for at least 3 years) available to the general public or to interested analysts who ask for this paper. That's how good work is done in academics.
Dan, I know I talked about this before, but it seems worth mentioning (in the vein of advancing science / group knowledge) that the well-known Win Shares and eWins metrics were not included in the metric study done in March 2007.
http://tinyurl.com/29jgb5
You said: "When I rewrote this program, I purposely did not include other metrics for comparison purposes, including my own statistical plus/minus metric. I want to leave the focus squarely on Wages of Wins."
Ok. Is the focus still on Wages of Wins vs. simple metrics, or will the conference report examine other major models and perhaps look for new ones? I share Mateo's interest in new methods, as well as the expressed interest in the methods developed by Justin, Mike, and Dean, and in the other metrics and blends discussed at various places in the forum.
(I am double-checking to see if other methods were rated elsewhere; if so, maybe you can remind me where.)
Dean, could you mention the section of BoP where you discuss this? Is it just chapter 15? I don't think the quest is doomed or over. I don't think a fair evaluation is that difficult; maybe it won't be totally decisive, but Dan's measurement approach seems well-designed and fair. Dan's study method applied to a broader list would be a fine next step.
Last edited by Mountain on Sun Sep 23, 2007 9:38 pm; edited 7 times in total
Joined: 03 Jan 2005 Posts: 497 Location: Greensboro, North Carolina
Posted: Sun Sep 23, 2007 12:37 pm Post subject:
The focus of the paper is to compare a handful of widely known advanced metrics to very simple metrics that folks attribute to NBA decision-makers. You may want a broader paper, and you are welcome to write that paper. But that approach would overwhelm all but a handful of readers; this paper is already going to be complicated enough.
I found that post and recognize it (with apologies that my memory didn't recall it sooner, but I did look and find it) as a useful broader study. But it is different from the multi-year correlation-with-wins study referenced above, and it does not provide the measurement given there, which I (and I think many others) would be interested in.
It included Win Shares and Dean's individual rating method, but not eWins or several other possibilities.
I'd do such a broader study correlating metrics to wins if I were a sufficiently skilled statistician and had the time and standing to make it worthwhile. Perhaps someday; perhaps not. It just seemed like something you were perfectly set up to do, so in the spirit of a forum and group research dialogue (with people playing different roles), I've asked based on your pioneering efforts.
I'll look forward to seeing which metrics you include this time. I take it the list won't be all-inclusive, but I appreciate what you have provided in the past and look forward to what I can learn from the presentation you do write.
Last edited by Mountain on Mon Sep 24, 2007 2:15 pm; edited 7 times in total
Joined: 30 Dec 2004 Posts: 534 Location: Near Philadelphia, PA
Posted: Sun Sep 23, 2007 1:06 pm Post subject:
Mountain wrote:
Dean, could you mention the section of BoP where you discuss this? Is it just chapter 15? I don't think the quest is doomed or over. I don't think a fair evaluation is that difficult; maybe not decisive, but no more difficult to start and improve on than any scientific evaluation. In this case, Dan's study method applied to a broader list would be a fine next step.
Chapter 15 and the other chapters on individual wins/net points.
If a fair evaluation is not that difficult, how come it is so rarely done? And Dan says that his paper is pretty complex. I believe it. _________________ Dean Oliver
Author, Basketball on Paper
http://www.basketballonpaper.com
I shouldn't have said "not that difficult"; I should have said "not too difficult for the skilled statistician". I was off in my response.
Dan set up a method and ran it for several metrics. The exact same methodology could be used for other metrics. Applying the existing, expertly prepared methodology to other metrics wouldn't be that difficult, given the work already done and the skill already possessed and applied. Time consuming? Yes. Time consuming for someone else to replicate and sell as equivalent to what Dan has done so far? Yes, too. But someone else might be able to handle it well.
I'd like to see how various versions of Dean's player W-L metric, using his individual offensive and team-based defensive ratings, would perform compared to what Dan included and to the other metrics out there, including any that make heavier use of individual defensive statistics, such as rough estimates of individual shot defense.
With a truly broad test, I think the best current metric at explaining team wins going forward could be identified, with proper notations about confidence intervals and other caveats; "best" at least argued with more support. Lacking a really broad test, the title of best current metric cannot yet be awarded. Some are fine with that, appreciating the variety. I wouldn't mind knowing the scores too, alongside the variety.
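The kind of broad test described above could be sketched very simply: score each metric by how well this season's player ratings, summed to team totals, track next season's team wins. The sketch below is my own illustration under made-up data; the metric names, team totals, and win counts are all hypothetical, and a real study would use many seasons and report confidence intervals.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical team totals under two made-up metrics, and the
# corresponding teams' NEXT-season win totals (illustrative data).
metric_a = [45.0, 30.0, 55.0, 40.0]
metric_b = [50.0, 28.0, 41.0, 47.0]
next_wins = [44, 33, 52, 41]

# Score each metric by out-of-sample correlation with future wins.
scores = {"metric_a": pearson(metric_a, next_wins),
          "metric_b": pearson(metric_b, next_wins)}
best = max(scores, key=scores.get)
```

A real version of this would need a full player-season dataset for every metric entered, which is exactly the large assembly task conceded below.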
I have some impatience and disappointment that research is not advancing (at least for the moment) to this broader study and its findings, given all the previous efforts, the goals of the movement, and the finding within apparent reach. If this isn't the right moment, take my view as a seed for later.
I can also see this could potentially get into the area of proprietary knowledge, professional assets, and competitive advantage: the persistent conundrum.
A narrow test will determine what it sets out to test and that certainly has value.
I concede that to do a broader study, a full dataset would need to be provided or assembled for each metric, and that would be a large task. While it would be even better to see at least a few more metrics tested, I understand that could have been, and may still be, beyond the targeted study's scope, timeframe, and audience/presentation time.
In some cases the dataset is available; in some cases it is not, at least not immediately. But if the opportunity were offered to be scored in a future study, that would create an incentive for its assembly and provision, and perhaps the study could cover a shorter span of league play. There is also the possibility that volunteer support could be requested and obtained for help with the data assembly, and perhaps even with parts of the statistical analysis, if there were enough interest in pursuing a joint project.
With enough hypothesis testing, a higher-performing new metric or meta-metric could perhaps be found. That seems worth pursuing, in my view, whether by insiders and leading voices or by other interested and sufficiently able (or guided) amateurs, near-term or down the line.
Last edited by Mountain on Sun Sep 23, 2007 11:01 pm; edited 10 times in total