APBRmetrics: The statistical revolution will not be televised.
HoopStudies
Joined: 30 Dec 2004 Posts: 705 Location: Near Philadelphia, PA
Posted: Sat Dec 15, 2007 4:49 pm Post subject: Next in Basketball Analysis |
D2W posted something recently that got me thinking. I quote some below:
davis21wylie2121 wrote: |
A few months ago, Gary Huckabay of Baseball Prospectus wrote an article on the "Death of Baseball Analysis". It's a thought-provoking piece that you should read in its entirety, but here are the parts which seem particularly pertinent to APBRmetrics today:
Quote: | Baseball analysis is dead.
It’s not wounded, it’s not in hibernation, it’s not at the nadir of a repeating cycle—it’s dead. Good thing, too. Let it go unmourned. Don’t get me wrong. There are still lots of interesting questions to answer, and as I'll go into, I’ve purposely tightened the definition of analysis to illustrate the point. When I’m talking about ‘baseball analysis’ here, I’m talking about the rigorous review of player performance data. I’m not talking about the inclusion of pitch velocity and location data that’s now coming available, and I’m not talking about the integration of scouting data with performance data. I’m strictly talking about activities like developing value metrics, forecasting, and all the other stuff we do with the massive yarn-ball of data we’ve all put together over the years.
Baseball analysis is dead because its utility has pretty much vanished. Analysis and information are only interesting and useful if they inform a decision, and even then, there really needs to be some sort of advantage or gain present relative to competitors in order for the time investment to be worthwhile. At this point in history, baseball analysis really has very little to offer on that front.
...
Any club that actually wants to use baseball analysis now to develop and maintain an advantage relative to their competitors has a tough task in front of them. They need to expand the scope of the data used for the analysis. They need to identify real changes that can be made in their operations if real phenomena are unearthed. They need to have people of sufficient skill to find these new discoveries. They need to develop a culture receptive to adopting the changes implied by this newfound wisdom. And finally, they need to find a way to keep other organizations from discovering the formula to their secret sauce. That’s a reasonable description of what clubs need from their search on the datafields of the game, and it’s precisely what baseball analysis cannot provide. Because baseball analysis is dead. |
I think we're pretty much at that point, too. Our lessons from the box-score stats are basically:
* Evaluate team offense and defense on a per-possession basis. (And that offensive rebounds don't constitute new possessions.)
* Per-minute stats > per-game stats.
* Shooting % matters. (A sub-point being that eFG%, TO rate, Reb %, and FT rate explain almost 100% of team efficiency.)
* Individual efficiency is influenced by a player's role in the offense.
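Those box-score lessons are compact enough to write down directly. A minimal sketch in code, assuming the common possession estimate (FGA - ORB + TOV + 0.44*FTA) and made-up team totals for illustration:

```python
# Possession estimate: offensive rebounds do not start new possessions,
# so they are subtracted back out; 0.44 is the usual empirical share of
# free throws that end a possession.
def possessions(fga, orb, tov, fta):
    return fga - orb + tov + 0.44 * fta

# The four box-score factors that explain almost all of team efficiency.
def four_factors(fgm, fg3m, fga, ftm, fta, orb, opp_drb, tov):
    poss = possessions(fga, orb, tov, fta)
    return {
        "eFG%": (fgm + 0.5 * fg3m) / fga,   # shooting, weighted for threes
        "TOV%": tov / poss,                  # turnovers per possession
        "ORB%": orb / (orb + opp_drb),       # share of own misses rebounded
        "FT rate": ftm / fga,                # free throws made per FGA
    }

# Made-up team totals, just to show the shape of the numbers:
stats = four_factors(fgm=36, fg3m=6, fga=80, ftm=19, fta=25,
                     orb=12, opp_drb=30, tov=14)
# stats["eFG%"] is 0.4875; possessions(80, 12, 14, 25) is about 93
```

Evaluating these per possession rather than per game is exactly the first lesson above: it removes pace from the comparison.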
We've spent so damned much time arguing about player evaluation metrics that we've lost sight of the big picture, which is to advance what we know about basketball itself. We could argue (and have argued) for days about whether we prefer eWins or PER or pW-pL, or TENDEX, or anything else somebody has come up with, but the simple truth is that it doesn't matter. ...
...
The future of APBRmetrics is not in these tired arguments over boxscore player-rating systems. It's in improving +/- data, counting new things, getting like-minded people into front offices, and improving the way NBA teams do business. It's in finding better ways to measure defense, and incorporating that into our existing knowledge base.
... |
This is relevant across a few threads. The thread talking about who is actually in jobs should note the above wisdom. All threads on player productivity can take note.
I want to both retreat a bit from the points and emphasize them. First, to retreat:
* Player "analysis", as Gary described it above, is not dead in basketball because basketball is much more nuanced than baseball. We know the structure of baseball. New methods of baseball player valuation (as I prefer to call it) are all very similar and do not really provide enough new clarity to overcome just the variation in player performance. In contrast, our modeled structure of basketball is incomplete. I like to think that the possession framework is good, but it is primarily a team framework. More detailed models of offense and defense, incorporating individual skills, have not been developed. Parameterization of even the existing models is not complete (we don't have great defensive stats). Creating new models creates new needs for data.
* Skill curves were my first attempt at putting a more detailed offensive model together. They are quite useful, but really just a start. I think that furthering this effort is part of player valuation.
* Implicit in the above is the concept of "fit." How well do teammates fit together? I recently did a paper for JQAS on teammate fit, using as a simple example the sport of frescoball (I'll try to post it somewhere). Frescoball is just that sport you play on the beach, hitting a ball back and forth with wooden paddles. It's 2 people cooperating, and they have 2 different skills: being able to retrieve difficult shots at all, and being able to return easy shots to a good spot. Just with that, you can show how important fitting these skills together can be. It illustrates a balance between individual ability and fit. We all subjectively accept that fit is important, but there has been no attempt to quantify it. That would come with more detailed offensive and defensive models.
* Finally, I would say that, outside of the really good and really bad performances, there is nowhere near consensus even subjectively on what is good or bad. 2 players with exactly the same boxscore stats in a game can really have played 2 very different quality games. This is a point made to advocate adj +/-, but I make it to advocate other methods in general.
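The frescoball point can be made concrete with a toy model (my own reconstruction of the idea for illustration, not the model from the JQAS paper): a player drops the ball only when a hard shot arrives and they fail to retrieve it, and how often shots arrive hard depends on the partner's placement skill.

```python
def per_exchange_survival(a, b):
    """Probability a pair survives one exchange each.

    Toy assumptions: a player faces a hard shot with probability
    (1 - partner's place skill), drops a hard shot with probability
    (1 - own retrieve skill), and always keeps easy shots alive.
    """
    s_a = 1 - (1 - b["place"]) * (1 - a["retrieve"])
    s_b = 1 - (1 - a["place"]) * (1 - b["retrieve"])
    return s_a * s_b

# Three player types with the SAME average skill level (0.7):
retriever = {"retrieve": 0.9, "place": 0.5}
placer    = {"retrieve": 0.5, "place": 0.9}
allround  = {"retrieve": 0.7, "place": 0.7}

# Identical individual averages, different team outcomes:
# two retrievers (~0.90) beat two all-rounders (~0.83), which beat
# a retriever paired with a placer (~0.74). Fit, not just talent.
pairs = {
    "retriever+retriever": per_exchange_survival(retriever, retriever),
    "allround+allround":   per_exchange_survival(allround, allround),
    "retriever+placer":    per_exchange_survival(retriever, placer),
}
```

In this particular toy model, good retrieving covers for a partner's poor placement, so the "complementary" pairing is actually the worst; the only point is that pairings with identical average skill can produce very different team results, which is fit.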
That's the retreat. Here is the emphasis:
* With all the battling over player valuation methods, does anyone think we really are closer to having a better method? I think we are closer to what D2W says: "I'll use my metric, you'll use yours, and we'll just have to agree to disagree." That's not as good as moving together with consensus, but it's fine. The battling has moved things farther from consensus, it seems.
* Football quantitative analysis is operating in a very different way than we in basketball or those in baseball operated - but a way we might consider. Aaron Schatz and his collaborators really do a great job at studying small things. They study the offensive line, the defensive backfield, the wide receivers -- the units. They better understand how those units operate without yet putting together the big picture of how the units interact. If you haven't seen Pro Football Prospectus, I recommend it as a different perspective on how to study sports at the very least. For us, we don't need a more detailed framework to understand parts of basketball that we aren't capturing right. When we say a person is a good rebounder or good passer or good shooter, what does that mean? We don't even have a good "passer rating." Roland's passer rating was quick and dirty and, I think, he doesn't really try to stand behind it. By coming up with a new passer rating, even conceptually without the numbers, it can lead to insights about a part of the game.
* Collecting new data is HUGE. The availability of data is there. Watch a game and track something, even something easy, but do it in a way to open up insight. I warn people to stay away from quality judgments when doing so -- tracking "good" picks vs "bad" picks isn't helpful because what is good and bad is better decided by the experts who are setting up plays. Focus on tracking facts or subjective judgments that don't imply whether it's good or bad. Define what a contested shot is and stick with it. Contesting Ray Allen at 20' is good, but contesting Shaq at 20' is not -- still you should record both as contested shots. You can sort out what is good or bad later, but be consistent with the data recording.
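As a concrete (hypothetical) example of judgment-free charting, a record like this stores only observable facts; whether a given contest was wise gets sorted out in the analysis, not at recording time:

```python
from dataclasses import dataclass

@dataclass
class ShotEvent:
    shooter: str        # who shot: a fact
    distance_ft: int    # distance from the basket, in feet: a fact
    contested: bool     # per one fixed definition, applied to every shot
    defender: str       # nearest defender: a fact, not a judgment
    made: bool

# Contesting a great shooter at 20 feet is good; contesting a center at
# 20 feet is not. Both are recorded identically as contested shots.
chart = [
    ShotEvent("Allen", 20, True, "Bowen", False),
    ShotEvent("O'Neal", 20, True, "Duncan", False),
]

contested_rate = sum(s.contested for s in chart) / len(chart)  # 1.0 here
```

The discipline is all in the `contested` field: one fixed definition, applied to every shot, so that later analysis compares like with like.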
* If you want to do studies that require some measure of player quality, pick something that exists or, if you make a new one, keep it simple. Use PER or Adj +/- or WP or Minkoff Player Rating or Win Shares or Tendex or Pts Created or NBA Efficiency or Alt Win Score. Or just look at guys who played at least 2000 minutes. Yeah, your results become sensitive to which metric you used, but if it ain't interesting, no one will care anyway. If it is interesting, expect to look at other metrics or to look at the question more broadly. Heather's work that JonathanG published on the draft used metrics of success that I wouldn't use (if I recall correctly), but they were still interesting. It may not pass muster at some high profile thing, but it was thought-provoking, showed well-rounded thought, and a lot of people saw it. It gives something to cite for future work.
* Finally, politics happen when people pick sides. We aren't big enough or influential enough to have sides. Let's get big and influential first. _________________ Dean Oliver
Author, Basketball on Paper
The postings are my own & don't necessarily represent positions, strategies or opinions of employers. |
Kevin Pelton Site Admin
Joined: 30 Dec 2004 Posts: 979 Location: Seattle
Posted: Sat Dec 15, 2007 8:14 pm Post subject: Re: Next in Basketball Analysis |
HoopStudies wrote: | New methods of baseball player valuation (as I prefer to call it) are all very similar and do not really provide enough new clarity to overcome just the variation in player performance.
* With all the battling over player valuation methods, does anyone think we really are closer to having a better method? I think we are closer to what D2W says: "I'll use my metric, you'll use yours, and we'll just have to agree to disagree." That's not as good as moving together with consensus, but it's fine. The battling has moved things farther from consensus, it seems. |
I remember Baseball Prospectus describing baseball as searching for that "last 1%" of capturing player (offensive) value, or something like that. There was some debate as to the importance of that last 1%.
Certainly, when you look at the differences in the way, say, Wins Produced and PER treat shot creation, we're nowhere near that similarity. I do think that if we come to a better understanding of the value of shot creation, this may bring some clarity to the debate.
Quote: | If you haven't seen Pro Football Prospectus, I recommend it as a different perspective on how to study sports at the very least. |
Yes, yes. I think football analysis has more to offer us than baseball analysis at this point.
FootballOutsiders' analysis has basically started from evaluating teams and then gone to units, as you mention, and then to individuals. In basketball, team analysis hasn't been as important, so it's kind of developed independently of player analysis.
Quote: | * Finally, politics happen when people pick sides. We aren't big enough or influential enough to have sides. Let's get big and influential first. |
I'd like to avoid picking sides in general, but yeah, at least let's put it off. |
basketballvalue
Joined: 07 Mar 2006 Posts: 208
Posted: Sat Dec 15, 2007 10:38 pm Post subject: |
Interesting post, Dean. To your point, I think one area where we can have a nice impact as a community is adding statistics to what is commonly tracked by the mainstream. This is similar to the way +/- and Blocked Attempts have been added to the boxscores this year, and the way OBP has become common vernacular in baseball.
It would be interesting to try to reach real consensus here on what we'd like a boxscore to look like 10 years from now, and then create some of those boxscores today by manually scoring some of this year's games, or perhaps last year's NBA Finals. I believe we've talked about something like this previously, but I'm not advocating trying to do this for every game this season.
Obviously, this would be a little different kind of project, and one that might have more impact than developing one's own personal rating system that is discussed here but not elsewhere.
Thanks,
Aaron _________________ www.basketballvalue.com
Dan Rosenbaum
Joined: 03 Jan 2005 Posts: 541 Location: Greensboro, North Carolina
Posted: Sun Dec 16, 2007 5:26 am Post subject: |
I don't think we will ever reach the moment baseball has reached in terms of statistical analysis. Unlike in baseball, statistical analysis in basketball is much, much harder and done optimally would require (a) significant investments in programming and data, (b) skills and intuition in econometrics/statistics on par with the superstars in applied microeconomics (the Steve Levitts and Justin Wolfers of the world), and (c) an understanding of basketball on par with coaches like Larry Brown and Hubie Brown. On top of this, the statistical analysis needs to be able to communicate effectively to non-stats folks.
In baseball, large investments in (a) have already been made. The skills mentioned in (b) are largely superfluous; good econometrics/statistics intuition is sufficient. The understanding in (c) is really superfluous, too. Baseball statistics can be done quite well with the baseball understanding of a really serious fan. All of that means that with small investments, a team can get perhaps 80% of the benefits of baseball statistical analysis. Whatever is left may not be worth the investment.
In basketball I believe that many teams have used and do use statistical analysis in ways that not only do not help them, but actually make them worse off. I am not going to go into specifics here, but I have talked about this in other posts and in my paper with Dave Lewin. But more importantly, there is no one person who has the set of skills that I describe above for statistical analysts in basketball. And so different analysts will carve out their niches in different ways, and there is unlikely to ever be the consensus there is in baseball.
Take the huge divergence in what three academics - Wayne Winston, Dave Berri, and I - say about doing statistical analysis. One of the many things we have learned in this debate between Berri and me is that the issues are complex enough that evaluating different claims is very hard. Notice how little folks have weighed in on that debate here. Part of it is that folks are bored with it, but part of it is that the ideas are subtle and complex and it is a lot of work to sort through everything that is going on. In baseball the ideas are simple enough that lots of people can enter in the debate, but in basketball it will increasingly be harder for folks without an immense amount of time on their hands (and without sufficient econometrics/statistics skills and basketball understanding) to follow the debate.
(It is no accident that one of the success stories from this board is someone who learned a ton of statistics/econometrics/basketball analysis while being laid up in bed recovering from a car accident.)
And since those folks with those skills and time are likely to get scooped up by teams, it may be hard to maintain a public discussion of anything resembling the cutting edge in forums like this. My debate with Berri is largely not cutting edge; it is just my (likely failed) attempt to bring the economics profession into the 21st century in terms of basketball statistical analysis.
There likely will always be many teams in the NBA that do just fine with doing very little statistical analysis. In baseball small investments in statistical analysis can reap large returns, but in basketball small investments are a crapshoot. A small investment in someone who is careful and cautious and takes the time to learn from others might help some, but a small investment in someone who falls in love with their models could very easily do more damage than good. (And that is the group most likely to get hired by teams, since those folks have something to sell.) Bigger investments are also risky. Invest in the wrong analyst and it is possible for them to do more damage than good.
But done the right way I strongly believe that there are huge gains from statistical analysis in basketball. And these gains are likely to last a very long time, if not forever, because not every team will have a decision-maker with the skills to evaluate whether they are getting useful statistical analysis or flawed statistical analysis.
Last edited by Dan Rosenbaum on Sun Dec 16, 2007 4:05 pm; edited 1 time in total |
HoopStudies
Joined: 30 Dec 2004 Posts: 705 Location: Near Philadelphia, PA
Posted: Sun Dec 16, 2007 1:52 pm Post subject: |
Dan Rosenbaum wrote: | I don't think we will ever reach the moment baseball has reached in terms of statistical analysis. Unlike in baseball, statistical analysis in basketball is much, much harder and done optimally would require (a) significant investments in programming and data, (b) skills and intuition in econometrics/statistics on par with the superstars in applied microeconomics (the Steve Levitts and Justin Wolfers of the world), and (c) an understanding of basketball on par with coaches like Larry Brown and Hubie Brown. On top of this, the statistical analysis needs to be able to communicate effectively to non-stats folks.
|
I don't know about the economics skills. I did a lot of my work before I had any real training in anything. I did the basic framework stuff before I was a junior in college. Further, I've always felt that statistics are less a practice area than a logical way of thinking about data. You can think logically about data without formal statistics training. Look at data, understand what it says in whole, don't go for the headline story (which is probably a lie anyway). The reason I know anything about stats and math is because I cared about sports. I learned about a standard deviation because it mattered for sports. I think EdK would say the same general thing (though it could be that Ed and I are part of the fraction that Dan suggests is doing work that makes our teams worse).
My point is to keep people from thinking you need a PhD in economics to make it. Basketball is much harder than baseball, but it is just as visible. You can study basketball cheaply: just watching, listening to experienced coaches (most of those who serve as broadcasters really know what they're talking about), charting data, studying the small things rather than trying to build another player value method, reading all the stats that are out there (studying the anecdotes), and reading JQAS. And, if it drives you, if it is your passion, you can learn the tools you need. Ask EdK or Dave Lewin. _________________ Dean Oliver
Author, Basketball on Paper
The postings are my own & don't necessarily represent positions, strategies or opinions of employers. |
Dan Rosenbaum
Joined: 03 Jan 2005 Posts: 541 Location: Greensboro, North Carolina
Posted: Sun Dec 16, 2007 4:44 pm Post subject: |
I think DeanO is missing my point. Advanced training is neither a necessary nor sufficient condition for doing good statistical analysis. I can think of lots of people with little or no formal training who are much better "applied micro-econometricians" than folks with Ph.D training in economics, statistics, etc. In my paper with Dave Lewin (an undergraduate with little formal training in econometrics) it is the Ph.Ds who fare the worst. (Unless Hollinger has gotten a Ph.D sometime recently.) And this is no accident. We Ph.Ds tend to fall in love with our models and that can blind us to obvious problems, especially when our Ph.Ds result in others giving us more credence than they should. Also, lots of folks with Ph.Ds have terrible intuition about data analysis.
On the other hand, there are some Ph.Ds in economics and related fields who have unbelievably good intuition about data analysis, and lots can be learned from interacting with those folks. Also, advanced training gives folks a bigger toolkit, which can sometimes be very helpful in framing problems or in coming up with methods to deal with them.
That said, the criteria that I listed for a good statistical analyst are something that no one is going to meet. That was the whole point. Different folks are going to be different mixes of those criteria, and that will lead to different analysts doing things very differently. That doesn't happen as much in baseball, because there isn't much of a return to being a really good "applied micro-econometrician" or really understanding the game well. There is a return, but it is much smaller than in basketball. |
mathayus
Joined: 15 Aug 2005 Posts: 207
Posted: Sun Dec 16, 2007 5:44 pm Post subject: |
Hmm. Calling 'baseball stats' dead is basically saying "they've been so successful, any future improvement will be very small." I'd be very hesitant to make any such claims about basketball. Maybe there won't be any further quantum leaps, but I wouldn't want to give up after accomplishing only a fraction of what's been done in baseball.
Then there's the fact that defensive ratings are still so weak right now; even non-stats guys who know anything about the game could think of ways to improve that measurement significantly based on improved game tracking. I see that, and it's hard for me to think this is the end. |
Neil Paine
Joined: 13 Oct 2005 Posts: 774 Location: Atlanta, GA
Posted: Sun Dec 16, 2007 7:36 pm Post subject: |
Good posts, all. I'd like to emphasize again my meaning in the original post: basketball analysis is NOT dead, nor is APBRmetrics. But I do think we've done all we can using just the set of raw boxscore stats that have been tracked since 1978. People -- myself included -- are always coming up with new (although not necessarily novel) ways to twist that same old dataset, but I don't think the future of APBRmetrics goes in that direction. Baseball stats "died" when sabermetricians realized that they had done everything possible using only boxscore stats, and that's what I mean when I say that we're at the same point. Recognizing this and moving forward, sabermetrics has done great work with pitch f/x tracking, play-by-play data, video scouting, and innovative defensive measures; in other words, they're counting new things, because they've gotten all the utility they can out of the old things. Feel free to disagree, but I think we've also reached that same point, where boxscore stats simply don't cut it anymore -- we've learned all of the lessons they offer, and it's time to shift our focus. This is not to say that we shouldn't use those stats at all (I use them all the time), but rather that they should no longer be the focal point of the analysis. My original remarks were basically an agreement with Dean when he pointed out how much time and energy on this board is wasted on arguments over these boxscore rating systems, resources which could be better spent on developing new ideas. To paraphrase Bill James, the world needs another boxscore-based rating system like Custer needed more Indians. I'm certainly not saying we've been anywhere near as successful as baseball's stat-heads, but I think both fields have reached the point where we have to branch out beyond those old statistical categories and develop new ones. |
HoopStudies
Joined: 30 Dec 2004 Posts: 705 Location: Near Philadelphia, PA
Posted: Sun Dec 16, 2007 8:34 pm Post subject: |
davis21wylie2121 wrote: | Good posts, all. I'd like to emphasize again my meaning in the original post: basketball analysis is NOT dead, nor is APBRmetrics. But I do think we've done all we can using just the set of raw boxscore stats that have been tracked since 1978. People -- myself included -- are always coming up with new (although not necessarily novel) ways to twist that same old dataset, but I don't think the future of APBRmetrics goes in that direction. Baseball stats "died" when sabermetricians realized that they had done everything possible using only boxscore stats, and that's what I mean when I say that we're at the same point. Recognizing this and moving forward, sabermetrics has done great work with pitch f/x tracking, play-by-play data, video scouting, and innovative defensive measures; in other words, they're counting new things, because they've gotten all the utility they can out of the old things. Feel free to disagree, but I think we've also reached that same point, where boxscore stats simply don't cut it anymore -- we've learned all of the lessons they offer, and it's time to shift our focus. This is not to say that we shouldn't use those stats at all (I use them all the time), but rather that they should no longer be the focal point of the analysis. My original remarks were basically an agreement with Dean when he pointed out how much time and energy on this board is wasted on arguments over these boxscore rating systems, resources which could be better spent on developing new ideas. To paraphrase Bill James, the world needs another boxscore-based rating system like Custer needed more Indians. I'm certainly not saying we've been anywhere near as successful as baseball's stat-heads, but I think both fields have reached the point where we have to branch out beyond those old statistical categories and develop new ones. |
If we do our job, the boxscore GROWS. It already is growing, with blocks against and +/-. Note, for instance, that Phoenix in 2004-05 was +9.2 with Joe Johnson and +1.8 without him, whereas Atlanta the next year was -6.5 with him and +0.7 without him. If that's in the boxscore, it changes the story a little. (The new stats should change those player value methods, but we haven't managed to talk about that.)
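The with/without split of this kind can be computed from stint-level data. A sketch, with made-up stint records rather than the actual Suns or Hawks numbers:

```python
def on_off_per48(stints, player):
    """Net points per 48 minutes with and without `player` on the floor.

    Each stint is (players_on_court, minutes, team_pts, opp_pts).
    """
    on, off = [0.0, 0.0], [0.0, 0.0]   # each: [minutes, net points]
    for players, minutes, team_pts, opp_pts in stints:
        bucket = on if player in players else off
        bucket[0] += minutes
        bucket[1] += team_pts - opp_pts
    per48 = lambda b: 48 * b[1] / b[0] if b[0] else 0.0
    return per48(on), per48(off)

# Hypothetical stints for one game, not real data:
stints = [
    ({"Johnson", "Nash"}, 30, 60, 55),  # +5 in 30 minutes with Johnson
    ({"Nash"}, 18, 30, 33),             # -3 in 18 minutes without him
]
with_jj, without_jj = on_off_per48(stints, "Johnson")  # 8.0 and -8.0 per 48
```

Real inputs would come from play-by-play substitution data aggregated over a season; the arithmetic is the same.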
That is our goal: build a richer story, in part through building a bigger, better boxscore, but through many means. I tried to frame some of those ways earlier.
APBRmetrics is far from dead, but arguing over player value metrics clearly distracts us a lot and potentially keeps us from the developments we need to make to keep it growing. _________________ Dean Oliver
Author, Basketball on Paper
The postings are my own & don't necessarily represent positions, strategies or opinions of employers. |
Neil Paine
Joined: 13 Oct 2005 Posts: 774 Location: Atlanta, GA
Posted: Sun Dec 16, 2007 9:56 pm Post subject: |
Exactly. I love the fact that +/- and blocks against (which they've tracked for a long time in Euroleague) are now being included in some box scores. I'd love even more to see the inclusion of some of the things that 82games tracks: "bad passes", shots and assists broken down by type (jump, close, dunk, etc.), rebound chances, turnovers broken down by type... the list goes on and on. We're at the stage of the game where a small increase in the amount of data tracked could reap huge benefits for the analysis community. |
Dan Rosenbaum
Joined: 03 Jan 2005 Posts: 541 Location: Greensboro, North Carolina
Posted: Sun Dec 16, 2007 10:12 pm Post subject: |
HoopStudies wrote: | APBRmetrics is far from dead, but arguing over player value metrics clearly distracts us a lot and potentially keeps us from the developments we need to make to keep it growing. |
HoopStudies wrote: | Finally, politics happen when people pick sides. We aren't big enough or influential enough to have sides. Let's get big and influential first. |
DeanO, I get that my work discussing Wins Produced has, in your opinion, been a bad thing, but like you I can't discuss everything that I work on these days. But this work is sufficiently removed from my real work that I think I have been able to make contributions to this community without damaging my work with the Cavs.
For the longest time you made the argument that player evaluation metrics could not be evaluated. But my paper with Dave Lewin does just that in two different ways. That to me is a big contribution that helps advance the field. Being able to evaluate what we do is really important and these methods we developed could be adapted to evaluate lots of other things.
Between our JQAS paper and this paper, we also make advances in the area of the theory of possession usage. Possessions are a fundamental building block of practically everything that we do, so I think this is useful.
Finally, I think this whole discussion has really put a bulls-eye on the usage/efficiency tradeoff, and I think that is a good thing. Understanding that tradeoff better is a key to doing basketball statistics analysis better.
Again, like you I cannot divulge everything that I am working on for the Cavs, but given that I think that these have been useful contributions to the field. But it seems like you are arguing that I would have helped more if I had been like you over the past few years and just made "big picture" comments about things every once in awhile (outside of our JQAS article).
Yes, being specific and pointed leads to disagreements, but I believe it also leads to advancements. The only difference between my treatment of Berri and your treatment of Winston and Sagarin in your book is that I have developed new tools to evaluate the claims of Berri whereas you mostly just relied on the "laugh test" to dismiss Winston and Sagarin.
And finally, I think it is a good thing to point out that statistical analysis can sometimes lead to worse decision-making. We lose credibility if we are not willing to be critical of our own work. I know in the short term it helps both you and me if more teams take Berri's work seriously, but I also feel an obligation to the science. And personally, I don't think it helps me one bit to be critical of Berri. If my goal was to promote myself, I would stay above the fray and leave the heavy lifting to someone else. |
Mike G
Joined: 14 Jan 2005 Posts: 3605 Location: Hendersonville, NC
Posted: Mon Dec 17, 2007 6:32 am Post subject: Re: Next in Basketball Analysis |
HoopStudies wrote: |
davis21wylie2121 wrote: |
...We've spent so damned much time arguing about player evaluation metrics that we've lost sight of the big picture, ...
The future of APBRmetrics is not in these tired arguments over boxscore player-rating systems.
... |
* With all the battling over player valuation methods, does anyone think we really are closer to having a better method? I think we are closer to what D2W says: "I'll use my metric, you'll use yours, and we'll just have to agree to disagree." |
basketballvalue wrote: | It would be interesting to try and reach real consensus here on what we'd like a boxscore to look like 10 years from now, and then manually create some of those...
Obviously, this would be a little different kind of project, and one that might have more impact than developing one's own personal rating system that is discussed here but not elsewhere. |
Dan Rosenbaum wrote: | ... the statistical analysis needs to be able to communicate effectively to non-stats folks.
... different analysts will carve out their niche in different ways and so there is unlikely to ever be the consensus ...
in basketball it will increasingly be harder for folks ... to follow the debate.
... |
Hoopstudies wrote: | ...I did a lot of my work before I had any real training in anything. I did the basic framework stuff before I was a junior in college.... |
Dan Rosenbaum wrote: | ..We Ph.Ds tend to fall in love with our models and that can blind us to obvious problems, especially when our Ph.Ds result in others giving us more credence than they should. Also, lots of folks with Ph.Ds have terrible intuition about data analysis. ... |
davis21wylie2121 wrote: | ...
boxscore stats simply don't cut it anymore -- we've learned all of the lessons they offer...
...time and energy on this board is wasted on arguments over these boxscore rating systems, resources which could be better spent on developing new ideas. To paraphrase Bill James, the world needs another boxscore-based rating system like Custer needed more Indians...
|
Hoopstudies wrote: | If we do our job, the boxscore GROWS. It already is with blocks against and +/-.
... arguing over player value metrics clearly distracts us a lot and potentially keeps us from the developments we need to make to keep it growing. |
So, if I've been evaluating players since about 1985, and I keep finding better ways of twisting the available data, this is just a distraction and an impediment to actual progress? We're better off pushing and waiting for more stuff to be tracked?
A 'movement' such as APBRmetrics doesn't live or die based on availability of data. It lives by infusion of (generally young) new people with a vital interest. Evaluating players is fun. Waiting for Dr. Know to tell us the Truth isn't fun.
Basketball is a game for most people, and a job for a relative few. The only times I think to take a game more seriously is when it could be fun to do so.
You never know who is going to have a vital insight, just as you never know who is going to make a key steal in a game, or hit an amazing shot. We are interested by what interests us; no one has been dragged into any discussion. The enemy is not "another rating system". _________________
36% of all statistics are wrong |
|
|
|
Harold Almonte
Joined: 04 Aug 2006 Posts: 616
|
Posted: Mon Dec 17, 2007 9:18 am Post subject: |
|
|
About some D.R. ideas: Kasparov and other chess players were used to help build software that later beat him.
About the baseball comparison: Basketball could be more predictable than baseball even with worse data and metrics. I think it's because players' slumps are not as long, and not as hard to make up for with a replacement. Probably the role of testosterone differs between the two games, but that's another thread.
About some Dean ideas: 82games is already the first of the next steps in BB metrics (which, by the way, is just entering its teens). It's true that hockey plays and potential plays will never be boxscored live; that's difficult, not to say impossible. What's needed is a parallel boxscore tracked from TV replay, and, as somebody said above, a consensus to establish rules above personal judgments will be needed too. |
|
|
|
HoopStudies
Joined: 30 Dec 2004 Posts: 705 Location: Near Philadelphia, PA
|
Posted: Mon Dec 17, 2007 9:37 am Post subject: |
|
|
Dan Rosenbaum wrote: |
Finally, I think this whole discussion has really put a bulls-eye on the usage/efficiency tradeoff, and I think that is a good thing. Understanding that tradeoff better is a key to doing basketball statistics analysis better.
|
This is a good thing, but I didn't see people's comments really recognizing this as a result of the discussion.
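To make the usage/efficiency tradeoff concrete, here is a toy sketch that assumes each player's offensive rating declines linearly with his share of possessions. Every rating, slope, and usage number below is invented purely for illustration; nothing is taken from real data or from any analyst's actual model.

```python
# Hypothetical usage/efficiency ("skill curve") illustration.
# Assumption: offensive rating falls linearly as usage rises
# above the team-average 20% share. All numbers are made up.

def offensive_rating(base, slope, usage):
    """Points per 100 possessions at a given usage rate (0-1)."""
    return base - slope * (usage - 0.20)

def team_rating(players, usages):
    """Possession-weighted team offensive rating."""
    total = sum(u * offensive_rating(b, s, u)
                for (b, s), u in zip(players, usages))
    return total / sum(usages)

# Three players: (rating at 20% usage, rating lost per unit of extra usage)
players = [(115, 50), (108, 30), (102, 10)]

# Dumping the extra possessions on the star vs. spreading them around:
star_heavy = team_rating(players, [0.40, 0.20, 0.20])
balanced   = team_rating(players, [0.30, 0.25, 0.25])
print(star_heavy, balanced)
```

Under these made-up skill curves, the balanced allocation beats overloading the most efficient player, which is the heart of the tradeoff: the star's extra possessions come at declining efficiency.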
Dan Rosenbaum wrote: |
Again, like you I cannot divulge everything that I am working on for the Cavs, but given that I think that these have been useful contributions to the field. But it seems like you are arguing that I would have helped more if I had been like you over the past few years and just made "big picture" comments about things every once in awhile (outside of our JQAS article).
|
Note that the comments I've made about player valuation method discussion are quite general. Look at how many of the recent threads are dedicated essentially to player ratings. I count 5 of the last 10, 9 of the last 20 as of this morning (critiques, developments, or just lists). That seems pretty large, doesn't it? Especially when there are so many other things we can work on.
As to what would have helped more, I appreciate that we can't post a lot of things we work on. There were so many times that I wanted to redirect people's passions, but couldn't give out info on things that interested me because they were directly related to internal work. So, yeah, I think your work was prominent enough that you didn't have to take this route.
Further, with all due respect to you and Dave L., I wish I could say that the paper you had was a good result. But, as you pointed out, there is an identification issue with the one approach, and the second approach, without validation of the adjusted +/- methodology it uses, is not practical... more below on that.
Dan Rosenbaum wrote: |
Yes, being specific and pointed leads to disagreements, but I believe it also leads to advancements. The only difference between my treatment of Berri and your treatment of Winston and Sagarin in your book is that I have developed new tools to evaluate the claims of Berri whereas you mostly just relied on the "laugh test" to dismiss Winston and Sagarin.
|
I'm glad you point this out. When I wrote that, what bugged me about Winval was the marketing, media hype, and extremely high price for a stat. At the end, I said that the concept "made sense" or was "nice", but that got lost in the message. But implementation (in contrast to concept) has not been reviewed for all forms of adj +/-. There has been no validation of the methods behind what is a good concept. As you know, there are lots of ways to screw up regression techniques, but while multiple different implementations have been done by different people, no one (here, at least) has sat down and said, "this is what is wrong with that implementation," or "those are exactly the results I get" or "let's do some checks on these results." Peer review of adj +/- methods has pretty much been along the lines of, "it sounds like a good concept" or "the results generally make sense." That leads me to...
I have mentioned at least once here and many other times in person to people that I regret even pulling out the laugh test. That was probably the biggest error I have made in this field. That part of the chapter got messed up, in part, for the reasons I claim are messing things up here -- I got emotional before being rational. For that, I owe Winston-Sagarin and the community an apology. I'd like to strike the laugh test from any tool we use. It ain't right and it is my fault.
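The kind of implementation review being asked for here could start very small. The sketch below regresses stint point margins on player on/off indicators with a ridge penalty, fit by plain gradient descent. The stints, player names, penalty, and learning rate are all hypothetical; this is a toy of the adjusted +/- *concept*, not anyone's actual published implementation.

```python
# Toy adjusted +/- sketch: columns are players, +1 if on court for
# the home side, -1 for the away side; target is the home margin
# per 100 possessions in that stint. All data below is invented.

stints = [
    ({"A", "B"}, {"C", "D"}, +8.0),
    ({"A", "E"}, {"C", "D"}, +2.0),
    ({"B", "E"}, {"C", "F"}, +4.0),
    ({"A", "B"}, {"D", "F"}, +10.0),
]

players = sorted({p for h, a, _ in stints for p in h | a})

def design_row(home, away):
    return [1.0 if p in home else -1.0 if p in away else 0.0
            for p in players]

X = [design_row(h, a) for h, a, _ in stints]
y = [m for _, _, m in stints]

def ridge_gd(X, y, lam=1.0, lr=0.01, steps=20000):
    """Gradient descent on the ridge-regression objective
    ||Xw - y||^2 + lam * ||w||^2 (the penalty tames collinear
    lineups that make plain least squares underdetermined)."""
    n, k = len(X), len(X[0])
    w = [0.0] * k
    for _ in range(steps):
        resid = [sum(X[i][j] * w[j] for j in range(k)) - y[i]
                 for i in range(n)]
        grad = [2 * sum(X[i][j] * resid[i] for i in range(n))
                + 2 * lam * w[j] for j in range(k)]
        w = [w[j] - lr * grad[j] for j in range(k)]
    return dict(zip(players, w))

ratings = ridge_gd(X, y)
print(ratings)
```

The check being called for would then amount to running two independent implementations of exactly this fit on the same real data and comparing results, or perturbing the stint data and seeing whether the ratings stay stable.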
Dan Rosenbaum wrote: |
And finally, I think it is a good thing to point out that statistical analysis can sometimes lead to worse decision-making. We lose credibility if we are not willing to be critical of our own work. I know in the short-term it helps both you and me if more teams take Berri's work seriously, but I also feel an obligation to the science. And personally, I don't think it helps me one bit to be critical of Berri. If my goal was to promote myself, I would stay above the fray and leave the heavy lifting to someone else. |
We do owe an obligation to the science. But I'm not convinced that this has helped the science. Berri's work is probably more prominent because of the duel that has occurred (and I don't mean that to imply that it was just you). I don't know if anyone here would have adopted his work with the groundwork of analysis so well founded already. I don't know of anyone in basketball that would have adopted his work either. Beating on a little guy in the field probably bought him adherents.
So I'll end it this way: I have felt an obligation to push the science ahead in a different way than you did (and I fully respect that you wanted to push the science). The critique and development of new player valuation methods can go on and on, but I believe that it should be a much smaller part of our discussion than it has been. Ultimately, my push will lose because we will get big enough to matter (and then, taking sides has more rewards), but I'd like to see it last a bit longer. _________________ Dean Oliver
Author, Basketball on Paper
The postings are my own & don't necessarily represent positions, strategies or opinions of employers. |
|
|
|
Harold Almonte
Joined: 04 Aug 2006 Posts: 616
|
Posted: Mon Dec 17, 2007 9:56 am Post subject: |
|
|
Quote: | This is a good thing, but I didn't see people's comments really recognizing this as a result of the discussion. |
Valid. |
|
|
|
|
|
|
|