back2newbelf
Joined: 21 Jun 2005 Posts: 275
Posted: Thu Mar 24, 2011 5:24 pm Post subject: |
Code: | San Antonio Spurs 63.57
Chicago Bulls 59.25
Los Angeles Lakers 58.07
Boston Celtics 57.53
Miami Heat 56.78
Dallas Mavericks 55.89
Oklahoma City Thunder 52.9
Orlando Magic 52.49
Denver Nuggets 49.17
Portland Trail Blazers 46.48
New Orleans Hornets 45.81
Memphis Grizzlies 45.68
Atlanta Hawks 44.88
Houston Rockets 43.53
Philadelphia 76ers 42.94
Phoenix Suns 41.71
New York Knickerbockers 40.96
Utah Jazz 40.33
Indiana Pacers 37.21
Golden State Warriors 34.33
Milwaukee Bucks 33.45
Charlotte Bobcats 33.34
Los Angeles Clippers 31.69
Detroit Pistons 30.01
New Jersey Nets 27.29
Toronto Raptors 24.13
Sacramento Kings 21.96
Washington Wizards 21.19
Minnesota Timberwolves 20.47
Cleveland Cavaliers 16.93
|
Code: | b2n 6.75
Vegas 5.52
JH 5.44
KP 7.79
KD 6.59
Dsmok1 6.3
Crow 6.37
schtevie 7.26
WoW 7.24
WS 6.58
SPM(bbr) 7.26
SRS 8.25
“41” 10.71
(lastyear+41)/2 8.16
|
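For anyone wanting to score an entry the same way, the per-predictor figures in the second table are average errors: the mean of |predicted wins - actual wins| across the 30 teams. A minimal Python sketch; the predictor numbers below are hypothetical, not any real entry from the contest:

```python
# Sketch of the "average error" (mean absolute error) used to rank predictors.
# Actual-wins figures are from the table above; the predictor entries are
# made-up examples, not anyone's real submission.

def mean_absolute_error(predicted: dict, actual: dict) -> float:
    """Average of |predicted wins - actual wins| over all teams."""
    return sum(abs(predicted[t] - actual[t]) for t in actual) / len(actual)

actual = {"Spurs": 63.57, "Bulls": 59.25, "Cavaliers": 16.93}

# A hypothetical predictor's preseason entries:
predictor = {"Spurs": 57.0, "Bulls": 51.0, "Cavaliers": 27.0}

# The naive "41" baseline from the table predicts a .500 record for everyone:
baseline_41 = {t: 41.0 for t in actual}

print(round(mean_absolute_error(predictor, actual), 2))    # 8.3
print(round(mean_absolute_error(baseline_41, actual), 2))  # 21.63
```

Run over all 30 teams instead of these three, this reproduces the ranking style of the table (lower is better).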
_________________ http://stats-for-the-nba.appspot.com/ |
greyberger
Joined: 27 Sep 2010 Posts: 52
Posted: Thu Mar 24, 2011 10:12 pm Post subject: |
Looks like John Hollinger could quit his day job. |
Jeff Fogle
Joined: 11 Jan 2011 Posts: 70
Posted: Sun Mar 27, 2011 11:16 pm Post subject: |
Thanks back2new...didn't get a chance until now to thank you for posting the update.
Do you have any thoughts, or do others, about what the thresholds might be for evaluating the predictions? Obviously John Hollinger deserves kudos based on that chart. Is "pretty good" within a point of Vegas (say 6.52 or better)? Or two points (7.52 or better)?
Or, maybe, is 5.52 disappointing from Vegas, and the market and/or stat nation should be aiming for something closer to 4.0?
Not sure how to evaluate it beyond ranking from closest to furthest. Generally Vegas is the benchmark in the prediction field...so I'm confident JH deserves pats on the back...
Thanks again for posting that...looking forward to seeing the season-end report... |
Crow
Joined: 20 Jan 2009 Posts: 824
Posted: Mon Mar 28, 2011 12:01 am Post subject: |
In the first contest here in 2007-8, my early meta-metric approach of using a loose blend of other metrics and then making some quick adjustments produced the lowest average error, 8.4; the rest of the field, pure metric-based predictions and not necessarily pure predictions, trailed by about one point. The blended-metric approach may have helped reduce the impact of systemic errors some: a form of regression to the mean of the other predictions.
http://tinyurl.com/47gctbd
In 2008-9 Neil Paine won with an SPM based model that produced an average error of 7.7, narrowly besting John Hollinger. A wider comparison showed Bill Simmons with a slight win over Vegas.
viewtopic.php?t=1885&start=150
In 2009-10 back2newbelf won here with an Adjusted +/- based model that produced an average error of 6.7 and beat Vegas by 2/3rds of a point and another poster (Cysco?) beat Vegas as well.
http://tinyurl.com/477pd8s
In the past several people said they regretted not regressing expectations back toward the mean some or some more. I felt that way. I think that generally would have helped with average error. I don't know how much the metric contestants have applied this change in recent runs.
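The regression-to-the-mean adjustment Crow describes, and the "(lastyear+41)/2" row in the error table, are the same simple operation: shrink each raw projection toward the league average of 41 wins. A minimal Python sketch; the shrink weight is an illustrative free parameter, not anything a contestant is known to have used:

```python
# Shrink a raw win projection toward the league-average 41 wins.
# With weight=0.5 this reproduces the "(lastyear+41)/2" baseline when the
# raw projection is last year's win total. The weight is a free parameter.

def shrink_toward_mean(projection: float, weight: float = 0.5, mean: float = 41.0) -> float:
    """Blend a raw win projection with the league average."""
    return (1 - weight) * projection + weight * mean

print(shrink_toward_mean(61.0))        # (61 + 41) / 2 = 51.0
print(shrink_toward_mean(61.0, 0.25))  # milder shrink: 56.0
```

A projection already at 41 is unchanged for any weight, so the adjustment only pulls in the tails, which is where the big misses tend to live.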
It looks like there will be substantial improvement again this season on the order of possibly another full point.
The Vegas line referenced has been better than almost all listed predictions every year. I'd say it is at least "pretty good" to be within a point of Vegas. Beating Vegas once is notable but still probably not enough to "plan to do it for real" next time with an as presented metric because of the vigorish applied to bets and because no predictor in these listings has beaten Vegas twice in the timespan so far. Within 2 points of Vegas isn't bad for trying to predict all of them instead of trying to cherry-pick a few teams. I'd guess within 1.5 - 2 points of Vegas will probably beat a good number of other media and fan predictions but it would come down to compiling the errors of those others.
(For predictors with 2 or more entries including this season as of now, besides Vegas, I think I have a slight lead on back2newbelf for best average rank. Neil Paine and John Hollinger have competed every year and both have 2 strong finishes and lesser ones that hurt their average rank.)
Injuries, trades and coaching moves will somewhat limit how much further the average error can be reduced but it probably can go down further.
Maybe somebody gets under 5 next season.
An average error of 4-5 might be close to the limit, whether achieved once or on a regular basis, but it's hard to say for sure what the typical limit is at this point based on a few seasons. Long-term retrodiction testing of the metrics would probably be the way to go, if one wanted to give it the time, to improve the precision of the predictor, and then perhaps use that refined projection capability to either try to win "the contest" or improve pre-season stories or beat Vegas or help a team or just improve one's own perspective.
Last edited by Crow on Thu Mar 31, 2011 6:00 pm; edited 2 times in total |
Mike G
Joined: 14 Jan 2005 Posts: 3616 Location: Hendersonville, NC
Posted: Mon Mar 28, 2011 9:13 am Post subject: |
Crow, nice summary. You wrote:
Quote: | In the first contest here in 2007-8 my early meta-metric approach of using a loose blend of other metrics and then making some quick adjustments produced the lowest average error of 8.4 |
Reviewing that 2008 thread, I found this comment by Cherokee_ACB: Quote: | To put it into perspective, Vegas (error=7.3) beats you all, but at least you all beat Bill Simmons (9.8). Last season, Vegas' predictions errored by just 5.1 on average, while bloggers did poorly. |
It seems '08 was just a wild and crazy year for predictions. Everyone was 20+ wins too high for Chi and Mia. And 15+ too low with Por and LAL.
Our predictions probably haven't really gotten 'better'; the league's just been more predictable. _________________
36% of all statistics are wrong |
Crow
Joined: 20 Jan 2009 Posts: 824
Posted: Mon Mar 28, 2011 10:48 am Post subject: |
Good added detail about 2007-8. I didn't re-read it all to find that.
It is "hard to say for sure what the typical limit is at this point based on a few seasons".
4 seasons is dangerously short a span to be sure that an apparent trend is real, sustainable future progress, or all real progress (for this or other things). I still think there has been some, but I'm not sure how much. More time might tell more if the interest continues.
Growth in the number of entries over time also helped the trend toward a lower winning average error, through more chances to be a strong outlier.
Another year of bigger unexpected stories could break the clean trend. Maybe the future lowest error will tend to generally fall between the recently experienced high and low.
I tried not to over-conclude from the results but there was a base case for progress to at least hint at in response to the questions asked and perhaps draw out the next round(s) of analysis.
Looks like there have been 6-12 teams per season where the average error was above 10, or at least near it, with many individual predictions over 10. 2007-8 was at the high end of that range; this season appears to be falling at the low end. That accounts for a portion of the overall trend but, without doing the exact math, it appears to be only a minority part, probably less than 1 point of the nearly 3-point average-error improvement from 2007-8 to this season. The error this season will probably creep up some more from where it is right now, based on where it was earlier, but probably not a lot.
Last edited by Crow on Mon Mar 28, 2011 12:29 pm; edited 1 time in total |
EvanZ
Joined: 22 Nov 2010 Posts: 300
Posted: Mon Mar 28, 2011 11:29 am Post subject: |
My main question is whether the differences in predictions are primarily due to the actual metrics being used or the predicted allocation of minutes. It would be nice after the season is finished to see retrodicted results for the various metrics. That way we could at least take the playing time predictions out of the mix and focus on the metrics themselves. _________________ http://www.thecity2.com
http://www.ibb.gatech.edu/evan-zamir |
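EvanZ's decomposition can be made concrete: a team projection is roughly a minutes-weighted sum of per-minute player ratings, so a retrodiction that plugs in the actual minutes isolates the metric's error from the minutes-forecast's error. A hypothetical sketch, with invented names, scale, and numbers:

```python
# Two error sources in a team projection: the per-minute metric values and
# the minutes allocation. Holding the metric fixed and swapping projected
# minutes for actual minutes (a retrodiction) isolates the metric's own error.
# Ratings and minutes below are invented; the rating scale is arbitrary.

def team_rating(players: list[tuple[float, float]]) -> float:
    """Minutes-weighted sum of per-minute impact ratings."""
    return sum(rating * minutes for rating, minutes in players)

# Same metric values, two different minutes allocations:
projected_minutes = [(0.125, 2800), (0.0625, 2000), (-0.0625, 1200)]
actual_minutes    = [(0.125, 2200), (0.0625, 2400), (-0.0625, 1400)]

print(team_rating(projected_minutes))  # 400.0
print(team_rating(actual_minutes))     # 337.5
```

The gap between the two outputs is entirely attributable to the minutes forecast; any remaining miss against the real team would be down to the metric.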
huevonkiller
Joined: 25 May 2010 Posts: 15 Location: Miami, Fl
Posted: Mon Mar 28, 2011 2:09 pm Post subject: |
Jeff Fogle wrote: | back2newbelf, any updates on how the predictions are doing compared to actual performances now that we're in the final stages of the season?
Was fun to go back and read all the early-thread thoughts about Miami. Turned out they've mostly avoided serious injuries yet still have to win out to finish 60-22. Great team for working through the process of full season analysis and trying to figure out what all the indicators mean. Good to review what people were thinking before the season started too.
Page 4 of this thread shows the first listing of Vegas estimates, for anyone who just wants to see how the market compared to what's happened. |
Since December Miami has played like a 63-66 win team (upper 60s if the Big Three are intact). Those estimates weren't that bad at all; I don't think anyone expected Miami's superstars to play so poorly in the first month of the season.
Further, while Miami has struggled, I think people lose sight of the fact that a 1-point loss is not the same as the 2009 Cavs getting owned by elite teams. Certain tactics Miami has been using are flawed, but this team has developing chemistry (two ball-dominant perimeter superstars is very rare) and a young coach.
With some ball movement down the stretch this team can probably win half of their close games. Those who guarantee otherwise make me question their basketball knowledge. Benching Mike Miller for James Jones would probably have added key victories too, just as an example. |
Jeff Fogle
Joined: 11 Jan 2011 Posts: 70
Posted: Mon Mar 28, 2011 4:09 pm Post subject: |
Thanks Crow, all, for the comprehensive responses. Vegas looks to be a solid benchmark year in and year out. Would be fun to crack the mystery of what's in those projections that is eluding some of the logical stat/human projections. |
BobboFitos
Joined: 21 Feb 2009 Posts: 201 Location: Cambridge, MA
Posted: Mon Mar 28, 2011 8:25 pm Post subject: |
EvanZ wrote: | My main question is whether the differences in predictions are primarily due to the actual metrics being used or the predicted allocation of minutes. It would be nice after the season is finished to see retrodicted results for the various metrics. That way we could at least take the playing time predictions out of the mix and focus on the metrics themselves. |
This this this _________________ http://pointsperpossession.com/
@PPPBasketball |
Crow
Joined: 20 Jan 2009 Posts: 824
Posted: Mon Mar 28, 2011 10:52 pm Post subject: |
Thanks Jeff.
What did the Vegas line foresee particularly better this year than most?
That the Cavs would fall farther than most thought, though Kelly Dwyer went far more extreme and will end up low by just a few.
That Detroit would be near .400. John Hollinger and the ESPN panel average called that closely too.
That Minnesota wouldn't gain much ground. The ESPN panel average called that probably a little closer.
I'd say Vegas shared in all of the widespread big misses. Its strength must be mostly coming from being a little closer here and there or maybe avoiding adding big misses that weren't widespread.
A simple and even blend of back2newbelf's RAPM and DSMok1's ASPM didn't improve the results beyond the best of the two this year.
Besides Vegas and Hollinger, the ESPN panel average did well. Blending many predictions worked pretty well for them in this lowest-average-error contest. |
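The even blend Crow tried can be sketched as a team-by-team weighted average. The numbers below are invented for illustration; as he notes, a blend is not guaranteed to beat its better component in any given season:

```python
# Team-by-team weighted average of two predictors' win projections, the
# "simple and even blend" idea. The entries here are made up; they are not
# the real RAPM or ASPM projections.

def blend(pred_a: dict, pred_b: dict, w: float = 0.5) -> dict:
    """Weighted average of two sets of win projections, team by team."""
    return {t: w * pred_a[t] + (1 - w) * pred_b[t] for t in pred_a}

rapm_like = {"Heat": 60.0, "Lakers": 55.0}  # hypothetical entries
aspm_like = {"Heat": 54.0, "Lakers": 57.0}

print(blend(rapm_like, aspm_like))  # {'Heat': 57.0, 'Lakers': 56.0}
```

A blend helps when the two components make partly independent errors; when their misses are correlated (shared systemic error), averaging buys little, which may be why the even RAPM/ASPM blend didn't improve on the better of the two this year.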
Jeff Fogle
Joined: 11 Jan 2011 Posts: 70
Posted: Mon Mar 28, 2011 11:52 pm Post subject: |
When the year is in the books, crow, let's take a closer look at differences between Vegas and some of the projections that missed. I think you're right that it's a lot about coming closer here and there rather than correctly anticipating big surprises.
Let's start with last year's final team records...see where Vegas made adjustments from those...maybe it's a case of being more conservative in anticipating changes than others, and there's an inherent stability in the league given the consistency of the haves and the consistency of the have-nots. Just throwing that out off the top of my head. Want to wait until final records are in the books. |
Crow
Joined: 20 Jan 2009 Posts: 824
Posted: Tue Mar 29, 2011 12:43 am Post subject: |
Yes, last year to this year Vegas prediction change would be important to view. And maybe second half of last season to prediction or some such split. And maybe look at playoff success or failure too.
Somebody in an earlier year thread also raised the variable of team age. One could add team stability or team experience together.
East and West prediction - actuals and inter-conference records / expectations would be worthwhile too. Last season top 5-10 offenses and defenses. Previous home and road and low rest strength and whether to expect it again or expect regression to the mean on the tails.
Potentially one could also look at teams led by a top 5-10 star and teams lacking a top 20 star. And the multiple top 20 star teams. Alpha dog contests and switches?
You could consider whether adding Coaching Adjusted +/- on top of the player data correlates well with predictions and / or actuals. Who has more of what they have done well or not well with? What strategies could the other coaches try to mess that up compared to neutral expectations, and how likely is that? Who is going to get fired / hired, and when, and with what impact?
Which teams by their language in the media or other signs sound and look above or below on unity, focus, confidence, etc?
Maybe look at pace or high pace - strong offense and low pace - strong defense subgroups.
If 3 point shooting is particularly volatile how dependent on that shot were teams last season compared to average? How reliant were they on 3 point shot defense and success there?
Did teams that rely on driving for shots and foul shots overachieve last season and this season, and did it accelerate?
Any trend with high and low assist rate teams?
Bench strength and clarity.
If you looked at the game consistency and the clutch performance data, what can one make of it in general?
The strength of top 5-10 lineups and whether teams can field those from last season in this season or not. Or even player pair availability and expected usage.
Does shooting well and defending the shot well carry a greater weight in actual wins than it might appear from the season-average data?
Is offensive possession usage on teams clear-cut, or is it a luxury of options, or a potentially troubling competition / issue to sort out? Who handles that well and who doesn't, at the player, coach and organization level?
What if you tried to predict injury frequency and injury impact? Would you gain edge on the competition or lose ground because it is too hard to do well? Is it too hard to do better than neutral?
What if you tried to predict trades to some degree? The lopsided salary dumps and the trade-the-future-for-now and now-for-the-future ones? Could you at least add an estimated value impact * probability for the 4-10 trade situations with the most potential to occur? Is that too much, or another example of what you might have to do to get an edge?
What about big market / small market splits? It shouldn't matter in winning, but does it affect predictions (Vegas and "the pack"), and does it appear to affect winning at all?
Any new additions or subtractions to the "gets superstar calls" list?
I guess you could look at payroll too. If it is a proxy for GM valuations, does it translate well or not? Model in the moral hazard dimension of the player contract situation. And maybe anticipated team responses to signs of that or expectations of that. In-season team payroll and roster flexibility: cap space, exceptions, non-guaranteed contracts, open slots or slots easily opened if opportunity arises.
What about the benefits and moral hazards of GMs and owners?
If you put the team over- and underachievers through a complete wash, what stats are the subgroups of teams high or low on?
Any spike in win achievement among teams with 6+ rotation guys positive on Adjusted +/- or greater than expected loss from having a star with moderate to heavy negative Adjusted +/- or multiple such starters or rotation guys?
Defensive breakdown of teams with 3 or more bad rotation defenders or at least key lineup breakdowns with 2-3+ bad ones on the court?
If you spent a lot of time looking at and working with the data perhaps one could beat the average accuracy of typical minute projections too. Maybe especially for rookies and of course their predicted performance is tough but also an opportunity to exceed the norm.
How will player location changes affect their performance, due to elevation change, degree of travel miles and rest levels, quality of medical staff, amount of weekend games, "team discipline", city distractions, or friendship reunions and breakups? Player familiarity with and success level against the new conference and division, and its familiarity with the player.
Who has good and well targeted player - mentor pairings and might expect to get more good things out of it especially from young players than without looking at that consideration?
There is probably more. Easier and quicker to list than to research and estimate impacts, but brainstorming is an early step in the process.
Last edited by Crow on Tue Mar 29, 2011 2:05 pm; edited 2 times in total |
Jeff Fogle
Joined: 11 Jan 2011 Posts: 70
Posted: Tue Mar 29, 2011 11:13 am Post subject: |
No, that would take way too long. Let's just look at last year's records (lol).
Agree that all of those and more could be influences. I think we'll find though that many of the positives will cluster (well-run teams make good trades, emphasize the best percentage strategies, pace their players well, hire coaches and mentors that mesh well with their talent, while poorly run teams don't). That may be at the heart of the consistency over the years with many of the same teams being in the same general spots over and over again.
Maybe looking only at the teams who had extreme changes from one year to the next will help isolate things. But, even then, you're dealing with small sample sizes in a very dynamic universe. Or, the answers may come from studying the quilts rather than focusing so closely on each thread of the quilt. Hard to say.
My take from the industry is that:
*Oddsmakers tend to look at the quilts with just a few little threads. What did this team do last year? What have they done the last few years? What's different coming into this season (new players or coaches)? They're not digging deep into stats/minutes/etc... If a team won 50 last year, and 48 the year before, and it's a veteran team with the same coach, the number will be in that range.
*Bettors (and it's almost exclusively the "sharps" who bet regular season win totals in the NBA because the public isn't into them) may focus more on the specifics you mention.
I know Haralabob pops in occasionally to this forum. Maybe he can shed some light on his thinking process about these propositions.
Definitely think the factors you've listed are very relevant. May be tough to separate the things that go hand in hand... |
haralabob
Joined: 11 Apr 2007 Posts: 27
Posted: Fri Apr 01, 2011 4:10 pm Post subject: |
I am curious what you guys used to determine the "Vegas" line, i.e. what book and when.
The reason I ask when is because unless it's the virgin opening lines, the "Vegas" line is actually the adjusted line after the sharps have taken their turns sharpening the line. |