
Sweet 16 Update

March 23, 2011
By DSMok1
No Comments

Well, the first weekend of the NCAA Tournament is in the books. I don't have much time to post, so I'll make it quick: the predictions did fairly well. In the 8 closest first-round games by my pregame predictions (averaging a 52.72% favorite), my statistical favorites won only 25% of the time; otherwise, everything matched up really well. The other 3 sets of 8: the top 8 predicted 96.7% and went 100%, the next 8...
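A quick sketch of the kind of check described above: group the games into sets of 8 by the model's pregame favorite probability, then compare each group's average predicted win rate to how often those favorites actually won. The function and numbers below are hypothetical, just to illustrate the bookkeeping.

def calibration_by_group(predictions, outcomes, group_size=8):
    # predictions: model win probabilities for the favorites
    # outcomes: 1 if the favorite won, 0 if it lost
    pairs = sorted(zip(predictions, outcomes), reverse=True)
    for i in range(0, len(pairs), group_size):
        chunk = pairs[i:i + group_size]
        avg_pred = sum(p for p, _ in chunk) / len(chunk)
        actual = sum(w for _, w in chunk) / len(chunk)
        print(f"predicted {avg_pred:.1%}, actual {actual:.1%}")

# Usage (made-up numbers):
# calibration_by_group([0.97, 0.91, 0.84, ...], [1, 1, 0, ...])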

On Crowds and Contrarian Picks

March 17, 2011
By DSMok1
No Comments

If you are choosing NCAA tournament picks in a LARGE group (like ESPN), then, if possible, you need to account for what the masses have chosen when making your own selections. Fortunately, ESPN publicly shows what everyone has picked, and that lets us account for them. As the number of people approaches infinity, the formula for "Pick Value" becomes Val = 2 × Stat% − Mass%. If you have...
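Here is a minimal sketch of that formula in code, assuming Stat% is your model's win probability for a team and Mass% is the fraction of the pool that picked it (both as decimals); the example numbers are made up.

def pick_value(stat_pct, mass_pct):
    # Large-pool pick value: Val = 2*Stat% - Mass%
    return 2 * stat_pct - mass_pct

# A 60% favorite picked by 80% of the pool...
print(pick_value(0.60, 0.80))  # ≈ 0.40
# ...is worth less than a 45% underdog picked by only 10%.
print(pick_value(0.45, 0.10))  # ≈ 0.80

The takeaway: a team the crowd has over-picked relative to its real chances loses value, which is exactly what pushes you toward contrarian picks.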

NCAA Tourney Bayesian Ratings and Odds

March 16, 2011
By DSMok1
3 Comments

Okay, ready for the tournament? I've put together some adjustments based on the work I posted previously. The theory behind the adjustments may be found at STATS @ MTSU and Dr. Winner @ Florida. Basically, I'm adjusting for teams that raise their game to a higher level against good foes (or vice versa). For instance, Long Island plays much better against good teams, probably because they got...
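To make the idea concrete, here is a rough sketch (not my actual adjustment) of how one might measure this: regress a team's single-game margins on opponent quality. Margins normally fall off roughly point-for-point as opponents get better; a noticeably flatter slope suggests a team that raises its game. All numbers below are hypothetical.

import numpy as np

def opponent_slope(game_margins, opp_ratings):
    # Least-squares slope of game margin vs. opponent rating
    x = np.asarray(opp_ratings, dtype=float)
    y = np.asarray(game_margins, dtype=float)
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

margins = [25, 18, 10, 6, 4]      # point margins in five games
opponents = [-10, -5, 0, 5, 10]   # opponent ratings vs. average
print(opponent_slope(margins, opponents))  # about -1.08: a normal falloff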

Raising Their Game

March 15, 2011
By DSMok1
No Comments

When we get to the NCAA tournament, it seems that, inevitably, some teams will raise their game when matched up with the "better" teams, suddenly emerging as top teams. Some teams play well when their opponent is better, and take their foot off the gas when playing East Popcorn St. Those teams tend to be penalized in efficiency-based metrics, the ones that mostly played...

How Does the Committee Seed? Introducing ExpSd

March 14, 2011
By DSMok1
2 Comments

The Bracket was revealed yesterday. Quick thoughts and long ramblings below: I have two ranking systems: my Bayesian predictive power ratings, which tell how good the teams are, and my DSMRPI ratings, which tell how much they have accomplished. I would put teams in and seed them based on DSMRPI, which looks purely at win-loss data for the team, but for SoS looks at...

NCAA Bayesian Ratings, With Projection Prior

March 12, 2011
By DSMok1
No Comments

In my previous post, I took the overall distribution of NCAA teams as the Bayesian prior. Now, we know more than that: we can create a pretty good projection of how good a team will be based on how good the team has been the previous few years. So let's do it! I compiled the Pomeroy Ratings for all teams since 2003 and ran a regression...
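As a sketch of the mechanics (with made-up numbers, not the actual Pomeroy data), the regression step looks something like this: fit the current-season rating on the prior seasons' ratings, then use the fitted value and the residual spread as the mean and standard deviation of the new prior.

import numpy as np

# Columns: rating two years ago, rating last year; target: current rating
X = np.array([[12.0, 15.0], [3.0, 5.0], [-4.0, -2.0], [8.0, 6.0], [0.0, 1.0]])
y = np.array([14.0, 6.0, -1.0, 7.0, 2.0])

A = np.column_stack([np.ones(len(X)), X])     # add an intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
projection = A @ coef
prior_sd = np.std(y - projection)             # spread around the projection

print(coef)      # intercept and weights on the past ratings
print(prior_sd)  # prior standard deviation for the new season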

NCAA Bayesian Analysis & DSMRPI

March 7, 2011
By DSMok1
No Comments

Previously, I posted my NCAA Bayesian Ratings and methodology. Today I thought I'd update the numbers quickly and add a new twist. What is the objective in basketball? To win the game! When building a predictive rating system (like this Bayesian method), or even when trying to tell how good teams are over this season (KenPom's ratings), we account for margin of victory rather than just...
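To illustrate the margin-of-victory side of that distinction, here is a toy least-squares rating built from point margins alone; it is only a sketch of the general technique, not the Bayesian method or DSMRPI, and the games below are invented.

import numpy as np

games = [(0, 1, 12), (1, 2, 3), (2, 0, -8), (0, 2, 10), (1, 0, -5)]
n_teams = 3

rows, margins = [], []
for home, away, margin in games:
    row = np.zeros(n_teams)
    row[home], row[away] = 1.0, -1.0   # home minus away
    rows.append(row)
    margins.append(margin)

rows.append(np.ones(n_teams))          # pin ratings to sum to zero
margins.append(0.0)

ratings, *_ = np.linalg.lstsq(np.array(rows), np.array(margins), rcond=None)
print(ratings)  # expected margin vs. an average team, ignoring home court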

The Carmelo Trade

February 22, 2011
By DSMok1
9 Comments

Carmelo Anthony was FINALLY traded yesterday, in a mega three-team deal. How did the teams make out? There are several good trade analyses around, but none of them really focus on the financial aspect. Kevin Pelton's article is a good primer on the trade as a starting point, and Joe Treutlein at Hoopdata has a good analysis as well. I'll look at this...


To-Do List

  1. Salary and contract value discussions and charts
  2. Multi-year APM/RAPM with aging incorporated
  3. Revise ASPM based on multi-year RAPM with aging
  4. ASPM within-year stability/cross validation
  5. Historical ASPM Tableau visualizations
  6. Create Excel VBA recursive web scraping tutorial
  7. Comparison of residual exponents for rankings
  8. Comparison of various "value metrics" by their ability to "explain" wins
  9. Publication of spreadsheets used
  10. Work on using Bayesian priors in Adjusted +/-
  11. Work on K-Means clustering for player categorization
  12. Learn ridge regression (see the sketch after this list)
  13. Temporally locally-weighted rankings
  14. WOWY as validation of replacement level
  15. Revise ASPM with latest RAPM data
  16. Conversion of ASPM to "wins"
  17. Lineup Bayesian APM
  18. Lineup RAPM
  19. Learn SQL
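Since several items above lean on RAPM, here is a bare-bones sketch of ridge regression in that setting (toy data, hypothetical throughout): each stint is a row with +1 for one side's players on the floor and -1 for the other's, the target is the stint's scoring margin, and the ridge penalty shrinks the player estimates toward zero.

import numpy as np

def ridge(X, y, lam):
    # Closed-form ridge regression: (X'X + lam*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

X = np.array([
    [ 1.0, -1.0,  0.0],   # stint rows: +1 / -1 for players on the floor
    [ 1.0,  0.0, -1.0],
    [ 0.0,  1.0, -1.0],
    [-1.0,  1.0,  0.0],
])
y = np.array([6.0, 4.0, -2.0, -5.0])   # margins per 100 possessions

print(ridge(X, y, lam=1.0))  # shrunken player impact estimates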