The Tools – “Using quantitative investment techniques, just like Wall Street”
The Goal – “High Risk equals High Reward”
The Challenge – “Structure an algorithm to consistently predict college football outcomes at a greater than 60% success rate”
The Philosophy – “The Balancing Act is Unbalanced”
Stock prices quoted on Wall Street represent aggregate value determined by millions of interacting investors balancing buys and sells at any given moment. Similarly, point spreads in college football represent an attempt to balance the bets between the underdog and the favorite.
The quoted price of a Wall Street stock is the collective result of thousands of investment analysts estimating the financial fundamentals of a particular company. The point spread is the result of the collective predictions of sports gambling professionals. Both rest on collective analysis that encompasses all available data. The underlying belief is that stock prices and point spreads reflect the analysis of all known public data.
Wall Street stock price quotes rarely represent the calculated financial value of a company. Usually the buy and sell quotes are “balanced” substantially higher or lower based upon the perception of “financial experts” regarding future earnings. Similarly, the college football point spread established by the “experts” is meant to “balance” the bets between favorite and underdog.
Each quote, whether price or point spread, is biased. Quantifying the bias is the trick; that is where the profit opportunity lies.
The Algorithm Evolution – “Seeking the Great Predictor”
2011 – Our first year. The initial algorithm defined 30 different database-driven game situations. When a game met one of the 30 game situations, a selection was identified. The year yielded 98 wins versus 77 losses, 175 games total. The season was a net 21-game winner, coming in at a 56% win rate. A solid opening year, winning $1,330 on representative $110 bets.
2012 – The identical 30 database-driven game situations defined for 2011 were employed for 2012. The results were less heartening: 119 wins versus 118 losses, 237 games in total. The algorithm was making substantially more selections than in 2011, so results approaching 50/50 could reasonably have been expected. Being only a net one-game winner meant the season was a financial loss due to the 10% commission on losing wagers; for this year that meant a loss of $1,080 on $110 bets, given the high number of losing selections. The commission was viewed as the cost of entertainment and education. From an investment viewpoint, we were still $250 ahead after two years and 412 selections.
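The commission math deserves a quick illustration. A minimal sketch, assuming the standard -110 pricing implied by the representative $110 bets above (risk $110 to win $100), shows why a near-50/50 season still loses money, and why the 60% challenge target would be so profitable:

```python
# Standard sportsbook pricing: risk $110 to win $100
# (the 10% "juice" is collected on losing bets).
RISK, WIN = 110, 100

# Break-even win rate: the fraction of bets that must win to net $0.
break_even = RISK / (RISK + WIN)
print(f"break-even win rate: {break_even:.1%}")   # 52.4%, not 50%

# The 2012 season: 119 wins, 118 losses -- almost even, still a loss.
net_2012 = 119 * WIN - 118 * RISK
print(f"2012 net: ${net_2012}")                   # $-1080

# The 60% challenge target implies a healthy return per $110 wagered.
roi_at_60 = (0.60 * WIN - 0.40 * RISK) / RISK
print(f"ROI at a 60% win rate: {roi_at_60:.1%}")  # 14.5% per bet
```

This is why the 60% target is set well above 50%: anything between 50% and roughly 52.4% wins games but loses money to the commission.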
2013 – Looking to fine-tune after the 50/50 2012 season, we identified the 8 highest-probability outcomes within the 30 identified game situations. The purpose was to reduce the number of selections by weeding out the lowest-probability situations. The season came in at 85 total selections, 44 wins versus 41 losses. Another year of forking over the juice, coming in at a $110 loss. After our first 3 years we were $140 ahead, the last two seasons having come in roughly even in the absolute number of games.
2014 – Getting back on the rails with further fine-tuning. For this season we identified 11 high-probability game situations from the defined 30. The results were encouraging: 72 wins versus 61 losses, a net 11-game winner. The money winnings were good, coming in at $490. From inception we were up $630 on representative $110 bets.
2015 – The best year to date. We used the exact same 11 high-probability game situations defined for 2014. The results had us on the cusp: 74 wins versus 51 losses, a 59.2% win rate. The winnings calculated to $1,790. For the first 5 years we were up $2,420.
2016 – Statistical reality set in. The continuity established by the 2014 and 2015 seasons was destroyed, and our worst suspicions regarding the algorithm were confirmed: our view that point spreads reflected all data was proven wrong. In addition, recent years saw changes in college football, with the spread of air-raid offenses producing increased point-spread variance. The result was 50 wins versus 71 losses, a net 21-game loser.
Taking all games since 2011 into account, we are 457 wins versus 419 losses (52.2%), a net 38-game winner. No loss in the absolute number of games; however, taking into account the 10% sportsbook commission, we are down $390. Remember, this small commission loss is the result of $96,360 in representative $110 bets.
To date, a massive effort resulting in an extremely small cost of entertainment.
We knew by about the 3rd week of 2016 that our algorithm was in a losing predictive mode. We recommended our followers take a leave of absence for the year, yet we published week after week of losing performances knowing we were on the wrong side of the equation, because we will always honor our commitment of 100% internet transparency. Our pledge was to finish the season and do a substantive analysis to determine what went wrong. Given the degree to which it went wrong, we expected the data to jump off the sheet and show us where the algorithm was deficient. Our hypothesis was correct: analysis clearly revealed where we were off base, and we know how to correct it. The beauty is that our underlying database is, and was, compiled in exactly the same manner every year. Our algorithm was sound; however, we learned that the point spread is not the reflection of 100% aggregate knowledge. We have learned the hard way that point spreads and stock prices are different.
We are more than confident for next year. We have expanded the algorithm to incorporate data previously thought to have been incorporated within the point spread. It makes a huge difference.
Bring on 2017!