Tuesday, October 28, 2008

Quality Control Tables



I've just finished making a big table that helps gauge the accuracy of my lines based on the total yards and yards per play for each team in its respective game. It's a simple way to get a ballpark idea of whether my projections should or should not have covered when compared to statistical averages. The stat-estimated final scores are based on league averages: total yards divided by points scored, and yards per play divided by average points scored. Like I said, it's not an exact science, but it gives a good general view of whether you really should or should not have covered on some of the plays. For example, the PHI/ATL and JAC/CLE Overs should both have readily covered, when in actuality they didn't. I had a higher cover % on stat-estimated totals than the actual results bore out, but ended up with the same poor 38% on sides. My favorites predicted to cover the spread were pretty awful this week, and it can't all be attributed to turnovers, though 5 of the 7 stat-estimated favorite losers also had a negative turnover differential in their games. Kind of a strong indicator there that favorites, especially large ones, will have a very hard time covering spreads with a negative turnover ratio in their games.
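If you're curious how those stat-estimated scores are put together, here's a minimal sketch of the arithmetic in Python. The league-average numbers and the 50/50 blend of the two estimates are placeholders for illustration, not the exact figures behind the table:

    # Rough sketch of the stat-estimated score idea. The league averages
    # below are made-up placeholders, not the actual 2008 numbers.
    LEAGUE_YARDS_PER_POINT = 15.0   # assumed league-average total yards per point scored
    LEAGUE_YPP_PER_POINT = 0.25     # assumed league-average yards per play per point scored

    def stat_estimated_points(total_yards, yards_per_play):
        """Blend two rough point estimates: one from total yards, one from yards per play."""
        from_total_yards = total_yards / LEAGUE_YARDS_PER_POINT
        from_ypp = yards_per_play / LEAGUE_YPP_PER_POINT
        return (from_total_yards + from_ypp) / 2.0

    # Example: a team that gained 380 total yards at 5.6 yards per play
    print(round(stat_estimated_points(380, 5.6), 1))   # ~23.9 with these placeholder averages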

My lines use red zone offense/defense among many other things not used in the quickie stat-estimated scores I've created here. There are several nuances within each game, including the order in which the points were scored for each team, that lessen the accuracy of these simple total yards and yards per play final score estimations -- but as I said, they're just ballpark estimators. One thing looks certain: it's difficult for teams to cover large spreads on total yards and yards per play alone. The only favorite that did so last week was the Redskins (which, coincidentally, was one of my strongest side values).

I also just added 4 columns on the right end of the table. They represent the point differentials between the stat-estimated final scores, my estimated final scores, and the actual final scores based on the fave/dog, O/U opinion I had on each game. You can see that grading my opinions on totals versus the stat-estimated lines leaves me 17 points to the good, whereas grading my opinions versus the actual final scores leaves me almost 8 total points to the bad. Conclusion? Poor luck on totals overall. Looking at the sides, I was 15 points to the bad on stat-estimated final scores versus my final score estimations, but a whopping 83 points to the bad on my opinions versus the final scores. Lots of bad luck involved on many of the sides (the correlation to turnover ratio in each game rears its ugly head), which has caused me to drop from 1st place on the error measurements at thepredictiontracker.com to much further down in the pack over the last 2 weeks. The correlation between turnovers and the difference between my lines and the final scores was a very high .77 in this table -- lots of turnovers against equaled big negative points on my lines versus the final scores.
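For anyone who wants to check the grading, here's roughly how those last four columns and the turnover correlation get computed, sketched in Python with made-up numbers for a handful of games (the real table uses every game on the card):

    # Hypothetical numbers for four games, all from the perspective of the side
    # I liked: (my projected margin, stat-estimated margin, actual margin,
    # turnover differential). These values are made up for illustration.
    games = [
        ( 7.0,  6.0, -3.0, -3),
        ( 3.5,  5.0, 10.0,  1),
        (-4.0, -2.5, -6.0, -1),
        ( 6.5,  8.0,  2.0, -2),
    ]

    # Points to the good/bad: how the stat estimates and the actual results
    # graded out against my numbers, summed across games.
    stat_vs_mine   = sum(stat - mine   for mine, stat, actual, to in games)
    actual_vs_mine = sum(actual - mine for mine, stat, actual, to in games)

    # Pearson correlation between turnover differential and how far the actual
    # margin missed my line -- the relationship that came out around .77.
    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    turnover_diffs = [to for mine, stat, actual, to in games]
    line_misses    = [actual - mine for mine, stat, actual, to in games]
    print(stat_vs_mine, actual_vs_mine, round(pearson(turnover_diffs, line_misses), 2))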

In the turnover columns, there is one discrepancy, as I had TEN to slightly cover the spread, but my only side play in the game was a 1/2-unit ML play on the Colts -- which, based on the stat estimations, should have won!

And boy howdy, the Bucs really should have beaten the Cowboys by 6 points, almost right on my projected final score margin of 7. I've had several frustrating games over the last few weeks where the dogs have outgained the favorites, yet lost and failed to cover.
