My mid-season bet that KU would go winless this year turned out to be true, so I probably need to post how right I was about that. Don't worry though, this isn't a blog post of me simply gloating. Well the first part is, but I have built a helpful piece of code too, which we'll get to later.

## WAS PREDICTION ACCURATE?

So let's do a post-mortem analysis of whether my predictions for KU Football were accurate. What was the initial prediction I made?

<Gloating>

About a 50-50 chance of a winless season, starting from the first week of the conference season (game four).

Some people might look at this prediction and say "50-50 means you don't know," which is kind of true. I wasn't sure whether KU would go winless or not. But in this case the absolute probability isn't what matters; the *information gain* is. Information gain, in this case, means how much more we know about the probability of something over a "par" or average probability.

College football teams go winless about 0.2% of the time, making this a fairly rare event. To be able to say that KU had a 50% chance of going winless means that they were effectively 250 times more likely to go winless than any random college football team, a huge gain in information from this prediction.
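As a quick sanity check on that arithmetic, using the 0.2% base rate and the 50% prediction from above:

```r
# Base rate: share of college football teams that go winless in a season
base_rate <- 0.002

# My estimate of KU's chance of going winless
predicted <- 0.50

# How many times more likely than an average team?
predicted / base_rate
#> 250
```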

As an example, let's say that the daily probability that an elevator in your building would get stuck on a ride is 0.1%. However, I have modeled the performance of elevators in your building, and tell you that the elevator you're about to get on has a 50-50 chance of getting stuck. I still "don't know" whether the elevator will get stuck, but the model is actually quite useful because it provides a lot of information about this specific elevator ride over the normal 0.1% par probability. In essence, the 50% probability is not certain, but is still useful.

On the other hand, an event predicted at the 50% level should come true about 50% of the time, so how can we be sure that a winless season wasn't actually more likely than I had originally predicted? Without looking at my estimates over a series of seasons, there isn't a good way to determine the accuracy of the overall predictions. Some cynics would have claimed prior to the season that KU would almost certainly (>90%?) go winless. That statement is hard to falsify now that the team has gone 0-12; however, a few games were played close (SDSU, Texas Tech, TCU), which tells me the team had a legitimate shot to win a game along the way. Add to this the *probability* of when a team under a new coach will start to play well, and 50-50 still seems like it was a reasonable estimate (imagine if KU had played the way they did in the TCU game when they were playing Iowa State).

</Gloating>

## SIMULATING THE SEASON

Now that I have that out of my system, how about some actual statistical work? One piece of my toolset that I’ve had in SQL or Excel but never in R before is a batch probability simulation engine. The point of a simulation engine is to take a set of events with probabilities attached to them and simulate them thousands of times, to get a sense of how things might turn out together (e.g. likely outcomes for a season). A concrete way of looking at this is letting Madden play 1,000,000 seasons (computer versus computer), and then setting probabilities based on which team wins the Super Bowl most often.

To write a probability simulation engine you need a few general parts:

1. **A set of input probabilities** (e.g. a vector of probabilities of a team winning each of their games this season).
2. **A matrix of random probabilities**, with columns equal to the number of events (games) and rows equal to the number of simulations.
3. **A win/loss classifier** that compares the random probabilities to the set event probabilities.
4. **A summarizer** to tally the total numbers of wins and losses, and the season outcomes.

My code to do this is below. There are actually some different pieces you can add in, for instance bootstrap modifiers that account for dependencies between events, and other modifiers to run many teams at once. I'll work on that later.

How does this actually work? I simulated KU’s season 1 million times (it only takes about 2 seconds), and summarized the results. Here’s how the seasons shook out in terms of number of wins (including some higher-probability wins, e.g. SDSU):

That's a bit depressing. Even including the "easier" non-conference season, KU would go winless 37% of the time. KU would become bowl eligible (wins >= 6) once in every 10,000 seasons.

Here’s a look at just the conference season. Over 50% of the time, KU won zero games.

How bad was KU compared to a “par” team? I made a par power-conference team with three non-conference games at a .80 probability to win, and a .50 probability to win each conference game. Here's what that looks like.

And here’s the R code that got me here (this is really simple, but I will expand on it to handle multiple scenarios, and simulate full leagues).
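A minimal sketch of the engine along the lines described above (the KU win probabilities below are illustrative placeholders, not my actual game-by-game estimates):

```r
# Batch probability simulation engine: simulate a season many times,
# given a vector of per-game win probabilities.
simulate_seasons <- function(win_probs, n_sims = 1e6) {
  n_games <- length(win_probs)
  # Matrix of uniform random draws: rows = simulations, columns = games
  draws <- matrix(runif(n_sims * n_games), nrow = n_sims, ncol = n_games)
  # Win/loss classifier: a game is a win when its draw falls below its probability
  wins <- sweep(draws, 2, win_probs, FUN = "<")
  # Summarizer: total wins per simulated season
  rowSums(wins)
}

# Illustrative 12-game schedule for a struggling team (placeholder values)
ku_probs <- c(0.55, 0.10, 0.05, 0.05, 0.10, 0.05,
              0.05, 0.10, 0.05, 0.05, 0.10, 0.05)

set.seed(42)
season_wins <- simulate_seasons(ku_probs, n_sims = 1e5)

# Share of seasons that were winless, and bowl eligible (6+ wins)
mean(season_wins == 0)
mean(season_wins >= 6)

# The "par" power-conference team: three .80 non-conference games,
# nine .50 conference games
par_probs <- c(rep(0.80, 3), rep(0.50, 9))
par_wins  <- simulate_seasons(par_probs, n_sims = 1e5)
table(par_wins) / length(par_wins)
```

Vectorizing the whole thing into one matrix comparison is what makes a million seasons run in a couple of seconds; there's no per-season loop anywhere.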

## CONCLUSION

Just a couple of points to cap this one off:

- **I was generally correct about KU's chances of winning a game this season (gloating).**
- **It's fairly straightforward,** after creating probabilities of winning each game, to simulate how teams may actually perform during the year.
- **With KU's current performance,** they will go winless one out of every three years, and go to **a bowl once every 10,000 seasons.**
