
Fanalytics U Class 5: In-Game Decisions

Updated: Sep 9, 2020


Introduction


Can we use statistics to make better in-game decisions?


Let’s start with something easy. Imagine the following scenario and decision. Your football team (you can be the coach) is playing in the Super Bowl. They are down by 8 points with 15 seconds left; let’s say it is 28 to 20. On the next play, the quarterback scrambles to avoid the rush and throws a touchdown pass.


Now, your team is down by 2 points with 3 seconds left on the clock. What should the coach do next? Kick the extra point or go for the 2-point conversion?


That one should be easy. Let’s change the scenario a bit.

1. What if your team is down by 1 point?

2. What if you are down by 2 but there is 1 minute left?

3. What if you are down by 2 and there are 10 minutes left?


Did the decision change between scenarios 2 and 3? If it did, at how much time remaining would you be indifferent between going for 2 and kicking the extra point? Is there any other information you would like to have before making the decision?


The preceding example should highlight some key aspects of in-game decision making and should start to get you thinking about how analytics can help decision makers improve outcomes.


Before we get to the “analytics,” can you think of similar types of decisions in other sports?


The Core


In our opening example, the football game is described by the score and the time remaining. Take away the decision for a moment and assume that the only option is to kick an extra point. Going back to our initial scenario, your team was down 2 points with 3 seconds on the clock. What does the situation look like after the kick?


We need some data on the success rate for extra point kicks. A quick Internet search suggests that extra-point kicks are successful 94.4% of the time.


Before the kick, the score was 28 to 26. After the kick, time has expired, but what is the score? There is a 94.4% chance that the score is 28 to 27 and a 5.6% chance the score is still 28 to 26. The key is that the situation/environment/world/game evolves randomly. We do NOT know with certainty what the status of the game will be following the attempted extra-point kick.


Incidentally, 2-point conversions succeed about 48% of the time. If we add a decision to this scenario, we now get to partially control how the game evolves. If we go for 2, we have a 48% chance of a tied, 28 to 28 game and a 52% chance of a 28 to 26 loss.


ASSUMING everything is average
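
To make that concrete, here is a minimal sketch in Python of the end-of-regulation decision. The 94.4% and 48% figures come from the averages above; the 50% overtime win probability is an added assumption purely for illustration.

P_XP = 0.944      # extra-point success rate (from above)
P_TWO = 0.48      # 2-point conversion success rate (from above)
P_OT_WIN = 0.50   # assumed chance of winning in overtime (illustrative only)

# After a kick, the score is 28-27 with probability P_XP and 28-26 otherwise.
# Either way the game ends in a loss, so kicking cannot win from down 2.
win_prob_kick = P_XP * 0.0 + (1 - P_XP) * 0.0

# A successful 2-point try ties the game at 28-28 and forces overtime.
win_prob_two = P_TWO * P_OT_WIN + (1 - P_TWO) * 0.0

print(f"Kick the extra point: {win_prob_kick:.1%} chance to win")
print(f"Go for two:           {win_prob_two:.1%} chance to win")

Down by 2 with no time left, the kick can never win the game, so even an average 2-point attempt dominates.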


The best tool for providing in-game decision support analytics is the Markov Decision Process (MDP). Informally, a Markov process is a stochastic model of how the environment evolves based on its current state. Really informally, the idea of a Markov process is that a situation (a game) evolves somewhat randomly from play to play. The coach can affect how the game evolves.


The MDP framework involves several elements.


First, there is a set of variables that defines the state of the environment. We will call these state space variables.


Second, there is a decision maker who has a set of possible actions.


Third, there is a reward function that describes the value the decision maker receives for taking some action in a specific state.


Fourth, there are equations of motion that describe how the state or environment evolves based on the decision maker’s actions.

That is the basic structure. The next question is implementation. What is needed? First and foremost, we need to understand how the world evolves. This gets us into either simple data, such as how often extra points are successful, or potentially complex models. The extension to this framework is a dynamic optimization model. That is beyond our course and computationally challenging, but the basic idea is to use data to make the optimal sequence of decisions.
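
As a rough sketch of the four elements, here is a toy Python version of the late-game example. The transition probabilities reuse the extra-point and 2-point figures from earlier; the reward values (treating a tie as half a win because it forces overtime) are illustrative assumptions, not course data.

# 1. State space variables (collapsed here into simple labels)
states = ["down_2", "down_1", "tied"]

# 2. The decision maker's set of possible actions
actions = ["kick", "go_for_two"]

# 3. Reward function: value of ending regulation in each state
reward = {"down_2": 0.0, "down_1": 0.0, "tied": 0.5}

# 4. Equations of motion: P(next state | current state, action)
transition = {
    ("down_2", "kick"):       {"down_1": 0.944, "down_2": 0.056},
    ("down_2", "go_for_two"): {"tied": 0.48, "down_2": 0.52},
}

def expected_reward(state, action):
    """One-step expected reward of taking an action in a given state."""
    return sum(p * reward[s2] for s2, p in transition[(state, action)].items())

for a in actions:
    print(f"{a}: expected value {expected_reward('down_2', a):.3f}")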


Example: The Sacrifice Bunt


The classic example of in-game decision modeling is the sacrifice bunt in baseball.


How do we describe the environment in a baseball inning? We can start by considering the number of outs and the runners on base. For runners on base there are 8 possible states, from bases empty to bases loaded. For outs there are 3 states: 0, 1, or 2. Therefore, we can describe an inning as being in 1 of 24 states.
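
If it helps to see them, the 24 states can be enumerated in a few lines of Python (the runner labels are just illustrative names):

from itertools import product

base_states = ["empty", "1st", "2nd", "3rd", "1st-2nd", "1st-3rd",
               "2nd-3rd", "loaded"]   # 8 runner configurations
out_states = [0, 1, 2]                # 3 possible out counts

inning_states = list(product(base_states, out_states))
print(len(inning_states))  # 24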


As a simple example, with a man on first and no one out, what action should the manager take? Call for a sacrifice bunt or have the batter swing away? To make the decision we need something called a run expectancy table. This table uses historical data to determine the expected number of runs scored in an inning given the current state of the game.


The run expectancy table tells us that teams with a man on first and 0 outs score about .94 runs from that point onward in the inning. The decision is either to swing away or to change the state of the environment through a sacrifice bunt (we will assume the sacrifice bunt successfully moves the runner to second and results in an out for the batter). If the manager chooses the sacrifice bunt, the new state of the inning will be a runner on second and 1 out, and the expected runs will be only .56. Simple.
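
Here is a sketch of that comparison in Python, using only the two run-expectancy values quoted above. A fuller analysis would use the complete 24-state table and account for failed bunts, but the structure of the comparison is the same.

# Run expectancy for the two states involved (values quoted above)
run_expectancy = {
    ("1st", 0): 0.94,   # man on first, no outs
    ("2nd", 1): 0.56,   # man on second, one out (after a successful bunt)
}

def bunt_or_swing(current_state, post_bunt_state):
    """Compare expected runs from swinging away vs. a successful sacrifice bunt."""
    swing = run_expectancy[current_state]
    bunt = run_expectancy[post_bunt_state]
    return "swing away" if swing >= bunt else "bunt"

print(bunt_or_swing(("1st", 0), ("2nd", 1)))  # swing away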


Next Class


Class 6: Analytics and Decision Biases

  • Classes 4 and 5 are about the challenge of developing analytics in organizations.

  • Class 6 will be about using analytics in organizations.

This one is about psychology and reality:

  • Psychology in terms of how decisions are made

  • Reality in terms of how decisions actually get made in organizations

No homework on this one. But if you’ve ever worked in a large organization, think about how decisions are actually made.


Listen here:


Also streaming on Apple Podcasts, Spotify, and Stitcher.
