INTRODUCTION
As this letter to "Ask Marilyn" attests, the game Chuck-a-Luck appears to favor the bettor. For this reason, it has long been a popular carnival game. It also has a long history in British pubs, where it is called "Crown and Anchor" because the six faces of the dice are inscribed with the four common card suits (clubs, diamonds, hearts, and spades) plus two additional suits, the crown and the anchor (Epstein 1977).
Of course, only a naïve gambler, and a poor one at that, would fall for this ruse, but as P.T. Barnum reputedly said, "There's a sucker born every minute." A more seasoned gambler, like Daniel here, will naturally suspect a fallacy in his probabilistic logic yet will be equally perplexed as to what that fallacy is. This is often the case with things probabilistic. Probability problems are highly nonintuitive. The goal of the teacher of probability, then, is to impart formal methods that solve these problems and eliminate fallacious reasoning.
This case was developed for an undergraduate course introducing probabilistic models in operations research using the classic textbook by Hillier and Lieberman (1995). The course begins with coverage of Markov chains, a simple type of stochastic (probabilistic) process. The course is offered each spring semester and builds on the students' prerequisite course in probability, usually taken in the prior fall semester. This case provides a good bridge between the two courses.
Objectives
This is a directed case study with two parts. Part I was developed to revive students' prior knowledge of probability theory. The goal was to reinforce sound probabilistic reasoning. The questions here explore both incorrect and correct probabilistic arguments. For most probability problems there are a variety of plausible solution approaches (see "Discussion" below). Many of these approaches reach the correct solution; many do not. Students have a hard time distinguishing the good from the bad. This series of questions was designed to explore alternative solutions and provide insight into what constitutes correct probabilistic reasoning. The final question asks students to then collect empirical evidence, for this is often the only way to convince yourself that your methodology was indeed correct.
Part II of this case study shows how repeated play of this game can be modeled as a Markov chain. The goal was to introduce a "fun" application of Markov chains called the gambler's ruin problem. The first question addresses problem formulation as well as healthy questioning of assumptions. Two additional questions then address long-run issues involving the eventual absorbing behavior of the chain. Again, a final question asks students to simulate the play of the gambler's ruin game to verify their results.
DISCUSSION
Part I: What Are the Odds?
Answer to Question 1: Since the probability of each die not showing your number is 5/6, the probability of losing this game is (5/6)^{3} = 125/216. Hence, the probability of winning is 1 - 125/216 = 91/216, giving odds of winning of 91:125.
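As a quick sanity check, all 216 equally likely outcomes can be enumerated directly. The sketch below is illustrative Python (standard library only); the function name is ours, not from the case.

```python
from fractions import Fraction
from itertools import product

def win_probability(number=1):
    """Probability that at least one of three fair dice shows `number`."""
    outcomes = list(product(range(1, 7), repeat=3))  # all 6^3 = 216 rolls
    wins = sum(1 for roll in outcomes if number in roll)
    return Fraction(wins, len(outcomes))

print(win_probability())  # 91/216
```

By symmetry, the probability is the same whichever number the bettor picks.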
Answer to Question 2: Daniel's analysis is flawed in that, instead of finding the probability of winning, he has found the expected winnings per throw. This is indeed 50 cents, since each of the three dice independently has probability 1/6 of winning him $1. However, Daniel forgot to subtract the expected losses, namely the original $1 wager that is lost on the 125 of every 216 throws in which his number does not turn up. This yields an expected profit on each throw of 1/2 - 125/216 = (108 - 125)/216 = -17/216, or about -8 cents, as Marilyn claims.
Here we have subtracted expected losses from expected winnings to find the expected profit per throw. This is probably the simplest approach. An alternative approach by Mosteller (1965) is illustrated in the answer to question 3 below. A final approach (cf. Packel 1981) utilizes the fact that the number of successes in three rolls is a binomial random variable. This approach is illustrated in the answer to question 4 below.
Answer to Question 3: We first compute the probabilities of the three possible events: (i) all three dice are the same, (ii) all three dice are different, and (iii) two of the three are the same. Since each die has six possibilities, there are 6*6*6 = 216 possible outcomes for the three dice. Case (i) will occur in only 6*1*1 = 6 of these cases since there are six choices for the first die, but, once it is set, the other two dice must be the same. Case (ii) will occur in 6*5*4 = 120 of these cases since, for the dice to all differ, there are six choices for the first die, five choices for the second die, and then four choices for the last die. Case (iii) occurs in the remaining 216 - 6 - 120 = 90 cases.
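The counts 6, 120, and 90 can be verified by brute force. This is our own illustrative Python check, not part of the original case:

```python
from itertools import product

# Classify all 6^3 = 216 equally likely outcomes of three dice.
all_same = all_diff = one_pair = 0
for roll in product(range(1, 7), repeat=3):
    distinct = len(set(roll))
    if distinct == 1:
        all_same += 1   # case (i): all three dice the same
    elif distinct == 3:
        all_diff += 1   # case (ii): all three dice different
    else:
        one_pair += 1   # case (iii): exactly two the same

print(all_same, all_diff, one_pair)  # 6 120 90
```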
To compute the operator's expected profit using these probabilities, recognize that in case (i) the operator pays the player whose number showed up $3, which can be taken from the wagers of three of the five losing players; the wagers of the other two losing players net the operator $2. In case (ii) the operator pays the three winning players $1 each from the wagers of the three losing players, netting himself nothing. In case (iii) the operator nets $1 after paying one winner $2 and one winner $1 from the $4 wagered by the four losing players. Thus, the operator's expected winnings per round of six $1 bets are:
2(6/216) + 0(120/216) + 1(90/216) = 102/216 = 17/36 ≈ $0.47.
Dividing this profit by six gives an expected profit of nearly $0.08 from each bettor, as Marilyn claimed.
Answer to Question 4: Now we utilize the binomial distribution by defining the Bernoulli event, with probability of success 1/6, to be that a die shows our number. Suppose a pair pays $x and a triple pays $y (the original game has x = 2 and y = 3). Using the binomial probability of k successes among the three dice, for k = 0, 1, 2, 3, yields an expected profit of

(-1)(125/216) + (1)(75/216) + (x)(15/216) + (y)(1/216) = (15x + y - 50)/216.
This equals 0 (a fair game) if and only if 15x + y = 50. A natural choice is x = 3 and y = 5. Another possibility is x = 2 and y = 20. (As verification, notice that the original payoffs x = 2 and y = 3 give the expected profit of -17/216 found by alternative means in the answer to question 2.)
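The fair-game condition can be checked numerically for any proposed payoffs. The following Python sketch is our own illustration (the function name `expected_profit` is not from the case); it uses exact rational arithmetic:

```python
from fractions import Fraction

# Binomial probabilities for k = 0, 1, 2, 3 dice showing the chosen number.
P = [Fraction(125, 216), Fraction(75, 216), Fraction(15, 216), Fraction(1, 216)]

def expected_profit(x, y):
    """Expected profit per $1 wager when a pair pays $x and a triple pays $y."""
    return -1 * P[0] + 1 * P[1] + x * P[2] + y * P[3]

print(expected_profit(2, 3))   # -17/216  (the original game)
print(expected_profit(3, 5))   # 0        (a fair game)
```

Any integer pair satisfying 15x + y = 50, such as x = 2, y = 20, also returns zero.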
Answer to Question 5: The sample mean winnings should eventually converge to the theoretical average of -17/216 ≈ -$0.08. Advanced classes could be asked to create a confidence interval for the estimate, as illustrated in, for example, Levin and Rubin (1991). Since we are dealing with a "displaced" binomial, which takes on the value -1 rather than 0 when no successes occur, the population mean is mu = -17/216, as we have already seen, and the population variance is sigma^2 = E[W^2] - mu^2 = 269/216 - (17/216)^2 = 57815/46656 ≈ 1.24, where W denotes the winnings on a single throw.
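For instructors who want a concrete check, the exact mean and variance, and a Monte Carlo estimate of the mean, can be computed as in this illustrative Python sketch (the seed and sample size are arbitrary choices of ours):

```python
from fractions import Fraction
import random

# Winnings distribution: the "displaced" binomial described above.
PROB = {0: Fraction(125, 216), 1: Fraction(75, 216),
        2: Fraction(15, 216), 3: Fraction(1, 216)}
PAYOFF = {0: -1, 1: 1, 2: 2, 3: 3}

mu = sum(PAYOFF[k] * PROB[k] for k in PROB)                  # -17/216
var = sum(PAYOFF[k] ** 2 * PROB[k] for k in PROB) - mu ** 2  # about 1.24

def one_throw(rng):
    """Simulate one $1 wager on a fixed number; return the profit."""
    hits = sum(rng.randint(1, 6) == 1 for _ in range(3))
    return PAYOFF[hits]

rng = random.Random(1)
n = 100_000
sample_mean = sum(one_throw(rng) for _ in range(n)) / n  # near -0.079
```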
Part II: Going for Broke
Answer to Question 1: This is a classical gambler's ruin problem (cf. Ross 1993). Let the state be the amount of money Chuck has at the start of a play, which ranges from 0 to 4 (state 4 stands for "$4 or more," since a win of $2 or $3 can carry Chuck past his goal). The state transition probability p_{ij} gives the probability of having $j after one play of the game, given that you start with $i. Using the binomial probabilities from Part I, the matrix P = [p_{ij}] of transition probabilities when wagering $1 per play is

        [    1        0        0        0        0    ]
        [ 125/216     0      75/216   15/216    1/216 ]
    P = [    0     125/216      0     75/216   16/216 ]
        [    0        0     125/216      0     91/216 ]
        [    0        0        0        0        1    ]
The Markov property is satisfied since each play is independent of every other. Thus, the amount of money at the start of the next play depends only on the amount of money now (current state) and the outcome (via transition probabilities) of this play.
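The transition matrix can be built and checked in a few lines. This is an illustrative Python sketch (our own), assuming state 4 stands for "$4 or more" because a win of $2 or $3 can carry Chuck past his goal:

```python
from fractions import Fraction

def p(n):
    """Shorthand for a probability with denominator 216."""
    return Fraction(n, 216)

# States 0..4 = Chuck's bankroll at the start of a play; 0 and 4 absorbing.
P = [
    [1,      0,      0,      0,      0    ],
    [p(125), 0,      p(75),  p(15),  p(1) ],
    [0,      p(125), 0,      p(75),  p(16)],  # $2 + win of $2 or $3 -> state 4
    [0,      0,      p(125), 0,      p(91)],  # $3 + any win -> state 4
    [0,      0,      0,      0,      1    ],
]
assert all(sum(row) == 1 for row in P)  # every row is a distribution
```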
Answer to Question 2: To determine the probability that Chuck goes home broke, define the absorption probability f_{i} to be the conditional probability that Chuck eventually goes broke, given that he currently has $i. Clearly, f_{0} = 1 and f_{4} = 0. Using a first-step analysis we seek to solve the system of equations

    f_{1} = 125/216 + (75/216) f_{2} + (15/216) f_{3}
    f_{2} = (125/216) f_{1} + (75/216) f_{3}
    f_{3} = (125/216) f_{2}.
The solution to this system of three equations in three unknowns is f_{1} ≈ 0.804, f_{2} ≈ 0.583, f_{3} ≈ 0.337.
Thus, the probability that Chuck goes broke, starting with $2, is approximately 0.583.
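The first-step equations can be solved exactly with rational arithmetic. Here is an illustrative Python sketch (our own), using the same substitution order one would use by hand:

```python
from fractions import Fraction

a, b, c = Fraction(125, 216), Fraction(75, 216), Fraction(15, 216)

# First-step equations:
#   f1 = a + b*f2 + c*f3,   f2 = a*f1 + b*f3,   f3 = a*f2.
# Substituting f3 = a*f2 into the f2 equation gives f2 = a*f1/(1 - a*b);
# substituting both into the f1 equation isolates f1.
f2_per_f1 = a / (1 - a * b)
f1 = a / (1 - (b + c * a) * f2_per_f1)
f2 = f2_per_f1 * f1
f3 = a * f2

print(float(f1), float(f2), float(f3))  # about 0.804 0.583 0.337
```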
Answer to Question 3: To determine how long Chuck plays on average, we first combine the termination states 0 and 4 into a single state and combine the probabilities associated with entering these states. Let this combined state be called state 0. This gives a revised transition probability matrix

         [    1        0        0        0    ]
         [ 126/216     0      75/216   15/216 ]
    P' = [  16/216  125/216      0     75/216 ]
         [  91/216     0     125/216      0   ]
Define the mean first-passage times mu_{i} to be the expected number of plays until Chuck either goes broke or has $4 (i.e., enters state 0), given that he currently has $i, i = 1, 2, 3. Clearly, mu_{0} = 0 since the game terminates once Chuck enters state 0. Using a first-step analysis (and counting that one step) we seek to solve the system of equations

    mu_{1} = 1 + (75/216) mu_{2} + (15/216) mu_{3}
    mu_{2} = 1 + (125/216) mu_{1} + (75/216) mu_{3}
    mu_{3} = 1 + (125/216) mu_{2}.
The solution to this system of three equations in three unknowns is mu_{1} ≈ 2.39, mu_{2} ≈ 3.42, mu_{3} ≈ 2.98.
Thus, the expected number of plays of the game, starting with $2, is approximately 3.42.
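The analogous computation for the mean first-passage times, again in exact rational arithmetic, is sketched below (our own illustration; the elimination follows the hand derivation):

```python
from fractions import Fraction

a, b, c = Fraction(125, 216), Fraction(75, 216), Fraction(15, 216)

# First-step equations (each counts the current play):
#   mu1 = 1 + b*mu2 + c*mu3,  mu2 = 1 + a*mu1 + b*mu3,  mu3 = 1 + a*mu2.
# Eliminating mu3 gives mu1 = 1 + c + (b + c*a)*mu2 and
# mu2 = (1 + b + a*mu1)/(1 - a*b); substituting isolates mu1.
k = (b + c * a) / (1 - a * b)
mu1 = (1 + c + k * (1 + b)) / (1 - k * a)
mu2 = (1 + b + a * mu1) / (1 - a * b)
mu3 = 1 + a * mu2

print(round(float(mu1), 2), round(float(mu2), 2), round(float(mu3), 2))  # 2.39 3.42 2.98
```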
Answer to Question 4: When wagering $2 per play, Chuck is absorbed after a single play: he goes broke if his number fails to appear and reaches at least $4 otherwise. The transition probability matrix, on states 0, 2, and 4, reduces to

          [    1        0       0    ]
    P'' = [ 125/216     0     91/216 ]
          [    0        0       1    ]
Clearly, the probability Chuck goes home broke is 125/216 ≈ 0.579, which is slightly less than before. Thus, this strategy is better if he wants to minimize his probability of going broke. The game lasts just one play, however, which does not make for a long and exciting game.
 Answer to Question 5: The "chucker" should win approximately 41.7 percent of the games, while the "house" player should win about 58.3 percent of the games.
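Question 5's game playing can also be scripted. This is a minimal Monte Carlo sketch in Python (function name, seed, and sample size are our own choices):

```python
import random

def chucker_goes_broke(bankroll=2, goal=4, rng=random):
    """Play chuck-a-luck at $1 per throw until broke or the goal is reached."""
    while 0 < bankroll < goal:
        hits = sum(rng.randint(1, 6) == 1 for _ in range(3))
        bankroll += hits if hits > 0 else -1
    return bankroll <= 0   # True if the "chucker" went broke

rng = random.Random(2024)
games = 100_000
broke = sum(chucker_goes_broke(rng=rng) for _ in range(games))
print(broke / games)   # near 0.583
```

With many independent replications, the estimate settles near the theoretical absorption probability f_{2} ≈ 0.583.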
CLASSROOM MANAGEMENT
This case should be taught in two parts. Each part is somewhat independent in methodology, the first part addressing probability theory and the second part stochastic processes. The instructor could focus on teaching one part but not the other, depending on the material covered in the course. I addressed both parts, attempting to tie a prior course in probability modeling to the material in the first part of my course on stochastic processes. These case discussions were therefore a couple of weeks apart, with the interim time spent introducing new subject matter on Markov chains.
This case, although short on details, is lengthy in the amount of work involved. Both parts require that students do a lot of outside computational work, preferably in small groups. Classroom time can be used as a precursor to that work to introduce the problem and discuss the fallacy in probabilistic logic that was the impetus for this case. Discussion should center on the key concepts, not necessarily the details of computation, although students often find these difficult concepts harder still when they lack numerical illustration.
The classroom is also a good setting to perform some of the gaming to verify the solutions obtained. With many small groups of two playing in parallel, the instructor can then pool the statistical estimates from all the groups into a more accurate combined estimate. This can lead into a discussion of statistical output analysis, tying in aspects of another course in statistics that my students take concurrently with my own. Incidentally, after sharing the case with a colleague, he extended the game-playing aspects of the gambler's ruin problem into a Monte Carlo simulation project for his course in computer simulation the following semester. A JavaScript simulator can be found at www.acsu.buffalo.edu/~thill/programs/JavaScript/ie477/chuck.html. This simulator can be used in a classroom equipped with Internet access, enabling one to quickly simulate the play of many games on a computer.
REFERENCES
Epstein, R.A. The Theory of Gambling and Statistical Logic. New York: Academic Press, 1977.
Hillier, F.S., and G.J. Lieberman. Introduction to Operations Research, 6th ed. New York: McGraw-Hill, 1995.
Levin, R.I., and D.S. Rubin. Statistics for Management, 5th ed. Englewood Cliffs, NJ: Prentice Hall, 1991.
Mosteller, F. Fifty Challenging Problems in Probability with Solutions. New York: Dover Publications, Inc., 1965.
Packel, E. The Mathematics of Games and Gambling. Washington, DC: The Mathematical Association of America, 1981.
Ross, S.M. Introduction to Probability Models, 5th ed. San Diego: Academic Press, Inc., 1993.
vos Savant, M. "Ask Marilyn." Parade, December 27, 1998, p. 25.
Acknowledgements: Publication of this case study on the National Center for Case Study Teaching in Science website was made possible with support from The Pew Charitable Trusts.
