# 13.8: The Red and Black Game

In this section and the following three sections, we will study gambling strategies for one of the simplest gambling models. Yet in spite of the simplicity of the model, the mathematical analysis leads to some beautiful and sometimes surprising results that have importance and application well beyond gambling. Our exposition is based primarily on the classic book *How to Gamble If You Must: Inequalities for Stochastic Processes* by Lester E. Dubins and Leonard J. Savage (1965).

## Basic Theory

### Assumptions

Here is the basic situation: The gambler starts with an initial sum of money. She bets on independent, probabilistically identical games, each with two outcomes—win or lose. If she wins a game, she receives the amount of the bet on that game; if she loses a game, she must pay the amount of the bet. Thus, the gambler plays at even stakes. This particular situation (IID games and even stakes) is known as red and black, and is named for the color bets in the casino game roulette. Other examples are the *pass* and *don't pass* bets in craps.

Let us try to formulate the gambling experiment mathematically. First, let \(I_n\) denote the outcome of the \(n\)th game for \(n \in \N_+\), where 1 denotes a win and 0 denotes a loss. These are independent indicator random variables with the same distribution: \[ \P\left(I_n = 1\right) = p, \quad \P\left(I_n = 0\right) = 1 - p \] where \(p \in [0, 1]\) is the probability of winning an individual game. Thus, \(\bs{I} = (I_1, I_2, \ldots)\) is a sequence of Bernoulli trials.

If \(p = 0\), then the gambler always loses and if \(p = 1\) then the gambler always wins. These trivial cases are not interesting, so we will usually assume that \(0 \lt p \lt 1\). In real gambling houses, of course, \(p \lt \frac{1}{2}\) (that is, the games are unfair to the player), so we will be particularly interested in this case.

### Random Processes

The gambler's fortune over time is the basic random process of interest: Let \(X_0\) denote the gambler's initial fortune and \(X_i\) the gambler's fortune after \(i\) games. The gambler's strategy consists of the decisions of how much to bet on the various games and when to quit. Let \(Y_i\) denote the amount of the \(i\)th bet, and let \(N\) denote the number of games played by the gambler. We can always assume that the games continue forever, with the convention that the gambler bets 0 on every game after game \(N\). With this convention, the game outcome, fortune, and bet processes are defined for all times \(i \in \N_+\).

The fortune process is related to the wager process as follows: \[ X_j = X_{j-1} + \left(2 I_j - 1\right) Y_j, \quad j \in \N_+ \]
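The recursion above can be simulated directly. The following Python sketch is purely illustrative and not part of the text's development; the function name `simulate_fortune` and the betting function `bet` are hypothetical placeholders.

```python
import random

def simulate_fortune(x0, bet, p, n_games, rng):
    """Simulate X_0, X_1, ..., X_n via X_j = X_{j-1} + (2 I_j - 1) Y_j,
    where bet(x) is the (hypothetical) wager at current fortune x."""
    fortunes = [x0]
    for _ in range(n_games):
        x = fortunes[-1]
        y = bet(x)                          # Y_j: amount wagered on game j
        i = 1 if rng.random() < p else 0    # I_j: Bernoulli game outcome
        fortunes.append(x + (2 * i - 1) * y)
    return fortunes

# Constant $1 bets on 20 fair games, starting from a fortune of 10
path = simulate_fortune(10, lambda x: 1, 0.5, 20, random.Random(42))
```

With constant $1 bets, each step moves the fortune up or down by exactly 1, matching the even-stakes assumption.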

### Strategies

The gambler's strategy can be very complicated. For example, the random variable \(Y_n\), the gambler's bet on game \(n\), or the event \(N = n - 1\), her decision to stop after \(n - 1\) games, could be based on the entire past history of the game, up to time \(n\). Technically, this history forms a \( \sigma \)-algebra: \[ \mathscr{H}_n = \sigma\left\{X_0, Y_1, I_1, Y_2, I_2, \ldots, Y_{n-1}, I_{n-1}\right\} \] Moreover, the strategy could involve additional sources of randomness. For example, a gambler playing roulette could partly base her bets on the roll of a lucky die that she keeps in her pocket. However, the gambler cannot see into the future (unfortunately from her point of view), so we can at least assume that \(Y_n\) and \(\{N = n - 1\}\) are independent of the future outcomes \(\left(I_n, I_{n+1}, \ldots\right)\).

At least in terms of expected value, any gambling strategy is futile if the games are unfair.

\(\E\left(X_i\right) = \E\left(X_{i-1}\right) + (2 p - 1) \E\left(Y_i\right)\) for \(i \in \N_+\)

## Proof

Taking expected values in the relation \( X_i = X_{i-1} + \left(2 I_i - 1\right) Y_i \) gives \( \E(X_i) = \E(X_{i-1}) + \E\left[\left(2 I_i - 1\right) Y_i\right] \). By the no-prescience assumption, \(Y_i\) is independent of \(I_i\), so \( \E\left[\left(2 I_i - 1\right) Y_i\right] = \E\left(2 I_i - 1\right) \E(Y_i) = (2 p - 1) \E(Y_i) \).

Suppose that the gambler has a positive probability of making a real bet on game \(i\), so that \(\E(Y_i) \gt 0\). Then

- \(\E(X_i) \lt \E(X_{i-1})\) if \(p \lt \frac{1}{2}\)
- \(\E(X_i) \gt \E(X_{i-1})\) if \(p \gt \frac{1}{2}\)
- \(\E(X_i) = \E(X_{i-1})\) if \(p = \frac{1}{2}\)

## Proof

By the previous result, \( \E(X_i) - \E(X_{i-1}) = (2 p - 1) \E(Y_i) \). Since \( \E(Y_i) \gt 0 \), the sign of this difference is the sign of \( 2 p - 1 \).

Thus on any game in which the gambler makes a positive bet, her expected fortune strictly decreases if the games are unfair, remains the same if the games are fair, and strictly increases if the games are favorable.
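To make this concrete, iterating the expected-value identity for constant bets of size \(y\) with no stopping gives \( \E(X_n) = x_0 + n (2p - 1) y \). The sketch below checks this by brute-force enumeration; the function name and parameters are illustrative, not from the text.

```python
from itertools import product

def expected_fortune(x0, y, p, n):
    """Exact E(X_n) for constant bets of size y on n games (no stopping),
    computed by enumerating all 2^n win/loss sequences."""
    total = 0.0
    for outcomes in product([0, 1], repeat=n):
        prob, x = 1.0, x0
        for i in outcomes:
            prob *= p if i == 1 else 1 - p
            x += (2 * i - 1) * y           # X_j = X_{j-1} + (2 I_j - 1) Y_j
        total += prob * x
    return total

# With unfair games (p < 1/2) the expected fortune strictly decreases:
# E(X_n) = x0 + n (2p - 1) y
```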

As we noted earlier, a general strategy can depend on the past history and can be randomized. However, since the underlying Bernoulli games are independent, one might guess that these complicated strategies are no better than simple strategies in which the amount of the bet and the decision to stop are based only on the gambler's current fortune. These simple strategies do indeed play a fundamental role and are referred to as stationary, deterministic strategies. Such a strategy can be described by a betting function \(S\) from the space of fortunes to the space of allowable bets, so that \(S(x)\) is the amount that the gambler bets when her current fortune is \(x\).

### The Stopping Rule

From now on, we will assume that the gambler's stopping rule is a very simple and standard one: she will bet on the games until she either loses her entire fortune and is ruined or reaches a fixed target fortune \(a\): \[ N = \min\{n \in \N: X_n = 0 \text{ or } X_n = a\} \] Thus, any strategy (betting function) \(S\) must satisfy \(S(x) \le \min\{x, a - x\}\) for \(0 \le x \le a\): the gambler cannot bet what she does not have, and will not bet more than is necessary to reach the target \(a\).

We can think of the difference between the target fortune and the initial fortune as the entire fortune of the house. With this interpretation, the player and the house play symmetric roles, but with complementary win probabilities: play continues until either the player is ruined or the house is ruined. Our main interest is in the final fortune \(X_N\) of the gambler. Note that this random variable takes just two values: 0 and \(a\).

The mean and variance of the final fortune are given by

- \(\E(X_N) = a \P(X_N = a)\)
- \(\var(X_N) = a^2 \P(X_N = a) \left[1 - \P(X_N = a)\right]\)
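As a preview of the unit-bet analysis in the coming sections, the probability \(\P(X_N = a)\) can be computed from the hitting equations \(u(x) = p\,u(x+1) + (1-p)\,u(x-1)\) with \(u(0) = 0\) and \(u(a) = 1\): rearranging shows that the successive differences \(u(k+1) - u(k)\) form a geometric sequence with ratio \((1-p)/p\). The sketch below uses this observation; the function name is an illustrative choice.

```python
def success_probability(x, a, p):
    """Probability that unit-bet play reaches the target a before ruin,
    starting from fortune x, for 0 < p < 1.  The successive differences
    u(k+1) - u(k) of the hitting probabilities are geometric with
    ratio (1 - p) / p, so u(x) is a ratio of partial sums."""
    r = (1 - p) / p
    weights = [r ** k for k in range(a)]   # differences, up to a common scale
    return sum(weights[:x]) / sum(weights)

a, x, p = 10, 5, 0.5
u = success_probability(x, a, p)
mean = a * u                    # E(X_N) = a P(X_N = a)
variance = a**2 * u * (1 - u)   # var(X_N)
```

For fair games (\(p = 1/2\)) this gives \(u = x/a\), so starting halfway to the target the gambler reaches it with probability \(1/2\).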

Presumably, the gambler would like to maximize the probability of reaching the target fortune. Is it better to bet small amounts or large amounts, or does it not matter? How does the optimal strategy, if there is one, depend on the initial fortune, the target fortune, and the game win probability?

We are also interested in \(\E(N)\), the expected number of games played. Perhaps a secondary goal of the gambler is to maximize the expected number of games that she gets to play. Are the two goals compatible or incompatible? That is, can the gambler maximize both her probability of reaching the target *and* the expected number of games played, or does maximizing one quantity necessarily mean minimizing the other?

In the next two sections, we will analyze and compare two strategies that are in a sense opposites:

- Timid Play: On each game, until she stops, the gambler makes a small constant bet, say $1.
- Bold Play: On each game, until she stops, the gambler bets either her entire fortune or the amount needed to reach the target fortune, whichever is smaller.
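The two strategies can be compared numerically ahead of the detailed analysis. The sketch below computes, for each strategy, the exact probability of reaching the target before ruin by iterating the hitting equations to a fixed point on integer fortunes; the function and parameter names are illustrative choices, not notation from the text.

```python
def hit_probabilities(bet, a, p, tol=1e-13):
    """For an integer-valued betting function bet, compute the probability
    of reaching the target a before ruin from each fortune x = 0, ..., a,
    by iterating u(x) = p u(x + bet(x)) + (1 - p) u(x - bet(x)) to a
    fixed point, with boundary values u(0) = 0 and u(a) = 1."""
    u = [0.0] * a + [1.0]
    while True:
        new = [0.0] + [p * u[x + bet(x)] + (1 - p) * u[x - bet(x)]
                       for x in range(1, a)] + [1.0]
        if max(abs(n - o) for n, o in zip(new, u)) < tol:
            return new
        u = new

a, p = 16, 0.4
timid = hit_probabilities(lambda x: 1, a, p)             # constant $1 bets
bold = hit_probabilities(lambda x: min(x, a - x), a, p)  # bet all that is usable
```

In this unfair-game example (\(p = 0.4\)), bold play reaches the target with at least as high a probability as timid play from every starting fortune, a numerical hint at the comparisons worked out in the next sections.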

In the final section of the chapter, we will return to the question of optimal strategies.