# 16.2: Elements of Markov Sequences


Markov sequences (Markov chains) are often studied at a very elementary level, utilizing algebraic tools such as matrix analysis. In this section, we show that the fundamental Markov property is an expression of conditional independence of “past” and “future," given the “present.” The essential Chapman-Kolmogorov equation is seen as a consequence of this conditional independence. In the usual time-homogeneous case with finite state space, the Chapman-Kolmogorov equation leads to the algebraic formulation that is widely studied at a variety of levels of mathematical sophistication. With the background laid, we only sketch some of the more common results. This should provide a probabilistic perspective for a more complete study of the algebraic analysis.

**Markov sequences**

We wish to model a system characterized by a sequence of *states* taken on at discrete instants which we call *transition times*. At each transition time, there is either a change to a new state or a renewal of the state immediately before the transition. Each state is maintained unchanged during the *period* or *stage* between transitions. At any transition time, the move to the next state is characterized by a conditional *transition probability* distribution. We suppose that the system is *memoryless*, in the sense that the transition probabilities are dependent upon the current state (and perhaps the period number), but not upon the manner in which that state was reached. *The past influences the future only through the present*. This is the *Markov property*, which we model in terms of conditional independence.

For period \(i\), the state is represented by a value of a random variable \(X_i\), whose value is one of the members of a set **E**, known as the *state space*. We consider only a finite state space and identify the states by integers from 1 to \(M\). We thus have a sequence

\(X_N = \{X_n: n \in N\}\), where \(N = \{0, 1, 2, \cdot\cdot\cdot\}\)

We view an observation of the system as a composite trial. Each \(\omega\) yields a sequence of states \(\{X_0 (\omega), X_1 (\omega), \cdot\cdot\cdot\}\) which is referred to as a *realization* of the sequence, or a *trajectory*. We suppose the system is evolving in time. At discrete instants of time \(t_1, t_2, \cdot\cdot\cdot\) the system makes a transition from one state to the succeeding one (which may be the same).

| Period | Interval | State and transition |
| --- | --- | --- |
| Initial period | \(n = 0\), \(t \in [0, t_1)\) | state is \(X_0 (\omega)\); at \(t_1\) the transition is to \(X_1 (\omega)\) |
| Period one | \(n = 1\), \(t \in [t_1, t_2)\) | state is \(X_1 (\omega)\); at \(t_2\) the transition is to \(X_2 (\omega)\) |
| ...... | | |
| Period \(k\) | \(n = k\), \(t \in [t_k, t_{k + 1})\) | state is \(X_k (\omega)\); at \(t_{k +1}\) move to \(X_{k +1} (\omega)\) |
| ...... | | |

The parameter \(n\) indicates the *period* \(t \in [t_n, t_{n + 1})\). If the periods are of unit length, then \(t_n = n\). At \(t_{n + 1}\), there is a *transition* from the state \(X_n (\omega)\) to the state \(X_{n + 1} (\omega)\) for the next period. To simplify writing, we adopt the following convention:

\(U_n = (X_0, X_1, \cdot\cdot\cdot, X_n) \in E_n\) \(U_{m,n} = (X_m, \cdot\cdot\cdot, X_n)\) and \(U^{n} = (X_n, X_{n + 1}, \cdot\cdot\cdot) \in E^n\)

The random vector \(U_n\) is called the *past* at \(n\) of the sequence \(X_N\) and \(U^n\) is the *future* at \(n\). In order to capture the notion that the system is without memory, so that the future is affected by the present, but not by how the present is reached, we utilize the notion of conditional independence, given a random vector, in the following definition.

Definition

The sequence \(X_N\) is *Markov* iff

(M) \(\{X_{n + 1}, U_n\}\) ci \(|X_n\) for all \(n \ge 0\)

Several conditions equivalent to the Markov condition (M) may be obtained with the aid of properties of conditional independence. We note first that (M) is equivalent to

\(P(X_{n + 1} = k|X_n = j, U_{n - 1} \in Q) = P(X_{n + 1} = k|X_n = j)\) for each \(n \ge 0\), \(j, k \in E\), and \(Q \subset E^{n - 1}\)

The state in the next period is conditioned by the past only through the present state, and not by the manner in which the present state is reached. The statistics of the process are determined by the *initial state probabilities* and the *transition probabilities*

\(P(X_{n + 1} = k|X_n = j)\) \(\forall j, k \in E\), \(n \ge 0\)

The following examples exhibit a pattern which implies the Markov condition and which can be exploited to obtain the transition probabilities.

Example \(\PageIndex{1}\) One-dimensional random walk

An object starts at a given initial position. At discrete instants \(t_1, t_2, \cdot\cdot\cdot\) the object moves a random distance along a line. The various moves are independent of each other. Let

\(Y_0\) be the initial position

\(Y_k\) be the amount the object moves at time \(t = t_k\), with \(\{Y_k: 1 \le k\}\) iid

\(X_n = \sum_{k =0}^{n} Y_k\) be the position after \(n\) moves

We note that \(X_{n + 1} = g(X_n, Y_{n + 1})\). Since the position after the transition at \(t_{n + 1}\) is affected by the past only by the value of the position \(X_n\) and not by the sequence of positions which led to this position, it is reasonable to suppose that the process \(X_N\) is Markov. We verify this below.
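Since the moves are iid, the transition probability depends on the states only through the displacement \(k - j\). A minimal Python sketch of this for a discrete move distribution (the function names here are ours, not from the text):

```python
import random

def p_transition(j, k, moves, probs):
    """One-step probability P(X_{n+1} = k | X_n = j) = P(Y = k - j)."""
    return sum(p for y, p in zip(moves, probs) if y == k - j)

def random_walk(n_steps, moves, probs, x0=0, seed=0):
    """Simulate one trajectory X_0, X_1, ..., X_n with X_n = x0 + Y_1 + ... + Y_n."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += rng.choices(moves, weights=probs)[0]   # iid increment Y_k
        path.append(x)
    return path
```

For moves \(-1, 0, 1\) with probabilities \(0.3, 0.4, 0.3\), `p_transition(2, 3, ...)` is \(P(Y = 1) = 0.3\), regardless of how state 2 was reached.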

Example \(\PageIndex{2}\) A class of branching processes

Each member of a population is able to reproduce. For simplicity, we suppose that at certain discrete instants the entire next generation is produced. Some mechanism limits each generation to a maximum population of \(M\) members. Let

\(Z_{in}\) be the number propagated by the \(i\)th member of the \(n\)th generation.

\(Z_{in} = 0\) indicates death and no offspring, \(Z_{in} = k\) indicates a net of \(k\) propagated by the \(i\)th member (either death and \(k\) offspring or survival and \(k - 1\) offspring).

The population in generation \(n + 1\) is given by

\(X_{n + 1} = \text{min } \{M, \sum_{i = 1}^{X_n} Z_{in}\}\)

We suppose the class \(\{Z_{in}: 1 \le i \le M, 0 \le n\}\) is iid. Let \(Y_{n + 1} = (Z_{1n}, Z_{2n}, \cdot\cdot\cdot, Z_{Mn})\). Then \(\{Y_{n + 1}, U_n\}\) is independent. It seems reasonable to suppose the sequence \(X_N\) is Markov.

Example \(\PageIndex{3}\) An inventory problem

A certain item is stocked according to an \((m, M)\) inventory policy, as follows:

- If stock at the end of a period is less than \(m\), order up to \(M\).
- If stock at the end of a period is \(m\) or greater, do not order.

Let \(X_0\) be the initial stock, \(X_n\) the stock at the end of the \(n\)th period (before restocking), and \(D_n\) the demand during the \(n\)th period. Then for \(n \ge 0\),

\(X_{n + 1} = \begin{cases} \text{max } \{M - D_{n + 1}, 0\} & \text{if } 0 \le X_n < m \\ \text{max } \{X_n - D_{n + 1}, 0\} & \text{if } m \le X_n \end{cases} = g(X_n, D_{n + 1})\)

If we suppose \(\{D_n: 1 \le n\}\) is independent, then \(\{D_{n + 1}, U_n\}\) is independent for each \(n \ge 0\), and the Markov condition seems to be indicated.

*Remark*. In this case, the actual transition takes place throughout the period. However, for purposes of analysis, we examine the state only at the end of the period (before restocking). Thus, the transitions are dispersed in time, but the observations are at discrete instants.
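The recursion \(X_{n + 1} = g(X_n, D_{n + 1})\) is easy to code directly. A minimal sketch (the function name is ours):

```python
def next_stock(x, d, m, M):
    """(m, M) inventory policy: if end-of-period stock x < m, reorder up to M,
    then subtract the next period's demand d (stock cannot go below 0)."""
    if x < m:
        return max(M - d, 0)   # ordered up to M, then demand d occurred
    return max(x - d, 0)       # no order placed
```

With \(m = 1\), \(M = 3\): a stock of 0 triggers a reorder to 3, so a demand of 2 leaves stock 1; a stock of 2 with demand 3 is sold out, leaving 0.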

Example \(\PageIndex{4}\) Remaining lifetime

A piece of equipment has a lifetime which is an integral number of units of time. When a unit fails, it is replaced immediately with another unit of the same type. Suppose

- \(X_n\) is the remaining lifetime of the unit in service at time \(n\)
- \(Y_{n + 1}\) is the lifetime of the unit installed at time \(n\), with \(\{Y_n: 1 \le n\}\) iid

Then \(X_{n + 1} = \begin{cases} X_n - 1 & \text{if } X_n \ge 1 \\ Y_{n + 1} - 1 & \text{if } X_n = 0 \end{cases} = g(X_n, Y_{n + 1})\)

*Remark*. Each of these four examples exhibits the pattern

\(\{X_0, Y_n: 1 \le n\}\) is independent

\(X_{n + 1} = g_{n + 1} (X_n, Y_{n + 1})\), \(n \ge 0\)

We now verify the Markov condition and obtain a method for determining the transition probabilities.

**A pattern yielding Markov sequences**

Suppose \(\{Y_n : 0 \le n\}\) is independent (call these the *driving random variables*). Set

\(X_0 = g_0 (Y_0)\) and \(X_{n + 1} = g_{n + 1} (X_n, Y_{n + 1})\) \(\forall n \ge 0\)

Then

(a) \(X_N\) is Markov

(b) \(P(X_{n+1} \in Q|X_n = u) = P[g_{n + 1} (u, Y_{n + 1}) \in Q]\) for all \(n, u\), and any Borel set \(Q\).

VERIFICATION

It is apparent that if \(Y_0, Y_1, \cdot\cdot\cdot, Y_n\) are known, then \(U_n\) is known. Thus \(U_n = h_n (Y_0, Y_1, \cdot\cdot\cdot, Y_n)\), which ensures each pair \(\{Y_{n + 1}, U_n\}\) is independent. By property __(CI13)__, with \(X = Y_{n +1}\), \(Y = X_n\), and \(Z = U_{n - 1}\), we have

\(\{Y_{n + 1}, U_{n - 1}\}\) ci\(|X_n\)

Since \(X_{n + 1} = g_{n + 1} (Y_{n + 1}, X_n)\) and \(U_n = h_n (X_n, U_{n - 1})\), property __(CI9)__ ensures

\(\{X_{n + 1}, U_n\}\) ci\(|X_n\) \(\forall n \ge 0\)

which is the Markov property.

\(P(X_{n + 1} \in Q|X_n = u) = E\{I_Q [g_{n + 1} (X_n,Y_{n + 1})]|X_n = u\}\) a.s. \(= E\{I_Q [g_{n + 1} (u, Y_{n + 1})]\}\) a.s. \([P_X]\) by __(CE10b)__

\(= P[g_{n + 1} (u, Y_{n + 1}) \in Q]\) by __(E1a)__

— □
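When the state space and the driving random variable are both discrete, the formula \(P(X_{n+1} \in Q|X_n = u) = P[g_{n + 1} (u, Y_{n + 1}) \in Q]\) turns the construction of a transition matrix into a mechanical computation. A Python sketch under those assumptions (the function name is ours):

```python
def one_step_matrix(g, states, y_values, y_probs):
    """Transition matrix p(i, j) = P[g(state_i, Y) = state_j] for a discrete
    driving variable Y with values y_values and probabilities y_probs."""
    idx = {s: n for n, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for s, i in idx.items():
        for y, py in zip(y_values, y_probs):
            P[i][idx[g(s, y)]] += py   # accumulate mass on the reached state
    return P
```

For example, with \(g(u, y) = (u + y) \bmod 2\) and \(Y\) taking values 0, 1 with probabilities 0.3, 0.7, each row of the resulting matrix is a probability distribution over the two states.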

The application of this proposition to the previous examples, carried out below, shows that the transition probabilities are invariant with \(n\). This case is important enough to warrant separate classification.

Definition

If \(P(X_{n + 1} \in Q|X_n = u)\) is invariant with \(n\), for all Borel sets \(Q\), all \(u \in E\), the Markov process \(X_N\) is said to be *homogeneous*.

As a matter of fact, this is the only case usually treated in elementary texts. In this regard, we note the following special case of the proposition above.

*Homogeneous Markov sequences*

If \(\{Y_n: 1 \le n\}\) is iid and \(g_{n + 1} = g\) for all \(n\), then the process is a homogeneous Markov process, and

\(P(X_{n + 1} \in Q|X_n = u) = P[g(u, Y_{n + 1}) \in Q]\), invariant with \(n\)

— □

*Remark*.

In the homogeneous case, the transition probabilities are invariant with \(n\). In this case, we write

\(P(X_{n + 1} = j|X_n = i) = p(i, j)\) or \(p_{ij}\) (invariant with \(n\))

These are called the *(one-step) transition probabilities*.

The transition probabilities may be arranged in a matrix **P** called the *transition probability matrix*, usually referred to as the *transition matrix*,

**P** = \([p(i, j)]\)

The element \(p(i, j)\) on row \(i\) and column \(j\) is the probability \(P(X_{n + 1} = j|X_n = i)\). Thus, the elements on the \(i\)th row constitute the conditional distribution for \(X_{n + 1}\), given \(X_n = i\). The transition matrix thus has the property that *each row sums to one*. Such a matrix is called a *stochastic matrix*. We return to the examples. From the propositions on transition probabilities, it is apparent that each is Markov. Since the function \(g\) is the same for all \(n\) and the driving random variables corresponding to the \(Y_i\) form an iid class, the sequences must be homogeneous. We may utilize part (b) of the propositions to obtain the one-step transition probabilities.
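The stochastic-matrix property is worth checking numerically whenever a transition matrix is constructed. A one-line sketch (the function name is ours):

```python
def is_stochastic(P, tol=1e-9):
    """A transition matrix must have nonnegative entries and unit row sums."""
    return all(min(row) >= 0 and abs(sum(row) - 1) < tol for row in P)
```

Any matrix produced by the constructions below should pass this check; a matrix with a row summing to anything other than one cannot be a transition matrix.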

Example \(\PageIndex{5}\) Random walk continued

\(g_{n + 1} (u, Y_{n + 1}) = u + Y_{n + 1}\), so that \(g_{n + 1}\) is invariant with \(n\). Since \(\{Y_n: 1 \le n\}\) is iid,

\(P(X_{n+1} = k|X_n = j) = P(j + Y = k) = P(Y = k - j) = p_{k - j}\) where \(p_k = P(Y = k)\)

Example \(\PageIndex{6}\) Branching process continued

\(g(j, Y_{n + 1}) = \text{min } \{M, \sum_{i = 1}^{j} Z_{in}\}\) and **E** = \(\{0, 1, \cdot\cdot\cdot, M\}\). If \(\{Z_{in}: 1 \le i \le M\}\) is iid, then

\(W_{jn} = \sum_{i = 1}^{j} Z_{in}\) ensures \(\{W_{jn}: 1 \le n\}\) is iid for each \(j \in\) **E**

We thus have

\(P(X_{n + 1} = k|X_n = j) = \begin{cases} P(W_{jn} = k) & \text{for } 0 \le k < M \\ P(W_{jn} \ge M) & \text{for } k = M \end{cases}\) \(0 \le j \le M\)

With the aid of moment generating functions, one may determine distributions for

\(W_1 = Z_1, W_2 = Z_1 + Z_2, \cdot\cdot\cdot, W_{M} = Z_1 + \cdot\cdot\cdot + Z_M\)

These calculations are implemented in an m-procedure called *branchp*. We simply need the distribution for the iid \(Z_{in}\).

```matlab
% file branchp.m
% Calculates transition matrix for a simple branching
% process with specified maximum population.
disp('Do not forget zero probabilities for missing values of Z')
PZ = input('Enter PROBABILITIES for individuals  ');
M  = input('Enter maximum allowable population  ');
mz = length(PZ) - 1;
EZ = dot(0:mz,PZ);
disp(['The average individual propagation is ',num2str(EZ),])
P = zeros(M+1,M+1);
Z = zeros(M,M*mz+1);
k = 0:M*mz;
a = min(M,k);
z = 1;
P(1,1) = 1;
for i = 1:M            % Operation similar to genD
    z = conv(PZ,z);
    Z(i,1:i*mz+1) = z;
    [t,p] = csort(a,Z(i,:));
    P(i+1,:) = p;
end
disp('The transition matrix is P')
disp('To study the evolution of the process, call for branchdbn')
```

```matlab
PZ = 0.01*[15 45 25 10 5];   % Probability distribution for individuals
branchp                      % Call for procedure
Do not forget zero probabilities for missing values of Z
Enter PROBABILITIES for individuals  PZ
Enter maximum allowable population  10
The average individual propagation is 1.45
The transition matrix is P
To study the evolution of the process, call for branchdbn
disp(P)                      % Optional display of generated P
  Columns 1 through 7
    1.0000         0         0         0         0         0         0
    0.1500    0.4500    0.2500    0.1000    0.0500         0         0
    0.0225    0.1350    0.2775    0.2550    0.1675    0.0950    0.0350
    0.0034    0.0304    0.1080    0.1991    0.2239    0.1879    0.1293
    0.0005    0.0061    0.0307    0.0864    0.1534    0.1910    0.1852
    0.0001    0.0011    0.0075    0.0284    0.0702    0.1227    0.1623
    0.0000    0.0002    0.0017    0.0079    0.0253    0.0579    0.1003
    0.0000    0.0000    0.0003    0.0020    0.0078    0.0222    0.0483
    0.0000    0.0000    0.0001    0.0005    0.0021    0.0074    0.0194
    0.0000    0.0000    0.0000    0.0001    0.0005    0.0022    0.0068
    0.0000    0.0000    0.0000    0.0000    0.0001    0.0006    0.0022
  Columns 8 through 11
         0         0         0         0
         0         0         0         0
    0.0100    0.0025         0         0
    0.0705    0.0315    0.0119    0.0043
    0.1481    0.0987    0.0559    0.0440
    0.1730    0.1545    0.1179    0.1625
    0.1381    0.1574    0.1528    0.3585
    0.0832    0.1179    0.1412    0.5771
    0.0406    0.0698    0.1010    0.7591
    0.0169    0.0345    0.0590    0.8799
    0.0062    0.0147    0.0294    0.9468
```

Note that \(p(0, 0) = 1\). If the population ever reaches zero, it is extinct and no more births can occur. Also, if the maximum population (10 in this case) is reached, there is a high probability of returning to that value and very small probability of becoming extinct (reaching zero state).
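The same convolution scheme used by *branchp* can be sketched in Python with plain lists (no toolboxes; the function name is ours):

```python
def branch_matrix(pz, M):
    """Transition matrix for the capped branching process
    X_{n+1} = min(M, Z_1 + ... + Z_{X_n}), where the Z_i are iid with
    P(Z = k) = pz[k]. States are 0, 1, ..., M."""
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    P[0][0] = 1.0                  # extinction (state 0) is absorbing
    dist = [1.0]                   # distribution of the empty sum W_0 = 0
    for j in range(1, M + 1):
        # convolve with pz to get the distribution of W_j = Z_1 + ... + Z_j
        new = [0.0] * (len(dist) + len(pz) - 1)
        for a, pa in enumerate(dist):
            for b, pb in enumerate(pz):
                new[a + b] += pa * pb
        dist = new
        for k, p in enumerate(dist):
            P[j][min(k, M)] += p   # cap the next generation at M
    return P
```

With `pz = [0.15, 0.45, 0.25, 0.10, 0.05]` and `M = 10`, the row for state 1 reproduces `pz` itself and the row for state 2 begins with \(0.15^2 = 0.0225\), matching the displayed output of *branchp*.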

Example \(\PageIndex{7}\) Inventory problem (continued)

In this case,

\(g(j, D_{n + 1}) = \begin{cases} \text{max } \{M - D_{n + 1}, 0\} & \text{for } 0 \le j < m \\ \text{max } \{j - D_{n + 1}, 0\} & \text{for } m \le j \le M \end{cases}\)

*Numerical example*

\(m = 1\) \(M = 3\) \(D_n\) is Poisson (1)

To simplify writing, use \(D\) for \(D_n\). Because of the invariance with \(n\), set

\(P(X_{n + 1} = k|X_n = j) = p(j, k) = P[g(j, D_{n + 1}) = k]\)

The various cases yield

\(g(0, D) = \text{max } \{3 - D, 0\}\)

\(g(0, D) = 0\) iff \(D \ge 3\) implies \(p(0, 0) = P(D \ge 3)\)

\(g(0, D) = 1\) iff \(D = 2\) implies \(p(0, 1) = P(D = 2)\)

\(g(0, D) = 2\) iff \(D = 1\) implies \(p(0, 2) = P(D = 1)\)

\(g(0, D) = 3\) iff \(D = 0\) implies \(p(0, 3) = P(D = 0)\)

\(g(1, D) = \text{max } \{1 - D, 0\}\)

\(g(1, D) = 0\) iff \(D \ge 1\) implies \(p(1, 0) = P(D \ge 1)\)

\(g(1, D) = 1\) iff \(D = 0\) implies \(p(1, 1) = P(D = 0)\)

\(g(1, D) = 2, 3\) is impossible

\(g(2, D) = \text{max } \{2 - D, 0\}\)

\(g(2, D) = 0\) iff \(D \ge 2\) implies \(p(2, 0) = P(D \ge 2)\)

\(g(2, D) = 1\) iff \(D = 1\) implies \(p(2, 1) = P(D = 1)\)

\(g(2, D) = 2\) iff \(D = 0\) implies \(p(2, 2) = P(D = 0)\)

\(g(2, D) = 3\) is impossible

\(g(3, D) = \text{max } \{3 - D, 0\} = g(0, D)\) so that \(p(3, k) = p(0, k)\)

The various probabilities for \(D\) may be obtained from a table (or may be calculated easily with cpoisson) to give the transition probability matrix

**P** = \(\begin{bmatrix} 0.0803 & 0.1839 & 0.3679 & 0.3679 \\ 0.6321 & 0.3679 & 0 & 0 \\ 0.2642 & 0.3679 & 0.3679 & 0 \\ 0.0803 & 0.1839 & 0.3679 & 0.3679 \end{bmatrix}\)
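The hand calculation can be cross-checked by computing each row directly from the Poisson pmf. A Python sketch (the helper names are ours; the demand distribution is truncated at a hypothetical cutoff `dmax`, which is harmless for Poisson (1)):

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    """P(D = k) for D Poisson with mean lam."""
    return exp(-lam) * lam**k / factorial(k)

def inventory_matrix(m, M, lam, dmax=60):
    """p(j, k) = P[g(j, D) = k] for the (m, M) policy with Poisson(lam) demand."""
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    for j in range(M + 1):
        top = M if j < m else j          # stock on hand after any reorder
        for d in range(dmax + 1):
            k = max(top - d, 0)          # next end-of-period stock
            P[j][k] += poisson_pmf(lam, d)
    return P
```

For \(m = 1\), \(M = 3\), \(\lambda = 1\), this reproduces the matrix above: \(p(0, 3) = P(D = 0) = e^{-1} \approx 0.3679\) and \(p(0, 0) = P(D \ge 3) \approx 0.0803\).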

The calculations are carried out “by hand” in this case, to exhibit the nature of the calculations. This is a standard problem in inventory theory, involving costs and rewards. An m-procedure *inventory1* has been written to implement the function \(g\).

```matlab
% file inventory1.m        % Version of 1/27/97
% Data for transition probability calculations
% for (m,M) inventory policy
M = input('Enter value M of maximum stock  ');
m = input('Enter value m of reorder point  ');
Y = input('Enter row vector of demand values  ');
PY = input('Enter demand probabilities  ');
states = 0:M;
ms = length(states);
my = length(Y);
% Calculations for determining P
[y,s] = meshgrid(Y,states);
T = max(0,M-y).*(s < m) + max(0,s-y).*(s >= m);
P = zeros(ms,ms);
for i = 1:ms
    [a,b] = meshgrid(T(i,:),states);
    P(i,:) = PY*(a==b)';
end
P
```

We consider the case \(M = 5\), with reorder point \(m = 3\), and demand Poisson (3). We approximate the Poisson distribution with values up to 20.

```matlab
inventory1
Enter value M of maximum stock  5              % Maximum stock
Enter value m of reorder point  3              % Reorder point
Enter row vector of demand values  0:20        % Truncated set of demand values
Enter demand probabilities  ipoisson(3,0:20)   % Demand probabilities
P =
    0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
    0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
    0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
    0.5768    0.2240    0.1494    0.0498         0         0
    0.3528    0.2240    0.2240    0.1494    0.0498         0
    0.1847    0.1680    0.2240    0.2240    0.1494    0.0498
```

Example \(\PageIndex{8}\) Remaining lifetime (continued)

\(g(0, Y) = Y - 1\), so that \(p(0, k) = P(Y - 1 = k) = P(Y = k + 1)\)

\(g(j, Y) = j - 1\) for \(j \ge 1\), so that \(p(j, k) = \delta_{j - 1, k}\) for \(j \ge 1\)

The resulting transition probability matrix is

**P** = \(\begin{bmatrix} p_1 & p_2 & p_3 & \cdot\cdot\cdot \\ 1 & 0 & 0 & \cdot\cdot\cdot \\ 0 & 1 & 0 & \cdot\cdot\cdot \\ \cdot\cdot\cdot & & & \cdot\cdot\cdot \\ \cdot\cdot\cdot \\ \cdot\cdot\cdot & & & \cdot\cdot\cdot \end{bmatrix}\)

The matrix is an infinite matrix, unless \(Y\) is simple. If the range of \(Y\) is \(\{1, 2, \cdot\cdot\cdot, M\}\), then the state space **E** is \(\{0, 1, \cdot\cdot\cdot, M - 1\}\).
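For a simple \(Y\) with range \(\{1, \cdot\cdot\cdot, M\}\), the finite matrix can be built directly. A Python sketch (the function name is ours):

```python
def lifetime_matrix(py):
    """Transition matrix for the remaining-lifetime chain.
    py[k-1] = P(Y = k) for lifetimes k = 1, ..., M; states are 0, ..., M-1."""
    M = len(py)
    P = [[0.0] * M for _ in range(M)]
    P[0] = list(py)            # p(0, k) = P(Y = k + 1): install a new unit
    for j in range(1, M):
        P[j][j - 1] = 1.0      # deterministic countdown: p(j, j-1) = 1
    return P
```

For instance, with lifetime probabilities \(p_1 = 0.2\), \(p_2 = 0.3\), \(p_3 = 0.5\), row 0 is \([0.2, 0.3, 0.5]\) and every other row has a single 1 on the subdiagonal, matching the matrix displayed above.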

Various properties of conditional independence, particularly __(CI9)__, __(CI10)__, and __(CI12)__, may be used to establish the following. The immediate future \(X_{n + 1}\) may be replaced by any *finite future* \(U_{n, n+p}\), and the present \(X_n\) may be replaced by any *extended present* \(U_{m,n}\). Some results of abstract measure theory show that the finite future \(U_{n, n+p}\) may be replaced by the *entire future* \(U^n\). Thus, we may assert

*Extended Markov property*

\(X_N\) is Markov iff

(M*) \(\{U^n, U_m\}\) ci\(|U_{m,n}\) \(\forall 0 \le m \le n\)

— □

**The Chapman-Kolmogorov equation and the transition matrix**

As a special case of the extended Markov property, we have

\(\{U^{n + k}, U_n\}\) ci\(|X_{n + k}\) for all \(n \ge 0\), \(k \ge 1\)

Setting \(g(U^{n + k}, X_{n + k}) = X_{n + k + m}\) and \(h(U_n, X_{n + k}) = X_n\) in __(CI9)__, we get

\(\{X_{n + k + m}, X_n\}\) ci\(|X_{n + k}\) for all \(n \ge 0\), \(k, m \ge 1\)

This is the *Chapman-Kolmogorov equation*, which plays a central role in the study of Markov sequences. For a discrete state space **E**, with

\(P(X_n = j|X_m = i) = p_{m, n} (i, j)\)

this equation takes the form

(\(CK'\)) \(p_{m, q} (i, k) = \sum_{j \in E} p_{m,n} (i, j) p_{n,q} (j, k)\) \(0 \le m < n < q\)

To see that this is so, consider

\(P(X_q = k|X_m = i) = E[I_{\{k\}} (X_q)|X_m = i] = E\{E[I_{\{k\}} (X_q)|X_n] |X_m = i\}\)

\(= \sum_{j} E[I_{\{k\}} (X_q)|X_n = j] p_{m, n} (i, j) = \sum_{j} p_{n, q} (j, k) p_{m, n} (i, j)\)

*Homogeneous case*

For this case, we may put (\(CK'\)) in a useful matrix form. The conditional probabilities \(p^m\) of the form

\(p^m (i, k) = P(X_{n + m} = k|X_n = i)\) invariant in \(n\)

are known as the *m-step transition probabilities*. The Chapman-Kolmogorov equation in this case becomes

(\(CK''\)) \(p^{m + n} (i, k) = \sum_{j \in E} p^m (i, j) p^n (j, k)\) \(\forall i, k \in\) **E**

In terms of the *m-step transition matrix* **P**\(^{(m)} = [p^m (i, k)]\), this set of sums is equivalent to the matrix product

(\(CK''\)) **P**\(^{(m + n)}\) = **P**\(^{(m)}\)**P**\(^{(n)}\)

Now

**P**\(^{(2)}\) = **P**\(^{(1)}\)**P**\(^{(1)}\) = **P**\(^{2}\), **P**\(^{(3)}\) = **P**\(^{(2)}\)**P**\(^{(1)}\) = **P**\(^{3}\), etc.

A simple inductive argument based on (\(CK''\)) establishes

*The product rule for transition matrices*

The *m*-step probability matrix **P**\(^{(m)}\) = **P**\(^{m}\), the \(m\)th power of the transition matrix **P**

— □
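The product rule is easy to verify numerically for a particular chain. A Python sketch using plain nested lists (the helper names are ours):

```python
def mat_mul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(P, n):
    """n-th power of P by repeated multiplication (identity for n = 0)."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = mat_mul(R, P)
    return R
```

For the inventory matrix, one can check element by element that `mat_pow(P, 5)` agrees with `mat_mul(mat_pow(P, 2), mat_pow(P, 3))`, which is (\(CK''\)) with \(m = 2\), \(n = 3\).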

Example \(\PageIndex{9}\) The inventory problem (continued)

For the inventory problem in __Example 16.2.7__, the three-step transition probability matrix **P**\(^{(3)}\) is obtained by raising **P** to the third power to get

**P**\(^{(3)}\) = **P**\(^{3}\) = \(\begin{bmatrix} 0.2930 & 0.2917 & 0.2629 & 0.1524 \\ 0.2619 & 0.2730 & 0.2753 & 0.1898 \\ 0.2993 & 0.2854 & 0.2504 & 0.1649 \\ 0.2930 & 0.2917 & 0.2629 & 0.1524 \end{bmatrix}\)

— □

We consider next the state probabilities for the various stages. That is, we examine the distributions for the various \(X_n\), letting \(p_k (n) = P(X_n = k)\) for each \(k \in\) **E**. To simplify writing, we consider a finite state space **E** = \(\{1, \cdot\cdot\cdot, M\}\). We use \(\pi(n)\) for the *row* matrix

\(\pi(n) = [p_1 (n) p_2 (n) \cdot\cdot\cdot p_M (n)]\)

As a consequence of the product rule, we have

*Probability distributions for any period*

For a homogeneous Markov sequence, the distribution for any \(X_n\) is determined by the initial distribution (i.e., for \(X_0\)) and the transition probability matrix **P**

VERIFICATION

Suppose the homogeneous sequence \(X_N\) has finite state-space **E** = \(\{1, 2, \cdot\cdot\cdot, M\}\). For any \(n \ge 0\), let \(p_j (n) = P(X_n = j)\) for each \(j \in\) **E**. Put

\(\pi(n) = [p_1 (n) p_2 (n) \cdot\cdot\cdot p_M (n)]\)

Then

\(\pi(0) =\) the initial probability distribution

\(\pi(1) = \pi(0)\)**P**

......

\(\pi(n) = \pi(n - 1)\)**P** = \(\pi(0)\) **P**\(^{(n)} = \pi (0)\) **P**\(^{n}\) = the \(n\)th-period distribution

The last expression is an immediate consequence of the product rule.

Example \(\PageIndex{10}\) Inventory problem (continued)

In the inventory system for Examples __3__, __7__ and __9__, suppose the initial stock is \(M = 3\). This means that

\(\pi (0) =\) [0 0 0 1]

The product of \(\pi (0)\) and \(P^3\) is the fourth row of \(P^3\), so that the distribution for \(X_3\) is

\(\pi(3) = [p_0 (3)\ \ p_1 (3)\ \ p_2 (3)\ \ p_3 (3)] = [0.2930\ \ 0.2917\ \ 0.2629\ \ 0.1524]\)

Thus, given a stock of \(M = 3\) at startup, the probability is 0.2917 that \(X_3 = 1\). This is the probability of one unit in stock at the end of period number three.
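The recursion \(\pi(n) = \pi(n - 1)\)**P** can be coded directly, without forming matrix powers. A Python sketch (the function name is ours):

```python
def distribution(pi0, P, n):
    """n-th period state distribution pi(n) = pi(0) P^n,
    computed as n successive row-vector-times-matrix products."""
    pi = list(pi0)
    for _ in range(n):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi
```

Starting from \(\pi(0) = [0\ 0\ 0\ 1]\), one application of **P** picks out the fourth row of **P**, so \(\pi(1)\) begins with \(p(3, 0) = 0.0803\); three applications reproduce \(\pi(3)\) above to within the rounding of **P**.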

*Remarks*

- A similar treatment shows that for the nonhomogeneous case the distribution at any stage is determined by the initial distribution and the class of one-step transition matrices. In the nonhomogeneous case, transition probabilities \(p_{n, n+1} (i, j)\) depend on the stage \(n\).
- A discrete-parameter Markov process, or Markov sequence, is characterized by the fact that each member \(X_{n + 1}\) of the sequence is conditioned by the value of the previous member of the sequence. This one-step stochastic linkage has made it customary to refer to a Markov sequence as a *Markov chain*. In the discrete-parameter Markov case, we use the terms *process*, *sequence*, or *chain* interchangeably.

**The transition diagram and the transition matrix**

The previous examples suggest that a Markov chain is a dynamic system, evolving in time. On the other hand, the stochastic behavior of a homogeneous chain is determined completely by the probability distribution for the initial state and the one-step transition probabilities \(p(i, j)\) as presented in the transition matrix **P**. The time-invariant transition matrix may convey a static impression of the system. However, a simple geometric representation, known as the *transition diagram*, makes it possible to link the unchanging structure, represented by the transition matrix, with the dynamics of the evolving system behavior.

Definition

A *transition diagram* for a homogeneous Markov chain is a linear graph with one node for each state and one directed edge for each possible one-step transition between states (nodes).

We ignore, as essentially impossible, any transition which has zero transition probability. Thus, the edges on the diagram correspond to positive one-step transition probabilities between the nodes connected. Since for some pair \((i, j)\) of states, we may have \(p(i, j) > 0\) but \(p(j, i) = 0\) we may have a connecting edge between two nodes in one direction, but none in the other. *The system can be viewed as an object jumping from state to state (node to node) at the successive transition times*. As we follow the *trajectory* of this object, we achieve a sense of the evolution of the system.

Example \(\PageIndex{11}\) Transition diagram for inventory example

Consider, again, the transition matrix **P** for the inventory problem (rounded to three decimals).

**P** = \(\begin{bmatrix} 0.080 & 0.184 & 0.368 & 0.368 \\ 0.632 & 0.368 & 0 & 0 \\ 0.264 & 0.368 & 0.368 & 0 \\ 0.080 & 0.184 & 0.368 & 0.368 \end{bmatrix}\)

Figure 16.2.1 shows the transition diagram for this system. At each node corresponding to one of the possible states, the *state value* is shown. In this example, the state value is one less than the state number. For convenience, we refer to the node for state \(k + 1\), which has state value \(k\), as node \(k\). If the state value is zero, there are four possibilities: remain in that condition with probability 0.080; move to node 1 with probability 0.184; move to node 2 with probability 0.368; or move to node 3 with probability 0.368. These are represented by the “self loop” and a directed edge to each of the nodes representing states. Each of these directed edges is marked with the (conditional) transition probability. On the other hand, the probabilities of reaching state value 0 from each of the others are represented by directed edges into the node for state value 0. A similar situation holds for each other node. Note that the probabilities on edges leaving a node (including a self loop) must total to one, since these correspond to the transition probability distribution from that node. There is no directed edge from node 2 to node 3, since the probability of a transition from value 2 to value 3 is zero. Similarly, there is no directed edge from node 1 to either node 2 or node 3.

**Figure 16.2.1**. Transition diagram for the inventory system of Example 16.2.11

There is a one-one relation between the transition diagram and the transition matrix **P**. The transition diagram not only aids in visualizing the dynamic evolution of a chain, but also displays certain structural properties. Often a chain may be decomposed usefully into subchains. Questions of *communication* and *recurrence* may be answered in terms of the transition diagram. Some subsets of states are essentially *closed*, in the sense that if the system arrives at any one state in the subset it can never reach a state outside the subset. Periodicities can sometimes be seen, although it is usually easier to use the diagram to show that periodicities cannot occur.

**Classification of states**

Many important characteristics of a Markov chain can be studied by considering the number of visits to an arbitrarily chosen, but fixed, state.

Definition

For a fixed state \(j\), let

\(T_1\) = the time (stage number) of the first visit to state \(j\) (after the initial period).

\(F_{k} (i, j) = P(T_1 = k|X_0 = i)\), the probability of reaching state \(j\) for the first time from state \(i\) in \(k\) steps.

\(F(i, j) = P(T_1 < \infty|X_0 = i) = \sum_{k = 1}^{\infty} F_k (i, j)\), the probability of ever reaching state \(j\) from state \(i\).

A number of important theorems may be developed for \(F_k\) and \(F\), although we do not develop them in this treatment. We simply quote them as needed. An important classification of states is made in terms of \(F\).

Definition

State \(j\) is said to be *transient* iff \(F(j, j) < 1\),

and is said to be *recurrent* iff \(F(j, j) = 1\).

*Remark*. If the state space **E** is infinite, recurrent states fall into one of two subclasses: positive or null. Only the positive case is common in the infinite case, and that is the only possible case for systems with finite state space.

Sometimes there is a regularity in the structure of a Markov sequence that results in periodicities.

Definition

For state \(j\), let

\(\delta = \text{greatest common divisor of } \{n: p^n (j, j) > 0\}\)

If \(\delta > 1\), then state \(j\) is *periodic* with *period *\(\delta\); otherwise, state \(j\) is *aperiodic*.

Usually if there are any self loops in the transition diagram (positive probabilities on the diagonal of the transition matrix **P**) the system is aperiodic. *Unless stated otherwise, we limit consideration to the aperiodic case.*
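The period of a state can be computed directly from the definition by scanning powers of **P** for positive return probabilities. A Python sketch (scanning only up to a hypothetical bound `n_max`, which suffices for small chains; helper names are ours):

```python
from math import gcd
from functools import reduce

def mat_mul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def period(P, j, n_max=20):
    """gcd of {n <= n_max : p^n(j, j) > 0}; state j is aperiodic iff this is 1."""
    ns, Pn = [], P
    for n in range(1, n_max + 1):
        if Pn[j][j] > 1e-12:   # positive n-step return probability
            ns.append(n)
        Pn = mat_mul(Pn, P)
    return reduce(gcd, ns) if ns else 0
```

The two-state chain that deterministically alternates between its states has period 2; adding a self loop (a positive diagonal entry) makes it aperiodic, illustrating the remark above.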

Definition

A state \(j\) is called *ergodic* iff it is positive, recurrent, and aperiodic.

It is called *absorbing* iff \(p(j, j) = 1\).

A recurrent state is one to which the system eventually returns, hence is visited an infinity of times. If it is absorbing, then once it is reached it returns each step (i.e., never leaves).

An arrow notation is used to indicate important relations between states.

Definition

We say

State \(i\) reaches \(j\), denoted \(i \to j\), iff \(p^n (i, j) > 0\) for some \(n > 0\).

States \(i\) and \(j\) *communicate*, denoted \(i \leftrightarrow j\), iff both \(i\) reaches \(j\) and \(j\) reaches \(i\).

By including \(j\) reaches \(j\) in all cases, the relation \(\leftrightarrow\) is an equivalence relation (i.e., is reflexive, symmetric, and transitive). With this relationship, we can define important classes.

Definition

A class of states is *communicating* iff every state in the class may be reached from every other state in the class (i.e. every pair communicates). A class is *closed* if no state outside the class can be reached from within the class.

The following important conditions are intuitive and may be established rigorously:

\(i \leftrightarrow j\) implies \(i\) is recurrent iff \(j\) is recurrent

\(i \to j\) and \(i\) recurrent implies \(i \leftrightarrow j\)

\(i \to j\) and \(i\) recurrent implies \(j\) recurrent

**Limit theorems for finite state space sequences**

The following propositions may be established for Markov sequences with finite state space:

- There are no null states, and not all states are transient.
- If a class of states is irreducible (i.e., has no proper closed subsets), then
  - all states are recurrent;
  - all states are aperiodic *or* all are periodic with the same period.
- If a class \(C\) is closed and irreducible, and \(i\) is a transient state (necessarily not in \(C\)), then \(F(i, j) = F(i, k)\) for all \(j, k \in C\)

*A limit theorem*

If the states in a Markov chain are ergodic (i.e., positive, recurrent, aperiodic), then

\(\text{lim}_{n} p^n (i, j) = \pi_j > 0\) \(\sum_{j = 1}^{M} \pi_j = 1\) \(\pi_j = \sum_{i = 1}^{M} \pi_i p(i, j)\)

If, as above, we let

\(\pi (n) = [p_1 (n)\ p_2(n) \cdot\cdot\cdot p_M (n)]\) so that \(\pi (n) = \pi (0)\) **P**\(^{n}\)

the result above may be written

\(\pi (n) = \pi (0)\) **P**\(^{n}\) \(\to\) \(\pi (0)\) **P**\(_{0}\)

where

**P**\(_{0}\) = \(\begin{bmatrix} \pi_1 & \pi_2 & \cdot\cdot\cdot & \pi_M \\ \pi_1 & \pi_2 & \cdot\cdot\cdot & \pi_M \\ \cdot\cdot\cdot & \cdot\cdot\cdot & \cdot\cdot\cdot & \cdot\cdot\cdot \\ \pi_1 & \pi_2 & \cdot\cdot\cdot & \pi_M \end{bmatrix}\)

Each row of **P**\(_0 = \text{lim}_n\) **P**\(^n\) is the long run distribution \(\pi = \text{lim}_n \pi (n)\).

Definition

A distribution is **stationary** iff

\(\pi = \pi\)**P**

The result above may be stated by saying that the long-run distribution is the stationary distribution. A generating function analysis shows that the convergence is exponential in the following sense:

|**P**\(^n\) - **P**\(_0\)| \(\le\) \(\alpha |\lambda|^n\)

where \(|\lambda|\) is the largest absolute value of the eigenvalues for **P** other than \(\lambda = 1\).
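The convergence of the rows of **P**\(^n\) to the stationary distribution can be demonstrated in any language; here is a small Python sketch (the 2-state matrix is hypothetical, chosen so the exact solution of \(\pi = \pi\)**P** is \((5/6, 1/6)\)):

```python
def mat_mult(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def stationary(P, n=200):
    """Approximate the stationary distribution by taking a row of a high
    power of P; for an ergodic chain every row of P^n converges to pi."""
    Pn = P
    for _ in range(n):
        Pn = mat_mult(Pn, P)
    return Pn[0]

# Hypothetical 2-state chain; solving pi = pi P exactly gives (5/6, 1/6)
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
print(pi)   # ≈ [0.8333, 0.1667]
```

The second eigenvalue of this matrix is \(0.4\), so by the bound above the rows agree to machine precision long before \(n = 200\).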

Example \(\PageIndex{12}\) The long run distribution for the inventory example

We use MATLAB to check the eigenvalues of the transition matrix **P** and to obtain increasing powers of **P**. The convergence process is readily evident.

```
P  = 0.0803   0.1839   0.3679   0.3679
     0.6321   0.3679        0        0
     0.2642   0.3679   0.3679        0
     0.0803   0.1839   0.3679   0.3679
E = abs(eig(P))
E  = 1.0000
     0.2602
     0.2602
     0.0000
format long
N = E(2).^[4 8 12]
N  = 0.00458242348096   0.00002099860496   0.00000009622450
>> P4 = P^4
P4 = 0.28958568915950   0.28593792666752   0.26059678211310   0.16387960205989
     0.28156644866011   0.28479107531968   0.26746979455342   0.16617268146679
     0.28385952806702   0.28250048636032   0.26288737107246   0.17075261450021
     0.28958568915950   0.28593792666752   0.26059678211310   0.16387960205989
>> P8 = P^8
P8 = 0.28580046500309   0.28471421248816   0.26315895715219   0.16632636535655
     0.28577030590344   0.28469190218618   0.26316681807503   0.16637097383535
     0.28581491438224   0.28471028095839   0.26314057837998   0.16633422627939
     0.28580046500309   0.28471421248816   0.26315895715219   0.16632636535655
>> P12 = P^12
P12 = 0.28579560683438  0.28470680858266  0.26315641543927  0.16634116914369
      0.28579574073314  0.28470680714781  0.26315628010643  0.16634117201261
      0.28579574360207  0.28470687626748  0.26315634631961  0.16634103381085
      0.28579560683438  0.28470680858266  0.26315641543927  0.16634116914369
>> error4 = max(max(abs(P^16 - P4)))     % Use P^16 for P_0
error4 = 0.00441148012334                % Compare with 0.0045824...
>> error8 = max(max(abs(P^16 - P8)))
error8 = 2.984007206519035e-05           % Compare with 0.00002099
>> error12 = max(max(abs(P^16 - P12)))
error12 = 1.005660185959822e-07          % Compare with 0.00000009622450
```

The convergence process is clear, and the errors agree closely with the predicted values. We have not determined the factor \(\alpha\), and we have approximated the long-run matrix \(P_0\) by \(P^{16}\). This exhibits a *practical criterion* for sufficient convergence: if the rows of \(P^n\) agree within acceptable precision, then \(n\) is sufficiently large. For example, if we consider agreement to four decimal places sufficient, then

```
P10 = P^10
P10 = 0.2858    0.2847    0.2632    0.1663
      0.2858    0.2847    0.2632    0.1663
      0.2858    0.2847    0.2632    0.1663
      0.2858    0.2847    0.2632    0.1663
```

shows that \(n = 10\) is quite sufficient.

## Simulation of finite homogeneous Markov sequences

In the section __"The Quantile Function"__, the quantile function is used with a random number generator to obtain a simple random sample from a given population distribution. In this section, we adapt that procedure to the problem of simulating a trajectory for a homogeneous Markov sequence with finite state space.

**Elements and terminology**

**States and state numbers**. We suppose there are *m* states, usually carrying a numerical value. For purposes of analysis and simulation, we number the states 1 through *m*. Computation is carried out with state numbers; if desired, these can be translated into the actual state values after computation is completed.

**Stages, transitions, period numbers, trajectories and time**. We use the terms stage and period interchangeably. It is customary to number the periods or stages beginning with zero for the initial stage. The *period number* is the number of transitions to reach that stage from the initial one. Zero transitions are required to reach the original stage (period zero), one transition to reach the next (period one), two transitions to reach period two, etc. We call the sequence of states encountered as the system evolves a *trajectory* or a *chain*. The terms "sample path" or "realization of the process" are also used in the literature. Now if the periods are of equal time length, the number of transitions is a measure of the elapsed time since the chain originated. We find it convenient to refer to *time* in this fashion. At time \(k\) the chain has reached the period numbered \(k\). The trajectory is \(k + 1\) stages long, so time or period number is one less than the number of stages.

**The transition matrix and the transition distributions**. For each state, there is a conditional transition probability distribution for the next state. These are arranged in a *transition matrix*. The \(i\)th row consists of the transition distribution for selecting the next-period state when the current state number is \(i\). The transition matrix \(P\) thus has nonnegative elements, with each row summing to one. Such a matrix is known as a *stochastic matrix*.
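The stochastic-matrix conditions (nonnegative entries, each row summing to one) are easy to verify programmatically. A minimal Python sketch, with hypothetical example matrices:

```python
def is_stochastic(P, tol=1e-8):
    """A stochastic matrix has nonnegative entries and each row
    summing to one (within a tolerance for floating point)."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )

P = [[0.2, 0.5, 0.3],
     [0.0, 1.0, 0.0],
     [0.6, 0.0, 0.4]]
print(is_stochastic(P))                          # → True
print(is_stochastic([[0.7, 0.7], [0.5, 0.5]]))   # → False: first row sums to 1.4
```

A check of this kind is a useful guard before any simulation run, since a row that fails to sum to one silently distorts every transition distribution drawn from it.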

**The fundamental simulation strategy**

- A fundamental strategy for sampling from a given population distribution is developed in the unit on the Quantile Function. If \(Q\) is the quantile function for the population distribution and \(U\) is a random variable distributed uniformly on the interval [0, 1], then \(X = Q(U)\) has the desired distribution. To obtain a sample from the uniform distribution use a random number generator. This sample is “transformed” by the quantile function into a sample from the desired distribution.
- For a homogeneous chain, if we are in state \(k\), we have a distribution for selecting the next state. If we use the quantile function for that distribution and a number produced by a random number generator, we make a selection of the next state based on that distribution. A succession of these choices, with the selection of the next state made in each case from the distribution for the current state, constitutes a valid simulation of a trajectory.
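The strategy above can be sketched in a few lines. The document's procedures are written in MATLAB; the following is a language-neutral Python illustration with a hypothetical 2-state matrix. The quantile function of a row distribution is implemented by a binary search on the cumulative sums:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def next_state(row, u):
    """Quantile-function selection: with cumulative row sums
    c_1 <= c_2 <= ... <= c_m, return the smallest j with u <= c_j.
    The min() guards against cumulative sums falling just short of 1."""
    cum = list(accumulate(row))
    return min(bisect_left(cum, u), len(row) - 1)

def trajectory(P, start, n, rng=random.random):
    """Simulate n transitions of a homogeneous chain from state `start`,
    drawing each next state from the current state's transition row."""
    chain = [start]
    for _ in range(n):
        chain.append(next_state(P[chain[-1]], rng()))
    return chain

# Hypothetical 2-state chain (states numbered 0 and 1)
P = [[0.1, 0.9],
     [0.6, 0.4]]
print(trajectory(P, 0, 10))
```

Each call to `next_state` is exactly the "transform a uniform sample by the quantile function" step, applied to the transition distribution of the current state rather than to a fixed population distribution.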

**Arrival times and recurrence times**

The basic simulation produces one or more trajectories of a specified length. Sometimes we are interested in continuing until first arrival at (or visit to) a specific target state or any one of a set of target states. The time (in transitions) to reach a target state is one less than the number of stages in the trajectory which begins with the initial state and ends with the target state reached.

- If the initial state is not in the target set, we speak of the *arrival time*.
- If the initial state is in the target set, the arrival time would be zero. In this case, we do not stop at zero but continue until the next visit to a target state (possibly the same as the initial state). We call the number of transitions in this case the *recurrence time*.
- In some instances, it may be desirable to know the time to complete visits to a prescribed number of the target states. Again there is a choice of treatment in the case the initial state is in the target set.

**Data files**

For use of MATLAB in simulation, we find it convenient to organize the appropriate data in an m-file.

- In every case, we need the transition matrix \(P\). Its size indicates the number of states (say by the length of any row or column).
- If the states are to have values other than the state numbers, these may be included in the data file, although they may be added later, in response to a prompt.
- If long trajectories are to be produced, it may be desirable to determine the fraction of times each state is realized. A comparison with the long-run probabilities for the chain may be of interest. In this case, the data file may contain the long-run probability distribution. Usually, this is obtained by taking one row of a sufficiently large power of the transition matrix. This operation may be performed after the data file is called for but before the simulation procedure begins.

An example data file used to illustrate the various procedures is shown below. These data were generated artificially and have no obvious interpretation in terms of a specific system to be modeled. However, they are sufficiently complex to provide nontrivial illustrations of the simulation procedures.

```
% file markovp1.m
% Artificial data for a Markov chain, used to
% illustrate the operation of the simulation procedures.
P = [0.050 0.011 0.155 0.155 0.213 0.087 0.119 0.190 0.008 0.012
     0.103 0.131 0.002 0.075 0.013 0.081 0.134 0.115 0.181 0.165
     0.103 0.018 0.128 0.081 0.137 0.180 0.149 0.051 0.009 0.144
     0.051 0.098 0.118 0.154 0.057 0.039 0.153 0.112 0.117 0.101
     0.016 0.143 0.200 0.062 0.099 0.175 0.108 0.054 0.062 0.081
     0.029 0.085 0.156 0.158 0.011 0.156 0.088 0.090 0.055 0.172
     0.110 0.059 0.020 0.212 0.016 0.113 0.086 0.062 0.204 0.118
     0.084 0.171 0.009 0.138 0.140 0.150 0.023 0.003 0.125 0.157
     0.105 0.123 0.121 0.167 0.149 0.040 0.051 0.059 0.086 0.099
     0.192 0.093 0.191 0.061 0.094 0.123 0.106 0.065 0.040 0.035];
states = 10:3:37;
PI = [0.0849 0.0905 0.1125 0.1268 0.0883 0.1141 ...
      0.1049 0.0806 0.0881 0.1093];   % Long-run distribution
```

The largest absolute value of the eigenvalues (other than one) is 0.1716. Since \(0.1716^{16} \approx 5.6 \cdot 10^{-13}\), we take any row of \(P^{16}\) as the long-run probabilities. These are included in the matrix PI in the m-file, above. The examples for the various procedures below use this set of artificial data, since the purpose is to illustrate the operation of the procedures.

**The setup and the generating m-procedures**

The m-procedure *chainset* sets up for simulation of Markov chains. It prompts for input of the transition matrix *P*, the states (if different from the state numbers), the long-run distribution (if available), and the set of target states if it is desired to obtain arrival or recurrence times. The procedure determines the number of states from the size of *P* and calculates the information needed for the quantile function. It then prompts for a call for one of the generating procedures.

The m-procedure *mchain*, as do the other generating procedures below, assumes chainset has been run, so that commonly used data are available in appropriate form. The procedure prompts for the number of stages (length of the trajectory to be formed) and for the initial state. When the trajectory is produced, the various states in the trajectory and the fraction or relative frequency of each is displayed. If the long-run distribution has been supplied by chainset, this distribution is included for comparison. In the examples below, we reset the random number generator (set the “seed” to zero) for purposes of comparison. However, in practice, it may be desirable to make several runs without resetting the seed, to allow greater effective “randomness.”

Example \(\PageIndex{13}\)

```
markovp1                               % Call for data
chainset                               % Call for setup procedure
Enter the transition matrix  P
Enter the states if not 1:ms  states   % Enter the states
States are
     1    10
     2    13
     3    16
     4    19
     5    22
     6    25
     7    28
     8    31
     9    34
    10    37
Enter the long-run probabilities  PI   % Enter the long-run distribution
Enter the set of target states  [16 22 25]   % Not used with mchain
Call for appropriate chain generating procedure
rand('seed',0)
mchain                                 % Call for generating procedure
Enter the number n of stages  10000    % Note the trajectory length
Enter the initial state  16
     State      Frac        P0         % Statistics on the trajectory
   10.0000    0.0812    0.0849
   13.0000    0.0952    0.0905
   16.0000    0.1106    0.1125
   19.0000    0.1226    0.1268
   22.0000    0.0880    0.0883
   25.0000    0.1180    0.1141
   28.0000    0.1034    0.1049
   31.0000    0.0814    0.0806
   34.0000    0.0849    0.0881
   37.0000    0.1147    0.1093
To view the first part of the trajectory of states, call for TR
disp(TR')
     0     1     2     3     4     5     6     7     8     9    10
    16    16    10    28    34    37    16    25    37    10    13
```

The fact that the fractions or *relative frequencies* approximate the long-run probabilities is an expression of a fundamental limit property of probability theory. This limit property, which requires somewhat sophisticated technique to establish, justifies a relative frequency interpretation of probability.

The procedure *arrival* assumes the setup provided by chainset, including a set \(E\) of target states. The procedure prompts for the number *r* of repetitions and the initial state. Then it produces \(r\) successive trajectories, each starting with the prescribed initial state and ending on one of the target states. The arrival times vary from one run to the next. Various statistics are computed and displayed or made available. In the single-run case (\(r = 1\)), the trajectory may be displayed. An auxiliary procedure *plotdbn* may be used in the multirun case to plot the distribution of arrival times.

Example \(\PageIndex{14}\) Arrival time to a target set of states

```
rand('seed',0)
arrival                                % Assumes chainset has been run, as above
Enter the number of repetitions  1     % Single run case
The target state set is:
    16    22    25
Enter the initial state  34            % Specified initial state
The arrival time is 6                  % Data on trajectory
The state reached is 16
To view the trajectory of states, call for TR
disp(TR')                              % Optional call to view trajectory
     0     1     2     3     4     5     6
    34    13    10    28    34    37    16
rand('seed',0)
arrival
Enter the number of repetitions  1000  % Call for 1000 repetitions
The target state set is:
    16    22    25
Enter the initial state  34            % Specified initial state
The result of 1000 repetitions is:     % Run data (see optional calls below)
Term state  Rel Freq   Av time
   16.0000    0.3310    3.3021
   22.0000    0.3840    3.2448
   25.0000    0.2850    4.3895
The average arrival time is 3.59
The standard deviation is 3.207
The minimum arrival time is 1
The maximum arrival time is 23
To view the distribution of arrival times, call for dbn
To plot the arrival time distribution, call for plotdbn
plotdbn                                % See Figure 16.2.2
```

**Figure 16.2.2**. Time distribution for Example 16.2.14

It would be difficult to establish analytical estimates of arrival times. The simulation procedure gives a reasonable “feel” for these times and how they vary.

The procedure *recurrence* is similar to the procedure arrival. If the initial state is not in the target set, it behaves as does the procedure arrival and stops on the first visit to the target set. However, if the initial state is in the target set, the procedures differ. The procedure arrival stops with zero transitions, since it senses that it has “arrived.” We are usually interested in having at least one transition, back to the same state or to another state in the target set. We call these times *recurrence times*.

Example \(\PageIndex{15}\)

```
rand('seed',0)
recurrence
Enter the number of repititions  1
The target state set is:
    16    22    25
Enter the initial state  22
```

**Figure 16.2.3**. Transition time distribution for Example 16.2.15

```
The recurrence time is 1
The state reached is 16
To view the trajectory of state numbers, call for TR
disp(TR')
     0     1
    22    16
recurrence
Enter the number of repititions  1000
The target state set is:
    16    22    25
Enter the initial state  25
The result of 1000 repetitions is:
Term state  Rel Freq   Av time
   16.0000    0.3680    2.8723
   22.0000    0.2120    4.6745
   25.0000    0.4200    3.1690
The average recurrence time is 3.379
The standard deviation is 3.0902
The minimum recurrence time is 1
The maximum recurrence time is 20
To view the distribution of recurrence times, call for dbn
To plot the recurrence time distribution, call for plotdbn   % See Figure 16.2.3
```

The procedure *kvis* stops when a designated number \(k\) of states are visited. If \(k\) is greater than the number of target states, or if no \(k\) is designated, the procedure stops when all have been visited. For \(k = 1\), the behavior is the same as arrival. However, that case is better handled by the procedure arrival, which provides more statistics on the results.

Example \(\PageIndex{16}\)

```
rand('seed',0)
kvis                     % Assumes chainset has been run
Enter the number of repetitions  1
The target state set is:
    16    22    25
Enter the number of target states to visit  2
Enter the initial state  34
The time for completion is 7
To view the trajectory of states, call for TR
disp(TR')
     0     1     2     3     4     5     6     7
    34    13    10    28    34    37    16    25
rand('seed',0)
kvis
Enter the number of repetitions  100
The target state set is:
    16    22    25
Enter the number of target states to visit   % Default -- visit all three
Enter the initial state  31
The average completion time is 17.57
The standard deviation is 8.783
The minimum completion time is 5
The maximum completion time is 42
To view a detailed count, call for D.
The first column shows the various completion times;
the second column shows the numbers of trials yielding those times
```
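The stopping rule used by *kvis* (run until \(k\) distinct target states have been visited) can be sketched in a few lines of Python. This is a hedged illustration of the logic only, with a hypothetical 2-state matrix, not a reimplementation of the MATLAB procedure:

```python
import random

def sample_next(row, u):
    """Select the next state from transition row using uniform draw u."""
    cum = 0.0
    for j, p in enumerate(row):
        cum += p
        if u <= cum:
            return j
    return len(row) - 1   # guard against floating-point rounding

def time_to_visit_k(P, start, targets, k=None, rng=random.random):
    """Number of transitions until k distinct target states have been
    visited (all of them if k is None).  The initial state does not
    count, mirroring the recurrence-time convention."""
    if k is None:
        k = len(targets)
    state, t, visited = start, 0, set()
    while len(visited) < k:
        state = sample_next(P[state], rng())
        t += 1
        if state in targets:
            visited.add(state)
    return t

random.seed(0)
P = [[0.1, 0.9],
     [0.6, 0.4]]
print(time_to_visit_k(P, 0, {0, 1}))
```

With `k=1` this reduces to the arrival-time rule, consistent with the remark above that the single-state case is better handled by the procedure arrival.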

The first goal of this somewhat sketchy introduction to Markov processes is to provide a general setting which gives insight into the essential character and structure of such systems. The important case of homogeneous chains is introduced in such a way that their algebraic structure appears as a logical consequence of the Markov property. The general theory is used to obtain some tools for formulating homogeneous chains in practical cases. Some MATLAB tools for studying their behavior are applied to an artificial example, which demonstrates their general usefulness in studying many practical, applied problems.