# 11: Markov Chains


Modern probability theory studies chance processes for which knowledge of previous outcomes influences predictions for future experiments. In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. For example, this should be the case in predicting a student's grades on a sequence of exams in a course. But allowing this much generality would make it very difficult to prove general results. In 1907, A. A. Markov began the study of an important new type of chance process. In this process, the outcome of a given experiment can affect the outcome of the next experiment, but outcomes further in the past have no additional influence. This type of process is called a Markov chain.
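The defining property above can be sketched in a few lines of code. This is a minimal illustration with a hypothetical two-state transition matrix (the probabilities are invented for the example): the next state is drawn using only the current state, so no earlier history is consulted.

```python
import random

# Hypothetical transition matrix for a two-state chain (states 0 and 1);
# P[i][j] is the probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state):
    """Advance one step: the next state depends only on the current state."""
    return 0 if random.random() < P[state][0] else 1

def simulate(start, n, seed=0):
    """Run the chain for n steps and return the visited states."""
    random.seed(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

print(simulate(0, 10))
```

Note that `step` receives only the current state as input; that restriction is exactly the Markov property.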

- 11.1: Introduction
- Most of our study of probability has dealt with independent trials processes. These processes are the basis of classical probability theory and much of statistics. We have discussed two of the principal theorems for these processes: the Law of Large Numbers and the Central Limit Theorem.

- 11.2: Absorbing Markov Chains
- The subject of Markov chains is best studied by considering special types of Markov chains.

- 11.3: Ergodic Markov Chains
- A second important kind of Markov chain we shall study in detail is an ergodic Markov chain.

- 11.5: Mean First Passage Time for Ergodic Chains
- In this section we consider two closely related descriptive quantities of interest for ergodic chains: the mean time to return to a state and the mean time to go from one state to another state.
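For an ergodic chain, the mean time to return to a state has a simple characterization: it is the reciprocal of that state's entry in the stationary (fixed) probability vector. The sketch below, using a hypothetical 2×2 transition matrix, approximates the stationary vector by power iteration and then reports the mean recurrence times.

```python
# Hypothetical ergodic two-state chain; rows give transition probabilities.
P = [[0.6, 0.4],
     [0.3, 0.7]]

def stationary(P, iters=1000):
    """Approximate the fixed probability vector w with w P = w by
    repeatedly multiplying an initial distribution by P."""
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(w[i] * P[i][j] for i in range(n)) for j in range(n)]
    return w

w = stationary(P)
# Mean recurrence time for state i is 1 / w[i].
mean_return = [1.0 / wi for wi in w]
print(w, mean_return)
```

For this matrix the fixed vector is (3/7, 4/7), so the mean recurrence times are 7/3 and 7/4 steps respectively.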

*Thumbnail: A diagram representing a two-state Markov process, with the states labeled E and A. Each number represents the probability of the Markov process changing from one state to another state, with the direction indicated by the arrow. If the Markov process is in state A, then the probability it changes to state E is 0.4, while the probability it remains in state A is 0.6. (CC BY-SA 3.0; Joxemai4 via Wikipedia).*
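The numbers in the thumbnail can be arranged as a row of a transition matrix, and powers of that matrix give multi-step transition probabilities. The caption only specifies the row for state A (0.6 to stay, 0.4 to move to E), so the row for state E below is an assumed value for illustration.

```python
from fractions import Fraction

# Row for A comes from the caption; the row for E is assumed for the sketch.
states = ["A", "E"]
P = [[Fraction(6, 10), Fraction(4, 10)],   # from A: stay in A, move to E
     [Fraction(3, 10), Fraction(7, 10)]]   # from E: assumed probabilities

# Each row of a transition matrix must sum to 1.
assert all(sum(row) == 1 for row in P)

def matmul(A, B):
    """Multiply two square matrices of Fractions."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two-step transition probabilities are the entries of P squared.
P2 = matmul(P, P)
print(float(P2[0][1]))  # → 0.52, probability of A reaching E in two steps
```

The entry 0.52 is 0.6·0.4 + 0.4·0.7: either the process stays in A and then moves, or it moves to E and stays there.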