Statistics LibreTexts

Search

About 60 results
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/03%3A_Combinatorics/3.03%3A_Card_Shuffling
    Given a deck of n cards, how many times must we shuffle it to make it “random”? Of course, the answer depends upon the method of shuffling which is used and what we mean by “random.”
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/01%3A_Discrete_Probability_Distributions/1.02%3A_Discrete_Probability_Distribution
    In this book we shall study many different experiments from a probabilistic point of view.
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/12%3A_Random_Walks
    Thumbnail: Random walk in two dimensions. (Public Domain; László Németh via Wikipedia).
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/11%3A_Markov_Chains
    Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. In a Markov chain, by contrast, only the outcome of the current experiment affects the prediction for the next one. (A short simulation sketch follows this list.)
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/11%3A_Markov_Chains/11.05%3A_Mean_First_Passage_Time_for_Ergodic_Chains
    In this section we consider two closely related descriptive quantities of interest for ergodic chains: the mean time to return to a state and the mean time to go from one state to another state.
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/09%3A_Central_Limit_Theorem/9.03%3A_Central_Limit_Theorem_for_Continuous_Independent_Trials
    We have seen in Section 1.2 that the distribution function for the sum of a large number \(n\) of independent discrete random variables with mean \(\mu\) and variance \(\sigma^2\) tends to look like a normal density with mean \(n\mu\) and variance \(n\sigma^2\). Let us begin by looking at some examples to see whether such a result is even plausible. (A quick numerical check follows this list.)
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/09%3A_Central_Limit_Theorem/9.02%3A_Central_Limit_Theorem_for_Discrete_Independent_Trials
    We have illustrated the Central Limit Theorem in the case of Bernoulli trials, but this theorem applies to a much more general class of chance processes.
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/01%3A_Discrete_Probability_Distributions
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/06%3A_Expected_Value_and_Variance/6.01%3A_Expected_Value_of_Discrete_Random_Variables
    When a large collection of numbers is assembled, as in a census, we are usually interested not in the individual numbers, but rather in certain descriptive quantities such as the average or the median.
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/01%3A_Discrete_Probability_Distributions/1.R%3A_References
  • https://stats.libretexts.org/Bookshelves/Probability_Theory/Introductory_Probability_(Grinstead_and_Snell)/01%3A_Discrete_Probability_Distributions/1.01%3A__Simulation_of_Discrete_Probabilities
    In this chapter, we shall first consider chance experiments with a finite number of possible outcomes \(\omega_1\), \(\omega_2\), …, \(\omega_n\).
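
The Markov chain result above says that only the current outcome matters when predicting the next one. The following is a minimal sketch, not taken from the book: it uses a hypothetical two-state "rain"/"sun" transition matrix, and the states, probabilities, and function names are illustrative assumptions.

import random

# Hypothetical transition probabilities: P[current_state][next_state]
P = {
    "rain": {"rain": 0.6, "sun": 0.4},
    "sun":  {"rain": 0.2, "sun": 0.8},
}

def step(state):
    """Draw the next state using only the current state's row of P."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

def simulate(start, n_steps):
    """Return a trajectory of length n_steps + 1 starting from `start`."""
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path

print(simulate("sun", 10))

Each printed step is drawn from the row of P indexed by the previous state only, which is exactly the Markov property described in the snippet.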
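
The Central Limit Theorem snippet above claims that a sum of \(n\) independent discrete random variables with mean \(\mu\) and variance \(\sigma^2\) is approximately normal with mean \(n\mu\) and variance \(n\sigma^2\). The sketch below, which assumes fair die rolls as the discrete random variable (an example chosen for illustration, not drawn from the text), checks the mean and variance of simulated sums against those values.

import random
import statistics

n = 30            # number of independent die rolls per sum
trials = 20000    # number of sums to simulate

mu = 3.5                                               # mean of one fair die roll
sigma2 = sum((k - mu) ** 2 for k in range(1, 7)) / 6   # variance, about 2.9167

sums = [sum(random.randint(1, 6) for _ in range(n)) for _ in range(trials)]

print("expected mean    :", n * mu,     " observed:", statistics.mean(sums))
print("expected variance:", n * sigma2, " observed:", statistics.pvariance(sums))

With these settings the observed mean should come out near 105 and the observed variance near 87.5, matching \(n\mu\) and \(n\sigma^2\).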
