- https://stats.libretexts.org/Courses/Cerritos_College/Introduction_to_Statistics_with_R/02%3A_Introduction_to_R/2.01%3A_Why_Programming_Is_Hard_to_Learn
  A large amount of psychological research has shown that practice needs to be deliberate, meaning that it focuses on developing the specific skills that one needs, at a level that is always pushing one's ability. Whenever I experience an error that I don't understand, the first thing I do is copy and paste the error message into a search engine. Often this will turn up several pages discussing the problem and the ways that people have solved it.
- https://stats.libretexts.org/Courses/Cerritos_College/Introduction_to_Statistics_with_R/01%3A_Basics/1.01%3A_Introduction/1.1.02%3A_Dealing_with_Statistics_Anxiety
  Anxiety feels uncomfortable, but psychology tells us that this kind of emotional arousal can actually help us perform better on many tasks by focusing our attention. So if you start to feel anxious about the material in this course, remind yourself that many others in the class are feeling similarly, and that the arousal could actually help you perform better (even if it doesn't seem like it!).
- https://stats.libretexts.org/Courses/Cerritos_College/Introduction_to_Statistics_with_R/02%3A_Introduction_to_R/2.09%3A_Math_with_Vectors
  You can apply mathematical operations to the elements of a vector just as you would with a single number. We can also apply logical operations across vectors; again, this will return a vector with the operation applied to the pair of values at each position. Most functions will work with vectors just as they would with a single number. We could create a vector and pass it to the sin() function, which will return as many sine values as there are input values.
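The vectorized behavior described in this snippet can be sketched in R (a minimal example; the vector values are illustrative):

```r
# Element-wise arithmetic: the operation is applied at each position
v <- c(1, 2, 3, 4)
v * 2          # 2 4 6 8
v + 10         # 11 12 13 14

# Logical operations also return a vector, one result per position
v > 2          # FALSE FALSE TRUE TRUE

# Most functions are vectorized: sin() returns one sine per input value
sin(c(0, pi / 2))   # 0 1
```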
- https://stats.libretexts.org/Courses/Cerritos_College/Introduction_to_Statistics_with_R/02%3A_Introduction_to_R/2.04%3A_Getting_Started_with_R
  When we work with R, we often do this using a command line in which we type commands and it responds to those commands. We will return to variables in a little while, but if we want R to print out the word hello then we need to enclose it in quotation marks, telling R that it is a character string. Another important type is real numbers, which are the most common kind of numbers that we will deal with in statistics; they span the entire number line, including the spaces between the integers.
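The distinction between a quoted character string and a bare name can be sketched at the R command line (a minimal example; the values are illustrative):

```r
# Quotation marks tell R that hello is a character string, not a variable name
print("hello")   # [1] "hello"

# Without quotes, R would search for an object named hello and fail:
# print(hello)   # Error: object 'hello' not found

# Real numbers include the values between the integers
x <- 1.5
x + 2.25         # 3.75
```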
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/15%3A_Renewal_Processes/15.06%3A_Renewal_Reward_Processes
  Note first that \[ R_t = \sum_{i=1}^{N_t} Y_i = \sum_{i=1}^{N_t + 1} Y_i - Y_{N_t + 1} \] Next, recall that \( N_t + 1 \) is a stopping time for the sequence of interarrival times \(\bs{X}\) for \( t \in (0, \infty) \), and hence is also a stopping time for the sequence of interarrival time, reward pairs \( \bs{Z} \). (If a random time is a stopping time for a filtration, then it's a stopping time for any larger filtration.) By Wald's equation, \[ \E\left(\sum_{i=1}^{N_t + 1} Y_i\right) = \nu \…
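The snippet truncates just as Wald's equation is applied. With \( \nu = \E(Y) \) denoting the mean reward, the standard statement of that step (reconstructed here, not quoted from the page) is:

\[ \E\left(\sum_{i=1}^{N_t + 1} Y_i\right) = \nu \, \E(N_t + 1) \]

This holds because \( N_t + 1 \) is a stopping time for the reward sequence, which is exactly the hypothesis Wald's equation requires.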
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/13%3A_Games_of_Chance/13.10%3A_Bold_Play
  This representation is unique except when \(x\) is a binary rational (sometimes also called a dyadic rational), that is, a number of the form \(k / 2^n\) where \(n \in \N_+\) and \(k \in \{1, 3, \ldots, 2^n - 1\}\); the positive integer \(n\) is called the rank of \(x\). Thus, for \(p = \frac{1}{2}\) (fair trials), the probability that the bold gambler reaches the target fortune \(a\) starting from the initial fortune \(x\) is \(x / a\), just as it is for the timid gambler.
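A concrete instance of the definition above (an illustrative example, not taken from the page):

\[ x = \frac{5}{8} = \frac{5}{2^3}, \quad k = 5, \; n = 3 \]

so \( x \) is a binary rational of rank 3, and its binary representation is not unique: \( 0.101000\ldots \) and \( 0.100111\ldots \) both equal \( \frac{5}{8} \).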
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/01%3A_Foundations/1.02%3A_Functions
  \( x \in f^{-1}(A \cup B) \) if and only if \( f(x) \in A \cup B \) if and only if \( f(x) \in A \) or \( f(x) \in B \) if and only if \(x \in f^{-1}(A)\) or \( x \in f^{-1}(B) \) if and only if \( x \in f^{-1}(A) \cup f^{-1}(B) \)
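The identity \( f^{-1}(A \cup B) = f^{-1}(A) \cup f^{-1}(B) \) can be checked numerically on a finite domain in R (a small illustrative sketch; the function and sets here are made up):

```r
# Preimage of a set S under f, restricted to a finite domain
f <- function(x) x %% 3
domain <- 0:9
pre <- function(S) domain[f(domain) %in% S]

A <- 0
B <- 1
# Preimage of the union equals the union of the preimages
identical(sort(pre(union(A, B))), sort(union(pre(A), pre(B))))   # TRUE
```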
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.22%3A_Continuous-Time_Queuing_Chains
  When \( k = 1 \), the single-server queue, the exponential parameter in state \( x \in \N_+ \) is \( \mu + \nu \) and the transition probabilities for the jump chain are \[ Q(x, x - 1) = \frac{\nu}{\mu + \nu}, \; Q(x, x + 1) = \frac{\mu}{\mu + \nu} \] When \( k = \infty \), the infinite server queue, the cases above for \( x \ge k \) are vacuous, so the exponential parameter in state \( x \in \N \) is \( \mu + x \nu \) and the transition probabilities are \[ Q(x, x - 1) = \frac{\nu x}{\mu + \nu…
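The single-server jump-chain probabilities can be sanity-checked in R (the rate values below are arbitrary placeholders):

```r
# Single-server queue (k = 1): arrival rate mu, service rate nu
mu <- 2
nu <- 3

Q_down <- nu / (mu + nu)   # a service completes first: x -> x - 1
Q_up   <- mu / (mu + nu)   # an arrival occurs first:   x -> x + 1

# From any state x >= 1 these are the only two transitions, so they sum to 1
Q_down + Q_up              # 1
```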
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions
  In this chapter, we study several general families of probability distributions and a number of special parametric families of distributions. Unlike the other expository chapters in this text, the sections are not linearly ordered and so this chapter serves primarily as a reference. You may want to study these topics as the need arises.
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.19%3A_Time_Reversal_in_Continuous-Time_Chains
  Then \begin{align*} \P(\hat X_t = y \mid \hat X_s = x, A) & = \frac{\P(\hat X_t = y, \hat X_s = x, A)}{\P(\hat X_s = x, A)} = \frac{\P(X_{h - t} = y, X_{h - s} = x, A)}{\P(X_{h - s} = x, A)} \\ & = \frac{\P(A \mid X_{h - t} = y, X_{h - s} = x) \P(X_{h - s} = x \mid X_{h - t} = y) \P(X_{h - t} = y)}{\P(A \mid X_{h - s} = x) \P(X_{h - s} = x)} \end{align*} But \( A \in \sigma\{X_r: r \in [h - s, h]\} \) and \( h - t \lt h - s \), so by the Markov property for \( \bs X \), \[ \P(A \mid X_{h - t} =…
- https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.23%3A__Continuous-Time_Branching_Chains
  Using the Kolmogorov backward equation we have \[ \frac{d}{dt} \Phi_t(r) = \sum_{x=0}^\infty r^x \frac{d}{dt} P_t(1, x) = \sum_{x=0}^\infty r^x G P_t(1, x) \] Using the generator above, \[ G P_t(1, x) = \sum_{y = 0}^\infty G(1, y) P_t(y, x) = - \alpha P_t(1, x) + \sum_{k=0}^\infty \alpha f(k) P_t(k, x), \quad x \in \N \] Substituting and using the result above gives \begin{align*} \frac{d}{dt} \Phi_t(r) & = \sum_{x=0}^\infty r^x \left[-\alpha P_t(1, x) + \sum_{k=0}^\infty \alpha f(k) P_t(k, x)\…