
10.7: End-of-Chapter Materials


    R Functions

    In this chapter, we were introduced to a few R functions that will be useful in the future. These are listed here.

    Packages

    • nlme
      This package gives R the ability to fit generalized least squares via the gls function. It also provides many other useful functions for fitting non-linear models and random-effects models, though those are beyond the scope of this book.
    • KnoxStats
      This package adds a variety of general-purpose statistical functions to R.
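    As a minimal sketch, the packages are attached in the usual way. Note that nlme ships with every R installation as a "recommended" package, while KnoxStats must be installed separately, so its call is shown commented out here.

    ```r
    # nlme is a "recommended" package, so it ships with every R installation
    library(nlme)

    # The book's companion package is attached the same way once installed:
    # library(KnoxStats)

    exists("gls")  # TRUE: gls() is now available
    ```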

    Statistics

    • autocor.test(e)
      This function calculates the autocorrelation of the vector e, that is, the correlation between successive values in the vector.
    • lm(formula)
      This function fits a linear regression model to the data, using the supplied formula. If you specify the weights argument, those weights are applied and the model is fit using weighted least squares. Because the returned object contains a great deal of information, you will usually want to save it in a variable.
    • residuals(mod)
      This calculates the simple residuals of a fitted model: the observed values minus the predicted values.
    • gls(formula)
      This function performs generalized least squares regression. It also allows you to specify the correlation structure via its correlation argument.
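    The regression functions above can be sketched using R's built-in mtcars data; the formula and weights below are purely illustrative, and autocor.test is omitted because it comes from the KnoxStats package.

    ```r
    library(nlme)

    # Ordinary least squares; save the result so we can query it later
    mod.ols <- lm(mpg ~ wt, data = mtcars)
    e <- residuals(mod.ols)   # observed values minus predicted values

    # Weighted least squares: supply the weights argument to lm()
    mod.wls <- lm(mpg ~ wt, data = mtcars, weights = 1 / wt)

    # Generalized least squares with an AR(1) correlation structure
    mod.gls <- gls(mpg ~ wt, data = mtcars, correlation = corAR1())

    length(e) == nrow(mtcars)  # TRUE: one residual per observation
    ```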

    Mathematics

    This is just a reminder of some of the matrix functions available in R.

    • %*%
      This operator multiplies two matrices in R. Thus, running the command A %*% B will return the matrix product \(\mathbf{AB}\).
    • abs(x)
      This returns the absolute value of the real number x, a.k.a. \(|x|\).
    • col(A)
      This returns a matrix giving the column index of each element of the matrix \(\mathbf{A}\).
    • diag(n)
      If \(n\) is an integer, then this returns the \(\mathbf{I}_n\) identity matrix.
    • diag(v)
      This returns a diagonal matrix with the elements of the vector \(\mathbf{v}\) along the diagonal.
    • diag(A)
      This returns the diagonal entries of the matrix \(\mathbf{A}\).
    • rep(x, n)
      This returns a vector of the number \(x\) repeated \(n\) times.
    • row(A)
      This returns a matrix giving the row index of each element of the matrix \(\mathbf{A}\).
    • solve(A)
      This returns the inverse of the matrix \(\mathbf{A}\).
    • t(A)
      This returns the transpose of the matrix \(\mathbf{A}\).
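    A quick sketch of these matrix functions (the numbers are arbitrary):

    ```r
    A  <- diag(c(2, 3))   # diagonal matrix with 2 and 3 on the diagonal
    I2 <- diag(2)         # the 2x2 identity matrix

    v <- rep(5, 3)        # the number 5 repeated 3 times: c(5, 5, 5)

    A %*% solve(A)        # a matrix times its inverse gives the identity
    t(A)                  # the transpose (A is diagonal, so t(A) equals A)
    diag(A)               # extracts the diagonal entries: c(2, 3)
    abs(-4)               # the absolute value |-4| = 4
    row(A)                # matrix of the row index of each element
    ```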

    Exercises

    1. Let \(\mathbf{E} \sim N \left( \mathbf{0};\ \sigma^2\mathbf{D}\right)\) be the residuals. Prove that if \(\mathbf{D}\) is a diagonal covariance matrix, then it is invertible.
    2. Let \(\mathbf{E} \sim N \left( \mathbf{0};\ \sigma^2\mathbf{D}\right)\) be the residuals. Here, \(\mathbf{D}\) is a diagonal covariance matrix. Determine a matrix \(\mathbf{W}\) such that \(\mathbf{W}\mathbf{W} = \mathbf{D}\).
    3. Prove Theorem 10.2.2.
    4. Under the assumptions of weighted least squares, determine the formula for a confidence interval for \(\beta_1\).
    5. What is the difference between \(\mathbf{e}\) and \(\mathbf{E}\)?
    6. Under the assumptions of generalized least squares, determine the formula for the estimator of \(\mathbf{B}\).
    7. Under the assumptions of generalized least squares, determine the formula for a confidence interval for \(\mathbf{b}\).
    8. Determine if Theorem 10.2.2 holds if the weights matrix \(\mathbf{D}\) is a random matrix independent of \(\mathbf{X}\). If it does not, what is the distribution of \(\mathbf{WE}\)?
    9. Prove that the WLS estimator is unbiased for \(\mathbf{B}\) if \(\mathbf{D}\) is independent of \(\mathbf{X}\).
    10. The theorem concerning the variance of the WLS estimator requires that \(\mathbf{D}\) be non-random. Determine the variance of \(\mathbf{b}_\text{wls}\) if \(\mathbf{D}\) is random, but independent of \(\mathbf{X}\).
    11. In the second example in Section 10.3, I state that the adjacency matrix is symmetric. Explain why this is so.

    Theory Readings

    • Adrian Baddeley, Ege Rubak, and Rolf Turner (2015). Spatial Point Patterns: Methodology and Applications with R. Boca Raton, FL: Chapman & Hall/CRC.
    • Roger S. Bivand, Edzer Pebesma, and Virgilio Gómez-Rubio (2013). Applied Spatial Data Analysis with R. New York: Springer-Verlag.
    • Marta Blangiardo and Michela Cameletti (2015). Spatial and Spatio-temporal Bayesian Models with R. Hoboken, NJ: John Wiley & Sons.
    • Chris Brunsdon and Lex Comber (2015). An Introduction to R for Spatial Analysis and Mapping. Thousand Oaks, CA: SAGE Publications.
    • Robert Haining (2003). Spatial Data Analysis: Theory and Practice. Cambridge, UK: Cambridge University Press.
    • Tonny J. Oyana and Florence Margai (2013). Spatial Analysis: Statistics, Visualization, and Computational Methods. Boca Raton, FL: Chapman & Hall/CRC.
    • Zekai Sen (2016). Spatial Modeling Principles in Earth Sciences. New York: Springer-Verlag.
    • Thorsten Wiegand and Kirk A. Moloney (2013). Handbook of Spatial Point-Pattern Analysis in Ecology. Boca Raton, FL: Chapman & Hall/CRC.

    This page titled 10.7: End-of-Chapter Materials is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Ole Forsberg.
