20.6: Statistics in Matrices
While the formulas you learned in introductory statistics are perfectly suited for calculation, a matrix formulation offers a more general and powerful perspective that scales to advanced methods like linear models. This optional section re-expresses fundamental sample statistics, such as the mean, variance, and covariance, in matrix notation.
The purpose is twofold: to provide essential practice with matrix operations, and to reveal the elegant underlying structure that unifies these concepts. In what follows, we will represent a data sample of size \(n\) as a column vector \(\mathbf{Y}\). Through this lens, familiar quantities like the sum of squares emerge naturally as specific quadratic forms, building a critical bridge between basic statistics and the linear algebra that drives modern data analysis.
The sample mean using matrices: \(\overline{Y} = \frac{1}{n} \mathbf{j}^\prime \mathbf{Y}\), where \(\mathbf{j}\) denotes the \(n \times 1\) column vector of ones.
Proof.
\begin{align}
\frac{1}{n} \mathbf{j}^\prime \mathbf{Y} &= \frac{1}{n} \left[ 1\ 1\ 1\ \cdots\ 1\ \right] \left[ \begin{matrix}Y_1 \\ Y_2 \\ Y_3 \\ \vdots \\ Y_n \end{matrix} \right] \\[1em]
&= \frac{1}{n} \left( 1Y_1 + 1Y_2 + 1Y_3 + \cdots + 1Y_n\right) \\[1em]
&= \frac{1}{n} \sum_{i=1}^n Y_i \\[1em]
&= \overline{Y}
\end{align}
\(\blacksquare\)
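To make this concrete, here is a minimal numerical check in NumPy (the data values are illustrative assumptions, not from the text): it computes \(\frac{1}{n}\mathbf{j}^\prime\mathbf{Y}\) directly and compares it with the built-in mean.

```python
import numpy as np

# Illustrative sample of size n = 4
Y = np.array([3.0, 5.0, 7.0, 9.0])
n = len(Y)
j = np.ones(n)                    # vector of ones, playing the role of j

# Matrix form of the sample mean: (1/n) j'Y
mean_matrix = (j @ Y) / n

# Agreement with the usual formula
print(mean_matrix, np.mean(Y))    # both print 6.0
```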
The sum of squared values using matrices: \(\mathbf{Y}^\prime\mathbf{Y}\).
Proof.
\begin{align}
\mathbf{Y}^\prime \mathbf{Y} &= \left[ Y_1\ Y_2\ Y_3\ \cdots\ Y_n\ \right] \left[ \begin{matrix}Y_1 \\ Y_2 \\ Y_3 \\ \vdots \\ Y_n \end{matrix} \right] \\[1em]
&= \left( Y_1Y_1 + Y_2Y_2 + Y_3Y_3 + \cdots + Y_nY_n\right) \\[1em]
&= \sum_{i=1}^n Y_i^2
\end{align}
\(\blacksquare\)
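A similar sketch, reusing the same illustrative data, confirms that \(\mathbf{Y}^\prime\mathbf{Y}\) matches the componentwise sum of squares:

```python
import numpy as np

Y = np.array([3.0, 5.0, 7.0, 9.0])   # illustrative data

# Matrix form of the sum of squares: Y'Y
ss_matrix = Y @ Y

# Agreement with the componentwise sum
print(ss_matrix, np.sum(Y**2))       # both print 164.0
```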
The sample variance using matrices: \(s_y^2 = \frac{1}{n-1} \left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)^\prime\left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)\).
Proof.
\begin{align}
& \frac{1}{n-1} \left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)^\prime\left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)\\[1em]
&= \frac{1}{n-1}\ \left[ Y_1-\overline{Y},\ Y_2-\overline{Y},\ Y_3-\overline{Y},\ \cdots,\ Y_n-\overline{Y}\ \right] \left[ \begin{matrix} Y_1-\overline{Y} \\ Y_2-\overline{Y} \\ Y_3-\overline{Y} \\ \vdots \\ Y_n-\overline{Y} \end{matrix} \right] \\[1em]
&= \frac{1}{n-1} \left[ (Y_1-\overline{Y})(Y_1-\overline{Y}) + (Y_2-\overline{Y})(Y_2-\overline{Y}) + \cdots + (Y_n-\overline{Y})(Y_n-\overline{Y}) \right] \\[1em]
&= \frac{1}{n-1} \sum_{i=1}^n (Y_i-\overline{Y})^2
\end{align}
\(\blacksquare\)
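The same style of check works for the variance. The sketch below (again with illustrative data) forms the centered vector \(\mathbf{Y}-\overline{Y}\mathbf{j}\) and evaluates the quadratic form, comparing the result against NumPy's sample variance with the \(n-1\) divisor.

```python
import numpy as np

Y = np.array([3.0, 5.0, 7.0, 9.0])   # illustrative data
n = len(Y)
j = np.ones(n)

# Centered vector Y - Ybar*j, then the quadratic form divided by n - 1
d = Y - (j @ Y / n) * j
s2_matrix = (d @ d) / (n - 1)

# Agreement with the usual sample variance (ddof=1 gives the n-1 divisor)
print(s2_matrix, np.var(Y, ddof=1))  # both print 6.666...
```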
The sample covariance using matrices: \(s_{xy} = \frac{1}{n-1} \left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)^\prime\left( \mathbf{X}-\overline{X}\mathbf{j}\right)\).
Proof.
The argument parallels the proof for the sample variance, with the second centered vector replaced by \(\mathbf{X}-\overline{X}\mathbf{j}\).
\begin{align}
& \frac{1}{n-1} \left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)^\prime\left( \mathbf{X}-\overline{X}\mathbf{j}\right)\\[1em]
&= \frac{1}{n-1}\ \left[ Y_1-\overline{Y},\ Y_2-\overline{Y},\ Y_3-\overline{Y},\ \cdots,\ Y_n-\overline{Y}\ \right] \left[ \begin{matrix} X_1-\overline{X} \\ X_2-\overline{X} \\ X_3-\overline{X} \\ \vdots \\ X_n-\overline{X} \end{matrix} \right] \\[1em]
&= \frac{1}{n-1} \left[ (Y_1-\overline{Y})(X_1-\overline{X}) + (Y_2-\overline{Y})(X_2-\overline{X}) + \cdots + (Y_n-\overline{Y})(X_n-\overline{X}) \right] \\[1em]
&= \frac{1}{n-1} \sum_{i=1}^n (Y_i-\overline{Y})(X_i-\overline{X})
\end{align}
\(\blacksquare\)
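For the covariance, a sketch with two illustrative samples compares the inner-product form against the off-diagonal entry of NumPy's covariance matrix.

```python
import numpy as np

X = np.array([1.0, 2.0, 4.0, 5.0])   # illustrative second sample
Y = np.array([3.0, 5.0, 7.0, 9.0])
n = len(Y)
j = np.ones(n)

# Centered vectors, then the inner-product form of the covariance
dx = X - (j @ X / n) * j
dy = Y - (j @ Y / n) * j
s_xy_matrix = (dy @ dx) / (n - 1)

# np.cov returns the 2 x 2 covariance matrix; the off-diagonal entry is s_xy
print(s_xy_matrix, np.cov(X, Y)[0, 1])  # both print 4.666...
```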
Note that this notation leads to the synonym \(s_y^2 = s_{yy}\). It also yields a quick proof that the covariance matrix is symmetric: \(\left( \mathbf{Y}-\overline{Y}\mathbf{j}\right)^\prime\left( \mathbf{X}-\overline{X}\mathbf{j}\right)\) is a \(1 \times 1\) matrix, so it equals its own transpose, and hence \(s_{xy} = s_{yx}\).
Let \(\mathbf{Y}\) be a random vector (a column vector whose elements are random variables). The quantity \(V[\mathbf{Y}]\) is called the variance-covariance matrix of \(\mathbf{Y}\).
Note that it is often just called the covariance matrix of \(\mathbf{Y}\).
Let \(\mathbf{Y} \in \mathscr{M}_{r,1}\). If \(\mathbf{Y}^\prime = \left[ Y_1,\ Y_2,\ Y_3,\ \cdots,\ Y_r \right]\), then the elements of \(V[\mathbf{Y}]\) are \(\left[ \sigma_{ij}\right]\), where \(\sigma_{ij}\) is the covariance between \(Y_i\) and \(Y_j\), and \(\sigma_{ii}\) is the variance of \(Y_i\).
Covariance matrices are symmetric, since \(\sigma_{ij} = \sigma_{ji}\).
If \(\mathbf{Y}\) is a random vector and \(\mathbf{X}\) is a constant (non-random) matrix, then
\begin{equation}
V[\mathbf{X}^\prime\mathbf{Y}] = \mathbf{X}^\prime V[\mathbf{Y}]\mathbf{X}
\end{equation}
This assumes that the multiplication makes sense (the matrices are commensurate).
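This identity can be checked by simulation. The sketch below is illustrative only (the dimensions, the covariance matrix, and the choice of a normal distribution are all assumptions made for the demonstration): it draws many realizations of \(\mathbf{Y}\) with a known covariance, forms \(\mathbf{X}^\prime\mathbf{Y}\) for a fixed \(\mathbf{X}\), and compares the sample covariance matrix of the results with \(\mathbf{X}^\prime V[\mathbf{Y}]\mathbf{X}\). The symmetry of both matrices is also visible in the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed (non-random) 3 x 2 matrix X and a known 3 x 3 covariance for Y
X = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])
V_Y = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.5]])

# Draw many realizations of Y ~ N(0, V_Y); each row of Ys is one Y'
Ys = rng.multivariate_normal(np.zeros(3), V_Y, size=200_000)
Zs = Ys @ X                       # each row is (X'Y)' for one realization

# Sample covariance of X'Y versus the theoretical X' V[Y] X
print(np.cov(Zs, rowvar=False))   # approximately equal to...
print(X.T @ V_Y @ X)
```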


