Multiple Linear Regression


    A response variable \(Y\) is linearly related to \(p-1\) explanatory variables \(X^{(1)},\ldots,X^{(p-1)}\) (where \(p \geq 2\)). The regression model is given by

    \[Y_i = \beta_0 + \beta_1 X_i^{(1)} + \cdots + \beta_{p-1} X_i^{(p-1)} + \varepsilon_i, \qquad i=1,\ldots,n \qquad \label{1}\]

    where the \(\varepsilon_i\) have mean zero and variance \(\sigma^2\), and are uncorrelated. Equation \ref{1} can be expressed in matrix notation as

    \[Y = \mathbf{X} \beta + \varepsilon,\]

    where

    \[ Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}, \qquad \varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix},\]

    \[\mathbf{X} = \begin{bmatrix} 1 & X_1^{(1)} & X_1^{(2)} & \cdots & X_1^{(p-1)} \\ 1 & X_2^{(1)} & X_2^{(2)} & \cdots & X_2^{(p-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & X_n^{(1)} & X_n^{(2)} & \cdots & X_n^{(p-1)} \end{bmatrix}, \qquad\mbox{and} \qquad \beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{bmatrix}.\]

    So \(\mathbf{X}\) is an \(n \times p\) matrix.
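
    To make this setup concrete, here is a minimal NumPy sketch (not from the text; the sample size, number of predictors, and coefficient values are arbitrary illustrative choices) that simulates data from the model and assembles the design matrix \(\mathbf{X}\).

    ```python
    # Minimal sketch with illustrative values: simulate Y = X beta + eps
    # and assemble the n x p design matrix X.
    import numpy as np

    rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
    n, p = 50, 3                    # n observations, p - 1 = 2 predictors

    X_pred = rng.normal(size=(n, p - 1))       # columns X^(1), X^(2)
    X = np.column_stack([np.ones(n), X_pred])  # prepend the column of 1s
    beta = np.array([1.0, 2.0, -0.5])          # hypothetical true coefficients
    eps = rng.normal(scale=1.0, size=n)        # mean-zero, uncorrelated errors
    Y = X @ beta + eps                         # response vector
    ```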

    Estimation Problem

    The parameter \(\beta\) is estimated by the least squares procedure, that is, by minimizing the sum of squared errors \(\sum_{i=1}^n (Y_i - \beta_0 - \beta_1 X_i^{(1)} - \cdots - \beta_{p-1} X_i^{(p-1)})^2\). This quantity can be expressed in matrix notation as \(\| Y - \mathbf{X}\beta \|^2\). Minimizing with respect to the parameter \(\beta\) (a \(p \times 1\) vector) gives rise to the normal equations:

    \[\begin{eqnarray*} b_0 n + b_1\sum_i X_i^{(1)} + b_2 \sum_i X_i^{(2)} + \cdots + b_{p-1} \sum_i X_i^{(p-1)} &=& \sum_i Y_i \\ b_0 \sum_i X_i^{(1)} + b_1 \sum_i (X_i^{(1)})^2 + b_2 \sum_i X_i^{(1)} X_i^{(2)} + \cdots + b_{p-1} \sum_i X_i^{(1)} X_i^{(p-1)} &=& \sum_i X_i^{(1)} Y_i \\ \vdots & & \vdots \\ b_0 \sum_i X_i^{(p-1)} + b_1 \sum_i X_i^{(p-1)}X_i^{(1)} + b_2 \sum_i X_i^{(p-1)} X_i^{(2)} + \cdots + b_{p-1} \sum_i (X_i^{(p-1)})^2 &=& \sum_i X_i^{(p-1)} Y_i \end{eqnarray*}\]

    Observe that we can express this system of \(p\) equations in the \(p\) unknowns \(b_0,b_1,\ldots,b_{p-1}\) as \[\mathbf{X}^T\mathbf{X} \mathbf{b} = \mathbf{X}^T Y,\] where \(\mathbf{b}\) is a \(p \times 1\) vector with \(\mathbf{b}^T = (b_0,b_1,\ldots,b_{p-1})\).

    If the \(p \times p\) matrix \(\mathbf{X}^T\mathbf{X}\) is nonsingular (as we shall assume for the time being), then the solution to this system is given by \[\widehat{\beta} = \mathbf{b} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T Y.\] This is the least squares estimate of \(\beta\).
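
    As a sanity check on the algebra, the sketch below (continuing the simulation above) computes \(\widehat{\beta}\) both by solving the normal equations directly and with NumPy's built-in least squares routine; the two agree up to rounding.

    ```python
    # Continuing the sketch above: the least squares estimate
    # beta_hat = (X^T X)^{-1} X^T Y.
    XtX = X.T @ X                          # the p x p matrix X^T X
    XtY = X.T @ Y                          # the p x 1 vector X^T Y
    beta_hat = np.linalg.solve(XtX, XtY)   # solve X^T X b = X^T Y

    # Equivalent: minimize ||Y - X b||^2 directly (SVD-based).
    beta_hat_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
    assert np.allclose(beta_hat, beta_hat_lstsq)
    ```

    In practice one prefers np.linalg.lstsq (or a QR factorization) over forming \(\mathbf{X}^T\mathbf{X}\) explicitly, since the latter squares the condition number of \(\mathbf{X}\).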

    Expected Value and Variance of Random Vectors

    For an \(m \times 1\) random vector \(\mathbf{Z}\) with coordinates \(Z_1,\ldots,Z_m\), the expected value (or mean) and the variance of \(\mathbf{Z}\) are defined as

    \[E(\mathbf{Z}) = E \begin{bmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_m \end{bmatrix} = \begin{bmatrix} E(Z_1) \\ E(Z_2) \\ \vdots \\ E(Z_m) \end{bmatrix}, \qquad \mbox{Var}(\mathbf{Z}) = \begin{bmatrix} \mbox{Var}(Z_1) & \mbox{Cov}(Z_1,Z_2) & \cdots & \mbox{Cov}(Z_1,Z_m) \\ \mbox{Cov}(Z_2,Z_1) & \mbox{Var}(Z_2) & \cdots & \mbox{Cov}(Z_2,Z_m) \\ \vdots & \vdots & \ddots & \vdots \\ \mbox{Cov}(Z_m,Z_1) & \mbox{Cov}(Z_m,Z_2) & \cdots & \mbox{Var}(Z_m) \end{bmatrix}.\]

    Observe that Var\((\mathbf{Z})\) is an \(m\times m\) matrix. Also, since Cov\((Z_i,Z_j)\) = Cov\((Z_j,Z_i)\) for all \(1\leq i,j \leq m\), Var\((\mathbf{Z})\) is a symmetric matrix. Moreover, it can be checked, using the relationship that Cov\((Z_i,Z_j) = E(Z_iZ_j) - E(Z_i)E(Z_j)\), that Var\((\mathbf{Z}) = E(\mathbf{Z}\mathbf{Z}^T) - (E(\mathbf{Z}))(E(\mathbf{Z}))^T\).
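
    The last identity is easy to verify numerically. The following self-contained sketch (the mean vector and covariance matrix are arbitrary choices for illustration) forms \(E(\mathbf{Z}\mathbf{Z}^T) - (E(\mathbf{Z}))(E(\mathbf{Z}))^T\) with sample averages in place of expectations and checks that it matches the sample covariance matrix.

    ```python
    # Sketch: check Var(Z) = E(Z Z^T) - E(Z) E(Z)^T using sample averages
    # in place of expectations (mean and covariance below are arbitrary).
    import numpy as np

    rng = np.random.default_rng(1)
    N = 10_000                            # number of draws of Z
    Z = rng.multivariate_normal(
        mean=[0.0, 1.0, 2.0],
        cov=[[2.0, 0.5, 0.0],
             [0.5, 1.0, 0.3],
             [0.0, 0.3, 1.5]],
        size=N)                           # each row is one draw of Z^T

    EZ = Z.mean(axis=0)                   # sample version of E(Z)
    EZZt = (Z.T @ Z) / N                  # sample version of E(Z Z^T)
    var_Z = EZZt - np.outer(EZ, EZ)       # right-hand side of the identity

    # Algebraically identical to the plug-in covariance (divisor N),
    # so the two agree up to floating point error.
    assert np.allclose(var_Z, np.cov(Z.T, bias=True))
    ```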

    Contributors

    • Agnes Oshiro

    Multiple Linear Regression is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
