20.2: Addition
Matrix addition is crucial for linear models, as it provides the mechanism for incorporating an error or residual term. In the classic linear model formulation, the vector of observed responses is expressed as the sum of the systematic component (\(\mathbf{XB}\)) and the random component (\(\mathbf{E}\)). Without the simple, elementwise operation of matrix addition, we could not structurally separate the predictable pattern in the data from the inherent, unexplained variability.
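As a quick sketch of this decomposition in NumPy (the particular values of `X`, `B`, and the noise level here are illustrative, not from the text):

```python
import numpy as np

# Hypothetical design matrix X (5 observations, intercept + 1 predictor)
X = np.array([[1.0, 2.0],
              [1.0, 4.0],
              [1.0, 6.0],
              [1.0, 8.0],
              [1.0, 10.0]])
B = np.array([1.5, 0.5])          # coefficient vector (assumed for illustration)

rng = np.random.default_rng(42)
E = rng.normal(0.0, 0.1, size=5)  # random (residual) component

# Matrix addition joins the systematic part and the noise:
y = X @ B + E
```

Subtracting the residual vector recovers the systematic component exactly, which is the structural separation the paragraph above describes.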
Matrix addition is closed. This means that the sum of two matrices will always give you another matrix... as long as it makes sense to add the two matrices. Two matrices can be added if and only if they have the same dimension.
When matrices have the correct dimensions to perform a mathematical operation, they are called "commensurate." For addition, commensurate matrices have the same dimension. For multiplication, the requirement is quite different (see the section on matrix multiplication).
Let \(\mathbf{A} \in \mathcal{M}_{r \times c}\) and \(\mathbf{B} \in \mathcal{M}_{r \times c}\) for some values of \(r\) and \(c\). \(\mathbf{A}\) and \(\mathbf{B}\) are commensurate and can be summed. Matrix addition is element-by-element (elementwise) addition. Thus,
\begin{equation}
\mathbf{A} + \mathbf{B} = \left[ \begin{matrix}
a_{11} + b_{11} & a_{12} + b_{12} & a_{13} + b_{13} & \cdots & a_{1c} + b_{1c} \\
a_{21} + b_{21} & a_{22} + b_{22} & a_{23} + b_{23} & \cdots & a_{2c} + b_{2c} \\
a_{31} + b_{31} & a_{32} + b_{32} & a_{33} + b_{33} & \cdots & a_{3c} + b_{3c} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{r1} + b_{r1} & a_{r2} + b_{r2} & a_{r3} + b_{r3} & \cdots & a_{rc} + b_{rc} \\
\end{matrix} \right]
\end{equation}
This can also be symbolized (shortened) as
\begin{equation}
\mathbf{A} + \mathbf{B} = \left[ a_{ij} \right] + \left[ b_{ij} \right] = \left[ a_{ij} + b_{ij} \right]
\end{equation}
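The elementwise definition above can be checked directly in NumPy, whose `+` operator on arrays is exactly this operation (the matrices here are made up for illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

# The matrices must be commensurate (same dimension) for addition
assert A.shape == B.shape

C = A + B   # elementwise: c_ij = a_ij + b_ij
print(C)
# [[11 22 33]
#  [44 55 66]]
```

If the shapes differ (and cannot be broadcast), NumPy raises a `ValueError`, mirroring the commensurability requirement.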
Matrix addition has a zero (additive identity). It is the commensurate matrix with all elements equal to zero:
\begin{equation}
\mathbf{0} = \left[ \begin{matrix}
0_{11} & 0_{12} & 0_{13} & \cdots & 0_{1c} \\
0_{21} & 0_{22} & 0_{23} & \cdots & 0_{2c} \\
0_{31} & 0_{32} & 0_{33} & \cdots & 0_{3c} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0_{r1} & 0_{r2} & 0_{r3} & \cdots & 0_{rc} \\
\end{matrix} \right] =
\left[ 0_{ij} \right]
\end{equation}
How does the additive identity work in addition? Just as you would expect:
\begin{equation}
\mathbf{A} + \mathbf{0} = \left[ \begin{matrix}
a_{11} + 0 & a_{12} + 0 & a_{13} + 0 & \cdots & a_{1c} + 0 \\
a_{21} + 0 & a_{22} + 0 & a_{23} + 0 & \cdots & a_{2c} + 0 \\
a_{31} + 0 & a_{32} + 0 & a_{33} + 0 & \cdots & a_{3c} + 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{r1} + 0 & a_{r2} + 0 & a_{r3} + 0 & \cdots & a_{rc} + 0 \\
\end{matrix} \right] = \mathbf{A}
\end{equation}
This can also be symbolized (shortened) as
\begin{equation}
\mathbf{A} + \mathbf{0} = \left[ a_{ij} + 0 \right] = \left[ a_{ij} \right]
\end{equation}
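A minimal check of the additive identity in NumPy (`zeros_like` builds the commensurate zero matrix; the matrix `A` is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Z = np.zeros_like(A)   # commensurate zero matrix, same shape as A

# Adding the zero matrix leaves A unchanged, on either side
assert np.array_equal(A + Z, A)
assert np.array_equal(Z + A, A)
```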
I leave it as an exercise to prove \(\mathbf{A} + \mathbf{0} = \mathbf{0} + \mathbf{A} = \mathbf{A}\).
Matrices also have an additive inverse. As with scalar arithmetic, a matrix plus its additive inverse equals the zero matrix; that is, if \(\mathbf{B}\) is the additive inverse of \(\mathbf{A}\), then \(\mathbf{A} + \mathbf{B} = \mathbf{0}\).
Two things about the additive inverse: First, it is commensurate with the original matrix. Second, it is unique (just as in scalar arithmetic).
To calculate the additive inverse of \(\mathbf{A}\), just negate each element of \(\mathbf{A}\). Thus, if \(\mathbf{B} = \left[ -a_{ij} \right]\) then \(\mathbf{B}\) is the additive inverse of \(\mathbf{A}\).
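In code, negating each element is just unary minus on the array, and the sum with the original is the zero matrix (example matrix assumed for illustration):

```python
import numpy as np

A = np.array([[1, -2],
              [3,  4]])
B = -A   # additive inverse: negate each element of A

# A plus its additive inverse is the commensurate zero matrix
assert np.array_equal(A + B, np.zeros_like(A))
```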
Finally, as with all elementwise operations, matrix addition is both commutative and associative:
- \(\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}\)
- \(\left(\mathbf{A} + \mathbf{B}\right) + \mathbf{C} = \mathbf{A} + \left(\mathbf{B} + \mathbf{C}\right)\)
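Both properties can be spot-checked on random integer matrices (a numerical check on particular matrices, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3))
B = rng.integers(-5, 5, size=(3, 3))
C = rng.integers(-5, 5, size=(3, 3))

assert np.array_equal(A + B, B + A)              # commutative
assert np.array_equal((A + B) + C, A + (B + C))  # associative
```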
In summation, matrix addition behaves like scalar addition, as long as the matrices are commensurate.
As you may be able to guess, many of the proofs rely on checking whether the matrices involved are commensurate. Keep this in mind as you proceed through the proofs.


