
7.4: Inverse Matrices


    Learning Objectives

    In this section, you will learn to:

    1. Find the inverse of a matrix, if it exists.
    2. Use inverses to solve linear systems.

    In this section, we will learn to find the inverse of a matrix, if it exists. Later, we will use matrix inverses to solve linear systems.

    Definition of an Inverse: An \(n \times n\) matrix \(A\) has an inverse if there exists a matrix \(B\) such that \(AB = BA = I_n\), where \(I_n\) is the \(n \times n\) identity matrix. The inverse of a matrix \(A\), if it exists, is denoted by the symbol \(A^{-1}\).

    Example \(\PageIndex{1}\)

    Given matrices \(A\) and \(B\) below, verify that they are inverses.

    \[A=\left[\begin{array}{ll}
    4 & 1 \\
    3 & 1
    \end{array}\right] \quad B=\left[\begin{array}{cc}
    1 & -1 \\
    -3 & 4
    \end{array}\right] \nonumber \]

    Solution

    The matrices are inverses if both products \(AB\) and \(BA\) equal the \(2 \times 2\) identity matrix \(I_2\):

    \[\mathrm{AB}=\left[\begin{array}{ll}
    4 & 1 \\
    3 & 1
    \end{array}\right]\left[\begin{array}{cc}
    1 & -1 \\
    -3 & 4
    \end{array}\right]=\left[\begin{array}{ll}
    1 & 0 \\
    0 & 1
    \end{array}\right]=\mathrm{I}_{2} \nonumber \]

    and

    \[\mathrm{BA}=\left[\begin{array}{cc}
    1 & -1 \\
    -3 & 4
    \end{array}\right]\left[\begin{array}{ll}
    4 & 1 \\
    3 & 1
    \end{array}\right]=\left[\begin{array}{ll}
    1 & 0 \\
    0 & 1
    \end{array}\right]=\mathrm{I}_{2} \nonumber \]

    Clearly that is the case; therefore, the matrices \(A\) and \(B\) are inverses of each other.
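The check above can be sketched in code. The following is a minimal illustration (not part of the original text) that multiplies the two matrices with plain Python lists and confirms both products equal \(I_2\):

```python
# Verify that A and B are inverses by checking AB = BA = I_2,
# using matrices represented as lists of rows.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[4, 1], [3, 1]]
B = [[1, -1], [-3, 4]]
I2 = [[1, 0], [0, 1]]

print(matmul(A, B) == I2 and matmul(B, A) == I2)  # True
```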

    Example \(\PageIndex{2}\)

    Find the inverse of the matrix \(\mathrm{A}=\left[\begin{array}{ll}
    3 & 1 \\
    5 & 2
    \end{array}\right]\).

    Solution

    Suppose \(A\) has an inverse, and it is

    \[B=\left[\begin{array}{ll}
    a & b \\
    c & d
    \end{array}\right] \nonumber \]

    Then \(AB = I_2\): \(\left[\begin{array}{cc}
    3 & 1 \\
    5 & 2
    \end{array}\right]\left[\begin{array}{ll}
    a & b \\
    c & d
    \end{array}\right]=\left[\begin{array}{ll}
    1 & 0 \\
    0 & 1
    \end{array}\right]=I_{2}\)

    After multiplying the two matrices on the left side, we get

    \[\left[\begin{array}{cc}
    3 a+c & 3 b+d \\
    5 a+2 c & 5 b+2 d
    \end{array}\right]=\left[\begin{array}{cc}
    1 & 0 \\
    0 & 1
    \end{array}\right] \nonumber \]

    Equating the corresponding entries, we get four equations with four unknowns:

    \[\begin{array}{ll}
    3 a+c=1 & 3 b+d=0 \\
    5 a+2 c=0 & 5 b+2 d=1
    \end{array} \nonumber \]

    Solving this system, we get: \(a = 2 \quad b = -1 \quad c = -5 \quad d = 3\)
    Therefore, the inverse of the matrix \(A\) is \(B=\left[\begin{array}{cc}
    2 & -1 \\
    -5 & 3
    \end{array}\right] \nonumber\)

    In this problem, finding the inverse of matrix \(A\) amounted to solving the system of equations:

    \[\begin{array}{ll}
    3 a+c=1 & 3 b+d=0 \\
    5 a+2 c=0 & 5 b+2 d=1
    \end{array} \nonumber \]

    In fact, it can be written as two systems, one with variables \(a\) and \(c\), and the other with \(b\) and \(d\). The augmented matrices for both are given below.

    \[\left[\begin{array}{llll}
    3 & 1 & | & 1 \\
    5 & 2 & | & 0
    \end{array}\right] \text { and }\left[\begin{array}{llll}
    3 & 1 & | & 0 \\
    5 & 2 & | & 1
    \end{array}\right] \nonumber \]

    As we look at the two augmented matrices, we notice that the coefficient matrix is the same for both systems. This implies the row operations of the Gauss-Jordan method will also be the same. A great deal of work can be saved if the two right-hand columns are grouped together to form one augmented matrix, as below.

    \[\left[\begin{array}{lllll}
    3 & 1 & | & 1 & 0 \\
    5 & 2 & | & 0 & 1
    \end{array}\right] \nonumber \]

    And solving this system, we get

    \[\left[\begin{array}{ccccc}
    1 & 0 & | & 2 & -1 \\
    0 & 1 & | & -5 & 3
    \end{array}\right] \nonumber \]

    The matrix on the right side of the vertical line is the \(A^{-1}\) matrix.

    What you just witnessed is no coincidence. This is the method that is often employed in finding the inverse of a matrix. We list the steps, as follows:

    The Method for Finding the Inverse of a Matrix

    1. Write the augmented matrix \([ A | I_n ]\).

    2. Write the augmented matrix in step 1 in reduced row echelon form.

    3. If the reduced row echelon form in 2 is \([ I_n | B]\), then \(B\) is the inverse of \(A\).

    4. If the left side of the reduced row echelon form is not an identity matrix, the inverse does not exist.
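The four steps above can be sketched directly in code. This is a minimal illustration (not from the text) that forms \([A \,|\, I_n]\), row-reduces it, and reads off the inverse, using the standard-library `Fraction` type for exact arithmetic:

```python
# Gauss-Jordan inversion: augment A with I_n, reduce, read off A^{-1}.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # Step 1: build the augmented matrix [A | I_n].
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1) if j == i else Fraction(0) for j in range(n)]
         for i in range(n)]
    # Step 2: reduce to reduced row echelon form.
    for col in range(n):
        # Find a row with a nonzero pivot; if none, A is not invertible (step 4).
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]       # scale the pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:    # clear the rest of the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # Step 3: the right half is now A^{-1}.
    return [row[n:] for row in M]

inv = inverse([[3, 1], [5, 2]])
print([[int(x) for x in row] for row in inv])  # [[2, -1], [-5, 3]]
```

Running it on the matrix of Example 2 reproduces the inverse found there; a singular matrix such as \(\left[\begin{smallmatrix}1 & 2\\ 2 & 4\end{smallmatrix}\right]\) returns `None`.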

    Example \(\PageIndex{3}\)

    Given the matrix A below, find its inverse.

    \[A=\left[\begin{array}{ccc}
    1 & -1 & 1 \\
    2 & 3 & 0 \\
    0 & -2 & 1
    \end{array}\right] \nonumber \]

    Solution

    We write the augmented matrix as follows.

    \[\left[\begin{array}{ccccccc}
    1 & -1 & 1 & | & 1 & 0 & 0 \\
    2 & 3 & 0 & | & 0 & 1 & 0 \\
    0 & -2 & 1 & | & 0 & 0 & 1
    \end{array}\right] \nonumber \]

    We will reduce this matrix using the Gauss-Jordan method.

    Multiplying the first row by -2 and adding it to the second row, we get

    \[\left[\begin{array}{ccccccc}
    1 & -1 & 1 & | & 1 & 0 & 0 \\
    0 & 5 & -2 & | & -2 & 1 & 0 \\
    0 & -2 & 1 & | & 0 & 0 & 1
    \end{array}\right] \nonumber \]

    If we swap the second and third rows, we get

    \[\left[\begin{array}{ccccccc}
    1 & -1 & 1 & | & 1 & 0 & 0 \\
    0 & -2 & 1 & | & 0 & 0 & 1 \\
    0 & 5 & -2 & | & -2 & 1 & 0
    \end{array}\right] \nonumber \]

    Divide the second row by -2. The result is

    \[\left[\begin{array}{ccccccc}
    1 & -1 & 1 & | & 1 & 0 & 0 \\
    0 & 1 & -1 / 2 & | & 0 & 0 & -1 / 2 \\
    0 & 5 & -2 & | & -2 & 1 & 0
    \end{array}\right] \nonumber \]

    We now do two operations: 1) add the second row to the first, and 2) add \(-5\) times the second row to the third. We get

    \[\left[\begin{array}{ccccccc}
    1 & 0 & 1 / 2 & | & 1 & 0 & -1 / 2 \\
    0 & 1 & -1 / 2 & | & 0 & 0 & -1 / 2 \\
    0 & 0 & 1 / 2 & | & -2 & 1 & 5 / 2
    \end{array}\right] \nonumber \]

    Multiplying the third row by 2 results in

    \[\left[\begin{array}{ccccccc}
    1 & 0 & 1 / 2 & | & 1 & 0 & -1 / 2 \\
    0 & 1 & -1 / 2 & | & 0 & 0 & -1 / 2 \\
    0 & 0 & 1 & | & -4 & 2 & 5
    \end{array}\right] \nonumber \]

    Multiply the third row by 1/2 and add it to the second.
    Also, multiply the third row by -1/2 and add it to the first.

    \[\left[\begin{array}{ccccrrr}
    1 & 0 & 0 & | & 3 & -1 & -3 \\
    0 & 1 & 0 & | & -2 & 1 & 2 \\
    0 & 0 & 1 & | & -4 & 2 & 5
    \end{array}\right] \nonumber \]

    Therefore, the inverse of matrix \(A\) is \(\mathrm{A}^{-1}=\left[\begin{array}{rrr}
    3 & -1 & -3 \\
    -2 & 1 & 2 \\
    -4 & 2 & 5
    \end{array}\right]\)

    One should verify the result by multiplying the two matrices to see if the product does, indeed, equal the identity matrix.
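That verification can be done in a few lines. The following sketch (not from the text) multiplies \(A\) by the inverse just found and checks that the product is \(I_3\):

```python
# Check A * A_inv = I_3 for the matrices of Example 3.
A = [[1, -1, 1], [2, 3, 0], [0, -2, 1]]
Ainv = [[3, -1, -3], [-2, 1, 2], [-4, 2, 5]]

product = [[sum(A[i][k] * Ainv[k][j] for k in range(3))
            for j in range(3)] for i in range(3)]
print(product)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```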

    Now that we know how to find the inverse of a matrix, we will use inverses to solve systems of equations. The method is analogous to solving a simple equation like the one below. \[ \frac{2}{3}x = 4 \nonumber \]

    Example \(\PageIndex{4}\)

    Solve the following equation: \(\frac{2}{3}x = 4\)

    Solution

    To solve the above equation, we multiply both sides of the equation by the multiplicative inverse of \(\frac{2}{3}\), which is \(\frac{3}{2}\). We get

    \[\begin{array}{l}
    \frac{3}{2} \cdot \frac{2}{3} x=4 \cdot \frac{3}{2} \\
    x=6
    \end{array} \nonumber \]

    We use Example \(\PageIndex{4}\) as an analogy to show how linear systems of the form \(AX = B\) are solved.

    To solve a linear system, we first write the system in the matrix equation \(AX = B\), where \(A\) is the coefficient matrix, \(X\) the matrix of variables, and \(B\) the matrix of constant terms.
    We then multiply both sides of this equation by the multiplicative inverse of the matrix \(A\).

    Consider the following example.

    Example \(\PageIndex{5}\)

    Solve the following system

    \begin{aligned}
    3 x+y&=3 \\
    5 x+2 y&=4
    \end{aligned}

    Solution

    To solve the above equation, first we express the system as

    \[AX = B \nonumber \]

    where A is the coefficient matrix, and B is the matrix of constant terms. We get

    \[\left[\begin{array}{ll}
    3 & 1 \\
    5 & 2
    \end{array}\right]\left[\begin{array}{l}
    x \\
    y
    \end{array}\right]=\left[\begin{array}{l}
    3 \\
    4
    \end{array}\right] \nonumber \]

    To solve this system, we multiply both sides of the matrix equation \(AX = B\) by \(A^{-1}\). Matrix multiplication is not commutative, so we need to multiply by \(A^{-1}\) on the left on both sides of the equation.

    Matrix \(A\) is the same matrix \(A\) whose inverse we found in Example \(\PageIndex{2}\), so \(\mathrm{A}^{-1}=\left[\begin{array}{cc}
    2 & -1 \\
    -5 & 3
    \end{array}\right]\)

    Multiplying both sides by \(A^{-1}\), we get

    \[\begin{array}{c}
    {\left[\begin{array}{cc}
    2 & -1 \\
    -5 & 3
    \end{array}\right]\left[\begin{array}{cc}
    3 & 1 \\
    5 & 2
    \end{array}\right]\left[\begin{array}{c}
    x \\
    y
    \end{array}\right]=\left[\begin{array}{cc}
    2 & -1 \\
    -5 & 3
    \end{array}\right]\left[\begin{array}{c}
    3 \\
    4
    \end{array}\right]} \\
    {\left[\begin{array}{cc}
    1 & 0 \\
    0 & 1
    \end{array}\right]\left[\begin{array}{c}
    x \\
    y
    \end{array}\right]=\left[\begin{array}{c}
    2 \\
    -3
    \end{array}\right]} \\
    {\left[\begin{array}{c}
    x \\
    y
    \end{array}\right]=\left[\begin{array}{c}
    2 \\
    -3
    \end{array}\right]}
    \end{array} \nonumber \]

    Therefore, \(x = 2\), and \(y = -3\).
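The computation \(X = A^{-1}B\) above amounts to one matrix-vector product. A short sketch (not from the text), using the inverse found in Example 2:

```python
# Solve 3x + y = 3, 5x + 2y = 4 via X = A^{-1} B.
Ainv = [[2, -1], [-5, 3]]   # inverse of the coefficient matrix, from Example 2
b = [3, 4]                  # constant terms

x, y = (sum(Ainv[i][k] * b[k] for k in range(2)) for i in range(2))
print(x, y)  # 2 -3
```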

    Example \(\PageIndex{6}\)

    Solve the following system:

    \begin{aligned}
    x-y+z &=6 \\
    2 x+3 y &=1 \\
    -2 y+z &=5
    \end{aligned}

    Solution

    To solve the above equation, we write the system in matrix form \(AX = B\) as follows:

    \[\left[\begin{array}{rrr}
    1 & -1 & 1 \\
    2 & 3 & 0 \\
    0 & -2 & 1
    \end{array}\right]\left[\begin{array}{l}
    x \\
    y \\
    z
    \end{array}\right]=\left[\begin{array}{l}
    6 \\
    1 \\
    5
    \end{array}\right] \nonumber \]

    To solve this system, we need the inverse of \(A\). From Example \(\PageIndex{3}\), \(\mathrm{A}^{-1}=\left[\begin{array}{rrr}
    3 & -1 & -3 \\
    -2 & 1 & 2 \\
    -4 & 2 & 5
    \end{array}\right]\)

    Multiplying both sides of the matrix equation \(AX = B\) on the left by \(A^{-1}\), we get

    \[\left[\begin{array}{rrr}
    3 & -1 & -3 \\
    -2 & 1 & 2 \\
    -4 & 2 & 5
    \end{array}\right]\left[\begin{array}{rrr}
    1 & -1 & 1 \\
    2 & 3 & 0 \\
    0 & -2 & 1
    \end{array}\right]\left[\begin{array}{l}
    x \\
    y \\
    z
    \end{array}\right]=\left[\begin{array}{rrr}
    3 & -1 & -3 \\
    -2 & 1 & 2 \\
    -4 & 2 & 5
    \end{array}\right]\left[\begin{array}{l}
    6 \\
    1 \\
    5
    \end{array}\right] \nonumber \]

    After multiplying the matrices, we get

    \begin{aligned}
    {\left[\begin{array}{lll}
    1 & 0 & 0 \\
    0 & 1 & 0 \\
    0 & 0 & 1
    \end{array}\right]\left[\begin{array}{l}
    x \\
    y \\
    z
    \end{array}\right]=\left[\begin{array}{r}
    2 \\
    -1 \\
    3
    \end{array}\right]} \\
    {\left[\begin{array}{l}
    x \\
    y \\
    z
    \end{array}\right]=\left[\begin{array}{r}
    2 \\
    -1 \\
    3
    \end{array}\right]}
    \end{aligned}
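As in the 2×2 case, the solution is a single matrix-vector product \(X = A^{-1}B\). A sketch (not from the text), using the 3×3 inverse found in Example 3:

```python
# Solve the 3x3 system of Example 6 via X = A^{-1} B.
Ainv = [[3, -1, -3], [-2, 1, 2], [-4, 2, 5]]   # inverse from Example 3
b = [6, 1, 5]                                   # constant terms

solution = [sum(Ainv[i][k] * b[k] for k in range(3)) for i in range(3)]
print(solution)  # [2, -1, 3]
```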

    We remind the reader that not every system of equations can be solved by the matrix inverse method. Although the Gauss-Jordan method works for every situation, the matrix inverse method works only in cases where the inverse of the square matrix exists. In such cases the system has a unique solution.

    The Method for Finding the Inverse of a Matrix

    1. Write the augmented matrix \(\left[\mathrm{A} | \mathrm{I}_{\mathrm{n}}\right]\).
    2. Write the augmented matrix in step 1 in reduced row echelon form.
    3. If the reduced row echelon form in 2 is \(\left[\mathrm{I}_{\mathrm{n}} | \mathrm{B}\right]\), then \(B\) is the inverse of \(A\).
    4. If the left side of the reduced row echelon form is not an identity matrix, the inverse does not exist.

    The Method for Solving a System of Equations When a Unique Solution Exists

    1. Express the system in the matrix equation \(AX = B\).

    2. To solve the equation \(AX = B\), we multiply on both sides by \(A^{-1}\).

    \[AX = B \nonumber \]

    \[A^{-1}AX = A^{-1}B \nonumber \]

    \[I X = A^{-1}B \text{, where } I \text{ is the identity matrix} \nonumber \]

    \[X = A^{-1}B \nonumber \]


    This page titled 7.4: Inverse Matrices is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Rupinder Sekhon and Roberta Bloom via source content that was edited to the style and standards of the LibreTexts platform.