2.2: Matrix multiplication and linear combinations (2024)

    \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\)

    The previous section introduced vectors and linear combinations and demonstrated how they provide a means of thinking about linear systems geometrically. In particular, we saw that the vector \(\mathbf b\) is a linear combination of the vectors \(\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_n\) if the linear system corresponding to the augmented matrix

    \begin{equation*} \left[\begin{array}{rrrr|r} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n & \mathbf b \end{array}\right] \end{equation*}

    is consistent.

    Our goal in this section is to introduce matrix multiplication, another algebraic operation that connects linear systems and linear combinations.

    2.2.1 Matrices

    We first thought of a matrix as a rectangular array of numbers. When a matrix has \(m\) rows and \(n\) columns, we say that its dimensions are \(m\times n\text{.}\) For instance, the matrix below is a \(3\times4\) matrix:

    \begin{equation*} \left[ \begin{array}{rrrr} 0 & 4 & -3 & 1 \\ 3 & -1 & 2 & 0 \\ 2 & 0 & -1 & 1 \\ \end{array} \right]\text{.} \end{equation*}

    We may also think of the columns of a matrix as a collection of vectors. For instance, the matrix above may be represented as

    \begin{equation*} \left[ \begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \mathbf v_3 & \mathbf v_4 \end{array} \right] \end{equation*}

    where

    \begin{equation*} \mathbf v_1=\left[\begin{array}{r}0\\3\\2\\ \end{array}\right], \mathbf v_2=\left[\begin{array}{r}4\\-1\\0\\ \end{array}\right], \mathbf v_3=\left[\begin{array}{r}-3\\2\\-1\\ \end{array}\right], \mathbf v_4=\left[\begin{array}{r}1\\0\\1\\ \end{array}\right]\text{.} \end{equation*}

    In this way, we see that our \(3\times 4\) matrix is the same as a collection of 4 vectors in \(\mathbb R^3\text{.}\)

    This means that we may define scalar multiplication and matrix addition operations using the corresponding vector operations.

    \begin{equation*} \begin{aligned} a\left[\begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n \end{array} \right] {}={} & \left[\begin{array}{rrrr} a\mathbf v_1 & a\mathbf v_2 & \ldots & a\mathbf v_n \end{array} \right] \\ \left[\begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n \end{array} \right] {}+{} & \left[\begin{array}{rrrr} \mathbf w_1 & \mathbf w_2 & \ldots & \mathbf w_n \end{array} \right] \\ {}={} & \left[\begin{array}{rrrr} \mathbf v_1+\mathbf w_1 & \mathbf v_2+\mathbf w_2 & \ldots & \mathbf v_n+\mathbf w_n \end{array} \right]. \\ \end{aligned} \end{equation*}
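    Because both operations act column by column, they are easy to mirror in code. The following is a minimal sketch in plain Python (the text's computational examples use Sage; the helper names here are our own), storing a matrix as a list of its column vectors.

```python
# A matrix is stored as a list of its columns; each column is a list of numbers.

def scalar_multiple(a, A):
    """Multiply a matrix by a scalar: scale every column by a."""
    return [[a * entry for entry in col] for col in A]

def matrix_sum(A, B):
    """Add two matrices column by column; their dimensions must agree."""
    assert len(A) == len(B) and all(len(v) == len(w) for v, w in zip(A, B))
    return [[x + y for x, y in zip(v, w)] for v, w in zip(A, B)]

# The 3x4 matrix from above, stored as four columns in R^3.
A = [[0, 3, 2], [4, -1, 0], [-3, 2, -1], [1, 0, 1]]
print(scalar_multiple(2, A))  # every entry doubled, column by column
```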

    Preview Activity 2.2.1. Matrix operations.

    1. Compute the scalar multiple

      \begin{equation*} -3\left[ \begin{array}{rrr} 3 & 1 & 0 \\ -4 & 3 & -1 \\ \end{array} \right]\text{.} \end{equation*}

    2. Suppose that \(A\) and \(B\) are two matrices. What do we need to know about their dimensions before we can form the sum \(A+B\text{?}\)
    3. Find the sum

      \begin{equation*} \left[ \begin{array}{rr} 0 & -3 \\ 1 & -2 \\ 3 & 4 \\ \end{array} \right] + \left[ \begin{array}{rr} 4 & -1 \\ -2 & 2 \\ 1 & 1 \\ \end{array} \right]\text{.} \end{equation*}

    4. The matrix \(I_n\text{,}\) which we call the identity matrix, is the \(n\times n\) matrix whose entries are zero except for the diagonal entries, which are all 1. For instance,

      \begin{equation*} I_3 = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right]\text{.} \end{equation*}

      If we can form the sum \(A+I_n\text{,}\) what must be true about the matrix \(A\text{?}\)

    5. Find the matrix \(A - 2I_3\) where

      \begin{equation*} A = \left[ \begin{array}{rrr} 1 & 2 & -2 \\ 2 & -3 & 3 \\ -2 & 3 & 4 \\ \end{array} \right]\text{.} \end{equation*}

    As this preview activity shows, both of these operations are relatively straightforward. Some care, however, is required when adding matrices. Since we need the same number of vectors to add and since the vectors must be of the same dimension, two matrices must have the same dimensions as well if we wish to form their sum.

    The identity matrix will play an important role at various points in our explorations. It is important to note that it is a square matrix, meaning it has an equal number of rows and columns, so any matrix added to it must be square as well. Though we wrote it as \(I_n\) in the activity, we will often just write \(I\) when the dimensions are clear.

    2.2.2 Matrix-vector multiplication and linear combinations

    A more important operation is matrix multiplication, which allows us to compactly express linear systems. For now, we will work with the product of a matrix and a vector, which we illustrate with an example.

    Example 2.2.1

    Suppose we have the matrix \(A\) and vector \(\mathbf x\) as given below.

    \begin{equation*} A = \left[\begin{array}{rr} -2 & 3 \\ 0 & 2 \\ 3 & 1 \\ \end{array}\right], \mathbf x = \left[\begin{array}{r} 2 \\ 3 \\ \end{array}\right]\text{.} \end{equation*}

    Their product will be defined to be the linear combination of the columns of \(A\) using the components of \(\mathbf x\) as weights. This means that

    \begin{equation*} \begin{aligned} A\mathbf x = \left[\begin{array}{rr} -2 & 3 \\ 0 & 2 \\ 3 & 1 \\ \end{array}\right] \left[\begin{array}{r} 2 \\ 3 \\ \end{array}\right] {}={} & 2 \left[\begin{array}{r} -2 \\ 0 \\ 3 \\ \end{array}\right] + 3 \left[\begin{array}{r} 3 \\ 2 \\ 1 \\ \end{array}\right] \\ \\ {}={} & \left[\begin{array}{r} -4 \\ 0 \\ 6 \\ \end{array}\right] + \left[\begin{array}{r} 9 \\ 6 \\ 3 \\ \end{array}\right] \\ \\ {}={} & \left[\begin{array}{r} 5 \\ 6 \\ 9 \\ \end{array}\right]. \\ \end{aligned} \end{equation*}

    Let's take note of the dimensions of the matrix and vectors. The two components of the vector \(\mathbf x\) are weights used to form a linear combination of the columns of \(A\text{.}\) Since \(\mathbf x\) has two components, \(A\) must have two columns. In other words, the number of columns of \(A\) must equal the dimension of the vector \(\mathbf x\text{.}\)

    In the same way, the columns of \(A\) are 3-dimensional so any linear combination of them is 3-dimensional as well. Therefore, \(A\mathbf x\) will be 3-dimensional.

    We then see that if \(A\) is a \(3\times2\) matrix, \(\mathbf x\) must be a 2-dimensional vector and \(A\mathbf x\) will be 3-dimensional.

    More generally, we have the following definition.

    Definition 2.2.2

    The product of a matrix \(A\) and a vector \(\mathbf x\) is the linear combination of the columns of \(A\) using the components of \(\mathbf x\) as weights.

    If \(A\) is an \(m\times n\) matrix, then \(\mathbf x\) must be an \(n\)-dimensional vector, and the product \(A\mathbf x\) will be an \(m\)-dimensional vector. If

    \begin{equation*} A=\left[\begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n \end{array}\right], \mathbf x = \left[\begin{array}{r} c_1 \\ c_2 \\ \vdots \\ c_n \end{array}\right], \end{equation*}

    then

    \begin{equation*} A\mathbf x = c_1\mathbf v_1 + c_2\mathbf v_2 + \ldots + c_n\mathbf v_n\text{.} \end{equation*}
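    This definition translates directly into a short program. As a sketch in plain Python (the function name is our own choice, not standard), we can form \(A\mathbf x\) as a weighted sum of the columns of \(A\) and reproduce Example 2.2.1:

```python
def matrix_vector(A, x):
    """Compute Ax as the linear combination x[0]*v1 + ... + x[n-1]*vn
    of the columns of A, where A is stored as a list of its columns."""
    m = len(A[0])                   # dimension of each column of A
    result = [0] * m
    for weight, col in zip(x, A):   # one weight per column
        for i in range(m):
            result[i] += weight * col[i]
    return result

# Example 2.2.1: the columns of A are (-2,0,3) and (3,2,1), and x = (2,3).
A = [[-2, 0, 3], [3, 2, 1]]
print(matrix_vector(A, [2, 3]))  # [5, 6, 9]
```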

    The next activity introduces some properties of matrix multiplication.

    Activity 2.2.2. Matrix-vector multiplication.

    1. Find the matrix product

      \begin{equation*} \left[ \begin{array}{rrrr} 1 & 2 & 0 & -1 \\ 2 & 4 & -3 & -2 \\ -1 & -2 & 6 & 1 \\ \end{array} \right] \left[ \begin{array}{r} 3 \\ 1 \\ -1 \\ 1 \\ \end{array} \right]\text{.} \end{equation*}

    2. Suppose that \(A\) is the matrix

      \begin{equation*} \left[ \begin{array}{rrr} 3 & -1 & 0 \\ 0 & -2 & 4 \\ 2 & 1 & 5 \\ 1 & 0 & 3 \\ \end{array} \right]\text{.} \end{equation*}

      If \(A\mathbf x\) is defined, what is the dimension of the vector \(\mathbf x\) and what is the dimension of \(A\mathbf x\text{?}\)

    3. A vector whose entries are all zero is denoted by \(\zerovec\text{.}\) If \(A\) is a matrix, what is the product \(A\zerovec\text{?}\)
    4. Suppose that \(I = \left[\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]\) is the identity matrix and \(\mathbf x=\threevec{x_1}{x_2}{x_3}\text{.}\) Find the product \(I\mathbf x\) and explain why \(I\) is called the identity matrix.
    5. Suppose we write the matrix \(A\) in terms of its columns as

      \begin{equation*} A = \left[ \begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n \\ \end{array} \right]\text{.} \end{equation*}

      If the vector \(\mathbf e_1 = \left[\begin{array}{r} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right]\text{,}\) what is the product \(A\mathbf e_1\text{?}\)

    6. Suppose that

      \begin{equation*} A = \left[ \begin{array}{rr} 1 & 2 \\ -1 & 1 \\ \end{array} \right], \mathbf b = \left[ \begin{array}{r} 6 \\ 0 \end{array} \right]\text{.} \end{equation*}

      Is there a vector \(\mathbf x\) such that \(A\mathbf x = \mathbf b\text{?}\)

    Multiplication of a matrix \(A\) and a vector is defined as a linear combination of the columns of \(A\text{.}\) However, there is a shortcut for computing such a product. Let's look at our previous example and focus on the first row of the product.

    \begin{equation*} \left[\begin{array}{rr} -2 & 3 \\ 0 & 2 \\ 3 & 1 \\ \end{array}\right] \left[\begin{array}{r} 2 \\ 3 \\ \end{array}\right] = 2 \left[\begin{array}{r} -2 \\ * \\ * \\ \end{array}\right] + 3 \left[\begin{array}{r} 3 \\ * \\ * \\ \end{array}\right] = \left[\begin{array}{c} 2(-2)+3(3) \\ * \\ * \\ \end{array}\right] = \left[\begin{array}{r} 5 \\ * \\ * \\ \end{array}\right]\text{.} \end{equation*}

    To find the first component of the product, we consider the first row of the matrix. We then multiply the first entry in that row by the first component of the vector, the second entry by the second component of the vector, and so on, and add the results. In this way, we see that the third component of the product would be obtained from the third row of the matrix by computing \(2(3) + 3(1) = 9\text{.}\)

    You are encouraged to evaluate the product in part 1 of the previous activity using this shortcut and compare the result to what you found before.
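    This shortcut amounts to taking a dot product of each row of the matrix with \(\mathbf x\). A sketch in plain Python (with the matrix now stored by rows; the function name is our own) reproduces the product from Example 2.2.1:

```python
def matrix_vector_by_rows(rows, x):
    """Row-at-a-time shortcut: entry i of Ax is the dot product of row i with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in rows]

# The matrix from Example 2.2.1, stored as three rows.
rows = [[-2, 3], [0, 2], [3, 1]]
print(matrix_vector_by_rows(rows, [2, 3]))  # [5, 6, 9], same as before
```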

    Activity 2.2.3.

    In addition, Sage can find the product of a matrix and vector using the * operator. For example,

    A = matrix(2,2,[1,2,2,1])
    v = vector([3,-1])
    A*v
    1. Use Sage to evaluate the product from part 1 of Activity 2.2.2 once again.
    2. In Sage, define the matrix and vectors

      \begin{equation*} A = \left[ \begin{array}{rr} -2 & 0 \\ 3 & 1 \\ 4 & 2 \\ \end{array} \right], \zerovec = \left[ \begin{array}{r} 0 \\ 0 \end{array} \right], \mathbf v = \left[ \begin{array}{r} -2 \\ 3 \end{array} \right], \mathbf w = \left[ \begin{array}{r} 1 \\ 2 \end{array} \right]\text{.} \end{equation*}

    3. What do you find when you evaluate \(A\zerovec\text{?}\)
    4. What do you find when you evaluate \(A(3\mathbf v)\) and \(3(A\mathbf v)\) and compare your results?
    5. What do you find when you evaluate \(A(\mathbf v+\mathbf w)\) and \(A\mathbf v + A\mathbf w\) and compare your results?
    6. If \(I=\left[\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]\) is the \(3\times3\) identity matrix, what is the product \(IA\text{?}\)

    This activity demonstrates several general properties satisfied by matrix multiplication that we record here.

    Proposition 2.2.3. Linearity of matrix multiplication.

    If \(A\) is a matrix, \(\mathbf v\) and \(\mathbf w\) vectors, and \(c\) a scalar, then

    • \(A\zerovec = \zerovec\text{.}\)
    • \(A(c\mathbf v) = cA\mathbf v\text{.}\)
    • \(A(\mathbf v+\mathbf w) = A\mathbf v + A\mathbf w\text{.}\)
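    These properties are easy to spot-check numerically. A sketch in plain Python (using the matrix from the activity, stored as a list of columns; the helper name is our own) verifies all three on one example:

```python
def mat_vec(A, x):
    """Ax as a linear combination of the columns of A (stored as columns)."""
    return [sum(c * col[i] for c, col in zip(x, A)) for i in range(len(A[0]))]

A = [[-2, 3, 4], [0, 1, 2]]     # the columns of the 3x2 matrix from the activity
v, w, c = [-2, 3], [1, 2], 3

assert mat_vec(A, [0, 0]) == [0, 0, 0]                           # A0 = 0
assert mat_vec(A, [c * vi for vi in v]) \
       == [c * yi for yi in mat_vec(A, v)]                       # A(cv) = c(Av)
assert mat_vec(A, [vi + wi for vi, wi in zip(v, w)]) \
       == [p + q for p, q in zip(mat_vec(A, v), mat_vec(A, w))]  # A(v+w) = Av+Aw
print("all three properties hold on this example")
```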

    2.2.3 Matrix-vector multiplication and linear systems

    So far, we have begun with a matrix \(A\) and a vector \(\mathbf x\) and formed their product \(A\mathbf x = \mathbf b\text{.}\) We would now like to turn this around: beginning with a matrix \(A\) and a vector \(\mathbf b\text{,}\) we will ask if we can find a vector \(\mathbf x\) such that \(A\mathbf x = \mathbf b\text{.}\) This will naturally lead back to linear systems.

    To see the connection between the matrix equation \(A\mathbf x = \mathbf b\) and linear systems, let's write the matrix \(A\) in terms of its columns \(\mathbf v_i\) and \(\mathbf x\) in terms of its components.

    \begin{equation*} A = \left[ \begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n \end{array} \right], \mathbf x = \left[ \begin{array}{r} c_1 \\ c_2 \\ \vdots \\ c_n \\ \end{array} \right]\text{.} \end{equation*}

    We know that the matrix product \(A\mathbf x\) forms a linear combination of the columns of \(A\text{.}\) Therefore, the equation \(A\mathbf x = \mathbf b\) is merely a compact way of writing the equation for the weights \(c_i\text{:}\)

    \begin{equation*} c_1\mathbf v_1 + c_2\mathbf v_2 + \ldots + c_n\mathbf v_n = \mathbf b\text{.} \end{equation*}

    We have seen this equation before: Remember that Proposition 2.1.7 says that the solutions of this equation are the same as the solutions to the linear system whose augmented matrix is

    \begin{equation*} \left[\begin{array}{rrrr|r} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n & \mathbf b \end{array}\right]\text{.} \end{equation*}

    This gives us three different ways of looking at the same solution space.

    Proposition 2.2.4.

    If \(A=\left[\begin{array}{rrrr} \mathbf v_1& \mathbf v_2& \ldots& \mathbf v_n \end{array}\right]\) and \(\mathbf x=\left[ \begin{array}{r} x_1 \\ x_2 \\ \vdots \\ x_n \\ \end{array}\right] \text{,}\) then the following are equivalent.

    • The vector \(\mathbf x\) satisfies \(A\mathbf x = \mathbf b \text{.}\)
    • The vector \(\mathbf b\) is a linear combination of the columns of \(A\) with weights \(x_j\text{:}\)

      \begin{equation*} x_1\mathbf v_1 + x_2\mathbf v_2 + \ldots + x_n\mathbf v_n = \mathbf b\text{.} \end{equation*}

    • The components of \(\mathbf x\) form a solution to the linear system corresponding to the augmented matrix

      \begin{equation*} \left[\begin{array}{rrrr|r} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_n & \mathbf b \end{array}\right]\text{.} \end{equation*}

    When the matrix \(A = \left[\begin{array}{rrrr} \mathbf v_1& \mathbf v_2& \ldots& \mathbf v_n\end{array}\right]\text{,}\) we will frequently write

    \begin{equation*} \left[\begin{array}{rrrr|r} \mathbf v_1& \mathbf v_2& \ldots& \mathbf v_n& \mathbf b\end{array}\right] = \left[ \begin{array}{r|r} A & \mathbf b \end{array}\right] \end{equation*}

    and say that we augment the matrix \(A\) by the vector \(\mathbf b\text{.}\)

    We may think of \(A\mathbf x = \mathbf b\) as merely giving a notationally compact way of writing a linear system. This form of the equation, however, will allow us to focus on important features of the system that determine its solution space.

    Example 2.2.5

    Describe the solution space of the equation

    \begin{equation*} \left[\begin{array}{rrr} 2 & 0 & 2 \\ 4 & -1 & 6 \\ 1 & 3 & -5 \\ \end{array}\right] \mathbf x = \left[\begin{array}{r} 0 \\ -5 \\ 15 \end{array}\right] \end{equation*}

    By Proposition 2.2.4, the solution space of this equation is the same as the solution space of the equation

    \begin{equation*} x_1\left[\begin{array}{r}2\\4\\1\end{array}\right] + x_2\left[\begin{array}{r}0\\-1\\3\end{array}\right]+ x_3\left[\begin{array}{r}2\\6\\-5\end{array}\right]= \left[\begin{array}{r}0\\-5\\15\end{array}\right]\text{,} \end{equation*}

    which is the same as the linear system corresponding to

    \begin{equation*} \left[\begin{array}{rrr|r} 2 & 0 & 2 & 0 \\ 4 & -1 & 6 & -5 \\ 1 & 3 & -5 & 15 \\ \end{array} \right]\text{.} \end{equation*}

    We will study the solutions to this linear system by finding the reduced row echelon form of the augmented matrix:

    \begin{equation*} \left[\begin{array}{rrr|r} 2 & 0 & 2 & 0 \\ 4 & -1 & 6 & -5 \\ 1 & 3 & -5 & 15 \\ \end{array} \right] \sim \left[\begin{array}{rrr|r} 1 & 0 & 1 & 0 \\ 0 & 1 & -2 & 5 \\ 0 & 0 & 0 & 0 \\ \end{array} \right]\text{.} \end{equation*}

    This gives us the system of equations

    \begin{equation*} \begin{alignedat}{4} x_1 & & & {}+{} & x_3 & {}={} & 0 \\ & & x_2 & {}-{} & 2x_3 & {}={} & 5 \\ \end{alignedat}\text{.} \end{equation*}

    The variable \(x_3\) is free so we may write the solution space parametrically as

    \begin{equation*} \begin{aligned} x_1 & {}={} -x_3 \\ x_2 & {}={} 5+2x_3 \\ \end{aligned}\text{.} \end{equation*}

    Since we originally asked to describe the solutions to the equation \(A\mathbf x = \mathbf b\text{,}\) we will express the solution in terms of the vector \(\mathbf x\text{:}\)

    \begin{equation*} \mathbf x =\left[ \begin{array}{r} x_1 \\ x_2 \\ x_3 \end{array} \right] = \left[ \begin{array}{r} -x_3 \\ 5 + 2x_3 \\ x_3 \end{array} \right] =\left[\begin{array}{r}0\\5\\0\end{array}\right] +x_3\left[\begin{array}{r}-1\\2\\1\end{array}\right] \end{equation*}

    This shows that the solutions \(\mathbf x\) may be written in the form \(\mathbf v + x_3\mathbf w\text{,}\) for appropriate vectors \(\mathbf v\) and \(\mathbf w\text{.}\) Geometrically, the solution space is a line in \(\mathbb R^3\) through \(\mathbf v\) moving parallel to \(\mathbf w\text{.}\)
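    We can check this description of the solution space numerically. A quick sketch in plain Python (computing the product row by row; the helper name is our own) confirms that sampled points on the line all satisfy the equation:

```python
def mat_vec(rows, x):
    """Ax computed row by row, with A stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in rows]

A = [[2, 0, 2], [4, -1, 6], [1, 3, -5]]
b = [0, -5, 15]

# Every point x = (0,5,0) + t(-1,2,1) on the line should satisfy Ax = b.
for t in [-2, 0, 1, 3]:
    x = [-t, 5 + 2 * t, t]
    assert mat_vec(A, x) == b
print("each sampled point on the line solves Ax = b")
```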

    Activity 2.2.4. The equation \(A\mathbf x = \mathbf b\).

    1. Consider the linear system

      \begin{equation*} \begin{alignedat}{4} 2x & {}+{} & y & {}-{} & 3z & {}={} & 4 \\ -x & {}+{} & 2y & {}+{} & z & {}={} & 3 \\ 3x & {}-{} & y & & & {}={} & -4 \\ \end{alignedat}\text{.} \end{equation*}

      Identify the matrix \(A\) and vector \(\mathbf b\) to express this system in the form \(A\mathbf x = \mathbf b\text{.}\)

    2. If \(A\) and \(\mathbf b\) are as below, write the linear system corresponding to the equation \(A\mathbf x=\mathbf b\text{.}\)

      \begin{equation*} A = \left[\begin{array}{rrr} 3 & -1 & 0 \\ -2 & 0 & 6 \end{array} \right], \mathbf b = \left[\begin{array}{r} -6 \\ 2 \end{array} \right] \end{equation*}

      and describe the solution space.

    3. Describe the solution space of the equation

      \begin{equation*} \left[ \begin{array}{rrrr} 1 & 2 & 0 & -1 \\ 2 & 4 & -3 & -2 \\ -1 & -2 & 6 & 1 \\ \end{array} \right] \mathbf x = \left[\begin{array}{r} -1 \\ 1 \\ 5 \end{array} \right]\text{.} \end{equation*}

    4. Suppose \(A\) is an \(m\times n\) matrix. What can you guarantee about the solution space of the equation \(A\mathbf x = \zerovec\text{?}\)

    2.2.4 Matrix products

    In this section, we have developed some algebraic operations on matrices with the aim of simplifying our description of linear systems. We will now introduce a final operation, the product of two matrices, that will become important when we study linear transformations in Section 2.5.

    Given matrices \(A\) and \(B\text{,}\) we will form their product \(AB\) by first writing \(B\) in terms of its columns:

    \begin{equation*} B = \left[\begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \ldots & \mathbf v_p \end{array}\right]\text{.} \end{equation*}

    We then define

    \begin{equation*} AB = \left[\begin{array}{rrrr} A\mathbf v_1 & A\mathbf v_2 & \ldots & A\mathbf v_p \end{array}\right]\text{.} \end{equation*}

    Example 2.2.6

    Given the matrices

    \begin{equation*} A = \left[\begin{array}{rr} 4 & 2 \\ 0 & 1 \\ -3 & 4 \\ 2 & 0 \\ \end{array}\right], B = \left[\begin{array}{rrr} -2 & 3 & 0 \\ 1 & 2 & -2 \\ \end{array}\right]\text{,} \end{equation*}

    we have

    \begin{equation*} AB = \left[\begin{array}{rrr} A \twovec{-2}{1} & A \twovec{3}{2} & A \twovec{0}{-2} \end{array}\right] = \left[\begin{array}{rrr} -6 & 16 & -4 \\ 1 & 2 & -2 \\ 10 & -1 & -8 \\ -4 & 6 & 0 \end{array}\right]\text{.} \end{equation*}

    It is important to note that we can only multiply matrices if the dimensions of the matrices are compatible. More specifically, when constructing the product \(AB\text{,}\) the matrix \(A\) multiplies the columns of \(B\text{.}\) Therefore, the number of columns of \(A\) must equal the number of rows of \(B\text{.}\) When this condition is met, the number of rows of \(AB\) is the number of rows of \(A\text{,}\) and the number of columns of \(AB\) is the number of columns of \(B\text{.}\)
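    The column-by-column definition, together with the dimension rule just described, can be sketched in plain Python (helper names are our own); this reproduces Example 2.2.6:

```python
def mat_vec(rows, x):
    """Ax computed row by row, with A stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in rows]

def mat_mat(A_rows, B_cols):
    """AB, built one column at a time: column j of AB is A times column j of B.
    The length of each row of A must equal the length of each column of B."""
    return [mat_vec(A_rows, col) for col in B_cols]

A = [[4, 2], [0, 1], [-3, 4], [2, 0]]   # 4x2, stored by rows
B = [[-2, 1], [3, 2], [0, -2]]          # 2x3, stored by columns

AB = mat_mat(A, B)                      # 3 columns, each in R^4: a 4x3 matrix
print(AB)  # columns [-6,1,10,-4], [16,2,-1,6], [-4,-2,-8,0]
```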

    Activity 2.2.5.

    Consider the matrices

    \begin{equation*} A = \left[\begin{array}{rrr} 1 & 3 & 2 \\ -3 & 4 & -1 \\ \end{array}\right], B = \left[\begin{array}{rr} 3 & 0 \\ 1 & 2 \\ -2 & -1 \\ \end{array}\right]\text{.} \end{equation*}

    1. Suppose we want to form the product \(AB\text{.}\) Before computing, first explain how you know this product exists and then explain what the dimensions of the resulting matrix will be.
    2. Compute the product \(AB\text{.}\)
    3. Sage can multiply matrices using the * operator. Define the matrices \(A\) and \(B\) in the Sage cell below and check your work by computing \(AB\text{.}\)
    4. Are you able to form the matrix product \(BA\text{?}\) If so, use the Sage cell above to find \(BA\text{.}\) Is it generally true that \(AB = BA\text{?}\)
    5. Suppose we form the three matrices.

      \begin{equation*} A = \left[\begin{array}{rr} 1 & 2 \\ 3 & -2 \\ \end{array}\right], B = \left[\begin{array}{rr} 0 & 4 \\ 2 & -1 \\ \end{array}\right], C = \left[\begin{array}{rr} -1 & 3 \\ 4 & 3 \\ \end{array}\right]\text{.} \end{equation*}

      Compare what happens when you compute \(A(B+C)\) and \(AB + AC\text{.}\) State your finding as a general principle.

    6. Compare the results of evaluating \(A(BC)\) and \((AB)C\) and state your finding as a general principle.
    7. When we are dealing with real numbers, we know if \(a\neq 0\) and \(ab = ac\text{,}\) then \(b=c\text{.}\) Define matrices

      \begin{equation*} A = \left[\begin{array}{rr} 1 & 2 \\ -2 & -4 \\ \end{array}\right], B = \left[\begin{array}{rr} 3 & 0 \\ 1 & 3 \\ \end{array}\right], C = \left[\begin{array}{rr} 1 & 2 \\ 2 & 2 \\ \end{array}\right] \end{equation*}

      and compute \(AB\) and \(AC\text{.}\)

      If \(AB = AC\text{,}\) is it necessarily true that \(B = C\text{?}\)
    8. Again, with real numbers, we know that if \(ab = 0\text{,}\) then either \(a = 0\) or \(b=0\text{.}\) Define

      \begin{equation*} A = \left[\begin{array}{rr} 1 & 2 \\ -2 & -4 \\ \end{array}\right], B = \left[\begin{array}{rr} 2 & -4 \\ -1 & 2 \\ \end{array}\right] \end{equation*}

      and compute \(AB\text{.}\)

      If \(AB = 0\text{,}\) is it necessarily true that either \(A=0\) or \(B=0\text{?}\)

    This activity demonstrated some general properties about products of matrices, which mirror some properties about operations with real numbers.

    Properties of Matrix-matrix Multiplication.

    If \(A\text{,}\) \(B\text{,}\) and \(C\) are matrices such that the following operations are defined, it follows that

    Associativity:

    \(A(BC) = (AB)C\text{.}\)

    Distributivity:

    \(A(B+C) = AB+AC\text{.}\)

    \((A+B)C = AC+BC\text{.}\)

    At the same time, there are a few properties that hold for real numbers that do not hold for matrices.

    Things to be careful of.

    The following properties hold for real numbers but not for matrices.

    Commutativity:

    It is not generally true that \(AB = BA\text{.}\)

    Cancellation:

    It is not generally true that \(AB = AC\) implies that \(B = C\text{.}\)

    Zero divisors:

    It is not generally true that \(AB = 0\) implies that either \(A=0\) or \(B=0\text{.}\)
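    The last two failures are worth seeing concretely. Using the matrices from the activity, a sketch in plain Python (matrices stored by rows; the helper name is our own) exhibits a product of two nonzero matrices that is zero:

```python
def mat_mul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [-2, -4]]
B = [[2, -4], [-1, 2]]

assert mat_mul(A, B) == [[0, 0], [0, 0]]   # AB = 0, yet A != 0 and B != 0
assert mat_mul(B, A) != [[0, 0], [0, 0]]   # and BA != 0, so AB != BA here
print("AB is the zero matrix even though neither factor is zero")
```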

    Summary

    In this section, we have found an especially simple way to express linear systems using matrix multiplication.

    • If \(A\) is an \(m\times n\) matrix and \(\mathbf x\) an \(n\)-dimensional vector, then \(A\mathbf x\) is the linear combination of the columns of \(A\) using the components of \(\mathbf x\) as weights. The vector \(A\mathbf x\) is \(m\)-dimensional.
    • The solution space to the equation \(A\mathbf x = \mathbf b\) is the same as the solution space to the linear system corresponding to the augmented matrix \(\left[ \begin{array}{r|r} A & \mathbf b \end{array}\right]\text{.}\)
    • If \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix, we can form the product \(AB\text{,}\) which is an \(m\times p\) matrix whose columns are the products of \(A\) and the columns of \(B\text{.}\)

    Exercises 2.2.6

    1

    Consider the system of linear equations

    \begin{equation*} \begin{alignedat}{4} x & {}+{} & 2y & {}-{} & z & {}={} & 1 \\ 3x & {}+{} & 2y & {}+{} & 2z & {}={} & 7 \\ -x & & & {}+{} & 4z & {}={} & -3 \\ \end{alignedat}\text{.} \end{equation*}

    1. Find the matrix \(A\) and vector \(\mathbf b\) that expresses this linear system in the form \(A\mathbf x=\mathbf b\text{.}\)
    2. Give a description of the solution space to the equation \(A\mathbf x = \mathbf b\text{.}\)
    2

    Suppose that \(A\) is a \(135\times2201\) matrix. If \(A\mathbf x\) is defined, what is the dimension of \(\mathbf x\text{?}\) What is the dimension of \(A\mathbf x\text{?}\)

    3

    Suppose that \(A \) is a \(3\times2\) matrix whose columns are \(\mathbf v_1\) and \(\mathbf v_2\text{;}\) that is,

    \begin{equation*} A = \left[\begin{array}{rr} \mathbf v_1 & \mathbf v_2 \end{array} \right]\text{.} \end{equation*}

    1. What is the dimension of the vectors \(\mathbf v_1\) and \(\mathbf v_2\text{?}\)
    2. What is the product \(A\twovec{1}{0}\) in terms of \(\mathbf v_1\) and \(\mathbf v_2\text{?}\) What is the product \(A\twovec{0}{1}\text{?}\) What is the product \(A\twovec{2}{3}\text{?}\)
    3. Suppose that

      \begin{equation*} A\twovec{1}{0} = \threevec{3}{-2}{1}, A\twovec{0}{1} = \threevec{0}{3}{2}\text{.} \end{equation*}

      What is the matrix \(A\text{?}\)

    4

    Shown below are vectors \(\mathbf v_1\) and \(\mathbf v_2\text{.}\) Suppose that the matrix \(A\) is

    \begin{equation*} A = \left[\begin{array}{rr} \mathbf v_1 & \mathbf v_2 \end{array}\right]\text{.} \end{equation*}

    1. What are the dimensions of the matrix \(A\text{?}\)
    2. On the plot above, indicate the vectors

      \begin{equation*} A\twovec{1}{0}, A\twovec{2}{3}, A\twovec{0}{-3}\text{.} \end{equation*}

    3. Find all vectors \(\mathbf x\) such that \(A\mathbf x=\mathbf b\text{.}\)
    4. Find all vectors \(\mathbf x\) such that \(A\mathbf x = \zerovec\text{.}\)
    5

    Suppose that

    \begin{equation*} A=\left[\begin{array}{rrr} 1 & 0 & 2 \\ 2 & 2 & 2 \\ -1 & -3 & 1 \end{array}\right]\text{.} \end{equation*}

    1. Describe the solution space to the equation \(A\mathbf x = \zerovec\text{.}\)
    2. Find a \(3\times2\) matrix \(B\) with no zero entries such that \(AB = 0\text{.}\)
    6

    Consider the matrix

    \begin{equation*} A=\left[\begin{array}{rrrr} 1 & 2 & -4 & -4 \\ 2 & 3 & 0 & 1 \\ 1 & 0 & 4 & 6 \\ \end{array}\right]\text{.} \end{equation*}

    1. Find the product \(A\mathbf x\) where

      \begin{equation*} \mathbf x = \fourvec{1}{-2}{0}{2}\text{.} \end{equation*}

    2. Give a description of the vectors \(\mathbf x\) such that

      \begin{equation*} A\mathbf x = \threevec{-1}{15}{17}\text{.} \end{equation*}

    3. Find the reduced row echelon form of \(A\) and identify the pivot positions.
    4. Can you find a vector \(\mathbf b\) such that \(A\mathbf x=\mathbf b\) is inconsistent?
    5. For a general 3-dimensional vector \(\mathbf b\text{,}\) what can you say about the solution space of the equation \(A\mathbf x = \mathbf b\text{?}\)
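    The product in part 1 is just the linear combination of the columns of \(A\) weighted by the entries of \(\mathbf x\text{,}\) so it is easy to check numerically. A minimal NumPy sketch (the matrix and vector are taken from the exercise above):

    ```python
    import numpy as np

    # Matrix A and vector x from the exercise
    A = np.array([[1, 2, -4, -4],
                  [2, 3,  0,  1],
                  [1, 0,  4,  6]])
    x = np.array([1, -2, 0, 2])

    # A @ x computes 1*a1 - 2*a2 + 0*a3 + 2*a4, where ai are the columns of A
    print(A @ x)  # [-11  -2  13]
    ```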
    7

    The operations that we perform in Gaussian elimination can be accomplished using matrix multiplication. This observation is the basis of an important technique that we will investigate in a subsequent chapter.

    Let's consider the matrix

    \begin{equation*} A = \left[\begin{array}{rrr} 1 & 2 & -1 \\ 2 & 0 & 2 \\ -3 & 2 & 3 \\ \end{array}\right]\text{.} \end{equation*}

    1. Suppose that

      \begin{equation*} S = \left[\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]\text{.} \end{equation*}

      Verify that \(SA\) is the matrix that results when the second row of \(A\) is scaled by a factor of 7. What matrix \(S\) would scale the third row by \(-3\text{?}\)

    2. Suppose that

      \begin{equation*} P = \left[\begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]\text{.} \end{equation*}

      Verify that \(PA\) is the matrix that results from interchanging the first and second rows. What matrix \(P\) would interchange the first and third rows?

    3. Suppose that

      \begin{equation*} L_1 = \left[\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]\text{.} \end{equation*}

      Verify that \(L_1A\) is the matrix that results from multiplying the first row of \(A\) by \(-2\) and adding it to the second row. What matrix \(L_2\) would multiply the first row by 3 and add it to the third row?

    4. When we performed Gaussian elimination, our first goal was to perform row operations that brought the matrix into a triangular form. For our matrix \(A\text{,}\) find the row operations needed to find a row equivalent matrix \(U\) in triangular form. By expressing these row operations in terms of matrix multiplication, find a matrix \(L\) such that \(LA = U\text{.}\)
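    Each of these elementary matrices can be checked numerically. The sketch below verifies the three matrices given in the exercise and then assembles one possible answer to part 4: the product \(L = L_3L_2L_1\) of three row-addition matrices, chosen so that \(LA=U\) is triangular (this is one valid choice of row operations, not the only one):

    ```python
    import numpy as np

    A = np.array([[ 1, 2, -1],
                  [ 2, 0,  2],
                  [-3, 2,  3]])

    S  = np.array([[1, 0, 0], [0, 7, 0], [0, 0, 1]])   # scales row 2 by 7
    P  = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])   # swaps rows 1 and 2
    L1 = np.array([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])  # row2 += -2 * row1

    print(S @ A)   # row 2 of A scaled by 7
    print(P @ A)   # rows 1 and 2 interchanged
    print(L1 @ A)  # -2 times row 1 added to row 2

    # One path to triangular form: after L1, add 3*row1 to row3 (L2),
    # then add 2*row2 to row3 (L3).  Then L = L3 L2 L1 gives L A = U.
    L2 = np.array([[1, 0, 0], [0, 1, 0], [3, 0, 1]])
    L3 = np.array([[1, 0, 0], [0, 1, 0], [0, 2, 1]])
    L = L3 @ L2 @ L1
    U = L @ A
    print(U)  # upper triangular: entries below the diagonal are zero
    ```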
    8

    In this exercise, you will construct the inverse of a matrix, a subject that we will investigate more fully in the next chapter. Suppose that \(A\) is the \(2\times2\) matrix:

    \begin{equation*} A = \left[\begin{array}{rr} 3 & -2 \\ -2 & 1 \\ \end{array}\right]\text{.} \end{equation*}

    1. Find the vectors \(\mathbf b_1\) and \(\mathbf b_2\) such that the matrix \(B=\left[\begin{array}{rr} \mathbf b_1 & \mathbf b_2 \end{array}\right]\) satisfies

      \begin{equation*} AB = I = \left[\begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array}\right]\text{.} \end{equation*}

    2. In general, it is not true that \(AB = BA\text{.}\) Check that it is true, however, for the specific \(A\) and \(B\) that appear in this problem.
    3. Suppose that \(\mathbf x = \twovec{x_1}{x_2}\text{.}\) What do you find when you evaluate \(I\mathbf x\text{?}\)
    4. Suppose that we want to solve the equation \(A\mathbf x = \mathbf b\text{.}\) We know how to do this using Gaussian elimination; let's use our matrix \(B\) to find a different way:

      \begin{equation*} \begin{aligned} A\mathbf x & {}={} \mathbf b \\ B(A\mathbf x) & {}={} B\mathbf b \\ (BA)\mathbf x & {}={} B\mathbf b \\ I\mathbf x & {}={} B\mathbf b \\ \mathbf x & {}={} B\mathbf b \\ \end{aligned}\text{.} \end{equation*}

      In other words, the solution to the equation \(A\mathbf x=\mathbf b\) is \(\mathbf x = B\mathbf b\text{.}\)

      Consider the equation \(A\mathbf x = \twovec{5}{-2}\text{.}\) Find the solution in two different ways, first using Gaussian elimination and then as \(\mathbf x = B\mathbf b\text{,}\) and verify that you have found the same result.
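    The two solution methods in part 4 can be compared numerically. In the sketch below, `np.linalg.inv` and `np.linalg.solve` are used only to confirm the hand computation; they are not part of the exercise itself:

    ```python
    import numpy as np

    A = np.array([[3, -2], [-2, 1]])
    B = np.linalg.inv(A)          # here B = [[-1, -2], [-2, -3]]
    print(A @ B)                  # the 2x2 identity matrix
    print(B @ A)                  # also the identity: AB = BA for this pair

    b = np.array([5, -2])
    x = B @ b                     # solve A x = b as x = B b
    print(x)                      # [-1. -4.]
    print(np.linalg.solve(A, b))  # same answer via elimination
    ```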

    9

    Determine whether the following statements are true or false and provide a justification for your response.

    1. If \(A\mathbf x\) is defined, then the number of components of \(\mathbf x\) equals the number of rows of \(A\text{.}\)
    2. The solution space to the equation \(A\mathbf x = \mathbf b\) is equivalent to the solution space to the linear system whose augmented matrix is \(\left[\begin{array}{r|r} A & \mathbf b \end{array}\right]\text{.}\)
    3. If a linear system of equations has 8 equations and 5 unknowns, then the dimensions of the matrix \(A\) in the corresponding equation \(A\mathbf x = \mathbf b\) are \(5\times8\text{.}\)
    4. If \(A\) has a pivot in every row, then every equation \(A\mathbf x = \mathbf b\) is consistent.
    5. If \(A\) is a \(9\times5\) matrix, then \(A\mathbf x=\mathbf b\) is inconsistent for some vector \(\mathbf b\text{.}\)
    10

    Suppose that \(A\) is a \(4\times4\) matrix and that the equation \(A\mathbf x = \mathbf b\) has a unique solution for some vector \(\mathbf b\text{.}\)

    1. What does this say about the pivots of the matrix \(A\text{?}\) Write the reduced row echelon form of \(A\text{.}\)
    2. Can you find another vector \(\mathbf c\) such that \(A\mathbf x = \mathbf c\) is inconsistent?
    3. What can you say about the solution space to the equation \(A\mathbf x = \zerovec\text{?}\)
    4. Suppose \(A=\left[\begin{array}{rrrr} \mathbf v_1 & \mathbf v_2 & \mathbf v_3 & \mathbf v_4 \end{array}\right]\text{.}\) Explain why every four-dimensional vector can be written as a linear combination of the vectors \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) \(\mathbf v_3\text{,}\) and \(\mathbf v_4\) in exactly one way.
    11

    Define the matrix

    \begin{equation*} A = \left[\begin{array}{rrr} 1 & 2 & 4 \\ -2 & 1 & -3 \\ 3 & 1 & 7 \\ \end{array}\right]\text{.} \end{equation*}

    1. Describe the solution space to the homogeneous equation \(A\mathbf x = \zerovec\text{.}\) What does this solution space represent geometrically?
    2. Describe the solution space to the equation \(A\mathbf x=\mathbf b\) where \(\mathbf b = \threevec{-3}{-4}{1}\text{.}\) What does this solution space represent geometrically and how does it compare to the previous solution space?
    3. We will now explain the relationship between the previous two solution spaces. Suppose that \(\mathbf x_h\) is a solution to the homogeneous equation; that is \(A\mathbf x_h=\zerovec\text{.}\) We will also suppose that \(\mathbf x_p\) is a solution to the equation \(A\mathbf x = \mathbf b\text{;}\) that is, \(A\mathbf x_p=\mathbf b\text{.}\)

      Use the Linearity Principle expressed in Proposition 2.2.3 to explain why \(\mathbf x_h+\mathbf x_p\) is a solution to the equation \(A\mathbf x = \mathbf b\text{.}\) You may do this by evaluating \(A(\mathbf x_h+\mathbf x_p)\text{.}\)

      That is, if we find one solution \(\mathbf x_p\) to an equation \(A\mathbf x = \mathbf b\text{,}\) we may add any solution to the homogeneous equation to \(\mathbf x_p\) and still have a solution to the equation \(A\mathbf x = \mathbf b\text{.}\) In other words, the solution space to the equation \(A\mathbf x = \mathbf b\) is given by translating the solution space to the homogeneous equation by the vector \(\mathbf x_p\text{.}\)
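    This translation property is easy to verify numerically. In the sketch below, \(\mathbf x_h=(-2,-1,1)\) and \(\mathbf x_p=(1,-2,0)\) are one homogeneous solution and one particular solution, found by row reduction; any other choices would illustrate the same point:

    ```python
    import numpy as np

    A = np.array([[ 1, 2,  4],
                  [-2, 1, -3],
                  [ 3, 1,  7]])
    b = np.array([-3, -4, 1])

    x_h = np.array([-2, -1, 1])  # one solution of A x = 0
    x_p = np.array([ 1, -2, 0])  # one solution of A x = b

    print(A @ x_h)          # [0 0 0]
    print(A @ x_p)          # [-3 -4  1]
    # Linearity: A(x_h + x_p) = A x_h + A x_p = 0 + b = b
    print(A @ (x_h + x_p))  # [-3 -4  1]
    ```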

    12

    Suppose that a city is starting a bicycle sharing program with bicycles at locations \(B\) and \(C\text{.}\) Bicycles that are rented at one location may be returned to either location at the end of the day. Over time, the city finds that 80% of bicycles rented at location \(B\) are returned to \(B\) with the other 20% returned to \(C\text{.}\) Similarly, 50% of bicycles rented at location \(C\) are returned to \(B\) and 50% to \(C\text{.}\)

    To keep track of the bicycles, we form a vector

    \begin{equation*} \mathbf x_k = \twovec{B_k}{C_k} \end{equation*}

    where \(B_k\) is the number of bicycles at location \(B\) at the beginning of day \(k\) and \(C_k\) is the number of bicycles at \(C\text{.}\) The information above tells us

    \begin{equation*} \mathbf x_{k+1} = A\mathbf x_k \end{equation*}

    where

    \begin{equation*} A = \left[\begin{array}{rr} 0.8 & 0.5 \\ 0.2 & 0.5 \\ \end{array}\right]\text{.} \end{equation*}

    1. Let's check that this makes sense.
      1. Suppose that there are 1000 bicycles at location \(B\) and none at \(C\) on day 1. This means we have \(\mathbf x_1 = \twovec{1000}{0}\text{.}\) Find the number of bicycles at both locations on day 2 by evaluating \(\mathbf x_2 = A\mathbf x_1\text{.}\)
      2. Suppose that there are 1000 bicycles at location \(C\) and none at \(B\) on day 1. Form the vector \(\mathbf x_1\) and determine the number of bicycles at the two locations the next day by finding \(\mathbf x_2 = A\mathbf x_1\text{.}\)
    2. Suppose that one day there are 1050 bicycles at location \(B\) and 450 at location \(C\text{.}\) How many bicycles were there at each location the previous day?
    3. Suppose that there are 500 bicycles at location \(B\) and 500 at location \(C\) on Monday. How many bicycles are there at the two locations on Tuesday? on Wednesday? on Thursday?
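    These questions can be checked by iterating \(\mathbf x_{k+1}=A\mathbf x_k\) directly; the "previous day" question runs the process backward by solving \(A\mathbf x=\mathbf b\) instead. A minimal NumPy sketch:

    ```python
    import numpy as np

    A = np.array([[0.8, 0.5],
                  [0.2, 0.5]])

    # Monday: 500 bicycles at each location; iterate forward three days
    x = np.array([500.0, 500.0])
    for day in ["Tuesday", "Wednesday", "Thursday"]:
        x = A @ x
        print(day, x)  # Tuesday [650. 350.], Wednesday [695. 305.], Thursday [708.5 291.5]

    # If today's count is (1050, 450), yesterday's count solves A x = b
    b = np.array([1050.0, 450.0])
    print(np.linalg.solve(A, b))  # [1000.  500.]
    ```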
    13

    This problem is a continuation of the previous problem.

    1. Let us define vectors

      \begin{equation*} \mathbf v_1 = \twovec{5}{2}, \mathbf v_2 = \twovec{-1}{1}\text{.} \end{equation*}

      Show that

      \begin{equation*} A\mathbf v_1 = \mathbf v_1, A\mathbf v_2 = 0.3\mathbf v_2\text{.} \end{equation*}

    2. Suppose that \(\mathbf x_1 = c_1 \mathbf v_1 + c_2 \mathbf v_2\) where \(c_1\) and \(c_2\) are scalars. Use the Linearity Principle expressed in Proposition 2.2.3 to explain why

      \begin{equation*} \mathbf x_{2} = A\mathbf x_1 = c_1\mathbf v_1 + 0.3c_2\mathbf v_2\text{.} \end{equation*}

    3. Continuing in this way, explain why

      \begin{equation*} \begin{aligned} \mathbf x_{3} = A\mathbf x_2 & {}={} c_1\mathbf v_1 +0.3^2c_2\mathbf v_2 \\ \mathbf x_{4} = A\mathbf x_3 & {}={} c_1\mathbf v_1 +0.3^3c_2\mathbf v_2 \\ \mathbf x_{5} = A\mathbf x_4 & {}={} c_1\mathbf v_1 +0.3^4c_2\mathbf v_2 \\ \end{aligned}\text{.} \end{equation*}

    4. Suppose that there are initially 500 bicycles at location \(B\) and 500 at location \(C\text{.}\) Write the vector \(\mathbf x_1\) and find the scalars \(c_1\) and \(c_2\) such that \(\mathbf x_1=c_1\mathbf v_1 + c_2\mathbf v_2\text{.}\)
    5. Use the previous part of this problem to determine \(\mathbf x_2\text{,}\) \(\mathbf x_3\) and \(\mathbf x_4\text{.}\)
    6. After a very long time, how are all the bicycles distributed?
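    The relations \(A\mathbf v_1=\mathbf v_1\) and \(A\mathbf v_2=0.3\mathbf v_2\) and the long-term behavior can be confirmed numerically. In the sketch below, the coefficients \(c_1\) and \(c_2\) are found by solving the \(2\times2\) system \(c_1\mathbf v_1+c_2\mathbf v_2=\mathbf x_1\text{;}\) since the \(0.3^k\) term dies off, the long-run distribution is \(c_1\mathbf v_1\text{:}\)

    ```python
    import numpy as np

    A = np.array([[0.8, 0.5],
                  [0.2, 0.5]])
    v1 = np.array([5.0, 2.0])
    v2 = np.array([-1.0, 1.0])

    print(A @ v1)  # equals v1
    print(A @ v2)  # equals 0.3 * v2

    # Decompose x1 = c1 v1 + c2 v2 by solving a 2x2 linear system
    x1 = np.array([500.0, 500.0])
    c1, c2 = np.linalg.solve(np.column_stack([v1, v2]), x1)
    print(c1, c2)  # 1000/7 and -1500/7

    # As k grows, 0.3**k -> 0, so x_k approaches c1 * v1
    print(c1 * v1)  # roughly [714.3  285.7]
    ```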