7 Linear maps and matrices
We’ll now “come down to earth” a bit, and ask how to actually calculate with linear maps on our standard vector space \(\mathbb{K}^n.\) We’ll translate this into statements about matrices; so we need to introduce some new ways to calculate with those.
7.1 Linear mappings associated to matrices
For an \((m\times n)\)-matrix \(\mathbf{A}=(A_{ij})_{1\leqslant i\leqslant m, 1\leqslant j \leqslant n} \in M_{m,n}(\mathbb{K})\) with column vectors \(\vec{a}_1,\ldots,\vec{a}_n \in \mathbb{K}^m\) we define a mapping \[f_\mathbf{A}: \mathbb{K}^n \to \mathbb{K}^m, \qquad \vec{x} \mapsto \mathbf{A}\vec{x},\] where the column vector \(\mathbf{A}\vec{x} \in \mathbb{K}^m\) is obtained by matrix multiplication of the matrix \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) and the column vector \(\vec{x}=(x_i)_{1\leqslant i\leqslant n} \in \mathbb{K}^n\): \[\mathbf{A}\vec{x}=\vec{a}_1x_1+\vec{a}_2x_2+\cdots +\vec{a}_n x_n=\begin{pmatrix} A_{11}x_1+A_{12}x_2+\cdots +A_{1n}x_n\\ A_{21}x_1+A_{22}x_2+\cdots +A_{2n}x_n\\ \vdots \\ A_{m1}x_1+A_{m2}x_2+\cdots +A_{mn}x_n\end{pmatrix}.\]
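To make this concrete, here is a minimal NumPy sketch (the matrix and vector are made-up values, not from the text) checking that \(\mathbf{A}\vec{x}\) is exactly the stated linear combination of the columns of \(\mathbf{A}\):

```python
import numpy as np

# Hypothetical 2x3 matrix and vector, so f_A : R^3 -> R^2.
A = np.array([[1.0, -2.0, 0.0],
              [3.0,  1.0, 1.0]])
x = np.array([2.0, 1.0, -1.0])

# Matrix-vector product versus the column combination a_1 x_1 + ... + a_n x_n.
lhs = A @ x
rhs = sum(x[j] * A[:, j] for j in range(A.shape[1]))
assert np.allclose(lhs, rhs)
print(lhs)  # [0. 6.]
```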
It’s clear that \(f_{\mathbf{A}}\) is a linear map. We’ll now show that any linear map \(\mathbb{K}^n \to \mathbb{K}^m\) arises this way, from a uniquely determined \(\mathbf{A}.\)
Proposition 7.2. Let \(\mathbf{A},\mathbf{B}\in M_{m,n}(\mathbb{K}).\) Then \(f_\mathbf{A}=f_\mathbf{B}\) if and only if \(\mathbf{A}=\mathbf{B}.\)
Proof. Recall that if \(f : \mathcal{X}\to \mathcal{Y}\) and \(g : \mathcal{X} \to \mathcal{Y}\) are mappings from a set \(\mathcal{X}\) into a set \(\mathcal{Y},\) then we write \(f=g\) if \(f(x)=g(x)\) for all elements \(x \in \mathcal{X}.\)
If \(\mathbf{A}=\mathbf{B},\) then \(A_{ij}=B_{ij}\) for all \(1\leqslant i\leqslant m, 1\leqslant j \leqslant n,\) hence we conclude that \(f_\mathbf{A}=f_\mathbf{B}.\) In order to show the converse direction, we consider the standard basis \(\vec{e}_i=(\delta_{ij})_{1\leqslant j\leqslant n},\) \(i=1,\ldots,n\) of \(\mathbb{K}^n.\) Now by assumption \[f_\mathbf{A}(\vec{e}_i)=\begin{pmatrix}A_{1i} \\ A_{2i} \\ \vdots \\ A_{mi}\end{pmatrix}=f_\mathbf{B}(\vec{e}_i)=\begin{pmatrix}B_{1i} \\ B_{2i} \\ \vdots \\ B_{mi}\end{pmatrix}.\] Since this holds for all \(i=1,\ldots,n,\) we conclude \(A_{ji}=B_{ji}\) for all \(j=1,\ldots, m\) and \(i=1,\ldots, n.\) Therefore, we have \(\mathbf{A}=\mathbf{B},\) as claimed.
Note that \([f_\mathbf{A}(\vec{e}_i)]_j,\) the \(j\)-th entry of \(f_\mathbf{A}(\vec{e}_i),\) is equal to \([\mathbf{A}]_{ji}\); equivalently, we have \[f_{\mathbf{A}}(\vec{e}_i) = \sum_{j = 1}^m A_{ji} \vec{e}_j.\] (This looks wrong, but it is really correct as stated.)
Lemma 7.4. A mapping \(g : \mathbb{K}^m \to \mathbb{K}^n\) is linear if and only if there exists a matrix \(\mathbf{B}\in M_{n,m}(\mathbb{K})\) so that \(g=f_\mathbf{B}.\)
Proof. We’ve just seen that for any \(\mathbf{B}\in M_{n,m}(\mathbb{K}),\) the map \(f_\mathbf{B}\) is linear.
Conversely, let \(g : \mathbb{K}^m \to \mathbb{K}^n\) be linear. Let \(\{\vec{e}_1,\ldots,\vec{e}_m\}\) denote the standard basis of \(\mathbb{K}^m.\) Write \[g(\vec{e}_i)=\begin{pmatrix} B_{1i} \\ \vdots \\ B_{ni}\end{pmatrix}\quad \text{for} \quad i=1,\ldots,m\] and consider the matrix \[\mathbf{B}=\begin{pmatrix} B_{11} & \cdots & B_{1m} \\ \vdots & \ddots & \vdots \\ B_{n1} & \cdots & B_{nm}\end{pmatrix} \in M_{n,m}(\mathbb{K}).\] For \(i=1,\ldots,m\) we obtain \[\tag{7.1} f_\mathbf{B}(\vec{e}_i)=\mathbf{B}\vec{e}_i=g(\vec{e}_i).\] Any vector \(\vec{v}=(v_i)_{1\leqslant i\leqslant m} \in\mathbb{K}^m\) can be written as \[\vec{v}=v_1\vec{e}_1+\cdots+v_m\vec{e}_m\] for (unique) scalars \(v_i,\) \(i=1,\ldots,m.\) Hence using the linearity of \(g\) and \(f_\mathbf{B},\) we compute \[\begin{aligned} g(\vec{v})-f_\mathbf{B}(\vec{v})&=g(v_1\vec{e}_1+\cdots+v_m\vec{e}_m)-f_\mathbf{B}(v_1\vec{e}_1+\cdots+v_m\vec{e}_m)\\ &=v_1\left(g(\vec{e}_1)-f_\mathbf{B}(\vec{e}_1)\right)+\cdots+v_m\left(g(\vec{e}_m)-f_\mathbf{B}(\vec{e}_m)\right)=0_{\mathbb{K}^n}, \end{aligned}\] where the last equality uses (7.1). Since the vector \(\vec{v}\) is arbitrary, it follows that \(g=f_\mathbf{B},\) as claimed.
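The proof is constructive and translates directly into code. Below is a sketch (with a made-up linear map `g`, chosen only for illustration) that recovers the matrix \(\mathbf{B}\) column by column from the values \(g(\vec{e}_i)\):

```python
import numpy as np

# A hypothetical linear map g : R^3 -> R^2, given as a plain function.
def g(v):
    return np.array([v[0] + 2.0 * v[1], 3.0 * v[2] - v[0]])

m = 3  # dimension of the domain
# The i-th column of B is g(e_i); the rows of np.eye(m) are e_1, ..., e_m.
B = np.column_stack([g(e) for e in np.eye(m)])

# As the proof shows, f_B and g then agree on every vector.
v = np.array([1.0, -1.0, 2.0])
assert np.allclose(B @ v, g(v))
print(B)
```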
Let \(\operatorname{Lin}(\mathbb{K}^n, \mathbb{K}^m)\) denote the set of linear maps from \(\mathbb{K}^n\) to \(\mathbb{K}^m.\) Then \(\mathbf{A}\mapsto f_{\mathbf{A}}\) defines a mapping \(M_{m,n}(\mathbb{K}) \to \operatorname{Lin}(\mathbb{K}^n, \mathbb{K}^m).\) Proposition 7.2 shows that this mapping is injective, and Lemma 7.4 shows that it is surjective; so it is a bijection.
(If you’re not used to it, this kind of construction – mappings between spaces of mappings – can be a bit confusing; but one gets used to it with practice.)
Composition and inverses
The motivation for Definition 2.13 of matrix multiplication is given by the following theorem, which states that the mapping \(f_{\mathbf{A}\mathbf{B}}\) associated to the matrix product \(\mathbf{A}\mathbf{B}\) is the composition of the mapping \(f_\mathbf{A}\) associated to the matrix \(\mathbf{A}\) and the mapping \(f_\mathbf{B}\) associated to the matrix \(\mathbf{B}.\) More precisely:
Theorem 7.6. Let \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) and \(\mathbf{B}\in M_{n,r}(\mathbb{K}),\) so we have maps \(f_\mathbf{A}: \mathbb{K}^n \to \mathbb{K}^m,\) \(f_\mathbf{B}: \mathbb{K}^r \to \mathbb{K}^n,\) and \(f_{\mathbf{A}\mathbf{B}} : \mathbb{K}^r \to \mathbb{K}^{m}.\) Then \(f_{\mathbf{A}\mathbf{B}}=f_\mathbf{A}\circ f_\mathbf{B}.\)
Proof. We’ll interpret this as a special case of the associativity of matrix multiplication (part (v) of Proposition 2.16). Two mappings are equal if they take the same values on every input, so we need to show that \(f_{\mathbf{A}\mathbf{B}}(\vec{x}) = f_{\mathbf{A}}(f_{\mathbf{B}}(\vec{x}))\) for all \(\vec{x} \in \mathbb{K}^r.\) Indeed, we have \[\begin{aligned} f_{\mathbf{A}\mathbf{B}}(\vec{x}) &= (\mathbf{A}\cdot \mathbf{B}) \cdot \vec{x} \qquad \text{(regarding $\vec{x}$ as an $r \times 1$ matrix)} \\ &= \mathbf{A}\cdot (\mathbf{B}\cdot \vec{x}) \qquad \text{(by associativity)} \\ &= \mathbf{A}\cdot (f_{\mathbf{B}}(\vec{x}))\\ &= f_{\mathbf{A}}( f_{\mathbf{B}}(\vec{x}) ). \end{aligned}\]
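A quick numerical sanity check of the theorem, with randomly generated matrices of compatible shapes:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
A = rng.standard_normal((4, 3))  # f_A : R^3 -> R^4
B = rng.standard_normal((3, 2))  # f_B : R^2 -> R^3
x = rng.standard_normal(2)

# f_{AB}(x) agrees with f_A(f_B(x)), i.e. (AB)x = A(Bx).
assert np.allclose((A @ B) @ x, A @ (B @ x))
```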
Proposition 7.7. Let \(\mathbf{A}\in M_{m, n}(\mathbb{K}).\) Then \(\mathbf{A}\) is invertible if and only if the linear map \(f_\mathbf{A}\) is bijective. If this is the case, the inverse mapping \((f_{\mathbf{A}})^{-1}\) is the mapping associated to the inverse matrix \(\mathbf{A}^{-1},\) i.e. we have the relation \[(f_{\mathbf{A}})^{-1} = f_{\mathbf{A}^{-1}}.\]
Proof. First, notice that for every \(n \in \mathbb{N},\) the mapping \(f_{\mathbf{1}_{n}} : \mathbb{K}^n \to \mathbb{K}^n\) associated to the unit matrix is the identity mapping on \(\mathbb{K}^n\); that is, \(f_{\mathbf{1}_{n}}=\mathrm{Id}_{\mathbb{K}^n}.\)
Let \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) and suppose that \(f_\mathbf{A}: \mathbb{K}^n \to \mathbb{K}^m\) is bijective with inverse function \((f_\mathbf{A})^{-1} : \mathbb{K}^m \to \mathbb{K}^n.\) We saw in the previous chapter that \((f_\mathbf{A})^{-1}\) is also a linear map. Hence, by Lemma 7.4, \((f_\mathbf{A})^{-1}=f_\mathbf{B}\) for some matrix \(\mathbf{B}\in M_{n,m}(\mathbb{K}).\) Using Theorem 7.6, we obtain \[f_{\mathbf{B}\mathbf{A}} = f_\mathbf{B}\circ f_\mathbf{A}=(f_\mathbf{A})^{-1}\circ f_\mathbf{A}=\mathrm{Id}_{\mathbb{K}^n}=f_{\mathbf{1}_{n}}\] hence Proposition 7.2 implies that \(\mathbf{B}\mathbf{A}=\mathbf{1}_{n}.\) Likewise we have \[f_{\mathbf{A}\mathbf{B}} = f_\mathbf{A}\circ f_\mathbf{B}= f_\mathbf{A}\circ (f_\mathbf{A})^{-1}=\mathrm{Id}_{\mathbb{K}^m}=f_{\mathbf{1}_{m}}\] so that \(\mathbf{A}\mathbf{B}=\mathbf{1}_{m}.\) Thus \(\mathbf{A}\) is invertible, and \(\mathbf{B}\) is its inverse.
Conversely, let \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) and suppose the matrix \(\mathbf{B}\in M_{n,m}(\mathbb{K})\) satisfies \(\mathbf{A}\mathbf{B}=\mathbf{1}_{m}\) and \(\mathbf{B}\mathbf{A}=\mathbf{1}_{n}.\) Then, as before, we have \[f_{\mathbf{A}\mathbf{B}}=f_{\mathbf{1}_{m}}=\mathrm{Id}_{\mathbb{K}^m}=f_\mathbf{A}\circ f_\mathbf{B}\quad \text{and} \quad f_{\mathbf{B}\mathbf{A}}=f_{\mathbf{1}_{n}}=\mathrm{Id}_{\mathbb{K}^n}=f_\mathbf{B}\circ f_\mathbf{A}\] showing that \(f_\mathbf{A}: \mathbb{K}^n \to \mathbb{K}^m\) is bijective with inverse function \(f_\mathbf{B}: \mathbb{K}^m \to \mathbb{K}^n.\)
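In code, the relation \((f_{\mathbf{A}})^{-1} = f_{\mathbf{A}^{-1}}\) says that applying \(\mathbf{A}^{-1}\) undoes applying \(\mathbf{A}\) and vice versa. A small sketch, with an invertible matrix chosen arbitrarily for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # an invertible 2x2 matrix, made up for this check
A_inv = np.linalg.inv(A)

x = np.array([3.0, -1.0])
# f_{A^{-1}} inverts f_A on both sides.
assert np.allclose(A_inv @ (A @ x), x)
assert np.allclose(A @ (A_inv @ x), x)
```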
A non-square matrix cannot be invertible.
Proof. If \(\mathbf{A}\in M_{m, n}(\mathbb{K})\) is invertible, then \(f_{\mathbf{A}}\) is an isomorphism between \(\mathbb{K}^n\) and \(\mathbb{K}^m,\) and we have seen that no such isomorphism exists unless \(m = n.\)
We also get a little extra information when \(m = n\):
Let \(n \in \mathbb{N}\) and let \(\mathbf{A}\in M_{n,n}(\mathbb{K})\) be a square matrix. Then the following statements are equivalent:
(i) The matrix \(\mathbf{A}\) admits a left inverse, that is, a matrix \(\mathbf{B}\in M_{n,n}(\mathbb{K})\) such that \(\mathbf{B}\mathbf{A}=\mathbf{1}_{n}\);
(ii) The matrix \(\mathbf{A}\) admits a right inverse, that is, a matrix \(\mathbf{B}\in M_{n,n}(\mathbb{K})\) such that \(\mathbf{A}\mathbf{B}=\mathbf{1}_{n}\);
(iii) The matrix \(\mathbf{A}\) is invertible.
Proof. By the definition of the invertibility of a matrix, (iii) implies both (i) and (ii).
(i) \(\Rightarrow\) (iii) Since \(\mathbf{B}\mathbf{A}=\mathbf{1}_{n}\) we have \(f_\mathbf{B}\circ f_\mathbf{A}=f_{\mathbf{1}_{n}}=\mathrm{Id}_{\mathbb{K}^n}\) by Theorem 7.6 and hence \(f_\mathbf{B}\) is a left inverse for \(f_\mathbf{A}.\) Therefore \(f_\mathbf{A}\) is injective (see Review Exercises on mappings). The implication (i) \(\Rightarrow\) (ii) of Corollary 6.22 implies that \(f_\mathbf{A}\) is actually bijective, so by Proposition 7.7, \(\mathbf{A}\) is invertible.
(ii) \(\Rightarrow\) (iii) is completely analogous – since \(f_{\mathbf{A}}\) has a right inverse, it is surjective, hence bijective by Corollary 6.22 and we conclude as before.
7.2 Computing kernels and images
If \(\mathbf{A}\in M_{m, n}(\mathbb{K})\) is a matrix, then we define \[\operatorname{rank}(\mathbf{A})=\operatorname{rank}(f_\mathbf{A}), \qquad \operatorname{nullity}(\mathbf{A}) = \operatorname{nullity}(f_{\mathbf{A}}).\] We want to compute these explicitly, and compute bases for the subspaces \(\ker(f_{\mathbf{A}}) \subset \mathbb{K}^n\) and \(\operatorname{Im}(f_{\mathbf{A}}) \subset \mathbb{K}^m.\)
Image and rank
In order to compute a basis for \(\operatorname{Im}(f_{\mathbf{A}})\) we use the following lemma:
The image of \(f_{\mathbf{A}}\) is equal to the column space of \(\mathbf{A},\) i.e. the subspace of \(\mathbb{K}^m\) spanned by the column vectors \(\vec{a}_1, \dots, \vec{a}_n.\)
Proof. The columns \(\vec{a}_1, \dots, \vec{a}_n\) of \(\mathbf{A}\) are the images of the standard basis vectors, so they are clearly contained in \(\operatorname{Im}(f_{\mathbf{A}})\); hence \(\operatorname{span}(\vec{a}_1, \dots, \vec{a}_n) \subseteq \operatorname{Im}(f_{\mathbf{A}}).\) Conversely, for any \(\vec{x} = (x_j)_{1 \le j \le n} \in \mathbb{K}^n\) we have \(f_{\mathbf{A}}(\vec{x}) = \sum_j x_j f_{\mathbf{A}}(\vec{e}_j) = \sum_j x_j \vec{a}_j \in \operatorname{span}(\vec{a}_1, \dots, \vec{a}_n).\)
We saw in Chapter 5 how to compute a basis of the subspace generated by a finite list of vectors, so we can just apply that here to compute the image. However, since the columns of \(\mathbf{A}\) are column vectors (not row vectors), we have to rewrite them as row vectors before applying RREF.
That is, to compute the image of \(\mathbf{A},\) we need to perform the following steps:
1. form the transpose matrix \(\mathbf{A}^T\);
2. compute its RREF;
3. take the non-zero rows of the RREF matrix;
4. transpose these again to get column vectors.
(Equivalently, we can directly apply column operations to \(\mathbf{A}\) to get a “reduced column echelon form” of \(\mathbf{A}\) whose columns are the desired basis; but we stick with RREF for the sake of familiarity.)
Exercise 7.11. Let \[\mathbf{A}=\begin{pmatrix} 1 & -2 & 0 & 4 \\ 3 & 1 & 1 & 0 \\ -1 & -5 & -1 & 8 \\ 3 & 8 & 2 & -12 \end{pmatrix}.\] Compute a basis for the image of \(f_\mathbf{A}: \mathbb{R}^4 \to \mathbb{R}^4\) and the rank of \(\mathbf{A}.\)
The transpose matrix is \[\mathbf{A}^T=\begin{pmatrix} 1 & 3 & -1 & 3 \\ -2 & 1 & -5 & 8 \\ 0 & 1 & -1 & 2 \\ 4 & 0 & 8 & -12\end{pmatrix}.\] Computing its RREF yields \[\begin{pmatrix} 1 & 0 & 2 & -3 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.\] The non-zero row vectors are \(\vec{\omega}_1 = \begin{pmatrix} 1 & 0 & 2 & -3 \end{pmatrix},\) \(\vec{\omega}_2=\begin{pmatrix} 0 & 1 & -1 & 2 \end{pmatrix}.\) Our basis of \(\operatorname{Im}(f_\mathbf{A})\) is thus \[\{\vec{\omega}_1^T, \vec{\omega}_2^T \}=\left\{\begin{pmatrix} 1 \\ 0 \\ 2 \\ -3 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \\ -1 \\ 2 \end{pmatrix}\right\}.\] Since this basis has two elements, we also conclude that \(\operatorname{rank}(\mathbf{A})=2.\)
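The four steps above are easy to automate. Here is a sketch using SymPy's exact `rref` (the helper `image_basis` is a name we introduce for illustration, not standard library API), applied to the matrix from Exercise 7.11:

```python
from sympy import Matrix

def image_basis(A):
    """Basis of Im(f_A): transpose, row-reduce, keep the nonzero rows,
    then transpose those rows back into column vectors."""
    R, _ = A.T.rref()                                             # steps 1 and 2
    nonzero = [R.row(i) for i in range(R.rows) if any(R.row(i))]  # step 3
    return [row.T for row in nonzero]                             # step 4

A = Matrix([[1, -2, 0, 4],
            [3, 1, 1, 0],
            [-1, -5, -1, 8],
            [3, 8, 2, -12]])
basis = image_basis(A)
print(basis)       # the columns (1, 0, 2, -3) and (0, 1, -1, 2)
print(len(basis))  # 2, the rank of A
```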
Kernel and nullity
We already know one way of computing the kernel: since the kernel of \(f_{\mathbf{A}}\) is just the set \(\operatorname{Sol}(\mathbf{A}, \mathbf{0}),\) we can apply the general machinery of “free variables” etc. However, there is a slightly slicker way, which allows us to compute the kernel and the image at the same time:
Consider the augmented matrix \(\mathbf{B}= (\mathbf{A}^T\ |\ \mathbf{1}_{n}).\) Divide up the RREF of this matrix into the shape \[\left(\begin{array}{c|c} \mathbf{C}& \text{(junk)} \\ \hline \mathbf{0} & \mathbf{D} \end{array}\right)\] where \(\mathbf{C}\) has no zero rows. Then the rows of \(\mathbf{C}\) (transposed into column vectors) are a basis of the image of \(f_{\mathbf{A}},\) and the rows of \(\mathbf{D}\) (transposed) are a basis of the kernel.
Proof. The first \(m\) columns of the RREF of \(\mathbf{B}\) are the RREF of \(\mathbf{A}^T,\) and we already know this gives a basis of the image; so we need to show that the rest of the RREF gives a basis of the kernel.
Let’s write \(\tilde{\mathbf{D}}\) for the square matrix given by the last \(n\) columns of the RREF, so \(\tilde{\mathbf{D}} = \begin{pmatrix}\text{(junk)}\\\mathbf{D}\end{pmatrix}.\) Note that the “junk” submatrix has \(r\) rows, where \(r = \operatorname{rank}(\mathbf{A}).\) Moreover, \(\mathbf{D}\) is itself in RREF, so its nonzero rows are linearly independent; and it can’t have any zero rows, since \(\tilde{\mathbf{D}}\) is invertible (it is a product of the elementary matrices performing the row reduction).
We know that \(\tilde{\mathbf{D}}\) gives the transformation matrix to put \(\mathbf{A}^T\) into RREF, so the RREF of \(\mathbf{A}^T\) is equal to \(\tilde{\mathbf{D}} \mathbf{A}^T.\) But the \(i\)-th row of the RREF is zero for \(i > r = \operatorname{rank}(\mathbf{A})\); so if \(\vec{\delta}_i\) are the rows of \(\tilde{\mathbf{D}},\) we have \(\vec{\delta}_i \cdot \mathbf{A}^T = \vec{0}\) for \(r + 1 \le i \le n.\) Transposing this, we have \(\mathbf{A}\cdot \vec{\delta}_i^T = \vec{0}\) for all such \(i,\) so we obtain \(n - r\) linearly independent vectors in the kernel. However, since the kernel has dimension \(n - r\) by the rank-nullity theorem, these vectors must in fact be a basis.
Actually the “junk” submatrix isn’t really junk: one can check that for \(1 \le i \le r,\) the vector \(\vec{\delta}_i^T\) is a choice of vector in \(\mathbb{K}^n\) mapping under \(f_{\mathbf{A}}\) to the \(i\)-th vector in our basis of the image.
Let \[\mathbf{C}=\begin{pmatrix} 1 & 0 & 1 & 7 \\ -2 & -3 & 1 & 2 \\ 7 & 9 & -2 & 1\end{pmatrix}.\] In order to compute \(\operatorname{Ker}(f_\mathbf{C})\) we apply Gaussian elimination to \(\mathbf{C}^T\) whilst keeping track of the relevant elementary matrices, as in the algorithm for computing the inverse of a matrix. That is, we consider \[\left(\begin{array}{ccc}1 & -2 & 7 \\ 0 & -3 & 9 \\ 1 & 1 & -2 \\ 7 & 2 & 1\end{array}\right|\left.\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{array}\right).\] Gauss–Jordan elimination (plain Gaussian elimination would in fact suffice) gives \[\left(\begin{array}{ccc}1 & 0 & 1 \\ 0 & 1 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{array}\right|\left.\begin{array}{cccc} 0 & 0 & -\frac{2}{5} & \frac{1}{5} \\ 0 & 0 & \frac{7}{5} & -\frac{1}{5} \\ 1 & 0 & \frac{16}{5} & -\frac{3}{5} \\ 0 & 1 & \frac{21}{5} & -\frac{3}{5}\end{array}\right).\] The vectors \(\vec{\xi}_3=\begin{pmatrix} 1 & 0 & \frac{16}{5} & -\frac{3}{5}\end{pmatrix}\) and \(\vec{\xi}_4=\begin{pmatrix} 0 & 1 & \frac{21}{5} & -\frac{3}{5} \end{pmatrix}\) thus span the subspace of vectors \(\vec{\xi}\) satisfying \(\vec{\xi}\,\mathbf{C}^T=0_{\mathbb{K}^3}.\) A basis \(\mathcal{S}\) for the kernel of \(f_\mathbf{C}\) is thus given by \[\mathcal{S}=\left\{\begin{pmatrix} 1 \\ 0 \\ \frac{16}{5} \\ -\frac{3}{5}\end{pmatrix},\begin{pmatrix} 0 \\ 1 \\ \frac{21}{5} \\ -\frac{3}{5}\end{pmatrix}\right\}\] and so \(\operatorname{nullity}(\mathbf{C})=2.\)
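The whole procedure can be packaged as a short SymPy sketch (`image_and_kernel` is a name we introduce for illustration); running it on \(\mathbf{C}\) reproduces the bases found by hand above:

```python
from sympy import Matrix, eye

def image_and_kernel(A):
    """Bases of Im(f_A) and Ker(f_A) from the RREF of (A^T | 1_n),
    following the proposition above."""
    m, n = A.shape
    R, _ = Matrix.hstack(A.T, eye(n)).rref()
    r = A.rank()                                # number of nonzero rows of C
    image = [R[i, :m].T for i in range(r)]      # rows of C, transposed
    kernel = [R[i, m:].T for i in range(r, n)]  # rows of D, transposed
    return image, kernel

C = Matrix([[1, 0, 1, 7],
            [-2, -3, 1, 2],
            [7, 9, -2, 1]])
image, kernel = image_and_kernel(C)
print(kernel)  # the two kernel vectors (1, 0, 16/5, -3/5) and (0, 1, 21/5, -3/5)
```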
Exercises
Compute a basis for the kernel of the linear map \(f_{\mathbf{A}}\) from Exercise 7.11.
Solution
Row-reducing the augmented matrix \((\mathbf{A}^T\ |\ \mathbf{1}_{4})\) to its RREF, we obtain the matrix \[\left(\begin{array}{rrrr|rrrr} 1 & 0 & 2 & -3 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 1 & -1 & 2 & 0 & 0 & 1 & 0 \\ \hline 0 & 0 & 0 & 0 & 1 & 0 & -3 & -\frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & \frac{1}{2} \end{array}\right).\] A basis for the kernel is given by the rows of the bottom right block, transposed into column vectors: \[\left\{ \begin{pmatrix} 1\\0\\-3\\-\frac{1}{4}\end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ -1 \\ \frac{1}{2}\end{pmatrix}\right\}.\]
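As a quick check (using NumPy; the matrix is the one from Exercise 7.11), both vectors are indeed annihilated by \(\mathbf{A}\):

```python
import numpy as np

A = np.array([[1, -2, 0, 4],
              [3, 1, 1, 0],
              [-1, -5, -1, 8],
              [3, 8, 2, -12]], dtype=float)
for v in ([1, 0, -3, -0.25], [0, 1, -1, 0.5]):
    assert np.allclose(A @ np.array(v), 0.0)  # A v = 0, so v lies in the kernel
```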
Let \(\mathbf{A}\in M_{3,2}(\mathbb{R}).\) Then \(f_\mathbf{A}\) admits a left inverse.
- True
- False
Let \(\mathbf{A}\in M_{3,2}(\mathbb{R}).\) Then \(f_\mathbf{A}\) cannot admit a right inverse.
- True
- False
If \(\mathbf{A}\in M_{n,n}(\mathbb{K}),\) then \(\operatorname{dim}(\operatorname{Ker}(\mathbf{A}))=\operatorname{dim}(\operatorname{Ker}(\mathbf{A}^T)).\)
- True
- False
Let \(\mathbf{A}\in M_{3,2}(\mathbb{R})\) have maximal rank. Then \(f_\mathbf{A}\) admits a left inverse.
- True
- False
The kernel of an \(m \times n\) matrix is contained in \(\mathbb{K}^m.\)
- True
- False
The image of an \(m \times n\) matrix is contained in \(\mathbb{K}^n.\)
- True
- False
If \(\mathbf{A}\in M_{m,n}(\mathbb{K}),\) then \(\operatorname{Ker}(\mathbf{A})=\operatorname{Ker}(\mathbf{A}^T).\)
- True
- False
If \(\mathbf{A}\in M_{m,n}(\mathbb{K}),\) then \(\operatorname{dim}(\operatorname{Ker}(\mathbf{A}))=\operatorname{dim}(\operatorname{Ker}(\mathbf{A}^T)).\)
- True
- False
If \(\mathbf{A}\in M_{m,n}(\mathbb{K}),\) then \(\operatorname{Im}(\mathbf{A})=\operatorname{Im}(\mathbf{A}^T).\)
- True
- False
If \(\mathbf{B}\in M_{m,n}(\mathbb{K})\) is the reduced row echelon form of \(\mathbf{A}\in M_{m,n}(\mathbb{K}),\) then \(\operatorname{Ker}(\mathbf{A})=\operatorname{Ker}(\mathbf{B}).\)
- True
- False
If \(\mathbf{B}\in M_{m,n}(\mathbb{K})\) is the reduced row echelon form of \(\mathbf{A}\in M_{m,n}(\mathbb{K}),\) then \(\operatorname{Im}(\mathbf{A})=\operatorname{Im}(\mathbf{B}).\)
- True
- False
If \(\mathbf{B}\in M_{m,n}(\mathbb{K})\) is the reduced row echelon form of \(\mathbf{A}^T\in M_{n,m}(\mathbb{K}),\) then \(\operatorname{Im}(\mathbf{A})=\operatorname{Im}(\mathbf{B}^T).\)
- True
- False
If \(\mathbf{B}\in M_{m,n}(\mathbb{K})\) is the reduced row echelon form of \(\mathbf{A}\in M_{m,n}(\mathbb{K}),\) then \(\operatorname{rank}(\mathbf{A})=\operatorname{rank}(\mathbf{B}).\)
- True
- False