13 Duality
13.1 The dual vector space
An important class of vector spaces arises from considering the set of linear maps between two given vector spaces. This set can be turned into a vector space itself in a natural way.
Let \(V,W\) be \(\mathbb{K}\)-vector spaces. A linear map \(f : V \to W\) is also called a homomorphism between the vector spaces \(V\) and \(W.\) The set of linear maps between \(V\) and \(W\) is denoted by \(\mathrm{Hom}(V,W).\)
We define addition for \(f,g \in \mathrm{Hom}(V,W)\) by the rule \[\left(f+_{\mathrm{Hom}(V,W)}g\right)(v)=f(v)+_W g(v)\] for all \(v \in V.\) Here \(+_W\) denotes the addition of vectors in \(W.\) We define scalar multiplication for \(f \in \mathrm{Hom}(V,W)\) and \(s\in \mathbb{K}\) by the rule \[(s\cdot_{\mathrm{Hom}(V,W)}f)(v)=s\cdot_Wf(v)\] for all \(v \in V.\) Here \(\cdot_W\) denotes the scalar multiplication in \(W.\) Furthermore, we define the zero vector \(0_{\mathrm{Hom}(V,W)}\) to be the function \(o : V \to W\) defined by the rule \(o(v)=0_W\) for all \(v \in V.\) With these definitions, \(\mathrm{Hom}(V,W)\) is a \(\mathbb{K}\)-vector space, as can be checked without difficulty.
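For \(V=\mathbb{K}^n\) and \(W=\mathbb{K}^m\) these pointwise operations are easy to check numerically. The following is a minimal numpy sketch; the matrices, vector, and scalar are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# f, g : R^2 -> R^3 given by matrices acting on column vectors.
A = np.array([[1., 0.], [2., 1.], [0., 3.]])
B = np.array([[0., 1.], [1., 1.], [1., 0.]])
f = lambda v: A @ v
g = lambda v: B @ v

v = np.array([2., -1.])
s = 5.0

# The vector space operations of Hom(V, W) are defined pointwise.
assert np.allclose(f(v) + g(v), (A + B) @ v)  # (f + g)(v) = f(v) + g(v)
assert np.allclose(s * f(v), (s * A) @ v)     # (s . f)(v) = s . f(v)
```

The check also hints at the theorem below: under the correspondence between linear maps and matrices, the operations in \(\mathrm{Hom}(V,W)\) become entrywise matrix operations.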
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces with \(\dim V=n\) and \(\dim W=m,\) and let \(\mathbf{b}\) be an ordered basis of \(V\) and \(\mathbf{c}\) an ordered basis of \(W.\) Then the mapping \[\Theta : \mathrm{Hom}(V,W) \to M_{m,n}(\mathbb{K}), \qquad f \mapsto \mathbf{M}(f,\mathbf{b},\mathbf{c})\] is an isomorphism. In particular \(\dim \mathrm{Hom}(V,W)=\dim(V)\dim(W).\)
Proof. Write \(\mathbf{b}=(v_1,\ldots,v_n)\) and \(\mathbf{c}=(w_1,\ldots,w_m),\) and recall first how the matrix representation of a linear map is defined:
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces, \(\mathbf{b}=(v_1,\ldots,v_n)\) an ordered basis of \(V,\) \(\mathbf{c}=(w_1,\ldots,w_m)\) an ordered basis of \(W\) and \(g : V \to W\) a linear map. Then there exist unique scalars \(A_{ij} \in \mathbb{K},\) where \(1\leqslant i\leqslant m, 1\leqslant j \leqslant n\) such that \[\tag{3.15} g(v_j)=\sum_{i=1}^m A_{ij}w_i, \qquad 1\leqslant j\leqslant n.\] Furthermore, the matrix \(\mathbf{A}=(A_{ij})_{1\leqslant i\leqslant m, 1\leqslant j \leqslant n}\) satisfies \[f_\mathbf{A}=\boldsymbol{\gamma}\circ g \circ \boldsymbol{\beta}^{-1}\] and hence is the matrix representation of \(g\) with respect to the ordered bases \(\mathbf{b}\) and \(\mathbf{c}.\)
We next show that \(\Theta\) is surjective. Let \(\mathbf{A}=(A_{ij})_{1\leqslant i\leqslant m, 1\leqslant j \leqslant n} \in M_{m,n}(\mathbb{K})\) and define \(f : V \to W\) as follows. For \(v\in V\) there exist unique scalars \(s_1,\ldots,s_n\) such that \(v=\sum_{j=1}^n s_j v_j\) (since \(\mathbf{b}\) is an ordered basis of \(V\)). We define \[f(v)=\sum_{j=1}^n\sum_{i=1}^m A_{ij}s_j w_i.\] Since the coordinates \(s_1,\ldots,s_n\) depend linearly on \(v,\) the map \(f\) is linear, and it satisfies \(f(v_j)=\sum_{i=1}^m A_{ij}w_i\) for all \(1\leqslant j\leqslant n.\) Hence \(\Theta(f)=\mathbf{M}(f,\mathbf{b},\mathbf{c})=\mathbf{A}\) and \(\Theta\) is surjective. For injectivity, suppose \(\Theta(f)=\Theta(g)\) for \(f,g \in \mathrm{Hom}(V,W).\) Then \(f(v_j)=g(v_j)\) for all \(1\leqslant j\leqslant n\) by (3.15), and hence \(f=g,\) since a linear map is determined by its values on an ordered basis (see the facts recalled below). Finally, \(\Theta\) is linear by the definitions of addition and scalar multiplication in \(\mathrm{Hom}(V,W),\) so \(\Theta\) is an isomorphism and \(\dim \mathrm{Hom}(V,W)=\dim M_{m,n}(\mathbb{K})=mn.\)
Recall the following two facts about linear maps and ordered bases; the first was used in the proof above to establish injectivity. Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces.
Suppose \(f,g : V \to W\) are linear maps and \(\mathbf{b}=(v_1,\ldots,v_n)\) is an ordered basis of \(V.\) Then \(f=g\) if and only if \(f(v_i)=g(v_i)\) for all \(1\leqslant i\leqslant n.\)
If \(\dim V=\dim W\) and \(\mathbf{b}=(v_1,\ldots,v_n)\) is an ordered basis of \(V\) and \(\mathbf{c}=(w_1,\ldots,w_n)\) an ordered basis of \(W,\) then there exists a unique isomorphism \(f : V \to W\) such that \(f(v_i)=w_i\) for all \(1\leqslant i\leqslant n.\)
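As a concrete illustration of \(\Theta,\) the following numpy computation builds \(\mathbf{M}(f,\mathbf{b},\mathbf{c})\) from the images of the basis vectors and recovers \(f\) from it. This is a minimal sketch; the matrices and bases are illustrative choices, not taken from the text:

```python
import numpy as np

Bb = np.array([[1., 1.],
               [0., 1.]])                 # columns: ordered basis b of V = R^2
Cc = np.array([[1., 0., 0.],
               [1., 1., 0.],
               [0., 1., 1.]])             # columns: ordered basis c of W = R^3
F = np.array([[1., 2.],
              [0., 1.],
              [4., 0.]])                  # f in standard coordinates

# Column j of M(f, b, c) holds the c-coordinates of f(v_j), cf. (3.15).
M = np.linalg.solve(Cc, F @ Bb)

# Theta is bijective: f is recovered from its matrix representation.
assert np.allclose(Cc @ M @ np.linalg.inv(Bb), F)
```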
A case of particular interest is when \(W=\mathbb{K}.\)
Let \(V\) be a \(\mathbb{K}\)-vector space. The \(\mathbb{K}\)-vector space \(\mathrm{Hom}(V,\mathbb{K})\) is called the dual vector space of \(V\) and denoted by \(V^*.\)
Recall (Proposition 3.80) that for finite dimensional \(\mathbb{K}\)-vector spaces \(V,W\) there exists an isomorphism \(V \to W\) if and only if \(\dim(V)=\dim(W).\)
For \(\nu \in V^*\) and \(v \in V\) we will sometimes write \(v \,\lrcorner\,\nu\) for “plugging \(v\) into \(\nu\)”, that is \[v\,\lrcorner\,\nu=\nu(v).\]
For \(V=\mathbb{K}^n\) and fixed \(1\leqslant i\leqslant n,\) we consider the map which sends a vector \(\vec{x}=(x_1,\ldots,x_n)\) to its \(i\)-th entry, \(\vec{x} \mapsto x_i.\) This map is linear and hence an element of \((\mathbb{K}^n)^*.\)
Recall that the trace of a matrix is a linear map \(\operatorname{Tr}: M_{n,n}(\mathbb{K}) \to \mathbb{K}\) and hence we may think of the trace as an element of \((M_{n,n}(\mathbb{K}))^*.\)
For \(V=\mathsf{P}(\mathbb{K})\) and \(x_0 \in \mathbb{K},\) we can consider the evaluation map \[\mathrm{ev}_{x_0} : \mathsf{P}(\mathbb{K}) \to \mathbb{K}, \qquad p \mapsto p(x_0).\] The map \(\mathrm{ev}_{x_0}\) is linear and hence an element of \(V^*.\)
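These examples can be checked numerically. As an illustration for the evaluation map, here is a minimal numpy sketch; the sample polynomials and the point \(x_0\) are arbitrary choices:

```python
import numpy as np
from numpy.polynomial import Polynomial

x0 = 2.0
ev = lambda p: p(x0)            # the evaluation functional ev_{x0} : P(R) -> R

p = Polynomial([1., -3., 2.])   # p(x) = 1 - 3x + 2x^2
q = Polynomial([0., 5.])        # q(x) = 5x
s = 7.0

# ev_{x0} is linear: it respects sums and scalar multiples of polynomials.
assert np.isclose(ev(p + q), ev(p) + ev(q))
assert np.isclose(ev(s * p), s * ev(p))
```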
Let \((V,\langle\cdot{,}\cdot\rangle)\) be a finite dimensional Euclidean space and let \(u \in V.\) Then we obtain a map \[\varphi_u : V \to \mathbb{R}, \qquad v \mapsto \langle u,v\rangle.\] The bilinearity of \(\langle\cdot{,}\cdot\rangle\) implies that \(\varphi_u\) is linear and hence an element of \(V^*.\) We thus obtain a map \(\Phi_{\langle\cdot{,}\cdot\rangle} : V \to V^*\) defined by the rule \[u \mapsto \varphi_u=\langle u,\cdot\rangle\] for all \(u \in V.\) This map is linear and moreover an isomorphism. The linearity is a consequence of the bilinearity of \(\langle\cdot{,}\cdot\rangle,\) and since \(\dim V=\dim V^*,\) it is sufficient to show that \(\operatorname{Ker}\Phi_{\langle\cdot{,}\cdot\rangle}=\{0_V\}.\) So suppose that \(\varphi_u=0_{V^*},\) that is, \(\varphi_u(v)=\langle u,v\rangle=0\) for all \(v \in V.\) Since \(\langle\cdot{,}\cdot\rangle\) is non-degenerate, this implies that \(u=0_V,\) hence \(\Phi_{\langle\cdot{,}\cdot\rangle}\) is injective and thus an isomorphism.
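Concretely, for \(V=\mathbb{R}^3\) with an inner product given by a symmetric positive definite matrix, \(\Phi_{\langle\cdot{,}\cdot\rangle}\) is represented by that matrix. A minimal numpy sketch; the matrix \(G\) and the vector \(u\) are illustrative choices:

```python
import numpy as np

# An inner product on R^3: <u, v> = u^T G v with G symmetric positive definite.
G = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
u = np.array([1., -2., 0.5])
phi_u = lambda v: u @ G @ v     # phi_u = <u, .>, an element of V*

# In the standard basis and its dual basis, Phi is represented by G,
# so Phi is injective (and hence an isomorphism) since G is invertible.
assert np.allclose([phi_u(e) for e in np.eye(3)], u @ G)
assert not np.isclose(np.linalg.det(G), 0.0)
```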
Recall that if \(V\) is a \(\mathbb{K}\)-vector space of dimension \(n \in \mathbb{N},\) then a linear coordinate system on \(V\) is an injective (and hence bijective) linear map \(\boldsymbol{\beta}: V \to \mathbb{K}^n.\) For a linear coordinate system \(\boldsymbol{\beta}\) and \(1\leqslant i\leqslant n,\) we may define \[\nu_i : V \to \mathbb{K}, \qquad v \mapsto [\boldsymbol{\beta}(v)]_i,\] where \([\boldsymbol{\beta}(v)]_i\) denotes the \(i\)-th entry of the vector \(\boldsymbol{\beta}(v) \in \mathbb{K}^n.\) Both \(\boldsymbol{\beta}\) and taking the \(i\)-th entry of a vector in \(\mathbb{K}^n\) are linear maps, hence \(\nu_i : V \to \mathbb{K}\) is linear as well and thus an element of \(V^*.\)

We argue next that if \(\boldsymbol{\beta}: V \to \mathbb{K}^n\) is a linear coordinate system, then \((\nu_1,\ldots,\nu_n)\) is an ordered basis of \(V^*.\) Since \(\dim V^*=n,\) we only need to show that \(\{\nu_1,\ldots,\nu_n\}\) is linearly independent. Suppose therefore that there are scalars \(s_1,\ldots,s_n\in \mathbb{K}\) such that \[\tag{13.1} s_1\nu_1+\cdots+s_n\nu_n=0_{V^*}=o,\] where \(o : V \to \mathbb{K}\) denotes the zero function, that is, \(o(v)=0\) for all \(v \in V.\) Let \(\mathbf{b}=(v_1,\ldots,v_n)\) denote the ordered basis of \(V\) corresponding to the linear coordinate system \(\boldsymbol{\beta},\) so that \(\boldsymbol{\beta}(v_j)=\vec{e}_j\) for all \(1\leqslant j\leqslant n.\) This is equivalent to \[\nu_i(v_j)=[\boldsymbol{\beta}(v_j)]_i=[\vec{e}_j]_i=\delta_{ij}\] for all \(1\leqslant i,j\leqslant n.\) Equation (13.1) must hold for all choices of \(v \in V;\) choosing \(v=v_k\) for \(1\leqslant k\leqslant n\) gives \[s_1\nu_1(v_k)+\cdots+s_n\nu_n(v_k)=s_k=o(v_k)=0,\] so that \(s_1=\cdots=s_n=0.\) Hence \(\{\nu_1,\ldots,\nu_n\}\) is linearly independent and \((\nu_1,\ldots,\nu_n)\) is indeed an ordered basis of \(V^*.\) We may write \[\boldsymbol{\beta}=(\nu_1,\ldots,\nu_n)\] and think of a linear coordinate system \(\boldsymbol{\beta}\) on \(V\) as an ordered basis \((\nu_1,\ldots,\nu_n)\) of \(V^*.\)
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(\mathbf{b}=(v_1,\ldots,v_n)\) an ordered basis of \(V.\) The ordered basis \(\boldsymbol{\beta}=(\nu_1,\ldots,\nu_n)\) of \(V^*\) satisfying \(\nu_i(v_j)=\delta_{ij}\) for all \(1\leqslant i,j\leqslant n\) is called the ordered dual basis of \(\mathbf{b}\).
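For \(V=\mathbb{K}^n,\) if the basis vectors \(v_j\) are the columns of an invertible matrix \(\mathbf{B},\) then the dual basis functionals are the rows of \(\mathbf{B}^{-1},\) since \((\mathbf{B}^{-1}\mathbf{B})_{ij}=\delta_{ij}.\) A quick numerical check (a minimal numpy sketch; the basis matrix is an arbitrary invertible choice):

```python
import numpy as np

# Columns of B are an ordered basis b = (v_1, v_2, v_3) of R^3.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# The dual basis functionals nu_i are the rows of B^{-1}:
# nu_i(v_j) = (B^{-1} B)_{ij} = delta_{ij}.
assert np.allclose(np.linalg.inv(B) @ B, np.eye(3))
```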
13.2 The transpose map
We now come to an important application of the theory of dual vector spaces which leads to a deeper understanding of the matrix transpose.
Let \(V,W\) be \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. The map \(f^T : W^* \to V^*\) defined by the rule \[f^T(\omega)=\omega\circ f\] for all \(\omega \in W^*\) is called the transpose of \(f.\) Notice that for all \(\omega\in W^*\) and for all \(v \in V\) we have \[v\,\lrcorner\,f^T(\omega)=f(v)\,\lrcorner\,\omega=\omega(f(v)).\]
The transpose map is linear as well.
The transpose \(f^T : W^* \to V^*\) of a linear map \(f : V \to W\) is linear.
Proof. We need to show that for all \(s_1,s_2 \in \mathbb{K}\) and \(\omega_1,\omega_2 \in W^*,\) we have \[f^T(s_1\omega_1+s_2\omega_2)=s_1f^T(\omega_1)+s_2f^T(\omega_2).\] This is an equality of elements of \(V^*\) and hence needs to hold when evaluated at every \(v \in V.\) Indeed, by definition, we have for all \(v \in V\) \[\begin{aligned} v \,\lrcorner\,f^T(s_1\omega_1+s_2\omega_2)&=f(v)\,\lrcorner\,(s_1\omega_1+s_2\omega_2)=s_1\omega_1(f(v))+s_2\omega_2(f(v))\\ &=s_1(v\,\lrcorner\,f^T(\omega_1))+s_2(v\,\lrcorner\,f^T(\omega_2)), \end{aligned}\] as claimed.
The relation between the matrix transpose and the transpose mapping is given by the following proposition, which states that the matrix representation of the transpose of a linear map is the transpose of the matrix representation of the linear map.
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces equipped with ordered bases \(\mathbf{b},\mathbf{c}\) and corresponding ordered dual bases \(\boldsymbol{\beta},\boldsymbol{\gamma}\) of \(V^*,W^*,\) respectively. If \(f : V \to W\) is a linear map, then \[\mathbf{M}(f^T,\boldsymbol{\gamma},\boldsymbol{\beta})=\mathbf{M}(f,\mathbf{b},\mathbf{c})^T.\]
Proof. Let \(\mathbf{b}=(v_1,\ldots,v_n),\) \(\mathbf{c}=(w_1,\ldots,w_m)\) and \(\boldsymbol{\beta}=(\nu_1,\ldots,\nu_n),\) \(\boldsymbol{\gamma}=(\omega_1,\ldots,\omega_m).\) Then, by definition, we have for all \(1\leqslant j\leqslant m\) \[f^T(\omega_j)=\sum_{i=1}^n [\mathbf{M}(f^T,\boldsymbol{\gamma},\boldsymbol{\beta})]_{ij}\nu_i.\] Hence for all \(1\leqslant k\leqslant n,\) we obtain \[\begin{aligned} v_k \,\lrcorner\,f^T(\omega_j)&=v_k \,\lrcorner\,\sum_{i=1}^n [\mathbf{M}(f^T,\boldsymbol{\gamma},\boldsymbol{\beta})]_{ij}\nu_i=\sum_{i=1}^n[\mathbf{M}(f^T,\boldsymbol{\gamma},\boldsymbol{\beta})]_{ij}(v_k\,\lrcorner\,\nu_i)\\ &=\sum_{i=1}^n[\mathbf{M}(f^T,\boldsymbol{\gamma},\boldsymbol{\beta})]_{ij}\nu_i(v_k)=[\mathbf{M}(f^T,\boldsymbol{\gamma},\boldsymbol{\beta})]_{kj}, \end{aligned}\] where the last equality uses that \(\nu_i(v_k)=\delta_{ik}.\) By definition, we also have \[\begin{aligned} v_k\,\lrcorner\,f^T(\omega_j)&=f(v_k)\,\lrcorner\,\omega_j=\left(\sum_{i=1}^m [\mathbf{M}(f,\mathbf{b},\mathbf{c})]_{ik}w_i\right)\,\lrcorner\,\omega_j\\ &=\sum_{i=1}^m[\mathbf{M}(f,\mathbf{b},\mathbf{c})]_{ik}\omega_j(w_i)=[\mathbf{M}(f,\mathbf{b},\mathbf{c})]_{jk}=[\mathbf{M}(f,\mathbf{b},\mathbf{c})^T]_{kj}, \end{aligned}\] where the second last equality uses \(\omega_j(w_i)=\delta_{ji}.\)
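The proposition can also be verified numerically: representing functionals on \(\mathbb{K}^n\) as row vectors, the dual basis of a basis matrix consists of the rows of its inverse, and the matrix of \(f^T\) can be computed column by column. A minimal numpy sketch; all matrices are illustrative choices:

```python
import numpy as np

Bb = np.array([[1., 1.],
               [0., 1.]])                 # basis b of V = R^2 (columns)
Cc = np.array([[1., 0., 0.],
               [1., 1., 0.],
               [0., 1., 1.]])             # basis c of W = R^3 (columns)
F = np.array([[1., 2.],
              [0., 1.],
              [4., 0.]])                  # f in standard coordinates

M = np.linalg.solve(Cc, F @ Bb)           # M(f, b, c)

# Functionals as row vectors: the dual basis gamma of c consists of the
# rows of Cc^{-1}.
gamma = np.linalg.inv(Cc)

# Column j of M(f^T, gamma, beta): the beta-coordinates of omega_j o f.
# A row vector r has beta-coordinates r @ Bb, since r = (r @ Bb) @ Bb^{-1}
# and the rows of Bb^{-1} are the dual basis beta.
MT = np.column_stack([(gamma[j] @ F) @ Bb for j in range(3)])

assert np.allclose(MT, M.T)
```

Here the columns of `MT` are computed independently of `M`, exactly following the definition of \(f^T(\omega_j)=\omega_j\circ f.\)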
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(f : V \to V\) a linear map. Then \(\det(f^T)=\det(f)\) and \(\operatorname{Tr}(f^T)=\operatorname{Tr}(f).\)
Proof. The proof is an exercise.
Recall that for matrices \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) and \(\mathbf{B}\in M_{n,p}(\mathbb{K}),\) we have \((\mathbf{A}\mathbf{B})^T=\mathbf{B}^T\mathbf{A}^T.\) Correspondingly, let \(V,W,Z\) be finite dimensional vector spaces and \(f : V \to W\) and \(g : W \to Z\) be linear maps. Then we obtain \((g\circ f)^T=f^T\circ g^T.\) Indeed, for all \(\zeta \in Z^*\) we have \[(g\circ f)^T(\zeta)=\zeta\circ g\circ f=f^T(\zeta\circ g)=f^T(g^T(\zeta))=(f^T\circ g^T)(\zeta).\]
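For matrices this is the familiar identity \((\mathbf{B}\mathbf{A})^T=\mathbf{A}^T\mathbf{B}^T,\) which a two-line numpy check confirms (the matrices are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # f : R^2 -> R^3
B = np.array([[1., 0., 1.], [0., 2., 0.]])    # g : R^3 -> R^2

# (g o f)^T = f^T o g^T corresponds to (BA)^T = A^T B^T.
assert np.allclose((B @ A).T, A.T @ B.T)
```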
13.3 Properties of the transpose
For a subspace \(U\subset V\) we can consider those elements of \(V^*\) that map all vectors of \(U\) to \(0.\)
Let \(V\) be a \(\mathbb{K}\)-vector space and \(U\subset V\) a subspace. The annihilator of \(U\) is the subspace \[U^0=\left\{\nu \in V^*\,|\, \nu(u)=0\;\forall\, u \in U\right\}.\]
Recall that a subset \(U\subset V\) of a \(\mathbb{K}\)-vector space \(V\) is called a vector subspace of \(V\) if \(U\) is non-empty and if \[\tag{3.8} s_1\cdot_Vv_1+_Vs_2\cdot_V v_2 \in U\quad \text{for all}\; s_1,s_2 \in \mathbb{K}\; \text{and all}\; v_1,v_2 \in U.\] Using this criterion, one checks easily that \(U^0\) is indeed a subspace of \(V^*.\)
Consider \(V=\mathsf{P}(\mathbb{R})\) and \(U\) to be the subspace of polynomials which contain \(x^2\) as a factor, \[U=\left\{p \in \mathsf{P}(\mathbb{R})\,|\,\text{there exists }q \in \mathsf{P}(\mathbb{R})\text{ such that } p(x)=x^2q(x)\,\forall\, x\in \mathbb{R}\right\}.\] Define a linear map \(\varphi : \mathsf{P}(\mathbb{R}) \to \mathbb{R}\) by the rule \[\varphi(p)=p^{\prime}(0)\] for all \(p \in \mathsf{P}(\mathbb{R}),\) where \(p^{\prime}\) denotes the derivative of \(p\) with respect to \(x.\) Then \(\varphi \in U^0:\) if \(p(x)=x^2q(x),\) then \(p^{\prime}(x)=2xq(x)+x^2q^{\prime}(x),\) so \(\varphi(p)=p^{\prime}(0)=0.\)
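The example is easy to test numerically. A minimal numpy sketch; the polynomial \(q\) is an arbitrary choice:

```python
import numpy as np
from numpy.polynomial import Polynomial

phi = lambda p: p.deriv()(0.0)    # phi(p) = p'(0)

q = Polynomial([3., -1., 2.])     # an arbitrary polynomial q
p = Polynomial([0., 0., 1.]) * q  # p(x) = x^2 * q(x), an element of U

assert np.isclose(phi(p), 0.0)    # phi vanishes on U, i.e. phi lies in U^0
```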
Let \((V,\langle\cdot{,}\cdot\rangle)\) be a finite dimensional Euclidean space and \(U\subset V\) a subspace. Recall that \(\langle\cdot{,}\cdot\rangle\) gives us an isomorphism \(\Phi_{\langle\cdot{,}\cdot\rangle} : V \to V^*,\) \(u \mapsto \langle u,\cdot\rangle.\) Observe that \(\Phi_{\langle\cdot{,}\cdot\rangle}(U^{\perp})\subset U^0.\) Indeed, let \(v \in U^{\perp};\) then \[\varphi_v(u)=\langle v,u\rangle=0\] for all \(u \in U.\) In fact, \(\Phi_{\langle\cdot{,}\cdot\rangle}(U^{\perp})=U^0.\) To see this, consider an element \(\nu \in U^0.\) Since \(\Phi_{\langle\cdot{,}\cdot\rangle}\) is surjective, \(\nu\) can be written as \(\nu=\varphi_v\) for some vector \(v \in V.\) Now for all \(u \in U\) we have \[\nu(u)=0=\langle v,u\rangle,\] which shows that \(v \in U^{\perp}.\) The restriction of \(\Phi_{\langle\cdot{,}\cdot\rangle}\) to \(U^{\perp}\) is thus an isomorphism from \(U^{\perp}\) to \(U^0.\)
Previously we saw that for a finite dimensional Euclidean space \((V,\langle\cdot{,}\cdot\rangle)\) and a subspace \(U\subset V\) we have that \(U^0\) is isomorphic to \(U^{\perp}.\) Since \(V=U\oplus U^{\perp},\) this implies that \(\dim V=\dim U+\dim U^0.\) We will give a proof of this fact which also holds over the complex numbers (and in fact over an arbitrary field).
For a finite dimensional \(\mathbb{K}\)-vector space \(V\) and a subspace \(U\subset V\) we have \[\dim V=\dim U+\dim U^0.\]
For the proof we need the following lemma, which shows that we can always extend \(\mathbb{K}\)-valued linear mappings from subspaces to the whole vector space:
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(U\subset V\) a subspace. Then for every \(\omega \in U^*\) there exists an \(\Omega \in V^*\) such that \(\Omega(u)=\omega(u)\) for all \(u \in U.\)
Recall that for a subspace \(U\) of a finite dimensional \(\mathbb{K}\)-vector space \(V\) there exists a subspace \(U^{\prime}\) such that \(V=U\oplus U^{\prime}.\) This gives the lemma: for \(\omega \in U^*,\) defining \(\Omega(u+u^{\prime})=\omega(u)\) for all \(u \in U\) and \(u^{\prime} \in U^{\prime}\) yields a linear map \(\Omega \in V^*\) extending \(\omega.\)
Recall also the rank-nullity theorem: for a linear map \(f : V \to W\) between finite dimensional \(\mathbb{K}\)-vector spaces, \[\dim(V)=\dim \operatorname{Ker}(f)+\dim \operatorname{Im}(f)=\operatorname{nullity}(f)+\operatorname{rank}(f).\]
Proof of the theorem. Consider the restriction map \(R : V^* \to U^*,\) \(\nu \mapsto \nu|_U.\) The map \(R\) is linear, surjective by the lemma, and its kernel is exactly \(U^0.\) The rank-nullity theorem applied to \(R\) gives \(\dim V^*=\dim U^0+\dim U^*,\) and since \(\dim V^*=\dim V\) and \(\dim U^*=\dim U\) (see the observation below), the claim follows.
Notice that if \(V\) is finite dimensional, then \[\dim(V^*)=\dim(\mathrm{Hom}(V,\mathbb{K}))=\dim(V)\dim(\mathbb{K})=\dim(V),\] since \(\dim \mathbb{K}=1.\) Therefore, \(V\) and \(V^*\) have the same dimension and are thus isomorphic vector spaces by Proposition 3.80.
The kernel of the transpose of a linear map is related to the image of the map:
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then we have
(i) \(\operatorname{Ker}f^T=(\operatorname{Im}f)^0\);
(ii) \(\dim \operatorname{Ker}f^T=\dim \operatorname{Ker}f+\dim W -\dim V.\)
Proof. (i) An element \(\omega \in W^*\) lies in the kernel of \(f^T : W^* \to V^*\) if and only if \[v\,\lrcorner\,f^T(\omega)=f(v) \,\lrcorner\,\omega=0\] for all \(v \in V.\) Equivalently, \(w \,\lrcorner\,\omega=0\) for all elements \(w\) in the image of \(f,\) that is, \(\omega \in (\operatorname{Im}f)^0.\)
(ii) By (i) and the dimension formula for the annihilator (applied to the subspace \(\operatorname{Im}f\subset W\)), we have \(\dim \operatorname{Ker}f^T=\dim(\operatorname{Im}f)^0=\dim W-\dim \operatorname{Im}f.\) By the rank-nullity theorem, \(\dim \operatorname{Im}f=\dim V-\dim \operatorname{Ker}f,\) and the claim follows.
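Formula (ii) can be checked numerically: in coordinates, \(\dim\operatorname{Ker}f_\mathbf{A}=n-\operatorname{rank}(\mathbf{A})\) and \(\dim\operatorname{Ker}(f_\mathbf{A})^T=m-\operatorname{rank}(\mathbf{A}^T).\) A minimal numpy sketch with an illustrative rank-deficient matrix:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # f_A : R^4 -> R^3, here rank(A) = 2
m, n = A.shape

nullity   = n - np.linalg.matrix_rank(A)     # dim Ker f
nullity_T = m - np.linalg.matrix_rank(A.T)   # dim Ker f^T

assert nullity_T == nullity + m - n          # formula (ii)
```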
Surjectivity of a linear map corresponds to injectivity of its transpose:
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then \(f\) is surjective if and only if \(f^T\) is injective.
Proof. Recall that a linear map is injective if and only if its kernel is trivial. By the previous proposition, \(\operatorname{Ker}f^T=(\operatorname{Im}f)^0.\) Now \(f\) is surjective if and only if \(\operatorname{Im}f=W,\) and by the dimension formula for the annihilator this holds if and only if \(\dim(\operatorname{Im}f)^0=\dim W-\dim \operatorname{Im}f=0,\) that is, if and only if \(\operatorname{Ker}f^T=\{0_{W^*}\},\) i.e. if and only if \(f^T\) is injective.
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then we have
(i) \(\dim \operatorname{Im}(f^T)=\dim \operatorname{Im}(f)\);
(ii) \(\operatorname{Im}(f^T)=(\operatorname{Ker}f)^0.\)
Proof. (i) By part (ii) of the previous proposition and the rank-nullity theorem applied to \(f^T : W^* \to V^*,\) \[\dim \operatorname{Im}(f^T)=\dim W^*-\dim \operatorname{Ker}f^T=\dim W-(\dim \operatorname{Ker}f+\dim W-\dim V)=\dim V-\dim \operatorname{Ker}f=\dim \operatorname{Im}(f),\] where the last equality is the rank-nullity theorem applied to \(f.\)
(ii) For \(\omega \in W^*\) and \(v \in \operatorname{Ker}f\) we have \(v\,\lrcorner\,f^T(\omega)=\omega(f(v))=\omega(0_W)=0,\) so \(\operatorname{Im}(f^T)\subset(\operatorname{Ker}f)^0.\) Moreover, by the dimension formula for the annihilator and (i), \(\dim(\operatorname{Ker}f)^0=\dim V-\dim \operatorname{Ker}f=\dim \operatorname{Im}(f)=\dim \operatorname{Im}(f^T),\) so the inclusion is an equality.
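Numerically, part (ii) says that the row space of a matrix, which represents \(\operatorname{Im}(f^T)\) in coordinates, annihilates its null space. A minimal numpy sketch; the matrix and the functional \(w\) are illustrative choices:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # f_A : R^4 -> R^3

# A basis of Ker f_A from the SVD: rows of Vt beyond the rank span the kernel.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[np.sum(s > 1e-10):]

# In coordinates f^T(omega) = w^T A, so Im(f^T) is the row space of A;
# every such functional vanishes on Ker f ...
w = np.array([1., -2., 3.])
assert np.allclose((w @ A) @ kernel_basis.T, 0.0)

# ... and the dimensions match: dim Im(f^T) = dim Im(f).
assert np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A)
```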
Dually, injectivity of a linear map corresponds to surjectivity of its transpose:
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then \(f\) is injective if and only if \(f^T\) is surjective.
Proof. By part (ii) of the proposition above, \(\operatorname{Im}(f^T)=(\operatorname{Ker}f)^0,\) and by the dimension formula for the annihilator, \(\dim(\operatorname{Ker}f)^0=\dim V-\dim \operatorname{Ker}f.\) Hence \(f^T\) is surjective if and only if \(\dim \operatorname{Im}(f^T)=\dim V^*=\dim V,\) which holds if and only if \(\dim \operatorname{Ker}f=0.\) Recalling that a linear map is injective if and only if its kernel is trivial, this is equivalent to \(f\) being injective.
13.3.1 The rank of a matrix
Let \(V,W\) be \(\mathbb{K}\)-vector spaces with \(W\) finite dimensional. The rank of a linear map \(f : V \to W\) is defined as \[\operatorname{rank}(f)=\dim \operatorname{Im}(f).\] If \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) is a matrix, then we define \[\operatorname{rank}(\mathbf{A})=\operatorname{rank}(f_\mathbf{A}).\]
Recall that if \(V,W\) are finite dimensional \(\mathbb{K}\)-vector spaces, \(f : V \to W\) a linear map and \(\{v_1,\ldots,v_n\}\) a basis of \(V,\) then \[\operatorname{Im}(f)=\operatorname{span}\{f(v_1),\ldots,f(v_n)\}.\]
The row rank of every matrix \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) equals its column rank. (Here the column rank of \(\mathbf{A}\) is the dimension of the span of its columns in \(\mathbb{K}^m,\) and the row rank is the dimension of the span of its rows.)
Proof. Applying the recalled fact to \(f_\mathbf{A} : \mathbb{K}^n \to \mathbb{K}^m\) and the standard basis \((\vec e_1,\ldots,\vec e_n)\) of \(\mathbb{K}^n\) shows that \(\operatorname{Im}(f_\mathbf{A})\) is spanned by the columns \(f_\mathbf{A}(\vec e_j)\) of \(\mathbf{A},\) so \(\operatorname{rank}(\mathbf{A})\) equals the column rank of \(\mathbf{A}.\) Moreover, with respect to the standard bases and their dual bases, the transpose map \((f_\mathbf{A})^T\) is represented by the matrix \(\mathbf{A}^T,\) so that \[\operatorname{rank}(\mathbf{A}^T)=\dim \operatorname{Im}((f_\mathbf{A})^T)=\dim \operatorname{Im}(f_\mathbf{A})=\operatorname{rank}(\mathbf{A}),\] where the middle equality is part (i) of the proposition above. Since the column rank of \(\mathbf{A}^T\) equals the row rank of \(\mathbf{A},\) the claim follows.
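The theorem is easy to confirm numerically (a minimal numpy sketch; the matrix is an arbitrary rank-2 choice):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

col_rank = np.linalg.matrix_rank(A)    # dimension of the column space
row_rank = np.linalg.matrix_rank(A.T)  # column space of A^T = row space of A

assert col_rank == row_rank == 2
```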
Exercises
Show that the dual basis is indeed uniquely defined by the condition \(\nu_i(v_j)=\delta_{ij}\) for all \(1\leqslant i,j\leqslant n.\)
Solution
Let \(\mathbf{b}=(v_1,\ldots,v_n)\) be an ordered basis of \(V\) and suppose that \(\boldsymbol{\beta}= (\nu_1,\ldots,\nu_n)\) and \(\boldsymbol{\beta}'=(\nu'_1,\ldots,\nu'_n)\) are two ordered bases of \(V^*\) both satisfying \(\nu_i(v_j)=\nu'_i(v_j)=\delta_{ij}.\) Every \(v\in V\) can uniquely be written as \[v=\sum_{j=1}^n s_jv_j\] for suitable scalars \(s_1,\ldots,s_n\in\mathbb{K}.\) Then we have for all \(i=1,\ldots,n\) \[\begin{aligned} \nu_i(v) & = \sum_{j=1}^ns_j\nu_i(v_j) = s_i,\\ \nu'_i(v) & = \sum_{j=1}^ns_j\nu'_i(v_j) = s_i, \end{aligned}\] which shows that \(\nu_i=\nu'_i\) and the claim follows.
For a finite dimensional \(\mathbb{K}\)-vector space \(V,\) we may consider the dual of the dual space, that is \((V^*)^*.\) So an element of \((V^*)^*\) is a linear map which takes an element of \(V^*\) as its input and produces a scalar as its output. Consider the map \(\Xi : V \to (V^*)^*\) defined by the rule \[\nu\,\lrcorner\,\Xi(v)=v\,\lrcorner\,\nu=\nu(v)\] for all \(v \in V\) and all \(\nu \in V^*.\) That is, the map \(\Xi(v) \in (V^*)^*\) applied to \(\nu \in V^*\) is given by the application of \(\nu\) to \(v.\) Show that \(\Xi\) is an isomorphism.
Solution
Since the dimensions of \(V\) and \((V^*)^*\) agree, it is enough to show that the kernel of \(\Xi\) is trivial. Let \(v\in\operatorname{Ker}\Xi.\) Then \(\Xi(v):\nu \mapsto \nu(v)\) is the zero map, which means that \(\nu(v) = 0\) for all \(\nu\in V^*,\) which in turn implies that \(v=0_V:\) if \(v\neq 0_V,\) we may extend \(\{v\}\) to a basis of \(V,\) and the first vector of the corresponding dual basis is an element \(\nu \in V^*\) with \(\nu(v)=1\neq 0.\)
Consider \(V=\mathbb{R}^5\) equipped with the ordered standard basis \(\mathbf{e}=(\vec{e}_1,\ldots,\vec{e}_5)\) and let \(U=\operatorname{span}\{\vec{e}_1,\vec{e}_2\}.\) Show that \[U^0=\operatorname{span}\{\vec{\varepsilon}_3,\vec{\varepsilon}_4,\vec{\varepsilon}_5\},\] where \(\boldsymbol{\varepsilon}=(\vec{\varepsilon}_1,\ldots,\vec{\varepsilon}_5)\) denotes the ordered dual basis of \(\mathbf{e}.\)
Solution
Every \(\nu \in V^*\) can be written as \[\nu = \sum_{i=1}^5 \nu_i \vec \varepsilon_i\] for suitable scalars \(\nu_1,\ldots,\nu_5\in\mathbb{R}.\) Let \(\nu\in U^0.\) Then \[0 = \nu(\vec e_1) = \sum_{i=1}^5 \nu_i \vec\varepsilon_i(\vec e_1) = \nu_1,\] and similarly we find \(\nu_2=0.\) Therefore, every \(\nu\in U^0\) can be written as \[\nu = \nu_3\vec\varepsilon_3 + \nu_4\vec\varepsilon_4 + \nu_5\vec\varepsilon_5,\] so that \(U^0 \subset \operatorname{span}\{\vec\varepsilon_3,\vec\varepsilon_4,\vec\varepsilon_5\}.\) Conversely, \(\vec\varepsilon_3,\vec\varepsilon_4,\vec\varepsilon_5\) each vanish on \(\vec e_1\) and \(\vec e_2\) and hence on all of \(U,\) so \(\operatorname{span}\{\vec\varepsilon_3,\vec\varepsilon_4,\vec\varepsilon_5\}\subset U^0\) and the two spaces are equal. The set \(\{\vec\varepsilon_3,\vec\varepsilon_4,\vec\varepsilon_5\}\) is a subset of a linearly independent set and therefore linearly independent. This shows that \(\{\vec\varepsilon_3,\vec\varepsilon_4,\vec\varepsilon_5\}\) is a basis of \(U^0.\)