3.6 The dimension
3.6.1 Defining the dimension
Intuitively, we might define the dimension of a finite dimensional vector space \(V\) to be the number of elements of any basis of \(V,\) so that a line is \(1\)-dimensional, a plane is \(2\)-dimensional and so on. Of course, this definition only makes sense if we know that there always exists a basis of \(V\) and that the number of elements in the basis is independent of the chosen basis. Perhaps surprisingly, these facts take quite a bit of work to prove.
Let \(V\) be a \(\mathbb{K}\)-vector space.
(i) Any subset \(\mathcal{S}\subset V\) generating \(V\) admits a subset \(\mathcal{T}\subset \mathcal{S}\) that is a basis of \(V.\)
(ii) Any subset \(\mathcal{S}\subset V\) that is linearly independent in \(V\) is contained in a subset \(\mathcal{T}\subset V\) that is a basis of \(V.\)
(iii) If \(\mathcal{S}_1,\mathcal{S}_2\) are bases of \(V,\) then there exists a bijective map \(f : \mathcal{S}_1 \to \mathcal{S}_2.\)
(iv) If \(V\) is finite dimensional, then any basis of \(V\) is a finite set and the number of elements in the basis is independent of the choice of the basis.
Every \(\mathbb{K}\)-vector space \(V\) admits at least one basis.
Let \(\mathcal{X}\) be a set with finitely many elements. We write \(\operatorname{Card}(\mathcal{X})\) – for cardinality – for the number of elements of \(\mathcal{X}.\)
The dimension of a finite dimensional \(\mathbb{K}\)-vector space \(V,\) denoted by \(\dim(V)\) or \(\dim_{\mathbb{K}}(V),\) is the number of elements of any basis of \(V.\)
The zero vector space \(\{0\}\) has the empty set as a basis and hence is \(0\)-dimensional.
A field \(\mathbb{K}\) – thought of as a \(\mathbb{K}\)-vector space – has \(\{1_{\mathbb{K}}\}\) as a basis and hence is \(1\)-dimensional.
The vector space \(\mathbb{K}^n\) has \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) as a basis and hence is \(n\)-dimensional.
The vector space \(M_{m,n}(\mathbb{K})\) has the matrices \(\mathbf{E}_{k,l}\) (with entry \(1\) in position \((k,l)\) and \(0\) elsewhere) for \(1\leqslant k \leqslant m\) and \(1\leqslant l \leqslant n\) as a basis, hence it is \(mn\)-dimensional.
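These dimension counts can be checked numerically. The sketch below (Python with NumPy, taking \(\mathbb{K}=\mathbb{R}\); a floating-point rank computation is an illustration, not a proof) identifies the dimension of a span with the rank of the matrix whose rows or columns are the spanning vectors.

```python
import numpy as np

# The standard basis vectors e_1, e_2, e_3 of R^3, stacked as columns.
E = np.eye(3)
# A set of vectors spans R^3 exactly when the matrix having them as
# columns has rank 3, so dim(R^3) = 3.
dim_R3 = int(np.linalg.matrix_rank(E))

# The 6 matrices E_{k,l} of M_{2,3}(R), each flattened to a vector in R^6;
# stacked as rows they give the 6x6 identity, so they are independent.
basis_M23 = np.eye(6)
dim_M23 = int(np.linalg.matrix_rank(basis_M23))

print(dim_R3, dim_M23)
```

The same rank criterion is used in the illustrations that follow.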
Let \(V\) be a \(\mathbb{K}\)-vector space, \(\mathcal{S}\subset V\) linearly independent and \(v_0 \in V.\) If \(v_0 \notin \operatorname{span}(\mathcal{S}),\) then \(\mathcal{S}\cup \{v_0\}\) is linearly independent.
Proof. Let \(\mathcal{T}\) be a finite subset of \(\mathcal{S}\cup\{v_0\}.\) If \(v_0 \notin \mathcal{T},\) then \(\mathcal{T}\) is linearly independent, as \(\mathcal{S}\) is linearly independent. So suppose \(v_0 \in \mathcal{T}.\) There exist distinct elements \(v_1,\ldots,v_n\) of \(\mathcal{S}\) so that \(\mathcal{T}=\{v_0,v_1,\ldots,v_n\}.\) Suppose \(s_0v_0+s_1v_1+\cdots+s_nv_n=0_V\) for some scalars \(s_0,s_1,\ldots,s_n \in \mathbb{K}.\) If \(s_0 \neq 0,\) then we can write \[v_0=-\sum_{i=1}^n \frac{s_i}{s_0}v_i,\] contradicting the assumption that \(v_0 \notin \operatorname{span}(\mathcal{S}).\) Hence we must have \(s_0=0.\) Since \(s_0=0\) it follows that \(s_1v_1+\cdots+s_nv_n=0_V\) so that \(s_1=\cdots=s_n=0\) by the linear independence of \(\mathcal{S}.\) We conclude that \(\mathcal{S}\cup\{v_0\}\) is linearly independent.
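A numerical illustration of this lemma (NumPy over \(\mathbb{R},\) with vectors chosen for the example): appending a vector that lies outside the span raises the rank of the column matrix by one, which is exactly linear independence of the enlarged set.

```python
import numpy as np

# S = {v1, v2} is linearly independent in R^3 and v0 lies outside span(S).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v0 = np.array([0.0, 0.0, 1.0])  # a*v1 + b*v2 = (a, b, a+b) never equals v0

rank_S = int(np.linalg.matrix_rank(np.column_stack([v1, v2])))
rank_S_v0 = int(np.linalg.matrix_rank(np.column_stack([v1, v2, v0])))
print(rank_S, rank_S_v0)  # the rank increases from 2 to 3
```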
Let \(V\) be a \(\mathbb{K}\)-vector space and \(\mathcal{S}\subset V\) a generating set. If \(v_0 \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) then \(\mathcal{S}\setminus\{v_0\}\) is a generating set.
Proof. Since \(v_0 \in \operatorname{span}(\mathcal{S} \setminus\{v_0\}),\) there exist vectors \(v_1,\ldots,v_n \in \mathcal{S}\) with \(v_i\neq v_0\) and scalars \(s_1,\ldots,s_n\) so that \(v_0=s_1v_1+\cdots+s_n v_n.\) Suppose \(v \in V.\) Since \(\mathcal{S}\) is a generating set, there exist vectors \(w_1,\ldots,w_k \in \mathcal{S}\) and scalars \(t_1,\ldots,t_k\) so that \(v=t_1w_1+\cdots+t_kw_k.\) If \(\{w_1,\ldots,w_k\}\) does not contain \(v_0,\) then \(v \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) so assume that \(v_0 \in \{w_1,\ldots,w_k\}.\) After possibly relabelling the elements of \(\{w_1,\ldots,w_k\}\) we can assume that \(v_0=w_1.\) Hence we have \[v=t_1\left(s_1v_1+\cdots+s_n v_n\right)+t_2w_2+\cdots+t_kw_k\] with \(v_0 \neq v_i\) for \(1\leqslant i\leqslant n\) and \(v_0 \neq w_j\) for \(2\leqslant j\leqslant k.\) It follows that \(v \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) as claimed.
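The companion lemma can be illustrated the same way: discarding a generator that is a linear combination of the others leaves the span, and hence the rank, unchanged.

```python
import numpy as np

# S = {v0, v1, v2} generates R^2, but v0 = v1 + v2 already lies in
# span(S \ {v0}), so removing v0 keeps a generating set.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v0 = v1 + v2

rank_with = int(np.linalg.matrix_rank(np.column_stack([v0, v1, v2])))
rank_without = int(np.linalg.matrix_rank(np.column_stack([v1, v2])))
print(rank_with, rank_without)  # both spans are all of R^2
```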
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(\mathcal{S}\subset V\) a finite set with \(n\) elements which generates \(V.\) If \(\mathcal{T}\subset V\) has more than \(n\) elements, then \(\mathcal{T}\) is linearly dependent.
Proof. We show that if \(\mathcal{T}\) has exactly \(n+1\) elements, then it is linearly dependent. In the other cases, \(\mathcal{T}\) contains a subset with exactly \(n+1\) elements and if this subset is linearly dependent, then so is \(\mathcal{T}.\)
We prove the claim by induction on \(n\geqslant 0.\) Let \(\mathcal{A }(n)\) be the following statement: “For any \(\mathbb{K}\)-vector space \(V,\) if there exists a generating subset \(\mathcal{S}\subset V\) with \(n\) elements, then all subsets of \(V\) with \(n+1\) elements are linearly dependent.”
We first show that \(\mathcal{A}(0)\) is true. A subset with zero elements is the empty set \(\emptyset.\) Hence \(V=\operatorname{span}(\emptyset)=\{0_V\}\) is the zero vector space. The only subset of \(V\) with \(1\) element is \(\{0_V\}.\) Since \(1_{\mathbb{K}}\cdot 0_V=0_V\) is a nontrivial linear relation, the set \(\{0_V\}\) is linearly dependent, thus showing that \(\mathcal{A}(0)\) is correct.
Suppose \(n\geqslant 1\) and that \(\mathcal{A }(n-1)\) is true. We want to argue that \(\mathcal{A }(n)\) is true as well. Suppose \(V\) is generated by the set \(\mathcal{S}=\{v_1,\ldots,v_n\}\) with \(n\) elements. Let \(\mathcal{T}=\{w_1,\ldots,w_{n+1}\}\) be a subset with \(n+1\) elements. We need to show that \(\mathcal{T}\) is linearly dependent. Since \(\mathcal{S}\) is generating, we have scalars \(s_{ij} \in \mathbb{K}\) with \(1\leqslant i\leqslant n+1\) and \(1\leqslant j\leqslant n\) so that \[\tag{3.10} w_i=\sum_{j=1}^n s_{ij}v_j\] for all \(1\leqslant i\leqslant n+1.\) We now consider two cases:
Case 1. If \(s_{11}=\cdots=s_{{n+1},1}=0,\) then (3.10) gives for all \(1\leqslant i\leqslant n+1\) \[w_i=\sum_{j=2}^ns_{ij}v_j.\] Notice that the summation now starts at \(j=2.\) This implies that \(\mathcal{T}\subset W,\) where \(W=\operatorname{span}\{v_2,\ldots,v_n\}.\) We can now apply \(\mathcal{A }(n-1)\) to the vector space \(W,\) the generating set \(\mathcal{S}_1=\{v_2,\ldots,v_n\}\) and the subset with \(n\) elements being \(\mathcal{T}_1=\{w_1,\ldots,w_n\}.\) It follows that \(\mathcal{T}_1\) is linearly dependent and hence so is \(\mathcal{T},\) as it contains \(\mathcal{T}_1.\)
Case 2. Suppose there exists \(i\) so that \(s_{i1}\neq 0.\) Then, after possibly relabelling the vectors, we can assume that \(s_{11}\neq 0.\) For \(2\leqslant i\leqslant n+1\) we thus obtain from (3.10) \[\begin{aligned} w_i-\frac{s_{i1}}{s_{11}}w_1&=w_i-\frac{s_{i1}}{s_{11}}\left(\sum_{j=1}^ns_{1j}v_j\right)=\sum_{j=1}^ns_{ij}v_j-\frac{s_{i1}}{s_{11}}\left(\sum_{j=1}^ns_{1j}v_j\right)\\ &=\sum_{j=1}^n\left(s_{ij}-\frac{s_{i1}}{s_{11}}s_{1j}\right)v_j\\ &=\underbrace{\left(s_{i1}-\frac{s_{i1}}{s_{11}}s_{11}\right)}_{=0}v_1+\sum_{j=2}^n\left(s_{ij}-\frac{s_{i1}}{s_{11}}s_{1j}\right)v_j\\ &=\sum_{j=2}^n\left(s_{ij}-\frac{s_{i1}}{s_{11}}s_{1j}\right)v_j. \end{aligned}\] Hence, setting \[\tag{3.11} \hat{w}_i=w_i-\frac{s_{i1}}{s_{11}}w_1\] for \(2\leqslant i\leqslant n+1\) and \(\hat{s}_{ij}=s_{ij}-\frac{s_{i1}}{s_{11}}s_{1j}\) for \(2\leqslant i\leqslant n+1\) and \(2\leqslant j\leqslant n,\) we obtain the relations \[\hat{w}_i=\sum_{j=2}^n\hat{s}_{ij}v_j\] for all \(2\leqslant i\leqslant n+1.\) Therefore, the set \(\hat{\mathcal{T}}=\{\hat{w}_2,\ldots,\hat{w}_{n+1}\}\) with \(n\) elements is contained in \(W=\operatorname{span}\{v_2,\ldots,v_n\},\) which is generated by \(n-1\) elements. Applying \(\mathcal{A}(n-1),\) we conclude that \(\hat{\mathcal{T}}\) is linearly dependent. It follows that we have scalars \(t_2,\ldots,t_{n+1}\) not all zero so that \[t_2\hat{w}_2+\cdots+t_{n+1}\hat{w}_{n+1}=0_V.\] Using (3.11), we get \[\sum_{i=2}^{n+1}t_i\left(w_i-\frac{s_{i1}}{s_{11}}w_1\right)=-\left(\sum_{i=2}^{n+1}t_i\frac{s_{i1}}{s_{11}}\right)w_1+t_2w_2+\cdots+t_{n+1}w_{n+1}=0_V.\] Since not all scalars \(t_2,\ldots,t_{n+1}\) are zero, it follows that \(w_1,\ldots,w_{n+1}\) are linearly dependent and hence so is \(\mathcal{T}.\)
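The lemma can be checked in examples: three vectors in \(\mathbb{R}^2,\) which is generated by two vectors, are always linearly dependent, and the singular value decomposition exhibits an explicit nontrivial relation (an illustration over \(\mathbb{R},\) not part of the proof).

```python
import numpy as np

# R^2 is generated by n = 2 vectors, so any set T of 3 vectors in R^2 is
# linearly dependent: the 2x3 matrix with T as columns has rank <= 2 < 3.
T = np.column_stack([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert np.linalg.matrix_rank(T) < T.shape[1]

# Exhibit a relation t1*w1 + t2*w2 + t3*w3 = 0: the last right-singular
# vector of T spans its one-dimensional kernel.
t = np.linalg.svd(T)[2][-1]
residual = float(np.linalg.norm(T @ t))
print(residual)  # essentially zero, while t itself is a unit vector
```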
(iv) is an immediate consequence of (iii): by (i), a finite dimensional \(V\) has a finite basis, and (iii) puts any other basis of \(V\) in bijection with it, so every basis is finite with the same number of elements.
3.6.2 Properties of the dimension
Isomorphic finite dimensional vector spaces have the same dimension.
Let \(f : V\to W\) be linear and \(\mathcal{S}\subset V\) a generating set. If \(f\) is surjective, then \(f(\mathcal{S})\) is a generating set for \(W.\) Furthermore, if \(f\) is bijective, then \(V\) is finite dimensional if and only if \(W\) is finite dimensional.
Let \(f : V \to W\) be an injective linear map. If \(\mathcal{S}\subset V\) is linearly independent, then \(f(\mathcal{S}) \subset W\) is also linearly independent.
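Both statements can be illustrated with a concrete injective map (a hypothetical matrix, \(\mathbb{K}=\mathbb{R}\)): a matrix with linearly independent columns induces an injective map, and it sends the independent set \(\{\vec{e}_1,\vec{e}_2\}\) to an independent set.

```python
import numpy as np

# f_A : R^2 -> R^3; the columns of A are independent, so f_A is injective.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2

# The images f(e1), f(e2) are just the columns of A, and they remain
# linearly independent: the image set still has rank 2.
image_vectors = A @ np.eye(2)
rank_image = int(np.linalg.matrix_rank(image_vectors))
print(rank_image)
```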
A subspace of a finite dimensional \(\mathbb{K}\)-vector space is finite dimensional as well.
Proof. Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(U\subset V\) a subspace. Let \(\mathcal{S}=\{v_1,\ldots,v_n\}\) be a basis of \(V.\) For \(1\leqslant i\leqslant n,\) we define \(U_i=U\cap\operatorname{span}\{v_1,\ldots,v_i\}.\) By construction, each \(U_i\) is a subspace and \(U_1\subset U_2 \subset \cdots\subset U_n=U,\) since \(\mathcal{S}\) is a basis of \(V.\)
We will show inductively that all \(U_i\) are finite dimensional. Notice that \(U_1\) is a subspace of \(\operatorname{span}\{v_1\}.\) The only subspaces of \(\operatorname{span}\{v_1\}\) are \(\{0_V\}\) and \(\operatorname{span}\{v_1\}=\{tv_1\,|\,t \in \mathbb{K}\}\) itself; both are finite dimensional, hence \(U_1\) is finite dimensional.
Assume \(i\geqslant 2.\) We will show next that if \(U_{i-1}\) is finite dimensional, then so is \(U_{i}.\) Let \(\mathcal{T}_{i-1}\) be a basis of \(U_{i-1}.\) If \(U_i=U_{i-1},\) then \(U_i\) is finite dimensional as well, so assume there exists a non-zero vector \(w \in U_{i}\setminus U_{i-1}.\) Since \(\mathcal{S}\) is a basis of \(V\) and since \(w \in \operatorname{span}\{v_1,\ldots,v_i\},\) there exist scalars \(s_1,\ldots,s_i\) so that \(w=s_1v_1+\cdots+s_iv_i.\) By assumption, \(w \notin U_{i-1},\) hence \(s_i\neq 0.\) Any vector \(v \in U_i\) can be written as \(v=t_1v_1+\cdots+t_iv_i\) for scalars \(t_1,\ldots,t_i.\) We now compute \[\begin{aligned} v-\frac{t_i}{s_i}w&=\sum_{k=1}^it_kv_k-\frac{t_i}{s_i}\left(\sum_{k=1}^is_kv_k\right)=\sum_{k=1}^i\left(t_k-\frac{t_i}{s_i}s_k\right)v_k\\ &=\sum_{k=1}^{i-1}\left(t_k-\frac{t_i}{s_i}s_k\right)v_k \end{aligned}\] so that \(v-(t_i/s_i)w\) can be written as a linear combination of the vectors \(v_1,\ldots,v_{i-1},\) hence is an element of \(U_{i-1}.\) Recall that \(\mathcal{T}_{i-1}\) is a basis of \(U_{i-1},\) hence \(v-(t_i/s_i)w\) is a linear combination of elements of \(\mathcal{T}_{i-1}.\) It follows that any vector \(v \in U_i\) is a linear combination of elements of \(\mathcal{T}_{i-1}\cup \{w\},\) that is, \(\mathcal{T}_{i-1}\cup \{w\}\) generates \(U_i.\) Since \(\mathcal{T}_{i-1}\cup\{w\}\) contains finitely many vectors, it follows that \(U_i\) is finite dimensional.
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space. Then for any subspace \(U\subset V\) \[0\leqslant \dim(U)\leqslant \dim(V).\] Furthermore \(\dim(U)=0\) if and only if \(U=\{0_V\}\) and \(\dim(U)=\dim(V)\) if and only if \(V=U.\)
Let \(V,W\) be \(\mathbb{K}\)-vector spaces with \(W\) finite dimensional. The rank of a linear map \(f : V \to W\) is defined as \[\operatorname{rank}(f)=\dim \operatorname{Im}(f).\] If \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) is a matrix, then we define \[\operatorname{rank}(\mathbf{A})=\operatorname{rank}(f_\mathbf{A}).\]
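Concretely, the rank of a matrix is the dimension of its column space (a small NumPy illustration over \(\mathbb{R}\)):

```python
import numpy as np

# rank(A) = dim Im(f_A) is the dimension of the column space of A.
# Here the second row is twice the first, so the image is a line in R^2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
rank_A = int(np.linalg.matrix_rank(A))
print(rank_A)
```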
The nullity of a linear map \(f : V \to W\) is the dimension of its kernel, \(\operatorname{nullity}(f)=\dim \operatorname{Ker}(f).\) The following important theorem establishes a relation between the nullity and the rank of a linear map. It states something that is intuitively not surprising, namely that the dimension of the image of a linear map \(f : V \to W\) is the dimension of the vector space \(V\) minus the dimension of the subspace of vectors that we “lose”, that is, those that are mapped onto the zero vector of \(W.\) More precisely:
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then we have \[\dim(V)=\dim \operatorname{Ker}(f)+\dim \operatorname{Im}(f)=\operatorname{nullity}(f)+\operatorname{rank}(f).\]
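The theorem can be verified numerically for a fixed matrix (a hypothetical example over \(\mathbb{R}\); the kernel basis is read off from the singular value decomposition):

```python
import numpy as np

# f_A : R^6 -> R^4; the fourth row is the sum of the first two, so the
# first three rows determine the image: rank(A) = 3.
A = np.array([[1.0, 0.0, 0.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0, 1.0, 1.0]])

rank = int(np.linalg.matrix_rank(A))
# The right-singular vectors beyond the rank form an orthonormal basis
# of Ker(f_A); check that A really annihilates them.
Vh = np.linalg.svd(A)[2]
kernel_basis = Vh[rank:]
assert np.allclose(A @ kernel_basis.T, 0.0)

nullity = kernel_basis.shape[0]
print(rank, nullity, rank + nullity)  # rank + nullity = dim(V) = 6
```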
Proof. Set \(d=\dim\operatorname{Ker}(f)\) and choose a basis \(\{v_1,\ldots,v_d\}\) of \(\operatorname{Ker}(f).\) By the theorem above, we may extend it to a basis \(\{v_1,\ldots,v_n\}\) of \(V.\) Let \(U=\operatorname{span}\{v_{d+1},\ldots,v_n\}\) and let \(g : U \to \operatorname{Im}(f)\) be the restriction of \(f\) to \(U.\) We will show that \(g\) is bijective; granting this, \(\dim\operatorname{Im}(f)=\dim(U)=n-d=\dim(V)-\dim\operatorname{Ker}(f),\) which proves the theorem.
We first show that \(g\) is injective. Assume \(g(v)=0_W.\) Since \(v \in U,\) we can write \(v=s_{d+1}v_{d+1}+\cdots +s_nv_n\) for scalars \(s_{d+1},\ldots,s_n.\) Since \(g(v)=0_W\) we have \(v \in \operatorname{Ker}(f),\) hence we can also write \(v=s_1v_1+\cdots +s_d v_d\) for scalars \(s_1,\ldots,s_d.\) Subtracting the two expressions for \(v,\) we get \[0_V=s_1v_1+\cdots+s_d v_d-s_{d+1}v_{d+1}-\cdots-s_nv_n.\] Since \(\{v_1,\ldots,v_n\}\) is a basis, it follows that all the coefficients \(s_i\) vanish, where \(1\leqslant i \leqslant n.\) Therefore we have \(v=0_V\) and \(g\) is injective.
Second, we show that \(g\) is surjective. Suppose \(w \in \operatorname{Im}(f)\) so that \(w=f(v)\) for some vector \(v \in V.\) We write \(v=\sum_{i=1}^ns_iv_i\) for scalars \(s_1,\ldots,s_n.\) Using the linearity of \(f,\) we compute \[w=f(v)=f\left(\sum_{i=1}^ns_iv_i\right)=f\Big(\underbrace{\sum_{i=d+1}^ns_iv_i}_{=\hat{v}}\Big)=f(\hat{v})\] where \(\hat{v} \in U.\) We thus have an element \(\hat{v}\) with \(g(\hat{v})=w.\) Since \(w\) was arbitrary, we conclude that \(g\) is surjective.
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces with \(\dim(V)=\dim(W)\) and \(f : V \to W\) a linear map. Then the following statements are equivalent:
(i) \(f\) is injective;
(ii) \(f\) is surjective;
(iii) \(f\) is bijective.
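For a square matrix this equivalence is familiar (an illustration over \(\mathbb{R}\) with a hypothetical invertible matrix): full rank means surjective, and a trivial kernel means injective, and the two occur together.

```python
import numpy as np

# f_A : R^3 -> R^3 with dim(V) = dim(W) = 3; here det(A) = 3 != 0.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

surjective = bool(np.linalg.matrix_rank(A) == 3)  # Im(f_A) = R^3
x = np.linalg.solve(A, np.zeros(3))               # solve A x = 0 ...
injective = bool(np.allclose(x, 0.0))             # ... only x = 0 works
print(surjective, injective)
```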
A linear map \(f : V \to W\) is injective if and only if \(\operatorname{Ker}(f)=\{0_V\}.\)
Proof. (i) \(\Rightarrow\) (ii) Suppose \(f\) is injective, so that \(\operatorname{Ker}(f)=\{0_V\}.\) The rank–nullity theorem gives \(\dim\operatorname{Im}(f)=\dim(V)=\dim(W).\) Since \(\operatorname{Im}(f)\) is a subspace of \(W\) of the same dimension as \(W,\) we conclude \(\operatorname{Im}(f)=W,\) that is, \(f\) is surjective.
(ii) \(\Rightarrow\) (iii) Suppose \(f\) is surjective, so that \(\dim\operatorname{Im}(f)=\dim(W)=\dim(V).\) The rank–nullity theorem gives \(\dim\operatorname{Ker}(f)=0,\) hence \(\operatorname{Ker}(f)=\{0_V\}\) and \(f\) is injective. Being injective and surjective, \(f\) is bijective.
(iii) \(\Rightarrow\) (i) Since \(f\) is bijective, it is also injective.
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then \(\operatorname{rank}(f)\leqslant \min\{\dim(V),\dim(W)\}\) and \[\begin{aligned} \operatorname{rank}(f)&=\dim(V)\iff f\text{ is injective,}\\ \operatorname{rank}(f)&=\dim(W)\iff f\text{ is surjective.} \end{aligned}\]
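A quick numerical check of these criteria for a hypothetical \(2\times 3\) matrix over \(\mathbb{R}\):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])        # f_A : R^3 -> R^2

r = int(np.linalg.matrix_rank(A))
assert r <= min(3, 2)                  # rank <= min(dim V, dim W)
surjective = bool(r == 2)              # rank = dim(W): f_A is surjective
injective = bool(r == 3)               # rank < dim(V): f_A is not injective
print(r, surjective, injective)
```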
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(f : V \to W\) a linear map. Then the following hold:
If \(\dim(V)<\dim(W),\) then \(f\) is not surjective;
If \(\dim(V)>\dim(W),\) then \(f\) is not injective. In particular, there exist non-zero vectors \(v \in V\) with \(f(v)=0_W.\)
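In the second case a non-zero kernel vector always exists and can often be found by hand, as in this example over \(\mathbb{R}\):

```python
import numpy as np

# dim(V) = 3 > 2 = dim(W), so f_A : R^3 -> R^2 cannot be injective;
# v = (1, -2, 1) is a non-zero vector with f_A(v) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
v = np.array([1.0, -2.0, 1.0])

image_of_v = A @ v                     # 1 - 4 + 3 = 0 and 4 - 10 + 6 = 0
print(image_of_v)
```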
Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces. Then there exists an isomorphism \(\Theta : V \to W\) if and only if \(\dim(V)=\dim(W).\)
Let \(V_1,V_2,V_3\) be \(\mathbb{K}\)-vector spaces and \(f : V_1 \to V_2\) and \(g: V_2 \to V_3\) be linear maps. Then the composition \(g \circ f : V_1 \to V_3\) is linear. Furthermore, if \(f : V_1 \to V_2\) is bijective, then the inverse function \(f^{-1} : V_2 \to V_1\) (satisfying \(f^{-1}\circ f=\mathrm{Id}_{V_1}\) and \(f\circ f^{-1}=\mathrm{Id}_{V_2})\) is linear.
Suppose \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) is invertible with inverse \(\mathbf{A}^{-1} \in M_{n,m}(\mathbb{K}).\) Then \(n=m,\) hence \(\mathbf{A}\) is a square matrix.
Let \(\mathbf{A}\in M_{m,n}(\mathbb{K})\) and \(f_\mathbf{A}: \mathbb{K}^n \to \mathbb{K}^m\) the associated linear map. Then \(f_\mathbf{A}\) is bijective if and only if there exists a matrix \(\mathbf{B}\in M_{n,m}(\mathbb{K})\) satisfying \(\mathbf{B}\mathbf{A}=\mathbf{1}_{n}\) and \(\mathbf{A}\mathbf{B}=\mathbf{1}_{m}.\) In this case, the matrix \(\mathbf{B}\) is unique and will be denoted by \(\mathbf{A}^{-1}.\) We refer to \(\mathbf{A}^{-1}\) as the inverse of \(\mathbf{A}\) and call \(\mathbf{A}\) invertible.
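For an invertible matrix, both defining identities \(\mathbf{B}\mathbf{A}=\mathbf{1}_n\) and \(\mathbf{A}\mathbf{B}=\mathbf{1}_m\) can be verified numerically (a hypothetical \(2\times 2\) example over \(\mathbb{R}\)):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])              # det(A) = -2 != 0, so A is invertible

B = np.linalg.inv(A)                    # the unique inverse A^{-1}
ok_left = np.allclose(B @ A, np.eye(2))   # B A = 1_2
ok_right = np.allclose(A @ B, np.eye(2))  # A B = 1_2
print(ok_left, ok_right)
```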