12.2 Jordan blocks
Let \(f : V \to V\) be an endomorphism of a finite dimensional \(\mathbb{C}\)-vector space \(V\) of dimension \(n\geqslant 1\) and let \(\lambda_1,\ldots,\lambda_k\) denote the distinct eigenvalues of \(f.\) Then we have \[\mathcal{E}_f(\lambda_1)\oplus \mathcal{E}_f(\lambda_2) \oplus \cdots \oplus \mathcal{E}_f(\lambda_k)=V.\]
For \(m \in \mathbb{N}\) and \(\lambda \in \mathbb{K},\) the Jordan block \(\mathbf{J}_m(\lambda) \in M_{m,m}(\mathbb{K})\) is the matrix with every diagonal entry equal to \(\lambda,\) every entry on the superdiagonal equal to \(1\) and all other entries equal to \(0,\) that is, \(\mathbf{J}_m(\lambda)=\lambda\mathbf{1}_m+\sum_{i=1}^{m-1}\mathbf{E}_{i,i+1}.\) For example, \[\mathbf{J}_{1}(\lambda)=(\lambda), \qquad \mathbf{J}_{2}(3)=\begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix},\qquad \mathbf{J}_{3}(0)=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{pmatrix}.\]
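Such blocks are easy to assemble numerically. The following sketch uses numpy (an external tool, not part of these notes) and a hypothetical helper `jordan_block`:

```python
import numpy as np

def jordan_block(m, lam):
    """Return J_m(lam): lam on the diagonal, 1 on the superdiagonal, 0 elsewhere."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

J2 = jordan_block(2, 3.0)   # the matrix J_2(3) from the example above
J3 = jordan_block(3, 0.0)   # the matrix J_3(0) from the example above
```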
We can now state precisely what the individual matrix blocks look like:
Let \(f : V \to V\) be an endomorphism of a finite dimensional \(\mathbb{K}\)-vector space \(V\) and \(\lambda \in \mathbb{K}\) an eigenvalue of \(f.\) Then there exists an integer \(\ell \in \mathbb{N},\) integers \(m_1,\ldots,m_\ell\) and an ordered basis \(\mathbf{b}\) of \(\mathcal{E}_{f}(\lambda)\) such that \[\mathbf{M}(f|_{\mathcal{E}_f(\lambda)},\mathbf{b},\mathbf{b})=\operatorname{diag}(\mathbf{J}_{m_1}(\lambda),\mathbf{J}_{m_2}(\lambda),\ldots,\mathbf{J}_{m_\ell}(\lambda)).\]
Let \(f : V \to V\) be an endomorphism of a finite dimensional \(\mathbb{C}\)-vector space \(V\) of dimension \(n\geqslant 1.\) Then there exists an ordered basis \(\mathbf{b}\) of \(V,\) an integer \(k\geqslant 1,\) integers \(n_1,\ldots,n_k\) with \(n=n_1+n_2+\cdots+n_k\) and complex numbers \(\lambda_1,\ldots,\lambda_k\) such that \(\mathbf{M}(f,\mathbf{b},\mathbf{b})=\operatorname{diag}(\mathbf{J}_{n_1}(\lambda_1),\mathbf{J}_{n_2}(\lambda_2),\ldots,\mathbf{J}_{n_k}(\lambda_k)),\) that is, \[\mathbf{M}(f,\mathbf{b},\mathbf{b})=\begin{pmatrix} \mathbf{J}_{n_1}(\lambda_1) & & & \\ & \mathbf{J}_{n_2}(\lambda_2) & & \\ && \ddots & \\ &&& \mathbf{J}_{n_k}(\lambda_k) \end{pmatrix}.\]
The ordered basis \(\mathbf{b}\) of \(V\) provided by the Jordan normal form theorem is called a Jordan basis for \(f\).
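In practice, a Jordan basis and the block sizes can be computed symbolically. The sketch below uses sympy's `Matrix.jordan_form` (an external tool, not used in these notes), which returns \(\mathbf{P}\) and \(\mathbf{J}\) with \(\mathbf{A}=\mathbf{P}\mathbf{J}\mathbf{P}^{-1}\); the columns of \(\mathbf{P}\) then form a Jordan basis for \(f_{\mathbf{A}}\):

```python
from sympy import Matrix

# A 2x2 matrix whose only eigenvalue is 1 (algebraic multiplicity 2) but
# which is not diagonalisable; its Jordan normal form is the block J_2(1).
A = Matrix([[0, 1],
            [-1, 2]])

# jordan_form returns (P, J) with A == P * J * P**-1.
P, J = A.jordan_form()
```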
Let \(m \in \mathbb{N}\) and \(\lambda \in \mathbb{K}.\) The only eigenvalue of \(\mathbf{J}_m(\lambda)\) is \(\lambda.\) Its algebraic multiplicity is \(m\) and its geometric multiplicity is \(1.\)
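This claim can be spot-checked symbolically for a specific block, say \(\mathbf{J}_4(3)\); the sketch uses sympy (an assumption external to these notes):

```python
from sympy import Matrix, eye

m, lam = 4, 3
# Build J_m(lam): lam on the diagonal, 1 on the superdiagonal.
J = lam * eye(m) + Matrix(m, m, lambda i, j: 1 if j == i + 1 else 0)

algebraic = J.eigenvals()                    # maps eigenvalue -> algebraic multiplicity
geometric = m - (J - lam * eye(m)).rank()    # dim Ker(J - lam*1_m) by rank-nullity
```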
Let \(n \in \mathbb{N}\) and \(\mathbf{A}=(A_{ij})_{1\leqslant i,j\leqslant n} \in M_{n,n}(\mathbb{K})\) be an upper triangular matrix so that \(A_{ij}=0\) for \(i>j.\) Then \[\tag{5.6} \det(\mathbf{A})=\prod_{i=1}^n A_{ii}=A_{11}A_{22}\cdots A_{nn}.\]
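Identity (5.6) is easy to verify numerically on a random example (numpy again, an external tool):

```python
import numpy as np

rng = np.random.default_rng(12345)
A = np.triu(rng.standard_normal((5, 5)))   # upper triangular: A[i, j] = 0 for i > j

det_A = np.linalg.det(A)
diag_prod = float(np.prod(np.diag(A)))     # product of the diagonal entries
```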
The relation between generalised eigenvectors and Jordan blocks is explained by the following two lemmas:
Let \(m \in \mathbb{N}\) and \(\lambda \in \mathbb{K}.\) Then \(\vec{e}_m\) is a generalised eigenvector of rank \(m\) with eigenvalue \(\lambda\) for the endomorphism \(f_{\mathbf{J}_{m}(\lambda)} : \mathbb{K}^m \to \mathbb{K}^m.\)
Proof. We assume \(m>1\) since for \(m=1\) the statement is trivial. By definition, we need to show that \[(f_{\mathbf{J}_m(\lambda)}-\lambda\mathrm{Id}_{\mathbb{K}^m})^m(\vec{e}_m)=0_{\mathbb{K}^m}\qquad \text{and} \qquad (f_{\mathbf{J}_m(\lambda)}-\lambda\mathrm{Id}_{\mathbb{K}^m})^{m-1}(\vec{e}_m)\neq 0_{\mathbb{K}^m}.\] By definition, we have \(\mathbf{J}_m(\lambda)-\lambda\mathbf{1}_{m}=\mathbf{J}_m(0)=\sum_{i=1}^{m-1}\mathbf{E}_{i,i+1}.\) We use induction to show that for \(1\leqslant k\leqslant m-1,\) we have \[\tag{12.3} (\mathbf{J}_m(0))^k=\sum_{i=1}^{m-k}\mathbf{E}_{i,i+k}.\] For \(k=1\) the statement holds by definition, which anchors the induction.
Let \(m \in \mathbb{N}.\) For \(1\leqslant k,l,p,q \leqslant m,\) we have \[\tag{12.4} \mathbf{E}_{k,l}\mathbf{E}_{p,q}=\left\{\begin{array}{cc}\mathbf{E}_{k,q} & p=l \\ \mathbf{0}_{m,m} & p\neq l \end{array}\right.\]
For the induction step, suppose that (12.3) holds for some \(1\leqslant k\leqslant m-2.\) Using (12.4), we compute \[(\mathbf{J}_m(0))^{k+1}=\Big(\sum_{i=1}^{m-k}\mathbf{E}_{i,i+k}\Big)\Big(\sum_{j=1}^{m-1}\mathbf{E}_{j,j+1}\Big)=\sum_{i=1}^{m-k-1}\mathbf{E}_{i,i+k+1},\] since \(\mathbf{E}_{i,i+k}\mathbf{E}_{j,j+1}=\mathbf{E}_{i,i+k+1}\) if \(j=i+k\) and \(\mathbf{0}_{m,m}\) otherwise, and \(j=i+k\leqslant m-1\) forces \(i\leqslant m-k-1.\) This proves (12.3). Using the identities (12.3) and (12.4), we compute for \(1\leqslant k\leqslant m-1\) \[(\mathbf{J}_m(0))^k\vec{e}_m=\sum_{i=1}^{m-k}\mathbf{E}_{i,i+k}\vec{e}_m=\sum_{i=1}^{m-k}\delta_{i+k,m}\vec{e}_i=\vec{e}_{m-k}\] so that \[((\mathbf{J}_m(0))^{m-1}\vec{e}_m,(\mathbf{J}_m(0))^{m-2}\vec{e}_m,\ldots,\mathbf{J}_m(0)\vec{e}_m,\vec{e}_m)=(\vec{e}_1,\vec{e}_2,\ldots,\vec{e}_{m-1},\vec{e}_m).\] In particular, \((\mathbf{J}_m(0))^{m-1}\vec{e}_m=\vec{e}_1\neq 0_{\mathbb{K}^m}\) and \((\mathbf{J}_m(0))^{m}\vec{e}_m=\mathbf{J}_m(0)\vec{e}_1=0_{\mathbb{K}^m},\) which proves the lemma. Applying \(\mathbf{J}_m(\lambda)-\lambda\mathbf{1}_{m}\) repeatedly to the generalised eigenvector \(\vec{e}_m\) thus gives an ordered basis of \(\mathbb{K}^m.\) In general we have:
Let \(V\) be a \(\mathbb{K}\)-vector space and \(f : V \to V\) an endomorphism. Suppose \(v \in V\) is a generalised eigenvector of \(f\) of rank \(m\in \mathbb{N}\) with eigenvalue \(\lambda \in \mathbb{K}.\) Write \(g_{\lambda}=f-\lambda\mathrm{Id}_V\) and define \(u_i=g_{\lambda}^{m-i}(v)\) for \(1\leqslant i\leqslant m.\) Then
(i) \(\mathbf{b}=(u_1,\ldots,u_m)\) is an ordered basis of the subspace \(Z(g_{\lambda},v)=\operatorname{span}\{u_1,\ldots,u_m\}\);
(ii) the subspace \(Z(g_{\lambda},v)\) is stable under \(f\);
(iii) let \(\hat{f}\) denote the restriction of \(f\) to \(Z(g_{\lambda},v),\) then we have \(\mathbf{M}(\hat{f},\mathbf{b},\mathbf{b})=\mathbf{J}_{m}(\lambda).\)
Proof. (i) We only need to show that the vectors \(\{u_1,\ldots,u_m\}\) are linearly independent as by definition, \(\{u_1,\ldots,u_m\}\) is a generating set for \(Z(g_{\lambda},v).\) Write \(g_{\lambda}=f-\lambda\mathrm{Id}_V\) then \[(u_1,\ldots,u_m)=(g_{\lambda}^{m-1}(v),g_{\lambda}^{m-2}(v),\ldots,g_{\lambda}(v),v).\] Suppose we have scalars \(\mu_1,\ldots,\mu_m\) such that \[\tag{12.6} 0_V=\mu_1u_1+\cdots+\mu_mu_m=\mu_1g_{\lambda}^{m-1}(v)+\mu_2g_{\lambda}^{m-2}(v)+\cdots+\mu_{m-1}g_{\lambda}(v)+\mu_m v.\] Since by assumption \(g_{\lambda}^m(v)=0_V\) we have \(g_{\lambda}^{k}(v)=0_V\) for all \(k\geqslant m.\) Applying \(g_{\lambda}\) \((m-1)\)-times to (12.6) thus gives \[\mu_1g_{\lambda}^{2m-2}(v)+\mu_2g_{\lambda}^{2m-3}(v)+\cdots +\mu_{m-1}g_{\lambda}^m(v)+\mu_{m}g_{\lambda}^{m-1}(v)=\mu_{m}g_{\lambda}^{m-1}(v)=0_V.\] By assumption \(g_{\lambda}^{m-1}(v)\neq 0_V,\) hence we conclude that \(\mu_{m}=0.\) Therefore, (12.6) becomes \[\mu_1u_1+\cdots+\mu_mu_m=\mu_1g_{\lambda}^{m-1}(v)+\mu_2g_{\lambda}^{m-2}(v)+\cdots+\mu_{m-1}g_{\lambda}(v)=0_V.\] Applying \(g_{\lambda}\) \((m-2)\)-times to the previous equation we conclude that \(\mu_{m-1}=0\) as well. Continuing in this fashion it follows that \(\mu_1=\mu_2=\cdots=\mu_m=0,\) hence the vectors \(\{u_1,\ldots,u_m\}\) are linearly independent, as claimed.
(ii) Since \(\{u_1,\ldots,u_m\}\) is a basis of \(Z(g_{\lambda},v),\) it is sufficient to show that for all \(1\leqslant i\leqslant m\) the vector \(f(u_i)\) is a linear combination of \(\{u_1,\ldots,u_m\}.\) By construction, we have \[\begin{aligned} (f-\lambda\mathrm{Id}_V)(u_1)&=g_{\lambda}^m(v)=0_V,\\ (f-\lambda\mathrm{Id}_V)(u_2)&=g_{\lambda}^{m-1}(v)=u_1,\\ (f-\lambda\mathrm{Id}_V)(u_3)&=g_{\lambda}^{m-2}(v)=u_2,\\ &\vdots\\ (f-\lambda\mathrm{Id}_V)(u_m)&=g_{\lambda}(v)=u_{m-1} \end{aligned}\] Equivalently, we have \[f(u_1)=\lambda u_1,\quad f(u_2)=u_1+\lambda u_2,\quad f(u_3)=u_2+\lambda u_3,\quad \ldots \quad f(u_m)=u_{m-1}+\lambda u_m,\] which shows the claim.
(iii) In (ii) we showed that \(f(u_1)=\lambda u_1,\) hence the first column vector of \(\mathbf{M}(\hat{f},\mathbf{b},\mathbf{b})\) is \(\lambda \vec{e}_1.\) For \(2\leqslant i \leqslant m,\) we have \(f(u_i)=u_{i-1}+\lambda u_i\) and hence the \(i\)-th column vector of \(\mathbf{M}(\hat{f},\mathbf{b},\mathbf{b})\) is given by \(\vec{e}_{i-1}+\lambda \vec{e}_i.\) This shows that \(\mathbf{M}(\hat{f},\mathbf{b},\mathbf{b})=\mathbf{J}_{m}(\lambda).\)
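The lemma can be replayed on a concrete example (a made-up \(3\times 3\) case, checked with sympy, which is not part of these notes): conjugate \(\mathbf{J}_3(2)\) by an invertible matrix \(\mathbf{Q},\) take \(v\) to be the image of \(\vec{e}_3,\) and verify that the chain \(u_i=g_{\lambda}^{m-i}(v)\) recovers the block:

```python
from sympy import Matrix, eye

lam, m = 2, 3
J = lam * eye(m) + Matrix(m, m, lambda i, j: 1 if j == i + 1 else 0)  # J_3(2)
Q = Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])  # some invertible change of basis
A = Q * J * Q.inv()                            # the endomorphism f in another basis

v = Q[:, 2]              # a generalised eigenvector of rank 3 with eigenvalue 2
g = A - lam * eye(m)     # g_lambda = f - lambda * Id_V

u = [g**(m - i) * v for i in range(1, m + 1)]  # u_1, ..., u_m (stored 0-indexed)
B = Matrix.hstack(*u)    # matrix whose columns are the ordered basis b
```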
12.3 Nilpotent endomorphisms
An endomorphism \(g : V \to V\) of a \(\mathbb{K}\)-vector space \(V\) is called nilpotent if there exists an integer \(m \in \mathbb{N}\) such that \(g^m=o,\) where \(o : V \to V\) denotes the zero endomorphism defined by the rule \(o(v)=0_V\) for all \(v \in V.\) A matrix \(\mathbf{A}\in M_{n,n}(\mathbb{K})\) is called nilpotent if \(f_\mathbf{A}:\mathbb{K}^n \to \mathbb{K}^n\) is nilpotent.
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(\lambda \in \mathbb{K}\) an eigenvalue of the endomorphism \(f : V \to V.\) Then the restriction \(g=(f-\lambda\mathrm{Id}_V)|_{\mathcal{E}_{f}(\lambda)}\) of \(f-\lambda\mathrm{Id}_V\) to the generalised eigenspace \(\mathcal{E}_f(\lambda)\) is a nilpotent endomorphism.
Proof. There exists an integer \(m \in \mathbb{N}\) such that \(\mathcal{E}_f(\lambda)=\operatorname{Ker}((f-\lambda\mathrm{Id}_V)^m).\) Therefore, for all \(v \in \mathcal{E}_f(\lambda)\) we have \((f-\lambda\mathrm{Id}_V)^m(v)=0_V\) which shows that \(g^m=o,\) as claimed.
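A minimal sketch of this lemma (sympy, with a made-up \(3\times 3\) matrix): for the eigenvalue \(\lambda=2\) of the matrix below, the generalised eigenspace is \(\operatorname{span}\{\vec{e}_1,\vec{e}_2\},\) and the restriction of \(f-2\,\mathrm{Id}\) to it is \(\mathbf{J}_2(0),\) visibly nilpotent:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
lam = 2
G = A - lam * eye(3)

# Ker(G^3) = span(e_1, e_2) is the generalised eigenspace for lambda = 2;
# G maps it to itself, and in the basis (e_1, e_2) the restriction is the
# top-left 2x2 block of G.
restriction = G[:2, :2]
```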
For nilpotent endomorphisms, we can always find a natural ordered basis of \(V\):
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(g : V \to V\) a nilpotent endomorphism. Then there exists an integer \(\ell \in \mathbb{N},\) integers \(m_1,\ldots,m_{\ell} \in \mathbb{N}\) and vectors \(v_1,\ldots,v_{\ell} \in V\) such that \[\begin{gathered} \mathbf{b}=(g^{m_1-1}(v_1),g^{m_1-2}(v_1),\ldots,g(v_1),v_1,g^{m_2-1}(v_2),g^{m_2-2}(v_2),\ldots,g(v_2),v_2,\ldots\\ \ldots,g^{m_{\ell}-1}(v_\ell),g^{m_{\ell}-2}(v_\ell),\ldots,g(v_\ell),v_\ell) \end{gathered}\] is an ordered basis of \(V\) and such that \(g^{m_1}(v_1)=g^{m_2}(v_2)=\cdots=g^{m_\ell}(v_{\ell})=0_V.\) In particular, we have \[\mathbf{M}(g,\mathbf{b},\mathbf{b})=\operatorname{diag}(\mathbf{J}_{m_1}(0),\mathbf{J}_{m_2}(0),\ldots,\mathbf{J}_{m_{\ell}}(0)).\]
Proof. We use induction on the dimension of the vector space \(V.\) For \(\dim V=1\) the only nilpotent endomorphism is the zero endomorphism \(o : V \to V\) and we can take \(\ell=1,\) \(m_1=1\) and \(\mathbf{b}=(v)\) for any non-zero vector \(v \in V.\) The statement is thus anchored.
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(g : V \to V\) an endomorphism. Then the following statements are equivalent:
\(g\) is injective;
\(g\) is surjective;
\(g\) is bijective;
\(\det(g) \neq 0.\)
Suppose now that \(\dim V=n>1\) and that the statement holds for all nilpotent endomorphisms of \(\mathbb{K}\)-vector spaces of dimension strictly less than \(n.\) Since \(g\) is nilpotent, we have \(\det(g)^m=\det(g^m)=\det(o)=0\) and hence \(\det(g)=0,\) so by the equivalence recalled above \(g\) is not surjective and the subspace \(U=\operatorname{Im}(g)\) satisfies \(\dim U<\dim V.\) If \(U=\{0_V\},\) then \(g=o\) and any ordered basis of \(V\) works with \(m_1=\cdots=m_{\ell}=1,\) so we may assume \(U\neq \{0_V\}.\) Clearly \(U\) is stable under \(g,\) and the restriction \(h=g|_U : U \to U\) is again nilpotent. By the induction hypothesis there exist an integer \(k \in \mathbb{N},\) integers \(n_1,\ldots,n_k \in \mathbb{N}\) and vectors \(u_1,\ldots,u_k \in U\) such that \[\mathbf{c}=(h^{n_1-1}(u_1),\ldots,h(u_1),u_1,\ldots,h^{n_k-1}(u_k),\ldots,h(u_k),u_k)\] is an ordered basis of \(U\) and such that \(h^{n_1}(u_1)=\cdots=h^{n_k}(u_k)=0_V.\) Since \(u_1,\ldots,u_k \in U=\operatorname{Im}(g),\) there exist vectors \(v_1,\ldots,v_k\) such that \(u_i=g(v_i)\) for all \(1\leqslant i\leqslant k.\) Set \(m_i=n_i+1\) for \(1\leqslant i\leqslant k\) and consider the set \[\begin{gathered} S=\{g^{m_1-1}(v_1),g^{m_1-2}(v_1),\ldots,g(v_1),v_1,g^{m_2-1}(v_2),g^{m_2-2}(v_2),\ldots,g(v_2),v_2,\ldots\\ \ldots,g^{m_{k}-1}(v_k),g^{m_{k}-2}(v_k),\ldots,g(v_k),v_k\}. \end{gathered}\] We claim \(S\) is linearly independent. Suppose we can find a linear combination \(w\) of the elements of \(S\) that gives the zero vector. Applying \(g\) to this linear combination, we obtain a linear combination of the elements of \[\begin{gathered} \{g^{m_1}(v_1),g^{m_1-1}(v_1),\ldots,g^2(v_1),g(v_1),g^{m_2}(v_2),g^{m_2-1}(v_2),\ldots,g^2(v_2),g(v_2),\ldots\\ \ldots,g^{m_{k}}(v_k),g^{m_{k}-1}(v_k),\ldots,g^2(v_k),g(v_k)\} \end{gathered}\] that gives the zero vector. Equivalently, we obtain a linear combination of the elements of \[\begin{gathered} \{g^{m_1-1}(u_1),g^{m_1-2}(u_1),\ldots,g(u_1),u_1,g^{m_2-1}(u_2),g^{m_2-2}(u_2),\ldots,g(u_2),u_2,\ldots\\ \ldots,g^{m_{k}-1}(u_k),g^{m_{k}-2}(u_k),\ldots,g(u_k),u_k\} \end{gathered}\] that gives the zero vector. Equivalently, we obtain a linear combination of the elements of \[\begin{gathered} \{h^{n_1}(u_1),h^{n_1-1}(u_1),\ldots,h(u_1),u_1,h^{n_2}(u_2),h^{n_2-1}(u_2),\ldots,h(u_2),u_2,\ldots\\ \ldots,h^{n_{k}}(u_k),h^{n_{k}-1}(u_k),\ldots,h(u_k),u_k\} \end{gathered}\] that gives the zero vector. 
Here we use that \(m_i=n_i+1\) for \(1\leqslant i\leqslant k\) and that \(h=g\) on \(\operatorname{Im}(g).\) The tuple \(\mathbf{c}\) is an ordered basis of \(U,\) hence all the coefficients in this linear combination must vanish, except possibly the coefficients in front of the vectors \(h^{n_i}(u_i),\) since \(h^{n_i}(u_i)=0_V\) for all \(1\leqslant i\leqslant k.\) The initial linear combination \(w\) thus simplifies to \[\mu_1g^{m_1-1}(v_1)+\mu_2g^{m_2-1}(v_2)+\cdots+\mu_{k}g^{m_k-1}(v_k)=0_V\] for some scalars \(\mu_1,\ldots,\mu_k.\) It remains to argue that these scalars are all zero. Since \(u_i=g(v_i),\) the previous equation is equivalent to \[\mu_1 h^{n_1-1}(u_1)+\mu_2h^{n_2-1}(u_2)+\cdots+\mu_kh^{n_k-1}(u_k)=0_V.\] Using the linear independence of the elements of \(\mathbf{c}\) again, we conclude that \(\mu_1=\cdots=\mu_k=0,\) as desired.
Observe that by construction, the vectors \(v_1,\ldots,v_k\) satisfy \(g^{m_1}(v_1)=g^{m_2}(v_2)=\cdots=g^{m_k}(v_k)=0_V.\)
Let \(V\) be a \(\mathbb{K}\)-vector space.
Any subset \(\mathcal{S}\subset V\) generating \(V\) admits a subset \(\mathcal{T}\subset \mathcal{S}\) that is a basis of \(V.\)
Any subset \(\mathcal{S}\subset V\) that is linearly independent in \(V\) is contained in a subset \(\mathcal{T}\subset V\) that is a basis of \(V.\)
If \(\mathcal{S}_1,\mathcal{S}_2\) are bases of \(V,\) then there exists a bijective map \(f : \mathcal{S}_1 \to \mathcal{S}_2.\)
If \(V\) is finite dimensional, then any basis of \(V\) is a finite set and the number of elements in the basis is independent of the choice of the basis.
It remains to extend \(S\) to an ordered basis of \(V.\) The vectors \(g^{m_1-1}(v_1),\ldots,g^{m_k-1}(v_k)\) belong to \(\operatorname{Ker}(g)\) and are linearly independent, so by the basis extension theorem we can find vectors \(v_{k+1},\ldots,v_{\ell} \in \operatorname{Ker}(g)\) such that \[(g^{m_1-1}(v_1),\ldots,g^{m_k-1}(v_k),v_{k+1},\ldots,v_{\ell})\] is an ordered basis of \(\operatorname{Ker}(g).\) Setting \(m_{k+1}=\cdots=m_{\ell}=1,\) the resulting tuple \(\mathbf{b}\) consists of \(\sum_{i=1}^{\ell}m_i=(\dim U+k)+(\ell-k)=\dim\operatorname{Im}(g)+\dim\operatorname{Ker}(g)=\dim V\) vectors, and a linear independence argument as before shows that \(\mathbf{b}\) is an ordered basis of \(V.\) Finally, the first \(m_1\) vectors of \(\mathbf{b}\) are \(y_i=g^{m_1-i}(v_1)\) for \(1\leqslant i\leqslant m_1\) and we have \(g(y_1)=0_V\) and \(g(y_i)=y_{i-1}\) for \(2\leqslant i\leqslant m_1.\) This contributes the Jordan block \(\mathbf{J}_{m_1}(0)\) to the matrix representation of \(g\) with respect to \(\mathbf{b}.\) The remaining blocks arise by considering the vectors \(g^{m_j-i}(v_j)\) for \(2\leqslant j\leqslant \ell\) and where \(1\leqslant i\leqslant m_j.\)
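The theorem can be observed computationally. The sketch below feeds a nilpotent matrix to sympy's `Matrix.jordan_form` (an external tool, not used in these notes) and recovers a single block \(\mathbf{J}_3(0)\):

```python
from sympy import Matrix

# g is nilpotent: N is strictly upper triangular, so N^3 = 0.
N = Matrix([[0, 1, 1],
            [0, 0, 1],
            [0, 0, 0]])

# N = P * J * P**-1; the columns of P realise the ordered basis b.
P, J = N.jordan_form()
```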
As an application, we obtain:
Let \(f : V \to V\) be an endomorphism of a finite dimensional \(\mathbb{K}\)-vector space \(V\) and \(\lambda \in \mathbb{K}\) an eigenvalue of \(f.\) Then there exists an integer \(\ell \in \mathbb{N},\) integers \(m_1,\ldots,m_\ell\) and an ordered basis \(\mathbf{b}\) of \(\mathcal{E}_{f}(\lambda)\) such that \[\mathbf{M}(f|_{\mathcal{E}_f(\lambda)},\mathbf{b},\mathbf{b})=\operatorname{diag}(\mathbf{J}_{m_1}(\lambda),\mathbf{J}_{m_2}(\lambda),\ldots,\mathbf{J}_{m_\ell}(\lambda)).\] Indeed, by the preceding lemma the restriction \(g=(f-\lambda\mathrm{Id}_V)|_{\mathcal{E}_f(\lambda)}\) is nilpotent, so the theorem above yields an ordered basis \(\mathbf{b}\) of \(\mathcal{E}_f(\lambda)\) with \(\mathbf{M}(g,\mathbf{b},\mathbf{b})=\operatorname{diag}(\mathbf{J}_{m_1}(0),\ldots,\mathbf{J}_{m_\ell}(0)),\) and since \(f|_{\mathcal{E}_f(\lambda)}=g+\lambda\mathrm{Id}_{\mathcal{E}_f(\lambda)},\) adding \(\lambda\mathbf{1}\) to each block gives the claim.
Exercises
Show that \(\mathbf{A}\in M_{n,n}(\mathbb{K})\) is nilpotent if and only if there exists an integer \(m \in \mathbb{N}\) such that \(\mathbf{A}^m=\mathbf{0}_n.\)
Solution
If \(f_\mathbf{A}\) is nilpotent, then there is \(m\in\mathbb{N}\) such that \[o=\underbrace{f_\mathbf{A}\circ\cdots\circ f_\mathbf{A}}_{m-\text{times}} = f_{\mathbf{A}^m}\] and hence \(\mathbf{A}^m = \mathbf{0}_n.\) Conversely, if \(\mathbf{A}^m=\mathbf{0}_n\) for some \(m\in\mathbb{N},\) then \[\underbrace{f_\mathbf{A}\circ\cdots\circ f_\mathbf{A}}_{m-\text{times}}=f_{\mathbf{A}^m}=f_{\mathbf{0}_n} = o.\]
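A quick numerical illustration of the criterion (numpy, external to the notes): a strictly upper triangular matrix \(\mathbf{A} \in M_{4,4}(\mathbb{R})\) satisfies \(\mathbf{A}^4=\mathbf{0}_4\) while \(\mathbf{A}^3\neq\mathbf{0}_4,\) so \(f_{\mathbf{A}}\) is nilpotent:

```python
import numpy as np

n = 4
# Strictly upper triangular: all ones above the diagonal, zeros elsewhere.
A = np.triu(np.ones((n, n)), k=1)

# A^1, A^2, ..., A^n; the last power is the zero matrix.
powers = [np.linalg.matrix_power(A, k) for k in range(1, n + 1)]
```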