4.7 Global version of the Gauss–Bonnet Theorem
It is natural to wonder whether the Gauss–Bonnet Theorem considered so far has any implications for the total Gauss curvature of an embedded surface \(M\subset \mathbb{R}^3.\) This is indeed the case. If \(M\) is compact (recall that this is equivalent to \(M\subset \mathbb{R}^3\) being a closed and bounded subset), the total Gauss curvature is always an integer multiple of \(2\pi.\) The integer is called the Euler characteristic of \(M\) and can be computed in terms of a so-called triangulation of \(M\). A subset \(\Delta\) of a surface \(M\) is called a triangle if \(\Delta\) is the closed region bounded by the image of a simple closed, piecewise smooth curve with \(3\) corners. A triangulation of a surface \(M\) is a finite set \(\mathcal{T}=\left\{\Delta_i\subset M\,|\, 1\leqslant i\leqslant N\right\}\) of triangles on \(M\) so that
\(\bigcup_{i=1}^N \Delta_i=M\);
if for a pair of indices \(i\neq j\) we have \(\Delta_i\cap \Delta_j\neq \emptyset,\) then \(\Delta_i\cap \Delta_j\) is either a single common edge or a single common vertex.
For a given triangulation \(\mathcal{T}\) of \(M\) we call \[\chi=V-E+F\] the Euler characteristic of the triangulation \(\mathcal{T}.\) Here \(F=N\) denotes the number of faces (i.e. triangles) of \(\mathcal{T}.\) The number \(V\) denotes the number of vertices and \(E\) the number of edges of \(\mathcal{T}.\)
An important theorem from topology states that every compact embedded surface \(M\subset \mathbb{R}^3\) admits a triangulation \(\mathcal{T}\) and that, moreover, \(\chi=\chi(M)\) is independent of the choice of \(\mathcal{T}.\) Furthermore, one can show that \(\chi(M)\) is related to the number \(g\) of holes of the surface (its genus, roughly speaking) via the relation \[\chi(M)=2-2g.\]
For the \(2\)-sphere \(S^2\subset \mathbb{R}^3\) we obtain a triangulation \(\mathcal{T}\) in terms of its octants, and for this triangulation we have \[\chi(S^2)=V-E+F=6-12+8=2.\] A sphere has no holes, hence \(\chi(S^2)=2-2\cdot 0=2,\) in agreement with the value obtained from the triangulation.
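The count for the octant triangulation can also be checked mechanically. Below is a minimal sketch (the vertex labels and the face list are our own encoding of the octant triangulation, not notation from these notes): each triangle is recorded by its three vertex labels, and the edges are the vertex pairs occurring in some triangle.

```python
# Minimal sketch: compute V, E, F and chi = V - E + F from a list of triangles,
# each triangle given by its three vertex labels (octant triangulation of S^2).
from itertools import combinations

triangles = [
    ("N", "+x", "+y"), ("N", "+y", "-x"), ("N", "-x", "-y"), ("N", "-y", "+x"),
    ("S", "+x", "+y"), ("S", "+y", "-x"), ("S", "-x", "-y"), ("S", "-y", "+x"),
]

vertices = {v for t in triangles for v in t}
edges = {frozenset(pair) for t in triangles for pair in combinations(t, 2)}

V, E, F = len(vertices), len(edges), len(triangles)
print(V, E, F, V - E + F)  # 6 12 8 2
```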
The torus \(T\subset \mathbb{R}^3\) has \(1\) hole, hence \[\chi(T)=2-2\cdot 1=0.\]
Considering surfaces with more than one hole we obtain surfaces whose Euler characteristic is negative.
We can now state the global version of the Gauss–Bonnet Theorem:
Let \(M\subset \mathbb{R}^3\) be a compact embedded surface with Gauss curvature \(K.\) Then \[\frac{1}{2\pi}\int_M K\, \mathrm{d}A=\chi(M).\]
The key ingredient in the proof is the local version of the Gauss–Bonnet Theorem established earlier, which we recall here: Let \(\gamma : [0,L] \to M\) be a simple closed unit speed curve of length \(L\) which is piecewise smooth with exterior angles \(\vartheta_1,\ldots,\vartheta_k\) at the corners \(p_1,\ldots,p_k\) of \(\gamma\) and whose image is contained in \(F(U)\) for some local parametrisation \(F : U \to M.\) Let \(D\) denote the region enclosed by \(\gamma\) and assume that \(J\dot{\gamma}(t)\) points into the interior of \(D\) for all \(t \in [0,L]\) with the exception of the corner points. Then \[\int_0^L k_g(t)\mathrm{d}t+\sum_{i=1}^k \vartheta_i=2\pi-\int_D K d A,\] where \(k_g\) denotes the geodesic curvature of \(\gamma\) and \(K\) the Gauss curvature of \(M.\)
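As a quick numerical illustration of the global statement, here is a minimal sketch (assumptions: the unit sphere, where \(K\equiv 1\) and \(\mathrm{d}A=\sin v\,\mathrm{d}u\,\mathrm{d}v,\) and a torus of revolution, where the standard formulas give \(K\,\mathrm{d}A=\cos v\,\mathrm{d}u\,\mathrm{d}v\) independently of the radii). The total curvature divided by \(2\pi\) should come out as \(\chi(S^2)=2\) and \(\chi(T)=0.\)

```python
# Minimal numerical sketch: total Gauss curvature of the unit sphere and of a
# torus of revolution, divided by 2*pi, compared with the Euler characteristic.
import numpy as np

n = 2000

# Unit sphere: K = 1, dA = sin(v) du dv, (u, v) in [0, 2*pi] x [0, pi].
v = (np.arange(n) + 0.5) * np.pi / n            # midpoint sample points in v
sphere = np.sum(np.sin(v)) * (np.pi / n) * (2 * np.pi)

# Torus of revolution: K dA = cos(v) du dv, (u, v) in [0, 2*pi]^2
# (the radii R > r > 0 cancel out of K dA).
w = (np.arange(n) + 0.5) * 2 * np.pi / n
torus = np.sum(np.cos(w)) * (2 * np.pi / n) * (2 * np.pi)

print(sphere / (2 * np.pi))  # ~ 2.0 = chi(S^2)
print(torus / (2 * np.pi))   # ~ 0.0 = chi(T)
```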
5 Further topics
In this chapter we provide an outlook to some further topics in differential geometry which are typically studied in depth in a master course. The content of this chapter is not examinable.
5.1 Differential forms
The proof of Gauss’ Theorema Egregium can be simplified by using so-called differential forms. We start with a brief introduction to differential forms.
Recall that a vector field associates to every point \(p\) of its domain of definition a tangent vector \(X(p)\) in the corresponding tangent space. Closely related is the notion of a \(1\)-form:
Let \(\mathcal{X}\subset \mathbb{R}^n\) be a subset. A \(1\)-form \(\alpha\) on \(\mathcal{X}\) is a map \(\alpha : \mathcal{X} \to T^*\mathbb{R}^n\) so that \(\alpha|_p:=\alpha(p) \in T_p^*\mathbb{R}^n\) for all \(p \in \mathcal{X}.\) We write \[\alpha|_p=\alpha_1(p)\mathrm{d}x_1|_{p}+\cdots+\alpha_n(p) \mathrm{d}x_n|_{p}\] for functions \(\alpha_i : \mathcal{X} \to \mathbb{R},\) where \(1\leqslant i\leqslant n,\) and call \(\alpha\) smooth if the functions \(\alpha_i\) are smooth for all \(1\leqslant i\leqslant n.\)
Let \(f : \mathcal{X} \to \mathbb{R}\) be a smooth function, then its exterior derivative \(\mathrm{d}f\) is a smooth \(1\)-form on \(\mathcal{X}.\)
\(1\)-forms can be added and multiplied with functions in the obvious way. If \(\alpha,\beta\) are \(1\)-forms on \(\mathcal{X}\) and \(f : \mathcal{X} \to \mathbb{R}\) a function, then we define \[\begin{aligned} (\alpha+\beta)(\vec{v}_p)&=\alpha(\vec{v}_p)+\beta(\vec{v}_p),\\ (f\alpha)(\vec{v}_p)&=(\alpha f)(\vec{v}_p)=f(p)\alpha(\vec{v}_p) \end{aligned}\] for all \(p \in \mathcal{X}\) and \(\vec{v}_p \in T_p\mathbb{R}^n.\)
Given \(1\)-forms \(\alpha,\beta : \mathcal{X} \to T^*\mathbb{R}^n,\) we can define both a symmetric and an alternating bilinear form on each tangent space \(T_p\mathbb{R}^n\) for \(p \in \mathcal{X}.\) For all \(p \in \mathcal{X}\) and \(\vec{v}_p,\vec{w}_p \in T_p\mathbb{R}^n,\) we define \((\alpha\beta)|_p : T_p\mathbb{R}^n \times T_p\mathbb{R}^n \to \mathbb{R}\) by the rule \[(\alpha\beta)|_p(\vec{v}_p,\vec{w}_p)=\frac{1}{2}\left(\alpha(\vec{v}_p)\beta(\vec{w}_p)+\beta(\vec{v}_p)\alpha(\vec{w}_p)\right).\] Notice that for all \(p \in \mathcal{X}\) the map \((\alpha\beta)|_p\) is a symmetric bilinear form on \(T_p\mathbb{R}^n.\) Similarly, we can define an alternating bilinear form \((\alpha\wedge\beta)|_p : T_p\mathbb{R}^n \times T_p\mathbb{R}^n \to \mathbb{R}\) by the rule \[(\alpha\wedge\beta)|_p(\vec{v}_p,\vec{w}_p)=\alpha(\vec{v}_p)\beta(\vec{w}_p)-\beta(\vec{v}_p)\alpha(\vec{w}_p).\]
We call \(\alpha\wedge\beta\) the wedge product of the two \(1\)-forms \(\alpha,\beta.\)
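As a quick illustration of the definition, consider the coordinate \(1\)-forms \(\mathrm{d}x_1,\mathrm{d}x_2\) on \(\mathbb{R}^n\) and tangent vectors \(\vec{v}_p,\vec{w}_p \in T_p\mathbb{R}^n\) with components \(v_i,w_i.\) The definition gives \[(\mathrm{d}x_1\wedge\mathrm{d}x_2)|_p(\vec{v}_p,\vec{w}_p)=\mathrm{d}x_1(\vec{v}_p)\mathrm{d}x_2(\vec{w}_p)-\mathrm{d}x_2(\vec{v}_p)\mathrm{d}x_1(\vec{w}_p)=v_1w_2-v_2w_1,\] the \(2\times 2\) determinant formed by the first two components of \(\vec{v}_p\) and \(\vec{w}_p;\) in particular the value changes sign when \(\vec{v}_p\) and \(\vec{w}_p\) are interchanged.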
Let \(\alpha,\beta,\xi\) be \(1\)-forms on \(\mathcal{X}\) and \(f,h : \mathcal{X} \to \mathbb{R}\) functions. Show that
\(\alpha\wedge\beta=-\beta\wedge\alpha\) so that \(\alpha\wedge\alpha=0\);
\((\alpha+\beta)\wedge \xi=\alpha\wedge\xi+\beta\wedge\xi\);
\((f\alpha)\wedge\beta=\alpha\wedge(f\beta)=f(\alpha\wedge\beta).\)
For an \(\mathbb{R}\)-vector space \(V\) we write \(\Lambda^2(V^*)\) for the (vector space of) alternating bilinear forms on \(V\) and \[\Lambda^2(T^*\mathbb{R}^n):=\bigcup_{p \in \mathbb{R}^n} \Lambda^2(T^*_p\mathbb{R}^n).\] Likewise, we write \(S^2(V^*)\) for the symmetric bilinear forms on \(V\) and \[S^2(T^*\mathbb{R}^n):=\bigcup_{p \in \mathbb{R}^n} S^2(T^*_p\mathbb{R}^n).\]
Let \(\mathcal{X}\subset \mathbb{R}^n\) be a subset. A \(2\)-form on \(\mathcal{X}\) is a map \[\xi : \mathcal{X} \to \Lambda^2(T^*\mathbb{R}^n)\] so that \(\xi|_p:=\xi(p) \in \Lambda^2(T^*_p\mathbb{R}^n)\) for all \(p \in \mathcal{X}.\)
A \(2\)-form thus assigns to each point \(p \in \mathcal{X}\) an alternating bilinear map on \(T_p\mathbb{R}^n.\) The wedge product \(\alpha\wedge\beta\) of two \(1\)-forms \(\alpha,\beta\) is a \(2\)-form. Moreover, we can turn every smooth \(1\)-form into a \(2\)-form by taking the exterior derivative:
Let \(\alpha\) be a smooth \(1\)-form on \(\mathcal{X}\subset \mathbb{R}^n\) so that \(\alpha=\sum_{i=1}^n\alpha_i\mathrm{d}x_i\) for smooth functions \(\alpha_i : \mathcal{X} \to \mathbb{R},\) where \(1\leqslant i\leqslant n.\) The exterior derivative \(\mathrm{d}\alpha\) of \(\alpha\) is the \(2\)-form defined as \[\mathrm{d}\alpha|_p=\sum_{j=1}^n\sum_{i=1}^n \frac{\partial \alpha_i}{\partial x_j}(p)(\mathrm{d}x_j\wedge \mathrm{d}x_i)|_p.\]
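For example, on \(\mathbb{R}^2\) with coordinates \(x_1,x_2\) the \(1\)-form \(\alpha=-x_2\,\mathrm{d}x_1+x_1\,\mathrm{d}x_2\) has exterior derivative \[\mathrm{d}\alpha=\frac{\partial(-x_2)}{\partial x_2}\,\mathrm{d}x_2\wedge\mathrm{d}x_1+\frac{\partial x_1}{\partial x_1}\,\mathrm{d}x_1\wedge\mathrm{d}x_2=-\mathrm{d}x_2\wedge\mathrm{d}x_1+\mathrm{d}x_1\wedge\mathrm{d}x_2=2\,\mathrm{d}x_1\wedge\mathrm{d}x_2,\] the remaining two terms in the double sum vanishing since the corresponding partial derivatives are zero.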
Recall that the second partial derivatives of a twice continuously differentiable function commute. This has the important consequence that \(\mathrm{d}^2=0,\) that is:
Let \(f : \mathcal{X} \to \mathbb{R}\) be a smooth function. Then \[\mathrm{d}^2f:=\mathrm{d}(\mathrm{d}f)=0.\]
Proof. By definition we have \[\mathrm{d}f|_p=\sum_{i=1}^n \frac{\partial f}{\partial x_i}(p)\mathrm{d}x_i|_p\] and hence \[\mathrm{d}(\mathrm{d}f)|_p=\sum_{j=1}^n\sum_{i=1}^n \frac{\partial^2 f}{\partial x_j\partial x_i}(p)(\mathrm{d}x_j\wedge\mathrm{d}x_i)|_p.\] Since \(f\) is twice continuously differentiable, it follows that \[\frac{\partial^2 f}{\partial x_j\partial x_i}(p)=\frac{\partial^2 f}{\partial x_i\partial x_j}(p).\] Since moreover \((\mathrm{d}x_i\wedge\mathrm{d}x_j)|_p=-(\mathrm{d}x_j\wedge \mathrm{d}x_i)|_p,\) the terms for the index pairs \((i,j)\) and \((j,i)\) cancel, while the terms with \(i=j\) vanish because \(\mathrm{d}x_i\wedge\mathrm{d}x_i=0.\) We conclude that \(\mathrm{d}(\mathrm{d}f)=0.\)
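Concretely, for \(f(x_1,x_2)=x_1x_2\) on \(\mathbb{R}^2\) we have \(\mathrm{d}f=x_2\,\mathrm{d}x_1+x_1\,\mathrm{d}x_2\) and hence \[\mathrm{d}(\mathrm{d}f)=\mathrm{d}x_2\wedge\mathrm{d}x_1+\mathrm{d}x_1\wedge\mathrm{d}x_2=0,\] the two mixed terms cancelling exactly as in the proof.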
We also have:
For a smooth \(1\)-form \(\alpha\) on \(\mathcal{X}\) and a smooth function \(f : \mathcal{X} \to \mathbb{R}\) we have \[\mathrm{d}\left(f \alpha\right)=\mathrm{d}f \wedge \alpha+f \mathrm{d}\alpha.\]
Proof. Writing \(\alpha=\sum_{i=1}^n\alpha_i \mathrm{d}x_i\) for smooth functions \(\alpha_i : \mathcal{X} \to \mathbb{R},\) we have \[\begin{aligned} \mathrm{d}(f\alpha)&=\mathrm{d}\left(\sum_{i=1}^n f \alpha_i \mathrm{d}x_i\right)=\sum_{j=1}^n\sum_{i=1}^n \frac{\partial (f\alpha_i)}{\partial x_j}\mathrm{d}x_j\wedge \mathrm{d}x_i\\ &=\sum_{j=1}^n\sum_{i=1}^n\left(\frac{\partial f}{\partial x_j}\alpha_i+f\frac{\partial \alpha_i}{\partial x_j}\right)\mathrm{d}x_j\wedge \mathrm{d}x_i\\ &=\left(\sum_{j=1}^n\frac{\partial f}{\partial x_j}\mathrm{d}x_j\right)\wedge \left(\sum_{i=1}^n\alpha_i\mathrm{d}x_i\right)+f\sum_{j=1}^n\sum_{i=1}^n\frac{\partial \alpha_i}{\partial x_j}\mathrm{d}x_j\wedge \mathrm{d}x_i\\ &=\mathrm{d}f \wedge \alpha+f \mathrm{d}\alpha, \end{aligned}\] as claimed.
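For instance, with \(f=x_1^2\) and \(\alpha=x_1\,\mathrm{d}x_2\) on \(\mathbb{R}^2\) we have \(f\alpha=x_1^3\,\mathrm{d}x_2\) and hence \(\mathrm{d}(f\alpha)=3x_1^2\,\mathrm{d}x_1\wedge\mathrm{d}x_2,\) while \[\mathrm{d}f\wedge\alpha+f\,\mathrm{d}\alpha=2x_1\,\mathrm{d}x_1\wedge(x_1\,\mathrm{d}x_2)+x_1^2\,\mathrm{d}x_1\wedge\mathrm{d}x_2=3x_1^2\,\mathrm{d}x_1\wedge\mathrm{d}x_2,\] in agreement with the lemma.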
Whenever we have a smooth map \(f : \mathcal{X} \to M_{n,1}(\mathbb{R})\) we write \(\mathrm{d}f\) for the \(1\)-form with values in \(M_{n,1}(\mathbb{R})\) defined by the rule \[\mathrm{d}f(\vec{v}_p)=\begin{pmatrix} \mathrm{d}f_1(\vec{v}_p) \\ \vdots \\ \mathrm{d}f_n(\vec{v}_p)\end{pmatrix}\qquad \text{where} \qquad f=\begin{pmatrix} f_1 \\ \vdots \\ f_n \end{pmatrix}\] for smooth functions \(f_i : \mathcal{X} \to \mathbb{R},\) \(1\leqslant i\leqslant n,\) and where \(\vec{v}_p \in T_p\mathbb{R}^n\) for \(p \in \mathcal{X}.\)
If \(\alpha\) is a vector-valued \(1\)-form on \(\mathcal{X}\) so that \[\alpha=\begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n\end{pmatrix}\] for \(1\)-forms \(\alpha_i\) on \(\mathcal{X},\) \(1\leqslant i\leqslant n,\) then we write \[\alpha\cdot f=f\cdot \alpha=f_1\alpha_1+\cdots+f_n\alpha_n.\]
If \(\beta\) is a \(1\)-form on \(\mathcal{X},\) then we write \[\mathrm{d}f \wedge \beta=\begin{pmatrix} \mathrm{d}f_1 \wedge \beta \\ \vdots \\ \mathrm{d}f_n\wedge \beta\end{pmatrix}\]
5.2 The Theorema Egregium revisited
We will next give a proof of the Theorema Egregium using differential forms.
Recall that \(\{(\vec{e}_1)_p,(\vec{e}_2)_p,(\vec{e}_3)_p\}\) denotes the standard basis on \(T_p\mathbb{R}^3\) for each \(p \in \mathbb{R}^3.\) In the presence of a surface \(M\subset \mathbb{R}^3\) it is useful to modify this basis so that for \(p \in M\) the vectors \(\{(\vec{e}_1)_p,(\vec{e}_2)_p\}\) are an orthonormal basis of \(T_pM\) and so that \((\vec{e}_3)_p\) spans the normal space \(T_pM^{\perp}.\) We will call such a basis adapted to \(T_pM\). Unfortunately it is not always possible to find an adapted basis for each tangent space of \(M\) which varies continuously over the whole surface. A theorem which goes beyond the content of this course – sometimes called the Hairy ball theorem – states that on the \(2\)-sphere every continuous vector field must attain the zero tangent vector at some point. This implies in particular that we cannot find a basis \(\{(\vec{e}_1)_p,(\vec{e}_2)_p\}\) for each tangent space of \(S^2\) which varies continuously over all of \(S^2.\) We do however obtain an adapted basis locally. To see this we choose a local parametrisation \(F : U \to M\subset \mathbb{R}^3\) and compute \(F_u,F_v : U \to M_{3,1}(\mathbb{R}).\) Recall that \(\{(F_u)_{F(q)},(F_v)_{F(q)}\}\) is a basis of \(T_{F(q)}M\) for all \(q \in U.\) Applying the Gram–Schmidt orthonormalisation procedure we thus obtain an orthonormal basis on each tangent space \(T_pM\) where \(p \in F(U).\) Taking the cross product of the two tangent vectors we obtain an adapted basis for each tangent space in \(F(U).\) If we forget about the base points we obtain three column vector-valued maps \[\vec{e}_i : M \to M_{3,1}(\mathbb{R}), \qquad i=1,2,3\] where here – for notational simplicity – we pretend that these maps are defined on all of \(M.\)
Recall the map \(\Psi_n : \mathbb{R}^n \to M_{n,1}(\mathbb{R})\) which turns a point into a column vector \[\Psi_n : \mathbb{R}^n \to M_{n,1}(\mathbb{R}), \quad (x_1,\ldots,x_n) \mapsto \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.\] For what follows we consider the case \(n=3\) and write \(\Psi_3 : \mathbb{R}^3 \to M_{3,1}(\mathbb{R})\) as \[\Psi_3=\begin{pmatrix} x \\ y \\ z \end{pmatrix}\] where the functions \(x,y,z : \mathbb{R}^3 \to \mathbb{R}\) denote the projection onto the respective component. Observe that \[\tag{5.1} [\mathrm{d}\Psi_3(\vec{v}_p)]_p=\begin{pmatrix} \mathrm{d}x(\vec{v}_p) \\ \mathrm{d}y(\vec{v}_p) \\ \mathrm{d}z(\vec{v}_p) \end{pmatrix}_p=\vec{v}_p\] for all \(\vec{v}_p \in T\mathbb{R}^3.\) This implies that for each \(\vec{v}_p \in T_pM\) we have \([\mathrm{d}\Psi_3(\vec{v}_p)]_p \in T_pM,\) hence there are unique smooth \(1\)-forms \(\omega_1,\omega_2\) on \(M\) so that \[\tag{5.2} \mathrm{d}\Psi_3(\vec{v}_p)=\vec{e}_1(p)\omega_1(\vec{v}_p)+\vec{e}_2(p)\omega_2(\vec{v}_p)\] for all \(p \in M\) and all \(\vec{v}_p \in T_pM.\) Notice that for all \(p \in M\) and all \(\vec{v}_p \in T_pM\) we have \[\tag{5.3} \omega_1(\vec{v}_p)=\langle \vec{v}_p,(\vec{e}_1)_p\rangle_p \qquad \text{and} \qquad \omega_2(\vec{v}_p)=\langle \vec{v}_p,(\vec{e}_2)_p\rangle_p\] so that the \(1\)-forms \(\omega_1,\omega_2\) are intrinsic quantities. Notice the identities \[\tag{5.4} \omega_1=\begin{pmatrix} \mathrm{d}x \\ \mathrm{d}y \\ \mathrm{d}z \end{pmatrix} \cdot \vec{e}_1 \qquad \text{and} \qquad \omega_2=\begin{pmatrix} \mathrm{d}x \\ \mathrm{d}y \\ \mathrm{d}z \end{pmatrix} \cdot \vec{e}_2,\] which follow from computing the inner product of (5.2) with \(\vec{e}_1\) and \(\vec{e}_2,\) respectively, and using (5.1).
Recall that for the hyperbolic paraboloid at \(p=(x,y,xy) \in M\) an orthonormal basis of \(T_pM\) is given by \[(\vec{e}_1)_p=\frac{1}{\sqrt{1+y^2}}\begin{pmatrix} 1 \\ 0 \\ y \end{pmatrix}_p\qquad \text{and}\qquad (\vec{e}_2)_p=\frac{1}{\sqrt{1+x^2+y^2}}\begin{pmatrix} -xy/\sqrt{1+y^2} \\ \sqrt{1+y^2} \\ x/\sqrt{1+y^2}\end{pmatrix}_p.\] Using (5.4) we compute that \[\omega_1=\frac{1}{\sqrt{1+y^2}}(\mathrm{d}x + y \mathrm{d}z)\] and \[\omega_2=\frac{1}{\sqrt{1+x^2+y^2}}\left(-\frac{xy}{\sqrt{1+y^2}}\mathrm{d}x+\sqrt{1+y^2}\mathrm{d}y+\frac{x}{\sqrt{1+y^2}}\mathrm{d}z\right)\]
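As a consistency check (using, say, the graph parametrisation \(F(u,v)=(u,v,uv),\) so that \(x=u\) and \(y=v\) at \(p=F(u,v)\)): the tangent vector \((F_u)_p\) has components \((1,0,v),\) hence \[\omega_1\big((F_u)_p\big)=\frac{1+v^2}{\sqrt{1+v^2}}=\sqrt{1+v^2}=\big\langle (F_u)_p,(\vec{e}_1)_p\big\rangle_p \qquad\text{and}\qquad \omega_2\big((F_u)_p\big)=\frac{1}{\sqrt{1+u^2+v^2}}\cdot\frac{-uv+uv}{\sqrt{1+v^2}}=0,\] as predicted by (5.3), since \((\vec{e}_1)_p\) is the unit vector in the direction of \((F_u)_p\) and \((\vec{e}_2)_p\) is orthogonal to it.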
Likewise, there exist unique \(1\)-forms \(\omega_{ij}\) on \(M\) for \(1\leqslant i,j\leqslant 3\) so that \[\mathrm{d}\vec{e}_i(\vec{v}_p)=\sum_{k=1}^3\vec{e}_k(p)\omega_{ki}(\vec{v}_p)\] for all \(p \in M\) and \(\vec{v}_p \in T_pM.\) Omitting the tangent vector \(\vec{v}_p\) we have \[\tag{5.5} \mathrm{d}\vec{e}_i=\sum_{k=1}^3\vec{e}_k\omega_{ki}.\] Since \(\vec{e}_i\cdot \vec{e}_j=\delta_{ij}\) we obtain \[0=\mathrm{d}\left(\vec{e}_i\cdot \vec{e}_j\right)=\left(\sum_{k=1}^3\vec{e}_k\omega_{ki}\right)\cdot \vec{e}_j+\vec{e}_i\cdot \left(\sum_{k=1}^3\vec{e}_k\omega_{kj}\right)=\omega_{ji}+\omega_{ij}\] so that \[\omega_{ij}=-\omega_{ji}.\] We also have:
At each point \(p \in M\) the two cotangent vectors \(\omega_1|_p,\omega_2|_p \in T_p^*M\) are linearly independent and hence a basis of \(T_p^*M.\)
Proof. If \(s_1,s_2 \in \mathbb{R}\) are scalars such that \[s_1\omega_1|_p+s_2\omega_2|_p=0_{T_p^*M},\] then \[0=s_1\omega_1(\vec{v}_p)+s_2\omega_2(\vec{v}_p)=s_1\langle \vec{v}_p,(\vec{e}_1)_p\rangle_p+s_2\langle \vec{v}_p,(\vec{e}_2)_p\rangle_p=\langle \vec{v}_p,\vec{w}_p\rangle_p\] for all \(\vec{v}_p \in T_pM,\) where we write \(\vec{w}_p=s_1(\vec{e}_1)_p+s_2(\vec{e}_2)_p.\) The vector \(\vec{w}_p\) is thus orthogonal to all vectors \(\vec{v}_p \in T_pM.\) Since \(\langle\cdot{,}\cdot\rangle_p\) is non-degenerate this implies that \(\vec{w}_p\) must be the zero vector. This in turn implies that \(s_1=s_2=0,\) since \((\vec{e}_1)_p,(\vec{e}_2)_p\) are linearly independent. Therefore, \(\omega_1|_p,\omega_2|_p\) are linearly independent. Since \(T_p^*M\) is two-dimensional, the claim follows.
Notice that by construction \(\vec{e}_3 : M \to M_{3,1}(\mathbb{R})\) is the Gauss map \(\nu\) of \(M.\) In particular the column vector \(\mathrm{d}\vec{e}_3(\vec{v}_p)=\mathrm{d}\nu(\vec{v}_p)\) attached at the base point \(p \in M\) is tangent to \(M\) for all tangent vectors \(\vec{v}_p \in T_pM.\) This means that there are \(1\)-forms \(\alpha,\beta\) on \(M\) so that \[\tag{5.6} \mathrm{d}\vec{e}_3(\vec{v}_p)=\alpha(\vec{v}_p)\vec{e}_1(p)+\beta(\vec{v}_p)\vec{e}_2(p).\] Since \(\omega_1|_p,\omega_2|_p\) are a basis of \(T_p^*M\) for all \(p \in M\) there are unique functions \(A_{ij}\) on \(M,\) \(1\leqslant i,j\leqslant 2\) so that \[\tag{5.7} \begin{aligned} \alpha|_p&=-A_{11}(p)\omega_1|_p-A_{12}(p)\omega_2|_p,\\ \beta|_p&=-A_{21}(p)\omega_1|_p-A_{22}(p)\omega_2|_p. \end{aligned}\] We now obtain:
The matrix representation of the shape operator \(S_p\) at \(p \in M\) with respect to the ordered orthonormal basis \(\mathbf{b}=((\vec{e}_1)_p,(\vec{e}_2)_p)\) is given by \[\mathbf{M}(S_p,\mathbf{b},\mathbf{b})=-\begin{pmatrix} A_{11}(p) & A_{12}(p) \\ A_{21}(p) & A_{22}(p)\end{pmatrix}.\]
Since \(S_p\) is self-adjoint and \(\mathbf{b}\) an orthonormal basis of \(T_pM,\) the matrix \(\mathbf{M}(S_p,\mathbf{b},\mathbf{b})\) is symmetric; hence Lemma 5.12 implies that \(A_{21}(p)=A_{12}(p).\)
For the Gauss curvature at \(p \in M\) we thus obtain the formula \[K(p)=\det \mathbf{M}(S_p,\mathbf{b},\mathbf{b})=A_{11}(p)A_{22}(p)-A_{12}(p)^2.\]
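For example, for the unit sphere \(S^2\) we may take \(\vec{e}_3\) to be the outward unit normal, so that \(\vec{e}_3=\Psi_3|_{S^2}\) and hence, by (5.1) and (5.2), \[\mathrm{d}\vec{e}_3(\vec{v}_p)=\mathrm{d}\Psi_3(\vec{v}_p)=\vec{e}_1(p)\omega_1(\vec{v}_p)+\vec{e}_2(p)\omega_2(\vec{v}_p)\] for all \(\vec{v}_p \in T_pS^2.\) Comparing with (5.6) and (5.7) gives \(\alpha=\omega_1\) and \(\beta=\omega_2,\) hence \(A_{11}=A_{22}=-1\) and \(A_{12}=A_{21}=0.\) Consequently \(\mathbf{M}(S_p,\mathbf{b},\mathbf{b})\) is the identity matrix and \(K=(-1)(-1)-0^2=1,\) as expected for the unit sphere.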
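The value of \(K\) for the hyperbolic paraboloid can be double-checked independently. Below is a minimal sympy sketch (assumptions: the graph parametrisation \(F(u,v)=(u,v,uv)\) of the surface from the earlier example, and the classical formula \(K=(LN-M^2)/(EG-F^2)\) in terms of the first and second fundamental forms).

```python
# Minimal sympy sketch: Gauss curvature of the hyperbolic paraboloid via the
# first and second fundamental forms of the (assumed) graph parametrisation.
import sympy as sp

u, v = sp.symbols("u v", real=True)
F = sp.Matrix([u, v, u * v])           # graph parametrisation F(u, v) = (u, v, uv)
Fu, Fv = F.diff(u), F.diff(v)

n = Fu.cross(Fv)
n = n / sp.sqrt(n.dot(n))              # unit normal

E, Fc, G = Fu.dot(Fu), Fu.dot(Fv), Fv.dot(Fv)                           # first form
L, M, N = F.diff(u, 2).dot(n), Fu.diff(v).dot(n), F.diff(v, 2).dot(n)   # second form

K = sp.simplify((L * N - M**2) / (E * G - Fc**2))
print(K)                               # -1/(u**2 + v**2 + 1)**2
```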
Combining (5.5) with (5.6) and (5.7) we also have \[\mathrm{d}\vec{e}_3=\vec{e}_1\omega_{13}+\vec{e}_2\omega_{23}=-(A_{11}\omega_1+A_{12}\omega_2)\vec{e}_1-(A_{12}\omega_1+A_{22}\omega_2)\vec{e}_2,\] where we use that \(A_{12}=A_{21}.\) This implies \[\begin{aligned} \omega_{13}&=-\omega_{31}=-A_{11}\omega_1-A_{12}\omega_2,\\ \omega_{23}&=-\omega_{32}=-A_{12}\omega_1-A_{22}\omega_2. \end{aligned}\]

On the other hand, since \(\mathrm{d}^2=0,\) we obtain \[\begin{aligned} 0&=\mathrm{d}^2\Psi_3=\mathrm{d}\vec{e}_1\wedge \omega_1+\vec{e}_1\mathrm{d}\omega_1+\mathrm{d}\vec{e}_2\wedge \omega_2+\vec{e}_2\mathrm{d}\omega_2\\ &=\left(\vec{e}_2\omega_{21}+\vec{e}_3\omega_{31}\right)\wedge\omega_1+\vec{e}_1\mathrm{d}\omega_1+\left(\vec{e}_1\omega_{12}+\vec{e}_3\omega_{32}\right)\wedge\omega_2+\vec{e}_2\mathrm{d}\omega_2. \end{aligned}\] Taking the inner product with \(\vec{e}_1,\) this simplifies to \[0=\mathrm{d}\omega_1+\omega_{12}\wedge\omega_2,\] and taking the inner product with \(\vec{e}_2,\) we obtain \[0=\mathrm{d}\omega_2+\omega_{21}\wedge\omega_1.\] Writing \(\theta:=\omega_{21}=-\omega_{12}\) we thus obtain the equations \[\begin{aligned} \mathrm{d}\omega_1&=-\omega_2\wedge\theta,\\ \mathrm{d}\omega_2&=-\theta\wedge\omega_1. \end{aligned}\]

Taking the exterior derivative of the identity \[\mathrm{d}\vec{e}_i=\sum_{k=1}^3 \vec{e}_k\omega_{ki}\] we conclude likewise that \[\mathrm{d}\omega_{ij}=-\sum_{k=1}^3 \omega_{ik}\wedge\omega_{kj}.\] In particular, we have \[\begin{aligned} \mathrm{d}\theta&=\mathrm{d}\omega_{21}=-\omega_{23}\wedge\omega_{31}=(A_{12}\omega_1+A_{22}\omega_2)\wedge(A_{11}\omega_1+A_{12}\omega_2)\\ &=A_{12}^2\omega_{1}\wedge\omega_2+A_{22}A_{11}\omega_2\wedge\omega_1=-(A_{11}A_{22}-A_{12}^2)\omega_1\wedge\omega_2\\ &=-K\omega_1\wedge\omega_2. \end{aligned}\] In summary, we have the so-called structure equations of E. Cartan \[\tag{5.8} \boxed{\begin{aligned} \mathrm{d}\omega_1 & = -\omega_2\wedge\theta,\\ \mathrm{d}\omega_2 & = -\theta \wedge \omega_1,\\ \mathrm{d}\theta&=-K\omega_1\wedge\omega_2. \end{aligned}}\]

These equations imply that the Gauss curvature is an intrinsic quantity (i.e. the Theorema Egregium). Indeed, the first two equations of (5.8) imply that \(\omega_1,\omega_2\) uniquely determine \(\theta.\) Suppose that \(\hat{\theta}\) is another \(1\)-form on \(M\) satisfying the first two equations of (5.8). There exist real-valued functions \(a,b\) on \(M\) so that \[\hat{\theta}=\theta+a\omega_1+b\omega_2.\] The functions \(a,b\) exist since \(\omega_1|_p,\omega_2|_p\) are a basis of \(T_p^*M\) for all \(p \in M.\) By assumption, we have \(\mathrm{d}\omega_1=-\omega_2\wedge\hat{\theta}\) and hence \[0=\mathrm{d}\omega_1-\mathrm{d}\omega_1=-\omega_2\wedge\theta+\omega_2\wedge\hat{\theta}=-a\omega_1\wedge\omega_2.\] Since \((\vec{e}_1)_p,(\vec{e}_2)_p\) are linearly independent for all \(p \in M,\) the alternating bilinear form \((\omega_1\wedge\omega_2)|_p\) is never the zero form (indeed, \((\omega_1\wedge\omega_2)|_p((\vec{e}_1)_p,(\vec{e}_2)_p)=1\)). This implies that \(a\) must vanish identically. Arguing with the second equation from (5.8), it follows that \(b\) must vanish identically as well; this implies that \(\hat{\theta}=\theta.\) Using the third equation, one can conclude similarly that \(K\) is uniquely determined in terms of \(\omega_1,\omega_2,\theta.\) Recall that \(\omega_1,\omega_2\) are intrinsic quantities. Since \(\theta\) is uniquely determined by \(\omega_1,\omega_2,\) it follows that \(\theta\) and hence the Gauss curvature \(K\) are intrinsic as well.
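As an illustration of (5.8), consider the unit sphere \(S^2\) with the parametrisation \(F(u,v)=(\sin u\cos v,\sin u\sin v,\cos u)\) and, away from the poles, the adapted frame \(\vec{e}_1=F_u,\) \(\vec{e}_2=F_v/\sin u\) (we identify forms on \(F(U)\) with their pullbacks to \(U,\) i.e. we work in the coordinates \(u,v\)). One finds \(\omega_1=\mathrm{d}u\) and \(\omega_2=\sin u\,\mathrm{d}v,\) so \[\mathrm{d}\omega_1=0 \qquad\text{and}\qquad \mathrm{d}\omega_2=\cos u\,\mathrm{d}u\wedge\mathrm{d}v.\] The first two equations of (5.8) are solved by \(\theta=\cos u\,\mathrm{d}v\): indeed, \(-\omega_2\wedge\theta=-\sin u\cos u\,\mathrm{d}v\wedge\mathrm{d}v=0=\mathrm{d}\omega_1\) and \(-\theta\wedge\omega_1=-\cos u\,\mathrm{d}v\wedge\mathrm{d}u=\cos u\,\mathrm{d}u\wedge\mathrm{d}v=\mathrm{d}\omega_2.\) By the uniqueness argument above this is the form \(\theta\) of (5.8), and the third equation gives \[\mathrm{d}\theta=-\sin u\,\mathrm{d}u\wedge\mathrm{d}v=-K\,\omega_1\wedge\omega_2=-K\sin u\,\mathrm{d}u\wedge\mathrm{d}v,\] so \(K=1:\) the Gauss curvature of the unit sphere is recovered from \(\omega_1,\omega_2\) alone.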