For the next two convergence criteria we recall the limit superior and the limit inferior of a sequence \({(a_n)}_{n \in {\mathbb{N}}}.\) Inspecting the definitions of these two terms more closely, we see that if the sequence \({(a_n)}_{n \in {\mathbb{N}}}\) is merely bounded from above, the sequence \(\left( \sup\{ a_n \;|\; n \ge k \}\right)_{k \in {\mathbb{N}}}\) is still well-defined, and since it is monotonically decreasing, it is either convergent or properly divergent towards \(-\infty.\) In the latter case we write \[\limsup_{n \to \infty} a_n = -\infty.\] Moreover, if the sequence is not bounded from above, then we set \[\limsup_{n \to \infty} a_n := \infty.\] We proceed analogously with the limit inferior, allowing the improper limits \(\pm \infty.\)

Theorem 3.10 • Root test

Let \({(a_n)}_{n \in {\mathbb{N}}}\) be a sequence with \(a: = \limsup_{n \to \infty} \sqrt[n]{|a_n|} \in {\mathbb{R}}\cup \{\infty\}.\)

  1. If \(a < 1,\) then the series \(\sum_{k=0}^\infty a_k\) is absolutely convergent.

  2. If \(a > 1\) or \(a = \infty,\) then the series \(\sum_{k=0}^\infty a_k\) is divergent.

Proof.

  1. If \(a < 1,\) then there exists a \(q \in {\mathbb{R}}\) such that \(a < q < 1\) – we could, e. g., choose \(q:=\frac{a+1}{2}.\) Then for \(\varepsilon :=q-a > 0,\) by Theorem 2.16 i, there exists an \(N \in {\mathbb{N}}\) with \[\sqrt[n]{|a_n|} < a+ \varepsilon = q \quad \text{for all } n \ge N.\] Therefore, \(|a_n| < q^n\) for all \(n \ge N.\) Since \(q = |q| < 1,\) the geometric series \(\sum_{k=0}^\infty q^k\) is a convergent majorant of \(\sum_{k=0}^\infty a_k.\)

  2. If, on the other hand, \(a > 1,\) then by Theorem 2.16 ii, for \(\varepsilon:= a-1>0\) there exist infinitely many \(n \in {\mathbb{N}}\) with \(\sqrt[n]{|a_n|} > a - \varepsilon = 1\) from which we obtain \(|a_n| > 1\) for infinitely many \(n \in {\mathbb{N}}.\) But then \({(a_n)}_{n \in {\mathbb{N}}}\) is not a null sequence, hence the series \(\sum_{k=0}^\infty a_k\) is divergent. The case \(a = \infty\) follows analogously.
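The mechanism of the root test can also be observed numerically. The following sketch (our own illustration, not part of the text) applies it to the series with terms \(a_n = n/3^n\): the \(n\)-th roots of the terms approach \(1/3 < 1\), and the partial sums indeed converge (to \(3/4\), by the known formula \(\sum_{n \ge 1} n x^n = x/(1-x)^2\)).

```python
# Root test for a_n = n / 3^n: the n-th roots of |a_n| approach 1/3 < 1,
# so the series converges absolutely; its value is 3/4.
def nth_root(n):
    return (n / 3**n) ** (1.0 / n)

print([round(nth_root(n), 4) for n in (10, 50, 200)])  # tends towards 1/3

partial_sum = sum(n / 3**n for n in range(1, 200))
print(partial_sum)  # close to 0.75
```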

Video 3.7. The root test.

Remark 3.4

For the case \(a=1,\) Theorem 3.10 does not make any statement. Indeed, the considered series can be convergent or divergent. Consider for instance the series \[\sum_{k=1}^\infty \frac{1}{k} \quad \text{and} \quad \sum_{k=1}^\infty \frac{1}{k^2}.\] We already know that the first series is divergent and the second one is convergent (exercise!). On the other hand, since \(\lim_{n \to \infty} \sqrt[n]{n} = 1\) (exercise!), we have \[\lim_{n \to \infty} \sqrt[n]{\frac{1}{n}} = \lim_{n \to \infty} \frac{1}{\sqrt[n]{n}} = 1 \quad \text{and} \quad \lim_{n \to \infty} \sqrt[n]{\frac{1}{n^2}} = \lim_{n \to \infty} \frac{1}{\sqrt[n]{n} \sqrt[n]{n}} = 1.\]
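The boundary case \(a = 1\) can be observed numerically; in the sketch below (helper name is ours), the \(n\)-th roots of the terms of both series creep towards \(1\), so the root test cannot separate the divergent from the convergent series.

```python
# In the boundary case a = 1 the root test gives no information: for both
# a_n = 1/n (divergent series) and a_n = 1/n^2 (convergent series) the
# n-th roots of the terms tend to 1.
def nth_root_of_term(n, p):
    """n-th root of |a_n| for a_n = 1 / n**p."""
    return (1.0 / n**p) ** (1.0 / n)

for p in (1, 2):
    print(p, [round(nth_root_of_term(n, p), 6) for n in (10, 1000, 1_000_000)])
```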

Theorem 3.11 • Ratio test

Let \({(a_n)}_{n \in {\mathbb{N}}}\) be a sequence with \(a_n \neq 0\) for all \(n \in {\mathbb{N}}.\)

  1. If \(\limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1,\) then the series \(\sum_{k=0}^\infty a_k\) is absolutely convergent.

  2. If \(\liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| > 1,\) then the series \(\sum_{k=0}^\infty a_k\) is divergent.

Proof.

  1. We prove the statement about convergence with the help of the root test. We show that for all sequences \({(a_n)}_{n \in {\mathbb{N}}}\) it holds that \[a := \limsup_{n \to \infty} \sqrt[n]{|a_n|} \le \limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| =: b,\] assuming that \(b \in {\mathbb{R}},\) i. e., that the corresponding limit superior is finite. If \(b<1,\) then also \(a < 1\) and the absolute convergence of \(\sum_{k=0}^\infty a_k\) then follows from the root test.

  2. Let \(\varepsilon > 0\) be arbitrary and \(q:=b+\varepsilon.\) Then by Theorem 2.16 i there exists an \(N \in {\mathbb{N}}\) such that \[\left| \frac{a_{n+1}}{a_n} \right| \le q \quad \text{for all } n \ge N.\] (In fact, we even have “\(<\)” in the latter inequality, but the weaker statement is sufficient for our purposes.) Then for all \(n,\,m \in {\mathbb{N}}\) with \(n \ge m \ge N\) we obtain \[\left| \frac{a_n}{a_m} \right| = \left| \frac{a_{n}}{a_{n-1}} \right| \cdot \left| \frac{a_{n-1}}{a_{n-2}} \right| \cdot \ldots \cdot \left| \frac{a_{m+1}}{a_m} \right| \le q^{n-m} = \frac{q^n}{q^m}.\] From this we obtain \[\sqrt[n]{|a_n|} \le \sqrt[n]{\frac{|a_m|}{q^m}} \cdot q.\] Now we fix \(m \ge N\) and exploit the fact that for fixed \(x > 0,\) \(\lim_{n \to \infty} \sqrt[n]{x} = 1\) (exercise!). With this we obtain \[a = \limsup_{n \to \infty} \sqrt[n]{|a_n|} \le \lim_{n \to \infty} \sqrt[n]{\frac{|a_m|}{q^m}} \cdot q = q = b+ \varepsilon.\] Since this holds true for all \(\varepsilon > 0,\) we obtain \(a \le b\) (cf. Remark 2.6).

  3. From \(\liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| > 1,\) by Corollary 2.17 i it follows that \(\left| \frac{a_{n+1}}{a_n} \right| \ge 1\) for almost all \(n \in {\mathbb{N}}.\) Thus there exists an \(N \in {\mathbb{N}}\) with \(|a_{n+1}| \ge |a_n|\) for all \(n \ge N,\) in particular, \(|a_n| \ge |a_N| > 0.\) But then \({(a_n)}_{n \in {\mathbb{N}}}\) is not a null sequence and the series \(\sum_{k=0}^\infty a_k\) is divergent.

Video 3.8. The ratio test.

Remark 3.5

  1. Similarly to the root test, there are cases in which the theorem does not make any statement, namely if \[\liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| \le 1 \le \limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|.\] The two sequences from Remark 3.4 are examples of this, since in both cases \(\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n}\right| = 1.\)

  2. In the case \(\liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = 1,\) we can still test whether \(\left| \frac{a_{n+1}}{a_n} \right| \ge 1\) for almost all \(n \in {\mathbb{N}}.\) The proof of the ratio test already shows that the series \(\sum_{k=0}^\infty a_k\) is then divergent.

  3. As a convergence criterion, the root test is stronger than the ratio test: whenever the ratio test confirms the convergence of a series, so does the root test. Conversely, there are series for which the ratio test cannot decide convergence but the root test can.

Example 3.6

  1. Let \(\sum_{k=0}^\infty a_k\) be the series defined by \[a_{2n} = \frac{1}{2^{2n}} \quad \text{and} \quad a_{2n+1} = \frac{1}{2^{2n-1}} \quad \text{for all } n \in {\mathbb{N}}.\] Then we have \[\frac{a_{2n+1}}{a_{2n}} = \frac{2^{2n}}{2^{2n-1}} = 2 \quad \text{and} \quad \frac{a_{2n+2}}{a_{2n+1}} = \frac{2^{2n-1}}{2^{2n+2}} = \frac{1}{8} \quad \text{for all } n \in {\mathbb{N}}.\] Then \[\frac{1}{8} = \liminf_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| \le 1 \le \limsup_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = 2.\] So the ratio test does not decide about convergence or divergence of the series. On the other hand, \[\begin{aligned} \sqrt[2n]{a_{2n}} &= \frac{1}{2} \quad \text{and} \\ \sqrt[2n+1]{a_{2n+1}} &= \sqrt[2n+1]{\frac{4}{2^{2n+1}}} = \frac{\sqrt[2n+1]{4}}{2} \to \frac{1}{2} \quad \text{for } n \to \infty. \end{aligned}\] So with Lemma 3.4, we obtain \(\lim_{n \to \infty} \sqrt[n]{a_n} = \frac{1}{2} < 1.\) Hence, by the root test, the series is convergent.

  2. The ratio test also has some advantages: the calculation of the involved limits and limits superior is often much easier than for the root test. Consider, e. g., the so-called exponential series \[\sum_{k=0}^\infty \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots\] for \(x \in {\mathbb{R}}.\) This series is absolutely convergent for all \(x \in {\mathbb{R}}\) by the ratio test, since \[\left| \frac{a_{n+1}}{a_n} \right| = \frac{|x|^{n+1}}{(n+1)!} \cdot \frac{n!}{|x|^n} = \frac{|x|}{n+1} \quad \Longrightarrow \quad \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = 0 < 1.\] Using the root test, we would have had to calculate the more difficult limit \(\lim_{n \to \infty} \frac{|x|}{\sqrt[n]{n!}}.\)
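The series from Example 3.6 i can be checked numerically; in the sketch below (helper name is ours), the consecutive ratios oscillate between \(2\) and \(1/8\), while the \(n\)-th roots settle near \(1/2\), exactly as claimed.

```python
# Terms from Example 3.6 i: a_{2n} = 1/2^{2n}, a_{2n+1} = 1/2^{2n-1}.
def a(n):
    if n % 2 == 0:
        return 1.0 / 2**n        # a_{2n} = 1 / 2^{2n}
    return 1.0 / 2**(n - 2)      # a_{2n+1} = 1 / 2^{2n-1}

ratios = [a(n + 1) / a(n) for n in range(8)]          # oscillates: 2, 1/8, ...
roots = [a(n) ** (1.0 / n) for n in (10, 101, 1000)]  # settles near 1/2
print(ratios)
print(roots)
```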

3.3 The Exponential Series

The series we have considered in Example 3.6 ii is particularly important, so we will devote an entire section to it. Let us have a look at a particular application problem, namely a growth process such as a bacteria culture grown in a nutrient solution in a Petri dish. We define the function \(f: {\mathbb{R}}\to {\mathbb{R}}\) by \[f(t) := \text{mass of the bacteria at time } t.\] Of course, for this we have to prescribe an appropriate initial time. Due to the nutrient solution, the bacteria population grows. For sufficiently small time intervals \(\Delta t,\) the growth is almost proportional to the mass of the bacteria as long as the Petri dish is not too small (in which case the bacteria population would be saturated and grow more slowly or not at all). If the Petri dish is large enough we thus obtain \[f(t + \Delta t) - f(t) \approx c \cdot f(t) \cdot \Delta t.\] For simplicity, we consider the special case \(c=1\) and consider the limit \[f'(t) := \lim_{\Delta t \to 0} \frac{f(t+\Delta t) - f(t)}{\Delta t} = f(t).\] At this point we have used the limit of functions and the derivative – concepts that we will introduce formally later in this course. The function \(f\) that describes the bacteria growth has the property that \(f' = f.\) The latter is an example of an ordinary differential equation. You will learn more about the theory and the numerical solution of differential equations in upcoming modules. From high school you may remember that the function \(f: {\mathbb{R}}\to {\mathbb{R}},\) \(t \mapsto e^t\) fulfills the property \(f' = f,\) i. e., this function solves the differential equation. Since we have not yet defined the exponential function and do not yet know how to deal with non-integer exponents, we are going to describe this function differently.
As ansatz we choose a polynomial function \[f(t) = a_0 + a_1t + a_2t^2 + \ldots + a_nt^n, \quad t \in {\mathbb{R}}\] with coefficients \(a_0,\,a_1,\,a_2,\,\ldots,\,a_n \in {\mathbb{R}}.\) By using the well-known differentiation rules (that we will prove in detail later), we obtain \[f'(t) = a_1 + 2a_2 t + 3a_3t^2 + \ldots + na_nt^{n-1}, \quad t \in {\mathbb{R}},\] hence \[f(t) - f'(t) = a_0 - a_1 + (a_1-2a_2)t + (a_2-3a_3)t^2 + \ldots + (a_{n-1} - na_n)t^{n-1} + a_n t^n.\] For small \(|t|<1\) and sufficiently large \(n \in {\mathbb{N}},\) the term \(a_nt^n\) is negligible. Hence, \(|f(t) - f'(t)|\) is also small if \(a_{k-1} = ka_k\) for all \(k=1,\,\ldots,\,n.\) By induction we obtain \[a_k = \frac{1}{k!}a_0.\] Therefore, the function \(f: {\mathbb{R}}\to {\mathbb{R}}\) with \[f(t) = a_0 \cdot \sum_{k=0}^n \frac{t^k}{k!}\] should be a good approximation for a function with \(f'=f,\) at least for small values of \(|t|.\) We expect the approximation quality to improve if we take more summands, i. e., if we consider the limit \(n \to \infty.\)
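The ansatz can be tested numerically. In the sketch below (our own illustration, with \(a_0 = 1\)), the termwise derivative of the truncated series drops exactly the last summand, so \(f(t) - f'(t) = t^n/n!\) – tiny for small \(|t|\) and large \(n\).

```python
from math import factorial

# Truncated exponential series with a_0 = 1 and its termwise derivative;
# the coefficients satisfy a_{k-1} = k * a_k, so f(t) - f'(t) = t^n / n!.
def f(t, n):
    return sum(t**k / factorial(k) for k in range(n + 1))

def f_prime(t, n):
    return sum(t**k / factorial(k) for k in range(n))

t, n = 0.5, 10
print(f(t, n) - f_prime(t, n), t**n / factorial(n))  # both tiny and equal
```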

Definition 3.5 • Exponential function, Euler’s number

  1. For \(x \in {\mathbb{R}}\) we define \[\exp(x) := \sum_{k=0}^\infty \frac{x^k}{k!}.\] The function \(\exp: {\mathbb{R}}\to {\mathbb{R}},\) \(x \mapsto \exp(x)\) is called the exponential function.

  2. The number \[\mathrm{e}:= \exp(1) = \sum_{k=0}^\infty \frac{1}{k!}\] is called Euler’s number.

Remark 3.6

By the series representation we can compute approximations of Euler’s number \(\mathrm{e}\) by calculating partial sums. To determine how good our approximation is, we consider the remainder term \[R_n(x) := \exp(x) - \sum_{k=0}^n \frac{x^k}{k!}.\] It holds that \[\begin{aligned} |R_n(x)| &\le \sum_{k=n+1}^\infty \frac{|x|^k}{k!} = \frac{|x|^{n+1}}{(n+1)!} \cdot \left( 1+ \frac{|x|}{n+2} + \frac{|x|^2}{(n+2)(n+3)} + \ldots \right) \\ &\le \frac{|x|^{n+1}}{(n+1)!} \cdot\left( 1+ \frac{|x|}{n+2} + \frac{|x|^2}{(n+2)^2} + \ldots \right) \\ &= \frac{|x|^{n+1}}{(n+1)!} \cdot \sum_{k=0}^\infty \left( \frac{|x|}{n+2}\right)^k. \end{aligned}\] The second estimate is allowed by the majorant criterion, if we know that the geometric series \(\sum_{k=0}^\infty \left( \frac{|x|}{n+2}\right)^k\) converges. We can enforce the convergence of this series by choosing \(n\) sufficiently large. If we choose \(n\) such that \(\frac{|x|}{n+2} \le \frac{1}{2}\) (equivalently, \(n \ge 2(|x|-1)\)), we can estimate \[\sum_{k=0}^\infty \left( \frac{|x|}{n+2}\right)^k \le \sum_{k=0}^\infty \left( \frac{1}{2}\right)^k = 2.\] We conclude \[|R_n(x)| \le \frac{2|x|^{n+1}}{(n+1)!} \quad \text{for } n \ge 2(|x|-1).\] For \(x=1\) and \(n=10\) we obtain \(|R_{10}(1)| \le \frac{2}{11!} \le 5.02 \cdot 10^{-8}\) and thus we obtain the estimate \[\mathrm{e}= \sum_{k=0}^{10} \frac{1}{k!} + R_{10}(1) \in \big[2.7182818011\ldots - 5.02\cdot 10^{-8},2.7182818011\ldots + 5.02\cdot 10^{-8}\big].\] One can see that we have determined the first 7 digits of Euler’s number by taking only the summands up to \(k=10\) of its series representation.
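The remainder estimate can be verified directly; a sketch (helper name is ours) comparing the partial sum with Python's built-in constant:

```python
from math import e, factorial

# Partial sum of the exponential series at x = 1 and the remainder bound
# |R_n(1)| <= 2 / (n+1)! derived above (valid since n = 10 >= 2(|1| - 1)).
def exp_partial_sum(x, n):
    return sum(x**k / factorial(k) for k in range(n + 1))

approx = exp_partial_sum(1.0, 10)
bound = 2.0 / factorial(11)
print(approx, abs(e - approx) <= bound)  # the actual error obeys the bound
```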

Our next goal is to show that \(\exp(k) = \mathrm{e}^k\) for all \(k \in {\mathbb{Z}}.\) In order to fulfill the power law \(\mathrm{e}^{k+m} = \mathrm{e}^k\cdot\mathrm{e}^m\) for all \(k,\,m \in {\mathbb{Z}},\) the condition \(\exp(k+m) = \exp(k)\exp(m)\) should be satisfied for all \(k,\,m \in {\mathbb{Z}}.\) We show that it even holds that \[\exp(x+y) = \exp(x)\exp(y) = \left( \sum_{k=0}^\infty \frac{x^k}{k!}\right) \cdot \left( \sum_{k=0}^\infty \frac{y^k}{k!}\right)\] for all \(x,\, y \in {\mathbb{R}}.\) Let us first formulate a result that shows how to multiply two series. Consider the partial sums of the series \(\sum_{k=0}^\infty a_k\) and \(\sum_{k=0}^\infty b_k.\) Then we get \[\left(\sum_{k=0}^n a_k \right) \cdot \left(\sum_{k=0}^n b_k \right) = \sum_{k=0}^n \sum_{\ell=0}^n a_k b_\ell.\] If we try to represent this product as a series, we expect that we have to sum over all products of the form \(a_k b_\ell\) for \(k,\,\ell \in {\mathbb{N}}.\) These can be visualized as an “infinite array” \[\begin{matrix} a_0b_0 & a_0b_1 & a_0b_2 & a_0b_3 & \cdots \\ a_1b_0 & a_1b_1 & a_1b_2 & a_1b_3 & \cdots \\ a_2b_0 & a_2b_1 & a_2b_2 & a_2b_3 & \cdots \\ a_3b_0 & a_3b_1 & a_3b_2 & a_3b_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{matrix}\] There are many possibilities to formulate the corresponding series, which differ in the order of summation of the summands. Two possibilities are given in Figure 3.2.

Figure 3.2: Different summation orders for products of series

The corresponding series attain the form \[\sum_{k=0}^\infty q_k \text{ with } q_k := a_k b_0 + \ldots + a_k b_k + \ldots + a_0 b_k, \tag{3.8}\] and \[\sum_{k=0}^\infty d_k \text{ with } d_k := a_k b_0 + a_{k-1}b_1 + \ldots + a_0 b_k = \sum_{\ell = 0}^k a_{k-\ell}b_\ell. \tag{3.9}\] If both series are decomposed into summands \(a_ib_j,\) then they can be viewed as rearrangements of each other and deliver the same sum in case of absolute convergence. In this case, we prefer the summation along the diagonals, since it is particularly convenient for products of power series, as the following computation shows. For the \(n\)-th partial sums we obtain \[\begin{aligned} \left(\sum_{k=0}^n a_k x^k\right) \cdot \left(\sum_{k=0}^n b_k x^k\right) &= \big( a_0 + a_1x + \ldots + a_n x^n \big) \cdot \big( b_0 + b_1 x + \ldots + b_n x^n \big) \\ &= a_0b_0 + (a_1b_0 + a_0b_1)x + (a_2b_0+a_1b_1+a_0b_2) x^2 \\ &\phantom{=} + \ldots + a_nb_n x^{2n}, \end{aligned}\] where the coefficients of \(x^k\) for \(k=0,\,\ldots,\,n\) are given by \(a_kb_0 + a_{k-1}b_1 + \ldots + a_0b_k.\) (Note that the coefficients of \(x^k\) for \(k=0,\,\ldots,\,n\) do not change anymore if \(n\) is further increased, while the coefficients of \(x^k\) for \(k=n+1,\,\ldots,\,2n\) may change, if \(n\) is increased.)
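For polynomials, the diagonal scheme is just the familiar coefficient convolution; a small sketch (function name is ours):

```python
# Coefficient of x^k in the product of two polynomials: the convolution
# a_k*b_0 + a_{k-1}*b_1 + ... + a_0*b_k, exactly as in the diagonal sums d_k.
def poly_mult(a, b):
    """Coefficient lists, lowest degree first."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] += ai * bj
    return result

# (1 + 2x) * (3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3
print(poly_mult([1, 2], [3, 4, 5]))  # [3, 10, 13, 10]
```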

Definition 3.6 • Cauchy product

Let \(\sum_{k=0}^\infty a_k\) and \(\sum_{k=0}^\infty b_k\) be two series. Then the series \[\sum_{k=0}^\infty d_k, \quad d_k = \sum_{\ell=0}^k a_{k-\ell} b_\ell\] is called the Cauchy product of the series \(\sum_{k=0}^\infty a_k\) and \(\sum_{k=0}^\infty b_k.\)

Theorem 3.12

Let \(\sum_{k=0}^\infty a_k\) and \(\sum_{k=0}^\infty b_k\) be absolutely convergent series. Then also the Cauchy product of the two series is convergent and \[\sum_{k=0}^\infty \sum_{\ell=0}^k a_{k-\ell} b_\ell = \left(\sum_{k=0}^\infty a_k \right) \cdot \left(\sum_{k=0}^\infty b_k \right).\]

Proof. First we consider the series defined by (3.8). The \(n\)-th partial sum contains all summands in the corresponding square and we have \[\sum_{k=0}^n q_k = \left(\sum_{k=0}^n a_k \right) \cdot \left(\sum_{k=0}^n b_k \right).\] Moreover, by the triangle inequality, for \(k=1,\,\ldots,\,n\) we have \[\begin{aligned} |q_k| &\le |a_k b_0| + \ldots + |a_k b_k| + \ldots + |a_0 b_k| \\ &= |a_k| \cdot |b_0| + \ldots + |a_k| \cdot |b_k| + \ldots + |a_0| \cdot |b_k|. \end{aligned}\] Hence we have \[\sum_{k=0}^n |q_k| \le \left(\sum_{k=0}^n |a_k| \right) \cdot \left(\sum_{k=0}^n |b_k| \right).\] Since both \(\sum_{k=0}^\infty a_k\) and \(\sum_{k=0}^\infty b_k\) are absolutely convergent, it follows from Theorem 3.1 that the right-hand side converges to \(\left(\sum_{k=0}^\infty |a_k| \right) \cdot \left(\sum_{k=0}^\infty |b_k| \right)\) as \(n \to \infty;\) hence the monotonically increasing partial sums of \(\sum_{k=0}^\infty |q_k|\) are bounded, so \(\sum_{k=0}^\infty q_k\) is absolutely convergent. For the sum we have \[\sum_{k=0}^\infty q_k = \lim_{n \to \infty} \sum_{k=0}^n q_k = \left(\sum_{k=0}^\infty a_k \right) \cdot \left(\sum_{k=0}^\infty b_k \right).\] Since the Cauchy product of \(\sum_{k=0}^\infty a_k\) and \(\sum_{k=0}^\infty b_k\) is a rearrangement of the series \(\sum_{k=0}^\infty q_k\) (which is absolutely convergent), we obtain the claim by Theorem 3.7.
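Theorem 3.12 can be sanity-checked with two geometric series, whose product of sums is known in closed form; a sketch (names are ours):

```python
# Cauchy product of the geometric series sum x^k and sum y^k; by
# Theorem 3.12 its value is 1/(1-x) * 1/(1-y).
def cauchy_partial(a, b, n):
    """n-th partial sum of the Cauchy product of the sequences a, b."""
    return sum(sum(a(k - l) * b(l) for l in range(k + 1)) for k in range(n + 1))

x, y = 0.5, 0.25
approx = cauchy_partial(lambda k: x**k, lambda k: y**k, 60)
print(approx, 1.0 / ((1.0 - x) * (1.0 - y)))  # both close to 8/3
```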

As an application of Theorem 3.12 we obtain the functional equation for the exponential function.

Theorem 3.13

(Functional equation for \(\exp\)). For all \(x,\,y \in {\mathbb{R}}\) it holds that \[\exp(x+y) = \exp(x)\exp(y).\]

Proof. With the help of Theorem 3.12 and the binomial theorem we obtain \[\begin{aligned} \exp(x)\exp(y) &= \left( \sum_{k=0}^\infty \frac{1}{k!}x^k \right) \cdot\left( \sum_{k=0}^\infty \frac{1}{k!}y^k \right) \\ &= \sum_{k=0}^\infty \left( \sum_{\ell=0}^k \frac{1}{(k-\ell)!}x^{k-\ell}\cdot \frac{1}{\ell!}y^\ell\right) \\ &= \sum_{k=0}^\infty \left( \frac{1}{k!} \sum_{\ell=0}^k \binom{k}{\ell} x^{k-\ell}\cdot y^\ell\right) \\ &= \sum_{k=0}^\infty \frac{1}{k!} (x+y)^k \\ &= \exp(x+y). \end{aligned}\]
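The functional equation can be confirmed numerically using only partial sums of the defining series (the truncation level 40 is our own choice, amply sufficient for the small arguments used here):

```python
from math import factorial

# Functional equation exp(x+y) = exp(x)*exp(y), checked with partial sums
# of the defining series (no call to math.exp).
def exp_series(x, n=40):
    return sum(x**k / factorial(k) for k in range(n + 1))

x, y = 1.3, -0.7
print(exp_series(x + y), exp_series(x) * exp_series(y))  # agree closely
```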

Corollary 3.14

Let \(x \in {\mathbb{R}}\) and \(m \in {\mathbb{Z}}.\) Then the following properties are satisfied:

  1. \(\exp(0) = 1,\)

  2. \(\exp(x) \neq 0\) and \(\exp(-x) = \frac{1}{\exp(x)},\)

  3. \(\exp(x) > 0,\)

  4. \(\exp(m) = \mathrm{e}^m.\)

Proof. Exercise!

3.4 \(b\)-Adic Fractions

In this section we consider the representation of real numbers as decimal fractions, or more generally, as \(b\)-adic fractions with basis \(b \in {\mathbb{N}},\) \(b \ge 2.\)

Definition 3.7 • b-adic fraction, basis, digits

Let \(b \in {\mathbb{N}}\setminus \{0,\,1\}.\) A \(b\)-adic fraction is a series of the form \[c \sum_{k=-n}^\infty a_k b^{-k} = c\left( a_{-n}b^n + \ldots + a_{-1}b + \sum_{k=0}^\infty a_k b^{-k} \right),\] where \(n \in {\mathbb{N}},\) \(a_k \in \{0,\,1,\,\ldots,\,b-1\}\) for \(k \in {\mathbb{N}}\cup \{ -n,\,\ldots,\,-1 \}\) and \(c \in \{-1,\,1\}.\) We write \[\tag{3.10} \pm\left( a_{-n} \ldots a_{-1}a_0.a_1a_2a_3 \ldots \right)_b.\] The number \(b\) is called basis of the fraction, the elements \(\{0,\,1,\,2,\,\ldots,\,b-1\}\) are called digits.

For the case \(b=10\) we also talk about a decimal fraction and then we do not write the subscript “\(b\)” in (3.10). (If we are very precise, we should actually write \(b=9+1\) instead of \(b=10,\) since here we already make use of the decimal fractions before having defined them properly.)

Example 3.7

  1. It holds that \(\frac{25}{2} = 1\cdot10^1 + 2\cdot10^0 + 5\cdot10^{-1}\) and therefore, \[\frac{25}{2} = +1 \cdot \sum_{k=-1}^{\infty} a_k\cdot 10^{-k}\] with \(a_{-1} = 1,\) \(a_0 = 2,\) \(a_1 = 5,\) \(a_k = 0\) for \(k \ge 2,\) and \(c = 1.\) With this we obtain \(\frac{25}{2} = +{(12.5000\ldots)}_{10}\) or for short, \(\frac{25}{2} = 12.5.\)

  2. Analogously we obtain \[\frac{10}{9} = \sum_{k=0}^\infty 1\cdot 10^{-k} = 1.111111\ldots.\] Such decimal fractions are called periodic and we write \(1.\overline{1},\) where the bar denotes a repeating group of digits after the point (or comma). Since we know this notation from high school, we do not define it formally at this point.

Lemma 3.15

Let \(b \in {\mathbb{N}}\) with \(b \ge 2.\) Then every \(b\)-adic fraction is absolutely convergent.

Proof. Let \(c\sum_{k=-n}^\infty a_kb^{-k}\) be the considered \(b\)-adic fraction. Since \(0\le a_k < b\) for all \(k \in {\mathbb{N}}\) we have \[\left| ca_kb^{-k} \right| = a_k b^{-k} < b\cdot b^{-k} = b^{-k+1} = \frac{1}{b^{k-1}}.\] Since \(\sum_{k=1}^\infty \frac{1}{b^{k-1}}\) with \(b \ge 2\) is a convergent geometric series, by the majorant criterion the series \(\sum_{k=1}^\infty ca_kb^{-k}\) is absolutely convergent. Hence also the series \(\sum_{k=-n}^\infty ca_kb^{-k}\) is absolutely convergent.

For the proof of the following theorem, we recall the following notations (that we have already seen previously). For \(x \in {\mathbb{R}}\) let \[\lfloor x \rfloor := \max\{ k \in {\mathbb{Z}}\;|\; k\le x \}, \quad \lceil x \rceil := \min \{ k \in {\mathbb{Z}}\;|\; k\ge x \},\] i. e., the integer numbers that we obtain by rounding down (or up). Obviously, \[\tag{3.11} 0 \le x - \lfloor x \rfloor < 1 \quad \text{and} \quad 0 \le \lceil x \rceil - x < 1.\]

Theorem 3.16

Let \(b \in {\mathbb{N}}\) with \(b \ge 2.\) Then every \(x \in {\mathbb{R}}\) can be represented as \(b\)-adic fraction.

Proof. Let w. l. o. g. (without loss of generality) be \(x \ge 0.\) The case \(x<0\) can be shown analogously. Since \(\big(b^k\big)_{k \in {\mathbb{N}}}\) is an unbounded sequence, there exists an \(n \in {\mathbb{N}}\) such that \(x < b^{n+1};\) we choose \(n\) minimal with this property, so in particular \(0 \le x < b^{n+1}.\) We will now construct a sequence \({(a_k)}_{k \ge -n}\) such that for all \(\ell \in {\mathbb{N}}\cup \{-n,\,\ldots,\,-1\}\) we have that \[\tag{3.12} x = \sum_{k = -n}^\ell a_k b^{-k} + r_\ell \quad \text{where} \quad 0 \le r_\ell < b^{-\ell}.\] Since \(0 \le r_\ell < b^{-\ell}\) for all \(\ell \in {\mathbb{N}}\cup \{-n,\,\ldots,\,-1\}\) and since \(\lim_{\ell \to \infty} b^{-\ell} = 0,\) we can use Theorem 2.7 and obtain \(\lim_{\ell \to \infty} r_{\ell} = 0.\) Hence we also have \[x = \sum_{k = -n}^\infty a_k b^{-k}.\] So let us show (3.12) for \(\ell \in {\mathbb{N}}\cup \{-n,\,\ldots,\,-1\}\) inductively:
Induction base: We show the statement for \(\ell = -n.\) (It is no problem to start with \(\ell = -n\) instead of \(\ell = 0.\)) Because of \(0 \le x < b^{n+1}\) it holds that \(0 \le xb^{-n} < b.\) We set \(a_{-n} := \big\lfloor xb^{-n} \big\rfloor\) and \(\delta_{-n} := xb^{-n} - a_{-n}.\) Then \(a_{-n} \in \{ 0,\,1,\,\ldots,\,b-1 \},\) \(\delta_{-n} \in [0,1)\) (because of (3.11)) and \[xb^{-n} = a_{-n} + \delta_{-n}.\] Hence we obtain \[x = a_{-n}b^n + \delta_{-n}b^n = a_{-n}b^n + r_{-n},\] where \(r_{-n} := \delta_{-n}b^n < b^n\) and \(r_{-n} \ge 0.\)
Induction hypothesis: We assume that the elements \(a_{-n},\,\ldots,\,a_m\) up to some \(m \in {\mathbb{N}}\cup \{ -n,\,\ldots,\,-1 \}\) are already constructed and that (3.12) is true for \(\ell = m.\)
Induction step: We show that then (3.12) is also true for \(\ell = m+1.\) By the induction hypothesis we have that \(0 \le r_m b^{m+1} < b.\) Analogously to the induction base we obtain with \(a_{m+1} := \big\lfloor r_m b^{m+1} \big\rfloor \in \{0,\,1,\,\ldots,\,b-1\}\) and \(\delta_{m+1} := r_m b^{m+1} - a_{m+1} \in [0,1)\) that \[r_m b^{m+1} = a_{m+1} + \delta_{m+1}.\] This finally yields \[\begin{aligned} x &= \sum_{k=-n}^m a_kb^{-k} + r_m \\ &= \sum_{k=-n}^m a_kb^{-k} + a_{m+1}b^{-(m+1)} + \delta_{m+1} b^{-(m+1)} \\ &= \sum_{k=-n}^{m+1} a_kb^{-k} + r_{m+1}, \end{aligned}\] where \(r_{m+1} := \delta_{m+1}b^{-(m+1)}\) and \(0\le r_{m+1} < b^{-(m+1)}.\)

Example 3.8

We expand \((0.1)_{10}\) into a 2-adic (also called dyadic) fraction. From the proof of Theorem 3.16 we obtain \(a_{\ell+1} = \big\lfloor r_\ell b^{\ell+1} \big\rfloor = \big\lfloor \delta_\ell b^{-\ell} b^{\ell+1} \big\rfloor = \big\lfloor \delta_\ell b \big\rfloor.\) Because of \(0 \le 0.1 < 2^0 = 1\) we obtain \(a_0 = 0\) and \(\delta_0 = 0.1.\) Hence, \[\begin{aligned} 0.1 \cdot 2 = 0.2 \quad &\Longrightarrow \quad a_1 = \lfloor 0.2 \rfloor = 0 \quad \text{and} \quad \delta_1 = 0.2, \\ 0.2 \cdot 2 = 0.4 \quad &\Longrightarrow \quad a_2 = \lfloor 0.4 \rfloor = 0 \quad \text{and} \quad \delta_2 = 0.4, \\ 0.4 \cdot 2 = 0.8 \quad &\Longrightarrow \quad a_3 = \lfloor 0.8 \rfloor = 0 \quad \text{and} \quad \delta_3 = 0.8, \\ 0.8 \cdot 2 = 1.6 \quad &\Longrightarrow \quad a_4 = \lfloor 1.6 \rfloor = 1 \quad \text{and} \quad \delta_4 = 0.6, \\ 0.6 \cdot 2 = 1.2 \quad &\Longrightarrow \quad a_5 = \lfloor 1.2 \rfloor = 1 \quad \text{and} \quad \delta_5 = 0.2, \\ 0.2 \cdot 2 = 0.4 \quad &\Longrightarrow \quad a_6 = \lfloor 0.4 \rfloor = 0 \quad \text{and} \quad \delta_6 = 0.4. \end{aligned}\] From now on, the scheme repeats itself and we obtain the dyadic representation \[{(0.1)}_{10} = {(0.0001100110011\ldots)}_2 = \big(0.0\overline{0011}\big)_2.\] As the example shows, fractions can be periodic with respect to one basis and finite with respect to another basis.
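The digit scheme from the example is easy to implement with exact rational arithmetic; a sketch (function name is ours), reproducing the repeating block for \((0.1)_{10}\):

```python
from fractions import Fraction
from math import floor

# Digit scheme from Example 3.8: multiply the remainder by the base and
# take the integer part as the next digit after the point.
def b_adic_digits(x, num_digits, base=2):
    """First digits after the point of the base-b expansion of x in [0, 1)."""
    digits, delta = [], Fraction(x)
    for _ in range(num_digits):
        delta *= base
        digit = floor(delta)
        digits.append(digit)
        delta -= digit
    return digits

print(b_adic_digits(Fraction(1, 10), 13))  # 0, then the repeating block 0011
```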

Remark 3.7

In general, the representation as a \(b\)-adic fraction is not unique. For example, we have \[\begin{gathered} 0.\overline{9} = 0.999\ldots = \sum_{k=1}^\infty 9 \cdot 10^{-k} = 9 \cdot \left( \sum_{k=0}^\infty 10^{-k} - 1 \right) \\ = 9 \cdot \left( \frac{1}{1-\frac{1}{10}} - 1 \right) = 9 \cdot \left( \frac{10}{9} - 1\right) = 1 = 1.000\ldots. \end{gathered}\]
