Lemma 7.13 • Cauchy’s convergence criterion for improper integrals

Let \(f:[a,\infty) \to {\mathbb{R}}\) be integrable over every interval \([a,b]\) with \(b > a.\) Then the improper integral of \(f\) over \([a,\infty)\) exists if and only if for every \(\varepsilon > 0\) there exists a \(c_0 \ge a\) such that for all \(c_1,\,c_2 \ge c_0\) it holds that \[\left| \int_{c_1}^{c_2} f(x)\,\mathrm{d}x \right| < \varepsilon.\]

Proof. Exercise!

Theorem 7.14 • Integral criterion for series

Let \(f:[0,\infty) \to [0,\infty)\) be monotonically decreasing. Then \[\sum_{k=0}^\infty f(k) \text{ is convergent} \quad \Longleftrightarrow \quad \int_0^\infty f(x)\,\mathrm{d}x \text{ exists}.\]

Proof. First we note that \(f\) is integrable in every interval \([a,b]\) with \(b > a,\) since it is monotone.
“\(\Longrightarrow\)”: Let \(\varepsilon > 0.\) Since the series \(\sum_{k=0}^\infty f(k)\) is convergent, by Cauchy’s convergence criterion (Theorem 3.3) there exists an \(N \in {\mathbb{N}}\) such that \[\sum_{k=n}^m f(k) < \varepsilon \quad \text{for all } m \ge n \ge N.\] Now suppose that \(c_1,\,c_2 \in {\mathbb{R}}\) with \(c_2 \ge c_1 \ge N.\) Then there exist \(n,\,m \in {\mathbb{N}}\) with \(m > c_2 \ge c_1 \ge n \ge N.\) Since \(f\) is a nonnegative function, this implies \[\left| \int_{c_1}^{c_2} f(x)\,\mathrm{d}x \right| = \int_{c_1}^{c_2} f(x)\,\mathrm{d}x \le \int_{n}^{m} f(x)\,\mathrm{d}x \le \sum_{k=n}^{m-1} f(k) < \varepsilon,\] where for the last inequality we have used (7.10). With Lemma 7.13 the existence of \(\int_0^\infty f(x)\,\mathrm{d}x\) follows.
“\(\Longleftarrow\)”: Assume that the integral \(\int_0^\infty f(x)\,\mathrm{d}x\) exists. Since \(f\) is a nonnegative function, for all \(m \in {\mathbb{N}}\) it holds that \[\sum_{k=1}^m f(k) \le \int_0^m f(x)\,\mathrm{d}x \le \int_0^\infty f(x)\,\mathrm{d}x,\] where the first inequality follows from (7.10). As a consequence, the sequence of partial sums \(\left( \sum_{k=1}^m f(k) \right)_{m \ge 1}\) is bounded. Further it is monotone, since \(f\) is nonnegative. Hence, \(\left( \sum_{k=1}^m f(k) \right)_{m \ge 1}\) is convergent by the monotone convergence theorem (Theorem 2.8) and so \(\sum_{k=0}^\infty f(k)\) is convergent as well.
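The sum/integral comparison underlying the proof can be checked numerically. The following sketch (the function choice and helper names are my own, not part of the text) verifies, for the monotonically decreasing function \(f(x) = 1/(1+x)^2,\) that \(\sum_{k=n+1}^{m} f(k) \le \int_n^m f(x)\,\mathrm{d}x \le \sum_{k=n}^{m-1} f(k).\)

```python
# Numerical sanity check (not a proof) of the sum/integral bracketing
# used in the proof of Theorem 7.14, for the decreasing function
# f(x) = 1/(1+x)^2 with known antiderivative F(x) = -1/(1+x).

def f(x):
    return 1.0 / (1.0 + x) ** 2

def F(x):
    # antiderivative of f
    return -1.0 / (1.0 + x)

n, m = 3, 50
integral = F(m) - F(n)                          # exact value of ∫_n^m f(x) dx
upper = sum(f(k) for k in range(n, m))          # Σ_{k=n}^{m-1} f(k)
lower = sum(f(k) for k in range(n + 1, m + 1))  # Σ_{k=n+1}^{m} f(k)

assert lower <= integral <= upper
```

Since \(f\) is decreasing, on each interval \([k,k+1]\) the integral is squeezed between \(f(k+1)\) and \(f(k)\); summing over \(k = n,\ldots,m-1\) gives the asserted chain.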

The integral criterion for series is very useful for testing the convergence of series whose summands are the values, at integer arguments, of a monotonically decreasing function whose antiderivative is easy to find. Conversely, there are situations where the existence of an improper integral is hard to show directly, for instance when the integrand has no elementary antiderivative. Then, using the converse direction of Theorem 7.14, one can prove the existence of the integral by showing the convergence of a series. We will have a look at a few examples.

Example 7.9

  1. We prove the existence of the integral \[\tag{7.11} \int_{-\infty}^\infty \mathrm{e}^{-x^2}\,\mathrm{d}x,\] which we have seen in the introduction of this section. First we show the existence of the improper integral of \(x \mapsto \mathrm{e}^{-x^2}\) over \([0,\infty).\) On this interval, the function is monotonically decreasing. For \(x > 0\) it holds that \[\mathrm{e}^{x^2} = \sum_{k=0}^\infty \frac{x^{2k}}{k!} \ge x^2 \quad \Longrightarrow \quad \mathrm{e}^{-x^2} = \frac{1}{\mathrm{e}^{x^2}} \le \frac{1}{x^2}.\] Therefore, the series \(\sum_{k=1}^\infty \mathrm{e}^{-k^2}\) is convergent, since the series \(\sum_{k=1}^\infty \frac{1}{k^2}\) is a convergent majorant. Hence, also the series \(\sum_{k=0}^\infty \mathrm{e}^{-k^2}\) is convergent. Then with the integral criterion from Theorem 7.14, the existence of the integral \(\int_0^\infty \mathrm{e}^{-x^2}\,\mathrm{d}x\) follows immediately. For the existence of the improper integral over \((-\infty,0],\) we observe that with the substitution \(x = -t\) and \(\mathrm{d}x = -\mathrm{d}t\) as well as \(b \ge 0\) we obtain \[\int_{-b}^0 \mathrm{e}^{-x^2}\,\mathrm{d}x = -\int_{b}^0 \mathrm{e}^{-t^2}\,\mathrm{d}t = \int_0^b \mathrm{e}^{-t^2}\, \mathrm{d}t.\] This directly implies \[\int_{-\infty}^0 \mathrm{e}^{-x^2} \, \mathrm{d}x = \int_0^\infty \mathrm{e}^{-x^2}\, \mathrm{d}x,\] and hence, the existence of the integral (7.11). Unfortunately, at the moment we do not yet know the techniques needed to calculate the exact value of (7.11).

  2. By Example 7.8 i, the improper integral \(\int_1^\infty \frac{1}{x^s} \, \mathrm{d}x\) exists for all \(s > 1,\) but not for \(s \le 1.\) As a consequence, Theorem 7.14 implies that the series \[\sum_{k=1}^\infty \frac{1}{k^s}\] converges for \(s > 1\) and diverges for \(s \le 1.\) This is in agreement with our analysis in Chapter 3 for the case \(s=1\) (which leads to the harmonic series) and the case \(s=2.\)
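Item 1 above can also be explored numerically. The sketch below (a plain composite trapezoidal rule; the helper names are my own) approximates \(\int_0^b \mathrm{e}^{-x^2}\,\mathrm{d}x\) for growing \(b\) and compares the result with the classical value \(\sqrt{\pi}/2,\) which we cannot yet derive but which is well established.

```python
import math

def trapezoid(g, a, b, steps=20000):
    # composite trapezoidal rule for ∫_a^b g(x) dx
    h = (b - a) / steps
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps))
    return s * h

gauss = lambda x: math.exp(-x * x)
values = [trapezoid(gauss, 0.0, b) for b in (2.0, 4.0, 8.0)]
print(values)  # the values stabilize quickly as b grows

# classical value of ∫_0^∞ e^{-x^2} dx (derived with later techniques)
assert abs(values[-1] - math.sqrt(math.pi) / 2) < 1e-5
```

The rapid stabilization reflects the tail bound from the majorant argument: beyond \(b\) the remaining area is dominated by \(\sum_{k \ge b} \mathrm{e}^{-k^2}.\)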

There is also another kind of improper integral, namely integrals whose integrand is not defined at a boundary point of the integration domain.

Definition 7.10 • Improper integral of second kind

Let \(f:(a,b] \to {\mathbb{R}}\) be integrable over every interval \([a+\varepsilon,b]\) with \(0 < \varepsilon < b-a.\) If the limit \[\int_a^b f(x)\, \mathrm{d}x := \lim_{\varepsilon \searrow 0} \int_{a+\varepsilon}^b f(x)\,\mathrm{d}x\] exists, then it is called the improper integral of \(f\) over \([a,b]\). (In a similar fashion one defines the improper integral over \([a,b]\) for functions \(f:[a,b) \to {\mathbb{R}}.\))
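As a concrete instance of this definition, the following sketch (function name is my own) evaluates \(\int_\varepsilon^1 x^{-1/2}\,\mathrm{d}x\) via the antiderivative \(2\sqrt{x}\) and lets \(\varepsilon \searrow 0\); the limit exists, so the improper integral equals \(2.\)

```python
import math

def proper_integral(eps):
    # exact value of ∫_eps^1 x^(-1/2) dx, using the antiderivative 2*sqrt(x)
    return 2.0 * (1.0 - math.sqrt(eps))

for eps in (1e-1, 1e-3, 1e-6, 1e-12):
    print(eps, proper_integral(eps))

# the limit eps -> 0 exists, hence the improper integral equals 2
assert abs(proper_integral(1e-12) - 2.0) < 1e-5
```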

Remark 7.11

In the following we also allow combinations of improper integrals of first kind as in Definition 7.9 and improper integrals of second kind as in Definition 7.10. For example, the improper integral of a function \(f:(0,\infty) \to {\mathbb{R}}\) over \([0,\infty)\) is defined by \[\int_0^\infty f(x)\,\mathrm{d}x := \int_0^1 f(x)\,\mathrm{d}x + \int_1^\infty f(x)\, \mathrm{d}x,\] if both integrals on the right-hand side exist. Here one can also show that the integration domain may be “split” at an arbitrary \(c \in (0,\infty)\) instead of \(c=1.\)

Example 7.10

  1. We analyze the existence of the improper integral \[\int_0^1 \frac{1}{x^s}\,\mathrm{d}x\] of the function \(x \mapsto x^{-s}\) in dependence on \(s \in {\mathbb{R}}.\) For all \(s \neq 1\) and \(\varepsilon \in (0,1)\) it holds that \[\int_\varepsilon^1 \frac{1}{x^s}\,\mathrm{d}x = \frac{1}{1-s}x^{1-s}\bigg|_\varepsilon^1 = \frac{1}{1-s}\left( 1-\varepsilon^{1-s}\right).\] Consequently, the improper integral exists for \(s < 1\) since \[\int_0^1 \frac{1}{x^s}\,\mathrm{d}x = \lim_{\varepsilon \searrow 0} \int_\varepsilon^1 \frac{1}{x^s}\,\mathrm{d}x = \frac{1}{1-s}.\] An analogous limit argument shows that the improper integral does not exist in the case \(s > 1.\) Furthermore, if \(s=1\) we obtain \[\int_0^1 \frac{1}{x}\,\mathrm{d}x = \lim_{\varepsilon \searrow 0} \int_\varepsilon^1 \frac{1}{x}\,\mathrm{d}x = \lim_{\varepsilon \searrow 0} \left(\ln 1 - \ln \varepsilon\right) = \infty.\] In particular, our analysis implies that the improper integral \[\int_0^\infty \frac{1}{x^s}\,\mathrm{d}x = \int_0^1 \frac{1}{x^s}\,\mathrm{d}x + \int_1^\infty \frac{1}{x^s}\,\mathrm{d}x\] exists for no choice of \(s \in {\mathbb{R}},\) since the first improper integral on the right-hand side only exists for \(s<1\) while the second one only exists for \(s>1.\)

  2. One can show that the improper integral \[\Gamma(x) := \int_0^\infty t^{x-1}\mathrm{e}^{-t}\,\mathrm{d}t\] exists for all \(x \in (0,\infty)\) (exercise!). Note that in the case \(0 < x <1,\) the latter is an improper integral of “mixed type”, since the function \(t \mapsto t^{x-1}\mathrm{e}^{-t}\) is defined at \(t=0\) only for \(x \ge 1.\)

  3. The function \(\Gamma:(0,\infty) \to {\mathbb{R}},\) \(x \mapsto \Gamma(x)\) is called the gamma function. It has the remarkable property (exercise!) that \[\Gamma(x+1) = x \cdot \Gamma(x) \quad \text{for all } x \in (0,\infty).\] Because of \(\Gamma(1) = 1,\) one can show by induction that \(\Gamma(n+1) = n!\) for all \(n \in {\mathbb{N}}.\) By the definition \[x! := \Gamma(x+1),\] the factorial operation can be extended to all \(x \in (-1,\infty).\)
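Python's standard library exposes the gamma function as `math.gamma`, so the properties from item 3 can be checked directly. (The special value \(\Gamma(1/2) = \sqrt{\pi}\) is classical and included here as an extra sanity check, not a claim from the text.)

```python
import math

# functional equation Γ(x+1) = x·Γ(x)
for x in (0.5, 1.7, 3.2, 10.0):
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x), rel_tol=1e-12)

# Γ(n+1) = n! for natural numbers n
for n in range(1, 15):
    assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-12)

# classical special value Γ(1/2) = sqrt(pi)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi), rel_tol=1e-12)
```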

8 Convergence of Function Sequences

In Chapter 2 we have discussed sequences of real numbers and their convergence. One may also ask whether convergence can be considered within arbitrary sets. The answer to this question is positive, and in this chapter we will have a closer look at sequences whose elements are functions. Indeed, we have already seen an example of this in Chapter 3, where we defined the exponential function with the help of series, more precisely, \[\exp(x) = \sum_{k=0}^\infty \frac{x^k}{k!} = \lim_{n \to \infty} \sum_{k=0}^n \frac{x^k}{k!}.\] The mapping \[p_n : {\mathbb{R}}\to {\mathbb{R}}, \quad x \mapsto \sum_{k=0}^n \frac{x^k}{k!}\] defines a polynomial function. (It is the \(n\)-th Taylor polynomial of \(\exp\) with expansion point \(x_0=0.\)) Hence, the sequence of functions \({(p_n)}_{n \in {\mathbb{N}}}\) converges towards the exponential function \(\exp\) in the sense that \[\lim_{n \to \infty} p_n(x) = \exp(x) \quad \text{for all } x \in {\mathbb{R}}.\] A central question is the following: If \({(f_n)}_{n \in {\mathbb{N}}}\) is a sequence of functions \(f_n:[a,b] \to {\mathbb{R}}\) for \(n \in {\mathbb{N}}\) and \(f:[a,b] \to {\mathbb{R}}\) has the property \[\lim_{n \to \infty} f_n(x) = f(x) \quad \text{for all } x \in [a,b],\] do we then also have \[\int_a^b \left( \lim_{n \to \infty} f_n(x) \right) \, \mathrm{d}x = \int_a^b f(x) \, \mathrm{d}x = \lim_{n \to \infty} \int_a^b f_n(x)\,\mathrm{d}x,\] i. e., can integration and taking the limit be interchanged? If this were true, we would obtain a new method for integration, in particular in cases where the antiderivative of \(f\) cannot be expressed by elementary functions, as is the case, e. g., for the function \(x \mapsto \mathrm{e}^{-x^2}.\) If we could express \(f\) as the limit of a sequence of easily integrable functions \(f_n\) (such as Taylor polynomials), then we would obtain a sequence of integrals whose limit is the desired integral of \(f.\) Let us have a look at two illustrative examples.
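This programme can be tried out numerically for \(x \mapsto \mathrm{e}^{-x^2}\): integrating the Taylor polynomials \(p_n(-x^2)\) termwise over \([0,1]\) gives \(\sum_{k=0}^n \frac{(-1)^k}{k!\,(2k+1)},\) and the result agrees with an independent Riemann-sum approximation of \(\int_0^1 \mathrm{e}^{-x^2}\,\mathrm{d}x.\) (Whether the interchange is justified is exactly what the theory of this chapter will settle; the code below, with names of my own choosing, is only an illustration.)

```python
import math

def taylor_integral(n):
    # ∫_0^1 p_n(-x^2) dx with p_n the n-th Taylor polynomial of exp:
    # termwise, ∫_0^1 (-x^2)^k / k! dx = (-1)^k / (k! * (2k + 1))
    return sum((-1) ** k / (math.factorial(k) * (2 * k + 1)) for k in range(n + 1))

# midpoint Riemann sum for ∫_0^1 e^{-x^2} dx as an independent reference
N = 100000
midpoint = sum(math.exp(-((i + 0.5) / N) ** 2) for i in range(N)) / N

assert abs(taylor_integral(12) - midpoint) < 1e-8
```

Already twelve Taylor terms match the quadrature value to many digits, because the series is alternating with rapidly decreasing terms.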

Example 8.1

  1. Consider the sequence \({(f_n)}_{n \in {\mathbb{N}}}\) with \(f_n:[0,1] \to {\mathbb{R}},\) \(x \mapsto x^n.\) Then we have \[f(x) := \lim_{n \to \infty} f_n(x) = \begin{cases} 0, & \text{if } x \in [0,1), \\ 1, & \text{if } x = 1, \end{cases}\] i. e., the limit function of our function sequence is \(f:[0,1] \to {\mathbb{R}},\) \(x \mapsto f(x).\) Here we experience a little surprise. Even though all \(f_n,\) \(n=0,\,1,\,\ldots\) are continuous (and even infinitely often differentiable), the limit function \(f\) is not continuous in \(x_0=1\) (and hence, also not differentiable there). On the other hand, it is a staircase function and hence integrable. Moreover, it holds that \[\begin{aligned} \lim_{n \to \infty} \int_0^1 f_n(x)\,\mathrm{d}x &= \lim_{n \to \infty} \int_0^1 x^n\,\mathrm{d}x \\ &= \lim_{n \to \infty} \frac{1}{n+1} = 0 = \int_0^1 f(x)\,\mathrm{d}x. \end{aligned}\] In other words, in this example, integration and taking the limit can be interchanged.

  2. Consider the sequence \(\left( f_n:[0,1] \to {\mathbb{R}}\right)_{n \ge 1}\) with \[f_n: x \mapsto \begin{cases} n^2 x, & \text{if } x \in \left[0, \frac{1}{n} \right), \\ 2n - n^2x, & \text{if } x \in \left[\frac{1}{n},\frac{2}{n} \right], \\ 0, & \text{if } x \in \left( \frac{2}{n},1\right]. \end{cases}\] One of the functions \(f_n\) is depicted in Figure 8.1. For all \(x \in [0,1]\) it holds that \[\lim_{n \to \infty} f_n(x) = 0.\] Indeed, for \(x=0\) we have \(f_n(0) = 0\) for all \(n,\) and for \(x > 0\) we have \(f_n(x) = 0\) for all \(n\) with \(\frac{2}{n} \le x,\) i. e., for all \(n \ge \frac{2}{x}.\) Consequently, the limit of our function sequence is the zero function \(f:[0,1] \to {\mathbb{R}},\) \(x \mapsto 0.\) In contrast to i, the continuity of \(f_n\) is preserved for the limit function \(f.\) On the other hand, we obtain \[\int_0^1 f(x)\, \mathrm{d}x = 0 \neq 1 = \lim_{n \to \infty} \int_0^1 f_n(x)\, \mathrm{d}x,\] since Figure 8.1 shows that \(\int_0^1 f_n(x)\, \mathrm{d}x = 1\) for all \(n \ge 1.\) In this case, integration and taking the limit cannot be interchanged. The problem here is that the convergence is not “uniform”. Even though for each fixed \(x \in [0,1]\) there exists an \(N_x \in {\mathbb{N}}\) such that \(f_n(x) = 0\) for all \(n \ge N_x,\) for each \(N \in {\mathbb{N}}\) there also exist points \(x_N \in [0,1]\) at which the function value is large, e. g., \(f_N\left(\frac{1}{N}\right) = N\) for \(N \ge 1.\)
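The tent functions from item 2 can be probed numerically. The sketch below (helper names are my own; the integral is approximated by a midpoint Riemann sum) confirms that each \(\int_0^1 f_n(x)\,\mathrm{d}x = 1\) while \(f_n(x) \to 0\) for every fixed \(x.\)

```python
def f(n, x):
    # the tent function f_n from item 2 of Example 8.1
    if x < 1.0 / n:
        return n * n * x
    if x <= 2.0 / n:
        return 2.0 * n - n * n * x
    return 0.0

def integral(n, N=100000):
    # midpoint Riemann sum of f_n over [0, 1]
    return sum(f(n, (i + 0.5) / N) for i in range(N)) / N

for n in (5, 50, 500):
    assert abs(integral(n) - 1.0) < 1e-2   # area under each tent is 1

assert f(5, 0.3) > 0.0      # x = 0.3 still lies under the tent for n = 5 ...
assert f(500, 0.3) == 0.0   # ... but f_n(0.3) = 0 once 2/n <= 0.3
```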

Figure 8.1: The graph of \(f_n\)

In the following we first consider very general functions whose domain is an arbitrary set \(K\) and whose co-domain may be the complex numbers. We want to emphasize that for most of the following terms only the domain is of relevance. However, mostly we have examples in mind where \(K \subseteq {\mathbb{C}}.\)

Definition 8.1 • Pointwise convergence, uniform convergence

Let \(K\) be an arbitrary set and \(f_n : K \to {\mathbb{C}}\) for \(n \in {\mathbb{N}}\) and \(f:K \to {\mathbb{C}}.\)

  1. The sequence \({(f_n)}_{n \in {\mathbb{N}}}\) is called pointwise convergent towards \(f,\) if for every \(x \in K\) it holds that \[\lim_{n \to \infty} f_n(x) = f(x),\] i. e., for every \(x \in K\) and every \(\varepsilon > 0\) there exists an \(N_{x,\varepsilon} \in {\mathbb{N}}\) such that \[|f_n(x) - f(x)| < \varepsilon \quad \text{for all } n \ge N_{x,\varepsilon}.\]

  2. The sequence \({(f_n)}_{n \in {\mathbb{N}}}\) is called uniformly convergent towards \(f,\) if for every \(\varepsilon > 0\) there exists an \(N_\varepsilon \in {\mathbb{N}}\) such that \[|f_n(x) - f(x)| < \varepsilon \quad \text{for all } n \ge N_\varepsilon \text{ and all } x \in K.\]

Remark 8.1

  1. To compare both terms defined in Definition 8.1, we rewrite both definitions with the help of quantifiers: \[\begin{aligned} \text{pointwise } &\text{convergence:} \\ &\forall x\in K\;\forall \varepsilon >0\;\exists N \in {\mathbb{N}}\; \forall n \ge N: |f_n(x) - f(x)| < \varepsilon, \\ \text{uniform } &\text{convergence:} \\ &\forall \varepsilon >0\;\exists N \in {\mathbb{N}}\; \forall x \in K\; \forall n \ge N: |f_n(x) - f(x)| < \varepsilon. \end{aligned}\] As for continuity and uniform continuity (cf. Remark 4.5 i), the order of the quantifiers plays a central role here. In case of uniform convergence, \(N\) only depends on \(\varepsilon,\) while for pointwise convergence, \(N\) may depend on \(x\) and \(\varepsilon.\)

  2. The definition implies that a uniformly convergent function sequence is also pointwise convergent. The converse is not true as Example 8.1 shows. There, both examples converge pointwise, but not uniformly (exercise!).

  3. A usual notation for pointwise convergence of \({(f_n)}_{n \in {\mathbb{N}}}\) towards \(f\) is \(f_n \to f,\) while uniform convergence is denoted by \(f_n \rightrightarrows f.\)

Uniform convergence of a function sequence can also be characterized differently. For that we notice that the condition \[|f_n(x) - f(x)| < \varepsilon \quad \text{for all } x \in K\] implies that the set \(\{ |f_n(x) - f(x)| \; | \; x \in K \} \subseteq {\mathbb{R}}\) is bounded from above by \(\varepsilon\) and hence, has a supremum. Then we obviously have \[\sup\{ |f_n(x) - f(x)| \; | \; x \in K \} \le \varepsilon.\]

Definition 8.2 • Supremum norm

Let \(K\) be a set and \(g:K \to {\mathbb{C}}\) be a function. Then \[\tag{8.1} \left\| g \right\|_K := \| g \| := \sup \{ |g(x)| \; | \; x \in K \}\] is called the supremum norm of \(g\).

In the following considerations we mostly write \(\|g\|\) instead of \({\|g\|}_K.\) We will only use the latter notation when ambiguity could otherwise arise.

Remark 8.2

  1. Note that the set in (8.1) is not necessarily bounded. Therefore, \(\|g\| \in {\mathbb{R}}\) only holds for bounded functions \(g:K \to {\mathbb{C}}.\) (The generalization of the boundedness definition in Definition 7.4 to the case considered here is obvious.) On the other hand, for unbounded functions we obtain \(\|g\| = \infty.\) Therefore, in precise mathematical terms, the supremum norm is only a norm if we restrict ourselves to bounded functions. Norms will be discussed in more detail in module M05: “Analysis II”. On the other hand, it has become a standard convention to also talk about the supremum norm in the unbounded case.

  2. With our preliminary considerations we obtain the following:
    A function sequence \(\left( f_n :K \to {\mathbb{C}}\right)_{n \in {\mathbb{N}}}\) converges uniformly towards \(f:K \to {\mathbb{C}},\) if and only if \[\tag{8.2} \lim_{n \to \infty} \| f_n - f \| = 0.\]
    We note that the sequence in (8.2) may have elements equal to \(\infty.\) So to be very precise, the statement we have just made would have to read:
    A function sequence \(\left( f_n :K \to {\mathbb{C}}\right)_{n \in {\mathbb{N}}}\) converges uniformly towards \(f:K \to {\mathbb{C}},\) if and only if there exists an \(N \in {\mathbb{N}}\) such that \(\| f_n - f \| \in {\mathbb{R}}\) for all \(n \ge N\) and the sequence \(\left( \| f_n - f \| \right)_{n \ge N}\) fulfills \[\lim_{n \to \infty} \| f_n - f \| = 0.\]
    Obviously, this is a very laborious formulation, so we are content with the first, slightly imprecise, formulation of the result.
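The sup-norm criterion can be illustrated with \(f_n(x) = x^n\) (cf. Example 8.1 i) and limit function \(f = 0\) on an interval \([0,a]\) with \(a < 1\): there \(\|f_n - f\| = a^n \to 0,\) so the convergence is uniform, whereas close to \(1\) the sup norm stays near \(1.\) The sketch below (helper name my own) approximates the supremum on a finite grid.

```python
def sup_norm_on_grid(n, a, N=10000):
    # approximate ||f_n - 0|| on [0, a] for f_n(x) = x^n via a grid maximum;
    # since x^n is increasing, the exact supremum is a^n, attained at x = a
    return max((i * a / N) ** n for i in range(N + 1))

assert abs(sup_norm_on_grid(50, 0.9) - 0.9 ** 50) < 1e-12
assert sup_norm_on_grid(50, 0.9) < 1e-2        # uniform convergence on [0, 0.9]
assert sup_norm_on_grid(50, 0.999) > 0.9       # sup norm stays large near [0, 1)
```

This is exactly why \((x^n)_{n}\) converges uniformly on every \([0,a]\) with \(a < 1\) but not on \([0,1].\)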
