5.3 Recurrence and transience

Suppose that a Markov chain starts from state \(x \in S.\) How long does it take for it to return to \(x\)? How often will it return? These questions are among the most central ones in the study of Markov chains, and they lead to the notions of recurrence and transience.

To study these questions, we introduce two fundamental variables. In their definition, we always use the convention that \(\inf \emptyset = \infty.\)

Definition 5.17

Let \(x \in S.\)

  • The time of first visit at \(x,\) or first return to \(x,\) is \[H_x :=\inf \{n \geqslant 1 \,\colon X_n = x\}\,.\]

  • The number of visits at \(x\) is \[N_x :=\sum_{n \geqslant 0} \mathbf 1_{X_n = x}\,.\]

We emphasize that \(H_x\) cannot be \(0,\) i.e. if the chain starts at \(x\) then \(H_x\) counts the time of the first return to \(x.\) On the other hand, if the chain starts at \(x\) then this initial visit is counted in \(N_x.\)

Remark 5.18

The random variable \(H_x\) is an example of a stopping time, i.e. a random time \(T \in \mathbb{N} \cup \{\infty\}\) such that, for each \(n \in \mathbb{N},\) the event \(\{T = n\}\) is in \(\sigma(X_0, \dots, X_n)\) (i.e. it is determined only by the values of the process up to time \(n\)). The intuition is that the decision to stop at a certain time \(n,\) i.e. the event \(\{T = n\},\) can only be made based on the information \(X_0, \dots, X_n\) available up to time \(n\): we cannot see into the future.

Indeed, for \(T = H_x\) we have \[\tag{5.5} \{H_x = n\} = \{X_1 \neq x, \dots, X_{n - 1} \neq x, X_n = x\} \in \sigma(X_0, \dots, X_n)\,,\] for every \(n \geqslant 1.\)

Definition 5.19

We often use the abbreviations \[\mathbb{P}_x(\cdot) :=\mathbb{P}(\cdot \mid X_0 = x)\,, \qquad \mathbb{P}_\mu(\cdot) :=\sum_{x \in S} \mu(x) \, \mathbb{P}_x(\cdot) \,,\] where \(\mu\) is a probability measure on \(S.\) We denote by \(\mathbb{E}_x\) and \(\mathbb{E}_\mu\) the corresponding expectations.

The next proposition establishes the crucial dichotomy for the number of visits at \(x.\)

Proposition 5.20

Let \(x \in S.\)

  1. If \(\mathbb{P}_x(H_x < \infty) = 1\) then \(N_x = \infty\) \(\mathbb{P}_x\)-a.s. In this case \(x\) is called recurrent.

  2. If \(\mathbb{P}_x(H_x < \infty) < 1\) then \(\mathbb{E}_x[N_x] < \infty.\) In this case \(x\) is called transient.

Proof. The proof is a typical application of the Markov property, Proposition 5.15. To avoid technical issues, we shall first work with the truncated number of visits, \[N_x^m :=\sum_{n = 0}^m \mathbf 1_{X_n = x}\,,\] which only depends on a finite number of random variables.

The main idea of the proof is to condition on the random time \(H_x,\) i.e. to sum over all of its possible values, to compute the probability of at least \(k+1\) visits at \(x.\) For \(k \geqslant 1\) we get \[\begin{aligned} \mathbb{P}_x(N_x^m \geqslant k+1) &= \sum_{n \geqslant 1} \mathbb{P}_x(N_x^m \geqslant k+1, H_x = n) \\ &= \sum_{n \geqslant 1} \mathbb{P}_x\biggl(\sum_{i = 0}^m \mathbf 1_{X_i = x} \geqslant k+1, H_x = n\biggr) \\ &= \sum_{n \geqslant 1} \mathbb{P}_x\biggl(\sum_{i = n}^m \mathbf 1_{X_i = x} \geqslant k, H_x = n\biggr)\,, \end{aligned}\] where the last step follows from the definition of the time of first return \(H_x.\) Since, by (5.5), the event \(\{H_x = n\}\) is of the form required in Proposition 5.15, we can apply the Markov property to get \[\begin{aligned} \mathbb{P}_x(N_x^m \geqslant k+1) &= \sum_{n \geqslant 1} \mathbb{P}_x\biggl(\sum_{i = n}^m \mathbf 1_{X_i = x} \geqslant k \,\bigg|\, X_1 \neq x, \dots, X_{n - 1} \neq x, X_n = x\biggr) \mathbb{P}_x(H_x = n) \\ &= \sum_{n \geqslant 1} \mathbb{P}\biggl(\sum_{i = 0}^{m-n} \mathbf 1_{X_i = x} \geqslant k \,\bigg|\, X_0 = x\biggr) \mathbb{P}_x(H_x = n) \\ &= \sum_{n \geqslant 1} \mathbb{P}_x(N_x^{m - n} \geqslant k) \mathbb{P}_x(H_x = n)\,. \end{aligned}\]

Next, we let \(m \to \infty.\) To that end, we note that for any \(k \geqslant 1\) the sequence of random variables \(\mathbf 1_{N_x^m \geqslant k}\) is pointwise nondecreasing in \(m,\) with limit \(\mathbf 1_{N_x \geqslant k}.\) Hence, by monotone convergence (recall Remark 2.26), we conclude that \[\mathbb{P}_x(N_x \geqslant k+1) = \sum_{n \geqslant 1} \mathbb{P}_x(N_x \geqslant k) \mathbb{P}_x(H_x = n) = \mathbb{P}_x(N_x \geqslant k) \mathbb{P}_x(H_x < \infty)\,.\] (Notice that we used monotone convergence twice: once for the sum over \(n\) and once for the expectation \(\mathbb{E}_x.\))

Since \(\mathbb{P}_x(N_x \geqslant 1) = 1\) trivially (the initial visit at time \(0\) is always counted), we conclude by induction that \[\tag{5.6} \mathbb{P}_x(N_x \geqslant k) = \mathbb{P}_x(H_x < \infty)^{k - 1}\,.\]

Using (5.6), it is now easy to conclude the proof.

  • If \(\mathbb{P}_x(H_x < \infty) = 1\) then \(\mathbb{P}_x(N_x \geqslant k) = 1\) for all \(k,\) and hence \[\mathbb{P}_x(N_x = \infty) = \mathbb{P}_x \biggl(\bigcap_{k \geqslant 1} \{N_x \geqslant k\}\biggr) = 1\,.\]

  • If \(\mathbb{P}_x(H_x < \infty) < 1\) then \[\begin{aligned} &\mathbb{E}_x[N_x] = \sum_{k \geqslant 1} \mathbb{P}_x(N_x \geqslant k) = \sum_{k \geqslant 1} \mathbb{P}_x(H_x < \infty)^{k - 1} \\ &= \frac{1}{1 - \mathbb{P}_x(H_x < \infty)} = \frac{1}{\mathbb{P}_x(H_x = \infty)} < \infty\,. \end{aligned}\]

Informally, we have a dichotomy for a Markov chain starting from \(x\): either the chain returns to \(x\) almost surely (recurrent) or there is a positive probability that it never returns to \(x\) (transient). Proposition 5.20 gives a simple criterion for analysing recurrence and transience: check whether the sum \[\tag{5.7} \mathbb{E}_x[N_x] = \sum_{n \in \mathbb{N}} \mathbb{P}_x(X_n = x) = \sum_{n \in \mathbb{N}} Q^n(x,x)\] is finite; the state \(x\) is transient if and only if it is. Thus, the problem reduces to an analysis of the powers of the matrix \(Q.\) We illustrate this for the simple random walk on \(\mathbb{Z}^d.\)
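To see the criterion (5.7) at work numerically, here is a minimal sketch in Python (all parameters are illustrative choices) for the biased nearest-neighbour walk on \(\mathbb{Z}\) with \(Q(x,x+1) = p\) and \(Q(x,x-1) = 1-p.\) We truncate \(\mathbb{Z}\) to a window \(\{-M, \dots, M\}\) and accumulate the partial sums \(\sum_{n \leqslant M} Q^n(0,0);\) for comparison we use the classical fact (not proved here) that \(\mathbb{P}_0(H_0 = \infty) = \lvert 2p - 1 \rvert\) for this walk, so that \(\mathbb{E}_0[N_0] = 1/\lvert 2p - 1\rvert\) by the proof of Proposition 5.20.

```python
import numpy as np

# Biased nearest-neighbour walk on Z, truncated to {-M, ..., M}; a path of
# length n <= M from 0 to 0 stays within distance M/2 of the origin, so the
# truncation does not affect the computed terms Q^n(0, 0).
p, M = 0.7, 400
size = 2 * M + 1
Q = np.zeros((size, size))
for i in range(size):
    if i + 1 < size:
        Q[i, i + 1] = p        # step right with probability p
    if i - 1 >= 0:
        Q[i, i - 1] = 1 - p    # step left with probability 1 - p

dist = np.zeros(size)
dist[M] = 1.0                  # start at the origin (index M)
green = 0.0
for n in range(M + 1):         # partial sum of Q^n(0, 0) for n = 0, ..., M
    green += dist[M]
    dist = dist @ Q

print(green, 1 / abs(2 * p - 1))   # partial sum vs. the limit 2.5
```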

Example 5.21 • Example 5.11 continued

Let us consider the simple random walk on \(\mathbb{Z}^d\) from Example 5.11. In principle, one can find explicit combinatorial formulas for \(Q^n(x,x)\) and perform an asymptotic analysis to determine the convergence of (5.7). In practice, this can be rather tedious and the analysis depends strongly on the precise transition matrix we are considering.

A far more powerful and versatile approach is to use Fourier analysis (cf. Section 4.3). We observe first that for \(x \in \mathbb{Z}^d\) we have \[\int_{[-\pi, \pi]^d} \frac{\mathrm d\xi}{(2\pi)^d} \, \mathrm e^{\mathrm i\xi \cdot x} = \delta_{x0}\,,\] as follows by a simple application of Fubini’s theorem to evaluate the integral. Hence, \[\mathbb{P}_0(X_n = 0) = \int_{[-\pi, \pi]^d} \frac{\mathrm d\xi}{(2\pi)^d} \, \mathbb{E}_0[\mathrm e^{\mathrm i\xi \cdot X_n}]\,.\] By the representation (5.3), we find for the characteristic function of \(X_n\) \[\mathbb{E}_0[\mathrm e^{\mathrm i\xi \cdot X_n}] = \Phi(\xi)^n\,,\] where we abbreviated \[\Phi(\xi) :=\mathbb{E}_0[\mathrm e^{\mathrm i\xi \cdot Z_1}] = \frac{1}{d} \sum_{i = 1}^d \cos(\xi_i)\,;\] the last step follows by an explicit calculation using the law (5.4) of \(Z_1.\)

By Fubini’s theorem, we conclude that for any \(0 < \lambda < 1\) we have \[\sum_{n \in \mathbb{N}} \lambda^n \mathbb{P}_0(X_n = 0) = \int_{[-\pi, \pi]^d} \frac{\mathrm d\xi}{(2\pi)^d} \sum_{n \in \mathbb{N}} \lambda^n \Phi(\xi)^n = \int_{[-\pi, \pi]^d} \frac{\mathrm d\xi}{(2\pi)^d} \frac{1}{1 - \lambda \Phi(\xi)}\,,\] where the geometric series converges absolutely because \(\lambda \lvert \Phi(\xi) \rvert \leqslant \lambda < 1.\)

We shall now take the limit \(\lambda \uparrow 1.\) By monotone convergence, the left-hand side converges to (5.7). As for the right-hand side, we note that there exists a constant \(c > 0\) such that \(\Phi(\xi) \leqslant 1 - c\) if \(\xi \in [-\pi,\pi]^d \setminus [-1,1]^d\) and that \(\Phi(\xi) \geqslant 0\) for \(\xi \in [-1,1]^d.\) We now split the integral over \([-\pi,\pi]^d\) into two pieces: \([-\pi,\pi]^d \setminus [-1,1]^d\) and \([-1,1]^d.\) We can now take the limit \(\lambda \uparrow 1\) by applying dominated convergence to the former piece and monotone convergence to the latter piece. This yields \[\tag{5.8} \sum_{n \in \mathbb{N}} \mathbb{P}_0(X_n = 0) = \int_{[-\pi, \pi]^d} \frac{\mathrm d\xi}{(2\pi)^d} \frac{1}{1 - \Phi(\xi)}\,.\]
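Before analysing (5.8) rigorously, here is a quick numerical sanity check in \(d = 3\) (a sketch; the grid size is an arbitrary choice). A midpoint rule with an even number of points per axis never evaluates the integrand at the singularity \(\xi = 0,\) and by (the proof of) Proposition 5.20 the value \(U = \mathbb{E}_0[N_0]\) of the integral also yields the return probability \(\mathbb{P}_0(H_0 < \infty) = 1 - 1/U.\)

```python
import numpy as np

# Midpoint rule for (5.8) in d = 3; with even n the midpoints avoid the
# (integrable) singularity at xi = 0.
n = 120
xi = -np.pi + (np.arange(n) + 0.5) * (2 * np.pi / n)
X, Y, Z = np.meshgrid(xi, xi, xi, indexing="ij")
Phi = (np.cos(X) + np.cos(Y) + np.cos(Z)) / 3

green = np.mean(1.0 / (1.0 - Phi))   # mean = integral against d(xi)/(2 pi)^3
print(green, 1 - 1 / green)          # close to the classical ~1.516 and ~0.34
```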

The denominator of the integrand has a singularity at \(\xi = 0,\) whose integrability we need to analyse. From Taylor’s theorem we obtain for \(t \in [-1,1]\) \[\cos(t) = 1 - \frac{1}{2} t^2 + \frac{\cos(s)}{24} \, t^4\] for some \(s\) between \(0\) and \(t.\) Since \(0 \leqslant \cos(s) \leqslant 1\) and \(t^4 \leqslant t^2\) on \([-1,1],\) we deduce that \[1 - \frac{1}{2} t^2 \leqslant\cos(t) \leqslant 1 - \frac{11}{24} t^2\] for \(t \in [-1,1].\) Hence, summing over the coordinates, \[\frac{1}{2d} \lvert \xi \rvert^2 \geqslant 1 - \Phi(\xi) \geqslant\frac{11}{24d} \lvert \xi \rvert^2\] for \(\xi \in [-1,1]^d.\) Thus, near \(\xi = 0\) the integrand in (5.8) is comparable to \(\lvert \xi \rvert^{-2},\) and in polar coordinates \(\int_{\lvert \xi \rvert \leqslant 1} \lvert \xi \rvert^{-2} \, \mathrm d\xi\) is proportional to \(\int_0^1 r^{d - 3} \, \mathrm dr.\) We conclude that the integral in (5.8) is finite if and only if \(d \geqslant 3.\)
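These elementary cosine bounds are also easy to check numerically (a two-line sanity check, not part of the argument):

```python
import numpy as np

t = np.linspace(-1, 1, 100_001)
assert np.all(1 - t**2 / 2 <= np.cos(t))
assert np.all(np.cos(t) <= 1 - 11 * t**2 / 24)
```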

Summarising: the simple random walk on \(\mathbb{Z}^d\) is recurrent for \(d \leqslant 2\) and transient for \(d \geqslant 3.\) Or, as Shizuo Kakutani put it, “A drunk man will find his way home, but a drunk bird may get lost forever.”
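Kakutani’s dictum can also be watched in simulation. The following Monte Carlo sketch (sample sizes and the finite horizon are arbitrary choices; a finite horizon necessarily underestimates the return probability, badly so in \(d = 2,\) where returns are logarithmically slow) estimates \(\mathbb{P}_0(H_0 \leqslant \text{steps})\) for \(d = 1, 2, 3\):

```python
import numpy as np

rng = np.random.default_rng(0)

def return_freq(d, walks=2000, steps=4000):
    """Fraction of simple random walks on Z^d that revisit 0 within `steps`."""
    pos = np.zeros((walks, d), dtype=np.int64)
    back = np.zeros(walks, dtype=bool)
    for _ in range(steps):
        axis = rng.integers(d, size=walks)                  # coordinate to move
        pos[np.arange(walks), axis] += rng.choice((-1, 1), size=walks)
        back |= ~pos.any(axis=1)                            # walk is at the origin
    return back.mean()

for d in (1, 2, 3):
    # near 1 for d = 1; well below 1 for d = 2 at this horizon even though the
    # walk is recurrent; close to Polya's ~0.34 for d = 3
    print(d, return_freq(d))
```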

5.4 Stationary and reversible measures

Throughout this section, \(\mu\) always denotes a measure on \(S\) satisfying \(\mu(S) > 0\) and \(0 \leqslant\mu(x) < \infty\) for all \(x.\) In particular, \(\mu(S)\) may be infinite.

Definition 5.22 • Stationary measure

Let \(Q\) be a stochastic matrix on \(S.\) A measure \(\mu\) is stationary with respect to \(Q\) if \(\mu = \mu Q.\) Explicitly, this means that \[\mu(y) = \sum_{x \in S} \mu(x) Q(x,y) \,, \qquad \forall y \in S\,.\]

To explain in more detail the adjective stationary, suppose that \((X_n)\) is a Markov chain with initial distribution \(\mu\) and transition matrix \(Q,\) such that \(\mu\) is stationary with respect to \(Q.\) Then the law of \(X_n\) is \[\mathbb{P}_\mu(X_n = x) = \mu Q^n \delta_x = \mu Q^{n-1} \delta_x = \cdots = \mu \delta_x = \mu(x)\] for all \(n \in \mathbb{N}.\) Hence, the law of \(X_n\) is \(\mu\) for all \(n.\) In this sense, the process \((X_n)\) is in the stationary state \(\mu\): it is an example of a dynamic process in equilibrium.
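For a chain on a finite state space, stationarity is a finite linear-algebra computation. A minimal sketch (the \(3 \times 3\) matrix is a made-up example): extract a left eigenvector of \(Q\) for the eigenvalue \(1\) and check that the law of \(X_n\) does not move.

```python
import numpy as np

Q = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])

# Stationary probability vector: left eigenvector of Q for eigenvalue 1.
w, V = np.linalg.eig(Q.T)
mu = np.real(V[:, np.isclose(w, 1)][:, 0])
mu /= mu.sum()

print(np.allclose(mu @ Q, mu))                              # True: mu = mu Q
print(np.allclose(mu @ np.linalg.matrix_power(Q, 7), mu))   # law of X_7 is mu
```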

Example 5.23 • Example 5.11 continued

Consider a random walk on \(\mathbb{Z}^d\) with transition matrix \(Q(x,y) = \nu(y - x),\) as in Example 5.11. The counting measure \(\mu\) on \(\mathbb{Z}^d\) is stationary with respect to \(Q.\) (Because \(\sum_x Q(x,y) = \sum_x \nu(y - x) = 1.\))

Definition 5.24 • Reversible measure

The measure \(\mu\) is reversible with respect to \(Q\) if \[\tag{5.9} \mu(x) Q(x,y) = \mu(y) Q(y,x) \,, \qquad \forall x,y \in S\,.\]

The condition (5.9) is often called detailed balance. Interpreting \(Q(x,y)\) as the flow of probability from \(x\) to \(y\) per unit probability in \(x,\) the detailed balance condition states that the flow from \(x\) to \(y\) is the same as the flow from \(y\) to \(x.\) It is a remarkable and extremely useful condition in both pure mathematics and innumerable applications, for instance in so-called Markov chain Monte Carlo methods (see Section 5.9 below). Its usefulness relies on the following observation.

Remark 5.25

Reversibility is a stronger condition than stationarity (i.e. Definition 5.24 implies Definition 5.22). Indeed, if \(\mu\) is reversible then \[\sum_x \mu(x) Q(x,y) = \sum_x \mu(y) Q(y,x) = \mu(y)\,.\]

Thus, a simple way to verify that a measure is stationary is to verify that it is reversible. In practice, reversibility is often much easier to check, since it presents a simple local equation that can often be explicitly solved, whereas it can be very difficult to find stationary measures. However, it is worth keeping in mind that there are measures that are stationary but not reversible; see Remark 5.26.

Remark 5.26

The converse implication is false: in general, stationarity does not imply reversibility. For instance, for the random walk on \(\mathbb{Z}^d\) from Example 5.11, if \(\nu\) is not symmetric, i.e. \(\nu(-x) \neq \nu(x)\) for some \(x \in \mathbb{Z}^d,\) then the counting measure \(\mu\) is stationary (see Example 5.23) but not reversible.
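On a finite state space, both stationarity and reversibility are mechanical to test. A minimal sketch of the two checks (the function names are ours):

```python
import numpy as np

def is_stationary(mu, Q, tol=1e-10):
    """mu = mu Q, i.e. Definition 5.22."""
    return np.allclose(mu @ Q, mu, atol=tol)

def is_reversible(mu, Q, tol=1e-10):
    """Detailed balance (5.9): F(x, y) = mu(x) Q(x, y) is symmetric."""
    F = mu[:, None] * Q
    return np.allclose(F, F.T, atol=tol)
```

Reversibility amounts to symmetry of the matrix \(F(x,y) = \mu(x) Q(x,y),\) whereas stationarity only requires the column sums of \(F\) to reproduce \(\mu\); this is another way to see that detailed balance is the stronger, more local condition.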

Example 5.27

Consider the simple asymmetric random walk on \(\mathbb{Z},\) with transition matrix \[Q(x,x+1) = p \,, \qquad Q(x, x - 1) = 1-p\,, \qquad \forall x \in \mathbb{Z}\,,\] where \(0 < p < 1.\)

We know from Example 5.23 and Remark 5.26 that the counting measure on \(\mathbb{Z}\) is stationary but, if \(p \neq 1/2,\) not reversible.

Can we find a reversible measure \(\mu\)? The equation we have to solve is \[\mu(x) Q(x,x+1) = \mu(x+1) Q(x+1, x)\,,\] i.e. \[\mu(x) \, p = \mu(x+1) \, (1 - p)\,,\] which can be solved by induction to yield \[\tag{5.10} \mu(x) = \mu(0) \biggl(\frac{p}{1 - p}\biggr)^x\,.\]

This measure coincides with the counting measure if and only if \(p = 1/2,\) in which case \(\mu\) is stationary and reversible.

If \(p \neq 1/2,\) we conclude that \(Q\) has two different stationary measures: the counting measure, which is not reversible, and \(\mu,\) which is reversible.
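Since detailed balance is a local identity, (5.10) can be verified edge by edge on any finite window of \(\mathbb{Z}.\) A quick sketch with the illustrative choices \(p = 0.7\) and \(\mu(0) = 1\) (a relative tolerance is advisable, since \(\mu\) grows geometrically):

```python
p = 0.7
mu = lambda x: (p / (1 - p)) ** x   # (5.10) with mu(0) = 1

for x in range(-50, 50):
    lhs = mu(x) * p                 # mu(x) Q(x, x+1)
    rhs = mu(x + 1) * (1 - p)       # mu(x+1) Q(x+1, x)
    assert abs(lhs - rhs) <= 1e-9 * max(lhs, rhs)
```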

Example 5.28 • Example 5.13 continued

Consider the simple random walk on a connected locally finite graph \((S,E).\) Can we find a stationary measure? It is much easier to look for a reversible measure by solving the detailed balance equation, which can be written as \[\mu(x) \frac{1}{D_x} = \mu(y) \frac{1}{D_y}\] for any adjacent \(x,y.\) Since the graph is connected, we easily find that \(\mu\) must be of the form \[\mu(x) = C D_x\] for some positive constant \(C.\)
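A sketch on a small made-up graph: with \(Q(x,y) = 1/D_x\) for \(y\) adjacent to \(x,\) the degree measure \(\mu(x) = D_x\) satisfies detailed balance, because \(\mu(x) Q(x,y) = \mathbf 1_{x \sim y}\) is symmetric in \(x\) and \(y.\)

```python
import numpy as np

# Adjacency matrix of a small connected graph on 4 vertices
# (edges 0-1, 1-2, 1-3, 2-3), so the degrees are (1, 3, 2, 2).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

deg = A.sum(axis=1)
Q = A / deg[:, None]       # Q(x, y) = 1 / D_x for y ~ x
mu = deg                   # the reversible measure found above (C = 1)

F = mu[:, None] * Q        # F(x, y) = mu(x) Q(x, y) = 1 if x ~ y, else 0
assert np.allclose(F, F.T) and np.allclose(mu @ Q, mu)
```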

Example 5.29 • Example 5.14 continued

Let us look for a stationary measure of the Ehrenfest model from Example 5.14. As before, the best approach is to find a reversible measure by solving the detailed balance equations, which read \[\mu(x) \frac{N - x}{N} = \mu(x+1) \frac{x+1}{N}\,, \qquad 0 \leqslant x \leqslant N - 1\,.\] This can be solved by induction on \(x\) to yield \[\mu(x) = C \binom{N}{x} \,, \qquad 0 \leqslant x \leqslant N\,.\] For a probability measure, we take \(C = 2^{-N}.\) This stationary measure corresponds to the equilibrium measure of the diffusion model. It is strongly concentrated around \(x = N/2,\) as one would expect based on the physical interpretation of the model.
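A sketch for the Ehrenfest chain (the choice \(N = 50\) is arbitrary): build \(Q,\) verify stationarity of the binomial measure, and quantify the concentration around \(N/2.\)

```python
import numpy as np
from math import comb

N = 50
Q = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        Q[x, x + 1] = (N - x) / N   # a ball jumps into the first urn
    if x > 0:
        Q[x, x - 1] = x / N         # a ball jumps out of the first urn

mu = np.array([comb(N, x) for x in range(N + 1)]) / 2**N
assert np.allclose(mu @ Q, mu)      # stationary (indeed reversible)

# Concentration around N/2: mass of mu within 10% of N/2 of the midpoint.
print(mu[int(0.4 * N): int(0.6 * N) + 1].sum())   # most of the mass
```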

5.5 The Green function

In this section we introduce a tool of great power, which bears many names in the literature: the Green function, the potential kernel, the fundamental matrix, ... Green functions appear throughout mathematics; informally, they are always inverses of some basic matrix or operator underlying the problem one is considering (see also Remark 5.31 (i) below).

Definition 5.30

The Green function of a Markov chain is the function \(U \,\colon S^2 \to [0,\infty]\) defined by \[U(x,y) :=\mathbb{E}_x[N_y]\,,\] i.e. the expected number of visits at \(y\) starting from \(x.\)

Remark 5.31

  1. Clearly, \[U(x,y) = \mathbb{E}_x\Biggl[\sum_{n \in \mathbb{N}} \mathbf 1_{X_n = y}\Biggr] = \sum_{n \in \mathbb{N}} \mathbb{P}_x(X_n = y) = \sum_{n \in \mathbb{N}} Q^n(x,y)\,.\] Hence, \(U(x,y) > 0\) if and only if there is an \(n \in \mathbb{N}\) such that \(Q^n(x,y) > 0.\) In matrix notation, the Green function is therefore formally given by the Neumann series \[U = \sum_{n \in \mathbb{N}} Q^n = (I - Q)^{-1}\,;\] see the numerical sketch after this remark.

  2. By Proposition 5.20, \(x\) is recurrent if and only if \(U(x,x) = \infty.\)
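The Neumann series becomes literal when it converges, for instance for a chain with absorption. A sketch (the chain and its size are illustrative): the simple random walk on \(\{0, \dots, 10\},\) killed at both endpoints, restricted to the transient interior states \(\{1, \dots, 9\};\) there the restriction of \(Q\) is substochastic and \(U = (I - Q)^{-1}\) exists as an honest matrix inverse.

```python
import numpy as np

# Interior states 1, ..., 9 of a simple random walk on {0, ..., 10}
# absorbed at 0 and 10; the restricted Q is substochastic.
n = 9
Q = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Q[i, i - 1] = 0.5
    if i < n - 1:
        Q[i, i + 1] = 0.5

U = np.linalg.inv(np.eye(n) - Q)   # Green function / fundamental matrix
print(U[4, 4])                     # expected visits to state 5 from state 5
```

The printed value is \(5.0,\) matching the classical closed form \(U(x,y) = 2x(N-y)/N\) for \(x \leqslant y\) (here \(N = 10\)), which we quote without proof.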

Proposition 5.32

If \(x \neq y\) then \[U(x,y) = \mathbb{P}_x(H_y < \infty) U(y,y)\,.\]

This is rather intuitive: to count the number of visits at \(y\) starting from \(x,\) one has to first find the probability of going from \(x\) to \(y\) and then count the number of visits at \(y\) starting from \(y\); in fact, this is an informal summary of the proof given below.

Proof. We compute \[\begin{aligned} U(x,y) &= \mathbb{E}_x[N_y] \\ &= \mathbb{E}_x[N_y \mathbf 1_{H_y < \infty}] \\ &= \sum_{n,k \in \mathbb{N}} \mathbb{E}_x[\mathbf 1_{X_k = y} \mathbf 1_{H_y = n}] \\ &= \sum_{n, \ell \in \mathbb{N}} \mathbb{E}_x[\mathbf 1_{X_{n + \ell} = y} \mathbf 1_{H_y = n}] \\ &= \sum_{n, \ell \in \mathbb{N}} \mathbb{P}_x(X_{n + \ell} = y \mid H_y = n, X_n = y) \, \mathbb{P}_x(H_y = n) \\ &= \sum_{n, \ell \in \mathbb{N}} \mathbb{P}(X_{\ell} = y \mid X_0 = y) \, \mathbb{P}_x(H_y = n) \\ &= U(y,y) \mathbb{P}_x(H_y < \infty)\,, \end{aligned}\] where in the fourth step we set \(k = n + \ell\) and used the definition of \(H_y,\) and in the sixth step we used Proposition 5.15.
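Proposition 5.32 lends itself to a Monte Carlo check. A sketch for the biased walk on \(\mathbb{Z}\) with the illustrative choice \(p = 0.7\): since the walk drifts to \(+\infty,\) we have \(\mathbb{P}_0(H_3 < \infty) = 1,\) and by translation invariance and the sketch following (5.7), \(U(3,3) = U(0,0) = 1/(2p-1) = 2.5;\) hence Proposition 5.32 predicts \(U(0,3) = 2.5.\)

```python
import numpy as np

rng = np.random.default_rng(1)
p, walks, horizon = 0.7, 5000, 1000

steps = rng.choice((1, -1), p=(p, 1 - p), size=(walks, horizon))
paths = np.cumsum(steps, axis=1)            # X_1, ..., X_horizon per walk
visits = (paths == 3).sum(axis=1).mean()    # Monte Carlo estimate of U(0, 3)
print(visits, 1 / (2 * p - 1))              # both close to 2.5
```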

By Remark 5.31 (i), \(U(x,y) > 0\) is equivalent to \(Q^n(x,y) > 0\) for some \(n \in \mathbb{N}.\) This means that it is possible to go from \(x\) to \(y\) with positive probability in finite time. Does this imply that \(U(y,x) > 0\)? In general, the answer is clearly no (think of an example!). However, if \(x\) is recurrent, then the answer is yes.

Proposition 5.33

Let \(x,y \in S\) be such that \(x\) is recurrent. If \(U(x,y) > 0\) then \(y\) is also recurrent and \[\tag{5.11} \mathbb{P}_y(H_x < \infty) = 1\,.\] In particular, by Proposition 5.32, \(U(y,x) > 0.\)

Proof. The main work of the proof is to show (5.11). We start by computing the probability of going from \(x\) to \(y\) at time \(n\) and then not returning to \(x\) up to time \(n+m.\) Using Proposition 5.15, \[\begin{aligned} &\mspace{-10mu} \mathbb{P}_x(H_y = n, X_{n+1} \neq x, \dots, X_{n+m} \neq x) \\ &= \mathbb{P}_x(X_{n+1} \neq x, \dots, X_{n+m} \neq x \mid X_n = y, H_y = n) \, \mathbb{P}_x(H_y = n) \\ &= \mathbb{P}_y(X_{1} \neq x, \dots, X_{m} \neq x) \, \mathbb{P}_x(H_y = n)\,. \end{aligned}\] Taking \(m \to \infty\) and using continuity from above yields \[\begin{aligned} \mathbb{P}_y(H_x = \infty) \, \mathbb{P}_x(H_y = n) &= \mathbb{P}_x \Biggl(\{H_y = n\} \cap \bigcap_{k \in \mathbb{N}^*}\{X_{n+k} \neq x\}\Biggr) \\ &\leqslant\mathbb{P}_x(H_y = n, N_x < \infty)\,. \end{aligned}\] Summing over \(n \in \mathbb{N}\) yields \[\mathbb{P}_y(H_x = \infty) \, \mathbb{P}_x(H_y < \infty) \leqslant\mathbb{P}_x(H_y < \infty, N_x < \infty) \leqslant\mathbb{P}_x(N_x < \infty) = 0\,,\] by Proposition 5.20. Since \(\mathbb{P}_x(H_y < \infty) > 0\) by \(U(x,y) > 0\) and Proposition 5.32, we conclude \(\mathbb{P}_y(H_x = \infty) = 0,\) which is (5.11). In particular, by Proposition 5.32 we also have \(U(y,x) > 0.\)

What remains is to show that \(y\) is recurrent. The idea is to use \(U(x,y) > 0\) and \(U(y,x) > 0\) to transport the question of the recurrence of \(y\) to the recurrence of \(x.\) More precisely, because \(U(x,y) > 0\) there exists \(n \in \mathbb{N}\) such that \(Q^n(x,y) > 0,\) and because \(U(y,x) > 0\) there exists \(m \in \mathbb{N}\) such that \(Q^m(y,x) > 0.\) Now for any \(k \in \mathbb{N}\) we have, by definition of matrix multiplication, \[Q^{n+ k + m}(y,y) \geqslant Q^{m}(y,x) Q^k(x,x) Q^n(x,y)\,,\] which implies \[U(y,y) \geqslant\sum_{k \in \mathbb{N}} Q^{n+ k + m}(y,y) \geqslant Q^{m}(y,x) \biggl(\sum_{k \in \mathbb{N}} Q^k(x,x)\biggr) Q^n(x,y)\,.\] The first and third factors on the right-hand side are strictly positive, while the middle factor is simply \(U(x,x) = \infty,\) by recurrence of \(x.\) We conclude that \(U(y,y) = \infty\) and hence \(y\) is recurrent.

We conclude this section with a notion that describes the connectedness of states in a Markov chain. Consider the following somewhat silly example. Take two Markov chains on the disjoint state spaces \(S_1\) and \(S_2,\) with transition matrices \(Q_1\) and \(Q_2\) respectively. Combine these two chains into one on the combined space \(S = S_1 \cup S_2,\) where the transition matrix is given by \[Q(x,y) = \begin{cases} Q_1(x,y) & \text{if } x,y \in S_1 \\ Q_2(x,y) & \text{if } x,y \in S_2 \\ 0 & \text{otherwise}\,. \end{cases}\] These two sub-chains evolve without any knowledge of each other, and one can never go from \(S_1\) to \(S_2.\) More precisely, \(U(x,y) = 0\) if \(x\) and \(y\) belong to different sets \(S_1\) and \(S_2.\) Such a chain is reducible, in the sense that we can break it apart into two chains without changing anything in its behaviour. The following definition precludes precisely this kind of behaviour.

Definition 5.34

A Markov chain is irreducible if \(U(x,y) > 0\) for all \(x,y \in S.\)

Equivalently, the chain is irreducible if for any \(x,y \in S\) there exists \(n \in \mathbb{N}\) such that \(Q^n(x,y) > 0.\) In other words, irreducibility means that one can go with positive probability from any state to any other state in finite time.
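For a finite state space, irreducibility is a pure reachability question about the directed graph of the positive entries of \(Q,\) and can be decided without any probability, for instance by Warshall’s transitive-closure algorithm. A minimal sketch (the function name is ours):

```python
import numpy as np

def is_irreducible(Q):
    """U(x, y) > 0 for all x, y: every state reaches every other state in the
    directed graph of positive entries (n = 0 lets each state reach itself)."""
    R = (Q > 0) | np.eye(len(Q), dtype=bool)
    for k in range(len(Q)):          # Warshall: close paths through state k
        R |= R[:, [k]] & R[[k], :]
    return bool(R.all())

# The reducible two-block chain described above:
Q = np.array([[0.5, 0.5, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
print(is_irreducible(Q))             # False: state 2 cannot reach states 0, 1
```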

Remark 5.35

If a chain is irreducible and has a recurrent state, then, by Proposition 5.33, all states are recurrent. In this case we call the whole chain recurrent.

Definition 5.36

Let \(x\) and \(y\) be recurrent states. We say that they communicate if \(U(x,y) > 0.\)

Note that this definition is unambiguous in the sense that the conditions \(U(x,y) > 0\) and \(U(y,x) > 0\) are equivalent by Proposition 5.33. Thus, \(x\) and \(y\) communicate if and only if there exists \(n \in \mathbb{N}\) such that \(Q^n(x,y) > 0.\)
