Suppose that \( S \) is an interval of integers (that is, a set of consecutive integers), either finite or infinite. A (discrete-time) birth-death chain on \( S \) is a discrete-time Markov chain \( \bs{X} = (X_0, X_1, X_2, \ldots) \) on \( S \) with transition probability matrix \( P \) of the form \[ P(x, x - 1) = q(x), \; P(x, x) = r(x), \; P(x, x + 1) = p(x); \quad x \in S \] where \( p \), \( q \), and \( r \) are nonnegative functions on \( S \) with \( p(x) + q(x) + r(x) = 1 \) for \( x \in S \).
If the interval \( S \) has a minimum value \( a \in \Z \) then of course we must have \( q(a) = 0 \). If \( r(a) = 1 \), the boundary point \( a \) is absorbing and if \( p(a) = 1 \), then \( a \) is reflecting. Similarly, if the interval \( S \) has a maximum value \( b \in \Z \) then of course we must have \( p(b) = 0 \). If \( r(b) = 1 \), the boundary point \( b \) is absorbing and if \( q(b) = 1 \), then \( b \) is reflecting. Several other special models that we have studied are birth-death chains; these are explored in .
In this section, as you will see, we often have sums of products. Recall that a sum over an empty index set is 0, while a product over an empty index set is 1.
If \( S \) is finite, classification of the states of a birth-death chain as recurrent or transient is simple, and depends only on the state graph. In particular, if the chain is irreducible, then the chain is positive recurrent. So we will study the classification of birth-death chains when \( S = \N \). We assume that \( p(x) \gt 0 \) for all \( x \in \N \) and that \( q(x) \gt 0 \) for all \( x \in \N_+ \) (but of course we must have \( q(0) = 0 \)). Thus, the chain is irreducible.
Under these assumptions, the birth-death chain on \( \N \) is
We will use the test for recurrence derived earlier with \( A = \N_+ \), the set of positive states. That is, we will compute the probability that the chain never hits 0, starting in a positive state.
The chain \( \bs{X} \) is recurrent if and only if \[ \sum_{x = 0}^\infty \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} = \infty \]
Let \( P_+ \) denote the restriction of \( P \) to \( \N_+ \times \N_+ \), and define \( u_+: \N_+ \to [0, 1] \) by \[ u_+(x) = \P(X_1 \gt 0, X_2 \gt 0, \ldots \mid X_0 = x), \quad x \in \N_+ \] So \( u_+(x) \) is the probability that the chain never reaches 0, starting in \( x \in \N_+ \). From our general theory, we know that \( u_+ \) satisfies \( u_+ = P_+ u_+ \) and is the largest such function with values in \( [0, 1] \). Furthermore, we know that either \( u_+(x) = 0 \) for all \( x \in \N_+ \) or that \( \sup\{u_+(x): x \in \N_+\} = 1 \). In the first case the chain is recurrent, and in the second case the chain is transient.
The functional equation \( P_+ u = u \) for a function \( u: \N_+ \to [0, 1] \) is equivalent to the following system of equations: \begin{align} u(2) - u(1) & = \frac{q(1)}{p(1)} u(1) \\ u(x + 1) - u(x) & = \frac{q(x)}{p(x)}[u(x) - u(x - 1)], \quad x \in \{2, 3, \ldots\} \end{align} Solving this system of equations for the differences gives \[ u(x + 1) - u(x) = \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} u(1), \quad x \in \N_+ \] Solving this new system gives \[ u(x) = u(1) \sum_{i=0}^{x-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}, \quad x \in \N_+ \] Note that \( u(x) \) is increasing in \( x \in \N_+ \) and so has a limit as \( x \to \infty \). Let \( A = \sum_{i=0}^\infty \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \).
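As a sanity check on the derivation, the closed form for \( u \) can be verified numerically against the functional equation. The following Python sketch uses hypothetical constant parameters \( p(x) = 0.6 \), \( q(x) = 0.3 \), \( r(x) = 0.1 \) for \( x \in \N_+ \), chosen only for illustration:

```python
# Numerical check (not part of the proof) that the closed form
#   u(x) = u(1) * sum_{i=0}^{x-1} q(1)...q(i) / (p(1)...p(i))
# satisfies the functional equation u = P_+ u on the positive states.
# Hypothetical constant parameters for x >= 1:
p, q, r = 0.6, 0.3, 0.1

def u(x, u1=1.0):
    # partial sum of the ratio series; the empty product (i = 0) is 1
    return u1 * sum((q / p) ** i for i in range(x))

# boundary equation at x = 1: u(1) = p(1) u(2) + r(1) u(1)
# (the q(1) term is absent because that transition exits N_+)
assert abs(u(1) - (p * u(2) + r * u(1))) < 1e-12

# interior equations: u(x) = p(x) u(x+1) + r(x) u(x) + q(x) u(x-1), x >= 2
for x in range(2, 50):
    assert abs(u(x) - (p * u(x + 1) + r * u(x) + q * u(x - 1))) < 1e-12
```

With these parameters \( q/p = 1/2 \), so the partial sums converge and the chain is transient, consistent with the criterion above.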
Note that \( r \), the function that assigns to each state \( x \in \N \) the probability of an immediate return to \( x \), plays no direct role in whether the chain is transient or recurrent. Indeed all that matters are the ratios \( q(x) / p(x) \) for \( x \in \N_+ \).
Suppose again that we have a birth-death chain \( \bs{X} \) on \( \N \), with \( p(x) \gt 0 \) for all \( x \in \N \) and \( q(x) \gt 0 \) for all \( x \in \N_+\). Thus the chain is irreducible.
The function \( g: \N \to (0, \infty) \) defined by \[ g(x) = \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}, \quad x \in \N \] is invariant for \( \bs{X} \), and is the only invariant function, up to multiplication by constants. Hence \( \bs{X} \) is positive recurrent if and only if \( B = \sum_{x = 0}^\infty g(x) \lt \infty \), in which case the (unique) invariant probability density function \( f \) is given by \( f(x) = \frac{1}{B} g(x) \) for \( x \in \N \).
Recall that by convention, a product over an empty index set is 1. So first, \begin{align*} (g P)(0) & = g(0) P(0, 0) + g(1) P(1, 0) = g(0) r(0) + g(1) q(1) \\ & = 1 r(0) + \frac{p(0)}{q(1)} q(1) = [1 - p(0)] + p(0) = 1 = g(0) \end{align*} Next, for \( y \in \N_+ \), \begin{align*} (g P)(y) & = g(y - 1) P(y - 1, y) + g(y) P(y, y) + g(y + 1) P(y + 1, y) \\ & = g(y - 1) p(y - 1) + g(y) r(y) + g(y + 1) q(y + 1) \\ & = g(y - 1) p(y - 1) + g(y) [1 - p(y) - q(y)] + g(y + 1) q(y + 1) \end{align*} But \begin{align*} g(y - 1) p(y - 1) & = g(y) q(y) = \frac{p(0) \cdots p(y - 1)}{q(1) \cdots q(y - 1)} \\ g(y + 1) q(y + 1) & = g(y) p(y) = \frac{p(0) \cdots p(y)}{q(1) \cdots q(y)} \end{align*} so \( (g P)(y) = g(y) \).
Conversely, suppose that \( h: \N \to \R \) is invariant for \( \bs{X} \). We will show by induction that \( h(x) = h(0) g(x) \) for all \( x \in \N \). The result is trivially true for \( x = 0 \) since \( g(0) = 1 \). Next, \( (h P)(0) = h(0) \) gives \( h(0) P(0, 0) + h(1) P(1, 0) = h(0) \). But \( P(0, 0) = r(0) = [1 - p(0)] \) and \( P(1, 0) = q(1) \), so substituting and solving for \( h(1) \) gives \[ h(1) = h(0) \frac{p(0)}{q(1)} = h(0) g(1) \] so the result is true when \( x = 1 \). Assume now that \( y \in \N_+ \) and that the result is true for all \( x \in \N \) with \( x \le y \). Then \( (h P)(y) = h(y) \) gives \[ h(y - 1) P(y - 1, y) + h(y) P(y, y) + h(y + 1) P(y + 1, y) = h(y) \] But \( P(y - 1, y) = p(y - 1) \), \( P(y, y) = r(y) = 1 - p(y) - q(y) \), and \( P(y + 1, y) = q(y + 1) \). Also, by the induction hypothesis, \( h(y) = h(0) g(y) \) and \( h(y - 1) = h(0) g(y - 1) \) so substituting and using the definition of \( g \) gives \begin{align*} q(y + 1) h(y + 1) & = [p(y) + q(y)] h(0) \frac{p(0) \cdots p(y - 1)}{q(1) \cdots q(y)} - p(y - 1) h(0) \frac{p(0) \cdots p(y - 2)}{q(1) \cdots q(y - 1)} \\ & = h(0) \frac{p(0) \cdots p(y)}{q(1) \cdots q(y)} \end{align*} Finally, solving gives \[ h(y + 1) = h(0) \frac{p(0) \cdots p(y)}{q(1) \cdots q(y + 1)} = h(0) g(y + 1) \]
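The invariance of \( g \) can also be checked numerically. The sketch below uses hypothetical state-dependent parameters, chosen only for illustration (any choice with \( p(x), q(x) \gt 0 \) and \( p(x) + q(x) \le 1 \) would do):

```python
import math

# Numerical check that (g P)(y) = g(y) for the function g defined above.
# Hypothetical state-dependent parameters:
def p(x): return 0.5 / (1 + 0.1 * x)       # birth probability, positive on N
def q(x): return 0.3 if x > 0 else 0.0     # death probability, positive on N_+
def r(x): return 1 - p(x) - q(x)

def g(x):
    # g(x) = p(0)...p(x-1) / (q(1)...q(x)); empty products are 1, so g(0) = 1
    num = math.prod(p(i) for i in range(x))
    den = math.prod(q(i) for i in range(1, x + 1))
    return num / den

# boundary: (g P)(0) = g(0) r(0) + g(1) q(1) should equal g(0)
assert abs(g(0) * r(0) + g(1) * q(1) - g(0)) < 1e-12

# interior: (g P)(y) = g(y-1) p(y-1) + g(y) r(y) + g(y+1) q(y+1)
for y in range(1, 30):
    gP = g(y - 1) * p(y - 1) + g(y) * r(y) + g(y + 1) * q(y + 1)
    assert abs(gP - g(y)) < 1e-10
```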
Here is a summary of the classification:
For the birth-death chain \( \bs X \), define \[A = \sum_{x = 0}^\infty \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)}, \quad B = \sum_{x = 0}^\infty \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}\]
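In the constant-parameter case \( p(x) = p \), \( q(x) = q \), the terms of \( A \) are \( (q/p)^x \) and the terms of \( B \) are \( (p/q)^x \), so both series are geometric and the classification can be read off directly. The following sketch combines the two criteria established above (recurrent if and only if \( A = \infty \); positive recurrent if and only if \( B \lt \infty \)):

```python
# Classification of the constant-parameter birth-death chain on N.
# A = sum (q/p)^x diverges iff q/p >= 1; B = sum (p/q)^x converges iff p/q < 1.
def classify(p, q):
    recurrent = q >= p     # A = infinity exactly when q >= p
    positive = p < q       # B < infinity exactly when p < q
    if not recurrent:
        return "transient"
    return "positive recurrent" if positive else "null recurrent"

assert classify(0.3, 0.5) == "positive recurrent"
assert classify(0.4, 0.4) == "null recurrent"
assert classify(0.5, 0.3) == "transient"
```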
Note again that \( r \), the function that assigns to each state \( x \in \N \) the probability of an immediate return to \( x \), plays no direct role in whether the chain is transient, null recurrent, or positive recurrent. Also, we know that an irreducible, recurrent chain has a positive invariant function that is unique up to multiplication by positive constants, but the birth-death chain gives an example where this is also true in the transient case.
Suppose now that \( n \in \N_+ \) and that \( \bs X = (X_0, X_1, X_2, \ldots) \) is a birth-death chain on the integer interval \( \N_n = \{0, 1, \ldots, n\} \). We assume that \( p(x) \gt 0 \) for \( x \in \{0, 1, \ldots, n - 1\} \) while \( q(x) \gt 0 \) for \( x \in \{1, 2, \ldots, n\} \). Of course, we must have \( q(0) = p(n) = 0 \). With these assumptions, \( \bs X \) is irreducible, and since the state space is finite, positive recurrent. So all that remains is to find the invariant distribution. The result is essentially the same as when the state space is \( \N \).
The invariant probability density function \( f_n \) is given by \[ f_n(x) = \frac{1}{B_n} \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)} \text{ for } x \in \N_n \text{ where } B_n = \sum_{x=0}^n \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)} \]
Note that \( B_n \to B \) as \( n \to \infty \), and if \( B \lt \infty \), \( f_n(x) \to f(x) \) as \( n \to \infty \) for \( x \in \N \). We will see this type of behavior again. Results for the birth-death chain on \( \N_n \) often converge to the corresponding results for the birth-death chain on \( \N \) as \( n \to \infty \).
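The invariant pdf on \( \N_n \) is easy to verify numerically by checking \( f_n P = f_n \) directly. The sketch below uses hypothetical constant parameters \( p = 0.4 \), \( q = 0.5 \), chosen only for illustration:

```python
# Verify f_n P = f_n for a small birth-death chain on N_n, using the formula
# above.  Hypothetical constant parameters (r = 0.1 in the interior; the
# leftover mass at each boundary goes to r there):
n, p, q = 6, 0.4, 0.5

# transition matrix with q(0) = p(n) = 0
P = [[0.0] * (n + 1) for _ in range(n + 1)]
for x in range(n + 1):
    up = p if x < n else 0.0
    down = q if x > 0 else 0.0
    P[x][x] = 1 - up - down
    if x < n:
        P[x][x + 1] = up
    if x > 0:
        P[x][x - 1] = down

# f_n(x) is proportional to p(0)...p(x-1) / (q(1)...q(x)) = (p/q)^x here
w = [(p / q) ** x for x in range(n + 1)]
B_n = sum(w)
f = [v / B_n for v in w]

assert abs(sum(f) - 1) < 1e-12
for y in range(n + 1):
    assert abs(sum(f[x] * P[x][y] for x in range(n + 1)) - f[y]) < 1e-12
```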
Often when the state space \( S = \N \), the state of a birth-death chain represents a population of individuals of some sort (and so the terms birth and death have their usual meanings). In this case state 0 is absorbing and means that the population is extinct. Specifically, suppose that \( \bs X = (X_0, X_1, X_2, \ldots) \) is a birth-death chain on \( \N \) with \( r(0) = 1 \) and with \( p(x), \, q(x) \gt 0 \) for \( x \in \N_+ \). Thus, state 0 is absorbing and all positive states lead to each other and to 0. Let \( N = \min\{n \in \N: X_n = 0\} \) denote the time until absorption, where as usual, \( \min \emptyset = \infty \).
One of the following events will occur: (a) \( N \lt \infty \), so that the population eventually dies out, or (b) \( N = \infty \), in which case \( X_n \to \infty \) as \( n \to \infty \), so that the population grows without bound.
Part (b) follows from the general theory, since 0 is absorbing, and all positive states lead to each other and to 0. Thus the positive states are transient and we know that with probability 1, a Markov chain will visit a transient state only finitely often. Thus \( N = \infty \) is equivalent to \( X_n \to \infty \) as \( n \to \infty \).
Naturally we would like to find the probability of these complementary events, and happily we have already done so in our study of recurrence in . Let \[ u(x) = \P(N = \infty \mid X_0 = x) = \P(X_n \to \infty \text{ as } n \to \infty \mid X_0 = x), \quad x \in \N \] so the absorption probability is \[v(x) = 1 - u(x) = \P(N \lt \infty \mid X_0 = x) = \P(X_n = 0 \text{ for some } n \in \N \mid X_0 = x), \quad x \in \N \]
For the birth-death chain \( \bs X \), \[ u(x) = \frac{1}{A} \sum_{i=0}^{x - 1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \text{ for } x \in \N_+ \text{ where } A = \sum_{i=0}^\infty \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \]
For \( x \in \N_+ \), note that \( u(x) = \P(X_n \in \N_+ \text{ for all } n \in \N \mid X_0 = x) \), the function that gives the probability of staying in the positive states for all time. The proof of has nothing to do with the transition probabilities in state 0, so the proof applies in this setting as well. In that proof we showed that \( u(x) \) has the form given above, where of course the value is 0 if \( A = \infty \). Trivially, \( u(0) = 0 \).
So if \( A = \infty \) then \( u(x) = 0 \) for all \( x \in \N \). If \( A \lt \infty \) then \( u(x) \gt 0 \) for all \( x \in \N_+ \) and \( u(x) \to 1 \) as \( x \to \infty \). For the absorption probability, \( v(x) = 1 \) for all \( x \in \N \) if \( A = \infty \) and so absorption is certain. If \( A \lt \infty \) then \[v(x) = \frac{1}{A} \sum_{i=x}^\infty \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}, \quad x \in \N \] Next we consider the mean time to absorption, so let \( m(x) = \E(N \mid X_0 = x) \) for \( x \in \N_+ \).
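In the constant-parameter case with \( p \gt q \) on the positive states, the series \( A \) is geometric and the absorption probability reduces to the classical gambler's-ruin form \( v(x) = (q/p)^x \). A quick numerical check with truncated sums, using hypothetical parameters:

```python
# Truncated-series check that v(x) = (q/p)^x when p(x) = p > q = q(x) on the
# positive states.  Hypothetical parameters:
p, q = 0.6, 0.4
N_TERMS = 500           # truncation point; the geometric tail is negligible

terms = [(q / p) ** i for i in range(N_TERMS)]
A = sum(terms)          # truncated version of the series A above

def v(x):
    # v(x) = (1/A) sum_{i=x}^infinity (q/p)^i, truncated
    return sum(terms[x:]) / A

for x in range(10):
    assert abs(v(x) - (q / p) ** x) < 1e-9
```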
The mean absorption function is given by \[ m(x) = \sum_{j=1}^x \sum_{k=j-1}^\infty \frac{p(j) \cdots p(k)}{q(j) \cdots q(k+1)}, \quad x \in \N \]
Our first proof is probabilistic. The number of steps required to go from state \( x \in \N_+ \) to \( x - 1 \) has the same distribution as the number of steps required to go from state 1 to 0, except with parameters \( p(y), \, q(y) \) for \( y \in \{x, x + 1, \ldots\} \) instead of parameters \( p(y), \, q(y) \) for \( y \in \{1, 2, \ldots\} \). So by the additivity of expected value, we just need to compute \( m(1) \) as a function of the parameters. Starting in state 1, the chain will be absorbed in state 0 after a random number of returns to state 1 without absorption. Whenever the chain is in state 1, absorption occurs at the next time with probability \( q(1) \) so it follows that the number of times that the chain is in state 1 before absorption has the geometric distribution on \( \N_+ \) with success parameter \( q(1) \). The mean of this distribution is \( 1 / q(1) \). On the other hand, starting in state 1, the number of steps until the chain is in state 1 again (without absorption) has the same distribution as the return time to state 0, starting in state 0 for the irreducible birth-death chain \( \bs{X}^\prime \) considered in but with birth and death functions \( p^\prime \) and \( q^\prime \) given by \( p^\prime(x) = p(x + 1) \) for \( x \in \N \) and \( q^\prime(x) = q(x + 1) \) for \( x \in \N_+ \). Thus, let \[ \mu = \sum_{k=0}^\infty \frac{p(1) \cdots p(k)}{q(2) \cdots q(k+1)} \] Then \( \mu \) is the mean return time to state 0 for the chain \( \bs{X}^\prime \). Specifically, note that if \( \mu = \infty \) then \( \bs{X}^\prime \) is either transient or null recurrent. If \( \mu \lt \infty \) then \( 1 / \mu \) is the invariant PDF at 0. So, it follows that \[ m(1) = \frac{1}{q(1)} \mu = \sum_{k=0}^\infty \frac{p(1) \cdots p(k)}{q(1) \cdots q(k + 1)} \] By our argument above, the mean time to go from state \( x \) to \( x - 1 \) is \[ \sum_{k=x-1}^\infty \frac{p(x) \cdots p(k)}{q(x) \cdots q(k + 1)} \]
Our second proof is analytic. Conditioning and using the Markov property, we have \[ m(x) = 1 + p(x) m(x + 1) + q(x) m(x - 1) + r(x) m(x), \quad x \in \N_+ \] with initial condition \( m(0) = 0 \). Equivalently, \[ m(x + 1) - m(x) = \frac{q(x)}{p(x)}[m(x) - m(x - 1)] - \frac{1}{p(x)}, \quad x \in \N_+ \] Solving gives \[ m(x + 1) - m(x) = \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} m(1) - \sum_{y=1}^x \frac{q(y+1) \cdots q(x)}{p(y) \cdots p(x)}, \quad x \in \N_+ \] Next, \( m(x) = \sum_{y=0}^{x-1} [m(y+1) - m(y)] \) for \( x \in \N \) which gives \[ m(x) = m(1) \sum_{y=0}^{x-1} \frac{q(1) \cdots q(y)}{p(1) \cdots p(y)} - \sum_{y=0}^{x-1} \sum_{z=1}^y \frac{q(z + 1) \cdots q(y)}{p(z) \cdots p(y)}, \quad x \in \N \] Finally, \( m(1) \) is given as in the first proof. The expression for \( m(x) \) is different, but equivalent, of course.
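In the constant-parameter case with \( p \lt q \), each inner sum in the double-sum formula for \( m \) is geometric and equals \( 1/(q - p) \), so \( m(x) = x/(q - p) \), the familiar gambler's-ruin mean. A numerical check with truncated sums, using hypothetical parameters:

```python
# Truncated-series check that m(x) = x/(q - p) when p(x) = p < q = q(x).
# Hypothetical parameters:
p, q = 0.3, 0.5
N_TERMS = 500           # truncation point for the inner geometric sums

def m(x):
    total = 0.0
    for j in range(1, x + 1):
        # inner sum over k = j-1, j, ...: p^(k-j+1)/q^(k-j+2) = (1/q)(p/q)^t
        total += sum((1 / q) * (p / q) ** t for t in range(N_TERMS))
    return total

for x in range(6):
    assert abs(m(x) - x / (q - p)) < 1e-9
```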
Next we will consider a birth-death chain on a finite integer interval with both endpoints absorbing. Our interest is in the probability of absorption in one endpoint rather than the other, and in the mean time to absorption. Thus suppose that \( n \in \N_+ \) and that \( \bs X = (X_0, X_1, X_2, \ldots) \) is a birth-death chain on \( \N_n = \{0, 1, \ldots, n\} \) with \( r(0) = r(n) = 1 \) and with \( p(x) \gt 0 \) and \( q(x) \gt 0 \) for \( x \in \{1, 2, \ldots, n - 1\} \). So the endpoints 0 and \( n \) are absorbing, and all other states lead to each other and to the endpoints. Let \( N = \min\{k \in \N: X_k \in \{0, n\}\} \), the time until absorption, and for \( x \in \N_n \) let \( v_n(x) = \P(X_N = 0 \mid X_0 = x) \) and \( m_n(x) = \E(N \mid X_0 = x) \). The definitions make sense since \( N \) is finite with probability 1.
The absorption probability function for state 0 is given by \[ v_n(x) = \frac{1}{A_n} \sum_{i=x}^{n-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \text{ for } x \in \N_n \text{ where } A_n = \sum_{i=0}^{n-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \]
Conditioning and using the Markov property, \( v_n \) satisfies the second-order linear difference equation \[ v_n(x) = p(x) v_n(x + 1) + q(x) v_n(x - 1) + r(x) v_n(x), \quad x \in \{1, 2, \ldots, n - 1\} \] with boundary conditions \( v_n(0) = 1 \), \( v_n(n) = 0 \). As we have seen before, the difference equation can be rewritten as \[v_n(x + 1) - v_n(x) = \frac{q(x)}{p(x)} [v_n(x) - v_n(x - 1)], \quad x \in \{1, 2, \ldots, n - 1\}\] Solving and applying the boundary conditions gives the result.
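When \( p = q \) (constant), every ratio term is 1, so \( A_n = n \) and \( v_n(x) = (n - x)/n \). The sketch below verifies this special case against the defining difference equation and boundary conditions, using hypothetical parameters:

```python
# Check the formula for v_n in the constant case p = q, where
# v_n(x) = (n - x)/n.  Hypothetical parameters (r = 0.1 in the interior):
n, p, q = 8, 0.45, 0.45
r = 1 - p - q

v = [(n - x) / n for x in range(n + 1)]

assert v[0] == 1.0 and v[n] == 0.0   # boundary conditions
# defining equation: v(x) = p v(x+1) + q v(x-1) + r v(x) for interior x
for x in range(1, n):
    assert abs(v[x] - (p * v[x + 1] + q * v[x - 1] + r * v[x])) < 1e-12
```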
Note that \( A_n \to A \) as \( n \to \infty \) where \( A \) is the constant above for the absorption probability at 0 with the infinite state space \( \N \). If \( A \lt \infty \) then \( v_n(x) \to v(x) \) as \( n \to \infty \) for \( x \in \N \).
The mean absorption time is given by \[ m_n(x) = m_n(1) \sum_{y=0}^{x-1} \frac{q(1) \cdots q(y)}{p(1) \cdots p(y)} - \sum_{y=0}^{x-1} \sum_{z=1}^y \frac{q(z+1) \cdots q(y)}{p(z) \cdots p(y)}, \quad x \in \N_n \] where, with \( A_n \) as in the previous theorem, \[ m_n(1) = \frac{1}{A_n} \sum_{y=1}^{n-1} \sum_{z=1}^y \frac{q(z+1) \cdots q(y)}{p(z) \cdots p(y)} \]
The probabilistic proof in the exercise with state space \( \N \) and 0 absorbing does not work here, but the first part of the analytic proof does. So, \[ m_n(x) = m_n(1) \sum_{y=0}^{x-1} \frac{q(1) \cdots q(y)}{p(1) \cdots p(y)} - \sum_{y=0}^{x-1} \sum_{z=1}^y \frac{q(z + 1) \cdots q(y)}{p(z) \cdots p(y)}, \quad x \in \{1, 2, \ldots, n\} \] Substituting \( x = n \) and applying the boundary condition \( m_n(n) = 0 \), gives the result for \( m_n(1) \) in the theorem.
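When \( p = q \) (constant), the ratios all reduce to \( 1/p \), the sums telescope, and the mean absorption time reduces to \( m_n(x) = x(n - x)/(2p) \). A numerical check of this special case, using hypothetical parameters:

```python
# Check the mean-absorption formula in the constant case p = q, where
# m_n(x) = x(n - x)/(2p).  Hypothetical parameters (r = 0.1 in the interior):
n, p = 8, 0.45
q = p
r = 1 - p - q

A_n = n   # with p = q, each of the n terms of A_n equals 1
# m_n(1) from the theorem: each inner ratio is 1/p, so the double sum is
# sum_{y=1}^{n-1} y/p = n(n-1)/(2p), giving m_n(1) = (n-1)/(2p)
m1 = (1 / A_n) * sum(y / p for y in range(1, n))
assert abs(m1 - (n - 1) / (2 * p)) < 1e-12

def m(x):
    # m_n(x) = m_n(1) * x - sum_{y=0}^{x-1} y/p in this special case
    return m1 * x - sum(y / p for y in range(x))

# closed form and defining equations
for x in range(n + 1):
    assert abs(m(x) - x * (n - x) / (2 * p)) < 1e-12
for x in range(1, n):
    assert abs(m(x) - (1 + p * m(x + 1) + q * m(x - 1) + r * m(x))) < 1e-9
```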
Our next discussion is on the time reversal of a birth-death chain. Essentially, every recurrent birth-death chain is reversible.
Suppose that \( \bs X = (X_0, X_1, X_2, \ldots) \) is an irreducible, recurrent birth-death chain on an integer interval \( S \). Then \( \bs X \) is reversible.
We need to show that the Kolmogorov cycle condition is satisfied. That is, for every sequence of states \((x_0, x_1, x_2, \ldots, x_n) \) with \( x_0 = x_n \), \[ P(x_0, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) = P(x_n, x_{n-1}) P(x_{n-1}, x_{n-2}) \cdots P(x_1, x_0) \] We can restrict our attention to sequences with \( x_{i+1} \in \{x_i, x_i - 1, x_i + 1\} \) for each \( i \in \{0, 1, \ldots, n - 1\} \), since otherwise both products are 0. For such a sequence, since it returns to its starting point, each pair of adjacent states \( \{x, x + 1\} \) is crossed the same number of times in the upward direction as in the downward direction. Hence the forward and backward products consist of precisely the same factors, and so the cycle condition is satisfied.
If \( S \) is finite and the chain \( \bs X \) is irreducible, then of course \( \bs X \) is recurrent (in fact positive recurrent), so by the previous result, \( \bs X \) is reversible. In the case \( S = \N \), we can use the invariant function above to show directly that the chain is reversible.
Suppose that \( \bs X = (X_0, X_1, X_2, \ldots) \) is a birth-death chain on \( \N \) with \( p(x) \gt 0 \) for \( x \in \N \) and \( q(x) \gt 0 \) for \( x \in \N_+ \). Then \( \bs X \) is reversible.
With the function \( g \) defined in , it suffices to show the reversibility condition \( g(x)P(x, y) = g(y) P(y, x) \) for all \( x, \, y \in \N \). It then follows that \( g \) is invariant for \( \bs{X} \) and that \( \bs{X} \) is reversible with respect to \( g \). But since \( g \) is the only positive invariant function for \( \bs{X} \), up to multiplication by positive constants, we can omit the qualifying phrase with respect to \( g \). For \( x \in \N \) and \( y = x + 1 \) we have
\[g(x) P(x, y) = g(y) P(y, x) = \frac{p(0) \cdots p(x)}{q(1) \cdots q(x)}\]
For \( x \in \N_+ \) and \( y = x - 1 \) we have
\[ g(x) P(x, y) = g(y) P(y, x) = \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x - 1)} \]
In all other cases, the reversibility condition is trivially satisfied.
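The reversibility condition is easy to check numerically as well. The sketch below uses hypothetical state-dependent parameters, chosen only for illustration:

```python
import math

# Direct numerical check of the detailed-balance (reversibility) condition
# g(x) P(x, x+1) = g(x+1) P(x+1, x) for hypothetical parameters:
def p(x): return 1 / (x + 2)               # birth probability, positive on N
def q(x): return 0.5 if x > 0 else 0.0     # death probability, positive on N_+

def g(x):
    # invariant function g(x) = p(0)...p(x-1) / (q(1)...q(x))
    return (math.prod(p(i) for i in range(x))
            / math.prod(q(i) for i in range(1, x + 1)))

for x in range(25):
    # g(x) P(x, x+1) = g(x) p(x) and g(x+1) P(x+1, x) = g(x+1) q(x+1)
    assert abs(g(x) * p(x) - g(x + 1) * q(x + 1)) < 1e-12
```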
Thus, in the positive recurrent case, when the variables are given the invariant distribution, the transition matrix \( P \) describes the chain forward in time and backwards in time.
As always, be sure to try the problems yourself before expanding the details.
Our first examples consider birth-death chains on \( \N \) with constant birth and death probabilities, except at the boundary points. Such chains are often referred to as random walks, although that term is used in a variety of different settings. The results are special cases of the general results above, but sometimes direct proofs are illuminating.
Suppose that \( \bs X = (X_0, X_1, X_2, \ldots) \) is the birth-death chain on \( \N \) with constant birth probability \( p \in (0, 1) \) on \( \N \) and constant death probability \( q \in (0, 1) \) on \( \N_+ \), with \( p + q \le 1 \). Then
Next we consider the random walk on \( \N \) with 0 absorbing. As in the discussion of absorption above, \( v(x) \) denotes the absorption probability and \( m(x) \) the mean time to absorption, starting in state \( x \in \N \).
Suppose that \( \bs X = (X_0, X_1, \ldots) \) is the birth-death chain on \( \N \) with constant birth probability \( p \in (0, 1)\) on \( \N_+ \) and constant death probability \( q \in (0, 1) \) on \( \N_+ \), with \( p + q \le 1 \). Assume also that \( r(0) = 1 \), so that 0 is absorbing.
This chain is essentially the gambler's ruin chain. Consider a gambler who bets on a sequence of independent games, where \( p \) and \( q \) are the probabilities of winning and losing, respectively. The gambler receives one monetary unit when she wins a game and must pay one unit when she loses a game. So \( X_n \) is the gambler's fortune after playing \( n \) games.
Next we consider random walks on a finite interval.
Suppose that \( \bs X = (X_0, X_1, \ldots) \) is the birth-death chain on \( \N_n = \{0, 1, \ldots, n\} \) with constant birth probability \( p \in (0, 1) \) on \( \{0, 1, \ldots, n - 1\} \) and constant death probability \( q \in (0, 1) \) on \( \{1, 2, \ldots, n\} \), with \( p + q \le 1 \). Then \( \bs X \) is positive recurrent and the invariant probability density function \( f_n \) is given as follows:
Note that if \( p \lt q \) then the invariant distribution is a truncated geometric distribution, and \( f_n(x) \to f(x) \) for \( x \in \N \) where \( f \) is the invariant probability density function of the birth-death chain on \( \N \) in . If \( p = q \), the invariant distribution is uniform on \( \N_n \), certainly a reasonable result. Next we consider the chain with both endpoints absorbing. As before, \( v_n \) is the function that gives the probability of absorption in state 0, while \( m_n \) is the function that gives the mean time to absorption.
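The truncated geometric form and its convergence are easy to verify numerically. The sketch below uses hypothetical constant parameters with \( p \lt q \):

```python
# For p < q the invariant pdf on N_n is a truncated geometric distribution
# with ratio rho = p/q, converging to the geometric invariant pdf on N.
# Hypothetical parameters:
p, q = 0.3, 0.5
rho = p / q   # 0.6

def f_n(x, n):
    # truncated geometric: f_n(x) = rho^x / B_n, B_n a partial geometric sum
    B_n = (1 - rho ** (n + 1)) / (1 - rho)
    return rho ** x / B_n

def f(x):
    # limiting invariant pdf on N: geometric with parameter 1 - rho
    return (1 - rho) * rho ** x

assert abs(sum(f_n(x, 10) for x in range(11)) - 1) < 1e-12   # pdf on N_10
assert abs(f_n(2, 200) - f(2)) < 1e-12                        # convergence
```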
Suppose that \( \bs X = (X_0, X_1, \ldots) \) is the birth-death chain on \( \N_n = \{0, 1, \ldots, n\} \) with constant birth probability \( p \in (0, 1) \) and death probability \( q \in (0, 1) \) on \( \{1, 2, \ldots, n - 1\} \), where \( p + q \le 1 \). Assume also that \( r(0) = r(n) = 1 \), so that \( 0 \) and \( n \) are absorbing.
Note that if \( q \lt p \) then \( v_n(x) \to v(x) \) as \( n \to \infty \) for \( x \in \N \).
Suppose again that \( \bs X = (X_0, X_1, \ldots) \) is the birth-death chain on \( \N_n = \{0, 1, \ldots, n\} \) with constant birth probability \( p \in (0, 1) \) and death probability \( q \in (0, 1) \) on \( \{1, 2, \ldots, n - 1\} \), where \( p + q \le 1 \). Assume also that \( r(0) = r(n) = 1 \), so that \( 0 \) and \( n \) are absorbing.
Some of the random processes that we have studied previously are birth-death Markov chains.
Describe each of the following as a birth-death chain.
Consider the birth-death process on \( \N \) with \( p(x) = \frac{1}{x + 1} \), \( q(x) = 1 - p(x) \), and \( r(x) = 0 \) for \( x \in S \).