\(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\rta}{\rightarrow}\) \(\newcommand{\upa}{\uparrow}\) \(\newcommand{\Upa}{\Uparrow}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\Li}{\mathrm{Li}}\)
Reliability

8. Subset Spaces

1. Graphs

Throughout this section \(I\) is a fixed, nonempty, countable set.

Basics

Definitions

Let \(S\) denote the collection of all finite subsets of \(I\).

  1. \((S, \subseteq)\) is a discrete, left-finite partial order graph with minimum element \(\emptyset\).
  2. \((S, \subseteq)\) is a lattice with \(x \wedge y = x \cap y\) and \(x \vee y = x \cup y\) for \(x, \, y \in S\).
Details:
  1. Of course \(\subseteq\) is a partial order on the collection of subsets of any set. Since \(I\) is countable, the collection \(S\) of finite subsets of \(I\) is also countable. The graph \((S, \subseteq)\) is left finite since a set \(x \in S\) has only finitely many subsets.
  2. The lattice property is well known.

Since \(S\) is countable, we have our standard discrete reference space \((S, \ms P(S), \#)\) in the background. Note that if \(I\) is finite then \(S\) is also finite, and \(I\) is the maximum element of \((S, \subseteq)\). If \(I\) is countably infinite then \((S, \subseteq)\) has no maximum (or even maximal) elements. As usual, let \(S_+ = S - \{\emptyset\}\), the collection of finite, nonempty subsets of \(I\). Our primary interest in this section is the partial order graph \((S, \subseteq)\), but as usual, we also consider the strict partial order graph \((S, \subset)\). Of course, \((S, \subseteq)\) is the reflexive closure of \((S, \subset)\) as in Section 1.6. The covering graph and related graphs are also of interest.

Let \((S, \upa)\) denote the covering graph of \((S, \subseteq)\). If \(x, \, y \in S\) then \(x \upa y\) if and only if \(y = x \cup \{i\}\) for some \(i \notin x\).

For \(n \in \N\) define

  1. \(A_n = \{x \in S: \#(x) = n\}\)
  2. \(A_n(x) = \{y \in S: x \subseteq y, \, \#(y) = \#(x) + n\}\) for \(x \in S\)

So \(\{A_n: n \in \N\}\) is the partition of \(S\) induced by the cardinality function \(\#: S \to \N\). Similarly, \(\{A_n(x): n \in \N\}\) is a partition of \(\{y \in S: x \subseteq y\}\), the neighbor set of \(x \in S\) for \((S, \subseteq)\). Let \(\upa^n\) denote the \(n\)-fold composition of \(\upa\) for \(n \in \N_+\). If \(x, \, y \in S\) then \(x \upa^n y\) if and only if \(y \in A_n(x)\). Occasionally, we will also consider the reflexive closure \(\Upa\) of \(\upa\). So if \(x, \, y \in S\), then \(x \Upa y\) if and only if \(y = x\) or \(y = x \cup \{i\}\) for some \(i \notin x\).

The partial order graph \((S, \subseteq)\) is uniform: If \(x, \, y \in S\) and \(x \subseteq y\) then \(\#(y) - \#(x)\) is the length of every path in \((S, \upa)\) from \(x\) to \(y\).

Walk and Related Functions

Recall the permutation (falling power) formula: for \(m \in \R\) and \(k \in \N\), \[m^{(k)} = m (m - 1) \cdots (m - k + 1)\]

For \(n \in \N\), the (left) walk function \(w_n\) of \((S, \subseteq)\) is given by \[w_n(x) = (n + 1)^{\#(x)}, \quad x \in S\]

Details:

We give two proofs. The simplest proof is combinatorial. A walk in \((S, \subseteq)\) of length \(n\) terminating in \(x\) is a sequence \((x_1, x_2, \ldots, x_{n + 1})\) of sets in \(S\) with \(x_i \subseteq x_{i + 1}\) for \(i \in \{1, 2, \ldots, n\}\) and \(x_{n + 1} = x\). The following algorithm generates all such sequences once and only once: For each \(k \in x\) either \(k\) is not in any of the sets in the sequence \((x_1, x_2, \ldots, x_n)\) or \(k\) occurs in the sequence for the first time in set \(x_i\) for some \(i \in \{1, 2, \ldots, n\}\). Thus there are \(n + 1\) choices for each element of \(x\).

A proof by induction on \(n\) is also simple. The result is trivially true when \(n = 0\), since \(w_0(x) = 1\) for \(x \in S\). Thus, assume the result holds for a given \(n\). Using the binomial theorem, \begin{align*} w_{n + 1}(x) &= \sum_{t \subseteq x} w_n(t) = \sum_{k = 0}^{\#(x)} \sum_{t \subseteq x, \, \#(t) = k} (n + 1)^k \\ &= \sum_{k = 0}^{\#(x)} \binom{\#(x)}{k} (n + 1)^k = (n + 2)^{\#(x)}, \quad x \in S \end{align*}

When \(n = 1\), this proposition gives the usual formula for the number of subsets of \(x\), namely \(w_1(x) = 2^{\#(x)}\).
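The walk count can be checked numerically. Below is a minimal Python sketch (our own illustration, not part of the text), using frozensets to represent finite subsets; since every set in a walk terminating in \(x\) is a subset of \(x\), the brute-force computation is finite.

```python
from itertools import chain, combinations

def subsets(x):
    """All subsets of the finite set x, as frozensets."""
    s = list(x)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def count_walks(x, n):
    """Number of walks (x_1, ..., x_{n+1}) in (S, subseteq) with x_{n+1} = x."""
    ending = {y: 1 for y in subsets(x)}   # walks of length 0 ending at y
    for _ in range(n):
        ending = {y: sum(ending[t] for t in subsets(y)) for y in ending}
    return ending[frozenset(x)]

x = frozenset({1, 2, 3})
for n in range(5):
    assert count_walks(x, n) == (n + 1) ** len(x)   # w_n(x) = (n+1)^{#(x)}
```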

Find the left walk function of order \(n \in \N\) for the following graphs:

  1. \((S, \subset)\)
  2. \((S, \upa)\)
  3. \((S, \Upa)\)
Details:

Generically, let \(w_n\) denote the left walk function of order \(n \in \N\).

  1. This follows from the walk function of \((S, \subseteq)\) above and the results in Section 1.6 on reflexive closure, \[w_n(x) = \sum_{k = 0}^n (-1)^{n - k} \binom{n}{k} (k + 1)^{\#(x)}, \quad x \in S\]
  2. Let \(x \in S\) and \(n \in \N\) with \(n \le \#(x)\). To form a walk (actually a path) of length \(n\) in \((S, \uparrow)\) terminating in \(x\), select a permutation \((k_1, k_2, \ldots, k_n)\) of size \(n\) from \(x\) and let \(y = x - \{k_1, k_2, \ldots, k_n\}\). The walk is then \[y \upa y \cup \{k_1\} \upa y \cup \{k_1, k_2\} \upa \cdots \upa y \cup \{k_1, k_2, \ldots, k_n\} = x \] Hence \( w_n(x) = \#(x)^{(n)} \) for \( x \in S \).
  3. From (b) and Section 1.6 on reflexive closure, \[w_n(x) = \sum_{k = 0}^n \binom{n}{k} \#(x)^{(k)}, \quad x \in S\]
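The walk functions for the strict and covering graphs can likewise be verified by brute force. The sketch below (our own illustration, with finite subsets represented as frozensets) compares the counts with the formulas in parts (a) and (b); `math.perm(m, n)` computes the falling power \(m^{(n)}\).

```python
from itertools import chain, combinations
from math import comb, perm

def subsets(y):
    s = list(y)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def walks(x, n, pred):
    """Walks of length n ending at x, where pred(y) lists the predecessors of y."""
    ending = {y: 1 for y in subsets(x)}
    for _ in range(n):
        ending = {y: sum(ending[t] for t in pred(y)) for y in ending}
    return ending[frozenset(x)]

strict = lambda y: [t for t in subsets(y) if t != y]   # (S, strict subset)
cover = lambda y: [y - {i} for i in y]                 # (S, covering): remove one element

x = frozenset({1, 2, 3})
for n in range(4):
    assert walks(x, n, cover) == perm(len(x), n)       # falling power #(x)^(n)
    assert walks(x, n, strict) == sum(
        (-1) ** (n - k) * comb(n, k) * (k + 1) ** len(x) for k in range(n + 1))
```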

For the following proposition, recall that \(\Li\) denotes the polylogarithm function.

The (left) generating function \(W\) of \((S, \subseteq)\) is given by \[W(x, t) = \frac{1}{t} \Li[-\#(x), t], \quad x \in S, \, t \in (-1, 1)\]

Details:

\[W(x, t) = \sum_{n = 0}^\infty w_n(x) t^n = \sum_{n = 0}^\infty (n + 1)^{\#(x)} t^n = \frac{1}{t} \Li[-\#(x), t], \quad t \in (-1, 1)\] where it is understood that \(W(x, 0) = 1\).

Find the left generating function for the following graphs:

  1. \((S, \subset)\)
  2. \((S, \upa)\)
  3. \((S, \Upa)\)
Details:

Generically, let \(W\) denote the left generating function, and recall that \(\Gamma\) denotes the usual gamma special function.

  1. This follows from the generating function of \((S, \subseteq)\) above and standard results on reflexive closure in Section 1.6, \[W(x, t) = \frac{1}{t} \Li\left[-\#(x), \frac{t}{1 + t}\right], \quad x \in S, \, t \in \R\]
  2. From part (b) of the walk function exercise above, \[W(x, t) = \sum_{n = 0}^{\#(x)} [\#(x)]^{(n)} t^n = e^{1 / t} t^{\#(x)} \Gamma(1 + \#(x), 1 / t), \quad x \in S, \, t \in \R\] Note that the generating function is a simple polynomial: \(W(x, t) = \phi_{\#(x)}(t)\) where \[\phi_m(t) = 1 + m t + m (m - 1) t^2 + \cdots + m! t^m, \quad m \in \N, \, t \in \R\] The polynomials \(\phi_m\) for \(m \in \N\) can be generated recursively by \[\phi_{m + 1}(t) = 1 + (m + 1) t \phi_m(t), \quad t \in \R\] along with the initial condition \(\phi_0(t) = 1\) for all \(t \in \R\).
  3. From (b) and results on reflexive closure in Section 1.6, \[W(x, t) = \frac{1}{1 - t} e^{(1 - t) / t} \left(\frac{t}{1 - t}\right)^{\#(x)} \Gamma\left[1 + \#(x), \frac{1 - t}{t}\right], \quad x \in S, t \in (-1, 1)\]
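The recursion for the polynomials \(\phi_m\) is easy to check against the defining sum of falling powers. A small Python sketch (our own illustration):

```python
from math import perm

def phi(m, t):
    """phi_m(t) = sum_{n=0}^m m^(n) t^n, via phi_{j+1}(t) = 1 + (j+1) t phi_j(t)."""
    val = 1                       # phi_0(t) = 1
    for j in range(m):
        val = 1 + (j + 1) * t * val
    return val

# the recursion reproduces the direct sum of falling powers m^(n) = perm(m, n)
for m in range(7):
    for t in [-2, -1, 0, 1, 3]:
        assert phi(m, t) == sum(perm(m, n) * t ** n for n in range(m + 1))
```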

The Möbius kernel \(M\) for \((S, \subseteq)\) is given by \[M(x, y) = (-1)^{\#(y) - \#(x)}, \quad (x, y) \in S^2, \, x \subseteq y\]

Details:

This is a well-known result, but we give a proof for completeness, based on induction on \(\#(y) - \#(x)\). First, \(M(x, x) = 1\) by definition. Suppose that the result holds for \(x \subseteq y\) when \(\#(y) \le \#(x) + n\). Suppose that \(x \subseteq y\) with \(\#(y) =\#(x) + n + 1\). Then \begin{align*} M(x, y) &= -\sum_{t \in [x, y)} M(x, t) = -\sum_{k = 0}^n \sum \{M(x, t): t \in [x, y), \#(t) = \#(x) + k \}\\ &= -\sum_{k = 0}^n \sum \{(-1)^k: t \in [x, y), \#(t) = \#(x) + k\}\\ &= -\sum_{k = 0}^n \binom{n + 1}{k} (-1)^k\\ &= -\sum_{k = 0}^{n + 1} \binom{n + 1}{k} (-1)^k + (-1)^{n + 1} = 0 + (-1)^{n + 1} \end{align*}

Verify the Möbius inversion formula with the walk function \(w_n\) of order \(n \in \N\) for \((S, \subseteq)\).

Details:

Recall that \(w_n(x) = (n + 1)^{\#(x)}\) for \(x \in S\). \begin{align*} \sum_{t \subseteq x} w_{n + 1}(t) M(t, x) &= \sum_{t \subseteq x} (n + 2)^{\#(t)} (-1)^{\#(x) - \#(t)}\\ &= \sum_{k = 0}^{\#(x)} \sum \{(n + 2)^{\#(t)} (-1)^{\#(x) - \#(t)}: t \subseteq x, \#(t) = k\}\\ &= \sum_{k = 0}^{\#(x)} \binom{\#(x)}{k} (n + 2)^k (-1)^{\#(x) - k}\\ &= (n + 1)^{\#(x)} = w_n(x), \quad x \in S \end{align*}
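The inversion identity above can be confirmed mechanically for a small set; the sketch below (our own illustration, using frozensets for finite subsets) checks it for several orders.

```python
from itertools import chain, combinations

def subsets(x):
    s = list(x)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def M(t, x):
    """Moebius kernel of (S, subseteq) for t subseteq x."""
    return (-1) ** (len(x) - len(t))

def w(n, x):
    """Walk function of order n for (S, subseteq)."""
    return (n + 1) ** len(x)

x = frozenset({1, 2, 3, 4})
for n in range(4):
    assert sum(w(n + 1, t) * M(t, x) for t in subsets(x)) == w(n, x)
```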

Sigma Algebras

Recall that the (right) \(\sigma\)-algebra associated with a graph is the \(\sigma\)-algebra generated by the right neighbor sets.

  1. The \(\sigma\)-algebra associated with \((S, \subseteq)\) and with \((S, \Upa)\) is the reference \(\sigma\)-algebra \(\ms P(S)\).
  2. Let \(B_\emptyset = \{x \in S: \#(x) = 1\}\). The \(\sigma\)-algebra associated with \((S, \subset)\) and with \((S, \upa)\) is \[\ms B = \{B \subseteq S: B_\emptyset \subseteq B \text{ or } B_\emptyset \cap B = \emptyset\}\]
Details:
  1. Since \((S, \subseteq)\) is a discrete, left-finite partial order graph, part (a) holds by a result in Section 1.2.
  2. By another result in the same section, the \(\sigma\)-algebras associated with \((S, \subset)\) and with \((S, \upa)\) are the same. Hence we can just concentrate on \((S, \upa)\). Let \(B_x = \{y \in S: x \upa y\}\) denote the neighbor set of \(x \in S\) for \((S, \upa)\), so that \(\ms B = \sigma(\{B_x: x \in S\})\) is the \(\sigma\)-algebra associated with \((S, \upa)\). Note that the notation is consistent since \(B_\emptyset\) as stated in part (b) is the neighbor set of \(\emptyset\). (It is also the set \(A_1\) defined earlier.) Next let \(\ms C\) denote the \(\sigma\)-algebra generated by the collection of subsets of \(S\) consisting of \(B_\emptyset\) and \(\{x\}\) for all \(x \in S\) with \(\#(x) \ge 2\). Trivially \(B_\emptyset \in \ms B\). Suppose that \(x \in S\) with \(\#(x) \ge 2\). Let \(i, \, j \in x\) be distinct elements and let \(x_i = x - \{i\}\) and \(x_j = x - \{j\}\). Then \(\{x\} = B_{x_i} \cap B_{x_j}\) so \(\{x\} \in \ms B\). It follows that \(\ms C \subseteq \ms B\). Conversely, \(B_\emptyset \in \ms C\) again trivially. If \(x \in S\) and \(x \ne \emptyset\) then \(B_x\) is a countable union of singletons \(\{y\}\) with \(\#(y) \ge 2\). Hence \(B_x \in \ms C\). It follows that \(\ms B \subseteq \ms C\). Finally, note that \(B_\emptyset \cup \{x \in S: \#(x) \ge 2\} = \{x \in S: \#(x) \ge 1\}\) and the complement of this set is \(\{\emptyset\}\). It follows that \(\ms B\) is also generated by the collection of sets that consists of \(\{\emptyset\}\), \(B_\emptyset\), and \(\{x\}\) for all \(x \in S\) with \(\#(x) \ge 2\). This collection partitions \(S\) and hence \(\ms B\) consists of all unions of the sets in the collection, which can be restated as in part (b).

Subgraphs

Our most important special case is \(I = \N_+\) since in this case \(I\) is infinite and totally ordered, which leads to additional structure on the collection of finite subsets \(S\). In particular, \(S\) can be partitioned into sub-collections where the difference between the maximum element and the cardinality is constant. In the following definition, we take \(\max(\emptyset) = 0\) by convention.

For \(k \in \N\), define \( S_k = \{x \in S: \max(x) - \#(x) = k\} \).

Note that \(\{S_k: k \in \N\}\) partitions \(S\) and is the partition induced by the function \(v: S \to \N\) defined by \(v(x) = \max(x) - \#(x)\) for \(x \in S\). If \(x \in S_+\) then \(\{1, 2, \ldots, \max(x)\} - x\) is the set of gaps of \(x\) relative to \(\{1, 2, \ldots, \max(x)\}\) and so \(v(x)\) is the cardinality of this set of gaps. So \(S_0\) is the collection of finite subsets with no gaps: \[S_0 = \{\emptyset, \{1\}, \{1, 2\}, \{1, 2, 3\}, \ldots\}\] Thus \((S_0, \subseteq)\) has minimum element \(\emptyset\), and more generally, \((S_0, \subseteq)\) is isomorphic to the standard discrete graph \((\N, \le)\), under the isomorphism \(n \mapsto \{1, 2, \ldots, n\}\) for \(n \in \N\). When \(k \in \N_+\), the partial order graph \((S_k, \subseteq)\) has a much more complex structure. In particular, \((S_k, \subseteq)\) does not have a minimum element. However, we can characterize the minimal elements. To do so succinctly, we will need some additional notation.

For \(x \in S_+\) and \(i \in \{1, 2, \ldots, \#(x)\}\), let \(x(i)\) denote the \(i\)th smallest element of \(x\). We refer to \(x(i)\) as the element of rank \(i\) in \(x\).

For \(k \in \N_+\), the set of minimal elements of \((S_k, \subseteq)\) is \[M_k = \{x \in S_k: x(i) \le k \text{ for all } i \lt \#(x)\}\] There are \(2^k\) minimal elements.

Details:

First we show that if \(x \in S_k\) is not a minimal element of \((S_k, \subseteq)\) then \(x \notin M_k\). Thus, suppose that \(t, \, x \in S_k\) and \(t \subset x\) so that \(x\) is not minimal. Then \(\max(t) > k\) since \(\#(t) \in \N_+\), and \(\max(t) \in t \subset x\). Moreover, \(\#(t) \lt \#(x)\) and so \(\max(t) \lt \max(x)\). Hence the rank of \(\max(t)\) in \(x\) is less than \(\#(x)\). Therefore \(x \notin M_k\).

Next we show that if \(x \in S_k - M_k\) then \(x\) is not a minimal element of \((S_k, \subseteq)\). Thus, suppose that \(x \in S_k\) and \(x(i) > k\) for some \(i \lt \#(x)\). Construct \(t \in S\) as follows: \(x(i) \in t\) and \(t\) contains \(x(i) - k - 1\) elements of \(x\) that are smaller than \(x(i)\). This can be done since \(x(i) - i \le k\), and hence \(x(i) - k - 1 \le i - 1\), and by definition, \(x\) contains \(i - 1\) elements smaller than \(x(i)\). Now note that \(\max(t) - \#(t) = x(i) - [x(i) - k] = k\) so \(t \in S_k\). By construction, \(t \subset x\) and hence \(x\) is not minimal.

Next, note that if \(x \in S_k\) and \(\#(x) \ge k + 2\), then \(x \notin M_k\), since one of the \(k + 1\) elements of \(x\) of rank less than \(\#(x)\) must be at least \(k + 1\). For \(n \le k + 1\), the number of elements \(x \in M_k\) with \(\#(x) = n\) is \(\binom{k}{n - 1}\), since \(x\) must contain the element \(n + k\) and \(n - 1\) elements of \(\{1, 2, \ldots, k\}\). Hence \[\#(M_k) = \sum_{n = 1}^{k + 1} \binom{k}{n - 1} = 2^k\]

Explicitly give the set of minimal elements of \((S_k, \subseteq)\) for \(k \in \{1, 2, 3\}\).

Details:
  1. \(M_1 = \{\{2\}, \{1,3\}\}\)
  2. \(M_2 = \{\{3\}, \{1, 4\}, \{2, 4\}, \{1, 2, 5\}\}\)
  3. \(M_3 = \{\{4\}, \{1, 5\}, \{2, 5\}, \{3, 5\}, \{1, 2, 6\}, \{1, 3, 6\}, \{2, 3, 6\}, \{1, 2, 3, 7\}\}\)
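These lists can be reproduced by brute force. The sketch below (our own illustration) enumerates \(S_k\) inside a truncated universe \(\{1, \ldots, N\}\) with \(N = 8\), which suffices for \(k \le 3\) since minimal elements satisfy \(\max(x) \le 2k + 1\); it then checks both the characterization and the count \(2^k\).

```python
from itertools import chain, combinations

def finite_subsets(N):
    """All subsets of {1, ..., N}, as frozensets."""
    s = range(1, N + 1)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(N + 1))]

def v(x):
    """v(x) = max(x) - #(x), with max(emptyset) = 0 by convention."""
    return (max(x) if x else 0) - len(x)

def minimal_in_Sk(k, N=8):
    """Minimal elements of (S_k, subseteq), by brute force among subsets of {1, ..., N}."""
    Sk = [x for x in finite_subsets(N) if x and v(x) == k]
    return {x for x in Sk if not any(t < x for t in Sk)}

for k in [1, 2, 3]:
    Mk = minimal_in_Sk(k)
    # the characterization: x(i) <= k for every rank i < #(x)
    assert Mk == {x for x in finite_subsets(8)
                  if x and v(x) == k and all(e <= k for e in sorted(x)[:-1])}
    assert len(Mk) == 2 ** k
```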

Finally, we will need some additional sets defined in terms of cardinality and maximum.

Let \(S_{0, 0} = \{\emptyset\}\). For \(n \in \N_+\) and \(m \in \{n, n + 1, \ldots\}\), define \(S_{n, m} = \{x \in S: \#(x) = n, \max(x) = m\}\). In this case, \[ \#(S_{n, m}) = \binom{m - 1}{n - 1}\]

Details:

Note that \(x \in S_{n, m}\) must contain the element \(m\) and \(n - 1\) elements from \(\{1, 2, \ldots, m - 1\}\).

For fixed \(k \in \N_+\), note that \(\{S_{n, n + k}: n \in \N_+\}\) partitions \(S_k\) and \(\{S_{n, n}: n \in \N\}\) partitions \(S_0\). Of course, for \(n \in \N_+\), the set \(S_{n, n}\) is the singleton \(\{\{1, 2, \ldots, n\}\}\).
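The counting formula \(\#(S_{n, m}) = \binom{m - 1}{n - 1}\) is easy to confirm by enumeration; a minimal Python sketch (our own illustration):

```python
from itertools import combinations
from math import comb

def S_nm(n, m):
    """All x subseteq {1, ..., m} with #(x) = n and max(x) = m."""
    # combinations yields sorted tuples, so c[-1] is max(c)
    return [frozenset(c) for c in combinations(range(1, m + 1), n) if c[-1] == m]

for m in range(1, 9):
    for n in range(1, m + 1):
        assert len(S_nm(n, m)) == comb(m - 1, n - 1)
```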

Probability

Next we consider probability distributions on \(S\), with particular emphasis of course on the reliability and rate functions for the graphs, and the possibility of constant rate distributions. So, we assume that we have a probability space \((\Omega, \ms F, \P)\) in the background. When \(I\) is finite, the covering graph does not support a probability distribution on \(S\) since \(I \in S\) and \(I\) is the maximum element of \((S, \subseteq)\). Hence we will usually assume that \(I\) is countably infinite, and to give additional structure, we often assume that \(I = \N_+\).

Existence

Suppose that \(X\) is a random subset of \(I\). We start with a variation of a result from Section 6.2.

Let \(p_i = \P(i \in X)\) for \(i \in I\). If \(\sum_{i \in I} p_i \lt \infty\) then \(X\) is a well-defined random variable in \(S\). That is, \(X\) is finite with probability 1.

Details:

This follows from the first Borel-Cantelli lemma. The condition implies that the event that \(i \in X\) for infinitely many \(i \in I\) has probability 0.

In the context of this result, note that \(\sum_{i \in I} p_i = \E[\#(X)]\).

Reliability

Recall from the result above that the \(\sigma\)-algebra associated with \((S, \subseteq)\) is the reference \(\sigma\)-algebra \(\ms P(S)\). From a result in Section 1.3, the graph is stochastic, so that the reliability function of a probability measure on \(\ms P(S)\) uniquely determines the measure. The following result shows how to recover the reliability function from the density function.

Let \(X\) be a random variable in \(S\) with probability density function \(f\), and let \(F\) denote the reliability function of \(X\) for \((S, \subseteq)\) so that \[F(x) = \sum_{x \subseteq y} f(y), \quad x \in S\] If \(\sum_{x \in S} F(x) = \E\left[2^{\#(X)}\right] \lt \infty\) then \[f(x) = \sum_{x \subseteq y} (-1)^{\#(y) - \#(x)} F(y), \quad x \in S\]

Details:

This follows immediately from the Möbius inversion result in Section 1.4 and the Möbius kernel given above. The fact that \(\sum_{x \in S} F(x) = \E\left[2^{\#(X)}\right]\) follows from the basic moment result.

Recall that the partial order graph \((S, \subseteq)\) is uniform; in the notation used before, if \(x \subseteq y\) then \(d(x, y) = \#(y) - \#(x)\), the length of a path from \(x\) to \(y\) in \((S, \upa)\). The next result is a restatement of the inversion formula above using the sets \(A_n(x)\) defined earlier.

Let \(X\) be a random variable in \(S\) with probability density function \(f\), and let \(F\) denote the reliability function of \(X\) for \((S, \subseteq)\). Then \[f(x) = \sum_{n = 0}^\infty (-1)^n \sum_{y \in A_n(x)} F(y), \quad x \in S\]
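For a finite ground set, the inversion formula can be verified exactly. The sketch below (our own illustration, with exact rational arithmetic) builds a generic density on the subsets of \(I = \{1, 2, 3\}\) — the particular weights are arbitrary — computes the reliability function, and recovers the density.

```python
from itertools import chain, combinations
from fractions import Fraction

def subsets_of(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

I = [1, 2, 3]
S = subsets_of(I)

total = sum(1 + sum(x) for x in S)
f = {x: Fraction(1 + sum(x), total) for x in S}        # a generic density on S

F = {x: sum(f[y] for y in S if x <= y) for x in S}     # reliability function
# Moebius inversion recovers the density from the reliability function
g = {x: sum((-1) ** (len(y) - len(x)) * F[y] for y in S if x <= y) for x in S}
assert g == f
```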

As before, let \(X\) be a random variable in \(S\) with probability density function \(f\), and let \(F\) denote the reliability function of \(X\) for the partial order graph \((S, \subseteq)\) so that \(F(x) = \P(x \subseteq X)\) for \(x \in S\). For \(n \in \N\), let \(F_n\) denote the reliability function of \(X\) for the graph \((S, \upa^n)\), so that \(F_n(x) = \P[X \in A_n(x)]\) for \(x \in S\). Of course, \(F_0 = f\).

Note that \(F = \sum_{n = 0}^\infty F_n\).

The following result gives a recursive relationship between \(F_{n + 1}\) and \(F_n\) for \(n \in \N\).

Let \(n \in \N\). Then \[F_{n + 1}(x) = \frac{1}{n + 1} \sum_{i \notin x} F_n(x \cup \{i\}), \quad x \in S\]

Details:

Note that \[F_{n + 1}(x) = \sum_{y \in A_{n + 1}(x)} f(y) = \frac{1}{n + 1} \sum_{i \notin x} \sum_{z \in A_n(x \cup \{i\})} f(z) = \frac{1}{n + 1} \sum_{i \notin x} F_n(x \cup \{i\}), \quad x \in S\] The factor \(1 / (n + 1)\) is present because we have counted every set \(y \in A_{n + 1}(x)\) exactly \(n + 1\) times in the middle double sum: we could interchange \(i \notin x\) with any of the \(n + 1\) elements in a set \(z \in A_n(x \cup \{i\})\) without changing the subset \(y\).
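The recursion can be checked exactly on a finite model; the uniform density below is just a concrete choice, and the frozenset representation is our own illustration.

```python
from itertools import chain, combinations
from fractions import Fraction

def subsets_of(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

I = [1, 2, 3, 4]
S = subsets_of(I)
f = {x: Fraction(1, len(S)) for x in S}       # uniform density, for concreteness

def F(n, x):
    """F_n(x) = P(X in A_n(x)): mass of supersets of x with n extra elements."""
    return sum(f[y] for y in S if x <= y and len(y) == len(x) + n)

# F_{n+1}(x) = (1 / (n+1)) * sum over i not in x of F_n(x union {i})
for x in S:
    for n in range(3):
        assert F(n + 1, x) == sum(F(n, x | {i}) for i in set(I) - x) / (n + 1)
```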

Hitting Probability

Suppose that \(X\) is a random variable in \(S\). The hitting probability function \(G\) is defined by \[G(x) = \P(X \cap x \ne \emptyset), \quad x \subseteq I\]

The hitting probability function is of fundamental importance in the general theory of random sets (see the book by Matheron) since it completely determines the distribution of \(X\). Note that \(G\) is defined for all subsets of \(I\), not just finite subsets. The following result gives the relationship between the hitting function \(G\) and the reliability function \(F\) for \((S, \subseteq)\).

Suppose that \(X\) is a random variable in \(S\) with reliability function \(F\) for \((S, \subseteq)\) and with hitting probability function \(G\). Then \[G(x) = \sum_{k = 1}^{\#(x)} (-1)^{k - 1} \sum_{y \subseteq x, \#(y) = k} F(y), \quad x \subseteq I\]

Details:

For \(x \in S\), this follows from the standard inclusion-exclusion formula (or from Matheron). The formula then holds for infinite \(x \subseteq I\) by the continuity theorem.
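The inclusion-exclusion identity can be confirmed exactly on a finite model with a generic density (the weights below are arbitrary, and the sketch is our own illustration):

```python
from itertools import chain, combinations
from fractions import Fraction

def subsets_of(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

I = [1, 2, 3, 4]
S = subsets_of(I)
total = sum(1 + sum(x) for x in S)
f = {x: Fraction(1 + sum(x), total) for x in S}        # a generic density on S

F = {x: sum(f[y] for y in S if x <= y) for x in S}     # reliability function
G = {x: sum(f[y] for y in S if y & x) for x in S}      # hitting probability

# G(x) as an alternating sum of F over the nonempty subsets of x
for x in S:
    assert G[x] == sum((-1) ** (len(y) - 1) * F[y] for y in subsets_of(x) if y)
```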

Independent Elements

In this subsection we consider a simple model for a random variable with independent elements. A more interesting and complex model, with conditionally independent elements, is studied in Section 2. Thus, suppose that \(X\) is a random variable in \(S\) with the property that \(i \in X\) with probability \(p_i\), independently over \(i \in I\). By the Borel-Cantelli lemmas, the condition \(\sum_{i \in I} p_i \lt \infty\) in the existence result above is necessary and sufficient for \(X\) to be well defined, and is equivalent to \(\P(X = \emptyset) = \prod_{i \in I} (1 - p_i) \gt 0\).

Suppose that \(X\) has independent elements with parameters \((p_i: i \in I)\) as above. Then

  1. The probability density function \(f\) of \(X\) is given by \[f(x) = \prod_{i \in x} p_i \prod_{i \in x^c} (1 - p_i), \quad x \in S\]
  2. The hitting function \(G\) of \(X\) is given by \[G(x) = 1 - \prod_{i \in x} (1 - p_i), \quad x \subseteq I\]
  3. The reliability function \(F\) of \(X\) for \((S, \subseteq)\) is given by \[F(x) = \prod_{i \in x} p_i, \quad x \in S\]
  4. The rate function \(r\) of \(X\) for \((S, \subseteq)\) is given by \[r(x) = \prod_{i \in x^c} (1 - p_i), \quad x \in S\]
Details:

The results follow from the definition of the model. In part (d), note that if \(x \subseteq y\) then \(y^c \subseteq x^c\) and hence \[r(y) = \prod_{i \in y^c} (1 - p_i) \ge \prod_{i \in x^c} (1 - p_i) = r(x)\]

So as noted in the details, \(X\) has increasing rate. Note also that the hitting function \(G\) and the rate function \(r\) are related by \(G(x) = 1 - r(x^c)\) when \(x^c \in S\).
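For a finite ground set, all four formulas can be verified exactly by enumerating the joint distribution. The parameter values below are arbitrary, and the sketch is our own illustration.

```python
from itertools import chain, combinations
from fractions import Fraction
from math import prod

def subsets_of(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

I = [1, 2, 3]
p = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 5)}

S = subsets_of(I)
# part (a): density of a random set with independent elements
f = {x: prod(p[i] for i in x) * prod(1 - p[i] for i in set(I) - x) for x in S}
assert sum(f.values()) == 1

F = {x: sum(f[y] for y in S if x <= y) for x in S}     # reliability for (S, subseteq)
r = {x: f[x] / F[x] for x in S}                        # rate for (S, subseteq)
G = {x: sum(f[y] for y in S if y & x) for x in S}      # hitting probability

for x in S:
    assert F[x] == prod(p[i] for i in x)                       # part (c)
    assert r[x] == prod(1 - p[i] for i in set(I) - x)          # part (d)
    assert G[x] == 1 - prod(1 - p[i] for i in x)               # part (b)
```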

Suppose again that \(X\) has independent elements with parameters \((p_i: i \in I)\), and let \(N = \#(X)\). Then

  1. \(\E(N) = \sum_{i \in I} p_i\)
  2. \(\var(N) = \sum_{i \in I} p_i (1 - p_i)\)

Of course, as noted above, part (a) is true without the assumption of independence.

Suppose again that \(X\) has independent elements with parameters \((p_i: i \in I)\).

  1. The reliability function \(F_1\) of \(X\) for \((S, \upa)\) is given by \[F_1(x) = f(x) \sum_{j \in x^c} \frac{p_j}{1 - p_j}, \quad x \in S\]
  2. Assuming that \(I\) is countably infinite, the rate function \(r_1\) of \(X\) for \((S, \upa)\) is given by \[r_1(x) = \frac{1}{\sum_{j \in x^c} p_j / (1 - p_j)}, \quad x \in S\]
Details:

For part (a), note that \[F_1(x) = \sum_{j \in x^c} f(x \cup \{j\}) = \sum_{j \in x^c} \left(\prod_{i \in x} p_i\right) p_j \left(\prod_{i \in x^c} (1 - p_i)\right) \frac{1}{1 - p_j}, \quad x \in S\] Of course, if \(I\) is finite and \(x = I\) then \(F_1(x) = 0\) and so the rate function \(r_1\) is undefined at \(x\).

Note that the expression \(p_j / (1 - p_j)\) in parts (a) and (b) is the odds ratio for the event \(\{j \in X\}\), where \(j \in x^c\). In particular, \(X\) does not have constant rate for \((S, \subseteq)\) or for \((S, \upa)\). When \(I\) is finite and the parameters are the same, the model is particularly simple and elementary.
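Parts (a) and (b) can also be checked on a finite model, keeping in mind the caveat in the details that the rate is undefined at \(x = I\) when \(I\) is finite. The parameters below are arbitrary, and the sketch is our own illustration.

```python
from itertools import chain, combinations
from fractions import Fraction
from math import prod

def subsets_of(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

I = [1, 2, 3]
p = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 5)}
S = subsets_of(I)
f = {x: prod(p[i] for i in x) * prod(1 - p[i] for i in set(I) - x) for x in S}

# F_1(x) = P(X in A_1(x)), the reliability function for the covering graph
F1 = {x: sum(f[x | {j}] for j in set(I) - x) for x in S}

def odds(x):
    """Sum of the odds ratios p_j / (1 - p_j) over j not in x."""
    return sum(p[j] / (1 - p[j]) for j in set(I) - x)

for x in S:
    assert F1[x] == f[x] * odds(x)                     # part (a)
    if x != frozenset(I):
        assert f[x] / F1[x] == 1 / odds(x)             # part (b), the rate
```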

Suppose that \(I\) is finite with \(m = \#(I) \in \N_+\) and that \(p_i = p \in (0, 1)\) for all \(i \in I\).

  1. The events \(\{i \in X\}\) for \(i \in I\) form a sequence of \(m\) Bernoulli trials with success parameter \(p\).
  2. The probability density function \(f\) of \(X\) is given by \(f(x) = p^{\#(x)} (1 - p)^{m - \#(x)}\) for \(x \in S\).
  3. The hitting probability function \(G\) is given by \(G(x) = 1 - (1 - p)^{\#(x)}\) for \(x \in S\).
  4. The reliability function \(F\) of \(X\) for \((S, \subseteq)\) is given by \(F(x) = p^{\#(x)}\) for \(x \in S\).
  5. The rate function \(r\) of \(X\) for \((S, \subseteq)\) is given by \(r(x) = (1 - p)^{m - \#(x)}\) for \(x \in S\).

Once again, \(X\) has increasing rate for \((S, \subseteq)\). Of course, \(N = \#(X)\) has the binomial distribution with parameters \(m\) and \(p\) so \(\E(N) = m p\) and \(\var(N) = m p (1 - p)\).
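The constant-parameter model can be verified exactly in the same way; \(m = 4\) and \(p = 1/4\) below are arbitrary choices, and the sketch is our own illustration.

```python
from itertools import chain, combinations
from fractions import Fraction

def subsets_of(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

m, p = 4, Fraction(1, 4)
S = subsets_of(range(1, m + 1))
f = {x: p ** len(x) * (1 - p) ** (m - len(x)) for x in S}   # part (b)
assert sum(f.values()) == 1

# N = #(X) has the binomial distribution with parameters m and p
EN = sum(len(x) * f[x] for x in S)
varN = sum(len(x) ** 2 * f[x] for x in S) - EN ** 2
assert EN == m * p and varN == m * p * (1 - p)

for x in S:
    assert sum(f[y] for y in S if x <= y) == p ** len(x)            # part (d)
    assert sum(f[y] for y in S if y & x) == 1 - (1 - p) ** len(x)   # part (c)
```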