Probability

Consider some physical system $A$. Suppose that a measurement of a given property of this system can result in a number of distinct outcomes. If we wish to determine the probability of obtaining a given outcome at an arbitrary time then we can take one of two approaches. First, we can observe system $A$ at many distinct times; this approach is known as a time average. Second, we can observe many systems that are identical to $A$ at an arbitrary time; this approach is known as an ensemble average. An ensemble average is the most convenient theoretical approach, and the one that we shall adopt in the following discussion, whereas a time average is more directly related to real experiments.

Suppose that there are $N$ systems in our ensemble (i.e., collection of identical systems) and that $N_r$ of these systems exhibit the outcome $r$. The probability of occurrence of outcome $r$ is defined

$\displaystyle P_r = \lim_{N\rightarrow\infty} \,\frac{N_r}{N}.$ (5.1)

It is clear that $P_r$ is a number that lies between 0 and 1. If $P_r=0$ then no systems in the ensemble exhibit the outcome $r$, even in the limit that the number of systems tends to infinity. This is another way of saying that outcome $r$ is impossible. If $P_r=1$ then all systems in the ensemble exhibit the outcome $r$, even in the limit that the number of systems tends to infinity. This is another way of saying that outcome $r$ is certain to occur.
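The frequency definition (5.1) can be illustrated numerically. The following is a minimal sketch (not part of the text) that simulates an ensemble of $N$ identical systems, each a fair six-sided die, and shows that the relative frequency $N_r/N$ settles toward the exact probability as $N$ grows; the function name and the choice of die are illustrative assumptions.

```python
import random

def estimate_probability(outcome, n_systems, rng):
    """Estimate P_r = N_r/N: the fraction of an ensemble of n_systems
    fair six-sided dice that exhibit the given outcome."""
    n_r = sum(1 for _ in range(n_systems) if rng.randint(1, 6) == outcome)
    return n_r / n_systems

rng = random.Random(0)  # fixed seed for reproducibility
for N in (100, 10_000, 1_000_000):
    # As N grows, N_r/N approaches the exact probability 1/6 ~ 0.1667.
    print(f"N = {N:>9}:  N_r/N = {estimate_probability(1, N, rng):.4f}")
```

Taking the limit $N\rightarrow\infty$ in (5.1) corresponds to letting `n_systems` grow without bound; for any finite ensemble the estimate fluctuates about the true value.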

Suppose that a measurement of a given property of some physical system $A$ can lead to any one of $R$ mutually exclusive outcomes. Let the total number of systems in the ensemble be $N$, and let the number of systems that exhibit the outcome $r$ be $N_r$. It follows that

$\displaystyle \sum_{r=1,R} N_r= N.$ (5.2)

However, if we divide both sides of the previous equation by $N$, and then take the limit that $N\rightarrow\infty$, then we obtain the so-called normalization condition,

$\displaystyle \sum_{r=1,R}P_r = 1,$ (5.3)

where use has been made of Equation (5.1). The normalization condition states that the sum of the probabilities of all of the possible outcomes of a measurement of a given property of system $A$ is unity. This condition is equivalent to the self-evident proposition that a measurement of the property is bound to result in one of the possible outcomes of this measurement.
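The normalization condition (5.3) holds identically for relative frequencies at any finite $N$, because every system in the ensemble exhibits exactly one of the $R$ outcomes. A short sketch (an illustration, not from the text; the helper name is an assumption) makes this concrete for a six-outcome system:

```python
import random
from collections import Counter

def outcome_probabilities(n_systems, rng):
    """Relative frequency N_r/N of each outcome r = 1..6 over an
    ensemble of n_systems fair six-sided dice."""
    counts = Counter(rng.randint(1, 6) for _ in range(n_systems))
    return {r: counts[r] / n_systems for r in range(1, 7)}

rng = random.Random(0)
probs = outcome_probabilities(100_000, rng)
# The frequencies sum to N/N = 1 (up to floating-point rounding),
# because every system exhibits some outcome.
print(sum(probs.values()))
```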

Let us determine the probability of occurrence of outcome $r$ or outcome $s$ when an observation is made of our system. Here, $r$ and $s$ are distinct outcomes. There are $N_r+N_s$ systems in our ensemble that exhibit either the outcome $r$ or the outcome $s$, so

$\displaystyle P_{r\vert s} = \lim_{N\rightarrow\infty} \,\frac{N_r+ N_s}{N}= P_r + P_s,$ (5.4)

where use has been made of Equation (5.1). In other words, the probability of observing the outcome $r$ or the outcome $s$ is the sum of the probabilities of occurrence of these two outcomes. For example, the probability of throwing a $1$ on a six-sided die is $1/6$. Likewise, the probability of throwing a 2 is $1/6$. Hence, the probability of throwing a $1$ or a $2$ is $1/6+1/6=1/3$. The previous result can easily be extended to deal with more than two alternative outcomes.
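The die example above can be checked by simulation. This is a hedged sketch (the helper name and seed are assumptions, not from the text): we count the fraction of an ensemble of die throws satisfying a given predicate, and verify that the frequency of "1 or 2" is close to $1/6+1/6=1/3$.

```python
import random

def frequency(predicate, n_systems, rng):
    """Fraction of an ensemble of n_systems fair six-sided dice whose
    outcome satisfies the given predicate."""
    return sum(predicate(rng.randint(1, 6)) for _ in range(n_systems)) / n_systems

rng = random.Random(0)
N = 500_000
# Outcomes 1 and 2 are mutually exclusive, so the frequencies add.
p_1_or_2 = frequency(lambda x: x in (1, 2), N, rng)
print(f"P(1 or 2) ~ {p_1_or_2:.4f}  (exact: 1/3 ~ 0.3333)")
```

Note that the addition rule (5.4) relies on $r$ and $s$ being mutually exclusive; a single throw cannot count toward both $N_r$ and $N_s$.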

Suppose that our system can exhibit two different types of outcome. Type-1 outcomes are labeled $r=1,\cdots, R$. Type-2 outcomes are labeled $s=1,\cdots, S$. Let there be $N$ systems in our ensemble, and let $N_r$ of them exhibit the type-1 outcome $r$, and let $N_s$ of them exhibit the type-2 outcome $s$. The probability of outcome $s$ is

$\displaystyle P_s = \frac{N_s}{N},$ (5.5)

which implies that

$\displaystyle N_s = P_s\,N.$ (5.6)

[Here, the limit $N\rightarrow\infty$ is taken as read; see Equation (5.1).] By analogy, the number of systems that exhibit the type-1 outcome $r$ and the type-2 outcome $s$ is

$\displaystyle N_{r\otimes s}= P_s\,N_r.$ (5.7)

Hence, the probability of obtaining both the type-1 outcome $r$ and the type-2 outcome $s$ simultaneously is

$\displaystyle P_{r\otimes s} = \lim_{N\rightarrow\infty} \,\frac{N_{r\otimes s}}{N}= P_s\lim_{N\rightarrow\infty} \,\frac{N_{r}}{N} = P_r\,P_s,$ (5.8)

where use has been made of Equation (5.1). However, the previous result is only valid provided outcomes $r$ and $s$ are statistically independent of one another. In other words, obtaining the outcome $r$ must not affect the probability of obtaining the outcome $s$. As an example of the previous result, consider a system consisting of two six-sided dice. The probability of throwing a 1 on either die is $1/6$. Hence, the probability of simultaneously throwing a 1 on both dice is $1/6\times 1/6=1/36$. The previous result can easily be extended to deal with more than two types of outcome.
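The two-dice example can likewise be checked against Equation (5.8). In this sketch (an illustration under the assumption of a fair pair of dice; the function name is invented), the two throws are generated independently, so the joint frequency of a double 1 should approach $P_1\,P_1 = 1/36$.

```python
import random

def joint_frequency(n_systems, rng):
    """Fraction of an ensemble of n_systems two-dice systems that
    exhibit a 1 on both (independently thrown) dice."""
    hits = 0
    for _ in range(n_systems):
        die1 = rng.randint(1, 6)  # type-1 outcome
        die2 = rng.randint(1, 6)  # type-2 outcome, independent of die1
        hits += (die1 == 1 and die2 == 1)
    return hits / n_systems

rng = random.Random(0)
N = 1_000_000
print(f"P(1 and 1) ~ {joint_frequency(N, rng):.5f}  (exact: 1/36 ~ 0.02778)")
```

If the two throws were correlated (say, the second die always copied the first), the product rule would fail, which is why statistical independence is essential to (5.8).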