Probability Tidbits 6 - Kolmogorov's 0-1 Law | Ray

source link: https://oneraynyday.github.io/math/2022/09/11/Kolmogorovs-0-1-Law/
Probability Tidbits 6 - Kolmogorov’s 0-1 Law

In the 4th tidbit we discussed the definitions of almost surely and infinitely often. In the discussion of the converse of the second Borel-Cantelli lemma, we referenced Kolmogorov's 0-1 Law, but didn't prove it or even state the general result. The law states that for a sequence of independent random variables {Xn}n, the tail σ-algebra is P-trivial. Let's define some terms in that word salad.

Independence

The definition of independence taught in undergrad probability theory concerns events: for events A,B∈F, where F is some σ-algebra, the two are independent if:

P(A∩B)=P(A)P(B)

Typically, for events A1,…,AN to be independent, they must be jointly independent (pairwise independence isn't enough) - every finite subcollection must factor:

P(⋂n∈SAn)=∏n∈SP(An)∀S⊆{1,…,N}
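The gap between pairwise and joint independence can be checked exactly on a finite probability space. Below is a small sketch of the classic example: two fair coin flips with events "first flip is heads," "second flip is heads," and "the flips agree." All names here are illustrative, not from the original post.

```python
from fractions import Fraction

# Two fair coin flips; Omega has four equally likely outcomes.
omega = [(a, b) for a in (0, 1) for b in (0, 1)]
P = lambda event: Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 1          # first flip heads
B = lambda w: w[1] == 1          # second flip heads
C = lambda w: w[0] == w[1]       # the two flips agree

# Every pair factors: P(X ∩ Y) == P(X) P(Y), so A, B, C are pairwise independent.
pairs = [(A, B), (A, C), (B, C)]
pairwise = all(P(lambda w: X(w) and Y(w)) == P(X) * P(Y) for X, Y in pairs)

# But the triple intersection does not factor, so they are not jointly independent.
triple = P(lambda w: A(w) and B(w) and C(w))   # A ∩ B ∩ C = {(1, 1)}
product = P(A) * P(B) * P(C)

print(pairwise, triple, product)  # True 1/4 1/8
```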

In the context of measure theoretic probability theory, we often concern ourselves with independence on the sub σ-algebra level. Suppose we have two sub σ-algebras H,G⊂F, then they are independent if:

P(H∩G)=P(H)P(G)∀H∈H,G∈G

When we say two random variables are independent, we mean their corresponding generated σ-algebras are independent. As we discussed in the 3rd tidbit, working with σ-algebras is difficult, and proving the above independence directly is not easy. Thankfully π-systems, a much simpler construct often embedded in σ-algebras, come to the rescue here. Recall that two finite measures with the same total mass that agree on a π-system are the same measure on the σ-algebra generated by that π-system. Suppose the two sub σ-algebras H,G⊂F have corresponding π-systems I,J such that σ(I)=H,σ(J)=G. A natural question is: if I,J are independent, does that make H,G independent as well? The answer is yes - in fact this is an iff statement. We know that

P(I∩J)=P(I)P(J)

If we fix I∈I, we get two measures on G: μ′(J):=P(I∩J) and μ″(J):=P(I)P(J). By assumption they agree on the π-system J, and both have total mass P(I), so they are the same measure on σ(J)=G; this gives independence on the σ-algebra level. So now we know that I, a π-system, is independent of G, a σ-algebra. Now fix some G∈G and apply the same argument to I, so that G is independent of σ(I)=H. Hence H,G are independent.
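On a finite space the σ-algebra-level statement can be verified by brute force. The sketch below (names and helpers are mine, not from the post) builds σ(X1) and σ(X2) for two fair coin flips as all unions of the coordinate level sets, then checks the factorization P(H∩G)=P(H)P(G) over every pair of events.

```python
from fractions import Fraction
from itertools import combinations

omega = [(a, b) for a in (0, 1) for b in (0, 1)]

def prob(event):
    # Uniform measure on the four outcomes.
    return Fraction(len(event), len(omega))

def sigma_of_partition(blocks):
    """All unions of partition blocks: the sigma-algebra the partition generates."""
    events = []
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            events.append(frozenset().union(*combo) if combo else frozenset())
    return events

# H = sigma(X1), G = sigma(X2): generated by the level sets of each coordinate.
H = sigma_of_partition([frozenset(w for w in omega if w[0] == v) for v in (0, 1)])
G = sigma_of_partition([frozenset(w for w in omega if w[1] == v) for v in (0, 1)])

# Check P(H ∩ G) == P(H) P(G) for every H in H, G in G.
independent = all(prob(h & g) == prob(h) * prob(g) for h in H for g in G)
print(independent)  # True
```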

Tail σ-algebra

Recall in the 5th tidbit we said that a random variable X can generate a σ-algebra:

σ(X):=σ({{ω∈Ω:X(ω)∈B}:B∈B})

Suppose we have a sequence of random variables {Xn}n then collections of these random variables can generate a sigma algebra:

σ(X1,…,XN)=σ({{ω∈Ω:Xi(ω)∈B}:B∈B,i≤N})

Let's define a specific type of generated σ-algebra created by countable collections:

Tn=σ(Xn,Xn+1,…),T=⋂nTn

Here, T is the tail σ-algebra, consisting of tail events.
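A defining feature of a tail event is that it doesn't depend on any finite prefix of the sequence. Here is a hedged numerical sketch (my own illustration, not from the post): the limiting frequency of heads is a tail quantity, so overwriting finitely many early flips barely moves it, while a "head" quantity like the first flip obviously changes.

```python
import random

rng = random.Random(0)
n = 200_000
path = [rng.randint(0, 1) for _ in range(n)]

# Perturb the path on a finite prefix: flip the first 1000 coin values.
altered = path.copy()
for i in range(1000):
    altered[i] = 1 - altered[i]

freq = sum(path) / n
freq_altered = sum(altered) / n

# A tail quantity is insensitive to the finite perturbation: the empirical
# frequencies can differ by at most 1000 / n, which vanishes as n grows.
print(abs(freq - freq_altered) <= 1000 / n)  # True
```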

P-trivial

This term, though not as commonly used, is pretty simple. A σ-algebra T⊂F is P-trivial if P(A)∈{0,1} for every A∈T. A consequence: any T-measurable random variable X is almost surely constant, i.e. P(X=c)=1 for some c∈R. On the measurable space (R,B), its law is a point mass at c, and its distribution function is a right-continuous step function jumping from 0 to 1 at c (recall from tidbit 5 that all distribution functions are right continuous).
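The "measurable w.r.t. a trivial σ-algebra implies constant" fact can be checked mechanically on a finite space. In this sketch (helper names are mine) the trivial σ-algebra {∅,Ω} over a fair die is exactly P-trivial, a constant function is measurable with respect to it, and the die roll itself is not.

```python
omega = list(range(6))                       # outcomes of a fair die
trivial = [frozenset(), frozenset(omega)]    # the trivial sigma-algebra {∅, Ω}

def measurable(f, sigma):
    """f is sigma-measurable iff every preimage f^{-1}({c}) lies in sigma."""
    return all(frozenset(w for w in omega if f(w) == c) in sigma
               for c in set(map(f, omega)))

const = lambda w: 7    # constant function: preimage is Ω, so it is measurable
roll = lambda w: w     # the roll itself: singleton preimages are not in {∅, Ω}

print(measurable(const, trivial), measurable(roll, trivial))  # True False
```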

Kolmogorov’s 0-1 Law

Now that we have all the setup, let's figure out why Kolmogorov's law is true. To remind us: the law states that for a sequence of independent random variables {Xn}n, the tail σ-algebra T is P-trivial. Let's start off with these σ-algebras:

XN=σ(X1,...,XN),TN=σ(XN+1,XN+2,...)

These are basically "the measurable events before and after N". Intuitively XN⊥TN, but it's hard to reason about these huge σ-algebras directly, so let's make a π-system for each:

IN={{ω:Xi(ω)≤xi∀i≤N}:x1,…,xN∈R},JN={{ω:Xi(ω)≤xi∀N<i≤M}:M>N,xN+1,…,xM∈R}

These are π-systems since:

A={ω:Xi(ω)≤ai∀i≤N}∈IN
B={ω:Xi(ω)≤bi∀i≤N}∈IN
A∩B={ω:Xi(ω)≤min(ai,bi)∀i≤N}∈IN

and obviously IN≠∅. A similar argument can be made for JN. These π-systems are independent since the Xi are independent, and by our previous result in the independence section, we have XN⊥TN. But we know that T⊂TN∀N! This means T⊥XN∀N. Now let's push N→∞ - is σ(X1,X2,…)⊥T? We know that XN⊂XN+1∀N, and as with most "monotonically increasing sets of sets," the union is a π-system:

A,B∈⋃nXn⟹A∈Xa,B∈Xb for some a,b∈N⟹A∩B∈Xmax(a,b)⊂⋃nXn

(both A and B lie in Xmax(a,b), since the Xn are increasing, so their intersection does too)

Let's denote this π-system as K:=⋃nXn. We know that K⊥T, since any element of K belongs to some Xn, which is independent of T. We also know that σ(K)⊥T, since independence of π-systems extends to their generated σ-algebras. But wait - T⊂σ(K)! This is true since σ(K)=σ(X1,X2,…) and T:=⋂nσ(Xn,Xn+1,…)⊂σ(X1,X2,…). This means T⊥T, so that

∀T∈T:P(T∩T)=P(T)P(T)⟹P(T)=P(T)2⟹P(T)∈{0,1}

Thus T is P-trivial. With Kolmogorov's law, we now know that a tail event T either happens with probability 0 or with probability 1 - there is no in between (kind of like a law of excluded middle). We used this in the converse of the second Borel-Cantelli lemma: since limsupn→∞En is a tail event, if the probability of it not occurring is greater than 0, then it must have probability 0.
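A Monte Carlo sketch of the dichotomy (my own illustration, with made-up parameter names): "the running average of fair coin flips converges to 1/2" is a tail event, since changing finitely many flips cannot affect the limit, so Kolmogorov forces its probability to be 0 or 1 - and the strong law of large numbers says it is 1. Empirically, essentially every simulated path looks convergent.

```python
import random

rng = random.Random(42)
n_paths, n_flips, eps = 500, 10_000, 0.02

def near_half(rng):
    # Is the empirical mean of n_flips fair coin flips within eps of 1/2?
    mean = sum(rng.randint(0, 1) for _ in range(n_flips)) / n_flips
    return abs(mean - 0.5) < eps

# The fraction of paths that look convergent should be near 1, consistent
# with the tail event having probability exactly 1 (not, say, 0.7).
frac = sum(near_half(rng) for _ in range(n_paths)) / n_paths
print(frac)
```

Of course a finite simulation can only suggest the answer; the point of the 0-1 law is that probabilities like 0.7 are impossible for tail events, so an empirical fraction near 1 is evidence for probability exactly 1.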
