Introduction

We begin by defining random variables \(\xi_1, \dots, \xi_n\) and their partial sums \(S_1, \dots, S_n\), i.e., \(S_k = \sum_{i=1}^k \xi_i\). The sequence \(S_1, S_2, \dots\) defines a random walk; in the simple random walk on \(\mathbb{Z}\), the \(\xi_i\) are i.i.d. coin flips taking the values +1 and -1 each with probability 1/2. The process \(X^{(n)} := \big(X^{(n)}_t, \, t \in [0,1]\big)\) defined by

\[\begin{equation} X^{(n)}_t := \frac{1}{\sigma \sqrt{n}}\Big(S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor)\xi_{\lfloor nt \rfloor + 1}\Big), \tag{1} \end{equation}\]

linearly interpolates the rescaled random walk, i.e., it passes through the points \(0\equiv S_0/(\sigma \sqrt{n}), S_1/(\sigma \sqrt{n}), \dots, S_n/(\sigma \sqrt{n})\) at times \(0, 1/n, \dots, 1\). Figure 1 gives a graphical representation of this connection.
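As a quick sketch (not from the original text), the construction in (1) can be written out numerically. Here we assume simple \(\pm 1\) steps, so \(\sigma = 1\); `donsker_path` is a hypothetical helper name:

```python
import numpy as np

def donsker_path(xi, t, sigma=1.0):
    """Evaluate X^{(n)}_t from equation (1) for a single t in [0, 1].

    xi : array of the n step variables xi_1, ..., xi_n
    The path linearly interpolates S_k/(sigma*sqrt(n)) at times k/n.
    """
    n = len(xi)
    # partial sums S_0 = 0, S_1, ..., S_n
    S = np.concatenate(([0.0], np.cumsum(xi)))
    k = int(np.floor(n * t))
    frac = n * t - k                      # fractional part of nt
    inc = xi[k] if k < n else 0.0         # xi_{k+1} (0-based indexing)
    return (S[k] + frac * inc) / (sigma * np.sqrt(n))

rng = np.random.default_rng(0)
n = 1000
xi = rng.choice([-1.0, 1.0], size=n)  # simple random walk steps, sigma = 1
```

At the grid times \(t = k/n\) the process reduces exactly to \(S_k/(\sigma\sqrt{n})\), and between grid points it interpolates linearly, matching the description above.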


Figure 1: Connection between random walk and \(X^{(n)}\) process

The process \(X^{(n)}\) is an element of \(C := C[0,1]\), the space of continuous functions \(f: [0,1] \to \mathbb{R}\) endowed with the uniform metric \[ d(f,g) = \sup_{0 \leq t \leq 1} |f(t) - g(t)|, \] for \(f, g \in C\). We equip \(C\) with its Borel \(\sigma\)-algebra. See Example 1.3 and Section 7 of Billingsley (1999) for properties of this space. The following result about the process \(X^{(n)}\), called Donsker’s theorem, or Donsker’s invariance principle, is fundamental.

Theorem 1 (Donsker’s Theorem) Let \(\xi_1, \xi_2, \dots\) be i.i.d. random variables with mean 0 and variance \(\sigma^2 \in (0, \infty)\). Then for \(X^{(n)}\) defined by (1), we have \[ (X^{(n)}_t, \, t \in [0,1]) \Rightarrow (B_t, \, t \in [0,1]), \] in \(C\), where \((B_t, \, t \in [0,1])\) is standard Brownian motion.
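One immediate consequence of the theorem, at the fixed time \(t = 1\), is the classical central limit theorem: \(X^{(n)}_1 = S_n/(\sigma\sqrt{n}) \Rightarrow N(0,1)\). A minimal Monte Carlo sketch of this marginal convergence, again assuming \(\pm 1\) steps so \(\sigma = 1\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 2000, 20000

# Each row is one random walk of n steps; S_n / sqrt(n) is one sample of X^{(n)}_1.
steps = rng.choice([-1.0, 1.0], size=(trials, n))
samples = steps.sum(axis=1) / np.sqrt(n)

# The empirical mean and standard deviation should be close to 0 and 1.
print(samples.mean(), samples.std())
```

This only checks the one-dimensional marginal; the strength of Donsker’s theorem is that the convergence holds for the whole path in \(C\), so continuous functionals of the path (such as the running maximum discussed below) converge as well.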


Figure 2 illustrates Donsker’s theorem for the simple random walk: as \(n\) grows, the rescaled random walk path becomes indistinguishable from a Brownian motion sample path.


Figure 2: Donsker’s theorem at work

History

Donsker’s theorem was first proved by Monroe Donsker (1951). In that paper he also gave the motivation for studying such an invariance principle. A function \(f: C \to \mathbb{R}\) is called a functional; Donsker was motivated by the earlier work of Erdős and Kac (1946), who showed, among other results, that the limiting distribution of \[ \max_{1 \leq i \leq n} \frac{S_i}{\sigma\sqrt{n}}, \]

when \(\sigma=1\), is the same as the distribution of \(\sup_{0 \leq t \leq 1} x(t)\) on \(C\) under Wiener measure. In other words, what was shown was that \[ P\bigg( \max_{1 \leq i \leq n} \frac{S_i}{\sqrt{n}} \leq x\bigg) \to P\Big(\sup_{0 \leq t \leq 1} B_t \leq x\Big). \]
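The limiting distribution on the right is explicit: by the reflection principle, \(P(\sup_{0 \leq t \leq 1} B_t \leq x) = 2\Phi(x) - 1\) for \(x > 0\), where \(\Phi\) is the standard normal CDF. A small simulation sketch (assuming \(\pm 1\) steps) comparing the empirical distribution of the walk’s maximum against this limit:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n, trials, x = 1000, 20000, 1.0

# Running maxima of `trials` independent random walks of length n.
steps = rng.choice([-1.0, 1.0], size=(trials, n))
S = np.cumsum(steps, axis=1)
empirical = np.mean(S.max(axis=1) / np.sqrt(n) <= x)

# Reflection principle: P(sup_{t<=1} B_t <= x) = 2*Phi(x) - 1 for x > 0.
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
limit = 2.0 * Phi(x) - 1.0

print(empirical, limit)
```

For moderate \(n\) the two values already agree to within a few percent, with the discrete maximum slightly overshooting the limit because the walk is only observed at the grid times \(k/n\).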

References

Billingsley, Patrick. 1999. Convergence of Probability Measures. 2nd edition. John Wiley & Sons.

Donsker, Monroe D. 1951. “An Invariance Principle for Certain Probability Limit Theorems.” In Four Papers on Probability, 50–61. Memoirs of the American Mathematical Society 6. Providence, Rhode Island: American Mathematical Society.

Erdős, Paul, and Mark Kac. 1946. “On Certain Limit Theorems of the Theory of Probability.” Bulletin of the American Mathematical Society 52: 292–302.