Law of Large Numbers and Central Limit Theorem under Nonlinear Expectations

The law of large numbers (LLN) and the central limit theorem (CLT) have long been known as two fundamental results in probability theory. Recently, problems of model uncertainty in statistics, measures of risk and superhedging in finance motivated us to introduce, in [4] and [5] (see also [2], [3] and the references therein), a new notion of sublinear expectation, called "G-expectation", and the related "G-normal distribution", from which we were able to define G-Brownian motion as well as the corresponding stochastic calculus. The notion of G-normal distribution plays the same important role in the theory of sublinear expectations as that of normal distribution in classical probability theory. It is then natural and interesting to ask whether we have the corresponding LLN and CLT under a sublinear expectation and, in particular, whether the limit distribution in the CLT is a G-normal distribution. This paper gives an affirmative answer. The proof of our CLT is short since we borrow a deep interior estimate for fully nonlinear PDEs in [6], which extended a profound result of [1] (see also [7]) to parabolic PDEs. The assumptions of our LLN and CLT can still be improved. But the phenomenon discovered here plays the same important role in the theory of nonlinear expectations as the classical LLN and CLT do in classical probability theory.


Introduction
The law of large numbers (LLN) and the central limit theorem (CLT) have long been known as two fundamental results in probability theory.
Recently, problems of model uncertainty in statistics, measures of risk and superhedging in finance motivated us to introduce, in [4] and [5] (see also [2], [3] and the references therein), a new notion of sublinear expectation, called "G-expectation", and the related "G-normal distribution" (see Def. 10), from which we were able to define G-Brownian motion as well as the corresponding stochastic calculus. The notion of G-normal distribution plays the same important role in the theory of sublinear expectations as that of normal distribution in classical probability theory. It is then natural and interesting to ask whether we have the corresponding LLN and CLT under a sublinear expectation and, in particular, whether the limit distribution in the CLT is a G-normal distribution. This paper gives an affirmative answer. The proof of our CLT is short since we borrow a deep interior estimate for fully nonlinear PDEs in [6], which extended a profound result of [1] (see also [7]) to parabolic PDEs. The assumptions of our LLN and CLT can still be improved. But the phenomenon discovered here plays the same important role in the theory of nonlinear expectations as the classical LLN and CLT do in classical probability theory.

Sublinear expectations
Let Ω be a given set and let H be a linear space of real functions defined on Ω such that if X_1, · · · , X_n ∈ H then ϕ(X_1, · · · , X_n) ∈ H for each ϕ ∈ C_poly(R^n), where C_poly(R^n) denotes the space of continuous functions of polynomial growth, i.e., those ϕ for which there exist constants C and k ≥ 0 such that |ϕ(x)| ≤ C(1 + |x|^k). H is considered as a space of "random variables".
Here we use C_poly(R^n) in our framework only for technical reasons. In general it can be replaced by C_b(R^n), the space of bounded continuous functions; by lip_b(R^n), the space of bounded Lipschitz-continuous functions; or by L^0(R^n), the space of Borel measurable functions.
A sublinear expectation E on H is a functional E : H → R satisfying the following properties: for all X, Y ∈ H,

(a) Monotonicity: if X ≥ Y then E[X] ≥ E[Y].
(b) Sub-additivity (or self-dominated property): E[X] − E[Y] ≤ E[X − Y].
(c) Positive homogeneity: E[λX] = λE[X] for each λ ≥ 0.
(d) Constant translatability: E[X + c] = E[X] + c for each constant c.

For each given p ≥ 1, we denote by H_p the collection of X ∈ H such that E[|X|^p] < ∞. It can be checked (see [4] and [5]) that

E[|X + Y|^p]^{1/p} ≤ E[|X|^p]^{1/p} + E[|Y|^p]^{1/p}.

We also have H_q ⊆ H_p for 1 ≤ p ≤ q < ∞ and, if 1/p + 1/q = 1, then for each X ∈ H_p and Y ∈ H_q we have X · Y ∈ H_1 and

E[|X · Y|] ≤ E[|X|^p]^{1/p} E[|Y|^q]^{1/q}.

It follows that H_p is a linear space and the sublinear expectation E[·] naturally induces a norm ‖X‖_p := E[|X|^p]^{1/p} on H_p. The completion of H_p under this norm forms a Banach space. The expectation E[·] can be extended to this Banach space as well, and the extended E[·] still satisfies (a)–(d) above. But in this paper only the pre-Banach space H_p is involved.
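As a concrete numerical illustration (not part of the paper: the finite sample space, the scenario weights and the test functions below are invented for the example), the properties (b)–(d) and the Hölder-type inequality E[|X·Y|] ≤ E[|X|^p]^{1/p} E[|Y|^q]^{1/q} can be checked for the canonical example of a sublinear expectation, namely an upper expectation E[X] = sup_θ E_θ[X] over a family of probability models:

```python
import numpy as np

# A sublinear expectation realized as an upper expectation over a finite
# family of probability models on a finite sample space.  This concrete
# construction is only an illustration; the paper's E[.] is abstract.
outcomes = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
models = [
    np.array([0.10, 0.20, 0.40, 0.20, 0.10]),
    np.array([0.30, 0.20, 0.00, 0.20, 0.30]),
    np.array([0.05, 0.15, 0.60, 0.15, 0.05]),
]

def E(phi):
    """Upper expectation E[phi(X)] = sup_theta E_theta[phi(X)]."""
    return max(float(p @ phi(outcomes)) for p in models)

X = lambda x: x        # a "random variable" (identity)
Y = lambda x: x ** 2   # another random variable

# (b) sub-additivity: E[X + Y] <= E[X] + E[Y]
assert E(lambda x: X(x) + Y(x)) <= E(X) + E(Y) + 1e-12
# (c) positive homogeneity: E[lam X] = lam E[X] for lam >= 0
assert abs(E(lambda x: 3.0 * X(x)) - 3.0 * E(X)) < 1e-12
# (d) constant translatability: E[X + c] = E[X] + c
assert abs(E(lambda x: X(x) + 5.0) - (E(X) + 5.0)) < 1e-12
# Hoelder-type inequality with p = q = 2:
p, q = 2.0, 2.0
rhs = E(lambda x: np.abs(X(x)) ** p) ** (1 / p) * E(lambda x: np.abs(Y(x)) ** q) ** (1 / q)
assert E(lambda x: np.abs(X(x) * Y(x))) <= rhs + 1e-12
print("properties (b)-(d) and the Hoelder-type inequality hold")
```

Monotonicity (a) holds for the same reason: each linear E_θ is monotone, and the pointwise supremum preserves monotonicity.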

Law of Large Numbers
Theorem 3 (Law of large numbers) Let a sequence {X_i}_{i=1}^∞ in H_2 be identically distributed, and let each X_{i+1} be independent of (X_1, · · · , X_i). We assume furthermore that

E[X_i] = E[−X_i] = 0 and E[X_i^2] ≤ σ^2,   (1)

where σ ∈ (0, ∞) is a fixed number. Then the sum S_n = X_1 + · · · + X_n satisfies the following law of large numbers:

lim_{n→∞} E[(S_n/n)^2] = 0.

Moreover, the convergence rate is dominated by

E[(S_n/n)^2] ≤ σ^2/n.

Proof. By a simple calculation, we have, using Proposition 2,

E[S_n^2] = E[S_{n−1}^2 + 2 S_{n−1} X_n + X_n^2] ≤ E[S_{n−1}^2] + σ^2 ≤ n σ^2,

since X_n is independent of (X_1, · · · , X_{n−1}) and, for each fixed value s of S_{n−1}, E[2 s X_n + X_n^2] ≤ 2 s^+ E[X_n] + 2 s^− E[−X_n] + E[X_n^2] ≤ σ^2 by (1). Dividing by n^2 yields the stated bound.

Remark 4 The above condition (1) can be easily extended to the situation E[X_i] = −E[−X_i] = μ and E[(X_i − μ)^2] ≤ σ^2 for some μ ∈ R. In this case, applying the theorem to X̄_i := X_i − μ, we have

E[(S_n/n − μ)^2] ≤ σ^2/n.
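A quick Monte Carlo sanity check (my own illustration, not from the paper; the scenario-switching rule and all parameters are invented): the rate E[(S_n/n)^2] ≤ σ^2/n holds even when the distribution of each increment is chosen adaptively, path by path, within a family of mean-zero laws of variance at most σ^2:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustration: the LLN bound E[(S_n/n)^2] <= sigma^2/n is uniform over
# model uncertainty.  Each increment X_{i+1} is mean-zero with a variance
# chosen adaptively (depending on the running sum) within [0, sigma^2].
sigma = 2.0
n, n_paths = 1000, 5000

S = np.zeros(n_paths)
for i in range(n):
    # adversarial scenario switching: high variance when S > 0, low otherwise
    var_i = np.where(S > 0, sigma ** 2, 0.25 * sigma ** 2)
    S += rng.normal(0.0, np.sqrt(var_i))

mean_sq = np.mean((S / n) ** 2)
print(f"empirical E[(S_n/n)^2] = {mean_sq:.5f}, bound sigma^2/n = {sigma**2 / n:.5f}")
assert mean_sq <= sigma ** 2 / n
```

The empirical second moment stays below σ^2/n because each conditional increment variance is capped by σ^2, exactly the mechanism used in the proof above.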

Central Limit Theorem
We now consider a generalization of the notion of the distribution of a random variable under E. To this purpose we introduce a set Ω̄, a linear space H̄ of real functions defined on Ω̄, and a sublinear expectation Ē[·], constructed in exactly the same way as Ω, H and E in Section 2. We can similarly define H̄_p for p ≥ 1.

Definition 6 A random variable X ∈ H is said to be independent under E[·] to Y = (Y_1, · · · , Y_n) ∈ H^n if, for each test function ϕ ∈ C_poly(R^{n+1}) such that ϕ(X, Y) ∈ H_1, we have ϕ(X, y) ∈ H_1 for each y ∈ R^n and, with ϕ̄(y) := E[ϕ(X, y)], ϕ̄(Y) ∈ H_1 and

E[ϕ(X, Y)] = E[ϕ̄(Y)].

A random variable X ∈ H_2 is said to be weakly independent to Y if the above relation holds with the test functions ϕ taken only among, instead of C_poly(R^{n+1}), the functions of the form

ϕ(x, y) = ψ_0(y) + ψ_1(y) x + ψ_2(y) x^2, with ψ_0, ψ_1, ψ_2 ∈ C_b(R^n).

Remark 7
In the case of a linear expectation, this notion coincides with classical independence. Note that under sublinear expectations "X is independent to Y" does not automatically imply that "Y is independent to X".
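This asymmetry can be seen in a small finite example (the distributions and numerical values below are my own, chosen only for illustration). Independence "X to Y" evaluates E[ϕ(X, Y)] by first integrating out X with y frozen, then Y; reversing the order of the iterated sublinear expectations changes the value:

```python
import numpy as np

# Illustration of the asymmetry of independence under sublinear expectation.
# X in {-1, +1} with uncertain mean:       E_theta[X] = 2*theta - 1, theta in [0.3, 0.7]
# Y in {-1, 0, +1} with uncertain variance: E_eta[Y^2] = eta,        eta in [0.2, 0.8]
thetas = np.linspace(0.3, 0.7, 41)
etas = np.linspace(0.2, 0.8, 61)

def EX(f):
    """Sublinear expectation of f(X): sup over the mean-uncertainty family."""
    return max(th * f(1.0) + (1 - th) * f(-1.0) for th in thetas)

def EY(g):
    """Sublinear expectation of g(Y): P(Y=+-1) = eta/2 each, P(Y=0) = 1 - eta."""
    return max(0.5 * e * g(1.0) + 0.5 * e * g(-1.0) + (1 - e) * g(0.0) for e in etas)

phi = lambda x, y: x * y * y   # test function phi(x, y) = x * y^2

# "X independent to Y": integrate out X for frozen y, then take EY.
x_indep_y = EY(lambda y: EX(lambda x: phi(x, y)))
# "Y independent to X": integrate out Y for frozen x, then take EX.
y_indep_x = EX(lambda x: EY(lambda y: phi(x, y)))

print(x_indep_y, y_indep_x)   # 0.32 versus 0.5: the two orders disagree
assert abs(x_indep_y - 0.32) < 1e-9
assert abs(y_indep_x - 0.50) < 1e-9
```

Here EX(x·y^2) = 0.4·y^2 for frozen y, so the first order gives 0.4·0.8 = 0.32, while the second order rewards the positive x-scenario with the large variance of Y and gives 0.5.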
Remark 8 The above law of large numbers still holds if the sequence X_1, X_2, · · · is assumed to be dynamically independent and identically distributed.

We denote by lip_b(R) the collection of all uniformly Lipschitz-continuous and bounded real functions on R. It is a linear space.
Remark 11 A simple construction of a G-normally distributed random variable ξ is to take Ω = R and H = C_poly(R). The expectation E is defined by E[ϕ] := u_ϕ(1, 0), where u = u_ϕ is the unique continuous viscosity solution of polynomial growth of (3) with ϕ ∈ C_poly(R) = H_1. The G-normally distributed random variable is then ξ(ω) ≡ ω, ω ∈ Ω = R.
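The construction in Remark 11 can be sketched numerically (my own illustration; the test function, grid, boundary treatment and parameters σ̄, σ are assumptions, and I take the G-heat equation (3) in its standard form ∂_t u = G(∂²_xx u) with G(a) = ½(σ̄² a⁺ − σ² a⁻)). For a convex ϕ the G-expectation coincides with the classical expectation at the upper volatility σ̄, which gives a check: for ϕ(x) = x⁺ and σ̄ = 1 one should recover E[ϕ(ξ)] = 1/√(2π) ≈ 0.3989:

```python
import numpy as np

# Numerical sketch of Remark 11: E[phi(xi)] = u_phi(1, 0), where u solves
#     du/dt = G(d2u/dx2),  G(a) = 0.5 * (sbar^2 * a^+ - slow^2 * a^-),
#     u(0, .) = phi,
# solved here by an explicit finite-difference scheme.
sbar, slow = 1.0, 0.5            # volatility bounds (assumed parameters)
L, dx = 6.0, 0.05                # spatial truncation and mesh
x = np.arange(-L, L + dx / 2, dx)
dt = 0.4 * dx ** 2 / sbar ** 2   # explicit-scheme (CFL) stability condition
steps = int(round(1.0 / dt))
dt = 1.0 / steps                 # land exactly on t = 1

phi = np.maximum(x, 0.0)         # convex test function phi(x) = x^+
u = phi.copy()
for _ in range(steps):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    G = 0.5 * (sbar ** 2 * np.maximum(uxx, 0.0) - slow ** 2 * np.maximum(-uxx, 0.0))
    u = u + dt * G
    u[0], u[-1] = phi[0], phi[-1]   # crude far-field boundary condition

E_phi = u[np.argmin(np.abs(x))]     # u(1, 0)
print(f"u(1, 0) = {E_phi:.4f}  (classical value sbar/sqrt(2*pi) = {sbar/np.sqrt(2*np.pi):.4f})")
assert abs(E_phi - sbar / np.sqrt(2 * np.pi)) < 0.01
```

For non-convex ϕ the nonlinearity of G is active and u(1, 0) lies strictly between the two classical values at σ and σ̄, which is exactly the variance-uncertainty content of the G-normal distribution.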
Our main result is:

Theorem 12 (Central limit theorem) Let a sequence {X_n}_{n=1}^∞ in H_3 be identically distributed. We also assume that each X_{n+1} is independent (or weakly independent) to (X_1, · · · , X_n) for n = 1, 2, · · · . We assume furthermore that

E[X_1] = E[−X_1] = 0.

Then the sequence {S_n/√n}_{n=1}^∞ converges in law to ξ:

lim_{n→∞} E[ϕ(S_n/√n)] = E[ϕ(ξ)] for each ϕ ∈ lip_b(R),   (4)

where ξ is G-normally distributed under E, with G(a) := ½ E[a X_1^2] = ½(σ̄^2 a^+ − σ^2 a^−), σ̄^2 := E[X_1^2], σ^2 := −E[−X_1^2].
Proof. For a function ϕ ∈ lip_b(R) and a small but fixed h > 0, let V be the unique viscosity solution of

∂_t V + G(∂²_xx V) = 0, (t, x) ∈ [0, 1 + h] × R, V|_{t=1+h} = ϕ.   (5)

We have, according to the definition of the G-normal distribution,

V(t, x) = E[ϕ(x + √(1 + h − t) ξ)].

Particularly,

V(h, 0) = E[ϕ(ξ)], V(1 + h, x) = ϕ(x).   (6)

Since (5) is a uniformly parabolic PDE and G is a convex function, by the interior regularity of V (see Wang [6], Theorem 4.13) we have

‖V‖_{C^{1+α/2, 2+α}([0,1]×R)} < ∞, for some α ∈ (0, 1).

We set δ = 1/n and S_0 = 0. Then

V(1, √δ S_n) − V(0, 0) = Σ_{i=0}^{n−1} {V((i+1)δ, √δ S_{i+1}) − V(iδ, √δ S_i)} = Σ_{i=0}^{n−1} (I_δ^i + J_δ^i)

with, by Taylor's expansion,

J_δ^i = ∂_t V(iδ, √δ S_i) δ + ∂_x V(iδ, √δ S_i) √δ X_{i+1} + ½ ∂²_xx V(iδ, √δ S_i) δ X_{i+1}^2,

and I_δ^i the corresponding remainder. We have, by applying ∂_t V + G(∂²_xx V) = 0 at (iδ, √δ S_i): for each fixed value s of S_i, since X_{i+1} is independent (or weakly independent) to (X_1, · · · , X_i), E[X_{i+1}] = E[−X_{i+1}] = 0 and ½ E[a X_{i+1}^2] = G(a),

E[∂_t V(iδ, √δ s) δ + ∂_x V(iδ, √δ s) √δ X_{i+1} + ½ ∂²_xx V(iδ, √δ s) δ X_{i+1}^2] = δ [∂_t V + G(∂²_xx V)](iδ, √δ s) = 0.

It then follows, iterating backward in i and using the sub-additivity of E, that

|E[V(1, √δ S_n)] − V(0, 0)| ≤ Σ_{i=0}^{n−1} E[|I_δ^i|].

But since both ∂_t V and ∂²_xx V are uniformly α-Hölder continuous in x and α/2-Hölder continuous in t on [0, 1] × R, we then have

|I_δ^i| ≤ C δ^{1+α/2} (1 + |X_{i+1}|^{2+α}), hence Σ_{i=0}^{n−1} E[|I_δ^i|] ≤ C δ^{α/2} (1 + E[|X_1|^{2+α}]).

As n → ∞, we thus have

lim_{n→∞} E[V(1, √δ S_n)] = V(0, 0).   (7)

On the other hand, we have, for each t, t′ ∈ [0, 1 + h] and x ∈ R,

|V(t, x) − V(t′, x)| ≤ C k_ϕ √(|t − t′|),

where k_ϕ denotes the Lipschitz constant of ϕ. Thus |V(0, 0) − V(h, 0)| ≤ C√h and, by (6),

|E[V(1, √δ S_n)] − E[ϕ(√δ S_n)]| = |E[V(1, √δ S_n)] − E[V(1 + h, √δ S_n)]| ≤ C√h.

It follows from (7) and (6) that

lim sup_{n→∞} |E[ϕ(S_n/√n)] − E[ϕ(ξ)]| ≤ 2 C√h.

Since h > 0 can be arbitrarily small, we can easily check that (4) holds.
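The theorem can be illustrated by a Monte Carlo sketch (the setup is mine, not from the paper). Under variance uncertainty within [σ², σ̄²], and for a convex test function ϕ, the sublinear expectation E[ϕ(S_n/√n)] is realized by the constant worst-case variance σ̄², and the G-normal limit value E[ϕ(ξ)] then equals the classical N(0, σ̄²) expectation; with ϕ(z) = z² this value is simply σ̄²:

```python
import numpy as np

rng = np.random.default_rng(7)

# CLT under variance uncertainty: increments are Rademacher variables
# scaled by a volatility s chosen in [slow, sbar].  For convex phi the
# supremum over models is attained at the constant choice s = sbar.
sbar, slow = 1.0, 0.5
n, n_paths = 400, 40000
phi = lambda z: z ** 2            # convex test function

best = -np.inf
for s in (slow, 0.75, sbar):      # a few constant-volatility models
    B = rng.binomial(n, 0.5, size=n_paths)
    S = s * (2 * B - n)           # sum of n Rademacher(+-s) increments
    best = max(best, phi(S / np.sqrt(n)).mean())

g_normal_value = sbar ** 2        # E[xi^2] = sbar^2 for phi(z) = z^2
print(f"sup over models: {best:.3f}, G-normal value: {g_normal_value:.3f}")
assert abs(best - g_normal_value) < 0.05
```

For non-convex ϕ no single classical model attains the supremum, and the limit value is genuinely the G-normal one, computable from the G-heat equation (3).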