Convergence to a self-normalized G-Brownian motion

G-Brownian motion has a rich and interesting structure that nontrivially generalizes classical Brownian motion. In particular, its quadratic variation process is itself a continuous process with independent and stationary increments. We prove a self-normalized functional central limit theorem for independent and identically distributed random variables under the sub-linear expectation, with the limit process being a G-Brownian motion self-normalized by its quadratic variation. To prove the self-normalized central limit theorem, we also establish a new Donsker invariance principle.


Introduction
Let $\{X_n; n\ge 1\}$ be a sequence of independent and identically distributed random variables on a probability space $(\Omega,\mathcal F, P)$. Set $S_n=\sum_{j=1}^n X_j$. Suppose $EX_1=0$ and $EX_1^2=\sigma^2>0$. The well-known central limit theorem says that
$$ \frac{S_n}{\sqrt n}\overset{d}{\to} N(0,\sigma^2), \tag{1.1}$$
or equivalently, $E[\psi(S_n/\sqrt n)]\to E[\psi(\xi)]$ for any bounded continuous function $\psi(x)$, where $\xi\sim N(0,\sigma^2)$ is a normal random variable. If the normalization factor $\sqrt n$ is replaced by $\sqrt{V_n}$, where $V_n=\sum_{j=1}^n X_j^2$, one obtains the self-normalized central limit theorem (1.3). The purpose of this paper is to establish its analogue under the sub-linear expectation, where the limit $\xi$ is a G-normal random variable.
In the classical case, when $E[X_1^2]$ is finite, (1.3) follows from the central limit theorem (1.1) immediately by Slutsky's lemma and the fact that $V_n/n\to\sigma^2$ almost surely; the latter is due to the law of large numbers. In the framework of the sub-linear expectation, $V_n/n$ no longer converges to a constant, so the self-normalized central limit theorem cannot follow from the central limit theorem (1.5) directly. In this paper we show that
$$ \frac{S_n}{\sqrt{V_n}}\overset{d}{\to} \frac{W_1}{\sqrt{\langle W\rangle_1}}, \tag{1.6}$$
where $W_t$ is a G-Brownian motion and $\langle W\rangle_t$ is its quadratic variation process. A very interesting phenomenon of G-Brownian motion is that its quadratic variation process is also a continuous process with independent and stationary increments, and thus can still be regarded as a Brownian motion. When the sub-linear expectation $\mathbb E$ reduces to a linear one, $W_t$ is the classical Brownian motion with $W_1\sim N(0,\sigma^2)$ and $\langle W\rangle_t=t\sigma^2$, and then (1.6) is just (1.3). Our main results on the self-normalized central limit theorem are given in Section 3, where the process of the self-normalized partial sums $S_{[nt]}/\sqrt{V_n}$ is proved to converge to a self-normalized G-Brownian motion $W_t/\sqrt{\langle W\rangle_1}$. We also consider the case where the second moments of the $X_i$'s are infinite and obtain the self-normalized central limit theorem under a condition similar to (1.4). In the next section, we state the basic settings of a sub-linear expectation space, including capacity, independence, identical distribution, G-Brownian motion, etc.; readers familiar with these concepts may skip it. To prove the self-normalized central limit theorem, we establish a new Donsker invariance principle in Section 4 with the limit process being a generalized G-Brownian motion. The proofs are given in the last section.
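In the classical special case, the two facts above (the law of large numbers for $V_n/n$ and the self-normalized limit $N(0,1)$) can be checked numerically. The sketch below is only an illustration of the classical case, not of the sub-linear setting; the choice of distribution and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 3.0, 500, 5000

# reps independent copies of (X_1, ..., X_n) with EX = 0, EX^2 = sigma^2 = 9.
x = rng.normal(0.0, sigma, size=(reps, n))
s_n = x.sum(axis=1)            # S_n = sum_{j<=n} X_j
v_n = (x ** 2).sum(axis=1)     # V_n = sum_{j<=n} X_j^2

# Law of large numbers: V_n / n concentrates around sigma^2.
print(np.mean(v_n / n))

# Self-normalized CLT: S_n / sqrt(V_n) is close in law to N(0, 1),
# independently of sigma (the scale cancels in the ratio).
t_n = s_n / np.sqrt(v_n)
print(t_n.mean(), t_n.std())
```

The empirical mean and standard deviation of $S_n/\sqrt{V_n}$ come out near $0$ and $1$, regardless of the value of $\sigma$, which is exactly the point of self-normalization.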

Basic Settings
We use the framework and notation of Peng (2008b). Let $(\Omega,\mathcal F)$ be a given measurable space and let $\mathscr H$ be a linear space of real functions defined on $(\Omega,\mathcal F)$ such that if $X_1,\ldots,X_n\in\mathscr H$ then $\varphi(X_1,\ldots,X_n)\in\mathscr H$ for each $\varphi\in C_b(\mathbb R^n)\cup C_{l,Lip}(\mathbb R^n)$, where $C_b(\mathbb R^n)$ denotes the space of all bounded continuous functions and $C_{l,Lip}(\mathbb R^n)$ denotes the linear space of (locally Lipschitz) functions $\varphi$ satisfying
$$|\varphi(x)-\varphi(y)|\le C\big(1+|x|^m+|y|^m\big)|x-y|,\qquad x,y\in\mathbb R^n,$$
for some $C>0$ and $m\in\mathbb N$ depending on $\varphi$.
$\mathscr H$ is considered as a space of "random variables"; in this case we write $X\in\mathscr H$. Further, we let $C_{b,Lip}(\mathbb R^n)$ denote the space of all bounded Lipschitz functions on $\mathbb R^n$.

A function $\mathbb E:\mathscr H\to\mathbb R$ is called a sub-linear expectation if, for all $X,Y\in\mathscr H$, it satisfies
(a) Monotonicity: $X\ge Y$ implies $\mathbb E[X]\ge\mathbb E[Y]$;
(b) Constant preserving: $\mathbb E[c]=c$ for $c\in\mathbb R$;
(c) Sub-additivity: $\mathbb E[X+Y]\le\mathbb E[X]+\mathbb E[Y]$;
(d) Positive homogeneity: $\mathbb E[\lambda X]=\lambda\mathbb E[X]$ for $\lambda\ge0$.
The triple $(\Omega,\mathscr H,\mathbb E)$ is called a sub-linear expectation space.
Given a sub-linear expectation $\mathbb E$, let us denote the conjugate expectation $\mathcal E$ of $\mathbb E$ by
$$\mathcal E[X]:=-\mathbb E[-X],\qquad X\in\mathscr H.$$
Next, we introduce the capacities corresponding to the sub-linear expectations.
A set function $V:\mathcal F\to[0,1]$ is called a capacity if $V(\emptyset)=0$, $V(\Omega)=1$, and $V(A)\le V(B)$ for all $A\subset B$, $A,B\in\mathcal F$. It is called sub-additive if $V(A\cup B)\le V(A)+V(B)$. Corresponding to $\mathbb E$ and $\mathcal E$, we define a pair $(\mathbb V,\mathcal V)$ of capacities by
$$\mathbb V(A):=\inf\{\mathbb E[\xi]: I_A\le\xi,\ \xi\in\mathscr H\},\qquad \mathcal V(A):=1-\mathbb V(A^c),\quad A\in\mathcal F,$$
where $A^c$ is the complement set of $A$. Then $\mathbb V$ is sub-additive and
$$\mathbb E[f]\le\mathbb V(A)\le\mathbb E[g]\qquad\text{whenever } f\le I_A\le g,\ f,g\in\mathscr H. \tag{2.1}$$
Further, we define an extension $\mathbb E^*$ of $\mathbb E$ by
$$\mathbb E^*[X]:=\inf\{\mathbb E[\xi]: X\le\xi,\ \xi\in\mathscr H\}.$$

Independence and distribution

(i) (Identical distribution) Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined respectively in sub-linear expectation spaces $(\Omega_1,\mathscr H_1,\mathbb E_1)$ and $(\Omega_2,\mathscr H_2,\mathbb E_2)$. They are called identically distributed, denoted by $X_1\overset{d}{=}X_2$, if $\mathbb E_1[\varphi(X_1)]=\mathbb E_2[\varphi(X_2)]$ for all $\varphi\in C_{l,Lip}(\mathbb R^n)$, whenever the sub-expectations are finite. A sequence $\{X_n;n\ge1\}$ of random variables is said to be identically distributed if $X_i\overset{d}{=}X_1$ for each $i\ge1$.

(ii) (Independence) In a sub-linear expectation space $(\Omega,\mathscr H,\mathbb E)$, a random vector $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathscr H$, is said to be independent of another random vector $X=(X_1,\ldots,X_m)$, $X_i\in\mathscr H$, if $\mathbb E[\varphi(X,Y)]=\mathbb E\big[\mathbb E[\varphi(x,Y)]\big|_{x=X}\big]$ for each $\varphi\in C_{l,Lip}(\mathbb R^{m+n})$.

(iii) (IID random variables) A sequence of random variables $\{X_n;n\ge1\}$ is said to be independent and identically distributed (IID) if $X_i\overset{d}{=}X_1$ and $X_{i+1}$ is independent of $(X_1,\ldots,X_i)$ for each $i\ge1$.
The quadratic variation process of a G-Brownian motion $W$ is defined by
$$\langle W\rangle_t:=W_t^2-2\int_0^t W_s\,dW_s,$$
the limit in $L^2$ of $\sum_k\big(W_{t_{k+1}\wedge t}-W_{t_k\wedge t}\big)^2$ as the mesh of the partition $\{t_k\}$ tends to zero. The quadratic variation process $\langle W\rangle_t$ is also a continuous process with independent and stationary increments. For the properties and the distribution of the quadratic variation process, one can refer to the book of Peng (2010a).
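In the classical (linear) special case recalled in the introduction, the quadratic variation is deterministic: $\langle W\rangle_t=t\sigma^2$. A small simulation of this special case (our own illustrative sketch, not part of the G-framework) checks that the realized quadratic variation over a fine grid approaches $T\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, T, n = 1.5, 1.0, 200_000
dt = T / n

# Increments of a classical Brownian motion with variance parameter sigma^2.
dW = rng.normal(0.0, sigma * np.sqrt(dt), size=n)

# Realized quadratic variation: sum of squared increments over the grid.
# Classically it converges to <W>_T = T * sigma^2 as the mesh tends to 0.
qv = float(np.sum(dW ** 2))
print(qv)
```

Under a genuinely sub-linear expectation, by contrast, $\langle W\rangle_1$ is a nondegenerate (maximally distributed) random variable, which is why the self-normalization in this paper does not reduce to dividing by a constant.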
Lemma 2.1. Let $(\Omega,\mathcal F,P)$ be a probability space and let $\{B(t)\}_{t\ge0}$ be a $P$-Brownian motion. Then for all bounded continuous functions $\varphi$:

In the sequel of this paper, the sequences $\{X_n;n\ge1\}$, $\{Y_n;n\ge1\}$, etc., of random variables are considered in $(\Omega,\mathscr H,\mathbb E)$. Unless otherwise specified, we suppose that $\{X_n;n\ge1\}$ is a sequence of independent and identically distributed random variables. We denote by $(\mathbb V,\mathcal V)$ the pair of capacities corresponding to the sub-linear expectation $\mathbb E$, and by $\mathbb E^*$ the extension of $\mathbb E$.

Main results
We consider the convergence of the process $S_{[nt]}$. Because its paths are not in $C[0,1]$, it needs to be modified. Define the $C[0,1]$-valued random variable $S^X_n(\cdot)$ by setting $S^X_n(k/n)=S_k$ for $k=0,1,\ldots,n$, extended by linear interpolation in each interval $[(k-1)/n,\,k/n]$. Here $[nt]$ is the largest integer less than or equal to $nt$. Zhang (2015) obtained the following functional central limit theorem.
Then for all bounded continuous functions $\varphi$:

Replacing the normalization factor $\sqrt n$ by $\sqrt{V_n}$, we obtain the self-normalized process of partial sums $S^X_n(t)/\sqrt{V_n}$, where $0/0$ is defined to be $0$. Our main result is the following self-normalized functional central limit theorem (FCLT).
Then for all bounded continuous functions $\varphi$: In particular, for all bounded continuous functions $\varphi:\mathbb R\to\mathbb R$: An interesting problem is how to estimate the upper bounds of the expectations on the right-hand sides of (3.2) and (3.3).
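The interpolated path and its self-normalized version can be made concrete. The helper below is our own sketch of the standard construction, $S^X_n(k/n)=S_k$ with linear interpolation in between, together with the self-normalized path $S^X_n(t)/\sqrt{V_n}$ (with $0/0:=0$); the function names are ours.

```python
import numpy as np

def s_path(x, t):
    """S^X_n(t): equals the partial sum S_k at t = k/n and is extended
    by linear interpolation on each interval [(k-1)/n, k/n]."""
    n = len(x)
    s = np.concatenate(([0.0], np.cumsum(x)))   # S_0, S_1, ..., S_n
    u = t * n
    k = min(int(u), n - 1)                      # index of the interval
    return s[k] + (u - k) * (s[k + 1] - s[k])

def w_path(x, t):
    """Self-normalized path S^X_n(t) / sqrt(V_n), with 0/0 := 0."""
    v = float(np.sum(np.asarray(x) ** 2))
    return 0.0 if v == 0.0 else s_path(x, t) / np.sqrt(v)

x = np.array([1.0, -2.0, 3.0])    # n = 3; partial sums 0, 1, -1, 2
print(s_path(x, 1 / 3))           # the value S_1 = 1
print(s_path(x, 0.5))             # halfway between S_1 and S_2
print(w_path(x, 1.0))             # S_3 / sqrt(V_3) = 2 / sqrt(14)
```

At the grid points the path returns the exact partial sums, and between them it interpolates linearly, so the resulting function is continuous on $[0,1]$ as required.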
For the classical self-normalized central limit theorem, we refer to Giné, Götze and Mason.

Then the conclusions of Theorem 3.1 remain true with $W(t)$ being a G-Brownian motion.

Invariance principle
To prove Theorems 3.1 and 3.2, we first establish a new Donsker-type invariance principle.
Let $\xi$ be a G-normally distributed random variable and let $\eta$ be a maximally distributed random variable such that the distribution of $(\xi,\eta)$ is characterized by the following parabolic partial differential equation:
$$\partial_t u - G\big(\partial_y u,\ \partial^2_{xx} u\big)=0,\qquad u(0,x,y)=\varphi(x,y), \tag{4.2}$$
i.e., for any bounded Lipschitz function $\varphi(x,y):\mathbb R^2\to\mathbb R$, the function $u(t,x,y):=\mathbb E\big[\varphi(x+\sqrt t\,\xi,\ y+t\eta)\big]$ is the unique viscosity solution of (4.2). Further, let $B_t$ and $b_t$ be two random processes such that the distribution of the process $(B_\cdot,b_\cdot)$ is characterized by the properties (i)-(v); in particular, (iv) the increments $(B_{s+t}-B_s,\ b_{s+t}-b_s)$ are independent of $(B_{t_1},b_{t_1}),\ldots,(B_{t_k},b_{t_k})$ for $0\le t_1\le\cdots\le t_k\le s$, in the sense that the expectation factorizes for any $\varphi\in C_{l,Lip}(\mathbb R^{2(k+1)})$, and (v) the distribution of $(B_1,b_1)$ is characterized by the PDE (4.2).
It is easily seen that $B_t$ is a G-Brownian motion with $B_1\sim N\big(0,[\underline\sigma^2,\overline\sigma^2]\big)$, and $(B_t,b_t)$ is a generalized G-Brownian motion as introduced by Peng (2010a). The existence of the generalized G-Brownian motion can be found in Peng (2010a).
Theorem 4.1. Then for any bounded continuous function $\varphi$: Further, let $p\ge2$, $q\ge1$, and assume $\mathbb E[\,\cdots\,]<\infty$. Then for all continuous functions $\varphi$:

Remark 4.1. Here $\mathbb S(d)$ is the collection of all $d\times d$ symmetric matrices. The conclusion of Theorem 4.1 remains true with the distribution of $(B_1,b_1)$ characterized by the following parabolic partial differential equation defined on $[0,\infty)\times\mathbb R^d\times\mathbb R^d$:

Remark 4.2. As a conclusion of Theorem 4.1, we have:

Before the proof, we need several lemmas. For random vectors $X_n$ and $X$ in $(\Omega,\mathscr H,\mathbb E)$, we write $X_n\overset{d}{\to}X$ if $\mathbb E[\varphi(X_n)]\to\mathbb E[\varphi(X)]$ for any bounded continuous $\varphi$.
The following three lemmas are obvious.
Suppose that $X_n\overset{d}{\to}X$, $Y_n\overset{d}{\to}y$ and $\eta_n\overset{d}{\to}a$, where $a$ is a constant and $y$ is a constant vector, and that $\mathbb V(|X|>\lambda)\to0$ as $\lambda\to\infty$. Then $(X_n,Y_n,\eta_n)\overset{d}{\to}(X,y,a)$, and as a result, $\eta_n X_n+Y_n\overset{d}{\to}aX+y$.
The following lemma is proved by Zhang (2015).
The next lemma gives the Rosenthal-type inequalities due to Zhang (2014).
Lemma 4.5. Let $\{X_1,\ldots,X_n\}$ be a sequence of independent random variables in $(\Omega,\mathscr H,\mathbb E)$. Set $\widehat X_{n,k}=X_k\wedge\sqrt n$, $\widetilde X_{n,k}=X_k-\widehat X_{n,k}$, $\widehat S^X_{n,k}=\sum_{j=1}^k\widehat X_{n,j}$ and $\widetilde S^X_{n,k}=\sum_{j=1}^k\widetilde X_{n,j}$, $k=1,\ldots,n$. Then:

The proof is completed.
is uniformly integrable and so is tight.
On the other hand, it follows that:

Proof of Theorem 4.1. We first show the tightness of $W_n$. It is easily seen that, for any $\varepsilon>0$, if $\delta<\varepsilon/(4b)$, then: Letting $\delta\to0$ and then $b\to\infty$ yields: For any $\eta>0$, we choose $\delta_k\downarrow0$ such that: Note that when $\delta<1/(2n)$, an interval of length $\delta$ contains at most one point of the grid $\{k/n\}$. Choose $k_0$ such that $\delta_k<1/(2Mk)$ for $k\ge k_0$. Then, on the event in question: On the other hand: It follows that: We conclude that for any $\eta>0$ there exists a compact set with the required property. Next, we show that for any $\eta>0$ there exists a compact set such that: Similarly to (4.10), it is sufficient to show that: By the same argument as in Billingsley (1968, pages 56-59; cf. (8.12)), for large $n$, by Lemma 4.7(a) with $p=2$: On the other hand, for fixed $n$, if $\delta<1/(2n)$ then: It follows that $\lim_{\delta\to0}\mathbb V\big(w_\delta\big(S^X_n(\cdot)/\sqrt n\big)\ge\varepsilon\big)=0$ for each $n$, and hence (4.12) holds. Now, combining (4.10) and (4.11), we obtain the tightness of $W_n$.
Next, it is sufficient to show that $(\xi_t,\eta_t)$, defined on the sub-linear expectation space $(\Omega,L_F(\Omega),\mathbb F)$, satisfies (i)-(v), so that $(\xi_\cdot,\eta_\cdot)\overset{d}{=}(B_\cdot,b_\cdot)$; this means that the limit distribution of any subsequence of $W_n(\cdot)$ is uniquely determined.
(i) is obvious.
Let $0\le t_1\le\cdots\le t_k\le s\le t+s$. By (4.14), for any bounded continuous function $\varphi$, it follows by Lemmas 4.2 and 4.7 that: In particular: Hence: Next, we show that: It follows that: On the other hand, let $\widehat S^X_{n,k}$ and $\widetilde S^X_{n,k}$ be defined as in Lemma 4.6. Then $S^X_k=\widehat S^X_{n,k}+\widetilde S^X_{n,k}$. By (4.16) and Lemmas 4.6 and 4.2, it follows that: Hence, by the completeness of $(\Omega,L_F(\Omega),\mathbb F)$, (4.18) is proved. Now, note that $(X_i,Y_i)$, $i=1,2,\ldots$, are independent and identically distributed.
Note that, by (4.18), the random variables concerned in (4.3) and (4.17) have finite moments of every order. The function spaces $C_b(\mathbb R^{2(k+1)})$ and $C_b(\mathbb R^2)$ can therefore be extended to $C_{l,Lip}(\mathbb R^{2(k+1)})$ and $C_{l,Lip}(\mathbb R^2)$, respectively, by elementary arguments. Thus (ii) and (iii) are proved.
For (iv) and (v), we let $\varphi:\mathbb R^2\to\mathbb R$ be a bounded Lipschitz function and consider the corresponding function $u$. It is sufficient to show that $u$ is a viscosity solution of the PDE (4.2); indeed, by the uniqueness of the viscosity solution, the desired identification then follows, and taking $x=0$ and $y=0$ yields (iv) and (v).
To verify the PDE (4.2), it is first easily seen that the expectation in question equals $\frac{[nt]}{n}G(p,q)$.
Proof of Remark 4.1. When $X_k$ and $Y_k$ are $d$-dimensional random vectors, the tightness (4.13) of $W_n(\cdot)$ also follows, because each component sequence of the vector $W_n(\cdot)$ is tight. Also, (4.18) remains true because each component has this property. On the other hand, it follows that: The remainder of the proof is the same as that of Theorem 4.1.

Proof of the self-normalized FCLTs
Let $Y_k=X_k^2$, so that the function $G(p,q)$ in (4.1) takes the corresponding special form. Then the process $(B_t,b_t)$ in (4.4) and the process $(W(t),\langle W\rangle_t)$ are identically distributed.
In fact, note that: It is easy to verify that $(W(t),\langle W\rangle_t)$ satisfies (i)-(iv) for $(B_\cdot,b_\cdot)$. It remains to show (v). On the other hand, let $t_k=\frac kn$. Then: Hence $(B_\cdot,b_\cdot)\overset{d}{=}(W(\cdot),\langle W\rangle_\cdot)$. We conclude the following proposition from Theorem 4.1.
Then for all bounded continuous functions $\psi$: Then $\psi:C[0,1]\times\mathbb R\to\mathbb R$ is a bounded continuous function. Hence, by the proposition above: On the other hand: It follows that: The proof is now completed.
Condition (I) implies that $l(x)$ is slowly varying as $x\to\infty$ and that $\mathbb E[|X_1|^r\wedge x^r]=o\big(x^{r-2}l(x)\big)$ for $r>2$. Further,
$$\frac{\mathbb E^*\big[X_1^2 I\{|X_1|\le x\}\big]}{l(x)}\to 1.$$
If conditions (I) and (III) are satisfied, then: Now, let $d_t=\inf\{x: x^{-2}l(x)=t^{-1}\}$. Then $nl(d_n)=d_n^2$. As in Theorem 3.1, it is sufficient to show that for all bounded continuous functions $\psi$: It is in turn sufficient to show that for all bounded continuous functions $\psi$: Following the lines of the preceding proof, (b) is verified, and the proof is completed.
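The normalizing sequence $d_t=\inf\{x: x^{-2}l(x)=t^{-1}\}$ can be computed numerically for a concrete slowly varying function. The sketch below uses a hypothetical choice $l(x)=\log(e+x)$ purely for illustration (it is not taken from the paper) and checks the identity $n\,l(d_n)=d_n^2$ used above.

```python
import math

def l(x):
    """A sample slowly varying function (hypothetical, for illustration only)."""
    return math.log(math.e + x)

def d(t):
    """d_t = inf{x > 0 : x^{-2} l(x) = 1/t}, i.e. the root of t*l(x) = x^2.

    For large x the map x -> x^{-2} l(x) is decreasing, so f(x) = t*l(x) - x^2
    changes sign exactly once on [lo, hi] once f(hi) <= 0; plain bisection
    then pins the root down to double precision."""
    f = lambda x: t * l(x) - x * x
    lo, hi = 1e-9, 1.0
    while f(hi) > 0:          # grow the bracket until the sign changes
        hi *= 2.0
    for _ in range(200):      # far more steps than double precision needs
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 10_000
dn = d(n)
# The identity n * l(d_n) = d_n^2 holds at the computed root.
print(abs(n * l(dn) - dn * dn) < 1e-4)
```

For $l\equiv\sigma^2$ constant this reduces to $d_n=\sigma\sqrt n$, recovering the usual $\sqrt n$ normalization of the finite-variance case.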