Open Access

A branching particle system approximation for a class of FBSDEs

Probability, Uncertainty and Quantitative Risk (2016) 1:9

https://doi.org/10.1186/s41546-016-0007-y

Received: 6 April 2016

Accepted: 10 August 2016

Published: 1 December 2016

Abstract

In this paper, a new numerical scheme for a class of coupled forward-backward stochastic differential equations (FBSDEs) is proposed by using branching particle systems in a random environment. First, by the four step scheme, we introduce a partial differential equation (PDE) that represents the solution of the FBSDE system. Then, infinite and finite particle systems are constructed to obtain the approximate solution of the PDE. The location and weight of each particle are governed by stochastic differential equations derived from the FBSDE system. Finally, a branching particle system is established to define the approximate solution of the FBSDE system. The branching mechanism of each particle depends on the path of the particle itself during its short lifetime \(\varepsilon =n^{-2\alpha }\), where n is the number of initial particles and \({\alpha }<\frac {1}{2}\) is a fixed parameter. The convergence of the scheme and its rate of convergence are obtained.

Keywords

Forward-backward stochastic differential equation · Partial differential equations · Branching particle system · Numerical solution

MSC (2010) Classification

60H35 · 60H15 · 62J99

Introduction

Since the work of Pardoux and Peng (1990), forward-backward stochastic differential equations (FBSDEs) have been extensively studied and have found important applications in many fields, including finance, risk measures, and stochastic control (cf. Cvitanić and Ma (1996); El Karoui et al. (1997); Ma and Yong (1999); Xiong and Zhou (2007), and Yong and Zhou (1999)). For instance, consider a risk-minimizing economic management problem. Let x(·) denote an economic quantity, which can be interpreted as a cash balance, wealth, or an intrinsic value in different contexts. Suppose that x(·) is governed by
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} dx^{v}(t)=(A(t)x^{v}(t)+B(t)v(t))dt+(C(t)v(t)+D(t))dW(t),\\ x^{v}(0)=x_{0}, \end{array}\right. \end{array} $$
where v(·) is the control strategy of a policymaker and A(·), B(·), C(·), D(·) are bounded and deterministic. Let \(\rho (x^{v}(1))\) denote the risk of the economic quantity \(x^{v}(1)\), where the risk measure is convex in the sense of Föllmer and Schied (1999). Recently, Rosazza Gianin (2006) established the relationship between the risk measure ρ(·) and the g-expectation \(\mathcal {E}_{g}^{v}\) (see Peng (2010)):
$$\rho(x^{v}(1))=\mathcal{E}_{g}^{v}[-x^{v}(1)] $$
where the functional \(g: [0,1]\times \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) satisfies g(t,y,0)=0 and is the generator of the following BSDE:
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} -dy^{v}(t)=g(t,y^{v}(t),z^{v}(t))dt-z^{v}(t)dW(t),\\ y^{v}(1)=-x^{v}(1). \end{array}\right. \end{array} $$
Thus, the objective of the policymaker is equivalent to minimizing
$$J(v(\cdot))=\mathbb{E}^{v}[y^{v}(0)+\frac{1}{2}{\int^{1}_{0}}(v(t)-M(t))^{2}dt] $$
subject to the FBSDE.
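As a purely illustrative aside, the controlled state equation above can be simulated with an Euler-Maruyama discretization. The coefficient values, the step count, and the function names below are arbitrary choices made for this sketch, not part of the model:

```python
import numpy as np

def simulate_cash_balance(x0, A, B, C, D, v, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama path of dx = (A x + B v(t)) dt + (C v(t) + D) dW,
    returning x(T).  Constant scalar coefficients; v is the control t -> v(t)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = x0
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over [t, t+dt]
        x += (A * x + B * v(t)) * dt + (C * v(t) + D) * dW
    return x
```

With all coefficients zero the state stays constant, and with A=1 and no noise the Euler path reproduces \(e^{AT}\) up to O(dt), which makes the discretization easy to sanity-check.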

In previous work, Ma et al. (1994) studied the solvability of the adapted solution to FBSDEs; in particular, they designed a direct scheme, called the four step scheme, to solve FBSDEs explicitly. However, in most cases it is difficult to obtain the solution in closed form, so it is important to study numerical methods for solving FBSDEs. Following the earlier works of Bally (1997) and Douglas et al. (1996), various efforts have been made to find efficient numerical schemes for FBSDEs. In the decoupled forward-backward case, these include the PDE method in the Markovian case (e.g., Chevance (1997)), random walk approximations (e.g., Briand et al. (2001) and Ma et al. (2002)), and methods based on Malliavin calculus and Monte Carlo simulation (e.g., Zhang (2004), Ma and Zhang (2005), and Bouchard and Touzi (2004)). In the case of coupled FBSDEs, however, there are very few works in the literature, such as Milstein and Tretyakov (2006), Delarue and Menozzi (2006), Cvitanić and Zhang (2005), and Ma et al. (2008).

In this paper, we are interested in investigating a new numerical scheme for a class of coupled FBSDEs via a branching particle system approximation. Particle system representations for stochastic partial differential equations, with applications to filtering, have been studied extensively since the pioneering work of Crisan and Lyons (1997) and Del Moral (1996). Here we list a few works closely related to the present one: Kurtz and Xiong (1999, 2001), Crisan (2002), Xiong (2008), Liu and Xiong (2013), and Crisan and Xiong (2014).

Particle system representations for FBSDEs were studied in Henry-Labordère et al. (2014) when the forward part is independent of the backward one, namely, in the decoupled case. In that case, the approximation of the solution of a PDE and that of the forward SDE can be constructed separately. For the coupled case, however, the construction of the branching particle system must handle both the PDE and the SDE in a delicate manner. This paper can be regarded as a first attempt in this direction. One of the main advantages of this method is that it circumvents the computation of conditional expectations via regression methods.

Let \(\left (\Omega,\mathcal {F},\{\mathcal {F}_{t}\}_{0\leq t\leq T},\mathbb {P}\right)\) be a filtered complete probability space, where \(\{\mathcal {F}_{t}\}_{0\leq t\leq T}\) denotes the natural filtration generated by a standard Brownian motion \(\{W_{t}\}_{0\leq t\leq T}\), \(\mathcal {F}=\mathcal {F}_{T}\), and T>0 is a fixed time horizon. We consider the following FBSDE on the fixed duration [0,T]:
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} dX(t)=b\left(X\left(t\right),Y\left(t\right)\right)dt+\sigma\left(X\left(t\right)\right)dW(t), \\ -dY(t)=g(X(t),Y(t),Z(t))dt-Z(t)dW(t),\\ X(0)=x, Y(T)=f(X(T)), \end{array}\right. \end{array} $$
(1.1)

where \(b:\mathbb {R}^{d}\times \mathbb {R}^{k}\rightarrow \mathbb {R}^{d}\), \(\sigma :\mathbb {R}^{d}\rightarrow \mathbb {R}^{d\times l}\), \(g:\mathbb {R}^{d}\times \mathbb {R}^{k}\times \mathbb {R}^{k\times l}\rightarrow \mathbb {R}^{k}\) and \(f:\mathbb {R}^{d}\rightarrow \mathbb {R}^{k}\).

In what follows, we make the following assumption:

(A1) The generator g has the following form: for \(z=(z_{1},\ldots,z_{l})\),
$$g(x,y,z)=C(x,y)y+\sum\limits_{j=1}^{l}D_{j}(x,y)z_{j}, $$

and b(x,y), σ(x), g(x,y,z), f(x), C(x,y), and D(x,y) are all bounded and Lipschitz continuous maps with bounded partial derivatives up to order 2. Furthermore, the matrix \(\sigma \sigma ^{*}\) is uniformly positive definite and the function f is integrable. Here \(\sigma ^{*}\) denotes the transpose of the matrix σ.

Remark 1.1

For the generators associated with the g-expectation, the condition g(y,0)=0 (we omit the variable t), together with an extra differentiability condition, implies that
$$g(y,z)=\sum^{\ell}_{j=1}D_{j}(y,z)z_{j}. $$

The case of \(D_{j}(y,z)\) depending on z is technically more demanding in the construction of the branching particle systems. We hope to return to this case in future work.

Relying on the idea of the four step scheme, we know that the solution to the above FBSDE satisfies \(Y(t)=u(t,X(t))\) and \(Z(t)=\partial _{x}u(t,X(t))\sigma (X(t))\), where u(t,x) is a solution to the PDE
$$\begin{array}{@{}rcl@{}} \begin{aligned} \left\{ \begin{array}{l} -\frac{\partial u(t,x)}{\partial t}=Lu(t,x)+C\left(x,u\left(t,x\right)\right)u\left(t,x\right)+\sum_{j=1}^{l}D_{j}(x,u\left(t,x\right))\partial_{x}u\left(t,x\right)\sigma_{j}\left(x\right)\\ u(T,x)=f(x), \end{array}\right. \end{aligned} \end{array} $$
and
$$L=\frac{1}{2}\sum_{i,j=1}^{d}a_{ij}\partial_{x_{i},x_{j}} +\sum_{i=1}^{d}b_{i}\partial_{x_{i}}, $$
with \(a_{ij}=(\sigma \sigma ^{*})_{ij}\), \(\sigma =(\sigma _{1},\ldots,\sigma _{l})\), and \(b_{i}\) being the ith coordinate of b.
For \(0\leq t\leq T\), set \(v(t,x)=u(T-t,x)\). Note that
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} \frac{\partial v(t,x)}{\partial t}=Lv(t,x)+C\left(x,v\left(t,x\right)\right)v\left(t,x\right) +\sum_{j=1}^{l}D_{j}(x,v\left(t,x\right))\partial_{x}v\left(t,x\right)\sigma_{j}\left(x\right) \\ v\left(0,x\right)=f(x). \end{array}\right. \end{array} $$
(1.2)

Remark 1.2

According to Proposition 4.2 in Ma et al. (1994), the above nonlinear parabolic partial differential equation has a unique solution.
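Although the paper solves (1.2) by particle methods, it may help to see what a direct grid-based solution of its linear core looks like. The sketch below treats only the toy case d=1 with b, C, D ≡ 0 and constant a (a plain heat equation); this simplification, the zero boundary values, and the function name are assumptions made here for illustration:

```python
import numpy as np

def solve_heat_core(f_vals, xs, a=1.0, T=0.5, n_steps=4000):
    """Explicit finite differences for v_t = (a/2) v_xx, the linear core of
    (1.2) in d=1 with b, C, D dropped.  Zero Dirichlet values are kept at the
    ends of the grid, which is harmless when f decays before the boundary."""
    dx = xs[1] - xs[0]
    dt = T / n_steps
    assert a * dt / dx**2 <= 1.0, "explicit scheme unstable: refine dt"
    v = f_vals.copy()
    for _ in range(n_steps):
        lap = np.zeros_like(v)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2  # central v_xx
        v = v + 0.5 * a * dt * lap
    return v
```

Starting from the N(0,1) density, the exact solution at time T is the N(0, 1+aT) density, which gives a direct accuracy check.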

The nonlinear parabolic partial differential Eq. (1.2) can be written as:
$$\begin{array}{@{}rcl@{}} \left\{\begin{aligned} \frac{\partial v(t,x)}{\partial t}=&\frac{1}{2}\sum_{i,j=1}^{d}a_{ij}(x)\partial_{x_{i},x_{j}}v\left(t,x\right)+\sum_{i=1}^{d}b_{i}\left(x,v\right)\partial_{x_{i}}v(t,x)+C\left(x,v\right)v\left(t,x\right)\\ &+\sum_{j=1}^{l}D_{j}(x,v)\partial_{x}v\left(t,x\right)\sigma_{j}\left(x\right)\\ v\left(0,x\right)=&f(x). \end{aligned}\right.\\ \end{array} $$
(1.3)
By the product rule for derivatives, we have
$$\begin{array}{@{}rcl@{}} &&\frac{\partial v(t,x)}{\partial t}\\ &=&\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\left(a_{ij}(x)v(t,x)\right)-\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}a_{ij}(x)v(t,x)\\ &&-\frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d}\partial_{x_{i}}a_{ij}(x)\partial_{x_{j}}v\left(t,x\right)-\frac{1}{2}\sum_{j=1}^{d}\sum_{i=1}^{d}\partial_{x_{j}}a_{ij}(x)\partial_{x_{i}}v\left(t,x\right)\\ &&+\sum_{i=1}^{d}\partial_{x_{i}}\left(b_{i}(x,v)v\left(t,x\right)\right)-\sum_{i=1}^{d}\partial_{x_{i}}b_{i}\left(x,v\right)v\left(t,x\right)\\ &&+\sum_{i=1}^{d}\partial_{x_{i}}\left(\tilde{D}_{i}(x,v)v\left(t,x\right)\right)-\sum_{i=1}^{d}\partial_{x_{i}}\tilde{D}_{i}\left(x,v\right)v\left(t,x\right)+C\left(x,v\left(t,x\right)\right)v\left(t,x\right) \end{array} $$
$$\begin{array}{@{}rcl@{}} &=&\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\left(a_{ij}(x)v(t,x)\right)-\sum_{i=1}^{d}\partial_{x_{i}}\left(\left(\sum_{j=1}^{d}\partial_{x_{j}}a_{ij}(x)-b_{i}(x,v)-\tilde{D}_{i}\left(x,v\right)\!\right)\!v\left(t,x\right)\right)\\ &&+\left(C\left(x,v(t,x)\right)-\sum_{i=1}^{d}\partial_{x_{i}}b_{i}\left(x,v\right)-\sum_{i=1}^{d}\partial_{x_{i}}\tilde{D}_{i}\left(x,v\right)+\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}a_{ij}(x)\right)v\left(t,x\right)\\ &\equiv&\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\left(a_{ij}(x)v(t,x)\right)-\sum_{i=1}^{d}\partial_{x_{i}}\left(\tilde{b}_{i}\left(x,v\right)v\left(t,x\right)\right)+\tilde{c}\left(x,v\left(t,x\right)\right)v\left(t,x\right) \end{array} $$
where
$$\begin{array}{@{}rcl@{}} \tilde{c}\left(x, v\left(t,x\right)\right) \,=\, C\left(x,v(t,x)\right)-\sum_{i=1}^{d}\partial_{x_{i}}b_{i}\left(x,v\right)-\sum_{i=1}^{d} \partial_{x_{i}}\tilde{D}_{i}\left(x,v\right)+\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}a_{ij}(x), \end{array} $$
$$\tilde{b}_{i}\left(x,v(t,x)\right)=\sum_{j=1}^{d}\partial_{x_{j}}a_{ij}(x)-b_{i}(x,v)-\tilde{D}_{i}\left(x,v\right), $$
and
$$\tilde{D}_{i}\left(x,v\right)=\sum_{j=1}^{l}D_{j}\left(x,v(t,x)\right)\sigma_{ij}\left(x\right). $$
Comparing this equation with (1.1) in Kurtz and Xiong (1999) formally, we now construct an infinite particle system \(\{X_{i}(t): i\in \mathbb {N}\}\) with locations in \(\mathbb {R}^{d}\) and time-varying weights \(\{A_{i}(t): i\in \mathbb {N}\}\) governed by the following equations: for \(0<t\leq T\), \(i=1,2,\ldots \),
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} {dX}_{i}(t)=\tilde{b}(X_{i}(t),v(t,X_{i}(t)))dt+\sigma(X_{i}(t)){dB}_{i}(t),\\ {dA}_{i}(t)=A_{i}(t)\tilde{c}\left(X_{i}(t), v\left(t,X_{i}(t)\right)\right)dt\\ V(t)={\lim}_{n\rightarrow\infty}\frac{1}{n}\sum_{j=1}^{n}A_{j}(t)\delta_{X_{j}(t)} \end{array}\right. \end{array} $$
(1.4)
with an i.i.d. sequence of initial random variables \(\{(X_{i}(0), A_{i}(0)), i\in \mathbb {N}\}\) taking values in \(\mathbb {R}^{d}\times \mathbb {R}\), where \(\{B_{i}(t), i\in \mathbb {N}\}\) are independent standard Brownian motions and
$$\left<V(0),\phi\right>=\int_{\mathbb{R}^{d}}\phi(x)f(x)dx,\qquad\text{for any~} \phi\in {C_{b}^{2}}(\mathbb{R}^{d}), $$
where \({C_{b}^{2}}(\mathbb {R}^{d})\) denotes the collection of all bounded functions with bounded continuous derivatives up to order 2. In Theorem 2.2, we will show that the density function of V(t) determined by the above infinite particle system is v(t,x), which is exactly the solution to PDE (1.2).
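To see how a weighted particle cloud such as (1.4) represents a measure at time 0, one can sample locations from an auxiliary density q and set the initial weights to f/q, so that the weighted empirical average converges to ⟨V(0),φ⟩ by the law of large numbers. This initialization rule is one common choice, not one prescribed by the paper; a minimal sketch in d=1:

```python
import numpy as np

def weighted_particle_integral(phi, f, n=200000, seed=1):
    """Approximate <V(0), phi> = ∫ phi(x) f(x) dx with weighted particles in
    d=1: sample X_i(0) from an auxiliary density q (standard normal here) and
    set A_i(0) = f(X_i)/q(X_i), so (1/n) Σ A_i phi(X_i) → ∫ phi f dx."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)                          # X_i(0) ~ q = N(0,1)
    q = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)  # sampling density
    a = f(x) / q                                    # initial weights A_i(0)
    return float(np.mean(a * phi(x)))
```

Taking f itself to be the standard normal density makes all weights equal to 1 and reduces the estimator to a plain Monte Carlo average, a convenient sanity check.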

The rest of this paper is organized as follows. In “Particle system approximation” Section, we construct infinite and finite particle systems to respectively get the approximate solution of the PDE and prove the convergent results. “Branching particle system approximation” Section is devoted to the formulation of a branching particle system to represent the approximate solution of the PDE. In “Numerical solution” Section, we present the numerical solution of the FBSDE system and its error bound. Finally, “Conclusion” Section concludes the paper.

Particle system approximation

For two integrable functions \(v_{1}, v_{2}\), we define their distance
$$\rho\left(v_{1},v_{2}\right)=\int_{\mathbb{R}^{d}}|v_{1}(x)-v_{2}(x)|dx. $$
Now we construct infinite particle systems governed by the following stochastic differential equations: for any fixed \(\delta >0\), \(t\in (0,T]\), \(i=1,2,\ldots \),
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} dX^{\delta}_{i}(t)=\tilde{b}\left(X^{\delta}_{i}(t),v^{\delta}(t,X_{i}^{\delta}(t))\right)dt+\sigma(X^{\delta}_{i}(t)){dB}_{i}(t),\\ dA^{\delta}_{i}(t)=A^{\delta}_{i}(t)\tilde{c}\left(X^{\delta}_{i}(t), v^{\delta}(t,X_{i}^{\delta}(t))\right)dt\\ v^{\delta}(t,x)={\lim}_{n\rightarrow\infty}\frac{1}{n}\sum_{j=1}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t))\\ \end{array}\right. \end{array} $$
(2.1)
where p δ is the heat kernel given by
$$p_{\delta}(x)=(2\pi\delta)^{-\frac{d}{2}}\exp\left(-\frac{|x|^{2}}{2\delta}\right). $$
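The heat kernel \(p_{\delta }\) is the only smoothing device used below; a direct transcription for d=1, together with a numerical check of its unit mass, is straightforward:

```python
import numpy as np

def heat_kernel(x, delta):
    """p_delta(x) = (2*pi*delta)^(-d/2) exp(-|x|^2 / (2*delta)) with d = 1."""
    return (2.0 * np.pi * delta) ** -0.5 * np.exp(-x**2 / (2.0 * delta))
```

Its total mass 1 can be confirmed by a Riemann sum on a wide grid, and its peak value at 0 is \((2\pi \delta )^{-1/2}\).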

In this paper we regard K, with or without a subscript, as a constant that may assume different values at different places. By the boundedness of the coefficients assumed in (A1), we can verify the following condition:

(I)
$$\left|\tilde{b}(x,v)\right|^{2}+\left|\sigma(x)\right|^{2}+\left|\tilde{c}(x,v)\right|^{2}\leq K^{2}. $$

We also make the following condition on the initial data:

(II) \(\left \{\left (A_{i}(0), X_{i}(0)\right),\left (A^{\delta }_{j}(0), X^{\delta }_{j}(0)\right)\right \}\) is an i.i.d. sequence and
$$\mathbb{E}\left|A_{i}(0)\right|^{2}+\mathbb{E}\left|X_{i}(0)\right|^{2}+\mathbb{E}\left|A^{\delta}_{j}(0)\right|^{2} +\mathbb{E}\left|X^{\delta}_{j}(0)\right|^{2}<\infty. $$

Theorem 2.1

Assume that \(\{A^{\delta }_{i}(0), X^{\delta }_{i}(0)\}\) is i.i.d. and independent of \(\{B_{i}\}\). Under (A1), for every \(\phi \in {C_{b}^{2}}(\mathbb {R}^{d})\), we have
$$\left<v^{\delta}(t,\cdot),\phi\right>=\mathbb{E}\left(A_{i}^{\delta}(t)T_{\delta}\phi(X^{\delta}_{i}(t))\right) $$
and
$$ \left<v^{\delta}(t,\cdot), \phi\right>=\left<v^{\delta}(0,\cdot), \phi\right>+{\int_{0}^{t}}\left<v^{\delta}(s,\cdot), L_{v^{\delta}}\phi\right>ds +{\int_{0}^{t}}\left<v^{\delta}(s,\cdot), L\phi\right>ds, $$
(2.2)
where
$$T_{\delta}\phi(x)=\int_{\mathbb{R}^{d}}p_{\delta}(x-y)\phi(y)dy, $$
$$L\phi(x)=\frac{1}{2}\sum_{i,j=1}^{d}a_{ij}(x)\partial_{x_{i},x_{j}}\phi(x), $$
and
$$L_{v}\phi(x)=\sum_{i=1}^{d}\tilde{b}_{i}(x,v)\partial_{x_{i}}\phi(x)+\tilde{c}(x,v)\phi(x), $$
where \(\tilde {b}_{i}\) is the ith coordinate of \(\tilde {b}\) and \(a_{ij}=(\sigma \sigma ^{*})_{ij}\).

Proof

By the law of large numbers, we have
$$\left<v^{\delta}(t,\cdot),\phi\right>={\lim}_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}A_{i}^{\delta}(t)T_{\delta}\phi(X_{i}^{\delta}(t))=\mathbb{E}\left(A_{i}^{\delta}(t)T_{\delta}\phi(X^{\delta}_{i}(t))\right). $$
Applying Itô’s formula to (2.1),
$$\begin{array}{@{}rcl@{}} d\left(A^{\delta}_{i}(t)T_{\delta}\phi\left(X^{\delta}_{i}(t)\right)\right) &=&A^{\delta}_{i}(t)\tilde{c}\left(X^{\delta}_{i}(t),v^{\delta}(t,X_{i}^{\delta}(t))\right) T_{\delta}\phi\left(X^{\delta}_{i}(t)\right)dt\\ &&+A^{\delta}_{i}(t)\nabla^{*}T_{\delta}\phi\left(X^{\delta}_{i}(t)\right)\left[\sigma(X^{\delta}_{i}(t)){dB}_{i}(t) + \tilde{b}(X^{\delta}_{i}(t),v^{\delta}(t,X_{i}^{\delta}(t)))dt\right]\\ &&+\frac{1}{2}A^{\delta}_{i}(t)\sum_{k,m=1}^{d}\partial_{x_{k},x_{m}} T_{\delta}\phi\left(X^{\delta}_{i}(t)\right)a_{km}\left(X^{\delta}_{i}\left(t\right)\right)dt\\ &=&A^{\delta}_{i}(t)L_{v^{\delta}}T_{\delta}\phi\left(X^{\delta}_{i}(t)\right)dt +A^{\delta}_{i}(t){LT}_{\delta}\phi\left(X^{\delta}_{i}(t)\right)dt\\ &&+A^{\delta}_{i}(t)\nabla^{*}T_{\delta}\phi\left(X^{\delta}_{i}(t)\right) \sigma(X^{\delta}_{i}(t)){dB}_{i}(t), \end{array} $$
(2.3)

where \(\nabla ^{*}\) denotes the transpose of the gradient operator \(\nabla \).

By the boundedness of \(\tilde {c}\), it is easy to show that there is a constant K such that
$$ \mathbb{E}\sup_{0\leq t\leq T}A_{i}^{\delta}\left(t\right)^{2}=\sup_{i}\mathbb{E}\sup_{0\leq t\leq T}A_{i}^{\delta}\left(t\right)^{2}\le K<\infty. $$
(2.4)
Hence, the martingale term on the right hand side of (2.3) can be estimated as follows:
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\sup_{0\leq t\leq T}\left|\frac{1}{n}\sum_{i=1}^{n}{\int_{0}^{t}}A^{\delta}_{i}(s)\nabla^{*}T_{\delta}\phi\left(X^{\delta}_{i}(s)\right)\sigma(X^{\delta}_{i}(s)){dB}_{i}(s)\right|^{2}\\ &\leq&\frac{4}{n^{2}}\sum_{i=1}^{n}\mathbb{E}{\int_{0}^{T}}A^{\delta}_{i}(s)^{2}\left|\nabla^{*}T_{\delta}\phi\left(X^{\delta}_{i}(s)\right)\sigma(X^{\delta}_{i}(s))\right|^{2}ds\\ &\leq&\frac{4}{n}\parallel\nabla T_{\delta}\phi\parallel_{\infty}^{2}K^{2}T\mathbb{E}\sup_{0\leq t\leq T}A_{i}^{\delta}\left(t\right)^{2}\rightarrow 0 \text{~as~}n\rightarrow\infty. \end{array} $$

Integrating and averaging both sides of (2.3), we see that (2.2) holds. □

Theorem 2.2

The solution to particle system (1.4) is unique and its density function is the solution to partial differential equation (1.3).

Proof

First, note that for any fixed \(i=1,2,\ldots \), the SDE
$${dX}_{i}(t)=\tilde{b}(X_{i}(t),v(t,X_{i}(t)))dt+\sigma(X_{i}(t)){dB}_{i}(t) $$
has a unique solution because of the Lipschitz condition on the coefficients. Since the partial differential equation (1.3) has a unique solution,
$${dA}_{i}(t)=A_{i}(t)\tilde{c}\left(X_{i}(t), v\left(t,X_{i}(t)\right)\right)dt $$
is solvable. The i.i.d. property of \(\{(A_{i}(0),X_{i}(0))\}\) and the independence of \(\{B_{i}\}\), \(i=1,2,\ldots \), ensure that V(t) is well-defined.
Following similar steps as in Theorem 2.1, for any \(\phi \in {C_{b}^{2}}(\mathbb {R}^{d})\) with compact support, it is easy to get
$$\begin{array}{@{}rcl@{}} \left<V(t), \phi\right>=\left<V(0), \phi\right>+{\int_{0}^{t}}\left<V(s), L_{V(s)}\phi\right>ds+{\int_{0}^{t}}\left<V(s), L\phi\right>ds. \end{array} $$
(2.5)
Since (2.5) is the weak form of a parabolic PDE satisfying the uniform ellipticity condition, by standard PDE theory it is well known that V(t) is absolutely continuous with respect to the Lebesgue measure. We denote the density function by v(t,x). Then,
$$\begin{array}{@{}rcl@{}} \int_{\mathbb{R}^{d}}\phi(x)v(t,x)dx&=&\left<V(0),\phi\right>+{\int_{0}^{t}}ds\int_{\mathbb{R}^{d}}\sum_{i=1}^{d}\partial_{x_{i}}\phi(x)\tilde{b}_{i}(x,v(s,x))v(s,x)dx\\ &&+{\int_{0}^{t}}ds\int_{\mathbb{R}^{d}}\tilde{c}\left(x,v(s,x)\right)\phi(x)v(s,x)dx\\ &&+{\int_{0}^{t}}ds\int_{\mathbb{R}^{d}}\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\phi(x)a_{ij}(x)v(s,x)dx\\ &=&\int_{\mathbb{R}^{d}}\phi(x)v(0,x)dx-{\int_{0}^{t}}ds\int_{\mathbb{R}^{d}}\sum_{i=1}^{d}\partial_{x_{i}}\left[\tilde{b}_{i}(x,v(s,x))v(s,x)\right]\phi(x)dx\\ &&+{\int_{0}^{t}}ds\int_{\mathbb{R}^{d}}\tilde{c}\left(x,v(s,x)\right)\phi(x)v(s,x)dx\\ &&+{\int_{0}^{t}}ds\int_{\mathbb{R}^{d}}\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\left[a_{ij}(x)v(s,x)\right]\phi(x)dx.\\ \end{array} $$
Therefore
$$\begin{array}{@{}rcl@{}} v(t,x)&=&v(0,x)+{\int_{0}^{t}}\tilde{c}\left(x, v(s,x)\right)v(s,x)ds +{\int_{0}^{t}}\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\left[a_{ij}(x)v(s,x)\right]ds\\ &&-{\int_{0}^{t}}\sum_{i=1}^{d}\partial_{x_{i}}\left[\tilde{b}_{i}(x,v(s,x))v(s,x)\right]ds. \end{array} $$
In differential form, we have
$$\begin{array}{@{}rcl@{}} \frac{\partial v(t,x)}{\partial t}&=&\tilde{c}\left(x, v(t,x)\right)v(t,x)+\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\left[a_{ij}(x)v(t,x)\right]\\ &&-\sum_{i=1}^{d}\partial_{x_{i}}\left[\tilde{b}_{i}(x,v(t,x))v(t,x)\right], \end{array} $$

i.e. v(t,x) is the solution to Eq. (1.3). □

Remark 2.1

For any \(\phi \in {C_{b}^{2}}(\mathbb {R}^{d})\), it is obvious that
$$\left<v(t),\phi\right>=\mathbb{E}\left(A_{i}(t)\phi\left(X_{i}(t)\right)\right), $$
where \(i=1,2,\ldots \)
Next we introduce a finite particle system to obtain the approximate solution: for fixed \(\delta >0\), \(t\in (0,T]\),
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} dX^{n,\delta}_{i}(t)=\tilde{b}(X^{n,\delta}_{i}(t),v^{n,\delta}(t,X_{i}^{n,\delta}(t)))dt+\sigma(X^{n,\delta}_{i}(t)){dB}_{i}(t),\\ dA^{n,\delta}_{i}(t)=A^{n,\delta}_{i}(t)\tilde{c}\left(X^{n,\delta}_{i}(t), v^{n,\delta}(t,X_{i}^{n,\delta}(t))\right)dt\\ v^{n,\delta}(t,x)=\frac{1}{n}\sum_{j=1}^{n}A^{n,\delta}_{j}(t)p_{\delta}(x-X_{j}^{n,\delta}(t)), \end{array}\right. \end{array} $$
(2.6)

where \(i=1,2,\ldots,n\). The initial values are given by \(X_{i}^{n,\delta }(0)=X_{i}(0)\), \(A_{i}^{n,\delta }(0)=A_{i}(0)\).
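The finite system (2.6) is what one would actually simulate. The sketch below applies an Euler time discretization (the paper does not discretize time here, so the stepping scheme and the toy coefficients in the test are assumptions made for illustration) in d=1, with the empirical density \(v^{n,\delta }\) recomputed from the particles at every step and fed back into the drift and the weight dynamics:

```python
import numpy as np

def particle_system(x0, b_tilde, c_tilde, sigma, delta, T=1.0,
                    n_steps=50, seed=2):
    """Euler time-stepping of the interacting system (2.6) in d=1.

    x holds the locations X_i^{n,delta}, a the weights A_i^{n,delta}
    (started at 1).  Each step evaluates v^{n,delta}(t, X_i) =
    (1/n) sum_j A_j p_delta(X_i - X_j) and feeds it into the drift
    b_tilde and the weight rate c_tilde."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    a = np.ones_like(x)
    n, dt = len(x), T / n_steps
    for _ in range(n_steps):
        diff = x[:, None] - x[None, :]
        kern = np.exp(-diff**2 / (2.0 * delta)) / np.sqrt(2.0 * np.pi * delta)
        v = kern @ a / n                          # v^{n,delta}(t, X_i)
        dW = rng.normal(0.0, np.sqrt(dt), size=n)
        x = x + b_tilde(x, v) * dt + sigma(x) * dW
        a = a * (1.0 + c_tilde(x, v) * dt)        # Euler for dA = A c~ dt
    return x, a
```

With \(\tilde {b}=\tilde {c}=0\) and σ=1 the particles are independent Brownian motions, so starting from N(0,1) locations the terminal variance is close to 1+T and the weights stay at 1.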

Similar to (2.4), we can prove that
$$ \sup_{i}\mathbb{E}\sup_{0\leq t\leq T}A_{i}^{n,\delta}\left(t\right)^{2}\le K<\infty. $$
(2.7)
Let
$$\tau_{n,\delta}=\inf\left\{t:\;\frac1n\sum^{n}_{i=1}\left(A_{i}^{\delta}\left(t\right)^{2}+ A_{i}^{n,\delta}\left(t\right)^{2}\right)>\delta^{-1/2}\right\}.$$
It then follows from (2.4) and (2.7) that
$$ \mathbb{P}(\tau_{n,\delta}\le T)\le 2K\sqrt{\delta}. $$
(2.8)
For simplicity, introduce the notation
$$M_{i}^{n,\delta}(t)=\log A_{i}^{n,\delta}(t) $$
and
$$M_{i}^{\delta}(t)=\log A_{i}^{\delta}(t).$$
Then
$$M_{i}^{n,\delta}(t)={\int_{0}^{t}}\tilde{c}\left(X^{n,\delta}_{i}(s), v^{n,\delta}(s,X_{i}^{n,\delta}(s))\right)ds$$
and
$$M_{i}^{\delta}(t)={\int_{0}^{t}}\tilde{c}\left(X^{\delta}_{i}(s), v^{\delta}(s,X_{i}^{\delta}(s))\right)ds.$$
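Working with \(M_{i}=\log A_{i}\) turns the multiplicative weight dynamics \(dA=A\tilde {c}\,dt\) into additive ones, which is also convenient numerically. A minimal comparison of the two discretized forms (the constant rate path in the check is a hypothetical input, and A(0)=1 is assumed):

```python
import numpy as np

def weight_two_ways(c_path, dt):
    """Evolve a weight along a given path of rate values c two ways: the
    multiplicative Euler step A <- A (1 + c dt) for dA = A c dt, and the
    additive log-weight step M <- M + c dt with A = e^M (taking A(0) = 1)."""
    a, m = 1.0, 0.0
    for c in c_path:
        a *= 1.0 + c * dt
        m += c * dt
    return a, np.exp(m)
```

Both forms converge to \(\exp (\int _{0}^{t}c\,ds)\) as dt shrinks; the log-weight form is exact for piecewise-constant rates.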

Proposition 2.1

Under conditions (I) and (II), we have
$$\mathbb{E}\sup_{t\leq T\wedge\tau_{n,\delta}}|M_{i}^{n,\delta}(t)-M_{i}^{\delta}(t)|^{2}\leq \frac{K_{\delta,T}}{n} $$
and
$$\mathbb{E}\sup_{t\leq T\wedge\tau_{n,\delta}}|X_{i}^{n,\delta}(t)-X_{i}^{\delta}(t)|^{2}\leq \frac{K_{\delta,T}}{n}. $$

Proof

For simplicity of notation, we assume that \(\tau _{n,\delta }\geq T\). Let
$$\tilde{v}^{n,\delta}(t,x)=\frac{1}{n}\sum_{j=1}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t)) $$
and
$$\tilde{v}_{i}^{n,\delta}(t,x)=\frac{1}{n-1}\sum_{j=1,j\neq i}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t)). $$
Then
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\sup_{r\leq t}|M_{i}^{n,\delta}(r)-M_{i}^{\delta}(r)|^{2}\\ &\leq&T{\int_{0}^{t}}\mathbb{E}\left|\tilde{c}\left(X_{i}^{n,\delta}(s),v^{n,\delta}(s,X_{i}^{n,\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}ds. \end{array} $$
Note that
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left|\tilde{c}\left(X_{i}^{n,\delta}(s),v^{n,\delta}(s,X_{i}^{n,\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ &\leq&2\mathbb{E}\left|\tilde{c}\left(X_{i}^{n,\delta}(s),v^{n,\delta}(s,X_{i}^{n,\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{n,\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ &&+2\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),v^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ &\leq&2K^{2}\mathbb{E}\left|X_{i}^{n,\delta}(s)-X_{i}^{\delta}(s)\right|^{2}\\ &&+6\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),v^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}^{n,\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ &&+6\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}(s,X_{i}^{\delta}(s))\right) \right|^{2}\\ &&+6\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ &\leq&2K^{2}\mathbb{E}\left|X_{i}^{n,\delta}(s)-X_{i}^{\delta}(s)\right|^{2}\\ &&+6\mathbb{E}\left(K\left(\frac{1}{n\delta^{\frac{d}{2}}}\sum_{j=1}^{n}\left|A_{j}^{n,\delta}(s)-A_{j}^{\delta}(s)\right|+\frac{1}{n\delta^{\frac{d}{2}+\frac{1}{2}}} \sum_{j=1}^{n}A_{j}^{\delta}(s)\left|X_{j}^{n,\delta}(s)-X_{j}^{\delta}(s)\right|\right)\right)^{2}\\ &&+6\mathbb{E}\left(\frac{K}{\delta^{\frac{d}{2}}}\left(\frac{1}{n-1}A_{i}^{\delta}(s) +\frac{1}{n(n-1)}\sum_{j=1}^{n}A_{j}^{\delta}(s)\right)\right)^{2}\\ &&+6\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ 
&\leq&2K^{2}\mathbb{E}\left|X_{i}^{n,\delta}(s)-X_{i}^{\delta}(s)\right|^{2}\\ &&+\frac{12K^{4}}{n\delta^{d}}\sum_{j=1}^{n}\mathbb{E}\left|M_{j}^{n,\delta}(s)-M_{j}^{\delta}(s)\right|^{2}+\frac{12K^{4}}{n\delta^{d+1}}\sum_{j=1}^{n}\mathbb{E}\left|X_{j}^{n,\delta}(s)-X_{j}^{\delta}(s)\right|^{2}+\frac{6K^{4}}{\delta^{d} n^{2}}\\ &&+6\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}, \end{array} $$
(2.9)
where the last inequality follows from the Cauchy-Schwarz inequality and the fact that, for \(s\leq \tau _{n,\delta }\),
$$\frac1n\sum^{n}_{j=1}A^{\delta}_{j}(s)^{2}\le\delta^{-1/2}. $$
On the other hand,
$$\begin{array}{@{}rcl@{}} \begin{aligned} &\mathbb{E}\sup_{r\leq t}\left|X_{i}^{n,\delta}(t)-X_{i}^{\delta}(t)\right|^{2}\\ \leq&12\mathbb{E}{\int_{0}^{t}}\left|\sigma(X^{n,\delta}_{i}(s))-\sigma(X^{\delta}_{i}(s))\right|^{2}ds\\ &+3t\mathbb{E}{\int_{0}^{t}}\left|\tilde{b}(X^{n,\delta}_{i}(s),v^{n,\delta}(s,X^{n,\delta}_{i}(s)))-\tilde{b}(X^{\delta}_{i}(s),v^{\delta}(s,X^{\delta}_{i}(s)))\right|^{2}ds\\ \leq&6(3t+2)\mathbb{E}{\int_{0}^{t}}\Big\{K^{2}\left|X_{i}^{n,\delta}(s)-X_{i}^{\delta}(s)\right|^{2}\\ &+\frac{2K^{4}}{n\delta^{d}}\sum_{j=1}^{n}\left|M_{j}^{n,\delta}(s)-M_{j}^{\delta}(s)\right|^{2}+\frac{2K^{4}}{n\delta^{d+1}}\sum_{j=1}^{n}\left|X_{j}^{n,\delta}(s)-X_{j}^{\delta}(s)\right|^{2}+\frac{K^{4}}{\delta^{d} n^{2}}\\ &+\left|\tilde{b}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}(s,X_{i}^{\delta}(s))\right)-\tilde{b}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\Big\}ds. \end{aligned} \end{array} $$
(2.10)
Let
$$g_{n}(t)=\mathbb{E}\sup_{r\leq t}\left|M_{i}^{n,\delta}(r)-M_{i}^{\delta}(r)\right|^{2} $$
and
$$f_{n}(t)=\mathbb{E}\sup_{r\leq t}\left|X_{i}^{n,\delta}(r)-X_{i}^{\delta}(r)\right|^{2}. $$
We estimate the last term in (2.10) as follows:
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left|\tilde{b}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}(s,X_{i}^{\delta}(s))\right) -\tilde{b}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2}\\ &\leq&K\mathbb{E}\left|\frac{1}{n-1}\sum_{j=1,j\neq i}^{n}A^{\delta}_{j}(s)p_{\delta}(X_{i}^{\delta}(s)-X_{j}^{\delta}(s))-{\lim}_{n\rightarrow\infty}\frac{1}{n} \sum_{j=1}^{n}A^{\delta}_{j}(s)p_{\delta}(X_{i}^{\delta}(s)-X_{j}^{\delta}(s))\right|^{2}\\ &=&K\mathbb{E}\left|\frac{1}{n-1}\sum_{j=1,j\neq i}^{n}\left(A^{\delta}_{j}(s)p_{\delta}(X_{i}^{\delta}(s)-X_{j}^{\delta}(s))-\mathbb{E} A^{\delta}_{j}(s)p_{\delta}(X_{i}^{\delta}(s)-X_{j}^{\delta}(s))\right)\right|^{2}\\ &=&\frac{K}{(n-1)^{2}}\sum_{j=1,j\neq i}^{n}\mathbb{E}\left(A^{\delta}_{1}(s)p_{\delta}(X_{i}^{\delta}(s)-X_{1}^{\delta}(s))-\mathbb{E} A^{\delta}_{1}(s)p_{\delta}(X_{i}^{\delta}(s)-X_{1}^{\delta}(s))\right)^{2}\\ &=&\frac{K}{n-1}\mathbb{E}\left(A^{\delta}_{1}(s)p_{\delta}(X_{i}^{\delta}(s) -X_{1}^{\delta}(s))\right)^{2}\\ &\leq&\frac{K}{\delta^{d} (n-1)}\mathbb{E} A^{\delta}_{1}(s)^{2}. \end{array} $$
(2.11)
Similarly,
$$\mathbb{E}\left|\tilde{c}\left(X_{i}^{\delta}(s),\tilde{v}_{i}^{n,\delta}\left(s,X_{i}^{\delta}(s)\right)\right) -\tilde{c}\left(X_{i}^{\delta}(s),v^{\delta}(s,X_{i}^{\delta}(s))\right)\right|^{2} \leq\frac{K}{\delta^{d} (n-1)}\mathbb{E} A^{\delta}_{1}(s)^{2}.$$
Then,
$$ g_{n}(t) {\leq\int_{0}^{t}}\{K_{1}\left(f_{n}(s)+\frac{g_{n}(s)}{\delta^{d}}+\frac{f_{n}(s)}{\delta^{d+1}}\right)+K_{2}\frac{1}{\delta^{d} n}\}ds $$
(2.12)
and
$$ f_{n}(t) {\leq\int_{0}^{t}}\{K_{3}\left(f_{n}(s)+\frac{g_{n}(s)}{\delta^{d}}+\frac{f_{n}(s)}{\delta^{d+1}}\right)+K_{4}\frac{1}{\delta^{d} n}\}ds. $$
(2.13)
Adding (2.12) and (2.13), for tT, we have
$$g_{n}(t)+f_{n}(t)\leq K_{5}\left(1+\frac{1}{\delta^{d+1}}\right){\int_{0}^{t}}\left(g_{n}(s)+f_{n}(s)\right)ds+\frac{K_{6}}{\delta^{d} n}. $$
By Gronwall’s inequality, we have
$$g_{n}(t)+f_{n}(t)\leq \frac{K_{6}}{\delta^{d} n}e^{K_{5}\left(1+\frac{1}{\delta^{d+1}}\right)t}=\frac{K_{\delta,T}}{n}. $$
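Gronwall's inequality, used here to close the estimate, asserts that \(g(t)\leq \alpha +\beta \int _{0}^{t}g(s)ds\) for all t forces \(g(t)\leq \alpha e^{\beta t}\). A small numerical check of this implication on a grid (the test functions are arbitrary examples, and the function name is ours):

```python
import numpy as np

def gronwall_check(g, alpha, beta, T=1.0, n=20000):
    """Check Gronwall's implication on a grid: if g(t) <= alpha +
    beta * int_0^t g(s) ds at every grid point (premise), then
    g(t) <= alpha * exp(beta t) must also hold there (conclusion)."""
    ts = np.linspace(0.0, T, n + 1)
    gs = g(ts)
    dt = T / n
    # cumulative trapezoid approximation of int_0^t g(s) ds
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (gs[1:] + gs[:-1]) * dt)))
    premise = bool(np.all(gs <= alpha + beta * integral + 1e-9))
    conclusion = bool(np.all(gs <= alpha * np.exp(beta * ts) + 1e-9))
    return premise, conclusion
```

The function \(g(t)=e^{t}\) with α=β=1 saturates both sides, while any slower-growing g satisfies the premise with room to spare.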
Then, we have
$$\mathbb{E}\sup_{t\leq T}\left|M_{i}^{n,\delta}(t)-M_{i}^{\delta}(t)\right|^{2}\leq \frac{K_{\delta,T}}{n} $$
and
$$\mathbb{E}\sup_{t\leq T}\left|X_{i}^{n,\delta}(t)-X_{i}^{\delta}(t)\right|^{2}\leq \frac{K_{\delta,T}}{n}. $$

Lemma 2.1

For 0≤tT, we have
$$\mathbb{E}\rho\left(v^{n,\delta}(t), \tilde{v}^{n,\delta}(t)\right)\leq \frac{K_{\delta,T}}{\sqrt{n}}+K_{T}\sqrt{\delta}. $$

Proof

$$\begin{array}{@{}rcl@{}} &&\rho\left(v^{n,\delta}(t), \tilde{v}^{n,\delta}(t)\right)\\ &=&\int_{\mathbb{R}^{d}}\left|\frac{1}{n}\sum_{j=1}^{n}A^{n,\delta}_{j}(t)p_{\delta}(x-X_{j}^{n,\delta}(t)) -\frac{1}{n}\sum_{j=1}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t))\right|dx\\ &\leq&\!\int_{\mathbb{R}^{d}}\!\frac{1}{n}\!\sum_{j=1}^{n}\!\left(\!\left|A^{n,\delta}_{j}(t) \,-\, A^{\delta}_{j}(t)\!\right|\!p_{\delta}\! \left(x \,-\, X_{j}^{n,\delta}(t)\right) \,+\, A^{\delta}_{j}(t)\!\left|p_{\delta}\!\left(x \,-\, X_{j}^{n,\delta}(t)\right)\! -\!p_{\delta}\!\left(x \,-\, X_{j}^{\delta}(t)\right)\!\right|\right)dx\\ &\leq&\frac{1}{n}\sum_{j=1}^{n}\int_{\mathbb{R}^{d}}\left(A^{n,\delta}_{j}(t)\vee A^{\delta}_{j}(t)\right)\left\{\left|M_{j}^{n,\delta}(t) -M_{j}^{\delta}(t)\right|p_{\delta}\left(x-X_{j}^{n,\delta}(t)\right)\right.\\ &&+\left.\left|p_{\delta}\left(x-X_{j}^{n,\delta}(t)\right) -p_{\delta}\left(x-X_{j}^{\delta}(t)\right)\right|\right\}dx\\ &\leq&\frac{1}{n}\sum_{j=1}^{n}\left(A^{n,\delta}_{j}(t)\vee A^{\delta}_{j}(t)\right)\left\{\left|M_{j}^{n,\delta}(t)-M_{j}^{\delta}(t)\right|+\frac{K}{\delta^{\frac{d}{2}}} \left|X_{j}^{n,\delta}(t)-X_{j}^{\delta}(t)\right|\right\}\\ &\leq&\sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(A_{j}^{n,\delta}(t)\vee A_{j}^{\delta}(t)\right)^{2}} \sqrt{\frac{1}{n}\sum_{j=1}^{n}\left|M_{j}^{n,\delta}(t)-M_{j}^{\delta}(t)\right|^{2}}\\ &&+\sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(A_{j}^{n,\delta}(t)\vee A_{j}^{\delta}(t)\right)^{2}}\sqrt{\frac{1}{n}\sum_{j=1}^{n} \frac{K^{2}}{\delta^{d}}\left|X_{j}^{n,\delta}(t)-X_{j}^{\delta}(t)\right|^{2}}. \end{array} $$
(2.14)
By the boundedness of \(\tilde {c}\), we have
$$ \mathbb{E}\sup_{r\leq t\wedge\tau_{n,\delta}}\rho\left(v^{n,\delta}(r), \tilde{v}^{n,\delta}(r)\right)^{2}\leq K\left(1+\frac{1}{\delta^{d}}\right)\left(g_{n}(t)+f_{n}(t)\right)\leq \frac{K_{\delta,T}}{n}. $$
(2.15)
Then,
$$\begin{array}{@{}rcl@{}} \mathbb{E}\rho\left(v^{n,\delta}(t), \tilde{v}^{n,\delta}(t)\right) &=&\mathbb{E}\rho\left(v^{n,\delta}(t), \tilde{v}^{n,\delta}(t)\right)1_{t<\tau_{n,\delta}} +\mathbb{E}\rho\left(v^{n,\delta}(t), \tilde{v}^{n,\delta}(t)\right)1_{t\ge\tau_{n,\delta}}\\ &\leq& \frac{K_{\delta,T}}{\sqrt{n}}+K_{T}\sqrt{\delta}, \end{array} $$

where the last inequality follows from (2.8), (2.15) and the fact that ρ≤2. □

Lemma 2.2

For 0≤tT, we have \(\mathbb {E}\rho \left (\tilde {v}^{n,\delta }(t), v^{\delta }(t)\right)\leq \frac {K_{\delta }}{\sqrt {n}}\).

Proof

$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\rho\left(\tilde{v}^{n,\delta}(t), v^{\delta}(t)\right)\\ &=&\mathbb{E}\left(\int_{\mathbb{R}^{d}}\left|\frac{1}{n}\sum_{j=1}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t))-{\lim}_{n\rightarrow\infty}\frac{1}{n}\sum_{j=1}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t))\right|dx\right)\\ &\leq&\int_{\mathbb{R}^{d}}\left(\mathbb{E}\left|\frac{1}{n}\sum_{j=1}^{n}A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t))-\mathbb{E}\left(A^{\delta}_{j}(t)p_{\delta}(x-X_{j}^{\delta}(t))\right)\right|^{2}\right)^{\frac{1}{2}}dx\\ &\leq&\int_{\mathbb{R}^{d}}\left(\frac{1}{n}\mathbb{E}\left(A^{\delta}_{1}(t)p_{\delta}(x-X_{1}^{\delta}(t))-\mathbb{E}\left(A^{\delta}_{1}(t)p_{\delta}(x-X_{1}^{\delta}(t))\right)\right)^{2}\right)^{\frac{1}{2}}dx\\ &\leq&\frac{1}{\sqrt{n}}\int_{\mathbb{R}^{d}}\left(\mathbb{E}\left(\left(A^{\delta}_{1}(t)\right)^{2}p^{2}_{\delta}(x-X_{1}^{\delta}(t))\right)\right)^{\frac{1}{2}}dx\\ &=&\frac{1}{\sqrt{n}}\int_{\mathbb{R}^{d}}\left(\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}a^{2}p^{2}_{\delta}(x-y)e^{2|x|}g(a,y)dady\right)^{\frac{1}{2}}e^{-|x|}dx\\ &\leq&\frac{1}{\sqrt{n}}\left(\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}a^{2}p^{2}_{\delta}(x-y)e^{2|x|}g(a,y)dadydx\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^{d}}e^{-2|x|}dx\right)^{\frac{1}{2}}\\ &\leq&\frac{K_{\delta}}{\sqrt{n}}\left(\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}a^{2}e^{2|y|}g(a,y)dady\right)^{\frac{1}{2}}\\ &=&\frac{K_{\delta}}{\sqrt{n}}\left(\mathbb{E}\left(A^{\delta}_{1}(t)\right)^{2}e^{2|X_{1}^{\delta}(t)|}\right)^{\frac{1}{2}}\\ &\leq&\frac{K_{\delta}}{\sqrt{n}} \end{array} $$

where g is the joint probability density of \((A^{\delta }_{1}(t),X^{\delta }_{1}(t))\). □

Combining Lemmas 2.1 and 2.2, we obtain the approximation of v^{n,δ} to v^{δ} as n→∞ with δ being fixed. The next lemma estimates the distance between v^{δ} and v. We adapt the argument of Crisan and Xiong (2014) to the current setup.

Lemma 2.3

There exists a constant \(C_{3}(T)\), such that
$$\sup\limits_{x\in\mathbb{R}^{d}}v^{\delta}(t,x)\leq C_{3}(T) $$

Proof

By the convolution form of (2.2), we have
$$\begin{array}{@{}rcl@{}} v^{\delta}(t,x)&=&\int_{\mathbb{R}^{d}}p_{t}(y,x)v^{\delta}(0,y)dy +{\int_{0}^{t}}\int_{\mathbb{R}^{d}}v^{\delta}(s,y)L_{v^{\delta}}p_{t-s}(y,x)dyds\\ &=&\int_{\mathbb{R}^{d}}p_{t}(y,x)v^{\delta}(0,y)dy\\ &&+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}v^{\delta}(s,y)\!\left(\! \tilde{b}^{*}(y,v^{\delta}(s,y)) \nabla_{y}p_{t-s}(y,x)+\tilde{c}(y,v^{\delta}(s,y))p_{t-s}(y,x)\!\right)dyds, \end{array} $$
(2.16)
where \(p_{t}(y,x)\) is the transition density of the reflecting diffusion with generator L. By Theorem 6.4.5 in Friedman (1975), there are constants \(K_{1},K_{2}\) such that
$$\left|\nabla_{y}p_{t-s}(y,x)\right|\leq K_{1}(t-s)^{-((d+1)/2)}\exp\left(-K_{2}\frac{|x-y|^{2}}{t-s}\right)\equiv\frac{1}{\sqrt{t-s}}q_{t-s}(x-y). $$
Plugging this into (2.16) and using the boundedness of \(v^{\delta }(0,\cdot),\ \tilde {b},\ \tilde {c}\), we get
$$v^{\delta}(t,x)\leq C_{1}+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}v^{\delta}(s,y)C_{2}\left(\frac{1}{\sqrt{t-s}}q_{t-s}(x-y)+p_{t-s}(y,x)\right)dyds. $$
Define \(a_{t}=\sup \limits _{x\in \mathbb {R}^{d}}v^{\delta }(t,x)\). Then
$$a_{t}\leq C_{1}+C_{2}{\int_{0}^{t}}(\frac{1}{\sqrt{t-s}}+1)a_{s}ds. $$
By an extended Gronwall’s inequality, we get
$$a_{t}\leq C_{3}(T). $$
□
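Here, the extended Gronwall inequality refers to the following standard fact, recorded for the reader's convenience: if a nonnegative bounded function a_t satisfies
$$a_{t}\leq C_{1}+C_{2}{\int_{0}^{t}}\left(\frac{1}{\sqrt{t-s}}+1\right)a_{s}ds,\qquad 0\leq t\leq T, $$
then substituting the inequality into itself once and using
$$\int_{s}^{t}\frac{dr}{\sqrt{t-r}\sqrt{r-s}}=\pi $$
removes the singular kernel, leaving \(a_{t}\leq C_{1}'+C_{2}'{\int _{0}^{t}}a_{s}ds\) for suitable constants; the classical Gronwall inequality then gives \(a_{t}\leq C_{3}(T)\).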

Lemma 2.4

There exists a constant \(C_{4}(T)\), such that
$$\sup\limits_{x\in\mathbb{R}^{d}}\left|\nabla v^{\delta}(t,x)\right|\leq C_{4}(T) $$

Proof

Note that
$$\begin{array}{@{}rcl@{}} \frac{d\left<\partial_{i}v^{\delta}(t,\cdot),f\right>}{dt} &=&-\frac{d\left<v^{\delta}(t,\cdot),\partial_{i}f\right>}{dt}\\ &=&-\left<v^{\delta}(t,\cdot),L(\partial_{i}f)\right>-\left<v^{\delta}(t,\cdot),L_{v^{\delta}}(\partial_{i}f)\right>\\ &=&\left<\partial_{i} v^{\delta}(t,\cdot),Lf+L_{v^{\delta}}f\right>+\sum^{d}_{j=1}\left<\partial_{j} v^{\delta}(t,\cdot),{L^{j}_{i}}f\right>+\left<v^{\delta}(t,\cdot),\tilde{L}_{v^{\delta}}f\right>, \end{array} $$
(2.17)
where
$${L^{j}_{i}}f=-\frac12\sum^{d}_{k=1}\partial_{i}a_{jk}\partial_{k}f, $$
and
$$\tilde{L}_{v}f=-\frac12\sum^{d}_{j,k=1}\partial^{2}_{ij}a_{jk}\partial_{k}f +\sum^{d}_{j=1}\partial_{i}\tilde{b}_{j}(\cdot,v)\partial_{j}f+\partial_{i}\tilde{c}(\cdot,v)f.$$
Recalling that \(p_{t}(y,x)\) is the transition density of the reflecting diffusion with generator L, the convolution form of (2.17) is as follows:
$$\begin{array}{@{}rcl@{}} \partial_{i}v^{\delta}(t,x)&=&\int_{\mathbb{R}^{d}}p_{t}(y,x)\partial_{i}v^{\delta}(0,y)dy +{\int_{0}^{t}}\int_{\mathbb{R}^{d}}\partial_{i}v^{\delta}(s,y)L_{v^{\delta}}p_{t-s}(y,x)dyds\\ &&+\sum^{d}_{j=1}{\int_{0}^{t}}\int_{\mathbb{R}^{d}}\partial_{j}v^{\delta}(s,y){L^{j}_{i}}p_{t-s}(y,x)dyds\\ &&+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}v^{\delta}(s,y)\tilde{L}_{v^{\delta}}p_{t-s}(y,x)dyds. \end{array} $$
(2.18)
By Theorem 6.4.5 in Friedman (1975), there are constants \(K_{1},K_{2}\) such that
$$\left|\nabla_{y}p_{t-s}(y,x)\right|\leq K_{1}(t-s)^{-((d+1)/2)}\exp\left(-K_{2}\frac{|x-y|^{2}}{t-s}\right)\equiv\frac{1}{\sqrt{t-s}}q_{t-s}(x-y). $$
Therefore,
$$\begin{array}{@{}rcl@{}} |\nabla v^{\delta}(t,x)|&\leq& C_{5}(T)+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}|\nabla v^{\delta}(s,y)|C_{6}\frac{1}{\sqrt{t-s}}q_{t-s}(x-y)dyds\\ &&+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}v^{\delta}(s,y)C_{7}\frac{1}{\sqrt{t-s}}q_{t-s}(x-y)dyds. \end{array} $$
Define \(b_{t}=\sup \limits _{x\in \mathbb {R}^{d}}|\nabla v^{\delta }(t,x)|\). Then
$$b_{t}\leq C_{8}(T)+C_{6}{\int_{0}^{t}}\left(\frac{1}{\sqrt{t-s}}+1\right)b_{s}ds. $$
By an extended Gronwall’s inequality, we get
$$\sup\limits_{x\in\mathbb{R}^{d}}\left|\nabla v^{\delta}(t,x)\right|\leq C_{4}(T). $$
□

As a consequence of Lemmas 2.3 and 2.4, we have the following lemma.

Lemma 2.5

There exists a constant \(K_{T}\), such that \(\mathbb {E}\rho \left (v^{\delta }(t),v(t)\right)\leq K_{T}\sqrt {\delta }\).

Proof

We define \(\bar {v}^{\delta }=v^{\delta }-v\). Then
$$\begin{array}{@{}rcl@{}} \frac{d}{dt}\left<\bar{v}^{\delta}(t,x),f\right>&=&\left<\bar{v}^{\delta}(t,x),Lf\right>+\left<v^{\delta}(t,\cdot),L_{v^{\delta}}f\right>-\left<v(t,x),L_{v}f\right>\\ &=&\left<\bar{v}^{\delta}(t,x),Lf\right>+\left<\bar{v}^{\delta}(t,x),L_{v}f\right> +\left<v^{\delta}(t,\cdot),L_{v^{\delta}}f-L_{v}f\right>. \end{array} $$
(2.19)
Therefore,
$$\frac{d}{ds}\left<\bar{v}^{\delta}(s,x),T_{t-s}f\right>=\left<\bar{v}^{\delta}(s,x),L_{v}T_{t-s}f\right>+\left<v^{\delta}(s,x),L_{v^{\delta}}T_{t-s}f-L_{v}T_{t-s}f\right>, $$
and hence, integrating over [0,t],
$$\begin{array}{@{}rcl@{}} \left<\bar{v}^{\delta}(t,x),f\right>-\left<\left(T_{\delta}-I\right)v(0,x),T_{t}f\right>&=&{\int_{0}^{t}}\left<\bar{v}^{\delta}(s,x),L_{v}T_{t-s}f\right>ds\\ &&+{\int_{0}^{t}}\left<v^{\delta}(s,x),L_{v^{\delta}}T_{t-s}f-L_{v}T_{t-s}f\right>ds. \end{array} $$
By the convolution form of the above equation, we get
$$\begin{array}{@{}rcl@{}} \bar{v}^{\delta}(t,y)&=&T_{t}\left(T_{\delta}-I\right)v(0,y)\\ &&+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}\bar{v}^{\delta}(s,x)\! \left(\! \nabla^{*}p_{t-s}(x \,-\, y)\tilde{b}\left(x,v(s,x)\right) \!+ \tilde{c}\left(x,v(s,x)\right)p_{t-s}(x-y)\!\right)dxds \\ &&+{\int_{0}^{t}}\int_{\mathbb{R}^{d}}v^{\delta}(s,x)\left[\nabla^{*}p_{t-s}(x-y)\left(\tilde{b}\left(x,v^{\delta}(s,x)\right)-\tilde{b}\left(x,v(s,x)\right)\right)\right. \\ &&\qquad\quad \left.+\left(\tilde{c}\left(x,v^{\delta}(s,x)\right)-\tilde{c}\left(x,v(s,x)\right)\right)p_{t-s}(x-y)\right]dxds. \end{array} $$
(2.20)
Set \(c_{t}(y)=|\bar {v}^{\delta }(t,y)|\), then
$$\begin{array}{@{}rcl@{}} c_{t}(y)&\leq&\int_{\mathbb{R}^{d}}\left|p_{t+\delta}(x-y)-p_{t}(x-y)\right|v(0,x)dx\\ &&+K_{T}{\int_{0}^{t}}\int_{\mathbb{R}^{d}}c_{s}(x)\left(\frac{1}{\sqrt{t-s}}q_{t-s}(x-y) +p_{t-s}(x-y)\right)dxds\\ &\leq&K_{T}\sqrt{\delta}\int_{\mathbb{R}^{d}}q_{t}(x-y)v(0,x)dx\\ &&+K_{T}{\int_{0}^{t}}\int_{\mathbb{R}^{d}}c_{s}(x)\left(\frac{1}{\sqrt{t-s}}q_{t-s}(x\,-\,y) \,+\,p_{t-s}(x \,-\, y)\right)dxds. \end{array} $$
(2.21)
Set \(c_{t}=\int _{\mathbb {R}^{d}}c_{t}(y)dy\), then
$$\begin{array}{@{}rcl@{}} c_{t}&\leq&K_{T}\sqrt{\delta}+K_{T}{\int_{0}^{t}}\left(\frac{1}{\sqrt{t-s}}+1\right)c_{s}ds \\ &\leq&K_{T}\sqrt{\delta}+K_{T}{\int_{0}^{t}}\left(\frac{1}{\sqrt{t-s}}+1\right) \left(K_{T}\sqrt{\delta}+K_{T}{\int_{0}^{s}}\left(\frac{1}{\sqrt{s-r}}+1\right) c_{r}dr\right)ds\\ &\leq&K_{T}\sqrt{\delta}+K_{T}{\int_{0}^{t}}c_{r}dr, \end{array} $$
(2.22)

here we used, in the first inequality, the integrability condition on v(0,·)=f.

Applying Gronwall’s inequality, we get
$$c_{t}\leq K_{T}\sqrt{\delta}. $$
Then, we have
$$\mathbb{E}\rho\left(v^{\delta}(t),v(t)\right)\leq K_{T}\sqrt{\delta}. $$
□

Remark 2.2

Set \(c_{t}^{\infty }=\sup \limits _{y\in \mathbb {R}^{d}}c_{t}(y)\). Similarly, we have
$$c_{t}^{\infty}\leq K_{T}\sqrt{\delta}. $$

Theorem 2.3

The distance between \(v^{n,\delta }(t)\) and v(t) is bounded by \(\frac {K_{\delta,T}}{\sqrt {n}}+K_{T}\sqrt {\delta }\).

Proof

Combining the conclusions from Lemmas 2.1, 2.2 and 2.5, we get
$$\mathbb{E}\rho\left(v^{n,\delta}\left(t\right),v\left(t\right)\right)\leq \frac{K_{\delta,T}}{\sqrt{n}}+K_{T}\sqrt{\delta}. $$

Branching particle system approximation

Note that the constant \(K_{T}\) in Theorem 2.3 above grows exponentially as T increases, and hence the error of the approximation grows exponentially fast. To avoid this drawback of the numerical scheme, we introduce a branching particle system to modify the weights of the particles at the time-discretization steps.

First, we rewrite our infinite particle system, governed by the following stochastic differential equations: for any fixed δ>0, t∈(0,T], and i=1,2,⋯,
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} dX^{\delta}_{i}(t)=\tilde{b}^{\delta}(X^{\delta}_{i}(t),V^{\delta}(t))dt+\sigma(X^{\delta}_{i}(t)){dB}_{i}(t),\\ dA^{\delta}_{i}(t)=A^{\delta}_{i}(t)\tilde{c}^{\delta}\left(X^{\delta}_{i}(t), V^{\delta}(t)\right)dt\\ V^{\delta}(t)={\lim}_{n\rightarrow\infty}\frac{1}{n}\sum_{l=1}^{n}A^{\delta}_{l}(t)\delta_{X^{\delta}_{l}(t)}\\ \end{array}\right. \end{array} $$
(3.1)
where
$$\tilde{c}^{\delta}\left(x, V\right)=\tilde{c}\left(x, \int_{\mathbb{R}^{d}}p_{\delta}\left(x-y\right)V(dy)\right),$$
$$\tilde{b}^{\delta}\left(x, V\right)=\tilde{b}\left(x,\int_{\mathbb{R}^{d}}p_{\delta}\left(x-y\right)V(dy)\right), $$
and
$$\left<V^{\delta}(0),\phi\right>=\int_{\mathbb{R}^{d}}\phi(x)f(x)dx,\qquad\text{for any }\phi\in {C_{b}^{2}}(\mathbb{R}^{d}). $$
For \(V_{i}\in M(\mathbb {R}^{d}), i=1,2\), the Wasserstein metric is given by
$$\rho_{1}\left(V_{1}, V_{2}\right)=\sup\{|\left<V_{1},\phi\right>-\left<V_{2},\phi\right>|: \phi\in\mathbb{C}\}, $$
where
$$\mathbb{C}=\{\phi: \phi\in\mathbb{B}_{1}, \nabla\phi\in\mathbb{B}_{1}, L\phi\in\mathbb{B}_{1}\} $$
and
$$\mathbb{B}_{1}=\{\phi: |\phi(x)-\phi(y)|\leq|x-y|, |\phi(x)|\leq 1, \forall x,y\in\mathbb{R}^{d}\}. $$
Now, we are ready to construct the branching particle system. For fixed δ>0 and ε=n^{−2α} with 0<α<1, there are n particles initially, each with weight 1, at locations \(X_{i}^{n,\delta,{\epsilon }}(0), i=1,2,\cdots,n\), which are i.i.d. random variables in \(\mathbb {R}^{d}\). Assume the time interval is [0,T] and let \(N^{*}=\left [\frac {T}{{\epsilon }}\right ]\) be the largest integer not greater than \(\frac {T}{{\epsilon }}\). Define ε(t)=jε for jε≤t<(j+1)ε. In the time interval [jε,(j+1)ε), j≤N^{*}, there are \({m_{j}^{n}}\) particles alive, and their locations and weights are determined as follows: for \(i=1,2,\cdots,{m_{j}^{n}}\),
$$\left\{\begin{array}{rcl} X_{i}^{n,\delta,{\epsilon}}\left(t\right)&=&X^{n,\delta,{\epsilon}}_{i}\left({j{\epsilon}}\right) +\tilde{b}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}\left({j{\epsilon}}\right),V^{n,\delta,{\epsilon}}({j{\epsilon}})\right)\left(t-j{\epsilon}\right)\\ &&+\sigma(X^{n,\delta,{\epsilon}}_{i}\left(j{\epsilon}\right))\left(B_{i}\left(t\right)-B_{i}\left(j{\epsilon}\right)\right)\\ A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},t\right)&=&\exp\{\tilde{c}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right),V^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left(t-j{\epsilon}\right)\},\\ \end{array}\right.$$
where the initial values are defined as: \(X^{n,\delta,{\epsilon }}_{i}(0)=x, A_{i}^{n,\delta,{\epsilon }}(0,0)=1, {m_{0}^{n}}=n\).
At the end of the interval, the ith particle branches into \(\xi _{j+1}^{i}\) offspring such that the conditional expectation and the conditional variance, given the information prior to the branching, satisfy
$$\mathbb{E}\left(\xi_{j+1}^{i}|\mathcal{F}_{(j+1){\epsilon}-}\right)=A_{i}^{n,\delta,{\epsilon}}(j{\epsilon},\left(j+1\right){\epsilon}) $$
and
$$Var\left(\xi_{j+1}^{i}|\mathcal{F}_{(j+1){\epsilon}-}\right)=\gamma_{j+1}^{i}, $$
where \(\gamma _{j+1}^{i}\) is arbitrary and
$$A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)=\exp\{\tilde{c}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right),V^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right){\epsilon}\}. $$
To minimize \(\gamma _{j+1}^{i}\), take
$$\begin{array}{@{}rcl@{}} \xi_{j+1}^{i}=\left\{ \begin{array}{ll} \left[A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)\right] ~~~~&\text{with probability}~~ 1-\left\{A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)\right\},\\ \left[A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)\right]+1 ~~~~&\text{with probability}~~ \left\{A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)\right\}, \end{array}\right. \end{array} $$
where {x}=x−[x] is the fractional part of x. In this case, we have
$$\gamma_{j+1}^{i}=\{A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)\}\left(1-\{A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},\left(j+1\right){\epsilon}\right)\}\right). $$
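For illustration (not part of the paper), the minimal-variance offspring rule above can be sampled as follows; `sample_offspring` and its parameter names are ours:

```python
import math
import random

def sample_offspring(weight, rng=random):
    """Sample an integer offspring count with mean `weight` and minimal
    variance: floor(weight) with probability 1 - frac(weight), and
    floor(weight) + 1 with probability frac(weight), where
    frac(x) = x - floor(x).  This matches the branching rule above."""
    base = math.floor(weight)
    frac = weight - base
    return base + (1 if rng.random() < frac else 0)

# Empirical check: the sample mean approaches the weight A = e^{c * eps},
# and the counts take only the two values [A] and [A] + 1.
rng = random.Random(0)
A = math.exp(0.3)  # about 1.3499, so each count is 1 or 2
samples = [sample_offspring(A, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
```

Any other unbiased integer-valued rounding of A has conditional variance at least {A}(1−{A}), which is why this two-point rule is the minimizer.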
Now we define the unnormalized approximate filter as follows:
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{l} \left<V^{n,\delta,{\epsilon}}\left(0\right),\phi\right>=\int_{\mathbb{R}^{d}}f(x)\phi(x)dx\\ V^{n,\delta,{\epsilon}}(t)=\frac{1}{n}\sum_{l=1}^{{m_{j}^{n}}}\delta_{X_{l}^{n,\delta,{\epsilon}}(t)}, ~~\text{if} ~~j{\epsilon}\leq t<\left(j+1\right){\epsilon}. \end{array}\right. \end{array} $$
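To make the time-stepping concrete, here is a minimal one-dimensional sketch of the scheme above; it is ours, not the authors' implementation. The coefficients `b_tilde`, `c_tilde`, and `sigma` are illustrative placeholders, and the mollified dependence on V^{n,δ,ε} is simplified to a dependence on the empirical mean of the particle locations.

```python
import math
import random

# Placeholder coefficients -- stand-ins for the paper's mollified drift,
# weight rate, and diffusion coefficient (our choices, for illustration).
def b_tilde(x, v_mean):      # plays the role of b~^delta(x, V)
    return -0.5 * x + 0.1 * v_mean

def c_tilde(x, v_mean):      # plays the role of c~^delta(x, V)
    return 0.2 * math.cos(x)

def sigma(x):
    return 1.0

def branching_step(positions, eps, rng):
    """One interval [j*eps, (j+1)*eps): frozen-coefficient Euler step for
    the locations, exponential weight A over the interval, then branching
    with conditional mean A via minimal-variance rounding."""
    v_mean = sum(positions) / len(positions)  # crude stand-in for the measure
    offspring = []
    for x in positions:
        dB = rng.gauss(0.0, math.sqrt(eps))
        y = x + b_tilde(x, v_mean) * eps + sigma(x) * dB
        A = math.exp(c_tilde(x, v_mean) * eps)
        k = math.floor(A)
        if rng.random() < A - k:       # round A up with probability {A}
            k += 1
        offspring.extend([y] * k)      # each offspring restarts with weight 1
    return offspring

rng = random.Random(1)
n, alpha, T = 500, 0.25, 1.0
eps = n ** (-2 * alpha)                # epsilon = n^(-2 alpha)
particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
for _ in range(int(T / eps)):
    particles = branching_step(particles, eps, rng)
# The approximate filter V^{n,delta,eps}(T) is (1/n) times the sum of the
# point masses at the surviving particle locations.
```

In a faithful implementation, `v_mean` would be replaced by the mollified empirical measure \(\int p_{\delta }(x-y)V^{n,\delta,{\epsilon }}(dy)\) evaluated at each particle's location.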

A preliminary identity

Following similar steps as in Theorem 2.1, for every \(\phi \in {C_{b}^{2}}(\mathbb {R}^{d})\), we have
$$\left<V^{\delta}(t), \phi\right>=\left<V^{\delta}(0), \phi\right>+{\int_{0}^{t}}\left<V^{\delta}(s), \tilde{L}_{V^{\delta}(s)}\phi\right>ds +{\int_{0}^{t}}\left<V^{\delta}(s), L\phi\right>ds,$$
where \(\tilde {L}_{V^{\delta }(s)}\) is defined as
$$\tilde{L}_{V^{\delta}(s)}\phi(x)=\sum_{i=1}^{d}\tilde{b}_{i}^{\delta}(x,V^{\delta}(s))\partial_{x_{i}}\phi(x)+\tilde{c}^{\delta}(x,V^{\delta}(s))\phi(x). $$
Now we imitate section 6.5 of Xiong (2008) to define a backward PDE for s∈[0,t] such that
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{ll} &\frac{\partial\psi_{s}}{\partial s}=-L\psi_{s}-\tilde{L}_{V^{\delta}(s)}\psi_{s}\\ &\psi_{t}=\phi. \end{array}\right. \end{array} $$
(3.2)

Note that ψ s depends on t. Simple calculations show that \(\frac {\partial \left <V^{\delta }(t),\psi _{t}\right >}{\partial t}=0\).

Recall that ∗ denotes the transpose of a vector or matrix. Let
$${\theta_{g}^{B}}(t)=\exp\left(\sqrt{-1}{\int_{0}^{t}}g_{s}^{*}dB(s)+\frac{1}{2}{\int_{0}^{t}}\left|g_{s}\right|^{2}ds\right) $$
and
$$\tilde{\theta}_{g}^{B}(r)=\exp\left(\sqrt{-1}{\int_{r}^{t}}g_{s}^{*}dB(s) +\frac{1}{2}{\int_{r}^{t}}\left|g_{s}\right|^{2}ds\right). $$

Theorem 3.1

For jε≤t<(j+1)ε, j≤N^{∗}, and any ψ_r, r∈[jε,t], satisfying Eq. (3.2), we have
$$\begin{array}{@{}rcl@{}} \begin{aligned} &\psi_{t}(X_{i}^{n,\delta,{\epsilon}}\left(t\right))A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},t\right)-\psi_{j{\epsilon}}\left(X_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\\ =&\int_{j{\epsilon}}^{t}\nabla^{*}\psi_{r}(X_{i}^{n,\delta,{\epsilon}}\left(r\right))\sigma(X_{i}^{n,\delta,{\epsilon}}\left(r\right))A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},r\right)dB(r)\\ &+\int_{j{\epsilon}}^{t}\psi_{r}\left(X_{i}^{n,\delta,{\epsilon}}\left(r\right)\right)A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},r\right)\left[\tilde{c}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right),V^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}\left(r\right),V^{\delta}(r)\right)\right]dr\\ &+\int_{j{\epsilon}}^{t}A_{i}^{n,\delta,{\epsilon}}\left(j{\epsilon},r\right)\nabla^{*}\psi_{r}(X_{i}^{n,\delta,{\epsilon}}\left(r\right))\left[\tilde{b}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}(r),V^{n,\delta,{\epsilon}}(r)\right)-\tilde{b}^{\delta}\left(X_{i}^{n,\delta,{\epsilon}}\left(r\right),V^{\delta}(r)\right)\right]dr. \end{aligned} \end{array} $$
(3.3)

Proof

For simplicity of notation, we only consider the case j=0 and denote \(A_{i}^{n,\delta,{\epsilon }}\left (0,t\right)\) and \(X_{i}^{n,\delta,{\epsilon }}\left (t\right)\) by A(t) and X(t), respectively.

By the independence of the increments of B(t) and the fact that \({\theta _{g}^{B}}\) is a martingale, we know that, for r∈[0,t] and \(x\in \mathbb {R}^{d}\),
$$\mathbb{E}\left(\psi_{r}(x)\tilde{\theta}_{g}^{B}(r)|\mathcal{F}_{r}\right)=\psi_{r}(x)\mathbb{E}\left(\tilde{\theta}_{g}^{B}(r)|\mathcal{F}_{r}\right) =\psi_{r}(x)\mathbb{E}\left(\frac{{\theta_{g}^{B}}(t)}{{\theta_{g}^{B}}(r)}|\mathcal{F}_{r}\right)=\psi_{r}(x) $$
and
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(\! \psi_{r} \!\left(x\right)A\left(r\right){\theta_{g}^{B}}(t)|\mathcal{F}_{r}\! \right) &=& A(r){\theta_{g}^{B}}(r)\mathbb{E}\!\left(\! \psi_{r}(x)\tilde{\theta}_{g}^{B}(r)|\mathcal{F}_{r}\right)\\ &=& A(r){\theta_{g}^{B}}(r)\psi_{r}(x). \end{array} $$
(3.4)
By Itô’s formula, we have
$$\begin{array}{@{}rcl@{}} d\psi_{r}(X(r))&=&\left(-L\psi_{r}(X(r))-\tilde{L}_{V^{\delta}(r)}\psi_{r}(X(r))\right)dr\\ &&+\nabla^{*}\psi_{r}(X(r))\left[\sigma\left(X(r)\right)dB(r)+\tilde{b}^{\delta}\left(X(r),V(r)\right)dr\right]\\ &&+\frac{1}{2}\sum_{i,j=1}^{d}\partial_{x_{i},x_{j}}\psi_{r}(X(r))a_{ij}\left(X(r)\right)dr\\ &=&-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\psi_{r}(X(r))dr\\ &&+\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right)-\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]dr\\ &&+\nabla^{*}\psi_{r}(X(r))\sigma\left(X(r)\right)dB(r). \end{array} $$
Then
$$\begin{array}{@{}rcl@{}} \begin{aligned} &d\left[\psi_{r}(X(r))A(r){\theta_{g}^{B}}(r)\right]\\ =&-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\psi_{r}(X(r))dr +\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right) -\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]dr\\ &+\nabla^{*}\psi_{r}(X(r))\sigma(X(r))A(r){\theta_{g}^{B}}(r)dB(r)\\ &+\psi_{r}(X(r))A(r)\tilde{c}^{\delta}\left(X(0),V(0)\right){\theta_{g}^{B}}(r)dr+\sqrt{-1}\psi_{r}(X(r))A(r){\theta_{g}^{B}}(r)g_{r}^{*}dB(r)\\ &+\sqrt{-1}\nabla^{*}\psi_{r}\left(X(r)\right)\sigma(X(r))A(r){\theta_{g}^{B}}(r)g_{r}^{*}dr\\ =&\nabla^{*}\psi_{r}(X(r))\sigma(X(r))A(r){\theta_{g}^{B}}(r)dB(r)\\ &+\sqrt{-1}\psi_{r}(X(r))A(r){\theta_{g}^{B}}(r)g_{r}^{*}dB(r)\\ &+\sqrt{-1}\nabla^{*}\psi_{r}(X(r))\sigma(X(r))A(r){\theta_{g}^{B}}(r)g_{r}^{*}dr\\ &+\psi_{r}(X(r))A(r)\left[\tilde{c}^{\delta}\left(X(0),V(0)\right)-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{\theta_{g}^{B}}(r)dr\\ &+A(r)\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right)-\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{\theta_{g}^{B}}(r)dr. \end{aligned} \end{array} $$
By taking r=0 and r=t in (3.4), respectively, we have
$$\mathbb{E}\!\left(\!\!\psi_{t}(X(t))A(t){\theta_{g}^{B}}(t)|\mathcal{F}_{t}\right) \,-\, \mathbb{E}\!\left(\! \psi_{0}(X(0)){\theta_{g}^{B}}(t)|\mathcal{F}_{0}\right) \!\,=\, \psi_{t}(X(t))A(t){\theta_{g}^{B}}(t)-\! \psi_{0}(X(0)). $$
Taking expectations on both sides,
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left[\left(\psi_{t}(X(t))A(t)-\psi_{0}(X(0))\right){\theta_{g}^{B}}(t)\right]\\ &=&\mathbb{E}\left[\psi_{t}(X(t))A(t){\theta_{g}^{B}}(t)-\psi_{0}(X(0))\right]\\ &=&\mathbb{E}{\int_{0}^{t}}\sqrt{-1}\nabla^{*}\psi_{r}(X(r))\sigma(X(r))A(r){\theta_{g}^{B}}(r)g_{r}^{*}dr\\ &&+\mathbb{E}{\int_{0}^{t}}\psi_{r}(X(r))A(r)\left[\tilde{c}^{\delta}\left(X(0),V(0)\right)-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{\theta_{g}^{B}}(r)dr\\ &&+\mathbb{E}{\int_{0}^{t}}A(r)\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right)-\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{\theta_{g}^{B}}(r)dr\\ &\equiv&P_{1}+P_{2}+P_{3}. \end{array} $$
By Itô’s formula, we know
$$\begin{array}{@{}rcl@{}} &&{\int_{0}^{t}}\nabla^{*}\psi_{r}(X(r))\sigma\left(X(r)\right)A(r)dB(r){\theta_{g}^{B}}(t)\\ &=&{\int_{0}^{t}}\sqrt{-1}\nabla^{*}\psi_{r}(X(r))\sigma(X(r))A(r){\theta_{g}^{B}}(r)g_{r}^{*}dr+{\int_{0}^{t}}\cdots dB(r),\\ &&{\int_{0}^{t}}\psi_{r}(X(r))A(r)\left[\tilde{c}^{\delta}\left(X(0),V(0)\right)-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{dr\theta_{g}^{B}}(t)\\ &=&{\int_{0}^{t}}\psi_{r}(X(r))A(r)\left[\tilde{c}^{\delta}\left(X(0),V(0)\right)-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{\theta_{g}^{B}}(r)dr+{\int_{0}^{t}}\cdots dB(r), \end{array} $$
and
$$\begin{array}{@{}rcl@{}} &&{\int_{0}^{t}}A(r)\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right)-\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{dr\theta_{g}^{B}}(t)\\ &=&{\int_{0}^{t}}A(r)\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right)-\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{\theta_{g}^{B}}(r)dr+{\int_{0}^{t}} \cdots dB(r). \end{array} $$
Then
$$P_{1}=\mathbb{E}{\int_{0}^{t}}\nabla^{*}\psi_{r}(X(r))\sigma\left(X(r)\right)A(r)dB(r){\theta_{g}^{B}}(t), $$
$$P_{2}=\mathbb{E}{\int_{0}^{t}}\psi_{r}(X(r))A(r)\left[\tilde{c}^{\delta}\left(X(0),V(0)\right)-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{dr\theta_{g}^{B}}(t) $$
and
$$P_{3}=\mathbb{E}{\int_{0}^{t}}A(r)\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right) -\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]{dr\theta_{g}^{B}}(t). $$
By Lemma 6.20 in Xiong (2008), the following equation holds:
$$\begin{array}{@{}rcl@{}} \begin{aligned} &\psi_{t}(X(t))A(t)-\psi_{0}(X(0))\\ =&{\int_{0}^{t}}\nabla^{*}\psi_{r}(X(r))\sigma(X(r))A(r)dB(r)\\ &+{\int_{0}^{t}}\psi_{r}(X(r))A(r)\left[\tilde{c}^{\delta}\left(X(0),V(0)\right)-\tilde{c}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]dr\\ &+{\int_{0}^{t}}A(r)\nabla^{*}\psi_{r}(X(r))\left[\tilde{b}^{\delta}\left(X(r),V(r)\right)-\tilde{b}^{\delta}\left(X(r),V^{\delta}(r)\right)\right]dr. \end{aligned} \end{array} $$

Similarly, we know Eq. (3.3) holds. □

Convergence of V^{n,δ,ε}(t) to V^{δ}(t) at any point t∈[0,T]

Proposition 3.1

For j=1,2,⋯,N^{∗}, there exists a constant K, such that
$$\mathbb{E} {m_{j}^{n}}\leq Kn. $$

Proof

By the definition of \({m_{j}^{n}}\), we have
$$\mathbb{E} {m_{j}^{n}}=\mathbb{E}\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}{\xi_{j}^{l}}|\mathcal{F}_{j{\epsilon}-}\right)=\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},j{\epsilon})\right)\leq\mathbb{E} m_{j-1}^{n} e^{K{\epsilon}}. $$

By induction, we have \(\mathbb {E} {m_{j}^{n}}\leq \mathbb {E} m_{j-2}^{n}e^{2K{\epsilon }}\leq \cdots \leq \mathbb {E} {m_{0}^{n}}e^{jK{\epsilon }}\leq ne^{KT}.\)
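As an informal numerical illustration (not part of the paper), one can check the bound \(\mathbb {E} {m_{j}^{n}}\leq ne^{KT}\) by simulating the branching mechanism when every particle's interval weight is the constant A=e^{cε}, with c playing the role of K; the function and parameter names below are ours:

```python
import math
import random

def simulate_mean_count(n, c, T, eps, runs, seed=0):
    """Monte Carlo estimate of E[m_{N*}] when every particle's interval
    weight is A = exp(c * eps) and offspring counts use the
    minimal-variance rounding of A."""
    rng = random.Random(seed)
    A = math.exp(c * eps)
    k, frac = int(A), A - int(A)   # offspring is k, or k + 1 w.p. frac
    steps = round(T / eps)
    total = 0
    for _ in range(runs):
        m = n
        for _ in range(steps):
            m = sum(k + (1 if rng.random() < frac else 0) for _ in range(m))
        total += m
    return total / runs

n, c, T, eps = 200, 0.5, 1.0, 0.05
est = simulate_mean_count(n, c, T, eps, runs=50)
bound = n * math.exp(c * T)   # n * e^{KT}; here the bound is attained
```

With a constant weight A=e^{cε}, the mean count multiplies by exactly A at each step, so the estimate should fluctuate around n e^{cT} rather than fall strictly below it; the inequality in the proposition is an upper bound because \(\tilde{c}^{\delta}\leq K\).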

Proposition 3.2

For j=1,2,⋯,N^{∗}, there exists a constant K, such that
$$\mathbb{E} \left({m_{j}^{n}}\right)^{2}\leq K\left(n^{2}\vee n^{1+2{\alpha}}\right). $$

Proof

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left({m_{j}^{n}}\right)^{2}&=&\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}{\xi_{j}^{l}}\right)^{2}\\ &=&\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\left({\xi_{j}^{l}}\right)^{2}\right)+2\mathbb{E}\sum_{1\leq l_{1}<l_{2}\leq m_{j-1}^{n}}\left(\xi_{j}^{l_{1}}\xi_{j}^{l_{2}}\right)\\ &=&\mathbb{E}\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\left({\xi_{j}^{l}}\right)^{2}|\mathcal{F}_{j{\epsilon}-}\right)+2\mathbb{E}\mathbb{E}\left(\sum_{1\leq l_{1}<l_{2}\leq m_{j-1}^{n}}\left(\xi_{j}^{l_{1}}\xi_{j}^{l_{2}}\right)|\mathcal{F}_{j{\epsilon}-}\right)\\ &=&\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\left({\gamma_{j}^{l}}+A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},j{\epsilon})^{2}\right)\right)\\ &&+2\mathbb{E}\sum_{1\leq l_{1}<l_{2}\leq m_{j-1}^{n}}\left(A_{l_{1}}^{n,\delta,{\epsilon}}((j-1){\epsilon},j{\epsilon})A_{l_{2}}^{n,\delta,{\epsilon}}((j-1){\epsilon},j{\epsilon})\right)\\ &\leq&\mathbb{E}\sum_{l=1}^{m_{j-1}^{n}}\left(1+e^{2K{\epsilon}}\right)+2\mathbb{E}\sum_{1\leq l_{1}<l_{2}\leq m_{j-1}^{n}}e^{2K{\epsilon}}\\ &\leq&\mathbb{E} m_{j-1}^{n}+\mathbb{E}\left(m_{j-1}^{n}\right)^{2}e^{2K{\epsilon}}. \end{array} $$
Let \(a_{j}=\mathbb {E}\left ({m_{j}^{n}}\right)^{2}\). By induction,
$$\begin{array}{@{}rcl@{}} a_{j}&\leq& Kn+e^{2K{\epsilon}}a_{j-1}\leq Kn+e^{2K{\epsilon}}Kn+e^{4K{\epsilon}}a_{j-2}\\ &\leq&Kn\left(1+e^{2K{\epsilon}}+e^{4K{\epsilon}}+\cdots+e^{2(j-1)K{\epsilon}}\right)+e^{2jK{\epsilon}}n^{2}\\ &\leq&Kn\frac{1-e^{2K{\epsilon} j}}{1-e^{2K{\epsilon}}}+e^{2jK{\epsilon}}n^{2}\\ &\leq&Kn\frac{e^{2KT}-1}{e^{2K{\epsilon}}-1}+e^{2KT}n^{2}\\ &\leq&K\left(n^{1+2{\alpha}}\vee n^{2}\right). \end{array} $$

Lemma 3.1

For any t∈[0,T], there exists a constant K, such that
$$\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(t), V^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right)\leq K\left(n^{-2{\alpha}}\vee n^{-1}\right).$$

Proof

By the definition of ρ_1 and the Lipschitz continuity of each φ∈ℂ, we have
$$\begin{array}{@{}rcl@{}} \rho_{1}\left(V^{n,\delta,{\epsilon}}(t), V^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right)&=&\sup_{\phi\in\mathbb{C}}\left|\frac{1}{n}\sum_{l=1}^{m^{n}_{[t/{\epsilon}]}}\phi\left(X_{l}^{n,\delta,{\epsilon}}(t)\right) -\frac{1}{n}\sum_{l=1}^{m^{n}_{[t/{\epsilon}]}}\phi\left(X_{l}^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right)\right|\\ &\leq&\frac{K}{n}\sum_{l=1}^{m^{n}_{[t/{\epsilon}]}}\left|X_{l}^{n,\delta,{\epsilon}}(t)-X_{l}^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right|. \end{array} $$
Therefore
$$\begin{array}{@{}rcl@{}} \mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(t), V^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right)&\leq&\frac{K}{n^{2}}\mathbb{E}\left(\sum_{l=1}^{m^{n}_{[t/{\epsilon}]}}m^{n}_{[t/{\epsilon}]}\left|X_{l}^{n,\delta,{\epsilon}}(t)-X_{l}^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right|^{2}\right)\\ &=&\frac{K}{n^{2}}\mathbb{E}\left(\sum_{l=1}^{m^{n}_{[t/{\epsilon}]}}m^{n}_{[t/{\epsilon}]}\! \left(\!\mathbb{E}\left|X_{l}^{n,\delta,{\epsilon}}(t)-X_{l}^{n,\delta,{\epsilon}}\left({\epsilon}(t)\right)\right|^{2} \!|\mathcal{F}_{[t/{\epsilon}]{\epsilon}-}\right)\!\!\right)\\ &\leq&\frac{K{\epsilon}}{n^{2}}\left(n^{2}\vee n^{1+2{\alpha}}\right)\\ &=&K\left(n^{-2{\alpha}}\vee n^{-1}\right). \end{array} $$

Lemma 3.2

For any t∈[0,T], there exists a constant \(K_{T}\), such that
$$\mathbb{E}{\rho_{1}^{2}}\left(V^{\delta}(t), V^{\delta}\left({\epsilon}(t)\right)\right)\leq K_{T}n^{-4{\alpha}}. $$

Proof

Since
$$\begin{array}{@{}rcl@{}} \rho_{1}\left(V^{\delta}(t), V^{\delta}\left({\epsilon}(t)\right)\right)&=&\sup_{\phi\in\mathbb{C}}\left|\left<V^{\delta}(t),\phi\right>-\left<V^{\delta}\left({\epsilon}(t)\right),\phi\right>\right|\\ &=&\sup_{\phi\in\mathbb{C}}\left|\int_{{\epsilon}(t)}^{t}\left<V^{\delta}(s), \tilde{L}_{V^{\delta}(s)}\phi\right>ds+\int_{{\epsilon}(t)}^{t}\left<V^{\delta}(s),L\phi\right>ds\right|, \end{array} $$
therefore
$$\begin{array}{@{}rcl@{}} \mathbb{E}{\rho_{1}^{2}}\left(V^{\delta}(t), V^{\delta}\left({\epsilon}(t)\right)\right)\leq K_{T}\left({\epsilon}\right)^{2}=K_{T}n^{-4{\alpha}}. \end{array} $$

In the following part, we first estimate the distance between V^{n,δ,ε}(t) and V^{δ}(t) at the subinterval endpoints, i.e., the case t=Nε where N is a nonnegative integer less than or equal to N^{∗}. Then we discuss the convergence of V^{n,δ,ε}(t) to V^{δ}(t) at any point t∈[0,T].

Let ψ_s, 0≤s≤Nε, be the solution to the PDE (3.2) with t replaced by Nε. Note that ⟨V^{n,δ,ε}(Nε),ψ_{Nε}⟩−⟨V^{n,δ,ε}(0),ψ_{0}⟩ can be written as a telescopic sum
$$\sum_{j=1}^{N}\left(\left<V^{n,\delta,{\epsilon}}(j{\epsilon}), \psi_{j{\epsilon}}\right>-\left<V^{n,\delta,{\epsilon}}((j-1){\epsilon}), \psi_{(j-1){\epsilon}}\right>\right).$$
As ψ_{Nε}=ϕ, we get
$$\begin{array}{@{}rcl@{}} &&\left<V^{n,\delta,{\epsilon}}(N{\epsilon}), \phi\right>-\left<V^{n,\delta,{\epsilon}}(0), \psi_{0}\right>\\ &=&\sum_{j=1}^{N}\left(\left<V^{n,\delta,{\epsilon}}(j{\epsilon}), \psi_{j{\epsilon}}\right>-\mathbb{E}\left(\left<V^{n,\delta,{\epsilon}}(j{\epsilon}), \psi_{j{\epsilon}}\right>|\mathcal{F}_{j{\epsilon}-}\right)\right)\\ &&+\sum_{j=1}^{N}\left(\mathbb{E}\left(\left<V^{n,\delta,{\epsilon}}(j{\epsilon}), \psi_{j{\epsilon}}\right>|\mathcal{F}_{j{\epsilon}-}\right)-\left<V^{n,\delta,{\epsilon}}((j-1){\epsilon}), \psi_{(j-1){\epsilon}}\right>\right)\\ &\equiv&I_{1}+I_{2}. \end{array} $$
Since
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left(\left<V^{n,\delta,{\epsilon}}(j{\epsilon}), \psi_{j{\epsilon}}\right>|\mathcal{F}_{j{\epsilon}-}\right) =\mathbb{E}\left(\frac{1}{n}\sum_{l=1}^{{m_{j}^{n}}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)|\mathcal{F}_{j{\epsilon}-}\right)\\ &=&\mathbb{E}\left(\! \frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}{\xi_{j}^{l}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)|\mathcal{F}_{j{\epsilon}-}\!\right)\! =\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right), \end{array} $$
then
$$\begin{array}{@{}rcl@{}} I_{1}&=&\sum_{j=1}^{N}\left(\frac{1}{n}\sum_{l=1}^{{m_{j}^{n}}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right) -\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)\\ &=&\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right), \end{array} $$
and
$$\begin{array}{@{}rcl@{}} I_{2}&=&\sum_{j=1}^{N}\!\left(\!\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\!\psi_{j{\epsilon}}\!\left(\! X_{l}^{n,\delta,{\epsilon}}\!\left(j{\epsilon}\right)\!\right)\!A_{l}^{n,\delta,{\epsilon}}\left((j \,-\, 1){\epsilon},j{\epsilon}\right)\,-\, \frac{1}{n}\!\sum_{l=1}^{m_{j-1}^{n}}\!\psi_{(j-1){\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(\left(j-1\right){\epsilon}\right)\right) \right)\\ &=&\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\left(\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right) -\psi_{(j-1){\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(\left(j-1\right){\epsilon}\right)\right)\right). \end{array} $$

Lemma 3.3

There exists a constant K such that \(\mathbb {E} \left (I_{1}\right)^{2}\leq Kn^{-(1-2{\alpha })}\).

Proof

Note that
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)\\ &=&\mathbb{E}\left[\mathbb{E}\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)|\mathcal{F}_{j{\epsilon}-}\right]=0, \end{array} $$
and for any j′<j, we have
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\bigg(\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)\\ &&\qquad\times \sum_{l=1}^{m_{j^{'}-1}^{n}}\psi_{j^{'}{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j^{'}{\epsilon}\right)\right)\left(\xi_{j^{'}}^{l}-A_{l}^{n,\delta,{\epsilon}}\left((j^{'}-1){\epsilon},j^{'}{\epsilon}\right)\right)\bigg)\\ &=&\mathbb{E}\bigg[\mathbb{E}\bigg(\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)\\ &&\qquad\times \sum_{l=1}^{m_{j^{'}-1}^{n}}\psi_{j^{'}{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j^{'}{\epsilon}\right)\right)\left(\xi_{j^{'}}^{l}-A_{l}^{n,\delta,{\epsilon}}\left((j^{'}-1){\epsilon},j^{'}{\epsilon}\right)\right)\bigg)\bigg|\mathcal{F}_{j{\epsilon}-}\bigg]\\ &=&\mathbb{E}\bigg[\sum_{l=1}^{m_{j^{'}-1}^{n}}\psi_{j^{'}{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j^{'}{\epsilon}\right)\right)\left(\xi_{j^{'}}^{l}-A_{l}^{n,\delta,{\epsilon}}\left((j^{'}-1){\epsilon},j^{'}{\epsilon}\right)\right)\\ &&\qquad\times \mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right) \right)\bigg|\mathcal{F}_{j{\epsilon}-}\bigg]\\ &=&0 \end{array} $$
therefore,
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(I_{1}\right)^{2}&=&\mathbb{E}\left(\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)\right)^{2}\\ &=&\frac{1}{n^{2}}\sum_{j=1}^{N}\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)\right)^{2}\\ &=&\frac{1}{n^{2}}\sum_{j=1}^{N}\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\psi_{j{\epsilon}}^{2}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)\left({\xi_{j}^{l}}-A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)\right)^{2}\right)\\ &\leq&\frac{K}{n^{2}}\sum_{j=1}^{N}\mathbb{E}\left(m_{j-1}^{n}\right)\\ &\leq&Kn^{-(1-2{\alpha})}, \end{array} $$
where the first inequality uses \({\gamma _{j}^{l}}\leq \frac 14\) and the boundedness of ψ.

Remark 3.1

To guarantee the convergence of \(V^{n,\delta,{\epsilon}}(N{\epsilon})\) to \(V^{\delta}(N{\epsilon})\), we will only consider \(0<{\alpha}<\frac{1}{2}\) in what follows.

Theorem 3.2

For any \(t\in[0,T]\), \(\delta>0\), \({\epsilon}=n^{-2{\alpha}}\) and \(0<{\alpha}<\frac{1}{2}\), there exist constants \(K_{T}\) and \(K_{\delta,T}\), such that \(\mathbb {E}{\rho ^{2}_{1}}\left (V^{n,\delta,{\epsilon }}(t), V^{\delta }(t)\right)\leq K_{T}n^{-(1-2{\alpha })}+K_{\delta,T}n^{-2{\alpha }}\).

Proof

From Eq. (3.3), \(I_{2}\) can be decomposed into the sum of three parts:
$$\begin{array}{@{}rcl@{}} &&I_{2}\\ &=&\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\left(\psi_{j{\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(j{\epsilon}\right)\right)A_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon},j{\epsilon}\right)-\psi_{(j-1){\epsilon}}\left(X_{l}^{n,\delta,{\epsilon}}\left(\left(j-1\right){\epsilon}\right)\right)\right)\\ &=&\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}}\left\{\nabla^{*}\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)\sigma\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r){dB}_{l}(r)\right.\\ &&+\left\{\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r) \left[\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon}),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)\right.\right.\\ &&\left. \left. -\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\!\right]\right.\\ &&\left.\left.+A_{l}^{n,\delta,{\epsilon}}((j \,-\, 1){\epsilon},r)\nabla^{*}\psi_{r}\!\left(\!X_{l}^{n,\delta,{\epsilon}}(r)\!\right)\! \!\left[\!\tilde{b}^{\delta}\!\left(\!X_{l}^{n,\delta,{\epsilon}}(r),\!V^{n,\delta,{\epsilon}}(r)\!\right)\! \,-\,\tilde{b}^{\delta}\!\left(\!X_{l}^{n,\delta,{\epsilon}}(r),\!\!V^{\delta}(r)\!\right)\!\right]\right\}dr\right\}\\ &\equiv&I_{21}+I_{22}+I_{23}, \end{array} $$
where
$$I_{21}=\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}}\nabla^{*}\psi_{r}\left(\!X_{l}^{n,\delta,{\epsilon}}(r)\!\right)\! \sigma\left(\!X_{l}^{n,\delta,{\epsilon}}(r)\right)A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r){dB}_{l}(r), $$
$$\begin{array}{@{}rcl@{}} I_{22}&=&\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}}\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r)\\ &&~~~~\times\left[\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon}),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right]dr, \end{array} $$
and
$$\begin{array}{@{}rcl@{}} I_{23}&=&\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}} A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r)\nabla^{*}\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)\\ &&\qquad\times \left[\tilde{b}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{n,\delta,{\epsilon}}(r)\right)-\tilde{b}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right]dr. \end{array} $$
Naturally,
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(I_{21}\right)^{2}&=&\mathbb{E}\left(\sum_{j=1}^{N}\frac{1}{n}\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}}\nabla^{*}\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right) \sigma\left(X_{l}^{n,\delta,{\epsilon}}(r)\right) A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r){dB}_{l}(r)\right)^{2}\\ &=&\frac{1}{n^{2}}\sum_{j=1}^{N}\mathbb{E}\left(\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}}\nabla^{*}\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right) \sigma\left(X_{l}^{n,\delta,{\epsilon}}(r)\right) A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r){dB}_{l}(r)\right)^{2}\\ &=&\frac{1}{n^{2}}\sum_{j=1}^{N}\mathbb{E}\sum_{l=1}^{m_{j-1}^{n}}\int_{(j-1){\epsilon}}^{j{\epsilon}}\left|\nabla^{*}\psi_{r}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)\sigma\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)\right|^{2} \left(A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r)\right)^{2}dr\\ &\leq&\frac{K}{n^{2}}\sum_{j=1}^{N}\mathbb{E}\, m_{j-1}^{n}\,{\epsilon}\\ &\leq&K n^{-1}. \end{array} $$
Now consider
$$\begin{array}{@{}rcl@{}} &&\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right|\\ &\leq&\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right), V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)\right|\\ &&+\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}(r)\right)\right|\\ &&+\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}(r)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right|\\ &\equiv&Q_{1}+Q_{2}+Q_{3}. \end{array} $$
Then, we have
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(Q_{1}\right)^{2}&\leq&\mathbb{E} K^{2}\left|X_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)-X_{l}^{n,\delta,{\epsilon}}(r)\right|^{2}\\ &=&\mathbb{E} K^{2}\left|\tilde{b}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)\left(r-(j-1){\epsilon}\right)\right.\\ &&~~~~~~~+\left.\sigma\left(X_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)\left(B_{l}(r)-B_{l}\left((j-1){\epsilon}\right)\right)\right|^{2}\\ &\leq& K{\epsilon}=Kn^{-2{\alpha}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(Q_{2}\right)^{2}&=&\mathbb{E}\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}(r)\right)\right|^{2}\\ &=&\mathbb{E}\left|\tilde{c}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{n,\delta,{\epsilon}}_{(j-1){\epsilon}}\textit{dy}\right)\right.\\ &&~~~~~~~\left.-\tilde{c}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{n,\delta,{\epsilon}}_{r}\textit{dy}\right)\right|^{2}\\ &\leq&K\mathbb{E}\left|\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{n,\delta,{\epsilon}}_{(j-1){\epsilon}}\textit{dy}-\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{n,\delta,{\epsilon}}_{r}\textit{dy}\right|^{2}\\ &=&K\mathbb{E}\left|\frac{1}{n}\sum_{l^{'}=1}^{m_{j-1}^{n}}\!p_{\delta}\!\left(\!X_{l}^{n,\delta,{\epsilon}}\left(r\right)\,-\,X_{l^{'}}^{n,\delta,{\epsilon}} \left((j\,-\,1){\epsilon}\right)\!\right)\,-\,\frac{1}{n}\!\sum_{l^{'}=1}^{m_{j-1}^{n}}\!p_{\delta}\!\left(\!X_{l}^{n,\delta,{\epsilon}} \left(r\right)\,-\,X_{l^{'}}^{n,\delta,{\epsilon}}(r)\right)\right|^{2}\\ &\leq&\frac{K}{n^{2}}\mathbb{E}\left(\!m_{j-1}^{n}\sum_{l^{'}=1}^{m_{j-1}^{n}}\!\left|p_{\delta}\!\left(\!X_{l}^{n,\delta,{\epsilon}} \left(r\right)\,-\,X_{l^{'}}^{n,\delta,{\epsilon}}\!\left((j\,-\,1){\epsilon}\right)\!\right)\,-\,p_{\delta}\!\left(\!X_{l}^{n,\delta,{\epsilon}} \!\left(r\right)\,-\,X_{l^{'}}^{n,\delta,{\epsilon}}(r)\right)\!\right|^{2}\right)\\ &\leq&\frac{K{\epsilon}}{\delta n^{2}}\mathbb{E}(m_{j-1}^{n})^{2}\\ &\leq&K_{\delta}n^{-2{\alpha}}, \end{array} $$
and
$$\begin{array}{@{}rcl@{}} \begin{aligned} \mathbb{E}\left(Q_{3}\right)^{2}=&\mathbb{E}\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}(r)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right|^{2}\\ =&\mathbb{E}\left|\tilde{c}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{n,\delta,{\epsilon}}_{r}\textit{dy}\right)\right.\\ &~~~~~~~\left.-\tilde{c}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{\delta}_{r}\textit{dy}\right)\right|^{2}\\ \leq&K\mathbb{E}\left|\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{n,\delta,{\epsilon}}_{r}\textit{dy}-\int_{\mathbb{R}^{d}}p_{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right)-y\right)V^{\delta}_{r}\textit{dy}\right|^{2}\\ \leq&K_{\delta}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right). \end{aligned} \end{array} $$
Consequently,
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left|\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right),V^{n,\delta,{\epsilon}}\left((j-1){\epsilon}\right)\right)-\tilde{c}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right|^{2}\\ &\leq&K\mathbb{E}\left({Q_{1}^{2}}+{Q_{2}^{2}}+{Q_{3}^{2}}\right)\\ &\leq&K_{\delta}n^{-2{\alpha}}+K_{\delta}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right). \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(I_{22}\right)^{2}&\leq&\frac{{\epsilon}}{n^{2}}\sum_{j=1}^{N}N\mathbb{E}\left\{\sum_{l=1}^{m_{j-1}^{n}}m_{j-1}^{n}\int_{(j-1){\epsilon}}^{j{\epsilon}}{\psi_{r}^{2}}\left(X_{l}^{n,\delta,{\epsilon}}(r)\right)\left(A_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon},r)\right)^{2}\right.\\ &&~~~~~~~~~~~~~~~~~~\left. \left[\! \tilde{c}^{\delta} \!\left(X_{l}^{n,\delta,{\epsilon}}((j-1){\epsilon}),V^{n,\delta,{\epsilon}} \!\left((j \,-\, 1){\epsilon}\right)\right) \,-\, \tilde{c}^{\delta}\left(\! X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right]^{2}dr{\vphantom{\sum_{l=1}^{m_{j-1}^{n}}}}\!\right\}\\ &\leq&\frac{{\epsilon} K}{n^{2}}\sum_{j=1}^{N}N\mathbb{E}\left[\sum_{l=1}^{m_{j-1}^{n}}m_{j-1}^{n}\int_{(j-1){\epsilon}}^{j{\epsilon}}\left(K_{\delta}n^{-2{\alpha}}+K_{\delta}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right)dr\right]\\ &\leq&\frac{K_{\delta}}{n^{2}}\mathbb{E}\left(m_{j-1}^{n}\right)^{2}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr\\ &\leq&K_{\delta}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr. \end{array} $$
Similarly,
$$\mathbb{E}\left|\tilde{b}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}\left(r\right),V^{n,\delta,{\epsilon}}(r)\right)-\tilde{b}^{\delta}\left(X_{l}^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right)\right|^{2}\leq K_{\delta}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r),V^{\delta}(r)\right), $$
$$\mathbb{E}\left(I_{23}\right)^{2}\leq K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr. $$
Therefore,
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(I_{2}\right)^{2}&\leq&K\mathbb{E}\left(I_{21}^{2}+I_{22}^{2}+I_{23}^{2}\right)\\ &\leq&Kn^{-1}+K_{\delta}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr\\ &\leq&K_{\delta}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr. \end{array} $$
Combining this with the result of Lemma 3.3, we obtain
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left|\left<V^{n,\delta,{\epsilon}}(N{\epsilon}), \phi\right>-\left<V^{n,\delta,{\epsilon}}(0),\psi_{0}\right>\right|^{2}\\ &\leq&Kn^{-(1-2{\alpha})}+K_{\delta}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr. \end{array} $$
By our definition of \(V^{n,\delta,{\epsilon}}(0)\), we have
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}\left|\left<V^{n,\delta,{\epsilon}}(N{\epsilon}), \phi\right>-\left<V^{\delta}(N{\epsilon}), \phi\right>\right|^{2}\\ &\leq&2\mathbb{E}\left|\left<V^{n,\delta,{\epsilon}}(N{\epsilon}), \phi\right>-\left<V^{n,\delta,{\epsilon}}(0), \psi_{0}\right>\right|^{2}+2\mathbb{E}\left|\left<V^{n,\delta,{\epsilon}}(0), \psi_{0}\right>-\left<V^{\delta}(0),\psi_{0}\right>\right|^{2}\\ &\leq&Kn^{-(1-2{\alpha})}+K_{\delta}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr, \end{array} $$
and hence,
$$\begin{array}{@{}rcl@{}} &&\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(N{\epsilon}), V^{\delta}(N{\epsilon})\right)\\ &\leq&Kn^{-(1-2{\alpha})}+K_{\delta}n^{-2{\alpha}}+K_{\delta}\int_{0}^{N{\epsilon}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr. \end{array} $$
By the triangle inequality, we have
$$\begin{array}{@{}rcl@{}} \rho_{1}\!\! \left(\!V^{n,\delta,{\epsilon}}(t),\! V^{\delta}(t)\!\right)\!\leq\! \rho_{1}\!\!\left(\!V^{n,\delta,{\epsilon}}(t),\! V^{n,\delta,{\epsilon}}\!\left(\!{\epsilon}(t)\right)\!\right)\,+\,\!\rho_{1}\!\!\left(\!V^{n,\delta,{\epsilon}}\!\left(\!{\epsilon}(t)\right)\!, \!V^{\delta}\!\left({\epsilon}(t)\right)\!\right)\! +\! \rho_{1}\!\! \left(\!V^{\delta}\!\left(\!{\epsilon}(t)\!\right)\!,\! V^{\delta}(t)\right)\!. \end{array} $$
With the results from Lemmas 3.1 and 3.2,
$$\begin{array}{@{}rcl@{}} \mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(t), V^{\delta}(t)\right)\leq Kn^{-(1-2{\alpha})}+K_{\delta}n^{-2{\alpha}}+K_{\delta}{\int_{0}^{t}}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(r), V^{\delta}(r)\right)dr. \end{array} $$
By Gronwall’s inequality, we have
$$\begin{array}{@{}rcl@{}} \mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(t), V^{\delta}(t)\right)\leq K_{T}n^{-(1-2{\alpha})}+K_{\delta,T}n^{-2{\alpha}}. \end{array} $$
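The last step is the standard Gronwall argument: if a nonnegative function g satisfies \(g(t)\leq a+K\int_{0}^{t}g(r)dr\) for all \(t\in[0,T]\), then
$$g(t)\leq a\,e^{Kt}\leq a\,e^{KT}, \qquad t\in[0,T], $$
applied here with \(g(t)=\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(t),V^{\delta}(t)\right)\) and \(a=Kn^{-(1-2{\alpha})}+K_{\delta}n^{-2{\alpha}}\); the factor \(e^{K_{\delta}T}\) is absorbed into the constants \(K_{T}\) and \(K_{\delta,T}\).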

We define \(v^{n,\delta,{\epsilon }}(t,x)=\frac {1}{n}\sum _{l=1}^{{m_{j}^{n}}}p_{\delta }\left (x-X_{l}^{n,\delta,{\epsilon }}(t)\right)\), i.e., the smoothed density of \(V^{n,\delta,{\epsilon}}(t)\), as the numerical approximation of v(t,x), and \(u^{n,\delta,{\epsilon}}(t,x)=v^{n,\delta,{\epsilon}}(T-t,x)\) as the numerical approximation of u(t,x). Then, we have the following corollary:

Corollary 3.1

For any \(t\in [0,T]\) and \(0<\alpha <\frac {1}{2}\), there exist constants \(K_{\delta,T}\) and \(K_{T}\), such that
$$\mathbb{E}\mid u^{n,\delta,{\epsilon}}(t,x)-u(t,x)\mid\leq K_{\delta,T}\left(n^{-\frac{1-2{\alpha}}{2}}\vee n^{-{\alpha}}\right)+K_{T}\sqrt{\delta}. $$

Proof

We set u δ (t,x)=v δ (Tt,x). Then
$$\begin{array}{@{}rcl@{}} \begin{aligned} &\mathbb{E}\mid u^{n,\delta,{\epsilon}}(t,x)-u(t,x)\mid\\ \leq&\mathbb{E}\mid u^{n,\delta,{\epsilon}}(t,x)-u^{\delta}(t,x)\mid+\mathbb{E}\mid u^{\delta}(t,x)-u(t,x)\mid\\ =&\mathbb{E}\left(\left|\int_{\mathbb{R}^{d}}p_{\delta}(x-y)\left(V^{n,\delta,{\epsilon}}_{T-t}\textit{dy}-V^{\delta}_{T-t}\textit{dy}\right)\right|+\mid v^{\delta}(T-t,x)-v(T-t,x)\mid\right)\\ \leq&K_{\delta}\mathbb{E}\rho_{1}\left(V^{n,\delta,{\epsilon}}(T-t), V^{\delta}(T-t)\right)+K_{T}\sqrt{\delta}\\ \leq&K_{\delta,T}\left(n^{-\frac{1-2{\alpha}}{2}}\vee n^{-{\alpha}}\right)+K_{T}\sqrt{\delta}. \end{aligned} \end{array} $$
□
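As a concrete illustration of the estimator behind this corollary, here is a minimal numerical sketch of the kernel-smoothed density \(v^{n,\delta,{\epsilon}}(t,x)=\frac{1}{n}\sum_{l}p_{\delta}\left(x-X_{l}^{n,\delta,{\epsilon}}(t)\right)\). It assumes, as is standard, that \(p_{\delta}\) denotes the centered Gaussian density with covariance \(\delta I\); the particle positions below are hypothetical placeholders for \(X_{l}^{n,\delta,{\epsilon}}(t)\).

```python
import numpy as np

def smoothed_density(x, particles, n, delta):
    """Evaluate v(t, x) = (1/n) * sum_l p_delta(x - X_l(t)), where
    p_delta is the centered Gaussian density with covariance delta*I."""
    d = particles.shape[1]
    diffs = x - particles                                   # (m, d) displacements
    norm = (2.0 * np.pi * delta) ** (-d / 2.0)
    kernels = norm * np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * delta))
    return kernels.sum() / n                                # weight 1/n, not 1/m

# usage: n initial particles in dimension d = 1 (hypothetical positions)
rng = np.random.default_rng(0)
n = 1000
particles = rng.normal(size=(n, 1))
v_at_zero = smoothed_density(np.zeros(1), particles, n, delta=0.1)
```

Note the division by the initial particle number n rather than the current population size m, which matches the unnormalized weighting of \(V^{n,\delta,{\epsilon}}\).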

Numerical solution

In this section, we give a numerical solution of the FBSDE (1.1) based on the branching particle system representation and derive estimates of the convergence rate.

We also denote by \(\tilde {u}^{n,\delta,{\epsilon }}(t,x)\) another numerical approximation of u(t,x), obtained by the same numerical scheme as \(u^{n,\delta,{\epsilon}}(t,x)\). First, we apply the Euler scheme to approximate X(t) in the FBSDE (1.1). Define the numerical solution \(\tilde {X}^{n,\delta,{\epsilon }}(t)\) by:
$$ d\tilde{X}^{n,\delta,{\epsilon}}_{t}=b\left(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(t)},\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(t),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(t)}\right)\right)dt+\sigma\left(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(t)}\right){dW}_{t}. $$
(4.1)
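One step of the Euler scheme (4.1) freezes the coefficients at the last grid time \({\epsilon}(t)\). The following is a minimal one-dimensional sketch; the coefficient functions b, sigma and the field u_approx are illustrative placeholders for \(b\), \(\sigma\) and \(\tilde{u}^{n,\delta,{\epsilon}}\), not the specific objects constructed above.

```python
import numpy as np

def euler_path(x0, b, sigma, u_approx, T, eps, rng):
    """Simulate one path of dX = b(X, u(t, X)) dt + sigma(X) dW on the
    grid t_j = j * eps, with coefficients frozen at the grid times."""
    n_steps = int(round(T / eps))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for j in range(n_steps):
        t = j * eps
        dW = rng.normal(scale=np.sqrt(eps))   # Brownian increment over [t, t + eps]
        x[j + 1] = x[j] + b(x[j], u_approx(t, x[j])) * eps + sigma(x[j]) * dW
    return x

# usage with simple illustrative coefficients
path = euler_path(x0=1.0,
                  b=lambda x, u: -0.5 * x + u,
                  sigma=lambda x: 0.2,
                  u_approx=lambda t, x: 0.0,  # placeholder for u tilde
                  T=1.0, eps=0.01,
                  rng=np.random.default_rng(1))
```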

Theorem 4.1

The convergence of \(\tilde {X}^{n,\delta,{\epsilon }}(t)\) to X(t) is bounded as \(\mathbb{E}\sup_{t\leq T}\mid\tilde{X}^{n,\delta,{\epsilon}}_{t}-X_{t}\mid^{2}\leq K_{\delta,T}\left(n^{-(1-2{\alpha})}\vee n^{-2{\alpha}}\right)+K_{T}\delta\).

Proof

By Eqs. (1.1) and (4.1), we have
$$\begin{array}{@{}rcl@{}} \tilde{X}^{n,\delta,{\epsilon}}_{t}-X_{t}&=&{\int_{0}^{t}}\left(b\left(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)},\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(X_{s},u(s,X_{s})\right)\right)ds\\ &&+{\int_{0}^{t}}\left(\sigma(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)})-\sigma(X_{s})\right){dW}_{s}. \end{array} $$
Applying the Burkholder-Davis-Gundy and Hölder’s inequalities, we obtain
$$\begin{array}{@{}rcl@{}} f_{\epsilon}(t)&\equiv&\mathbb{E}\sup_{s\leq t}\mid\tilde{X}^{n,\delta,{\epsilon}}_{s}-X_{s}\mid^{2}\\ &\leq&2T{\int_{0}^{t}}\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)},\tilde{u}^{n,\delta,{\epsilon}} \left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(X_{s},u(s,X_{s})\right)\mid^{2}ds\\ &&+8{\int_{0}^{t}}\mathbb{E}\mid\sigma(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)})-\sigma(X_{s})\mid^{2}ds. \end{array} $$
(4.2)
It follows from Eq. (4.1) that
$$\begin{array}{@{}rcl@{}} \mathbb{E}\mid\tilde{X}^{n,\delta,{\epsilon}}_{t}-\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(t)}\mid^{2}&\leq & 2K^{2}\left(t-{\epsilon}(t)\right)^{2}+2K^{2}\mathbb{E}\left(\mid W_{t}-W_{{\epsilon}(t)}\mid^{2}\right)\\ &\leq& K{\epsilon}, \end{array} $$
and hence,
$$\begin{array}{@{}rcl@{}} \begin{aligned} &\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)},\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(X_{s},u(s,X_{s})\right)\mid^{2}\\ \leq&5\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)},\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)\mid^{2}\\ &+5\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},u^{\delta}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)\mid^{2}\\ &+5\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},u^{\delta}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},u\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)\mid^{2}\\ &+5\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},u\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\right)-b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},u\left(s,X_{s}\right)\right)\mid^{2}\\ &+5\mathbb{E}\mid b\left(\tilde{X}^{n,\delta,{\epsilon}}_{s},u\left(s,X_{s}\right)\right)-b\left(X_{s},u\left(s,X_{s}\right)\right)\mid^{2}\\ \leq&5K{\epsilon}+5K\mathbb{E}\mid\tilde{u}^{n,\delta,{\epsilon}}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)-u^{\delta}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\mid^{2}\\ &+5K\mathbb{E}\mid 
u^{\delta}\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)-u\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)\mid^{2}\\ &+5K\mathbb{E}\mid u\left({\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{{\epsilon}(s)}\right)-u\left(s,X_{s}\right)\mid^{2}+5{Kf}_{\epsilon}(s)\\ \leq&K{\epsilon}+K\mathbb{E}\mid\tilde{v}^{n,\delta,{\epsilon}}\left(T-{\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{T-{\epsilon}(s)}\right)-v^{\delta}\left(T-{\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{T-{\epsilon}(s)}\right)\mid^{2}\\ &+K\mathbb{E}\mid v^{\delta}\left(T-{\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{T-{\epsilon}(s)}\right)-v\left(T-{\epsilon}(s),\tilde{X}^{n,\delta,{\epsilon}}_{T-{\epsilon}(s)}\right)\mid^{2}\\ &+K{\epsilon}^{2}+{Kf}_{\epsilon}(s)+K{\epsilon}+{Kf}_{\epsilon}(s) \end{aligned} \end{array} $$
$$\begin{array}{@{}rcl@{}} \begin{aligned} \leq&K\mathbb{E}\left|\int_{\mathbb{R}^{d}}p_{\delta}\left(\tilde{X}^{n,\delta,{\epsilon}}_{T-{\epsilon}(s)}-y\right)\left(V^{n,\delta,{\epsilon}}_{T-{\epsilon}(s)}\textit{dy}-V^{\delta}_{T-{\epsilon}(s)}\textit{dy}\right)\right|^{2}\\ &+K\int_{\mathbb{R}^{d}}\mid v^{\delta}\left(T-{\epsilon}(s),x\right)-v\left(T-{\epsilon}(s),x\right)\mid^{2}J_{T-{\epsilon}(s)}^{n,\delta,{\epsilon}}(x)dx\\ &+K{\epsilon}+{Kf}_{\epsilon}(s)\\ \leq&K_{\delta}\mathbb{E}{\rho_{1}^{2}}\left(V^{n,\delta,{\epsilon}}(T-{\epsilon}(s)), V^{\delta}(T-{\epsilon}(s))\right)+K_{T}\delta+K{\epsilon}+{Kf}_{\epsilon}(s)\\ \leq&K_{\delta,T}\left(n^{-(1-2{\alpha})}\vee n^{-2{\alpha}}\right)+K_{T}\delta+{Kf}_{\epsilon}(s). \end{aligned} \end{array} $$
The other term in Eq. (4.2) can be estimated similarly. Therefore,
$$\begin{array}{@{}rcl@{}} \begin{aligned} f_{\epsilon}(t)\leq&2T{\int_{0}^{t}}\left({Kf}_{\epsilon}(s)+K_{\delta,T}\left(n^{-(1-2{\alpha})}\vee n^{-2{\alpha}}\right)+K_{T}\delta\right)ds+8{\int_{0}^{t}}\left(K{\epsilon}+{Kf}_{\epsilon}(s)\right)ds\\ \leq&K_{\delta,T}\left(n^{-(1-2{\alpha})}\vee n^{-2{\alpha}}\right)+K_{T}\delta+K_{T}{\int_{0}^{t}}f_{\epsilon}(s)ds. \end{aligned} \end{array} $$
By Gronwall’s inequality, we have
$$\mathbb{E}\sup_{t\leq T}\mid\tilde{X}^{n,\delta,{\epsilon}}_{t}-X_{t}\mid^{2}\leq K_{\delta,T}\left(n^{-(1-2{\alpha})}\vee n^{-2{\alpha}}\right)+K_{T}\delta. $$
□

Then, by the result of the four step scheme, we define \(Y^{n,\delta,{\epsilon }}(t)=u^{n,\delta,{\epsilon }}(t,\tilde {X}^{n,\delta,{\epsilon }}_{t})\) as the numerical solution of Y(t) in the FBSDE (1.1). We have the following theorem:

Theorem 4.2

The convergence of \(Y^{n,\delta,{\epsilon}}(t)\) to Y(t) is bounded by \(K_{\delta,T}\left (n^{-\frac {1-2{\alpha }}{2}}\vee n^{-{\alpha }}\right)+K_{T}\sqrt {\delta }\).

Proof

$$\begin{array}{@{}rcl@{}} \begin{aligned} \mathbb{E}\mid{Y}^{n,\delta,{\epsilon}}_{t}-Y_{t}\mid=&\mathbb{E}\mid u^{n,\delta,{\epsilon}}\left(t,\tilde{X}^{n,\delta,{\epsilon}}_{t}\right)-u(t,X_{t})\mid\\ \leq&\mathbb{E}\mid u^{n,\delta,{\epsilon}}\left(t,\tilde{X}^{n,\delta,{\epsilon}}_{t}\right)-u\left(t,\tilde{X}^{n,\delta,{\epsilon}}_{t}\right)\mid+\mathbb{E}\mid u\left(t,\tilde{X}^{n,\delta,{\epsilon}}_{t}\right)-u(t,X_{t})\mid\\ \leq&\mathbb{E}\int_{\mathbb{R}^{d}}\mid v^{n,\delta,{\epsilon}}(T-t,x)-v(T-t,x)\mid J_{T-t}^{n,\delta,{\epsilon}}(x)dx+K\mathbb{E}\mid\tilde{X}^{n,\delta,{\epsilon}}_{t}-X_{t}\mid\\ \leq&\mathbb{E}\int_{\mathbb{R}^{d}}\left(\mid v^{n,\delta,{\epsilon}}(T-t,x)-v^{\delta}(T-t,x)\mid\right.\\ &~~~~~~~~~+\left.\mid v^{\delta}(T-t,x)-v(T-t,x)\mid\right)J_{T-t}^{n,\delta,{\epsilon}}(x)dx\\ &+K\mathbb{E}\mid\tilde{X}^{n,\delta,{\epsilon}}_{t}-X_{t}\mid\\ \leq&K_{\delta,T}\left(n^{-\frac{1-2{\alpha}}{2}}\vee n^{-{\alpha}}\right)+K_{T}\sqrt{\delta}+K\mathbb{E}\mid\tilde{X}^{n,\delta,{\epsilon}}_{t}-X_{t}\mid\\ \leq&K_{\delta,T}\left(n^{-\frac{1-2{\alpha}}{2}}\vee n^{-{\alpha}}\right)+K_{T}\sqrt{\delta}+K_{\delta,T}\left(n^{-\frac{1-2{\alpha}}{2}}\vee n^{-{\alpha}}\right)+K_{T}\sqrt{\delta}\\ \leq&K_{\delta,T}\left(n^{-\frac{1-2{\alpha}}{2}}\vee n^{-{\alpha}}\right)+K_{T}\sqrt{\delta}, \end{aligned} \end{array} $$

where \(J_{T-t}^{n,\delta,{\epsilon }}(x)\) is the probability density of \(\tilde {X}^{n,\delta,{\epsilon }}_{t}\). □
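Putting the pieces together: the numerical solution \(Y^{n,\delta,{\epsilon}}(t)=u^{n,\delta,{\epsilon}}(t,\tilde{X}^{n,\delta,{\epsilon}}_{t})\) simply evaluates the particle-smoothed field at the Euler state. The following is a minimal sketch, again taking \(p_{\delta}\) to be the Gaussian density with covariance \(\delta I\); the helper particles_at, which returns the particle positions at a given time, is a hypothetical stand-in for the branching particle system constructed earlier.

```python
import numpy as np

def y_numerical(t, x_tilde, particles_at, n, delta, T):
    """Y(t) = u(t, x) = v(T - t, x), where v is the Gaussian-smoothed
    particle density and particles_at(s) returns the (m, d) array of
    particle positions at time s."""
    particles = particles_at(T - t)
    d = particles.shape[1]
    diffs = x_tilde - particles
    norm = (2.0 * np.pi * delta) ** (-d / 2.0)
    kernels = norm * np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * delta))
    return kernels.sum() / n
```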

Conclusion

In this paper, we investigated a new numerical scheme for a class of coupled forward-backward stochastic differential equations. Combining the four step scheme and the Euler scheme, we defined a new numerical solution of the FBSDE system by branching particle systems in a random environment and proved the corresponding convergence results. To the best of our knowledge, this is the first work on particle system representations for the numerical approximation of FBSDE systems.

Declarations

Acknowledgments

We would like to thank an anonymous referee for his/her constructive suggestions, which led to the improvement of this article.

Liu acknowledges research support by the National Natural Science Foundation of China (NSFC 11501164). Xiong acknowledges research support by Macao Science and Technology Fund FDCT 076/2012/A3 and the Multi-Year Research Grants of the University of Macau Nos. MYRG2014-00015-FST and MYRG2014-00034-FST.

Authors’ contributions

HL carried out the earlier derivations of the results. DC made substantial changes to the original derivations and obtained many new proofs. JX posed the problems and supervised the whole process. The manuscript was handled by JX. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics, Shandong University
(2)
Department of Mathematics, Hebei Normal University
(3)
Department of Mathematics, University of Macau, Avenida da Universidade

References

  1. Bally, V: Approximation scheme for solutions of BSDE. In: Backward Stochastic Differential Equations (Paris, 1995–1996), Pitman Res. Notes Math. Ser., Vol. 364. Longman, Harlow (1997).
  2. Bouchard, B, Touzi, N: Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stoch. Process. Appl. 111, 175–206 (2004).
  3. Briand, P, Delyon, B, Mémin, J: Donsker-type theorem for BSDEs. Electron. Comm. Probab. 6, 1–14 (2001).
  4. Chevance, D: Numerical methods for backward stochastic differential equations. In: Numerical Methods in Finance, Publ. Newton Inst. Cambridge Univ. Press, Cambridge (1997).
  5. Crisan, D: Numerical methods for solving the stochastic filtering problem. In: Numerical Methods and Stochastics (Toronto, ON, 1999), Fields Inst. Commun., 34. Amer. Math. Soc., Providence, RI (2002).
  6. Crisan, D, Lyons, T: Nonlinear filtering and measure-valued processes. Probab. Theory Related Fields 109, 217–244 (1997).
  7. Crisan, D, Xiong, J: Numerical solution for a class of SPDEs over bounded domains. Stochastics 86(3), 450–472 (2014).
  8. Cvitanić, J, Ma, J: Hedging options for a large investor and forward-backward SDE's. Ann. Appl. Probab. 6, 370–398 (1996).
  9. Cvitanić, J, Zhang, J: The steepest descent method for forward-backward SDEs. Electron. J. Probab. 10, 1468–1495 (2005).
  10. Del Moral, P: Non-linear filtering: interacting particle resolution. Markov Process. Related Fields 2(4), 555–581 (1996).
  11. Delarue, F, Menozzi, S: A forward-backward stochastic algorithm for quasi-linear PDEs. Ann. Appl. Probab. 16, 140–184 (2006).
  12. Douglas, J Jr., Ma, J, Protter, P: Numerical methods for forward-backward stochastic differential equations. Ann. Appl. Probab. 6, 940–968 (1996).
  13. El Karoui, N, Peng, S, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7, 1–71 (1997).
  14. Föllmer, H, Schied, A: Convex measures of risk and trading constraints. Finance Stoch. 6, 429–447 (2002).
  15. Friedman, A: Stochastic Differential Equations and Applications, Vol. 1. Probability and Mathematical Statistics, Vol. 28. Academic Press, New York-London (1975).
  16. Henry-Labordère, P, Tan, X, Touzi, N: A numerical algorithm for a class of BSDEs via the branching process. Stochastic Process. Appl. 124(2), 1112–1140 (2014).
  17. Kurtz, T, Xiong, J: Particle representations for a class of nonlinear SPDEs. Stochastic Process. Appl. 83, 103–126 (1999).
  18. Kurtz, T, Xiong, J: Numerical solutions for a class of SPDEs with application to filtering. In: Stochastics in Finite and Infinite Dimensions: In Honor of Gopinath Kallianpur, Trends Math. Birkhäuser Boston, Boston, MA (2001).
  19. Liu, H, Xiong, J: A branching particle system approximation for nonlinear stochastic filtering. Sci. China Math. 56, 1521–1541 (2013).
  20. Ma, J, Protter, P, San Martin, J, Torres, S: Numerical method for backward stochastic differential equations. Ann. Appl. Probab. 12, 302–316 (2002).
  21. Ma, J, Protter, P, Yong, J: Solving forward-backward stochastic differential equations explicitly - a four step scheme. Probab. Theory Related Fields 98(3), 339–359 (1994).
  22. Ma, J, Shen, J, Zhao, Y: On numerical approximations of forward-backward stochastic differential equations. SIAM J. Numer. Anal. 46, 2636–2661 (2008).
  23. Ma, J, Yong, J: Forward-Backward Stochastic Differential Equations and Their Applications. Springer-Verlag, Berlin (1999).
  24. Ma, J, Zhang, J: Representations and regularities for solutions to BSDEs with reflections. Stochastic Process. Appl. 115, 539–569 (2005).
  25. Milstein, GN, Tretyakov, MV: Numerical algorithms for forward-backward stochastic differential equations. SIAM J. Sci. Comput. 28, 561–582 (2006).
  26. Pardoux, E, Peng, S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55–61 (1990).
  27. Peng, S: Backward stochastic differential equation, nonlinear expectation and their applications. In: Proceedings of the International Congress of Mathematicians, Volume I. Hindustan Book Agency, New Delhi (2010).
  28. Rosazza Gianin, E: Risk measures via g-expectations. Insurance Math. Econom. 39, 19–34 (2006).
  29. Xiong, J: An Introduction to Stochastic Filtering Theory. Oxford Graduate Texts in Mathematics, 18. Oxford University Press, Oxford (2008).
  30. Xiong, J, Zhou, X: Mean-variance portfolio selection under partial information. SIAM J. Control Optim. 46, 156–175 (2007).
  31. Yong, J, Zhou, X: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer-Verlag, New York (1999).
  32. Zhang, J: A numerical scheme for BSDEs. Ann. Appl. Probab. 14, 459–488 (2004).

Copyright

© The Author(s) 2016