Open Access

Characterization of optimal feedback for stochastic linear quadratic control problems

Probability, Uncertainty and Quantitative Risk (2017) 2:11

https://doi.org/10.1186/s41546-017-0022-7

Received: 21 June 2016

Accepted: 1 September 2017

Published: 27 September 2017

Abstract

One of the fundamental issues in Control Theory is to design feedback controls. It is well known that the purpose of introducing Riccati equations in the study of deterministic linear quadratic control problems is exactly to construct the desired feedbacks. To date, the same problem in the stochastic setting is only partially understood. In this paper, we establish the equivalence between the existence of optimal feedback controls for stochastic linear quadratic control problems with random coefficients and the solvability of the corresponding backward stochastic Riccati equations in a suitable sense. We also give a counterexample showing the nonexistence of feedback controls for a solvable stochastic linear quadratic control problem. This is a new phenomenon in the stochastic setting, significantly different from its deterministic counterpart.

Keywords

Stochastic linear quadratic problem · Feedback control · Backward stochastic Riccati equation · Backward stochastic differential equation

Mathematics Subject Classification

Primary 93E20; Secondary 93B52, 93C05, 60H10

Introduction

Let T>0 and \((\Omega,{\mathcal {F}},{\mathbb {F}},{\mathbb {P}})\) be a complete filtered probability space with \({\mathbb {F}}=\{{\mathcal {F}}_{t}\}_{t\in [0,T]}\) being the natural filtration generated by a one-dimensional standard Brownian motion \(\{W(t)\}_{t\in [0,T]}\).

For any \(k\in {\mathbb {N}}\), \(t\in [0,T]\) and \(r\in [1,\infty)\), denote by \(L_{{\mathcal {F}}_{t}}^{r}(\Omega ;{\mathbb {R}}^{k})\) the Banach space of all \({\mathcal {F}}_{t}\)-measurable random variables \(\xi :\Omega \to {\mathbb {R}}^{k}\) such that \(\mathbb {E}|\xi |_{{\mathbb {R}}^{k}}^{r} < \infty \), with the canonical norm. Denote by \(L^{r}_{{\mathbb {F}}}(\Omega ;C([t,T];{\mathbb {R}}^{k}))\) the Banach space of all \({\mathbb {R}}^{k}\)-valued, \({\mathbb {F}}\)-adapted, continuous stochastic processes ϕ(·), with the following norm
$$|\phi(\cdot)|_{L^{r}_{{\mathbb{F}}}(\Omega;C([t,T];{\mathbb{R}}^{k}))}\triangleq \left({\mathbb{E}}\max_{\tau\in[t,T]}|\phi(\tau)|_{{\mathbb{R}}^{k}}^{r}\right)^{1/r}. $$
Fix any \(r_{1},r_{2},r_{3},r_{4}\in [1,\infty)\). Put
$$\begin{aligned} L^{r_{1}}_{\mathbb{F}}(\Omega;L^{r_{2}}(t,T;{\mathbb{R}}^{k}))&=\left\{\varphi:(t,T)\times\Omega\to {\mathbb{R}}^{k}\;\Big|\;\varphi(\cdot)\ \text{is }{\mathbb{F}}\text{-adapted and }{\mathbb{E}}\left(\int_{t}^{T}|\varphi(\tau)|_{{\mathbb{R}}^{k}}^{r_{2}}d\tau\right)^{\frac{r_{1}}{r_{2}}}<\infty\right\},\\ L^{r_{2}}_{\mathbb{F}}(t,T;L^{r_{1}}(\Omega;{\mathbb{R}}^{k}))&=\left\{\varphi:(t,T)\times\Omega\to {\mathbb{R}}^{k}\;\Big|\;\varphi(\cdot)\ \text{is }{\mathbb{F}}\text{-adapted and } \int_{t}^{T}\left({\mathbb{E}}|\varphi(\tau)|_{{\mathbb{R}}^{k}}^{r_{1}}\right)^{\frac{r_{2}}{r_{1}}}d\tau<\infty\right\}. \end{aligned} $$

Both \(L^{r_{1}}_{\mathbb {F}}(\Omega ;L^{r_{2}}(t,T;{\mathbb {R}}^{k}))\) and \(L^{r_{2}}_{\mathbb {F}}(t,T;L^{r_{1}}(\Omega ;{\mathbb {R}}^{k}))\) are Banach spaces with the canonical norms. In a similar way, we may define \(L^{\infty }_{\mathbb {F}}(\Omega ;L^{r_{2}}(t,T;{\mathbb {R}}^{k}))\), \(L^{r_{1}}_{\mathbb {F}}(\Omega ;L^{\infty }(t,T;{\mathbb {R}}^{k}))\) and \(L^{\infty }_{\mathbb {F}}(\Omega ;L^{\infty }(t,T;{\mathbb {R}}^{k}))\). For \(q\in [1,\infty ]\), we simply denote \(L^{q}_{\mathbb {F}}(\Omega ;L^{q}(t,T;{\mathbb {R}}^{k}))\) by \(L^{q}_{\mathbb {F}}(t,T;{\mathbb {R}}^{k})\). Denote by \({\mathcal {S}}({\mathbb {R}}^{k})\) the set of all \(k\times k\) symmetric matrices and by \(I_{k}\) the \(k\times k\) identity matrix.

For any \(n,m\in {\mathbb {N}}\), and \((s,\eta)\in [0,T)\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\), let us consider the following controlled linear stochastic differential equation:
$$ \left\{ \begin{array}{ll} dx(r)=\left(A(r) x(r) + B(r) u(r)\right)dr+ \left(C(r) x(r)+D(r) u(r)\right)dW(r) &\text{in }[s,T],\\ x(s)\;\;=\eta, \end{array} \right. $$
(1)
with the following quadratic cost functional
$$ \begin{aligned}{\mathcal{J}}(s,\eta;u(\cdot)) =\frac{1}{2}{\mathbb{E}}\left[ \int_{s}^{T} \left(\left\langle Q(r) x(r),x(r)\right\rangle_{{\mathbb{R}}^{n}} +\left\langle R(r) u(r),u(r)\right\rangle_{{\mathbb{R}}^{m}}\right)dr + \langle Gx(T),x(T)\rangle_{{\mathbb{R}}^{n}}\right]. \end{aligned} $$
(2)

In (1)–(2), \(u(\cdot)\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\) (the space of admissible controls) is the control variable, x(·) is the state variable, and the stochastic processes A(·), B(·), C(·), D(·), Q(·), R(·) and the random variable G satisfy suitable assumptions to be given later (see (7) in the next section) such that Eq. (1) admits a unique solution \(x(\cdot ;s,\eta,u(\cdot))\in L^{2}_{\mathbb {F}}(\Omega ;C([s,T];{\mathbb {R}}^{n}))\) and (2) is well-defined. In what follows, to simplify the notation, the time variable t is sometimes suppressed in A, B, C, D, etc.
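Before proceeding, it may help to see (1)–(2) numerically in the simplest scalar case. The following sketch simulates (1) by an Euler–Maruyama scheme and estimates the cost (2) by Monte Carlo; every constant below (the coefficients, horizon, discretization, and the linear feedback gains being compared) is a hypothetical choice for illustration only.

```python
import numpy as np

# Illustrative scalar instance of (1)-(2); all constants are hypothetical.
A, B, C, D = -1.0, 1.0, 0.2, 0.1   # drift and diffusion coefficients
Q, R, G = 1.0, 1.0, 1.0            # running and terminal cost weights
T, N, M = 1.0, 200, 2000           # horizon, time steps, sample paths
dt = T / N
eta = 1.0                          # initial state x(s) = eta, with s = 0

def cost(u_gain, seed=0):
    """Euler-Maruyama simulation of (1) under the linear feedback
    u = u_gain * x, returning a Monte Carlo estimate of J(0, eta; u)."""
    rng = np.random.default_rng(seed)
    x = np.full(M, eta)
    J = np.zeros(M)
    for _ in range(N):
        u = u_gain * x
        J += 0.5 * (Q * x**2 + R * u**2) * dt          # running cost in (2)
        dW = rng.normal(0.0, np.sqrt(dt), size=M)       # Brownian increment
        x = x + (A * x + B * u) * dt + (C * x + D * u) * dW
    J += 0.5 * G * x**2                                 # terminal cost in (2)
    return J.mean()

print(cost(0.0), cost(-0.5))
```

Under these illustrative constants, a moderate negative feedback gain lowers the estimated cost relative to the zero control, which is the behavior the optimal feedback operators studied below formalize.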

In this paper, we are concerned with the following stochastic linear quadratic control problem (SLQ for short):

Problem (SLQ). For each \((s,\eta)\in [0,T]\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\), find a \(\bar u(\cdot)\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\) so that
$$ {\mathcal{J}}\left(s,\eta;\bar u(\cdot)\right)=\inf_{u(\cdot)\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m})}{\mathcal{J}}\left(s,\eta;u(\cdot)\right). $$
(3)
SLQs have been extensively studied in the literature, for which we refer the readers to (Ait Rami et al. 2001; Athans 1971; Bismut 1976; 1978; Chen et al. 1998; Tang 2003; Wonham 1968; Yong and Zhou 2000) and the rich references therein. As in the deterministic setting (Kalman 1960; Wonham 1985; Yong and Lou 2006), Riccati equations (and their variants) are fundamental tools for studying SLQs. Nevertheless, for stochastic problems one usually has to consider backward stochastic Riccati equations. For our Problem (SLQ), the desired backward stochastic Riccati equation takes the following form:
$$ \left\{\begin{array}{ll} \!dP \,=\,-\left(\!PA \,+\, A^{\top} P \,+\, \Lambda C \,+\, C^{\top} \Lambda \,+\, C^{\top} PC \,+\, Q \,-\, L^{\top} \!K^{\dag} \!L\! \right)\!dt + \Lambda dW(t) &\text{in }[0,T],\\ \!P(T)=G, \end{array} \right. $$
(4)
where \(A^{\top}\) stands for the transpose of \(A\), and
$$ K\triangleq R+D^{\top} PD,\qquad L\triangleq B^{\top} P+D^{\top} (PC+\Lambda), $$
(5)

and \(K^{\dag}\) denotes the Moore–Penrose pseudo-inverse of \(K\).

To the authors’ best knowledge, (Wonham 1968) is the first work which employed Riccati equations to study SLQs. After (Wonham 1968), Riccati equations were systematically applied to study SLQs (e.g., (Athans 1971; Bensoussan 1981; Bismut 1978; Davis 1977; Yong and Zhou 2000)), and the well-posedness of such equations was studied in the literature (see (Tang 2003; Yong and Zhou 2000) and the references cited therein).

In the early works on SLQs (e.g., (Chen et al. 1998; Wonham 1968; Yong and Zhou 2000)), the coefficients A, B, C, D, Q, R, G appearing in the control system (1) and the cost functional (2) were assumed to be deterministic. In this case, the corresponding Riccati Eq. (4) is deterministic (i.e., Λ≡0 in (4)) as well.

To the best of our knowledge, (Bismut 1976) is the first work that addressed SLQs with random coefficients. In (Bismut 1976; 1978), the author formally derived the Eq. (4). However, at that time only some special and simple cases could be solved. Later, (Peng 1992) proved the well-posedness of (4) under the condition that D=0, by means of Bellman’s principle of quasi-linearization and a monotone convergence result for symmetric matrices. This condition was dropped in (Tang 2003), in which it was proved that (4) admits a unique solution (P,Λ) in a suitable space under the assumptions that Q≥0, G≥0 and R is uniformly positive definite.

In Control Theory, one of the fundamental issues is to find feedback controls, which are particularly important in practical applications. It is well-known that, in the deterministic case, the purpose of introducing Riccati equations into the study of Control Theory (e.g., (Kalman 1960; Wonham 1985; Yong and Lou 2006)) is exactly to design feedback controls for linear quadratic control problems (LQs for short). More precisely, under some mild assumptions, one can show that the unique solvability of deterministic LQs is equivalent to that of the corresponding Riccati equations, via which one can construct the desired optimal feedback controls. Unfortunately, the same problem is only partially understood in the stochastic setting, for instance in the case that all of the coefficients in (1)–(2) are deterministic (Ait Rami et al. 2001; Sun and Yong 2014), or in the case that the diffusion term in (1) is control-independent, i.e., D≡0 (Peng 1992). However, for the general case, we shall explain in Remark 1.2 below that the solution (P,Λ) to (4) found in (Tang 2003) is not regular enough to serve for the design of feedback controls for Problem (SLQ).

Because of the difficulty mentioned above, it is quite natural to ask the following question: Is it possible to link the existence of optimal feedback controls (rather than the mere solvability) for Problem (SLQ) directly to the solvability of Eq. (4)? Clearly, from the viewpoint of applications, it is more desirable to study the existence of feedback controls for SLQs than the solvability of the same problems.

The main purpose of this work is to give an affirmative answer to the above question under sharp assumptions on the coefficients appearing in (1)–(2). For this purpose, let us introduce below the notion of optimal feedback operator for Problem (SLQ).

Definition 1.1

A stochastic process \(\Theta (\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) is called an optimal feedback operator for Problem (SLQ) on [0,T] if, for all \((s,\eta)\in [0,T)\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\) and \(u(\cdot)\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\), it holds that
$$ {\mathcal{J}}(s,\eta;\Theta(\cdot)\bar x(\cdot))\leq {\mathcal{J}}(s,\eta;u(\cdot)), $$
(6)

where \(\bar x(\cdot)=x(\cdot \,;s,\eta, \Theta (\cdot)\bar x(\cdot))\).

Remark 1.1

In Definition 1.1, Θ(·) is required to be independent of the initial state \(\eta \in L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\). For a fixed pair \((s,\eta)\in [0,T)\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\), the inequality (6) implies that the control
$$\bar u(\cdot)\equiv \Theta(\cdot)\bar x(\cdot)\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m}) $$
is optimal for Problem (SLQ). Therefore, for Problem (SLQ), the existence of an optimal feedback operator on [0,T] implies the existence of an optimal control for any pair \((s,\eta)\in [0,T)\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\).

Remark 1.2

Under some assumptions, it was shown in (Tang 2003) that the equation (4) admits a unique solution \((P,\Lambda)\in L^{\infty }_{\mathbb {F}}(0,T;{\mathcal {S}}({\mathbb {R}}^{n}))\times L^{p}_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) for any given \(p\in [1,\infty)\). Nevertheless, the approach in (Tang 2003) does not produce the sharp regularity \(\Theta \in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) (but rather \(\Theta \in L^{p}_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) for any \(p\in [1,\infty)\)). Although the author showed in (Tang 2003) that if \(\bar x\) is an optimal state, then \(\Theta \bar x \in L^{2}_{\mathbb {F}}(0,T;{\mathbb {R}}^{m})\) and hence it is the desired optimal control, this kind of control strategy is not robust, even with respect to very small perturbations. Indeed, suppose that the observation of the state contains an error \(\delta x \in L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n}))\) (the solution space of (1) with s=0) with \(|\delta x|_{L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n}))}=\varepsilon >0\) for some small ε. Then, by the well-posedness result in (Tang 2003), one cannot conclude that \(\Theta (\bar x + \delta x)\) is an admissible control. Thus, the Θ given in (Tang 2003) is not a “qualified” feedback because it is not robust with respect to small perturbations. What if Θ is assumed to have a good sign or to be monotone (in a suitable sense)? Even in such a special case things do not improve: since we have no information about δx other than its membership in \(L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n}))\), the integrability of Θδx (with respect to the sample point ω) cannot be improved, and therefore one still cannot conclude that \(\Theta (\bar x + \delta x)\) is an admissible control.

In a recent paper (Tang 2015), the well-posedness result in (Tang 2003) was slightly improved and it was shown that the solution (P,Λ) to (4) enjoys the BMO-martingale property. However, this does not help to produce the boundedness of Θ with respect to the sample point ω, either. Actually, we shall give a counterexample (i.e., Example 6.2) showing that such a boundedness result is not guaranteed without further assumptions.

Let us recall that the main motivation to introduce feedback controls is to keep the corresponding control strategy robust with respect to (small) perturbations. Hence, the well-posedness results in (Tang 2003; 2015) are not enough to solve our Problem (SLQ). Nevertheless, for the case that D≡0, the optimal feedback operator in (10) (in the next section) specializes to
$$\Theta(\cdot)=-K(\cdot)^{\dag}B(\cdot)^{\top} P(\cdot) + \left(I_{m} - K(\cdot)^{\dag}K(\cdot)\right)\theta, $$
which is independent of Λ, and therefore the result in (Tang 2003) (or that in Peng (1992)) is enough for this special case.
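To make this concrete, consider the simplest instance: scalar state and control, deterministic constant coefficients, and D = 0, so that Λ≡0, K = R, L = BᵀP, and (4) reduces to the ODE \(-\dot P = 2AP + C^{2}P + Q - B^{2}P^{2}/R\) with P(T) = G, while the feedback becomes Θ = −R⁻¹BP. The sketch below integrates this ODE backwards; the constants are chosen purely for illustration so that the closed-form solution is P(t) = 1/(1+T−t).

```python
# Illustrative scalar Riccati ODE (deterministic coefficients, D = 0):
#   -dP/dt = 2*A*P + C**2*P + Q - (B**2/R)*P**2,  P(T) = G,
# with constants chosen so that P(t) = 1/(1 + T - t) exactly.
A, B, C, Q, R, G, T = 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0
N = 100_000
dt = T / N

P = G
for _ in range(N):                      # backward Euler sweep from t = T to t = 0
    dPdt = -(2 * A * P + C**2 * P + Q - (B**2 / R) * P**2)
    P -= dPdt * dt                      # stepping backward in time

Theta0 = -(B / R) * P                   # feedback gain Theta = -R^{-1} B P at t = 0
print(P, 1.0 / (1.0 + T))               # numeric vs exact value of P(0)
```

With these constants P(0) = 1/2 exactly, and the numerical sweep recovers it (and the gain Θ(0) = −1/2) up to the discretization error.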

We have explained that a suitable optimal feedback operator for our Problem (SLQ) should belong to \(L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\). Nevertheless, to our best knowledge, the existence of such an operator is completely unknown for Problem (SLQ) with random coefficients. In this paper, we shall show that the existence of the optimal feedback operator for Problem (SLQ) is equivalent to the solvability of (4) in a suitable sense. When the coefficients A, B, C, D, G, R, Q are deterministic, such an equivalence was studied in (Ait Rami et al. 2001) (see also (Sun and Yong 2014) for the problem of a linear quadratic stochastic two-person zero-sum differential game). As far as we know, there is no study of such problems for the general case that A, B, C, D, R, Q are stochastic processes and G is a random variable.

The rest of this paper is organized as follows: the “Statement of the main results” section presents the main results of this paper. In the “Some preliminary results” section, we give some preliminary results which will be used in the remainder of the paper. The “Proof of Theorem 2.1”–“Proofs of Corollary 2.1 and Theorem 2.2” sections are devoted to the proofs of our main results. Finally, in the “Two illustrating examples” section, we give some examples for the existence and nonexistence of the optimal feedback operator.

Statement of the main results

Let us first introduce the following assumption:

(AS1) The coefficients in (1)–(2) satisfy the following measurability/integrability conditions:
$$ \begin{array}{ll} A(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{1}(0,T;{\mathbb{R}}^{n\times n})), \quad C(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{2}(0,T;{\mathbb{R}}^{n\times n})), \\ B(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{2}(0,T;{\mathbb{R}}^{n\times m})), \quad D(\cdot)\in L^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n\times m}),\\ Q(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{1}(0,T;{\mathcal{S}}({\mathbb{R}}^{n}))), \quad R(\cdot) \in L^{\infty}_{\mathbb{F}}(0,T;{\mathcal{S}}({\mathbb{R}}^{m})),\quad G\in L^{\infty}_{{\mathcal{F}}_{T}}(\Omega;{\mathcal{S}}({\mathbb{R}}^{n})). \end{array} $$
(7)

We have the following result:

Theorem 2.1

Let the assumption (AS1) hold. Then, Problem (SLQ) admits an optimal feedback operator \(\Theta (\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) if and only if the Riccati Eq. (4) admits a solution \(\left (P(\cdot),\Lambda (\cdot)\right)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;C([0,T];{\mathcal {S}}({\mathbb {R}}^{n}))) \times L^{p}_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) (for all p≥1) such that
$$ {\mathcal{R}}(K(t,\omega))\supset{\mathcal{R}}(L(t,\omega)) \quad\text{and}\quad K(t,\omega)\geq 0,\qquad \mathrm{a.e.}\ (t,\omega)\in [0,T]\times\Omega, $$
(8)
and
$$ K(\cdot)^{\dag}L(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{2}(0,T;{\mathbb{R}}^{m\times n})). $$
(9)
In this case, the optimal feedback operator Θ(·) is given as
$$ \Theta(\cdot)=-K(\cdot)^{\dag}L(\cdot) + \left(I_{m} - K(\cdot)^{\dag}K(\cdot)\right)\theta, $$
(10)
where \(\theta \in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) is arbitrarily given. Furthermore,
$$ \inf_{u\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m})}{\mathcal{J}}(s,\eta;u)=\frac{1}{2}\,{\mathbb{E}}\langle P(s)\eta,\eta\rangle_{{\mathbb{R}}^{n}}. $$
(11)

Corollary 2.1

Let (AS1) hold. Then the Riccati Eq. (4) admits at most one solution \(\left (P(\cdot),\Lambda (\cdot)\right)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;C([0,T];{\mathcal {S}}({\mathbb {R}}^{n}))) \times L^{p}_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) (for all p≥1) satisfying (8) and (9).

The result in Theorem 2.1 can be strengthened as follows.

Theorem 2.2

Let the assumption (AS1) hold. Then, Problem (SLQ) admits a unique optimal feedback operator \(\Theta (\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) if and only if the Riccati Eq. (4) admits a unique solution \(\left (P(\cdot),\Lambda (\cdot)\right)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;C([0,T];{\mathcal {S}}({\mathbb {R}}^{n}))) \times L^{p}_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) (for all p≥1) such that K(t,ω)>0 for a.e. \((t,\omega)\in [0,T]\times \Omega \) (and hence \(K^{\dag }\) in (4) can be replaced by \(K^{-1}\)) and \( K(\cdot)^{-1}L(\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\). In this case, the optimal feedback operator Θ(·) is given by Θ(·)=−K(·)−1 L(·), and (11) (in Theorem 2.1) holds.

Several remarks are in order.

Remark 2.1

We borrow ideas from (Ait Rami et al. 2001; Sun and Yong 2014) and employ the Moore–Penrose pseudo-inverse in the study of Riccati equations for SLQs when the matrix K in (5) is singular.

Remark 2.2

The proof of the sufficiency in Theorems 2.1–2.2 is very close to that in the deterministic setting, and also to that of the case where the coefficients in (1)–(2) are deterministic. The main difficulty in the proof of the necessity in Theorems 2.1–2.2 lies in the fact that Eq. (4) is a nonlinear equation whose nonlinearity is not globally Lipschitz. Nevertheless, since Riccati equations appearing in Control Theory enjoy some special structures, at least under some assumptions they are still globally solvable. A basic idea for solving Riccati equations globally is to link them with suitable solvable optimal control problems, via which one obtains the desired solutions. To the best of our knowledge, this idea was first used to solve deterministic differential Riccati equations in (Reid 1946) (though in that paper, the author considered the second variation for a nonsingular nonparametric fixed endpoint problem in the calculus of variations rather than an optimal control problem). The idea was later adopted by many authors (e.g., (Ait Rami et al. 2001; Bismut 1978; Kalman 1960; Sun and Yong 2014; Tang 2003)), and we shall use it in this work as well.

Remark 2.3

To simplify the presentation, in this paper we assume that the filtration \({\mathbb {F}}\) is natural. One can also consider the case of general filtration. Of course, for general filtration the solutions to (4) have to be understood in the sense of transposition (introduced in (Lü and Zhang 2013; 2014)).

Remark 2.4

The same SLQ problems (as those in this paper) still make sense in infinite dimensions. However, the new difficulty in the infinite dimensional setting is how to interpret the stochastic integral \(\int _{0}^{T}\Lambda (t)dW(t)\) appearing in (4), because in this case Λ(·) is an operator-valued stochastic process, and therefore one has to use the theory of transposition solutions for operator-valued backward stochastic evolution equations (Lü and Zhang 2014; 2015). Progress in this respect is presented in (Lü and Zhang 2017).

Remark 2.5

It would be quite interesting to extend the main result of this paper to linear quadratic stochastic differential games or to similar problems for mean-field stochastic differential equations. Some relevant studies can be found in (Pham 2017; Sun and Yong 2014), but the full picture is still unclear.

Some preliminary results

In this section, we present some preliminary results, which will be useful later.

First, for any s[0,T), we consider the following stochastic differential equation:
$$ \left\{ \begin{array}{ll} dx = ({\mathcal{A}} x + f)dt + ({\mathcal{B}} x+g)dW(t) &\text{in }[s,T],\\ x(s)=\eta. \end{array} \right. $$
(12)

Here \({\mathcal {A}},{\mathcal {B}}\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{k\times k}))\), \(\eta \in L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{k})\), and \(f,g\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{k})\).

Let us recall the following result (we refer to (Protter 2005, Chapter V, Section 3) for its proof).

Lemma 3.1

The Eq. (12) admits one and only one \({\mathbb {F}}\)-adapted solution \(x(\cdot)\in L^{2}_{\mathbb {F}}(\Omega ;\) \(C([s,T];{\mathbb {R}}^{k}))\).

Next, we need to consider the following backward stochastic differential equation:
$$ \left\{ \begin{array}{ll} dy = f(t,y,z)dt + zdW(t) &\text{in }[s,T],\\ y(T)=\xi. \end{array} \right. $$
(13)
Here \(\xi \in L^{\infty }_{{\mathcal {F}}_{T}}(\Omega ;{\mathbb {R}}^{k})\), and f satisfies that
$$ \left\{\begin{aligned} &f(\cdot,0,0)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{1}(s,T;{\mathbb{R}}^{k})),\\ &|f(\cdot,\alpha_{1},\alpha_{2})-f(\cdot,\beta_{1},\beta_{2})|_{{\mathbb{R}}^{k}}\leq f_{1}(\cdot)|\alpha_{1}-\beta_{1}|_{{\mathbb{R}}^{k}} + f_{2}(\cdot)|\alpha_{2}-\beta_{2}|_{{\mathbb{R}}^{k}}, \;\forall \alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\in {\mathbb{R}}^{k}, \end{aligned}\right. $$
(14)

where \(f_{1}(\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{1}(s,T;{\mathbb {R}}))\) and \(f_{2}(\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(s,T;{\mathbb {R}}))\).

By means of (Delbaen and Tang 2010, Theorem 2.7) (see also (Briand et al. 2003) for an early result in this direction), we have

Lemma 3.2

For any p>1, the Eq. (13) admits one and only one \({\mathbb {F}}\)-adapted solution \((y(\cdot),z(\cdot))\in L^{\infty }_{\mathbb {F}}(\Omega ;C([s,T];{\mathbb {R}}^{k}))\times L^{p}_{\mathbb {F}}(\Omega ;L^{2}(s,T;{\mathbb {R}}^{k}))\).

Further, let us recall the following known Pontryagin-type maximum principle (Bismut 1976, Theorem 3.2).

Lemma 3.3

Let \((\bar x(\cdot),\bar u(\cdot))\) \(\in L^{2}_{\mathbb {F}}(\Omega ;C([s,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\) be an optimal pair of Problem (SLQ). Then there exists a pair \( (\bar y(\cdot), \bar z(\cdot)) \in L^{2}_{\mathbb {F}}(\Omega ;C([s,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{n}) \) satisfying the following backward stochastic differential equation:
$$\left\{ \begin{array}{ll} d\bar y(t)=-\left(A^{\top} \bar y(t)+C^{\top} \bar z(t)+Q\bar x(t)\right)dt +\bar z(t)dW(t) &\text{in}\ [s,T],\\ \bar y(T)= G\bar x(T), \end{array} \right. $$
and
$$R\bar u(\cdot)+B^{\top} \bar y(\cdot)+D^{\top} \bar z(\cdot) =0,\qquad\mathrm{a.e.}\ (t,\omega)\in [s,T]\times\Omega. $$

As an immediate consequence of Lemmas 3.1 and 3.3, we have the following result.

Corollary 3.1

Let Θ(·) be an optimal feedback operator for Problem (SLQ). Then, for any \((s,\eta)\in [0,T)\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\), the following forward-backward stochastic differential equation:
$$\left\{ \begin{array}{ll} d\bar x(t) =(A+B\Theta)\bar x(t) dt+ (C+D\Theta)\bar x(t) dW(t) &\text{in}\ [s,T],\\ d\bar y(t)=-\left(A^{\top} \bar y(t)+C^{\top} \bar z(t)+ Q\bar x(t)\right)dt+\bar z(t)dW(t)\quad&\text{in}\ [s,T],\\ \bar x(s)\;\;=\eta,\quad\bar y(T)=G\bar x(T), \end{array} \right. $$
admits a unique solution \( (\bar x(\cdot),\bar y(\cdot),\bar z(\cdot))\in L^{2}_{\mathbb {F}}(\Omega ;C([s,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(\Omega ;\) \(C([s,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{n}) \), and
$$R\Theta \bar x(\cdot)+B^{\top} \bar y(\cdot) +D^{\top} \bar z(\cdot) =0,\quad \mathrm{a.e.}\ (t,\omega)\in [s,T]\times\Omega. $$

Finally, for the reader’s convenience, let us recall the following result for the Moore-Penrose pseudo-inverse and refer the readers to (Ben-Israel and Greville 1974, Chapter 1) for its proof.

Lemma 3.4

1) Let \(M\in {\mathbb {R}}^{n\times n}\). Then the Moore–Penrose pseudo-inverse \(M^{\dag }\) of M satisfies that
$$M^{\dag} = {\lim}_{\delta \searrow 0}\,\, (M^{\top} M + \delta I_{n})^{-1} M^{\top}. $$
2) If \(M\in {\mathcal {S}}({\mathbb {R}}^{n})\), then \(M^{\dag } M=M M^{\dag }\) and \(M^{\dag } M\) is the orthogonal projector from \({\mathbb {R}}^{n}\) onto the range of M.
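Both conclusions of Lemma 3.4 can be checked numerically; the sketch below does so with NumPy's `pinv` for one illustrative singular symmetric matrix.

```python
import numpy as np

# Verify Lemma 3.4 for a concrete singular symmetric matrix (illustrative only).
M = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])          # symmetric, rank 2
M_dag = np.linalg.pinv(M)                # Moore-Penrose pseudo-inverse

# 1) M^dag = lim_{delta -> 0} (M^T M + delta I)^{-1} M^T
delta = 1e-8
M_reg = np.linalg.solve(M.T @ M + delta * np.eye(3), M.T)
print(np.allclose(M_reg, M_dag, atol=1e-6))

# 2) For symmetric M: M^dag M = M M^dag, and this product is the orthogonal
#    projector onto the range of M (symmetric and idempotent).
Pr = M_dag @ M
print(np.allclose(Pr, M @ M_dag), np.allclose(Pr @ Pr, Pr), np.allclose(Pr, Pr.T))
```

Shrinking `delta` further drives `M_reg` toward `M_dag`, in line with the limit formula in part 1).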

Proof of Theorem 2.1

In this section, we shall give a proof of Theorem 2.1. The proof will be divided into two subsections.

Proof of the sufficiency in Theorem 2.1

In this subsection, we prove the “if” part in Theorem 2.1. The proof is more or less standard; for the reader’s convenience, we provide the details here.

Let us assume that the Eq. (4) admits a solution \(\left (P(\cdot),\Lambda (\cdot)\right)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;C([0,T];\) \({\mathcal {S}}({\mathbb {R}}^{n}))) \times L^{p}_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) (for any \(p\in [1,\infty)\)) so that (8) and (9) hold. Then, for any \(\theta \in L^{\infty }_{\mathbb {F}}(\Omega ;\) \(L^{2}(0,T;{\mathbb {R}}^{m\times n}))\), by (5) and (9), the function Θ(·) given by (10) belongs to \(L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;\) \({\mathbb {R}}^{m\times n}))\). For any \(s\in [0,T)\), \(\eta \in L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\), and \(u(\cdot)\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\), let x(·)≡x(· ;s,η,u(·)) be the corresponding state process for (1). By Itô’s formula, and using (1), (4) and (5), we obtain that
$$ \begin{array}{ll} d \left\langle P x,x \right\rangle_{{\mathbb{R}}^{n}}\\ = \left\langle d P x,x \right\rangle_{{\mathbb{R}}^{n}} + \left\langle P dx,x \right\rangle_{{\mathbb{R}}^{n}} + \left\langle P x,dx \right\rangle_{{\mathbb{R}}^{n}}\\ \quad + \left\langle d P dx,x \right\rangle_{{\mathbb{R}}^{n}}+ \left\langle dP x,dx \right\rangle_{{\mathbb{R}}^{n}} + \left\langle P dx,dx \right\rangle_{{\mathbb{R}}^{n}} \\ =\left\langle - \left[PA+ A^{\top} P+\Lambda C +C^{\top} \Lambda+C^{\top} P C+ Q - L^{\top} K^{\dag} L \right]x,x \right\rangle_{{\mathbb{R}}^{n}}dr\\ \quad + \left\langle P (A x + Bu),x\right\rangle_{{\mathbb{R}}^{n}}dr+ \left\langle P (C x + Du),x\right\rangle_{{\mathbb{R}}^{n}}dW(r)\\ \quad + \left\langle Px,A x + Bu\right\rangle_{{\mathbb{R}}^{n}}dr + \left\langle Px,C x + Du\right\rangle_{{\mathbb{R}}^{n}}dW(r)\\ \quad + \left\langle \Lambda (C x + Du),x\right\rangle_{{\mathbb{R}}^{n}}dr + \left\langle \Lambda x,C x + Du\right\rangle_{{\mathbb{R}}^{n}}dr\\ \quad + \left\langle P (C x + Du),C x + Du\right\rangle_{{\mathbb{R}}^{n}}dr + \left\langle \Lambda x, x\right\rangle_{{\mathbb{R}}^{n}}dW(r)\\ = -\left\langle(Q-L^{\top} K^{\dag}L)x,x \right\rangle_{{\mathbb{R}}^{n}}dr +\left\langle P Bu, x\right\rangle_{{\mathbb{R}}^{n}}dr \\ \quad + \left\langle P x, Bu\right\rangle_{{\mathbb{R}}^{n}}dr+ \left\langle P C x, Du\right\rangle_{{\mathbb{R}}^{n}}dr + \left\langle PDu, Cx+ Du\right\rangle_{{\mathbb{R}}^{n}}dr\\ \quad + \left\langle Du, \Lambda x\right\rangle_{{\mathbb{R}}^{n}}dr + \left\langle \Lambda x, Du\right\rangle_{{\mathbb{R}}^{n}}dr + \langle P (C x + Du),x\rangle_{{\mathbb{R}}^{n}}dW(r)\\ \quad + \langle Px,C x + Du\rangle_{{\mathbb{R}}^{n}}dW(r)+ \left\langle \Lambda x, x\right\rangle_{{\mathbb{R}}^{n}}dW(r)\\ =-\left\langle(Q-L^{\top} K^{\dag}L)x,x \right\rangle_{{\mathbb{R}}^{n}}dr+2\langle L^{\top}u,x\rangle_{{\mathbb{R}}^{n}}dr+\langle D^{\top}P Du,u\rangle_{{\mathbb{R}}^{m}} dr\\ \quad +\left[2\langle P(Cx+Du),x\rangle_{{\mathbb{R}}^{n}}+\langle \Lambda x,x\rangle_{{\mathbb{R}}^{n}}\right]dW(r). \end{array} $$
(15)

Since K is an adapted process, from the first conclusion in Lemma 3.4, we deduce that \(K^{\dag }\) is also adapted.

Notice that from (10) one has
$$\begin{array}{ll} K \Theta=-KK^{\dagger}L,\qquad L+K \Theta =L-KK^{\dagger}L. \end{array} $$
Moreover, by \({\mathcal {R}}(K(\cdot))\supset {\mathcal {R}}(L(\cdot))\), we conclude that for a.e. \((t,\omega)\in (0,T)\times \Omega \) and for any \(v\in {\mathbb {R}}^{n}\), there is a \(\hat v\in {\mathbb {R}}^{m}\) such that \(K(t,\omega)\hat v = L(t,\omega) v\). Hence
$$\begin{array}{ll} L(t,\omega)v+K(t,\omega) \Theta(t,\omega)v\\ =L(t,\omega)v-K(t,\omega) K^{\dagger}(t,\omega)L(t,\omega)v \\ =K(t,\omega)\hat v-K(t,\omega)K^{\dagger}(t,\omega)K(t,\omega)\hat v=0. \end{array} $$
This indicates that
$$L(t,\omega)+K(t,\omega) \Theta(t,\omega)=0 \;\mathrm{for\ a.e.}\ (t,\omega)\in (0,T)\times\Omega, $$
which, together with the symmetry of K(·), implies that \(L^{\top} =-\Theta^{\top} K\). Since \(\Theta (\cdot)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) and K(·) is bounded, one has \(L(\cdot)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\). Moreover, from the definition of Θ in (10), we derive that
$$\Theta^{\top}K\Theta=-\Theta^{\top}K\left[K^{\dagger}L+(I_{m}-K^{\dagger}K)\theta\right]=-\Theta^{\top}K K^{\dagger}L=L^{\top}K^{\dagger} L. $$
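At a fixed (t,ω), the identities just derived (L + KΘ = 0 and ΘᵀKΘ = LᵀK†L, for Θ as in (10) under the range condition (8)) are finite-dimensional linear algebra and can be checked numerically; the matrices below are random and purely illustrative.

```python
import numpy as np

# Check, at one frozen (t, omega), that Theta = -K^dag L + (I - K^dag K) theta
# satisfies L + K Theta = 0 and Theta^T K Theta = L^T K^dag L, provided the
# columns of L lie in the range of K.  Dimensions and entries are illustrative.
rng = np.random.default_rng(1)
m, n = 4, 3
Mfac = rng.normal(size=(2, m))
K = Mfac.T @ Mfac                     # symmetric PSD, rank <= 2, hence singular
Lmat = K @ rng.normal(size=(m, n))    # range condition: R(K) contains R(L)
theta = rng.normal(size=(m, n))       # arbitrary, as in (10)

K_dag = np.linalg.pinv(K)
Theta = -K_dag @ Lmat + (np.eye(m) - K_dag @ K) @ theta

print(np.allclose(Lmat + K @ Theta, 0))
print(np.allclose(Theta.T @ K @ Theta, Lmat.T @ K_dag @ Lmat))
```

Note that the theta-dependent part of Θ is annihilated by K, which is exactly why every choice of θ in (10) yields the same cost.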
As a result, we may rewrite (15) as follows:
$$ \begin{array}{ll} d \left\langle P x,x \right\rangle_{{\mathbb{R}}^{n}} =-\left\langle(Q-\Theta^{\top} K\Theta)x,x \right\rangle_{{\mathbb{R}}^{n}}dr+2\langle L^{\top}u,x\rangle_{{\mathbb{R}}^{n}}dr+\langle D^{\top}P Du,u\rangle_{{\mathbb{R}}^{m}} dr\\ \qquad\qquad\qquad\,+\left[2\langle P(Cx+Du),x\rangle_{{\mathbb{R}}^{n}}+\langle \Lambda x,x\rangle_{{\mathbb{R}}^{n}}\right]dW(r). \end{array} $$
(16)
In order to deal with the stochastic integral in (16), for any s[0,T), we introduce the following sequence of stopping times τ j :
$$\tau_{j} \triangleq \inf\left\{t\geq s\;\left|\; \int_{s}^{t}|\Lambda(r)|^{2}dr\geq j\right.\right\}\wedge T,\qquad j=1,2,\cdots. $$
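On a discretized path, this localization amounts to a first-hitting time of the running integral of |Λ|², capped at T; in the following sketch the path standing in for Λ is arbitrary and purely illustrative.

```python
import numpy as np

# Discretized version of the stopping times tau_j: the first time the running
# integral of |Lambda|^2 reaches level j, capped at T.  The path Lam below is
# an arbitrary stand-in for Lambda(r, omega), for illustration only.
rng = np.random.default_rng(2)
T, N = 1.0, 1000
dt = T / N
t = np.linspace(0.0, T, N + 1)
Lam = rng.normal(size=N + 1)
integ = np.concatenate([[0.0], np.cumsum(Lam[:-1]**2) * dt])  # int_0^t |Lam|^2 dr

def tau(j):
    """First hitting time of level j by the running integral, wedge T."""
    hit = np.flatnonzero(integ >= j)
    return t[hit[0]] if hit.size else T

print(tau(0.5), tau(1.0), tau(integ[-1] + 1.0))
```

By construction tau(j) is nondecreasing in j and equals T once j exceeds the total integral over [0,T], mirroring τ_j ↗ T.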
It is easy to see that \(\tau _{j}\nearrow T\), \({\mathbb {P}}\)-a.s., as \(j\to \infty \). Using (16), we obtain
$$\begin{aligned} & {\mathbb{E}}\langle P(\tau_{j})x(\tau_{j}),x(\tau_{j})\rangle_{{\mathbb{R}}^{n}} +{\mathbb{E}}\int_{s}^{T}\chi_{[s,\tau_{j}]}\left[\langle Q x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +\langle Ru(r),u(r)\rangle_{{\mathbb{R}}^{m}}\right]dr\\ &={\mathbb{E}}\langle P(s)\eta,\eta\rangle_{{\mathbb{R}}^{n}}+ {\mathbb{E}}\int_{s}^{T}\chi_{[s,\tau_{j}]}\left[\langle\Theta^{\top}K\Theta x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +2 \langle L^{\top}u(r),x(r)\rangle_{{\mathbb{R}}^{n}}\right]dr\\ &\quad +{\mathbb{E}}\int_{s}^{T} \chi_{[s,\tau_{j}]}\langle (R+D^{\top}P D)u(r),u(r)\rangle_{{\mathbb{R}}^{m}} dr. \end{aligned} $$
Clearly,
$$|\langle P(\tau_{j})x(\tau_{j}),x(\tau_{j})\rangle_{{\mathbb{R}}^{n}}|\leq |P|_{L^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n\times n})}\max_{t\in [0,T]}|x(t,\cdot)|^{2}_{{\mathbb{R}}^{n}},\qquad {\mathbb{P}}\text{-a.s.} $$
Hence, by the Dominated Convergence Theorem, we obtain that
$$ {\lim}_{j\to\infty}{\mathbb{E}}\langle P(\tau_{j})x(\tau_{j}),x(\tau_{j})\rangle_{{\mathbb{R}}^{n}}= {\mathbb{E}}\langle P(T)x(T),x(T)\rangle_{{\mathbb{R}}^{n}}. $$
(17)
Furthermore,
$$\begin{array}{ll} \left|\chi_{[s,\tau_{j}]}\left[\langle Q x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +\langle Ru(r),u(r)\rangle_{{\mathbb{R}}^{m}}\right]\right|\\ \leq \left|\left[\langle Q x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +\langle Ru(r),u(r)\rangle_{{\mathbb{R}}^{m}}\right]\right|\in L^{1}_{\mathbb{F}}(0,T), \end{array} $$
Hence, by the Dominated Convergence Theorem again, we obtain that
$$ \begin{aligned} &\lim\limits_{j\to\infty}{\mathbb{E}}\int_{s}^{T}\chi_{[s,\tau_{j}]}\left[\langle Q x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +\langle Ru(r),u(r)\rangle_{{\mathbb{R}}^{m}}\right]dr \\ &={\mathbb{E}}\int_{s}^{T} \left[\langle Q x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +\langle Ru(r),u(r)\rangle_{{\mathbb{R}}^{m}}\right]dr. \end{aligned} $$
(18)
Similarly, we can show that
$$ \begin{aligned} &\lim\limits_{j\to\infty} {\mathbb{E}}\int_{s}^{T}\chi_{[s,\tau_{j}]}\left[\langle\Theta^{\top}K\Theta x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +2 \langle L^{\top}u(r),x(r)\rangle_{{\mathbb{R}}^{n}}\right]dr\\ &\quad +\lim\limits_{j\to\infty} {\mathbb{E}}\int_{s}^{T} \chi_{[s,\tau_{j}]}\langle (R+D^{\top}P D)u(r),u(r)\rangle_{{\mathbb{R}}^{m}} dr \\ &= {\mathbb{E}}\int_{s}^{T} \left[\langle\Theta^{\top}K\Theta x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +2 \langle L^{\top}u(r),x(r)\rangle_{{\mathbb{R}}^{n}}\right]dr\\ &\quad + {\mathbb{E}}\int_{s}^{T} \langle (R+D^{\top}P D)u(r),u(r)\rangle_{{\mathbb{R}}^{m}} dr. \end{aligned} $$
(19)
It follows from (17)–(19) that
$$ \begin{aligned} &2{\mathcal{J}}(s,\eta;u(\cdot))\\ &= {\mathbb{E}}\langle Gx(T),x(T)\rangle_{{\mathbb{R}}^{n}}+{\mathbb{E}}\int_{s}^{T}\left[\langle Q x(r),x(r)\rangle_{{\mathbb{R}}^{n}} +\langle Ru(r),u(r)\rangle_{{\mathbb{R}}^{m}}\right]dr\\ &={\mathbb{E}}\langle P(s)\eta,\eta\rangle_{{\mathbb{R}}^{n}}+ {\mathbb{E}}\int_{s}^{T}\left[\langle\Theta^{\top}K\Theta x,x\rangle_{{\mathbb{R}}^{n}} +2 \langle L^{\top}u,x \rangle_{{\mathbb{R}}^{n}}+\langle K u,u\rangle_{{\mathbb{R}}^{m}} \right]dr\\ &={\mathbb{E}}\left[\left\langle P(s)\eta,\eta\right\rangle_{{\mathbb{R}}^{n}}+\int_{s}^{T}\left(\left\langle K\Theta x,\Theta x\right\rangle_{{\mathbb{R}}^{m}}-2\left\langle K\Theta x,u\right\rangle_{{\mathbb{R}}^{m}}+\langle Ku,u\rangle_{{\mathbb{R}}^{m}}\right)dr\right]\\ &=2{\mathcal{J}}(s,\eta;\Theta\bar x) +{\mathbb{E}}\int_{s}^{T}\left\langle K(u-\Theta x),u-\Theta x\right\rangle_{{\mathbb{R}}^{m}}dr, \end{aligned} $$
(20)
where we have used the fact that \(L^{\top}=-\Theta^{\top}K\). Hence, by K(·)≥0, we have
$${\mathcal{J}}(s,\eta;\Theta\bar x)\leq {\mathcal{J}}(s,\eta;u),\quad\forall\, u(\cdot)\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m}). $$

Thus, for any \(\theta \in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\), the operator Θ(·) given by (10) is an optimal feedback operator for Problem (SLQ). This completes the proof of the sufficiency in Theorem 2.1.
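The completion-of-squares identity (20) can be checked by a quick Monte Carlo experiment in a toy scalar case with deterministic coefficients. This sketch is ours, not part of the proof, and all numerical values (r, g, η, the constant control u ≡ 1) are illustrative: taking A=B=C=Q=0, D=1, R=r and G=g, one has P ≡ g, Λ ≡ 0, K = r+g and Θ ≡ 0, so (20) reduces to 2𝒥(0,η;u) = gη² + (r+g) E∫₀ᵀ u² dt for every control u.

```python
# Monte Carlo check of the reduced identity in the toy case A=B=C=Q=0, D=1:
# here dx = u dW, Theta = 0, K = r+g, and (20) becomes
#   2 J(0, eta; u) = g*eta**2 + (r+g) * E int_0^T u^2 dt.
# r, g, eta and the constant control u are illustrative choices, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
T, n_paths = 1.0, 20000
r, g, eta, u = 0.5, 1.0, 1.0, 1.0

WT = rng.normal(0.0, np.sqrt(T), n_paths)    # W(T) on each path
xT = eta + u * WT                            # x(T) = eta + int_0^T u dW (u constant)

two_J_mc = np.mean(g * xT**2) + r * u**2 * T       # 2 J(0,eta;u) by Monte Carlo
two_J_formula = g * eta**2 + (r + g) * u**2 * T    # right-hand side of (20)
print(two_J_mc, two_J_formula)
```

Since the suboptimal constant control pays the extra nonnegative term in (20), the simulated cost also exceeds the optimal value ½gη².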

Proof of the necessity in Theorem 2.1

This subsection is devoted to proving the “only if” part in Theorem 2.1. We borrow some ideas from (Ait Rami et al. 2001; Bismut 1978; Kalman 1960; Reid 1946; Sun and Yong 2014), and divide the proof into several steps.

Step 1. Let \(\Theta (\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) be an optimal feedback operator for Problem (SLQ) on [0,T]. Then, by Corollary 3.1, for any \(\zeta \in {\mathbb {R}}^{n}\), the following forward-backward stochastic differential equation
$$ \left\{ \begin{array}{ll} dx(t)= (A+B\Theta)x(t)dt+ (C+D\Theta)x(t) dW(t)& \text{in }[0,T],\\ dy(t)=-\left(A^{\top} y(t)+C^{\top} z(t)+Qx(t)\right)dt+z(t)dW(t) & \text{in }[0,T],\\ x(0)\;\,=\zeta,\qquad y(T)= Gx(T) \end{array} \right. $$
(21)
admits a solution \((x(\cdot),y(\cdot),z(\cdot))\in L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(\Omega ;C([0,T];\) \({\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(0,T;{\mathbb {R}}^{n})\) so that
$$ R\Theta x+B^{\top} y+D^{\top} z =0,\quad\mathrm{a.e.}\ (t,\omega)\in (0,T)\times\Omega. $$
(22)
Also, consider the following stochastic differential equation:
$$ \left\{ \begin{array}{ll} d\tilde x \;\;\,= \left[-A-B\Theta+\left(C+D\Theta\right)^{2} \right]^{\top} \tilde x dt - \left(C+D\Theta\right)^{\top} \tilde x dW(t) \qquad \text{in }[0,T],\\ \tilde x(0)=\zeta. \end{array} \right. $$
(23)

By Lemma 3.1, Eq. (23) admits a unique solution \(\tilde x\in L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n}))\).

Further, consider the following \({\mathbb {R}}^{n\times n}\)-valued equations:
$$ \left\{ \begin{array}{ll} dX= (A+B\Theta)X dt+ (C+D\Theta) X dW(t)& \text{in }[0,T],\\ dY =-\left(A^{\top} Y+ C^{\top} Z+ QX\right)dt+ ZdW(t) &\text{in }[0,T],\\ X(0)=I_{n}, \quad Y(T)= GX(T) \end{array} \right. $$
(24)
and
$$ \left\{ \begin{array}{ll} d\widetilde X= \left[-A-B\Theta+\left(C+D\Theta\right)^{2} \right]^{\top} \widetilde X dt - (C+D\Theta)^{\top} \widetilde X dW(t)& \text{in }[0,T],\\ \widetilde X(0)=I_{n}. \end{array} \right. $$
(25)

In view of Corollary 3.1, it is easy to show that the Eqs. (24) and (25) admit, respectively, unique solutions \( (X,Y,Z)\in L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n\times n}))\times L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n\times n}))\times L^{2}_{\mathbb {F}}(0,T;{\mathbb {R}}^{n\times n}) \) and \( \widetilde X \in L^{2}_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n\times n}))\).

It follows from (21)–(25) that, for any \(\zeta \in {\mathbb {R}}^{n}\),
$$ \begin{array}{ll} x(t;\zeta) = X(t)\zeta, \quad y(t;\zeta) = Y(t)\zeta, \quad \tilde x(t;\zeta) = \widetilde X(t)\zeta,\quad& \forall\, t\in [0,T],\\ z(t;\zeta) = Z(t)\zeta, & \mathrm{a.e.}\ t\in [0,T]. \end{array} $$
(26)
By (22) and noting (26), we find that
$$ R\Theta X+B^{\top} Y+D^{\top} Z =0, \quad \mathrm{a.e.}\ (t,\omega)\in[0,T]\times\Omega. $$
(27)
For any \(\zeta,\rho \in {\mathbb {R}}^{n}\) and \(t\in[0,T]\), by Itô’s formula, we have
$$\begin{aligned}&\left\langle x(t;\zeta),\tilde x(t;\rho)\right\rangle_{{\mathbb{R}}^{n}} - \left\langle \zeta,\rho\right\rangle_{{\mathbb{R}}^{n}}\\ &=\int_{0}^{t} \left\langle \left(A + B \Theta\right) x(r;\zeta),\tilde x(r;\rho) \right\rangle_{{\mathbb{R}}^{n}} dr + \int_{0}^{t} \left\langle \left(C +D \Theta \right) x(r;\zeta),\tilde x(r;\rho) \right\rangle_{{\mathbb{R}}^{n}} dW(r)\\ & \quad + \int_{0}^{t} \left\langle x(r;\zeta), \left[-A -B \Theta +\left(C+D\Theta\right)^{2}\right]^{\top} \tilde x(r;\rho) \right\rangle_{{\mathbb{R}}^{n}} dr\\ &\quad - \int_{0}^{t} \left\langle x(r;\zeta), \left(C+D\Theta\right)^{\top} \tilde x(r;\rho) \right\rangle_{{\mathbb{R}}^{n}} dW(r) \\ & \quad- \int_{0}^{t} \left\langle \left(C +D \Theta \right) x(r;\zeta), \left(C +D \Theta \right)^{\top} \tilde x(r;\rho) \right\rangle_{{\mathbb{R}}^{n}} dr\\ &=0. \end{aligned} $$
Thus,
$$\left\langle X(t)\zeta, \widetilde X(t)\rho\right\rangle_{{\mathbb{R}}^{n}}=\left\langle x(t;\zeta),\tilde x(t;\rho)\right\rangle_{{\mathbb{R}}^{n}} = \left\langle \zeta,\rho\right\rangle_{{\mathbb{R}}^{n}}, \quad {\mathbb{P}}\text{-}\mathrm{a.s.} $$

This implies that \(X(t)\widetilde X(t)^{\top}=I_{n}\), \({\mathbb {P}}\)-a.s., that is, \(\widetilde X(t)^{\top}=X(t)^{-1}\), \({\mathbb {P}}\)-a.s.
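The inverse-flow relation \(X(t)\widetilde X(t)^{\top}=I_{n}\) can also be observed numerically. The sketch below is ours, with the scalar coefficients a = A+BΘ and c = C+DΘ frozen to illustrative constants: it simulates (24) and (25) by the Euler–Maruyama scheme on a common Brownian path and checks that the product stays near 1.

```python
# Euler-Maruyama simulation of the scalar flows
#   dX  = a X dt + c X dW            (cf. (24))
#   dXt = (-a + c^2) Xt dt - c Xt dW (cf. (25))
# on one path; pathwise, X(t)*Xt(t) should stay close to 1.
# a, c and the step size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
T, n_steps = 1.0, 50000
dt = T / n_steps
a, c = 0.1, 0.2

dW = rng.normal(0.0, np.sqrt(dt), n_steps)
X  = np.cumprod(1 + a * dt + c * dW)              # Euler scheme for X
Xt = np.cumprod(1 + (-a + c**2) * dt - c * dW)    # Euler scheme for X-tilde

print(X[-1] * Xt[-1])   # close to 1 up to discretization error
```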

Step 2. Put
$$ P(t,\omega)\triangleq Y(t,\omega) \widetilde X(t,\omega)^{\top}, \quad \Pi(t,\omega)\triangleq Z(t,\omega) \widetilde X(t,\omega)^{\top}. $$
(28)
By Itô’s formula,
$$\begin{array}{ll} dP&=\left\{-\left(A^{\top}Y+ C^{\top} Z+ Q X \right)X^{-1} + Y X^{-1} \left[(C +D \Theta)^{2} - A -B \Theta \right]\right.\\ &\left.\quad- Z X^{-1} (C +D \Theta) \right\}dt + \left[ Z X^{-1} - Y X^{-1} (C +D \Theta)\right] dW(t)\\ &=\left\{- A^{\top} P - C^{\top} \Pi - Q +P \left[(C+D\Theta)^{2}-A-B\Theta\right]-\Pi (C +D\Theta) \right\}dt \\ &\quad +\left[\Pi-P(C+D\Theta) \right]dW(t). \end{array} $$
Let
$$ \Lambda\triangleq\Pi-P (C+D\Theta). $$
(29)
Then, (P(·),Λ(·)) solves the following \({\mathbb {R}}^{n\times n}\)-valued backward stochastic differential equation:
$$ \left\{ \begin{array}{ll} dP =-\left[PA+ A^{\top}P + \Lambda C+ C^{\top}\Lambda+ C^{\top} P C \right.\\ \qquad\quad\left. +(P B + C^{\top}PD + \Lambda D) \Theta + Q \right]dt + \Lambda dW(t) &\text{in }[0,T],\cr P(T)=G. \end{array} \right. $$
(30)

By Lemma 3.2, we conclude that \((P,\Lambda)\in L^{\infty }_{\mathbb {F}}(\Omega ;C([0,T];{\mathbb {R}}^{n\times n})) \times L^{p}_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{n\times n}))\) with any p>1.

For any \(t\in[0,T)\) and \(\eta \in L^{2}_{{\mathcal {F}}_{t}}(\Omega ;{\mathbb {R}}^{n})\), let us consider the following forward-backward stochastic differential equation:
$$ \left\{ \begin{array}{ll} d x^{t}(r)=\left(A+B\Theta\right) x^{t}dr + \left(C + D \Theta \right)x^{t}dW(r) &\text{in}\ [t,T],\\ dy^{t}(r) = -\left(A^{\top} y^{t} + C^{\top} z^{t} + Qx^{t} \right) dr + z^{t}dW(r) &\text{in}\ [t,T], \\ x^{t}(t)=\eta, \quad y^{t}(T)= G x^{t}(T). \end{array} \right. $$
(31)
Clearly, Eq. (31) admits a unique solution \(\left (x^{t}(\cdot),y^{t}(\cdot),z^{t}(\cdot)\right)\in L^{2}_{\mathbb {F}}(\Omega ;C([t,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(\Omega ;C([t,T];{\mathbb {R}}^{n}))\times L^{2}_{\mathbb {F}}(t,T;{\mathbb {R}}^{n})\). Also, consider the following forward-backward stochastic differential equation:
$$ \left\{ \begin{array}{ll} d X^{t}(r)=\left(A+B\Theta\right) X^{t}dr + \left(C + D \Theta \right)X^{t}dW(r) &\text{in}\ [t,T],\\ dY^{t}(r) = -\left(A^{\top} Y^{t} + C^{\top} Z^{t} + QX^{t} \right) dr + Z^{t}dW(r) &\text{in}\ [t,T], \\ X^{t}(t)=I_{n}, \quad Y^{t}(T)= G X^{t}(T) \end{array} \right. $$
(32)
Likewise, Eq. (32) admits a unique solution \(\left (X^{t}(\cdot),Y^{t}(\cdot),Z^{t}(\cdot)\right)\in L^{2}_{\mathbb {F}}(\Omega ;C([t,T];{\mathbb {R}}^{n\times n}))\times L^{2}_{\mathbb {F}}(\Omega ;C([t,T];{\mathbb {R}}^{n\times n}))\times L^{2}_{\mathbb {F}}(t,T;{\mathbb {R}}^{n\times n})\). It follows from (31) and (32) that, for any \(\eta \in L^{2}_{{\mathcal {F}}_{t}}(\Omega ;{\mathbb {R}}^{n})\),
$$ \begin{array}{ll} x^{t}(r) = X^{t}(r)\eta, \quad y^{t}(r) = Y^{t}(r)\eta,\quad& \forall\,r\in[t,T]. \\ z^{t}(r) = Z^{t}(r)\eta,&\mathrm{a.e.}\ r\in[t,T]. \end{array} $$
(33)
By the uniqueness of the solution to (21), for any \(\zeta \in {\mathbb {R}}^{n}\) and \(t\in[0,T]\), we have that
$$X^{t}(r)X(t)\zeta = x^{t}(r;X(t)\zeta)=x(r;\zeta),\quad {\mathbb{P}}\text{-a.s.} $$
thus,
$$Y^{t}(t) X(t)\zeta = y^{t}(t;X(t)\zeta)=Y(t)\zeta,\quad {\mathbb{P}}\text{-a.s.} $$
This implies that for all \(t\in[0,T]\),
$$ Y^{t}(t) =Y(t)\widetilde X(t)^{\top}= P(t),\quad {\mathbb{P}}\text{-a.s.} $$
(34)
Let \(\eta,\xi \in L^{2}_{{\mathcal {F}}_{t}}(\Omega ;{\mathbb {R}}^{n})\). Since \(Y^{t}(r)\eta =y^{t}(r;\eta)\) and \(X^{t}(r)\xi =x^{t}(r;\xi)\), applying Itô’s formula to \(\langle x^{t}(\cdot),y^{t}(\cdot) \rangle _{{\mathbb {R}}^{n}}\), we get that
$$ \begin{aligned} {\mathbb{E}}\langle \xi, P(t) \eta \rangle_{{\mathbb{R}}^{n}} &= {\mathbb{E}}\langle GX^{t}(T)\eta, X^{t}(T)\xi \rangle_{{\mathbb{R}}^{n}} + {\mathbb{E}}\int_{t}^{T}\langle Q(r)X^{t}(r)\eta,X^{t}(r)\xi \rangle_{{\mathbb{R}}^{n}} dr\\ & \quad - {\mathbb{E}}\int_{t}^{T}\langle B\Theta X^{t}(r)\xi, Y^{t}(r)\eta \rangle_{{\mathbb{R}}^{n}} dr- {\mathbb{E}}\int_{t}^{T}\langle D\Theta X^{t}(r)\xi, Z^{t}(r)\eta \rangle_{{\mathbb{R}}^{n}} dr. \end{aligned} $$
(35)
This, together with Corollary 3.1, implies that
$$\begin{aligned} {\mathbb{E}}\langle P(t)\eta, \xi \rangle_{{\mathbb{R}}^{n}}&={\mathbb{E}}\langle GX^{t}(T) \eta,X^{t}(T)\xi \rangle_{{\mathbb{R}}^{n}}+{\mathbb{E}}\int_{t}^{T} \left(\langle Q(r)X^{t}(r)\eta,X^{t}(r)\xi \rangle_{{\mathbb{R}}^{n}}\right.\\ &\quad \left.+ \langle R(r)\Theta(r)X^{t}(r)\eta,\Theta(r)X^{t}(r)\xi \rangle_{{\mathbb{R}}^{n}} \right)dr. \end{aligned} $$
Therefore,
$$ \begin{aligned} {\mathbb{E}}\langle P(t)\eta, \xi \rangle_{{\mathbb{R}}^{n}}&= {\mathbb{E}}\left\langle{\vphantom{\int_{t}^{T}}} \xi, X^{t}(T)^{\top} GX^{t}(T)\eta\right.\\ &\quad\left. + \int_{t}^{T} \left(X^{t}(r)^{\top} Q(r)X^{t}(r)\eta + X^{t}(r)^{\top} \Theta(r)^{\top} R(r)\Theta(r)X^{t}(r)\eta \right)dr \right\rangle_{{\mathbb{R}}^{n}}. \end{aligned} $$
(36)
This concludes that
$$ \begin{aligned} P(t) = {\mathbb{E}}&\left({\vphantom{\int_{t}^{T}}}X^{t}(T)^{\top} GX^{t}(T)\right. \\ &\left.\left.+\int_{t}^{T} \left(X^{t}(r)^{\top} Q(r)X^{t}(r) + X^{t}(r)^{\top} \Theta(r)^{\top} R(r)\Theta(r)X^{t}(r) \right)dr\;\right|\;{\mathcal{F}}_{t}\right). \end{aligned} $$
(37)

By (37) and the symmetry of G, Q(·) and R(·), it is easy to conclude that, for any \(t\in[0,T]\), P(t) is symmetric, \({\mathbb {P}}\)-a.s.
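For constant coefficients and a constant frozen gain Θ ≡ θ, the representation (37) can be checked by Monte Carlo, since the closed loop is then a geometric Brownian motion with E[X(r)²] = e^{λr}, λ = 2(a+bθ)+(c+dθ)², so that (37) gives P(0) = g e^{λT} + (q + Rθ²)(e^{λT}−1)/λ. The sketch below is ours; all coefficient values are illustrative.

```python
# Monte Carlo verification of (37) in a scalar, constant-coefficient case with
# a frozen constant feedback theta. All coefficient values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d, q, R, g, theta = 0.1, 0.2, 0.2, 0.1, 1.0, 0.5, 1.0, -0.3
T, n_steps, n_paths = 1.0, 200, 10000
dt = T / n_steps

alpha, beta = a + b * theta, c + d * theta   # closed-loop drift and diffusion
lam = 2 * alpha + beta**2
# closed form of (37): P(0) = g e^{lam T} + (q + R theta^2)(e^{lam T} - 1)/lam
P0_exact = g * np.exp(lam * T) + (q + R * theta**2) * np.expm1(lam * T) / lam

# exact simulation of the squared geometric Brownian closed loop on the grid
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
tg = dt * np.arange(1, n_steps + 1)
X2 = np.exp(2 * (alpha - 0.5 * beta**2) * tg + 2 * beta * W)   # X(t_k)^2
X2 = np.concatenate([np.ones((n_paths, 1)), X2], axis=1)       # prepend X(0)^2 = 1
integral = dt * X2[:, :-1].sum(axis=1)       # left Riemann sum of int_0^T X^2 dr
P0_mc = float(np.mean(g * X2[:, -1] + (q + R * theta**2) * integral))
print(P0_exact, P0_mc)
```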

Next, we prove that \(\Lambda (t,\omega)=\Lambda (t,\omega)^{\top }\) for a.e. \((t,\omega)\in (0,T)\times \Omega \).

Clearly, \((P^{\top },\Lambda ^{\top })\) satisfies that
$$ \left\{ \begin{array}{rl} dP^{\top} &=-\left[P^{\top} A + A^{\top} P^{\top} + \Lambda^{\top} C + C^{\top} \Lambda^{\top} + C^{\top} P^{\top} C \right.\\ \qquad\quad &\quad\left.+\Theta^{\top} (P B + C^{\top} PD + \Lambda D)^{\top} + Q \right]dt + \Lambda^{\top} dW(t)\ \text{in }[0,T],\cr P(T)^{\top}&=G. \end{array} \right. $$
(38)
According to (30) and (38), and noting that P(·) is symmetric, we find that for any \(t\in[0,T]\),
$$ \begin{aligned} 0&=-\int_{0}^{t} \left\{{\vphantom{-\left[\Lambda C + C^{\top} \Lambda +(P B + C^{\top} PD + \Lambda D) \Theta \right]^{\top}}}\left[\Lambda C+ C^{\top} \Lambda +(P B + C^{\top} PD + \Lambda D) \Theta \right]\right.\\ &\qquad\qquad \left.-\left[\Lambda C + C^{\top} \Lambda +(P B + C^{\top} PD + \Lambda D) \Theta \right]^{\top} \right\} d\tau + \int_{0}^{t}(\Lambda-\Lambda^{\top}) dW(\tau). \end{aligned} $$
(39)
By (39) and the uniqueness of the semimartingale decomposition, we conclude that
$$ \Lambda(t,\omega)=\Lambda(t,\omega)^{\top},\quad \mathrm{a.e.}\ (t,\omega)\in (0,T)\times\Omega. $$
(40)

Step 3. In this step, we show that (P,Λ) is a pair of stochastic processes satisfying (4), (8), (9) and (10). Moreover, (11) holds.

From (27) and (28), it follows that
$$ B^{\top} P+D^{\top} \Pi +R\Theta =0, \quad \mathrm{a.e.}\ (t,\omega)\in[0,T]\times\Omega. $$
(41)
By (41) and (29), and noting (5), we see that
$$ \begin{array}{ll} 0&=B^{\top} P + D^{\top}\left[\Lambda + P(C+D\Theta)\right] + R\Theta = B^{\top} P + D^{\top} PC + D^{\top}\Lambda +K\Theta\\ &=L+K\Theta. \end{array} $$
(42)
Thus, it follows from (42) that \({\mathcal {R}}(K(\cdot))\supset {\mathcal {R}}(L(\cdot))\) and
$$K^{\dag}K\Theta=-K^{\dag}L. $$
By Lemma 3.4, \(K^{\dag }K\) is an orthogonal projector from \({\mathbb {R}}^{m}\) onto the range of K. Hence we have
$$\int_{0}^{T}|K^{\dag}(r)L(r)|_{{\mathbb{R}}^{m\times n}}^{2}dr\leq \int_{0}^{T}|K^{\dag}(r)K(r)|_{{\mathbb{R}}^{m\times m}}^{2}\,|\Theta(r)|_{{\mathbb{R}}^{m\times n}}^{2}dr\leq \int_{0}^{T}|\Theta(r)|_{{\mathbb{R}}^{m\times n}}^{2}dr, \quad \mathrm{a.s.} $$
This leads to (9). Moreover, we have (10), i.e., \(\Theta(\cdot)=-K(\cdot)^{\dag}L(\cdot)+\left(I_{m}-K(\cdot)^{\dag}K(\cdot)\right)\theta\) for some \(\theta \in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\). Therefore, by (40), (42) and Lemma 3.4, it follows that
$$ \begin{array}{ll} (PB + C^{\top} PD + \Lambda D)\Theta=L^{\top}\Theta =-\Theta^{\top} K\Theta \\ =-\Theta^{\top} K\left[-K(\cdot)^{\dag}L + \left(I_{m} - K(\cdot)^{\dag}K(\cdot)\right)\theta\right]\\ =\Theta^{\top} KK^{\dag}L= -L^{\top} K^{\dag}L. \end{array} $$
(43)

Hence, by (30), we conclude that (P,Λ) is a solution to (4).
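The pseudo-inverse manipulations of this step can be illustrated at a fixed (t,ω) with plain linear algebra: for a singular K ⪰ 0 with ℛ(L) ⊂ ℛ(K), K†K is the orthogonal projector onto ℛ(K), every Θ = −K†L + (I_m − K†K)θ satisfies L + KΘ = 0 as in (42), and Θ⊤KΘ = L⊤K†L as in (45). The matrices below are arbitrary examples; this sketch is ours, not part of the proof.

```python
# Pseudo-inverse algebra behind (42) and (45) at a fixed (t, omega).
# K is a singular psd matrix; L is built so that range(L) is inside range(K).
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2
V = rng.normal(size=(m, 2))
K = V @ V.T                          # psd with rank 2 < m, so K is singular
L = K @ rng.normal(size=(m, n))      # forces range(L) inside range(K)
Kd = np.linalg.pinv(K)               # Moore-Penrose pseudo-inverse K-dagger
Pr = Kd @ K                          # the orthogonal projector onto range(K)

theta = rng.normal(size=(m, n))      # an arbitrary theta
Theta = -Kd @ L + (np.eye(m) - Pr) @ theta
print(np.linalg.norm(L + K @ Theta))                        # ~0: (42) holds
print(np.linalg.norm(Theta.T @ K @ Theta - L.T @ Kd @ L))   # ~0: (45) holds
```

Note that the θ-dependent part is annihilated by K, which is exactly why the choice of θ in (10) does not affect (42) or (45).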

Now, let us show that
$$ K\geq 0,\quad\mathrm{a.e.}\ (t,\omega)\in[0,T]\times\Omega. $$
(44)
For this purpose, from (43), we see that
$$ \Theta^{\top} K\Theta =L^{\top} K^{\dag}L. $$
(45)
Due to (45) and (5), for any \((s,\eta)\in [0,T)\times L_{{\mathcal {F}}_{s}}^{2}(\Omega ;{\mathbb {R}}^{n})\), by repeating the procedure used in deriving (20) above, we can show that
$$ \begin{aligned} {\mathcal{J}}(s,\eta;u(\cdot))&=\frac{1}{2}{\mathbb{E}}\left(\left\langle P(s)\eta,\eta\right\rangle_{{\mathbb{R}}^{n}}+\int_{s}^{T}\left\langle K(u-\Theta x),u-\Theta x\right\rangle_{{\mathbb{R}}^{m}}dr\right)\\ &={\mathcal{J}}\left(s,\eta;\Theta(\cdot)\bar x(\cdot)\right) +\frac{1}{2}{\mathbb{E}}\int_{s}^{T}\left\langle K(u-\Theta x),u-\Theta x\right\rangle_{{\mathbb{R}}^{m}} dr. \end{aligned} $$
(46)
Hence, by the optimality of the feedback operator Θ(·), (11) holds and
$$ 0\leq {\mathbb{E}}\int_{s}^{T}\left\langle K(u-\Theta x),u-\Theta x\right\rangle_{{\mathbb{R}}^{m}} dr,\qquad\forall\; u(\cdot)\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m}). $$
(47)

For any \(v(\cdot)\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\), we may choose a control \(u(\cdot)\in L^{2}_{\mathbb {F}}(s,T;{\mathbb {R}}^{m})\) (in (1)) in the “feedback form” u(·)=v(·)+Θ(·)x(·). Hence, by (47), we obtain (44). This completes the proof of the necessity in Theorem 2.1.

Proofs of Corollary 2.1 and Theorem 2.2

This section is devoted to proving Corollary 2.1 and Theorem 2.2.

Proof of Corollary 2.1

Suppose that the Eq. (4) admits two pairs of solutions \(\left (P_{i}(\cdot),\Lambda _{i}(\cdot)\right)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;C([0,T];{\mathcal {S}}({\mathbb {R}}^{n})))\times L^{p}_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) (i=1,2), so that
$$\begin{array}{ll} {\mathcal{R}}(K_{i}(t,\omega))\supset{\mathcal{R}}(L_{i}(t,\omega)),\ \ K_{i}(t,\omega)\geq 0,\ \ \mathrm{a.e.}\ (t,\omega)\in [0,T]\times\Omega,\\ K_{i}(\cdot)^{\dag}L_{i}(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{2}(0,T;{\mathbb{R}}^{m\times n})), \end{array} $$
where \(K_{i}\triangleq R+D^{\top } P_{i}D\) and \( L_{i}\triangleq B^{\top } P_{i}+D^{\top } (P_{i}C+\Lambda _{i})\). Let
$$\Theta_{i}(\cdot)\triangleq-K_{i}(\cdot)^{\dag}L_{i}(\cdot) + \left(I_{m} - K_{i}(\cdot)^{\dag}K_{i}(\cdot)\right)\theta_{i} $$
for some \(\theta _{i}\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\). Then by the sufficiency in Theorem 2.1, Θ 1(·) and Θ 2(·) are two optimal feedback operators and
$$ \inf_{u\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m})}{\mathcal{J}}(s,\eta;u)=\frac{1}{2}\,{\mathbb{E}}\langle P_{1}(s)\eta,\eta\rangle_{{\mathbb{R}}^{n}}=\frac{1}{2}\,{\mathbb{E}}\langle P_{2}(s)\eta,\eta\rangle_{{\mathbb{R}}^{n}}. $$
(48)

By the arbitrariness of s, η, one has P 1(·)=P 2(·). Similar to the proof of (40), one can show that Λ 1(·)=Λ 2(·). □

Proof of Theorem 2.2

The “if" part. By the necessity in Theorem 2.1, it remains to show the uniqueness of optimal feedback operators. Suppose there exists another optimal feedback operator \( \widetilde \Theta (\cdot)\). By the necessity in Theorem 2.1, the Riccati equation (4) admits a (corresponding) solution \(\left (\widetilde P(\cdot),\widetilde \Lambda (\cdot)\right)\in L^{\infty }_{{\mathbb {F}}}(\Omega ;C([0,T];{\mathcal {S}}({\mathbb {R}}^{n}))) \times L^{p}_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\) (for any p[1,)) so that
$$\begin{array}{ll} {\mathcal{R}}(\widetilde K(t,\omega))\supset{\mathcal{R}}(\widetilde L(t,\omega)),\ \ \widetilde K(t,\omega)\geq 0, \ \ \mathrm{a.e.}\ (t,\omega)\in [0,T]\times\Omega,\\ \widetilde K(\cdot)^{\dag}\widetilde L(\cdot)\in L^{\infty}_{\mathbb{F}}(\Omega;L^{2}(0,T;{\mathbb{R}}^{m\times n})),\ \ \widetilde \Theta(\cdot)=-\widetilde K(\cdot)^{\dag}\widetilde L(\cdot) + \left(I_{m} - \widetilde K(\cdot)^{\dag}\widetilde K(\cdot)\right)\widetilde \theta \end{array} $$
for some \(\widetilde \theta \in L^{\infty }_{\mathbb {F}}(\Omega ;\) \(L^{2}(0,T;{\mathbb {R}}^{m\times n}))\), where \(\widetilde K \triangleq R+D^{\top }\widetilde P D\) and \( \widetilde L \triangleq B^{\top } \widetilde P +D^{\top } (\widetilde P C+\widetilde \Lambda)\). Moreover,
$$ \inf_{u\in L^{2}_{\mathbb{F}}(s,T;{\mathbb{R}}^{m})}{\mathcal{J}}(s,\eta;u)= \frac{1}{2}\,{\mathbb{E}}\langle \widetilde P (s)\eta,\eta\rangle_{{\mathbb{R}}^{n}}. $$
(49)

Since (11) and (49) hold for any \((s,\eta)\in [0,T)\times L^{2}_{{\mathcal {F}}_{s}}(\Omega ;{\mathbb {R}}^{n})\), it follows that \(P(\cdot)=\widetilde P(\cdot)\). Similar to the proof of (40), one can show that \(\Lambda (\cdot)=\widetilde \Lambda (\cdot)\). Hence, \(K=\widetilde K\) and \(L=\widetilde L\). Since K(t,ω)>0 for a.e. \((t,\omega)\in[0,T]\times\Omega\), one has \(\Theta (\cdot)=\widetilde \Theta (\cdot)\). □

The “only if" part. We only need to prove that the uniqueness and existence of optimal feedback operators implies K(·)>0 a.e. For any \(\tilde \theta \in L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\), we construct another stochastic process \(\widetilde \Theta \in L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) as follows
$$\widetilde{\Theta}\triangleq -K^{\dagger}L+(I_{m}-K^{\dagger}K)(\theta+\tilde\theta). $$
Repeating the argument in the proof of the sufficiency in Theorem 2.1, one can show that \(\widetilde \Theta (\cdot)\) is an optimal feedback operator. By the uniqueness of optimal feedback operators, we deduce that \(\Theta (\cdot)=\widetilde \Theta (\cdot)\), and therefore \((I_{m}-K^{\dagger }K)\tilde \theta =0\). The arbitrariness of \(\tilde \theta \) indicates that \(K^{\dagger}K=I_{m}\). As a result, \(K^{\dagger}=K^{-1}\), and hence K(·)>0 a.e.

Two illustrating examples

We have discussed the relationship between the existence of an optimal feedback operator and the well-posedness of the Riccati Eq. (4). In this section, we present two examples which are inspired by (Wang, T: New optimality conditions in linear quadratic problems with random coefficients and applications, in submission). The first example exhibits a case in which the desired feedback operator \(\Theta (\cdot)\in L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathbb {R}}^{m\times n}))\) does exist; the second exhibits a situation in which it does not.

We begin with the following positive case.

Example 6.1

Applying Itô’s formula to sinW(·), we obtain that

$$ \sin W(T)-\sin W(t)=\int_{t}^{T}\cos W(s)dW(s)-\frac{1}{2}\int_{t}^{T}\sin W(s)ds. $$
(50)
Write
$$ \begin{aligned} &\xi\triangleq2+\frac{T}{2}+\sin W(T)+\frac{1}{2}\int_{0}^{T}\sin W(s)ds,\\ &y(t)\triangleq2+\frac{T}{2}+\sin W(t)+\frac{1}{2}\int_{0}^{t}\sin W(s)ds,\quad Y(t)\triangleq\cos W(t),\qquad t\in[0,T]. \end{aligned} $$
(51)
From (50), it is clear that 1≤y(·)≤3+T, and (y(·),Y(·)) satisfies
$$y(t)=\xi-\int_{t}^{T}Y(s)dW(s),\qquad t\in[0,T]. $$
Consider an SLQ problem with the following data (Note that, by (51), 1≤ξ≤3+T):
$$ m=n=1,\ \ A=B=C=Q=S=0,\ \ D=1,\ \ R=\frac{1}{2(3+T)},\ \ G=\xi^{-1}-R>0. $$
(52)
The corresponding Riccati equation is
$$ \left\{\begin{array}{ll} dP(s)&=(R+P(s))^{-1}\Lambda^{2}(s)ds+\Lambda(s)dW(s),\quad s\in [0,T],\\ P(T)&=G. \end{array}\right. $$
(53)
By Itô’s formula, one can show that (P(·),Λ(·))=(y(·)−1R,−y(·)−2 Y(·)) is the unique solution to (53). According to Theorem 2.1,
$$\Theta(\cdot)\triangleq -(R+P(\cdot))^{-1}\Lambda(\cdot)=y(\cdot)^{-1}Y(\cdot)\in L^{\infty}_{{\mathbb{F}}}(\Omega;L^{2}(0,T;{\mathbb{R}})) $$
is an optimal feedback operator.
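A pathwise simulation (ours; the grid size is an illustrative choice) makes the bounds of Example 6.1 visible: on every discretized Brownian path, y stays in [1, 3+T], so the feedback gain |cos W(t)|/y(t) never exceeds 1.

```python
# Pathwise check of the bounds in Example 6.1 with T = 1:
#   y(t) = 2 + T/2 + sin W(t) + (1/2) int_0^t sin W(s) ds  stays in [1, 3+T],
# hence |Theta(t)| = |cos W(t)| / y(t) <= 1. The grid size is illustrative.
import numpy as np

rng = np.random.default_rng(4)
T, n_steps = 1.0, 100000
dt = T / n_steps
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
# left-endpoint Riemann sum for (1/2) * int_0^t sin W(s) ds
half_int = 0.5 * np.concatenate([[0.0], np.cumsum(np.sin(W[:-1]) * dt)])
y = 2 + T / 2 + np.sin(W) + half_int
gain = np.abs(np.cos(W)) / y          # |Theta(t)|

print(y.min(), y.max(), gain.max())
```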

Next, we give a negative example to show the nonexistence of the desired optimal feedback operator.

Example 6.2

Define two (one-dimensional) stochastic processes M(·) and ζ(·) and a stopping time τ as follows:
$$ \left\{\begin{aligned} M(t)&\triangleq \int_{0}^{t}\frac{1}{\sqrt{T-s}}dW(s),\qquad t\in[0,T),\\ \tau&\triangleq \inf\left\{\left.t\in[0,T)\;\right|\; |M(t)|>1\right\}\wedge T,\\ \zeta(t)&\triangleq \frac{\pi}{2\sqrt{2}\sqrt{T-t}}\chi_{[0,\tau]}(t),\qquad t\in[0,T). \end{aligned}\right. $$
(54)
It was shown in (Frei and dos Reis 2011, Lemma A.1) that
$$ \begin{aligned} \left|\int_{0}^{T}\zeta(s)dW(s)\right|=\frac{\pi}{2\sqrt{2}}\left|\int_{0}^{\tau}\frac{1}{\sqrt{T-t}} dW(t)\right|=\frac{\pi}{2\sqrt{2}}\left|M(\tau)\right|\leq \frac{\pi}{2\sqrt{2}}, \end{aligned} $$
(55)
and
$$ \begin{aligned} {\mathbb{E}}\left[\exp\left(\int_{0}^{T}|\zeta(t)|^{2}dt\right)\right]=\infty. \end{aligned} $$
(56)
Consider the following backward stochastic differential equation:
$$Y(t)=\int_{0}^{T}\zeta(s)dW(s)+\frac{\pi}{2\sqrt{2}}+1-\int_{t}^{T}Z(s)dW(s),\qquad t\in[0,T]. $$
This equation admits a unique solution (Y,Z) as follows
$$Y(t)=\int_{0}^{t}\zeta(s)dW(s)+\frac{\pi}{2\sqrt{2}}+1,\quad Z(t)=\zeta(t),\qquad t\in[0,T]. $$
From (54)–(56), it is easy to see that
$$ \left\{\begin{aligned} &1\leq Y(\cdot)\leq \frac{\pi}{\sqrt{2}}+1, \\ &Z(\cdot) \notin L^{\infty}_{{\mathbb{F}}}(\Omega;L^{2}(0,T;{\mathbb{R}})). \end{aligned}\right. $$
(57)
Consider an SLQ problem with the following data:
$$ m=n=1,\ \ A=B=C=Q=S=0,\ \ D=1,\ \ R=\frac{1}{4}>0,\ \ G=Y(T)^{-1}-\frac{1}{4}>0. $$
(58)
For this problem, the corresponding Riccati equation reads
$$ \left\{\begin{array}{ll} dP(s)&=(R+P(s))^{-1}\Lambda^{2}(s)ds+\Lambda(s)dW(s),\quad s\in [0,T],\\ P(T)&=G, \end{array}\right. $$
(59)

and Θ(·)=−(R+P(·))−1 Λ(·).

Put
$$\widetilde P(\cdot)\triangleq P(\cdot)+R,\ \ \widetilde \Lambda\triangleq \Lambda. $$
It follows from (59) that
$$\left\{\begin{array}{ll} d\widetilde P(s)&=\widetilde P(s)^{-1}\widetilde\Lambda^{2}(s)ds+\widetilde\Lambda(s)dW(s),\quad s\in [0,T],\\ \widetilde P(T)&=Y(T)^{-1}. \end{array}\right. $$
(60)
Applying Itô’s formula to \(Y(\cdot)^{-1}\), we deduce that \((\widetilde P(\cdot),\widetilde \Lambda (\cdot))=(Y(\cdot)^{-1},-Y(\cdot)^{-2}Z(\cdot))\) is the unique solution to (60). As a result,
$$(P(\cdot),\;\;\Lambda(\cdot))\triangleq (Y(\cdot)^{-1}-R,\;\;-Y(\cdot)^{-2}Z(\cdot)) $$
is the unique solution to the Riccati Eq. (59). Moreover, \(\Theta(\cdot)=-(R+P(\cdot))^{-1}\Lambda(\cdot)=Y(\cdot)^{-1}Z(\cdot)\). By (57), we see that Θ(·) does not belong to \(L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}))\) either. Hence, it is not a “qualified” feedback operator.
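The mechanism behind Example 6.2 can be simulated directly (our sketch; the grid is an illustrative choice and the discrete τ only approximates the stopping time): the stochastic integral ∫₀ᵀ ζ dW equals (π/(2√2))M(τ) by construction and is therefore bounded as in (55), while ∫₀ᵀ ζ(t)² dt = (π²/8) ln(T/(T−τ)) becomes arbitrarily large when τ is close to T, which is the source of (56).

```python
# Discretized simulation of M, tau and zeta from (54) on one Brownian path.
# The grid size is illustrative; the discrete tau approximates the stopping time.
import numpy as np

rng = np.random.default_rng(5)
T, n_steps = 1.0, 100000
dt = T / n_steps
t = dt * np.arange(n_steps)                     # left endpoints, all < T
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
M = np.cumsum(dW / np.sqrt(T - t))              # M(t_k) = int_0^{t_k} (T-s)^{-1/2} dW

crossed = np.abs(M) > 1.0
hit = int(np.argmax(crossed)) if crossed.any() else n_steps - 1
tau = t[hit]                                    # discrete approximation of tau
zeta = (np.pi / (2 * np.sqrt(2))) / np.sqrt(T - t) * (t <= tau)
stoch_int = float(np.sum(zeta * dW))            # = (pi/(2 sqrt 2)) M(tau) on the grid
energy = float(np.sum(zeta**2) * dt)            # int_0^T zeta^2 dt, large if tau ~ T

print(tau, stoch_int, energy)
```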

Remark 6.1

Clearly, the form of (59) is the same as that of (53) but their endpoint values at T are different. For the endpoint value G given in (52), the corresponding \(\Lambda (\cdot) \in L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}))\). However, for the endpoint value G given in (58), the resulting \(\Lambda (\cdot) \notin L^{\infty }_{{\mathbb {F}}}(\Omega ;L^{2}(0,T;{\mathbb {R}}))\).

Generally speaking, it would be quite interesting to find suitable conditions guaranteeing that Eq. (4) admits a unique solution \((P(\cdot),\Lambda (\cdot))\in L^{\infty }_{\mathbb {F}}(0,T;{\mathcal {S}}({\mathbb {R}}^{n}))\times L^{\infty }_{\mathbb {F}}(\Omega ;L^{2}(0,T;{\mathcal {S}}({\mathbb {R}}^{n})))\); however, this is still an unsolved problem.

Remark 6.2

Example 6.2 also shows that a solvable Problem (SLQ) need not admit an optimal feedback control. This is a significant difference between stochastic LQ problems and their deterministic counterparts. Indeed, it is well known that, whenever a deterministic LQ problem is solvable, one can always construct the desired feedback control through the corresponding Riccati equation.
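For contrast, the deterministic recipe always works: in the scalar deterministic LQ problem dx = (ax+bu)dt with cost ½(g x(T)² + ∫₀ᵀ(qx² + ru²)dt), the Riccati ODE −Ṗ = 2aP + q − b²P²/r, P(T) = g, has a bounded solution whenever q, r, g > 0, and u = −(b/r)Px is the optimal feedback. The sketch below is ours, with illustrative coefficients; it integrates the ODE backward in time by an explicit Euler scheme.

```python
# Backward Euler-in-time integration of the scalar deterministic Riccati ODE
#   -P'(t) = 2 a P + q - (b^2/r) P^2,   P(T) = g,
# producing the bounded feedback gain -(b/r) P(0). Coefficients are illustrative.
a, b, q, r, g = 0.5, 1.0, 1.0, 1.0, 1.0
T, n_steps = 1.0, 20000
dt = T / n_steps

P = g                                        # terminal condition P(T) = g
for _ in range(n_steps):                     # step backward from t = T to t = 0
    P += dt * (2 * a * P + q - (b**2 / r) * P**2)
theta0 = -(b / r) * P                        # feedback gain at t = 0

print(P, theta0)
```

With these coefficients P increases monotonically from g = 1 toward the stable equilibrium of the ODE and stays bounded, so the feedback gain is well defined on all of [0, T].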

Declarations

Acknowledgments

This work is supported by the NSF of China under grants 11471231, 11221101, 11231007, 11301298 and 11401404, the PCSIRT under grant IRT_16R53 and the Chang Jiang Scholars Program from the Chinese Education Ministry, the Fundamental Research Funds for the Central Universities in China under grant 2015SCU04A02, and the NSFC-CNRS Joint Research Project under grant 11711530142. The authors gratefully acknowledge Professor Jiongmin Yong for helpful discussions, and the anonymous referees for their useful comments.

Authors’ contributions

All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics, Sichuan University

References

  1. Ait Rami, M, Moore, JB, Zhou, X: Indefinite stochastic linear quadratic control and generalized differential Riccati equation. SIAM J. Control Optim. 40, 1296–1311 (2001).
  2. Athans, M: The role and use of the stochastic linear-quadratic-Gaussian problem in control system design. IEEE Trans. Automat. Control. 16, 529–552 (1971).
  3. Ben-Israel, A, Greville, TNE: Generalized Inverses: Theory and Applications. Pure and Applied Mathematics. Wiley-Interscience [John Wiley & Sons], New York-London-Sydney (1974).
  4. Bensoussan, A: Lectures on stochastic control. In: Nonlinear Filtering and Stochastic Control. Lecture Notes in Math, pp. 1–62. Springer-Verlag, Berlin (1981).
  5. Bismut, J-M: Linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim. 14, 419–444 (1976).
  6. Bismut, J-M: Contrôle des systèmes linéaires quadratiques: applications de l’intégrale stochastique. In: Séminaire de Probabilités XII, Université de Strasbourg 1976/77, Lecture Notes in Math, pp. 180–264. Springer-Verlag, Berlin (1978).
  7. Briand, PH, Delyon, B, Hu, Y, Pardoux, E, Stoica, L: L^p solutions of backward stochastic differential equations. Stochastic Process. Appl. 108, 109–129 (2003).
  8. Chen, S, Li, X, Zhou, X: Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 36, 1685–1702 (1998).
  9. Davis, MHA: Linear Estimation and Stochastic Control. Chapman and Hall Mathematics Series. Chapman and Hall, London; Halsted Press [John Wiley & Sons], New York (1977).
  10. Delbaen, F, Tang, S: Harmonic analysis of stochastic equations and backward stochastic differential equations. Probab. Theory Relat. Fields. 146, 291–336 (2010).
  11. Frei, C, dos Reis, G: A financial market with interacting investors: does an equilibrium exist? Math. Finan. Econ. 4, 161–182 (2011).
  12. Kalman, RE: Contributions to the theory of optimal control. Bol. Soc. Mat. Mexicana. 5, 102–119 (1960).
  13. Lü, Q, Zhang, X: Well-posedness of backward stochastic differential equations with general filtration. J. Diff. Equations. 254, 3200–3227 (2013).
  14. Lü, Q, Zhang, X: General Pontryagin-Type Stochastic Maximum Principle and Backward Stochastic Evolution Equations in Infinite Dimensions. Springer Briefs in Mathematics. Springer, Cham (2014).
  15. Lü, Q, Zhang, X: Transposition method for backward stochastic evolution equations revisited, and its application. Math. Control Relat. Fields. 5, 529–555 (2015).
  16. Lü, Q, Zhang, X: Optimal feedback for stochastic linear quadratic control and backward stochastic Riccati equations in infinite dimensions. Preprint (2017).
  17. Pardoux, E, Peng, S: Adapted solution of backward stochastic equation. Systems Control Lett. 14, 55–61 (1990).
  18. Peng, S: Stochastic Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim. 30, 284–304 (1992).
  19. Pham, H: Linear quadratic optimal control of conditional McKean-Vlasov equation with random coefficients and applications. arXiv:1604.06609v1 (2017).
  20. Protter, PE: Stochastic Integration and Differential Equations. Stochastic Modelling and Applied Probability, Vol. 21. Springer-Verlag, Berlin (2005).
  21. Reid, WT: A matrix differential equation of Riccati type. Amer. J. Math. 68, 237–246 (1946).
  22. Sun, J, Yong, J: Linear quadratic stochastic differential games: open-loop and closed-loop saddle points. SIAM J. Control Optim. 52, 4082–4121 (2014).
  23. Tang, S: General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and backward stochastic Riccati equations. SIAM J. Control Optim. 42, 53–75 (2003).
  24. Tang, S: Dynamic programming for general linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim. 53, 1082–1106 (2015).
  25. Wonham, WM: On a matrix Riccati equation of stochastic control. SIAM J. Control. 6, 681–697 (1968).
  26. Wonham, WM: Linear Multivariable Control, a Geometric Approach. Applications of Mathematics, Vol. 10. Springer-Verlag, New York (1985).
  27. Yong, J, Lou, H: A Concise Course on Optimal Control Theory. Higher Education Press, Beijing (2006). (In Chinese).
  28. Yong, J, Zhou, XY: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer-Verlag, New York, Berlin (2000).

Copyright

© The Author(s) 2017