Characterization of Optimal Feedback for Stochastic Linear Quadratic Control Problems

One of the fundamental issues in Control Theory is to design feedback controls. It is well known that Riccati equations were introduced in the deterministic case precisely to provide the desired feedback controls for linear quadratic control problems. To date, the same problem in the stochastic setting is only partially understood. In this paper, we establish the equivalence between the existence of optimal feedback controls for stochastic linear quadratic control problems with random coefficients and the solvability of the corresponding backward stochastic Riccati equations in a suitable sense. We also give a counterexample showing the nonexistence of feedback controls for a solvable stochastic linear quadratic control problem. This is a new phenomenon in the stochastic setting, significantly different from its deterministic counterpart.


Introduction
Let T > 0 and let (Ω, F, F, P) be a complete filtered probability space with F = {F_t}_{t∈[0,T]}, the natural filtration generated by a one-dimensional standard Brownian motion W(·). For any k ∈ N, t ∈ [0, T] and r ∈ [1, ∞), denote by L^r_{F_t}(Ω; R^k) the Banach space of all F_t-measurable random variables ξ : Ω → R^k such that E|ξ|^r_{R^k} < ∞, equipped with the canonical norm.
SLQs have been extensively studied in the literature; we refer the reader to [1,2,5,6,8,23,25,29] and the rich references therein. As in the deterministic setting ([12,26,28]), Riccati equations (and their variants) are fundamental tools for studying SLQs. Nevertheless, for stochastic problems one usually has to consider backward stochastic Riccati equations. For our Problem (SLQ), the desired backward stochastic Riccati equation takes the form (1.4), where A^⊤ stands for the transpose of A, K is defined in (1.5), and K^† denotes the Moore-Penrose pseudo-inverse of K.
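For orientation, backward stochastic Riccati equations for state equations with control-dependent diffusion are commonly written in the literature (e.g., following [5,23]) along the following lines; this is a sketch of the standard form, not necessarily identical term-by-term to the paper's own (1.4)-(1.5):

```latex
\begin{cases}
dP = -\Big( PA + A^{\top}P + C^{\top}PC + C^{\top}\Lambda + \Lambda C + Q\\
\qquad\qquad -\, \big(PB + C^{\top}PD + \Lambda D\big)\, K^{\dagger}\, \big(B^{\top}P + D^{\top}PC + D^{\top}\Lambda\big) \Big)\, dt
+ \Lambda\, dW(t), \quad t \in [0,T],\\[2pt]
P(T) = G, \qquad K \triangleq R + D^{\top}P D .
\end{cases}
```

The pair of unknowns is (P, Λ), which explains why the regularity of Λ enters the feedback design discussed below.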
To the best of our knowledge, [5] is the first work to study SLQs with random coefficients. In [5,6], the author formally derived the equation (1.4). However, at that time only some special and simple cases could be solved. Later, [18] proved the well-posedness of (1.4) under the condition that D = 0, by means of Bellman's principle of quasi-linearization and a monotone convergence result for symmetric matrices. This condition was dropped in [23], where it was proved that (1.4) admits a unique solution (P, Λ) in a suitable space under the assumptions that Q ≥ 0, G ≥ 0 and R ≫ 0.
In Control Theory, one of the fundamental issues is to find feedback controls, which are particularly important in practical applications. It is well known that, in the deterministic case, the purpose of introducing Riccati equations into Control Theory (e.g., [12,26,28]) is exactly to design feedback controls for linear quadratic control problems (LQs for short). More precisely, under some mild assumptions, one can show that the unique solvability of deterministic LQs is equivalent to that of the corresponding Riccati equations, via which one can construct the desired optimal feedback controls. Unfortunately, the same problem is only partially understood in the stochastic setting: it is solved in the case that all of the coefficients in (1.1)-(1.2) are deterministic ([1,22]), and in the case that the diffusion term in (1.1) is control-independent, i.e., D ≡ 0 ([18]). For the general case, however, we shall explain in Remark 1.2 below that the solution (P, Λ) to (1.4) found in [23] is not regular enough to allow the design of feedback controls for Problem (SLQ).
Because of the difficulty mentioned above, it is natural to ask the following question: is it possible to link the existence of optimal feedback controls (rather than merely the solvability) for Problem (SLQ) directly to the solvability of the equation (1.4)? Clearly, from the viewpoint of applications, it is more desirable to study the existence of feedback controls for SLQs than the solvability of the same problems.

Remark 1.2
Under some assumptions, it was shown in [23] that the equation (1.4) admits a unique solution (P, Λ) ∈ L^∞_F(0, T; S(R^n)) × L^p_F(Ω; L²(0, T; S(R^n))) for any given p ∈ [1, ∞). Nevertheless, the approach in [23] does not produce the sharp regularity Θ ∈ L^∞_F(Ω; L²(0, T; R^{m×n})) (but rather Θ ∈ L^p_F(Ω; L²(0, T; R^{m×n})) for any p ∈ [1, ∞)). Although the author showed in [23] that if x is an optimal state, then Θx ∈ L²_F(0, T; R^m) and hence Θx is the desired optimal control, such a control strategy is not robust, even with respect to very small perturbations. Indeed, suppose that the observation of the state carries an error δx ∈ L²_F(Ω; C([0, T]; R^n)) (the solution space of (1.1) with s = 0) with |δx|_{L²_F(Ω;C([0,T];R^n))} = ε > 0 for ε small enough. Then, by the well-posedness result in [23], one cannot conclude that Θ(x + δx) is an admissible control. Thus, the Θ given in [23] is not a "qualified" feedback, because it is not robust with respect to small perturbations. What if one assumes that Θ has a good sign, or is monotone (in a suitable sense)? Even in such a special case things do not improve: since we have no information about δx beyond its membership in L²_F(Ω; C([0, T]; R^n)), the integrability of Θδx (with respect to the sample point ω) cannot be improved, and therefore one cannot conclude that Θ(x + δx) is an admissible control, either.
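The integrability gap discussed in this remark can be illustrated with a toy computation (a hypothetical scalar stand-in for Θ, not the paper's construction): a standard Gaussian random variable lies in L^p for every finite p, yet is unbounded in ω, so L^p-regularity for all p < ∞ does not yield the L^∞-bound needed to map an arbitrary L² perturbation δx into an admissible control.

```python
import numpy as np

# A standard Gaussian Z satisfies E|Z|^p < infinity for every finite p (e.g., E Z^4 = 3),
# yet ess sup |Z| = infinity: the sample maximum typically grows with the sample size,
# so no uniform (L^infinity) bound holds even though all L^p norms are finite.
rng = np.random.default_rng(0)

def sample_stats(n, p=4):
    z = rng.standard_normal(n)
    return np.mean(np.abs(z) ** p), np.max(np.abs(z))

moment_small, sup_small = sample_stats(10_000)
moment_large, sup_large = sample_stats(1_000_000)

print(moment_small, moment_large)  # both estimates sit near the true 4th moment, 3
print(sup_small, sup_large)        # the sample maximum typically keeps growing with n
```

The same dichotomy is exactly what separates the L^p-regularity of Θ obtained in [23] from the L^∞-regularity needed for a robust feedback.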
In a recent paper [24], the well-posedness result in [23] was slightly improved, and it was shown that the solution (P, Λ) to (1.4) enjoys the BMO-martingale property. However, this does not help to produce the boundedness of Θ with respect to the sample point ω, either. Indeed, we shall give a counterexample (Example 6.2) showing that such boundedness is not guaranteed without further assumptions.
Let us recall that the main motivation for introducing feedback controls is to keep the corresponding control strategy robust with respect to (small) perturbations. Hence, the well-posedness results in [23,24] are not enough to solve our Problem (SLQ). Nevertheless, in the case that D ≡ 0, the optimal feedback operator in (2.4) specializes to an expression that is independent of Λ, and therefore the result in [23] (or that in [18]) suffices for this special case.
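To make the Λ-independence plausible, note that in the literature on such problems (e.g., [1,23]) the optimal feedback operator associated with (1.4) typically takes the form below (a sketch of the standard formula, which may differ in detail from (2.4)); setting D ≡ 0 gives K = R and removes Λ:

```latex
\Theta \;=\; -\,K^{\dagger}\big(B^{\top}P + D^{\top}PC + D^{\top}\Lambda\big)
\qquad\xrightarrow{\;D\,\equiv\,0\;}\qquad
\Theta \;=\; -\,R^{\dagger}\,B^{\top}P .
```

In the reduced formula only P appears, and P is already bounded in ω, which is why the regularity of Λ is immaterial when D ≡ 0.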
We have explained that a suitable optimal feedback control operator for our Problem (SLQ) should belong to L^∞_F(Ω; L²(0, T; R^{m×n})). Nevertheless, to the best of our knowledge, the existence of such an operator is completely unknown for Problem (SLQ) with random coefficients. In this paper, we shall show that the existence of the optimal feedback operator for Problem (SLQ) is equivalent to the solvability of (1.4) in a suitable sense. When the coefficients A, B, C, D, G, R, Q are deterministic, such an equivalence was studied in [1] (see also [22] for a linear quadratic stochastic two-person zero-sum differential game). As far as we know, there is no study of such problems in the general case where A, B, C, D, R, Q are stochastic processes and G is a random variable.
The rest of this paper is organized as follows. Section 2 is devoted to presenting the main results of this paper. In Section 3, we give some preliminary results, which will be used in the remainder of the paper. Sections 4-5 are devoted to the proofs of our main results. Finally, in Section 6, we give some examples of existence and nonexistence of the optimal feedback control operator.

Statement of the main results
Let us first introduce the following assumption:

(AS1) The coefficients in (1.1)-(1.2) satisfy the measurability/integrability conditions (2.1).

We have the following result.

Theorem 2.1 Let the assumption (AS1) hold. Then, Problem (SLQ) admits an optimal feedback operator Θ(·) ∈ L^∞_F(Ω; L²(0, T; R^{m×n})) if and only if the Riccati equation (2.2) admits a solution satisfying (2.3). In this case, the optimal feedback operator Θ(·) is given by (2.4).

The result in Theorem 2.1 can be strengthened as follows.
Several remarks are in order.

Remark 2.1
We borrow some ideas from [1,22] to employ the Moore-Penrose pseudo-inverse in the study of Riccati equations for SLQs when the matrix K in (1.5) is singular. Generally speaking, (1.4) is a nonlinear equation with a non-globally-Lipschitz nonlinearity. Nevertheless, since Riccati equations arising in Control Theory enjoy special structure, they are still globally solvable, at least under some assumptions. A basic idea for solving Riccati equations globally is to link them with suitable solvable optimal control problems, via which one obtains the desired solutions. To the best of our knowledge, this idea was first used to solve deterministic differential Riccati equations in [21] (though in that paper the author considered the second variation of a nonsingular nonparametric fixed-endpoint problem in the calculus of variations rather than an optimal control problem). The idea was later adopted by many authors (e.g., [1,6,12,22,23]). In this work, we shall use it as well.
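The linking idea can be made concrete in the simplest deterministic scalar case (a toy example with made-up coefficients, not the paper's setting): for dx = (a x + b u) dt with cost ∫₀ᵀ (q x² + r u²) dt + g x(T)², the value function of the control problem is p(t)x², where p solves the Riccati ODE ṗ = −(2ap + q − b²p²/r), p(T) = g; the optimal feedback is u = −(b p / r) x, and the optimal cost from x₀ equals p(0)x₀². A finite-difference check of the last identity:

```python
import numpy as np

# Toy scalar deterministic LQ problem: dx = (a x + b u) dt,
# cost = integral of (q x^2 + r u^2) dt over [0, T], plus g x(T)^2.
a, b, q, r, g = 1.0, 1.0, 1.0, 1.0, 1.0
T, n = 1.0, 100_000
dt = T / n

# Solve the Riccati ODE backward in time:  p' = -(2 a p + q - b^2 p^2 / r),  p(T) = g.
p = np.empty(n + 1)
p[n] = g
for i in range(n, 0, -1):
    p[i - 1] = p[i] + dt * (2 * a * p[i] + q - (b * p[i]) ** 2 / r)

# Simulate the closed loop with the feedback u = -(b p / r) x and accumulate the cost.
x0 = 1.0
x, cost = x0, 0.0
for i in range(n):
    u = -(b * p[i] / r) * x
    cost += dt * (q * x * x + r * u * u)
    x += dt * (a * x + b * u)
cost += g * x * x

# If p solves the Riccati equation, the optimal cost from x0 is p(0) * x0^2.
print(cost, p[0] * x0 * x0)  # the two values should nearly agree
```

Running the Riccati solver and the closed-loop simulation together is exactly the deterministic shadow of the strategy used in [21] and its successors: solvability of the control problem is transferred to global solvability of the Riccati equation, and vice versa.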

Remark 2.3
To simplify the presentation, in this paper we assume that the filtration F is the natural one. One can also consider the case of a general filtration; in that case, the solutions to (1.4) have to be understood in the sense of transposition (introduced in [13,14]).

Remark 2.4
The same SLQ problems (as those in this paper) still make sense in infinite dimensions. However, a new difficulty in the infinite-dimensional setting is how to interpret the stochastic integral ∫₀ᵀ Λ(t) dW(t) appearing in (1.4): in this case Λ(·) is an operator-valued stochastic process, and therefore one has to use the theory of transposition solutions for operator-valued backward stochastic evolution equations ([14,15]). Progress in this respect is presented in [16].
Remark 2.5 It would be quite interesting to extend the main result of this paper to linear quadratic stochastic differential games, or to similar problems for mean-field stochastic differential equations. Some relevant studies can be found in [19,22], but the full picture is still unclear.

Some preliminary results
In this section, we present some preliminary results, which will be useful later.
As an immediate consequence of Lemmas 3.1 and 3.3, we have the following result.
Corollary 3.1 Let Θ(·) be an optimal feedback operator for Problem (SLQ). Then, for any (s, η) ∈ [0, T) × L²_{F_s}(Ω; R^n), the corresponding forward-backward stochastic differential equation admits a unique solution.

Finally, for the reader's convenience, let us recall the following result on the Moore-Penrose pseudo-inverse; we refer the reader to [3, Chapter 1] for its proof.

In what follows, we give a proof of Theorem 2.1.
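The pseudo-inverse properties recalled from [3, Chapter 1] can be sanity-checked numerically (using numpy as an illustrative tool; the specific matrix is made up): for a singular symmetric K, the Moore-Penrose pseudo-inverse K† satisfies the four Penrose identities, while K†K is only the orthogonal projection onto the range of K, not the identity.

```python
import numpy as np

# A rank-one symmetric matrix, standing in for a degenerate K = R + D^T P D.
K = np.array([[1.0, 1.0],
              [1.0, 1.0]])
K_dag = np.linalg.pinv(K)

# The four Penrose conditions characterising the pseudo-inverse K^dagger.
assert np.allclose(K @ K_dag @ K, K)
assert np.allclose(K_dag @ K @ K_dag, K_dag)
assert np.allclose((K @ K_dag).T, K @ K_dag)
assert np.allclose((K_dag @ K).T, K_dag @ K)

# K^dagger K is an orthogonal projection; it equals the identity only if K is invertible.
proj = K_dag @ K
assert np.allclose(proj @ proj, proj)
assert not np.allclose(proj, np.eye(2))
print("Penrose conditions verified; K† K is a proper projection here")
```

The gap between "projection" and "identity" is precisely what the argument K†K = I_m ⇒ K† = K⁻¹ exploits in the proof of necessity.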

Proof of sufficiency in Theorem 2.1
In this subsection, we prove the "if" part in Theorem 2.1. The proof is more or less standard. For the reader's convenience, we provide here the details.

Proof of necessity in Theorem 2.1
This subsection is devoted to proving the "only if" part in Theorem 2.1. We borrow some ideas from [1,6,12,21,22] and divide the proof into several steps.
Repeating the argument in the proof of sufficiency in Theorem 2.1, one can show that the operator constructed above is also an optimal feedback operator. By the uniqueness of optimal feedback operators, it must coincide with Θ(·), and therefore (I_m − K†K)θ′ = 0. Since θ′ is arbitrary, we deduce that K†K = I_m. As a result, K† = K⁻¹, and hence K(·) > 0 a.e.
Next, we give a counterexample showing the nonexistence of the optimal feedback operator.
Generally speaking, it would be quite interesting to find suitable conditions guaranteeing that the equation (1.4) admits a unique solution (P(·), Λ(·)) ∈ L^∞_F(0, T; S(R^n)) × L^∞_F(Ω; L²(0, T; S(R^n))), but this remains an open problem.

Remark 6.2 Example 6.2 also shows that a solvable Problem (SLQ) need not admit feedback controls. This is a significant difference between SLQs and their deterministic counterparts: it is well known that, whenever a deterministic LQ problem is solvable, one can always construct the desired feedback control through the corresponding Riccati equation.