- Research
- Open Access
Optimal control with delayed information flow of systems driven by G-Brownian motion
- Francesca Biagini^{1, 2},
- Thilo Meyer-Brandis^{3},
- Bernt Øksendal^{2} and
- Krzysztof Paczka^{2}
https://doi.org/10.1186/s41546-018-0033-z
© The Author(s) 2018
- Received: 14 September 2017
- Accepted: 19 September 2018
- Published: 20 October 2018
Abstract
In this paper, we study strongly robust optimal control problems under volatility uncertainty. In the G-framework, we adapt the stochastic maximum principle to find necessary and sufficient conditions for the existence of a strongly robust optimal control.
Keywords
- G-Brownian motion
- optimal control problem
- stochastic maximum principle
1 Introduction
One of the motivations for this paper is to study the problem of optimal consumption and optimal portfolio allocation in finance under model uncertainty. In particular, we focus on volatility uncertainty, i.e., a situation where the volatility affecting the asset price dynamics is unknown and we need to consider a family of different volatility processes instead of just one fixed process (and hence also a family of models related to them).
In this paper, we work in a G-Brownian motion setting as in (Peng 2007) and use the related stochastic calculus, including the Itô formula, G-SDEs, martingale representation and G-BSDEs, as developed in (Peng 2007), (Peng 2010), (Soner et al. 2011a), (Song 2011), (Soner et al. 2011b), (Peng et al. 2014), (Hu et al. 2014c), (Hu et al. 2014a). To understand the nature of G-Brownian motion, it is important to note that its quadratic variation 〈B〉 is not deterministic, but absolutely continuous with density taking values in a fixed set (for example, \([\underline {\sigma }^{2},\bar \sigma ^{2}]\) for d=1). Each \(\mathbb {P}\in \mathcal {P}\) can then be seen as a model with a different scenario for the quadratic variation. This explains why G-Brownian motion is a good framework for investigating model uncertainty.
where X^{u} is a controlled G-SDE, see (8). This problem has been studied in (Matoussi et al. 2013) and (Hu et al. 2014b). In (Hu et al. 2014b), it is shown that the value function associated with such an optimal control problem satisfies the dynamic programming principle and is a viscosity solution of an HJB equation.^{1} (Matoussi et al. 2013) investigates the robust investment problem for geometric G-Brownian motion, where 2BSDEs (which are closely related to G-BSDEs) are used to find an optimal solution. In both papers, the optimal control is robust in a worst-case scenario sense.
It is interesting to note that in the simplest example of the optimal portfolio problem, which is the Merton problem with the logarithmic utility, one can easily prove that there exists a portfolio which is optimal not only in the worst-case scenario, but also for all probability measures \(\mathbb {P}\) (with the optimality criterion \(J^{\mathbb {P}}\)). We call this a strongly robust control. This strongly robust control is thus optimal in a much more robust sense than the worst-case scenario optimality. The new strongly robust optimality uses the fact that probability measures \(\mathbb {P}\) are mutually singular. Informally speaking, one can therefore modify the \(\mathbb {P}\)-optimal control \(\hat {u}^{\mathbb {P}}\) outside the support of a probability measure \(\mathbb {P}\) without losing the \(\mathbb {P}\)-optimality. As a consequence, if the family \(\{\hat {u}^{\mathbb {P}}\}_{\mathbb {P} \in \mathcal {P}}\) satisfies some consistency conditions, under a suitable choice of the underlying filtration the controls can be aggregated into a unique control \(\hat {u}\), which is optimal under every probability measure \(\mathbb {P}\). See (Soner et al. 2011b) for more details on aggregation.
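As a purely illustrative numerical sketch (parameters are hypothetical and not taken from the paper), the following self-contained Python snippet simulates two mutually singular volatility scenarios and checks that the realized quadratic variation in each stays within the announced bounds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical volatility bounds [sigma_low, sigma_high] for illustration.
sigma_low, sigma_high = 0.5, 1.5
n_steps = 1000
dt = 1.0 / n_steps

def scenario_path(vol_fn):
    """Simulate B under one volatility scenario.

    Each admissible scenario (a volatility process with values in
    [sigma_low, sigma_high]) corresponds to one measure P in the family.
    """
    t = np.arange(n_steps) * dt
    vol = vol_fn(t)
    dB = vol * rng.standard_normal(n_steps) * np.sqrt(dt)
    qv = np.cumsum(dB ** 2)   # realized quadratic variation <B>
    return np.cumsum(dB), qv

# Two mutually singular scenarios: constant low volatility versus a
# regime switch from low to high volatility at t = 1/2.
_, qv_low = scenario_path(lambda t: np.full_like(t, sigma_low))
_, qv_mix = scenario_path(lambda t: np.where(t < 0.5, sigma_low, sigma_high))

# In both scenarios the realized quadratic variation at T = 1 lies between
# sigma_low**2 * T and sigma_high**2 * T (up to discretization error).
print(qv_low[-1], qv_mix[-1])
```

The two scenarios put mass on disjoint sets of volatility paths, which is the informal reason why a control can be modified outside the support of one measure without affecting optimality under that measure.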
In this paper, we study strongly robust optimal control problems. However, instead of checking the consistency condition for the family of controls and using the aggregation theory established in (Soner et al. 2011b), we adapt the stochastic maximum principle to the G-framework to find necessary and sufficient conditions for the existence of a strongly robust optimal control. We stress that this method has the clear advantage that we solve only one G-BSDE to produce the strongly robust optimal control instead of considering the optimal control problem for all \(\mathbb {P}\in \mathcal {P}\) (which are usually not Markovian laws) and checking the consistency condition. Another advantage is that we work with the raw filtration instead of enlarging it.
The recent paper (Hu and Ji 2016) also studies a stochastic maximum principle for stochastic recursive optimal control problems in the G-setting, but still using the worst-case approach. The authors use the Minimax Theorem to obtain the variational inequality under a reference probability P^{∗}: the stochastic maximum principle then holds P^{∗}-a.s., which is the main difference with respect to our approach. They prove that this stochastic maximum principle is also a sufficient condition under some convexity assumptions, but our control problem is different from the one in (Hu and Ji 2016) and considers delayed information.
However, this is not the only problem with the standard robust optimality. Since the optimal probability measure \(\hat {\mathbb {P}}\) is mutually singular to any other measure \(\mathbb {Q}\in \mathcal {P}\), we can modify the control \(\hat {u}\) outside the support of \(\hat {\mathbb {P}}\) without losing the (classical) robust optimality. Since, as noted above, the true probability will usually differ from \(\hat {\mathbb {P}}\), the classical robust optimal control may make little sense under \(\mathbb {Q}\). Moreover, in the standard robust optimality, the measure \(\hat {\mathbb {P}}\) is chosen statically and does not change with time. As a result, not all available information is taken into consideration, as shown for the Merton problem with logarithmic utility in Section 5.
The paper is structured as follows. In Section 2, we give a quick overview of the G-framework. Section 3 is devoted to a sufficient maximum principle in the partial-information case. In Section 4, we investigate the necessary maximum principle for the full-information case. In Section 5, we give four examples, including the Merton problem with logarithmic utility mentioned earlier and an LQ-problem. In Section 6, we provide a counterexample showing that it is not possible to relax the crucial assumption of the sufficient maximum principle without losing the strongly robust sense of optimality.
2 Preliminaries
Let Ω be a given set and \(\mathcal {H}\) be a vector lattice of real functions defined on Ω, i.e., a linear space containing the constant function 1 and such that \(X\in \mathcal {H}\) implies \(|X|\in \mathcal {H}\). We will treat the elements of \(\mathcal {H}\) as random variables.
Definition 1
A sublinear expectation is a functional \(\mathbb {E}\colon \mathcal {H}\to \mathbb {R}\) satisfying the following properties:
- 1.
Monotonicity: If \(X,Y\in \mathcal {H}\) and X≥Y, then \(\mathbb {E} [X]\geq \mathbb {E} [Y]\).
- 2.
Constant preserving: For all \(c\in \mathbb {R}\), we have \(\mathbb {E} [c]=c\).
- 3.
Sub-additivity: For all \(X,Y\in \mathcal {H}\), we have \(\mathbb {E} [X] - \mathbb {E}[Y]\leq \mathbb {E} [X-Y]\).
- 4.
Positive homogeneity: For all \(X\in \mathcal {H}\), we have \(\mathbb {E} [\lambda X]=\lambda \mathbb {E} [X]\) for all λ≥0.
The triple \((\Omega,\mathcal {H},\mathbb {E})\) is called a sublinear expectation space.
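A sublinear expectation can be represented as a supremum of linear expectations over a family of measures (cf. Theorem 1 below). The following toy Python sketch, with an illustrative finite family of models not taken from the paper, verifies the four defining properties numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy sublinear expectation: E[X] = sup over a finite family of
# probability weights (each row of `models` is one candidate measure).
n_outcomes = 6
models = rng.dirichlet(np.ones(n_outcomes), size=4)

def E(X):
    """Sublinear expectation: supremum of linear expectations over models."""
    return max(float(p @ X) for p in models)

X = rng.standard_normal(n_outcomes)
Y = rng.standard_normal(n_outcomes)

assert E(np.maximum(X, Y)) >= E(Y) - 1e-12              # monotonicity (X∨Y ≥ Y)
assert abs(E(np.full(n_outcomes, 3.0)) - 3.0) < 1e-9    # constant preserving
assert E(X) - E(Y) <= E(X - Y) + 1e-12                  # sub-additivity
assert abs(E(2.5 * X) - 2.5 * E(X)) < 1e-9              # positive homogeneity
print("all four properties hold")
```

Each property follows from the corresponding property of the linear expectations p·X together with elementary facts about suprema, which is what the assertions check on a random example.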
Definition 2
Definition 3
where Θ is a non-empty bounded and closed subset of \(\mathbb {R}^{d\times d}\).
Definition 4
A process (B_{t})_{t≥0} on a sublinear expectation space is called a G-Brownian motion if:
- 1.
B_{0}=0;
- 2.
\(B_{t}\in \mathcal {H}\) for each t≥0;
- 3.
For each t,s≥0 the increment B_{t+s}−B_{t} is independent of \((B_{t_{1}},\ldots, B_{t_{n}})\) for each \(n\in \mathbb {N}\) and 0≤t_{1}<…<t_{n}≤t. Moreover, (B_{t+s}−B_{t})s^{−1/2} is G-normally distributed.
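As an informal numerical illustration (with hypothetical parameters), one can approximate the G-expectation of a function of \(B_{t}\) by taking a supremum of Monte Carlo estimates over constant-volatility scenarios. This gives a lower bound in general, and the exact value for convex or concave test functions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative volatility bounds; not values from the paper.
sigma_low, sigma_high = 0.5, 1.5

def g_expectation_lower_bound(phi, t=1.0, n_sigma=11, n_mc=200_000):
    """Monte Carlo lower bound for the G-expectation of phi(B_t).

    The sup is taken only over *constant* volatility scenarios, which
    underestimates the G-expectation in general but is exact when phi
    is convex or concave.
    """
    Z = rng.standard_normal(n_mc)
    best = -np.inf
    for s in np.linspace(sigma_low, sigma_high, n_sigma):
        best = max(best, float(np.mean(phi(s * np.sqrt(t) * Z))))
    return best

# For the convex test function phi(x) = x^2 the sup is attained at the
# upper bound, so the estimate should be close to sigma_high**2 = 2.25.
est = g_expectation_lower_bound(lambda x: x ** 2)
print(est)
```

This is exactly the sense in which the G-normal distribution mixes over an interval of variances rather than fixing a single one.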
Definition 5
We introduce the following spaces and notation:
- 1.
Ω_{t}:={ω_{·∧t}:ω∈Ω}, \(\mathcal {F}_{t}:=\mathcal {B}(\Omega _{t})\),
- 2.
L^{0}(Ω): the space of all \(\mathcal {B}(\Omega)\)-measurable real functions,
- 3.
L^{0}(Ω_{t}): the space of all \(\mathcal {B}(\Omega _{t})\)-measurable real functions,
- 4.
Lip(Ω_{t}):=Lip(Ω)∩L^{0}(Ω_{t}), \(L^{p}_{G}(\Omega _{t}):=L^{p}_{G}(\Omega)\cap L^{0}(\Omega _{t})\),
- 5.\(M^{2}_{G}(0,T)\) is the completion of the set of elementary processes of the form$$ \eta(s)=\sum_{i=1}^{n-1}\xi_{i}\mathbbm{1}_{[t_{i},t_{i+1})}(s), $$where 0≤t_{1}<t_{2}<…<t_{n}≤T, n≥1 and \(\xi _{i}\in Lip(\Omega _{t_{i}})\). The completion is taken under the norm$$ \|\eta\|^{2}_{M^{2}_{G}(0,T)}:=\hat{\mathbb{E}}\left[\int_{0}^{T}|\eta(t)|^{2}dt\right]. $$
Definition 6
Similarly to the G-expectation, the conditional G-expectation can be extended to a sublinear operator \(\hat {\mathbb {E}}[.|\mathcal {F}_{t}]\colon L^{p}_{G}(\Omega)\to L^{p}_{G}(\Omega _{t})\) by a continuity argument. For more properties of the conditional G-expectation, see (Peng 2010).
The (conditional) G-expectation plays a crucial role in the stochastic calculus for G-Brownian motion. In (Denis et al. 2011), it was shown that the analysis of the G-expectation can be embedded in the theory of upper expectations and capacities.
Theorem 1
For convenience, we always consider only a Brownian motion on the canonical space Ω with the Wiener measure \(\mathbb {P}_{0}\). Similarly, an analogous representation holds for the G-conditional expectation.
Proposition 1
Definition 7
- 1.
A set A is said to be polar if c(A)=0. Let \(\mathcal {N}\) be the collection of all polar sets. A property is said to hold quasi-surely (abbreviated as q.s.) if it holds outside a polar set.
- 2.
We say that a random variable Y is a version of X if X=Y q.s.
- 3.
A random variable X is said to be quasi-continuous (q.c. for short) if for every ε>0 there exists an open set O such that c(O)<ε and \(X|_{O^{c}}\) is continuous.
We have the following characterization of spaces \(L^{p}_{G}(\Omega)\). This characterization shows that \(L^{p}_{G}(\Omega)\) is a rather small space.
Theorem 2
G-expectation turns out to be a good framework to develop stochastic calculus of the Itô type. We can also use G-SDEs and a version of the backward SDEs. As backward equations are a key tool to consider the maximum principle, we now give a short introduction to G-BSDEs and their properties (for simplicity in a one-dimensional case).
where K is a non-increasing G-martingale starting at 0. In (Hu et al. 2014c), the existence and uniqueness of such a G-BSDE are proved under some Lipschitz and regularity conditions on the driver.
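For the reader's convenience, we recall the standard form of a one-dimensional G-BSDE as in (Hu et al. 2014c); this is a sketch of the standard formulation written with a single driver for the quadratic-variation term, not necessarily the paper's exact display:

```latex
Y_{t} = \xi + \int_{t}^{T} f(s, Y_{s}, Z_{s})\,ds
      + \int_{t}^{T} g(s, Y_{s}, Z_{s})\,d\langle B\rangle_{s}
      - \int_{t}^{T} Z_{s}\,dB_{s} - (K_{T} - K_{t}),
```

where ξ is the terminal condition and K is the non-increasing G-martingale with K_{0}=0 mentioned above.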
3 A sufficient maximum principle
We assume that the coefficients b, μ, σ are Lipschitz continuous w.r.t. the space variable uniformly in (t,u). Moreover, if the coefficients are not deterministic, they must belong to the space \(M^{2}_{G}(0,T)\) for each \((x,u)\in \mathbb {R}\times U\).
Note that the solution of such a G-BSDE exists thanks to the assumption on the functions f and g and on the definition of the admissible control (see (Hu et al. 2014c) for details).
Theorem 3
for all t q.s. Then \(\hat {u}=u\) is a strongly robust optimal control for the problem (10).
Proof
Remark 1
4 A necessary maximum principle for the full-information case
- A1.For all \(u,\beta \in \mathcal {A}\) with β bounded, there exists δ>0 such that$$u+a\beta\in \mathcal{A} \quad \text{for all} \ a\in (-\delta, \delta). $$
- A2.For all t,h such that 0≤t<t+h≤T and all bounded random variables \(\alpha \in L^{1}_{G}(\Omega _{t})\)^{3}, the control$$\beta(s):= \alpha \mathbbm{1}_{[t,t+h]} (s) $$belongs to \(\mathcal {A}\).
Remark 2
Before we give the necessary maximum principle for this problem, we will state the following remark showing that it is sufficient to consider just a control which is optimal for all \(\mathbb {P}\in \mathcal {P}_{1}\).
Remark 3
Lemma 1
Proof
Using the lemma, we can easily get the following necessary maximum principle.
Theorem 4
Proof
As we mentioned at the beginning of this section, the assumption on the process \(\hat K\) is a significant restriction. However, if we limit our considerations to Merton-type problems, we are able to prove the necessary maximum principle without this assumption.
Theorem 5
- 1.
A1, A2 hold,
- 2.
b≡0, μ(t,x,u)=ψ(x)l(u)m(t) and σ(t,x,u)=ψ(x)h(u)s(t) for \(\psi, l, h \in \mathcal {C}^{1}(\mathbb {R})\) and some bounded processes m and s such that, for each t∈[0,T], m(t) and s(t) are quasi-continuous. Moreover, let c(s(t)=0)=0 for all t∈[0,T],
- 3.
f≡0,
- 4.
X(0)=x≠0.
Proof
5 Examples
i.e., with quadratic variation 〈B〉(t) lying within the bounds \(\underline {\sigma }^{2}t\) and t.
5.1 Example I
Note that, by the proof, we could choose a general utility function instead of the logarithmic one without losing the existence of a strongly robust optimal control.
5.2 Example II
5.3 Example III
Finally, note that the classical robust optimal control for this problem would be \(u^{*}(t)=m(t)/\underline {\sigma }^{2}\). It is clear that this control ignores the flow of information about the volatility path and simply sticks to its worst-case scenario assumption. It makes sense to assume the worst-case scenario at time 0, but later one should rather update one's view of the past volatility, which is not done by the classical robust optimal controls.
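A back-of-the-envelope Python comparison (with hypothetical numbers) makes this concrete: under a scenario where the realized volatility equals \(\bar\sigma\), the static control \(m(t)/\underline{\sigma}^{2}\) can even produce a negative expected log-growth rate, while a control that uses the observed volatility does not:

```python
# Illustrative comparison for log utility with constant drift m: the
# expected log-growth rate of wealth when investing a constant fraction u
# under realized volatility sigma is u*m - 0.5*u**2*sigma**2.
# All numbers below are hypothetical, not taken from the paper.
m = 0.10
sigma_low, sigma_high = 0.5, 1.5

def log_growth_rate(u, sigma):
    return u * m - 0.5 * u ** 2 * sigma ** 2

# Classical robust control: sticks to the fixed volatility level sigma_low.
u_robust = m / sigma_low ** 2
# Scenario-adapted control: uses the observed realized volatility.
u_adapted = m / sigma_high ** 2

# Under a scenario where the realized volatility is actually sigma_high:
print(log_growth_rate(u_robust, sigma_high))   # ~ -0.14 (negative growth)
print(log_growth_rate(u_adapted, sigma_high))  # ~ +0.0022 (positive growth)
```

The static control is badly mis-sized once the realized volatility leaves the level it was designed for, which is the informational loss discussed above.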
5.4 Example IV
Here, F,G,μ,σ,Q,R are continuous deterministic functions on [0,T], Q(t)>0, R(t)>0 and L>0 is a constant.
We want to find \(\hat {u}\in \mathcal {A}\) (as described in Section 3) which maximizes \(J^{\mathbb {P}}(u)\) over all \(u\in \mathcal {A}\) for all \(\mathbb {P}\in \mathcal {P}\).
where \(\hat p(t)\) refers to the solution of (38) when \(u=\hat {u}\) is applied to the BSDE.
for some deterministic functions \(S, Z\in \mathcal {C}^{1}(\mathbb {R}_{+})\) to be determined.
is a strongly robust optimal control with S and Z given by (42) and (43), respectively.
6 Counterexample: the Merton problem with the power utility
In this example, we consider the Merton problem with the power utility and show that in general we cannot drop the assumption \(\hat K\equiv 0\) without losing the strong sense of optimality. First, we solve the classical robust utility maximization problem, and then we prove that the optimal control for that problem is usually optimal only in a weaker sense, i.e., there exists a probability measure \(\mathbb P\in \mathcal {P}\) such that the control is not optimal under \(\mathbb P\), even though the control satisfies all the conditions of the sufficient maximum principle with the exception of \(\hat K\equiv 0\).
The last equalities are a consequence of (45) and of the fact that the integrand is deterministic and that B^{u} and \(B^{\hat {u}}\) are G-Brownian motions under \(\mathbb {E}^{u}\) and \(\mathbb {E}^{\hat {u}}\), respectively. Eq. 46 shows then that \(\hat {u}\) is an optimal control for this weaker optimization problem.
To summarize the example so far: we have shown that \(\hat {u}\) is optimal in a weaker sense. We also showed that it satisfies the assumption for the necessary maximum principle for strongly robust optimality and that all assumptions of the sufficient maximum principle are satisfied, with the exception of the vanishing of process \(\hat K\). Now, we prove that \(\hat {u}\) is not optimal in the stronger sense, hence the assumption on the process \(\hat K\) is really crucial for our result and cannot be dropped.
To conclude, \(\hat {u}\) is not optimal for every probability measure \(\mathbb {P}\in \mathcal {P}\) even though it is a maximizer of the Hamiltonian related to \(\hat {u}\). This example shows that the new strong notion of optimality is rather restricted and we may expect it only in very special cases when the process \(\hat K\) vanishes.
Declarations
Acknowledgments
The authors would like to thank three anonymous referees and the editor for a careful reading of the paper, and the Journal for providing English editing of the paper.
Funding
The research leading to these results received funding from the European Research Council under the European Community’s Seventh Framework Program (FP7/2007-2013) / ERC grant agreement 228087.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Competing Interests
The authors declare that they have no competing interests.
Authors’ contributions
The four authors worked together on the manuscript and approved its final version.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Denis, L., Hu, M., Peng, S.: Function spaces and capacity related to a sublinear expectation: application to G-Brownian motion paths. Potential Anal. 34, 139–161 (2011)
- Hu, M., Ji, S.: Stochastic maximum principle for stochastic recursive optimal control problem under volatility ambiguity. SIAM J. Control Optim. 54(2), 918–945 (2016)
- Hu, M., Ji, S., Peng, S.: Comparison theorem, Feynman-Kac formula and Girsanov transformation for BSDEs driven by G-Brownian motion. Stoch. Process. Appl. 124, 1170–1195 (2014a)
- Hu, M., Ji, S., Yang, S.: A stochastic recursive optimal control problem under the G-expectation framework. Appl. Math. Optim. 70, 253–278 (2014b)
- Hu, M., Ji, S., Peng, S., Song, Y.: Backward stochastic differential equations driven by G-Brownian motion. Stoch. Process. Appl. 124, 759–784 (2014c)
- Matoussi, A., Possamai, D., Zhou, C.: Robust utility maximization in non-dominated models with 2BSDEs. Math. Financ. (2013). https://doi.org/10.1111/mafi.12031
- Peng, S.: G-expectation, G-Brownian motion and related stochastic calculus of Itô type. Stoch. Anal. Appl. 2, 541–567 (2007)
- Peng, S.: Nonlinear expectations and stochastic calculus under uncertainty. Preprint, arXiv:1002.4546v1 (2010)
- Peng, S., Song, Y., Zhang, J.: A complete representation theorem for G-martingales. Stochastics 86, 609–631 (2014)
- Soner, M., Touzi, N., Zhang, J.: Martingale representation theorem for the G-expectation. Stoch. Anal. Appl. 121, 265–287 (2011a)
- Soner, M., Touzi, N., Zhang, J.: Quasi-sure stochastic analysis through aggregation. Electron. J. Probab. 16, 1844–1879 (2011b)
- Soner, M., Touzi, N., Zhang, J.: Wellposedness of second order backward SDEs. Probab. Theory Relat. Fields 153, 149–190 (2011c)
- Song, Y.: Some properties on G-evaluation and its applications to G-martingale decomposition. Sci. China 54, 287–300 (2011)
- Song, Y.: Uniqueness of the representation for G-martingales with finite variation. Electron. J. Probab. 17, 1–15 (2012)