
Fully nonlinear stochastic and rough PDEs: Classical and viscosity solutions


We study fully nonlinear second-order (forward) stochastic PDEs. They can also be viewed as forward path-dependent PDEs and will be treated as rough PDEs under a unified framework. For the most general fully nonlinear case, we develop a local theory of classical solutions and then define viscosity solutions through smooth test functions. Our notion of viscosity solution is equivalent to the alternative one defined via semi-jets. Next, we prove basic properties such as consistency, stability, and a partial comparison principle in the general setting. If the diffusion coefficient is semilinear (i.e., linear in the gradient of the solution and nonlinear in the solution; the drift can still be fully nonlinear), we establish a complete theory, including global existence and a comparison principle.


We study the fully nonlinear second-order SPDE

$$ d u(t,x,{\omega}) = f(t,x,{\omega}, u,{\partial}_{x}u,{\partial}^{2}_{{xx}}u)\,dt+ g(t,x, {\omega}, u, {\partial}_{x}u) \circ dB_{t} $$

with initial condition u(0,x,ω)=u0(x), where \((t,x)\in [0,\infty)\times {\mathbb {R}}\), B is a standard Brownian motion defined on a probability space \((\Omega, \mathcal {F}, {\mathbb {P}})\), f and g are \({\mathbb {F}}^{B}\)-progressively measurable random fields, and ∘ denotes Stratonovich integration.

Our investigation builds on several aspects of the theory of pathwise solutions to SPDEs developed over the past two decades. These include: the theory of stochastic viscosity solutions, initiated by Lions and Souganidis (1998a; 1998b; 2000a; 2000b) and also studied by Buckdahn and Ma (2001a; 2001b; 2002); path-dependent PDEs (PPDEs) studied by Buckdahn et al. (2015), based on the notion of path derivatives in the spirit of Dupire (2019); and the rough-PDE aspect studied by Keller and Zhang (2016), in terms of rough path theory (initiated by Lyons (1998)) and the connection between Gubinelli’s derivatives for “controlled rough paths” (2004) and Dupire’s path derivatives. The main purpose of this paper is to integrate all these notions into a unified framework, in which we investigate the most general well-posedness results for fully nonlinear SPDEs of the type (1.1).

A brief history

SPDE (1.1), especially when both f and g are linear or semilinear, has been studied extensively in the literature. We refer to the well-known reference Rozovskii (1990) for a fairly complete theory of linear SPDEs and to Krylov (1999) for an Lp-theory of linear and some semilinear cases. When SPDE (1.1) is fully nonlinear, as often encountered in applications such as stochastic control theory and many other fields (cf. the lecture notes of Souganidis (2019), and Davis and Burstein (1992), Buckdahn and Ma (2007), and Diehl et al. (2017) for applications in pathwise stochastic control problems), the situation is quite different. In fact, in such a case one can hardly expect (global) “classical” solutions, even in the Sobolev sense. Other forms of solutions have to come into play.

In a series of works, Lions and Souganidis (1998a; 1998b; 2000a; 2000b) initiated the notion of “stochastic viscosity solutions” for fully nonlinear SPDEs, especially in the case g=g(∂xu), along the following two approaches. One is to use the method of stochastic characteristics (cf. Kunita (1997)) to remove the stochastic integrals of SPDE (1.1) and to define the (stochastic) viscosity solution by considering test functions along the characteristics (whence randomized) for the transformed ω-wise (deterministic) PDEs. The other approach is to approximate the Brownian sample paths by smooth functions and define the (weak) solution as the limit, whenever it exists, of the solutions to the approximating equations, which are standard PDEs. These two approaches have developed into the aforementioned directions in finding pathwise solutions of SPDE (1.1), which we now describe briefly.

Soon after the seminal works (Lions and Souganidis 1998a; 1998b), Buckdahn and Ma (2001a; 2001b) proposed a method in the spirit of the Doss–Sussmann transformation, a special case along the lines of stochastic characteristics, to define the stochastic viscosity solution in the case when g=g(t,x,u) is independent of ∂xu. The idea was developed further in combination with stochastic Taylor expansions in Buckdahn and Ma (2002) and Buckdahn et al. (2011), and with second-order BSDEs by Matoussi et al. (2018), again for g independent of ∂xu. It is worth noting that the dependence of g on the variables x, u, and ∂xu is a non-trivial issue. In fact, the cases g=g(t,x,u) and g=g(∂xu) correspond to two simplified systems of stochastic characteristics in the sense of Kunita (1997). In both cases, although neither includes the other, the stochastic characteristics can be shown to have global solutions, so the approaches can be validated. In the general nonlinear case when g depends on all variables, however, the stochastic characteristics exist only locally. This, together with other technical difficulties, due largely to the fact that the transformed ω-wise PDE along the characteristics is often over-complicated and beyond the standard PDE literature, seems to have blocked the development of a more complete theory (especially the comparison principle) along this line, to the best of our knowledge.

The approach of smooth approximation, on the other hand, gained a strong push as rough path theory started to take shape in the early 2000s. Many works emerged along this line, including Caruana et al. (2011), Friz and Oberhauser (2011; 2014), Diehl and Friz (2012), Diehl et al. (2014; 2015; 2017), Gubinelli et al. (2014), as well as Friz et al. (2017) and Seeger (2018a; 2018b; 2020), which use certain extension (or solution) operators (see also Souganidis (2019) for more details). Again, all these works consider only the cases where g is either linear in u and ∂xu, or g=g(x,u), or g=g(x,∂xu). It has been noted, however, that with this approach the uniqueness often comes for free, provided the uniqueness for the approximating standard PDEs is established, and the main challenge is mostly the existence of the solution (i.e., the existence of the limit).

We remark that the two approaches are often combined; e.g., path approximation was also used in the first approach, and stochastic characteristics are often used to prove the existence of the limit. However, to the best of our knowledge, the case when g depends on both u and ∂xu in a fully nonlinear manner still seems to be open, even in the local sense.

The main contributions of this work

The main purpose of this paper is to establish the viscosity theory for general fully nonlinear parabolic SPDEs and path-dependent PDEs through a unified framework based on combined rough path and Dupire-style pathwise analysis, as well as the idea of stochastic characteristics. We consider the most general case where the diffusion coefficient g is a nonlinear function of all variables (t,ω,x,u,∂xu). We first obtain the existence of local (in time) classical solutions when all the coefficients are sufficiently smooth. We remark that these results, although not surprising, seem to be new in the literature, to the best of our knowledge. More importantly, assuming that g is smooth enough, we establish most of the important issues in the viscosity theory. These include: 1) consistency (i.e., smooth viscosity solutions must be classical solutions); 2) the equivalence of the notions of stochastic viscosity solutions defined via test functions and via semi-jets; 3) stability; and 4) a partial comparison principle (between a viscosity semi-solution and a classical semi-solution). Finally, in the case when g is linear in ∂xu (but nonlinear in u, while f can be nonlinear in (u,∂xu,∂2xxu)), we prove the full comparison principle for viscosity solutions and thus establish the complete theory.

To be more precise, let us briefly describe alternative forms of SPDEs that are equivalent to the underlying one (1.1) in some specific pathwise senses. First, note that Buckdahn et al. (2015) established the connection between (1.1) and the following path-dependent PDE (PPDE):

$$ {\partial}^{{\omega}}_{t} u(t,x,{\omega}) = f(t,x,{\omega}, u,{\partial}_{x}u,{\partial}^{2}_{{xx}}u),~ {\partial}_{{\omega}} u(t,x,{\omega}) = g(t,x, {\omega}, u, {\partial}_{x}u). $$

Here, \({\partial }^{{\omega }}_{t}\) and ∂ω are the temporal and spatial path derivatives in the sense of Dupire (2019). On the other hand, Keller and Zhang (2016) showed that the PPDE (1.2) can also be viewed as a rough PDE (RPDE):

$$ d u(t,x,{\omega}) = f(t,x,{\omega}, u,{\partial}_{x}u,{\partial}^{2}_{{xx}}u)\,dt +g(t,x, {\omega}, u, {\partial}_{x}u) \,d{\omega}_{t}, $$

where ω is a geometric rough path corresponding to Stratonovich integration. We note that the connection between SPDE (1.1) and RPDE (1.3) has been known in the rough path literature; see, e.g., Friz and Hairer (2014).

Bearing these relations in mind, we shall still define the (stochastic) viscosity solutions via the method of characteristics. More precisely, we utilize PPDE (1.2) by requiring that smooth test functions φ satisfy

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} {\varphi}(t,x) = g(t,x,{\varphi},{\partial}_{x}{\varphi}). \end{array} $$

It should be noted that the involvement of g in the definition of test functions is not new (see, e.g., the notion of “g-jets” and the g-dependence of “path derivatives” in Buckdahn and Ma (2001b; 2002) and Buckdahn et al. (2015)). The rough-path language then enables us to define viscosity solutions directly for RPDE (1.3), as well as PPDE (1.2), in a completely local manner in all variables (t,x,ω). We note that, barring some technical conditions as well as differences in language, our definition is very similar or essentially equivalent to the ones in, say, Lions and Souganidis (1998a; 2000a); and when f does not depend on \({\partial }^{2}_{{xx}} u\) (i.e., in the case of first-order RPDEs), our definition is essentially the same as the one in Gubinelli et al. (2014). Furthermore, we show that our definition is equivalent to an alternative definition through semi-jets (such an equivalence was left open in Gubinelli et al. (2014)). Moreover, by using pathwise characteristics, we show that RPDE (1.3) can be transformed into a standard PDE (with parameter ω) without the dωt term. When g is semilinear (i.e., linear in ∂xu), our definition is also equivalent to the viscosity solution of the transformed PDE in the standard sense of Crandall et al. (1992), as expected. In the general case when g is nonlinear in all of (x,u,∂xu), the issue becomes quite subtle due to the highly convoluted system of characteristics and an intrinsic singularity of the transformed PDE, and thus we are not able to obtain the desired equivalence for viscosity solutions. In fact, at this point it is not even clear to us how to define a notion of viscosity solution for the transformed PDE.

Besides clarifying the aforementioned connections among different notions, the next main contribution of this paper is to establish some important properties of viscosity solutions, including consistency, stability, and a partial comparison principle. Our arguments follow some of our previous works on backward PPDEs (e.g., Ekren et al. (2014) and Ekren et al. (2016a; 2016b)). However, unlike the backward case, the additional requirement (1.4) leads to some extra subtleties when small perturbations on the test function φ are needed, especially in the case of general g. Some arguments for higher-order pathwise Taylor expansions along the lines of Buckdahn et al. (2015) prove to be helpful.

As in all studies involving viscosity solutions, the most challenging part is the comparison principle. The main difficulty, especially along the lines of stochastic characteristics, is the lack of a Lipschitz property in the variable u for the coefficients of the transformed ω-wise PDE, except in some trivial linear cases. Our plan of attack is the following. We first establish a comparison principle on small time intervals. Then we extend the comparison principle to an arbitrary duration by combining uniform a priori estimates for PDEs with BMO estimates inspired by backward SDEs with quadratic growth. Such a “cocktail” approach enables us to prove the comparison principle in the general fully nonlinear case under an extra condition; see (6.13). In the case when g is semilinear, however, even when f is fully nonlinear (e.g., of Hamilton–Jacobi–Bellman type), we verify the extra condition (6.13) and establish a complete theory including existence and a comparison principle. Thereby, we extend the result of Diehl and Friz (2012), which follows the second approach proposed by Lions and Souganidis (1998a; 1998b) and studies the case when both g and f are semilinear. However, the verification of (6.13) in general cases is a challenging issue and requires further investigation.

Another contribution of this paper is the local (in time) well-posedness of classical solutions in the general fully nonlinear case. We first establish the equivalence between local classical solutions of RPDE (1.3) and those of the corresponding transformed PDE. Next, we provide sufficient conditions for the existence of local classical solutions to this PDE, similar to those of Da Prato and Tubaro (1996) for the case when g is linear in u and ∂xu. To the best of our knowledge, these results for the general fully nonlinear case are new. We emphasize again that our PDE involves serious singularity issues, so the local existence interval depends on the regularity of the classical solution (which in turn depends on the regularity of u0). Consequently, these results are only valid for classical solutions.


As the first step towards a unified treatment of stochastic viscosity solutions for fully nonlinear SPDEs, in this paper we still need some extra conditions on the coefficients f and g. For example, even in the case when g is semilinear, we need to assume that f is uniformly non-degenerate and convex in ∂2xxu. It would be interesting to remove either or both of these constraints on f. Also, as we point out in Remark 7.5, in the general fully nonlinear case the equivalence between our rough PDE and the associated deterministic PDE in the viscosity sense is by no means clear. Consequently, a direct approach to the comparison principle for RPDE (3.6), which is currently lacking, would help greatly. It would also be interesting to investigate the alternative approach of rough path approximations, as in Caruana et al. (2011) and many other aforementioned papers, in the case when g is fully nonlinear. We hope to investigate some of these issues in future publications.

We would also like to mention that, although the SPDEs in Buckdahn and Ma (2007), Davis and Burstein (1992), and Diehl et al. (2017) for pathwise stochastic control problems appear with terminal conditions, they fall into our realm of forward SPDEs with initial conditions by a simple time change (which is particularly convenient here since our rough path integrals correspond to Stratonovich integrals). However, many SPDEs arising in stochastic control theory with random coefficients and in mathematical finance (see, e.g., Peng (1992) and Musiela and Zariphopoulou (2010)) are different in nature and are not covered by this paper. The main difference lies in the time direction of the adaptedness of the solution with respect to the random noise(s), as illustrated by Pardoux and Peng (1994).

Finally, for notational simplicity, throughout the paper we consider the SPDEs on a finite time horizon [0,T] and in a one-dimensional setting. Our results can easily be extended to the infinite horizon in most cases, and the extension to multidimensional rough paths, albeit technical, is more or less standard. We shall provide further remarks where the extension to the multidimensional case requires extra care; for example, Proposition 4.1 relies on results for multidimensional RDEs. Lastly, some of the results in this paper involve higher-order derivatives and related norms. For simplicity, we use norms involving all partial derivatives up to the same order; our estimates, although sufficient for our purposes, will often contain a generic constant and are not necessarily sharp.

This paper is organized as follows. In Section 2, we review the basic theory of rough paths and rough differential equations (RDEs). Furthermore, we introduce our function spaces and the crucial rough Taylor expansions. In Section 3, we set up the framework for SPDEs, RPDEs, and PPDEs. In Section 4, we introduce the crucial characteristic equations and transform our main object of study, the RPDE (3.6), into a PDE. We establish the equivalence of their local classical solutions and provide sufficient conditions for their existence. Sections 5 and 6 are devoted to viscosity solutions in the general case. In Section 7, we establish the complete viscosity theory in the case that g is semilinear. Finally, in the Appendix (Section 8), we provide the proofs of the results from Section 2 that go beyond the standard literature.

Preliminary results from rough path theory

We begin by briefly reviewing the framework for rough path theory that is used in this paper, mainly following Keller and Zhang (2016) (see Friz and Hairer (2014) and the references therein for the general theory).

To this purpose, we introduce some general notation first. For normed spaces E and V, put

$$\begin{array}{@{}rcl@{}} \mathbb{L}^{\infty}(E; V):=\big\{ u: E\to V:\, \|u\|_{\infty} := \sup_{x\in E} \|u(x)\|_{V}<\infty\big\}. \end{array} $$

When \(V={\mathbb {R}}\), we omit V and just write \(\mathbb {L}^{\infty }(E)\). For a constant α>0, set

$$\begin{array}{@{}rcl@{}} C^{{\alpha}}(E; V):= \big\{u\in \mathbb{L}^{\infty}(E; V): [u]_{{\alpha}} :=\sup_{x, y\in E, x\neq y} {\|u(x)-u(y)\|_{V} \over \|x-y\|_{E}^{{\alpha}} }<\infty\big\}. \end{array} $$

Given functions \(u: [0, T]\to {\mathbb {R}}\) and \(\underline {u}: [0, T]^{2}\to {\mathbb {R}}\), we write the time variable as a subscript, i.e., ut=u(t) and \(\underline {u}_{s,t} = \underline {u}(s, t)\), and we define

$$ u_{s,t} := u_{t} - u_{s},~ s, t\in [0, T],{\qquad} [\underline{u}]_{{\alpha}} := \sup\limits_{s, t\in [0, T], s\neq t} {|\underline{u}(s,t)| \slash |s-t|^{{\alpha}}}. $$
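As a quick numerical illustration (this sketch is ours, not part of the paper's development; the helper name `holder_seminorm` is an assumption), the seminorm \([u]_{{\alpha }}\) can be approximated from below on a finite grid by maximizing the difference quotient over all pairs of grid points:

```python
import itertools

def holder_seminorm(xs, us, beta):
    """Approximate [u]_beta = sup_{x != y} |u(x) - u(y)| / |x - y|^beta
    by maximizing the difference quotient over all pairs of grid points."""
    return max(abs(ua - ub) / abs(xa - xb) ** beta
               for (xa, ua), (xb, ub) in itertools.combinations(list(zip(xs, us)), 2))

# u(x) = sqrt(x) is Hölder-1/2 on [0, 1] with [u]_{1/2} = 1,
# the supremum being attained against the grid point x = 0.
xs = [i / 100 for i in range(101)]
us = [x ** 0.5 for x in xs]
print(holder_seminorm(xs, us, 0.5))  # -> 1.0
```

On a refining grid this discrete supremum increases to the true seminorm for continuous u.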

Moreover, we shall use C to denote a generic constant in various estimates, which will typically depend on T and possibly on other parameters as well. Furthermore, we define the standard Hölder spaces and parabolic Hölder spaces (cf. Lunardi (1995, Chapter 5)): Given \(k\in {\mathbb {N}}_{0}\) and β∈(0,1], set

$$\begin{array}{@{}rcl@{}} C^{k+\beta}_{b}({\mathbb{R}}) &:=&\big\{ u:{\mathbb{R}}\to{\mathbb{R}}:\, \| u\|_{C^{k+\beta}_{b}({\mathbb{R}})}<\infty\big\},\\ C^{\beta}_{b}({ [0,T]\times{\mathbb{R}}})&:=&\big\{u\in C^{0}({ [0,T]\times{\mathbb{R}}}):\,\| u\|_{C^{\beta}_{b}({ [0,T]\times{\mathbb{R}}})} <\infty\big\},\\ C^{2+\beta}_{b}({ [0,T]\times{\mathbb{R}}})&:=&\big\{u\in C^{1,2}({ [0,T]\times{\mathbb{R}}}): \| u\|_{C^{2+\beta}_{b}({ [0,T]\times{\mathbb{R}}})} <\infty \big\}, \end{array} $$


where

$$\begin{aligned} \| u\|_{C^{k+\beta}_{b}({\mathbb{R}})}&:={{\sum}_{j=0}^{k}} \| \partial^{j}_{x} u\|_{\infty}+ [\partial^{k}_{x} u]_{\beta},\\ \| u\|_{C^{\beta}_{b}({ [0,T]\times{\mathbb{R}}})}&:= \| u\|_{\infty}+ \sup_{t\in [0,T]} [u(t,\cdot)]_{\beta}+ \sup_{x\in{\mathbb{R}}} [u(\cdot,x)]_{\beta/2},\\ \| u\|_{C^{2+\beta}_{b}({ [0,T]\times{\mathbb{R}}})}&:= {{\sum}_{j=0}^{1}} \| {\partial}^{j}_{x} u\|_{\infty}+ \| {\partial}^{2}_{{xx}} u \|_{C^{\beta}_{b}({ [0,T]\times{\mathbb{R}}})}+ \| {\partial}_{t} u\|_{C^{{\beta}}_{b}({ [0,T]\times{\mathbb{R}}})}. \end{aligned} $$

Rough path differentiation and integration

Rough path theory makes it possible to integrate with respect to non-smooth functions (“rough paths”) such as typical sample paths of Brownian motions and fractional Brownian motions. In this paper, we use Hölder continuous functions as integrators. To this end, we fix two parameters α∈(1/3,1/2] and β∈(0,1] satisfying

$$\begin{array}{@{}rcl@{}} {\alpha}(2+\beta) > 1. \end{array} $$

The parameter α denotes the Hölder exponent of our integrators. The parameter β will take the role of the exponent in the usual Hölder spaces Ck+β. Later, we introduce modified Hölder type spaces suitable for our theory.

To be more precise, a rough path in general consists of several components: the first stands for the integrator, whereas the additional ones stand for iterated integrals. Those additional components have to be given exogenously, and different choices lead to different integrals, e.g., those corresponding to the Itô and to the Stratonovich integral.

In our setting, the situation is relatively simple. We consider a rough path \(\hat {\omega } := ({\omega }, \underline {\omega })\) with only two components ω and \(\underline {\omega }\) that are required to satisfy the following conditions:

(i) ω∈Cα([0,T]) and \(\underline {\omega }_{s,t} := {(1/ 2)} |{\omega }_{s, t}|^{2}\);

(ii) \(\hat {\omega }\) is truly rough, i.e., there is a set A such that

$$\begin{array}{@{}rcl@{}} \overline{{\lim}_{t \downarrow s}} \frac{\lvert \omega_{s,t} \rvert} {\lvert t-s \rvert^{\alpha(1+\beta)}} =\infty\text{ for all } s\in A \text{ and } {A} \text{ is dense in } [0,T]. \end{array} $$

Remark 2.1

(i) The second component \(\underline {\omega }\) maps [0,T]2 to \({\mathbb {R}}\) with \([\underline {\omega }]_{2{\alpha }} <\infty \). Note that \(\underline {\omega }_{s,t}\) should not be understood as \(\underline {\omega }_{t} - \underline {\omega }_{s}\) as in (2.1).

(ii) For a general d-dimensional rough path, \(\underline {\omega }: [0, T]^{2}\to {\mathbb {R}}^{d\times d}\) has 2α-regularity in the sense that \([\underline {\omega }]_{2{\alpha }}<\infty \), and it satisfies Chen’s relation, i.e., \(\underline {\omega }_{s,t}- \underline {\omega }_{s,r}- \underline {\omega }_{r,t}= \omega _{s,r}\,\omega ^{\top }_{r,t}\), s,t,r∈[0,T], where ⊤ denotes the transpose. Moreover, \(\hat {\omega }\) is called a geometric rough path if \(\underline {\omega }_{s,t} +\underline {\omega }_{s,t}^{\top } = {\omega }_{s,t}\,{\omega }_{s,t}^{\top }\), s,t∈[0,T]. In our setting, \(({\omega },\underline {\omega })\) is a geometric rough path, and the related integration theory corresponds to Stratonovich integration.

(iii) In standard rough path theory, it is typically not required that \(\hat {\omega }\) is truly rough as defined in (2.3). But it is convenient for us because, under (2.3), the rough path derivatives we define next will be unique.
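In our one-dimensional setting, Chen’s relation from (ii) holds automatically for the second component chosen in (i): since \({\omega }_{s,t} = {\omega }_{s,r}+{\omega }_{r,t}\),

$$\begin{array}{@{}rcl@{}} \underline{\omega}_{s,t}- \underline{\omega}_{s,r}- \underline{\omega}_{r,t} = \frac{1}{2}\big[({\omega}_{s,r}+{\omega}_{r,t})^{2} - {\omega}_{s,r}^{2} - {\omega}_{r,t}^{2}\big] = {\omega}_{s,r}\,{\omega}_{r,t}, \end{array} $$

and the geometric property \(2\underline {\omega }_{s,t} = |{\omega }_{s,t}|^{2}\) holds by definition.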

Next, we introduce path derivatives with respect to our rough path. To this end, we define spaces of multi-indices

$${\mathcal{V}}_{n} := \{ 0, 1\}^{n},{\quad} \|\nu\|:= \sum_{i=1}^{n} [ 2\boldsymbol{1}_{\{\nu_{i}=0\}}+ \boldsymbol{1}_{\{\nu_{i}=1\}}]~\text{for}~ \nu = (\nu_{1},{\cdots}, \nu_{n})\in {\mathcal{V}}_{n}. $$

Definition 2.2

(a) Let u∈Cα([0,T]) and set \(C^{0}_{{\alpha }, \beta }([0, T]) := C^{{\alpha }\beta }([0, T])\).

(i) A first-order spatial derivative of u is a \({\partial }_{{\omega }} u \in C^{0}_{{\alpha },\beta }([0, T])\) that satisfies

$$\begin{array}{@{}rcl@{}} u_{s,t}=\partial_{\omega} u_{s}\,\omega_{s,t} + R^{1,u}_{s,t},{\quad} s,t\in [0, T], {\quad}\text{with}{\quad} [R^{1, u}]_{{\alpha}(1+\beta)} <\infty. \end{array} $$

(ii) Assume that ∂ωu∈Cα([0,T]) exists and has a derivative ∂ω∂ωu. Then a temporal derivative of u is a \({\partial }^{{\omega }}_{t} u \in C^{0}_{{\alpha },\beta }([0, T])\) that satisfies

$$\begin{array}{@{}rcl@{}} u_{s,t}&={\partial}^{{\omega}}_{t} u_{s}~ [t-s] + {\partial}_{{\omega}} u_{s}~ {\omega}_{s,t} + {\partial}_{{\omega}}{\partial}_{{\omega}} u_{s}~ \underline{\omega}_{s,t} + R^{2,u}_{s,t},\, s,t\in [0,T],\\ &\qquad\text{with}~ [R^{2, u}]_{{\alpha}(2+\beta)} <\infty. \end{array} $$

(iii) For \(\nu \in {\mathcal {V}}_{n}\), \({\mathcal {D}}_{\nu } u := {\partial }_{\nu _{1}} {\cdots } {\partial }_{\nu _{n}} u\), where \({\partial }_{0} := {\partial }^{{\omega }}_{t}\) and \({\partial }_{1}:={\partial }_{{\omega }}\).

(b) For k≥1, let \(C^{k}_{{\alpha },\beta }([0,T]):=\{ u\in C^{\alpha }([0,T]):\, {\mathcal {D}}_{\nu } u \text { exists } \forall \|\nu \|\le k\}.\)
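For instance, for ν=(0,1) we have

$$\begin{array}{@{}rcl@{}} {\mathcal{D}}_{\nu} u = {\partial}^{{\omega}}_{t} {\partial}_{{\omega}} u,{\qquad} \|\nu\| = 2+1 = 3, \end{array} $$

so \(u\in C^{3}_{{\alpha },\beta }([0,T])\) requires, in particular, the existence of \({\partial }^{{\omega }}_{t}{\partial }_{{\omega }} u\). The weight 2 assigned to the index 0 reflects that, in the expansion (2.5), a dt-increment carries the regularity of two dω-increments.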

Remark 2.3

(i) In the rough path literature, a first-order spatial derivative ∂ωu is typically called a Gubinelli derivative and the corresponding function u is called a controlled rough path. In our case, the path derivatives defined above are unique due to \(\hat {\omega }\) being truly rough (Friz and Hairer 2014, Proposition 6.4).

(ii) The derivative ∂ωu depends on ω, but not on \(\underline {\omega }\). The derivative \({\partial }^{{\omega }}_{t} u\) depends on \(\underline {\omega }\) as well and should thus be denoted by \({\partial }^{\hat {\omega }}_{t} u\). However, in our setting, \(\underline {\omega }\) is a function of ω, and so we write \({\partial }^{{\omega }}_{t} u\) instead.

(iii) When ∂ωu=0, it follows from (2.5) and (2.2) that u is differentiable in t and \({\partial }^{{\omega }}_{t} u = {\partial }_{t} u\), the standard derivative with respect to t.

(iv) In the multidimensional case, \({\partial }_{{\omega }{\omega }} u \in {\mathbb {R}}^{d\times d}\) is symmetric if u is smooth enough (Buckdahn et al. 2015, Remark 3.3); i.e., \({\partial }_{{\omega }^{i}}\) and \({\partial }_{{\omega }^{j}}\) commute for 1≤i,j≤d. However, typically \({\partial }^{{\omega }}_{t}\) and ∂ω do not commute, even when d=1.

Remark 2.4

Note that in (2.5) the term t−s is the increment of the identity function t↦t, which is Lipschitz continuous. For all estimates below, it suffices to assume \({\partial }^{{\omega }}_{t} u\in C^{{\alpha }(2+\beta)-1}([0,T])\). However, to make the estimates more homogeneous, we only use the Hölder-2α regularity of t↦t and thus require \({\partial }^{{\omega }}_{t} u\in C^{{\alpha }\beta }([0,T])\). For this same reason, all of our estimates remain valid if we replace t with a Hölder-2α continuous path ζ∈C2α([0,T]). To be more precise, we define a path derivative of u with respect to ζ as a function \({\partial }^{{\omega }}_{\zeta } u\in C^{0}_{{\alpha },\beta }([0,T])\) that satisfies [R2,u]α(2+β)<∞, where

$$ u_{s,t}={\partial}^{{\omega}}_{\zeta} u_{s} ~\zeta_{s,t} + {\partial}_{{\omega}} u_{s} ~{\omega}_{s,t} + {\partial}_{{\omega}{\omega}} u_{s} ~\underline{\omega}_{s,t} + R^{2,u}_{s,t}, $$

then Lebesgue integration dt should be replaced with Young integration dζt.

Next, we equip \(C^{k}_{{\alpha },\beta }([0, T])\) with a norm \(\|\cdot \|_{k}\). Given \(u\in C^{k}_{{\alpha },\beta }([0, T])\), put

$$\begin{array}{@{}rcl@{}} \left. \begin{array}{lll} ~ \|u\|_{0} := |u_0| +[u]_{{\alpha}\beta}; {\quad} \|u\|_{1} := \|u\|_{0} + \|{\partial}_{{\omega}} u\|_{0} + [R^{1, u}]_{{\alpha}(1+\beta)};\\ ~ \|u\|_{2} :=\|u\|_1+ \|{\partial}_{{\omega}} u\|_{1} + \|{\partial}^{{\omega}}_{t} u\|_{0} + [R^{2, u}]_{{\alpha}(2+\beta)};\\ ~ \|u\|_{k} :=\|u\|_{k-1}+ \|{\partial}_{{\omega}} u\|_{k-1} + \|{\partial}^{{\omega}}_{t} u\|_{k-2},{\quad} k\ge 3. \end{array}\right. \end{array} $$

We emphasize that, besides k, the norms depend on T, ω, α, and β as well. To simplify the notation, we do not indicate these dependencies explicitly. In some places we restrict u to a subinterval [t1,t2]⊂[0,T]; the corresponding spaces \(C^{k}_{{\alpha },\beta }([t_{1}, t_{2}])\) are defined in an obvious way. To not further complicate the notation, the corresponding norm is still denoted by \(\|\cdot \|_{k}\). Note that, for \(u\in C^{1}_{{\alpha },\beta }([0, T])\) and for a constant C depending on ω,

$$\begin{array}{@{}rcl@{}} |u_{t}|\le |u_{0}| + |{\partial}_{{\omega}} u_{0}|[{\omega}]_{{\alpha}} t^{{\alpha}} + [R^{1,u}]_{{\alpha}(1+\beta)} t^{{\alpha}(1+\beta)} \le |u_{0}| + C\|u\|_{1} t^{{\alpha}}. \end{array} $$

Finally, we define the rough integral of \(u\in C^{1}_{{\alpha },\beta }([0, T])\). Let \(\pi : 0=t_{0}<{\cdots }<t_{n}=T\) be a time partition and \(|\pi |:= \max _{0\le i\le n-1}|t_{i+1}-t_{i}|\). By Gubinelli (2004),

$$\begin{array}{@{}rcl@{}} \int_{0}^{t} u_{s} \,d{\omega}_{s} := {\lim}_{|\pi|\to 0} \sum_{i=0}^{n-1} \big[u_{t_{i}} {\omega}_{t_{i}\wedge t, ~t_{i+1}\wedge t} + {\partial}_{{\omega}} u_{t_{i}} ~\underline{\omega}_{t_{i}\wedge t, ~t_{i+1}\wedge t}\big] \end{array} $$

exists and defines the rough integral. The integration path \(U_{t} := \int _{0}^{t} u_{s}\, d {\omega }_{s}\) belongs to \(C^{1}_{{\alpha }, \beta }([0, T])\) with \({\partial }_{{\omega }} U_{t}=u_{t}\), and we define \(\int _{s}^{t} u_{r} \,d{\omega }_{r} := U_{s,t}\).
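As a sanity check (a numerical sketch under our own naming, not from the paper), take u=ω, whose Gubinelli derivative is ∂ωu≡1. With the geometric second level \(\underline {\omega }_{s,t} = \frac {1}{2}|{\omega }_{s,t}|^{2}\), the compensated Riemann sum telescopes to \(\frac {1}{2}({\omega }_{T}^{2}-{\omega }_{0}^{2})\) on every partition, matching the Stratonovich first integral:

```python
import math
import random

def rough_integral_of_omega(omega, ts):
    """Compensated Riemann sum for the rough integral of u = omega
    (so the Gubinelli derivative is identically 1), with the geometric
    second level omega2_{s,t} = (1/2) * omega_{s,t}^2."""
    total = 0.0
    for s, t in zip(ts, ts[1:]):
        inc = omega(t) - omega(s)                 # omega_{s,t}
        total += omega(s) * inc + 0.5 * inc ** 2  # u_s omega_{s,t} + 1 * omega2_{s,t}
    return total

omega = lambda t: math.sin(5.0 * t) + 2.0 * t     # a stand-in integrator
ts = sorted({0.0, 1.0} | {random.random() for _ in range(50)})  # arbitrary partition
lhs = rough_integral_of_omega(omega, ts)
rhs = 0.5 * (omega(1.0) ** 2 - omega(0.0) ** 2)
print(abs(lhs - rhs) < 1e-9)  # -> True: exact on every partition (up to rounding)
```

For a genuinely rough integrator this exactness is of course special to u=ω; for general controlled u, the compensated sum only converges as |π|→0.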

In this context, we define iterated integrals as follows. For \(\nu \in {\mathcal {V}}_{n}\), set

$$\begin{array}{@{}rcl@{}} \mathcal{I}^{\nu}_{s,t} := \int_{s}^{t} \int_{s}^{t_{n}} {\cdots} \int_{s}^{t_{2}} d_{\nu_{1}} t_{1} {\cdots} \,d_{\nu_{n}} t_{n},\,\text{where \(d_{0} t := dt\), \(d_{1} t = d{\omega}_{t}\).} \end{array} $$

One can check that \( \mathcal {I}^{\mu }_{s,t} = \int _{s}^{t} \mathcal {I}^{(\mu _{1},{\cdots }, \mu _{n})}_{s,r}\, d_{\mu _{n+1}} r\) for \(\mu = (\mu _{1},{\cdots }, \mu _{n+1})\in {\mathcal {V}}_{n+1}\). In the multidimensional case, defining iterated integrals is not trivial. Nevertheless, by Lyons (1998, Theorem 2.2.1), this can be accomplished via uniquely determined (higher-order) extensions of the geometric rough path \(\hat {\omega }=({\omega },\underline {\omega })\).
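For instance, the lowest-order iterated integrals are explicit in our one-dimensional setting: one computes directly that

$$\begin{array}{@{}rcl@{}} \mathcal{I}^{(0)}_{s,t} = t-s,{\qquad} \mathcal{I}^{(1)}_{s,t} = {\omega}_{s,t},{\qquad} \mathcal{I}^{(1,1)}_{s,t} = \int_{s}^{t} {\omega}_{s,r} \,d{\omega}_{r} = \underline{\omega}_{s,t}, \end{array} $$

the last identity following from the geometric relation \(\underline {\omega }_{s,t} = \frac {1}{2}|{\omega }_{s,t}|^{2}\).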

By (2.5) and (2.2), the following result is obvious and we omit the proof.

Lemma 2.5

(i) If \(u \in C^{2}_{{\alpha },\beta }([0, T])\), then

$$\begin{array}{@{}rcl@{}} d u_{t} = {\partial}^{{\omega}}_{t} u_{t} \,dt + {\partial}_{{\omega}} u_{t}\, d{\omega}_t. \end{array} $$

(ii) Suppose that \(u_{t} = u_{0} + \int _{0}^{t} a_{s}\, ds + \int _{0}^{t} \eta _{s}\, d{\omega }_{s}\) with \(a\in C^{0}_{{\alpha }, \beta }([0, T])\) and \(\eta \in C^{1}_{{\alpha },\beta }([0, T])\). Then \(u \in C^{2}_{{\alpha },\beta }([0, T])\) with \({\partial }^{{\omega }}_{t} u = a\) and \({\partial }_{{\omega }} u=\eta \). Moreover, \([R^{2,u}]_{{\alpha }(2+\beta)}\le C(\|a\|_{0}+\|\eta \|_{1})\).

Finally, we introduce backward rough paths. Fix t0∈(0,T]. Set

$$\begin{array}{@{}rcl@{}} \stackrel{\leftarrow}{{\omega}}^{t_{0}}_{t} := {\omega}_{t_{0}}-{\omega}_{t_0-t},{\quad} \stackrel{\leftarrow}{\underline{\omega}}^{t_{0}}_{s,t}~ := ~\frac{1}{2} |\stackrel{\leftarrow}{{\omega}}^{t_{0}}_{s,t}|^2. \end{array} $$

Then \((\stackrel {\leftarrow }{{\omega }}^{t_{0}}, \stackrel {\leftarrow }{\underline {\omega }}^{t_{0}})\) is a rough path on [0,t0]. Moreover, for \(u\in C^{1}_{{\alpha },\beta }([0, t_{0}])\), the function \(\stackrel {\leftarrow }{u}^{t_{0}}\) defined by \(\stackrel {\leftarrow }{u}^{t_{0}}_{t} := u_{t_{0}-t}\) belongs to \(C^{1}_{{\alpha },\beta }([0, t_{0}])\) with \(\hat {\omega }\) replaced by \((\stackrel {\leftarrow }{{\omega }}^{t_{0}}, \stackrel {\leftarrow }{\underline {\omega }}^{t_{0}})\) in Definition 2.2. In this case, \(\int _{0}^{t_{0}} \stackrel {\leftarrow }{u}^{t_{0}}_{s} \,d\stackrel {\leftarrow }{{\omega }}^{t_{0}}_{s} = \int _{0}^{t_{0}} u_{s} \,d{\omega }_{s}\).

Rough differential equations

We start with controlled rough paths with a parameter \(x\in {\mathbb {R}}^{d}\). They serve as solutions to RPDEs and as coefficients for RDEs and RPDEs. For this purpose, we have to allow d>1 here. Consider a function \(u: { [0,T]\times {\mathbb {R}}}^{d} \to {\mathbb {R}}\). If, for fixed \(x\in {\mathbb {R}}^{d}\), the mapping t↦u(t,x) is a controlled rough path, we use the notations ∂ωu, \({\partial }^{{\omega }}_{t} u, {\mathcal {D}}_{\nu } u\) for the path derivatives as in the previous subsection. For fixed t, we use ∂xu, \({\partial }^{2}_{{xx}} u\), etc., to denote the derivatives of x↦u(t,x) with respect to x. Now, we introduce the appropriate spaces, extending Definition 2.2.

Definition 2.6

Let [t1,t2]⊂[0,T], let \(O \subset {\mathbb {R}}^{d}\) be convex, and let u∈C0([t1,t2]×O).

(i) We say \(u\in C^{0,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) if the following holds:

  • x↦u(·,x) maps O into \(C^{0}_{{\alpha },\beta }([t_{1}, t_{2}])\) and is continuous under \(\|\cdot \|_{0}\).

  • x ↦ u(t,x) is locally Hölder-β continuous, uniformly in \(t\in [t_{1},t_{2}]\).

(ii) We say \(u\in C^{1,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) if the following holds:

  • x ↦ u(·,x) maps O into \(C^{1}_{{\alpha },\beta }([t_{1}, t_{2}])\) and is continuous under \(\|\cdot \|_{1}\).

  • \({\partial }_{{\omega }} u\in C^{0, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) and \({\partial }_{x} u\in C^{0, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O; {\mathbb {R}}^{d})\), in the sense that each component \({\partial }_{x_{i}} u\in C^{0, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\), i=1, …, d.

(iii) We say \(u\in C^{2,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) if the following holds:

  • x ↦ u(·,x) maps O into \(C^{2}_{{\alpha },\beta }([t_{1}, t_{2}])\) and is continuous under \(\|\cdot \|_{2}\).

  • \({\partial }_{{\omega }} u\in C^{1,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\), \({\partial }_{x} u \in C^{1,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O; {\mathbb {R}}^{d})\), and \({\partial }_{t}^{{\omega }} u\in C^{0,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\); for all \(x\in O\), \([R^{1,u({\cdot },x)}]_{{\alpha }(1+\beta)}<\infty \).

(iv) For k≥3, we say \(u\in C^{k, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) if u, \({\partial }_{{\omega }} u\in C^{k-1, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O), {\partial }_{x} u\in C^{k-1, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O;{\mathbb {R}}^{d})\), and \({\partial }^{{\omega }}_{t} u \in C^{k-2, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\).

We first show that the differentiation and integration operators commute.

Lemma 2.7

(i) Let \(u \in C^{2, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\). Then \({\partial }_{{\omega }}\) and \({\partial }_{x}\) commute, i.e.,

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} {\partial}_{x} u = {\partial}_{x} {\partial}_{{\omega}} u. \end{array} $$

Assume further that \(u\in C^{3, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\). Then \({\partial }^{{\omega }}_{t}\) and \({\partial }_{x}\) commute, i.e.,

$$\begin{array}{@{}rcl@{}} {\partial}^{{\omega}}_{t} {\partial}_{x} { u} = {\partial}_{x} {\partial}^{{\omega}}_{t} { u}. \end{array} $$

(ii) If \(u\in C^{1, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\), then, for any bounded domain \(O\subset {\mathbb {R}}^{d}\),

$$\begin{array}{@{}rcl@{}} \int_{s}^{t} \int_{O} u(r,x) \,dx \,d{\omega}_{r} = \int_{O} \int_{s}^{t} u(r,x) \,d{\omega}_{r} \,dx. \end{array} $$

Proof
See the Appendix. □

The next result is the crucial chain rule (Keller and Zhang 2016, Theorem 3.4).

Lemma 2.8

Assume that \({\varphi }\in C^{1, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\) and \(X\in C^{1}_{{\alpha },\beta }([0, T]; {\mathbb {R}}^{d})\). Let Yt:=φ(t,Xt). Then \(Y\in C^{1}_{{\alpha },\beta }([0, T])\) and it holds that

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} Y_{t} = {\partial}_{{\omega}} {\varphi} (t, X_t) + {\partial}_{x} {\varphi}(t, X_t) {\cdot} {\partial}_{{\omega}} X_t. \end{array} $$

If \({\varphi }\in C^{2, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d}), X \in C^{2}_{{\alpha },\beta }([0, T]; {\mathbb {R}}^{d})\), then \(Y\in C^{2}_{{\alpha },\beta }([0, T])\) and

$$\begin{array}{@{}rcl@{}} {\partial}^{{\omega}}_{t} Y_{t} = {\partial}^{{\omega}}_{t} {\varphi} (t, X_t) + {\partial}_{x} {\varphi}(t, X_t) {\cdot} {\partial}^{{\omega}}_{t} X_t. \end{array} $$

Our study relies heavily on the following rough Taylor expansion. The result also holds in the multidimensional case, and we emphasize that the number δ below may be negative.

Lemma 2.9

Let \(u \in C^{k, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) and \(K\subset {\mathbb {R}}\) be compact. Then, for every \((t,x)\in { [0,T]\times {\mathbb {R}}}\) and \((\delta,h)\in {\mathbb {R}}^{2}\) with \(t+{\delta }\in [0,T]\) and \(x+h\in K\), we have \(|R^{k,u}_{t, x; {\delta }, h}| \le C(K,x)\, (|{\delta }|^{{\alpha }}+|h|)^{k+\beta }\), where

$$ u(t+{\delta}, x+h) = \sum_{m=0}^{k} \sum_{\|\nu\|\le k-m} \frac{1}{m!} {\mathcal{D}}_{\nu} {\partial}_{x}^{m} u(t,x) ~ h^{m} ~\mathcal{I}^{\nu}_{t, t+{\delta}} + R^{k,u}_{t, x; {\delta}, h}. $$

Proof
See the Appendix. □

To study RDEs, uniform properties for the functions in \(C^{k,loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) are needed. In the next definition, we abuse the notation \(\|\cdot \|_{k}\) from (2.7).

Definition 2.10

(i) We say that \(u\in C^{k}_{{\alpha },\beta }([t_{1}, t_{2}]\times O) \subset C^{k, loc}_{{\alpha },\beta }([t_{1}, t_{2}]\times O)\) if

$$ \|u\|_{k} := \sum_{i=0}^{k}\sup_{x\in O} \|{\partial}^{i}_{x} u({\cdot}, x)\|_{k-i} < \infty. $$

(ii) For solutions to standard PDEs (recall Remark 2.3 (iii)), we use

$$\begin{array}{@{}rcl@{}} C^{k,0}_{{\alpha},\beta}([t_{1}, t_{2}]\times O) := \Big\{u\in C^{k}_{{\alpha},\beta}([t_{1}, t_{2}]\times O): {\partial}_{{\omega}} u = 0\Big\}. \end{array} $$

We remark that in (i) we do not require \(\sup _{t\in [t_{1},t_{2}]} [{\partial }^{k}_{x} u(t,{\cdot })]_{\beta }<\infty \), but restrict ourselves to local Hölder continuity with respect to x (uniformly in t), which suffices for our rough Taylor expansion above.

Although functions in \(C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) are, in general, at most once differentiable in time, they behave in our rough path framework as if they were k times differentiable in time (Friz and Hairer 2014, section 13.1).

Remark 2.11

(i) If \(u:[t_{1},t_{2}]\times O\to {\mathbb {R}}\) satisfies \(\|u\|_{k+1}<\infty \) (as in (2.19)), then \(u(t, x+h) - u(t,x) = h \int _{0}^{1} {\partial }_{x} u (t, x+ {\lambda } h)\,d{\lambda }\). Thus, the mapping x ↦ u(·,x), \(O\to C^{k}_{\alpha,\beta }([t_{1},t_{2}])\), is continuous under \(\|\cdot \|_{k}\) (as defined in (2.7)) and, for \(\|\nu \|=k\), \({\mathcal {D}}_{\nu } u(t,\cdot)\) is Hölder-β continuous, uniformly in t. Hence, the continuity required in the definition of \(C^{k, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\) is automatic.

(ii) Similarly, if \(u\in C^{k+1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d} \times {\mathbb {R}}^{d'})\), then y ↦ u(·,y), \({\mathbb {R}}^{d'}\to C^{k+1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\), is continuous under \(\|\cdot \|_{k}\) (as defined in (2.19)).

Now, we study rough differential equations of the form

$$\begin{array}{@{}rcl@{}} u_{t} = u_{0} + \int_{0}^{t} f(s, u_s)\, ds + \int_{0}^{t} g(s, u_s)\, d{\omega}_s. \end{array} $$

Lemma 2.12

If \(f \in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) and \(g \in C^{k+1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some k≥2, then RDE (2.21) has a unique solution \(u \in C^{k+2}_{{\alpha },\beta }([0, T])\) and

$$\begin{array}{@{}rcl@{}} \|u-u_{0}\|_{k+2} \le C(T, \|f\|_{k}, \|g\|_{k+1}). \end{array} $$

Proof
See the Appendix. □

In the following linear case, we have a representation formula for u:

$$\begin{array}{@{}rcl@{}} u_{t} = u_{0} + \int_{0}^{t} [f_{0}(s) + f_{1}(s) u_{s}]\, ds + \int_{0}^{t} [g_{0}(s) + g_{1}(s)u_{s}]\, d{\omega}_{s}. \end{array} $$

Lemma 2.13

If f0, \(f_{1} \in C^{k}_{{\alpha },\beta }([0, T])\) and g0, \(g_{1}\in C^{k+1}_{{\alpha },\beta }([0, T])\) for some k≥2, then RDE (2.23) has a unique solution \(u\in C^{k+2}_{{\alpha }, \beta }([0, T])\) given by

$$\begin{array}{@{}rcl@{}} u_{t} = {\Gamma}_{t}\left[u_{0} + \int_{0}^{t} {f_{0}(s)\over {\Gamma}_{s}}\,ds + \int_{0}^{t} {g_{0}(s) \over{\Gamma}_{s}}\,d{\omega}_{s}\right], \end{array} $$

where \({\Gamma }_{t} := \exp \big \{\int _{0}^{t} f_{1}(s)\,ds + \int _{0}^{t} g_{1}(s)\,d{\omega }_{s}\big \}\).

This is a direct consequence of Lemma 2.8, and thus the proof is omitted.
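For a smooth driver ω, the rough integrals in (2.23) and in the representation formula become classical Riemann–Stieltjes integrals, so the formula can be sanity-checked numerically. The coefficients f0, f1, g0, g1 below are illustrative choices, not from the text:

```python
import numpy as np

# Numerical sanity check of the variation-of-constants formula for a
# smooth driver omega (rough integrals reduce to classical ones).
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n + 1)
dt = T / n
omega = np.sin(2.0 * t)
dom = np.diff(omega)

f0, f1 = np.cos(t), 0.5 * np.ones_like(t)
g0, g1 = np.ones_like(t), 0.3 * t

# Euler scheme for du = (f0 + f1 u) dt + (g0 + g1 u) d omega, u_0 = 1
u = np.empty(n + 1)
u[0] = 1.0
for k in range(n):
    u[k + 1] = u[k] + (f0[k] + f1[k] * u[k]) * dt \
                    + (g0[k] + g1[k] * u[k]) * dom[k]

# Explicit formula u_t = Gamma_t [u_0 + int f0/Gamma ds + int g0/Gamma d omega]
Gamma = np.exp(np.concatenate(([0.0],
                np.cumsum(f1[:-1] * dt + g1[:-1] * dom))))
integral = np.concatenate(([0.0],
                np.cumsum((f0[:-1] / Gamma[:-1]) * dt
                          + (g0[:-1] / Gamma[:-1]) * dom)))
u_formula = Gamma * (u[0] + integral)

print(np.max(np.abs(u - u_formula)))  # discretization error only
```

For a truly rough ω the formula still holds, but the integrals must be read as rough path integrals.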

Remark 2.14

This representation holds true only in the one-dimensional case. For multidimensional linear RDEs, Keller and Zhang (2016) derived a semi-explicit representation formula. Moreover, note that (2.23) does not actually satisfy the technical conditions of Lemma 2.12 (f and g are not bounded). Nevertheless, due to its special structure, RDE (2.23) is well-posed, as shown in this lemma.

Finally, we extend Lemma 2.12 to RDEs with parameters of the form

$$\begin{array}{@{}rcl@{}} u(t,x) = u_{0}(x) + \int_{0}^{t} f(s, x, u(s, x)) \,ds + \int_{0}^{t} g(s, x, u(s, x)) \,d{\omega}_{s}. \end{array} $$

Lemma 2.15

Assume that \(u_{0} \in C^{k+\beta }({\mathbb {R}}), f\in C^{k+1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{2}) \), and \(g\in C^{k+1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{2})\) for some k≥3. Then \(u \in C^{k, {loc}}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) and xu solves

$$\begin{array}{@{}rcl@{}} {\partial}_{x} u(t,x) &=& {\partial}_{x} u_{0}(x) + \int_{0}^{t} \big[{\partial}_{x} f(s, x, u(s, x)) + {\partial}_{y} f(s, x, u(s, x)) {\cdot} {\partial}_{x} u(s,x)\big] \,ds \\ &&+ \int_{0}^{t} \big[{\partial}_{x} g(s, x, u(s, x)) + {\partial}_{y} g(s, x, u(s, x)) {\cdot} {\partial}_{x} u(s,x)\big]\,d{\omega}_{s}. \end{array} $$

If all related derivatives of u0 are bounded (but not necessarily u0 itself), then \(u - u_{0} \in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\). If u0 is bounded, then \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\).

Proof
See the Appendix. □
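The equation for ∂xu can be checked numerically in the smooth-driver case: differentiating an Euler discretization of the parameter-dependent RDE with respect to x gives exactly the Euler discretization of the linearized equation. The coefficients and initial value u0(x)=0.5x below are illustrative choices, not from the text:

```python
import numpy as np

# Smooth-driver check: the x-derivative of the parameter-dependent RDE
# solves the linearized equation for p = partial_x u.
T, n = 1.0, 50_000
t = np.linspace(0.0, T, n + 1)
dt = T / n
dom = np.diff(np.sin(2.0 * t))   # increments of the smooth driver

f = lambda s, x, y: np.sin(x + y)      # drift coefficient
g = lambda s, x, y: np.cos(x) * y      # diffusion coefficient
fx = lambda s, x, y: np.cos(x + y)     # partial_x f
fy = lambda s, x, y: np.cos(x + y)     # partial_y f
gx = lambda s, x, y: -np.sin(x) * y    # partial_x g
gy = lambda s, x, y: np.cos(x)         # partial_y g

def solve_u(x):
    u = 0.5 * x                        # initial value u_0(x)
    for k in range(n):
        u += f(t[k], x, u) * dt + g(t[k], x, u) * dom[k]
    return u

# Joint Euler scheme for (u, p), updating both from the old values
x0, u, p = 0.3, 0.15, 0.5              # u = u_0(x0), p = u_0'(x0)
for k in range(n):
    du = f(t[k], x0, u) * dt + g(t[k], x0, u) * dom[k]
    dp = (fx(t[k], x0, u) + fy(t[k], x0, u) * p) * dt \
       + (gx(t[k], x0, u) + gy(t[k], x0, u) * p) * dom[k]
    u, p = u + du, p + dp

h = 1e-5
fd = (solve_u(x0 + h) - solve_u(x0 - h)) / (2 * h)
print(abs(p - fd))  # tiny: finite difference matches the linearized equation
```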

Stochastic PDEs, rough PDEs, and path-dependent PDEs

Our initial goal is to study the fully nonlinear stochastic PDE

$$ du(t,x, B_{{\cdot}})=f(t,x, B_{{\cdot}}, u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u)\,dt+g(t,x,B_{{\cdot}},u,{\partial}_{x} u) \circ dB_{t}. $$

Here, B is a standard Brownian motion, \(\circ \) denotes the Stratonovich integral, and f and g are \({\mathbb {F}}^{B}\)-progressively measurable.

We want to consider (3.1) as a rough PDE. To this end, we introduce some notation. Let \({\Omega }_{0}:=\{{\omega }\in C^{0}([0,T]):{\omega }_{0}=0\}\), B the canonical process on Ω0, i.e., Bt(ω):=ωt, \({\mathbb {P}}_{0}\) the Wiener measure, and

$$\begin{array}{@{}rcl@{}} {\Omega} := {{\bigcup}_{{1/3} < {\alpha} < {1/ 2}}} {\Omega}_{{\alpha}},\quad {\Omega}_{{\alpha}} := \big\{{\omega}\in{\Omega}_{0}: [{\omega}]_{{\alpha}} <\infty ~\text{and (\ref{trulyrough}) holds}\big\}. \end{array} $$

Then \({\mathbb {P}}_{0}({\Omega }) = 1\) (Friz and Hairer 2014, Theorem 6.6). Moreover, consider the space

$$\begin{array}{@{}rcl@{}} {\mathcal{C}}({\Omega}) &:= \bigcup\big\{{\mathcal{C}}_{{\alpha},\beta}({\Omega}): \text{\({\alpha}\in ({1/ 3}, {1/2 })\), \(\beta\in (0, 1]\), and (\ref{ab}) holds}\big\}, \end{array} $$

where \({\mathcal {C}}_{{\alpha },\beta }({\Omega })\) is the set of all \({\mathbb {F}}\)-progressively measurable real processes \((u_{t})_{t\in [0,T]}\) on Ω with \({\mathbb {E}}^{{\mathbb {P}}_{0}}[\|u\|_{B;1}^{2}] <\infty \) and with \(u({\omega }) \in C^{1}_{{\omega };{\alpha },\beta }([0, T])\) for all \({\omega }\in {\Omega }\). Here, \(\|\cdot \|_{{\omega };1}\) and \(C^{1}_{{\omega };{\alpha },\beta }([0, T])\) are defined by (2.7) and Definition 2.2, respectively, with indication of the dependence on ω.

Then, for \(u \in {\mathcal {C}}({\Omega })\), we have

$$\begin{array}{@{}rcl@{}} \Big(\int_{0}^{t} u_{s} \circ dB_{s}\Big)({\omega}) = \int_{0}^{t} u_{s}({\omega})\,d{\omega}_{s}, {\quad} 0\le t\le T,~\text{for }{\mathbb{P}}_{0}-\text{a.e.}~{\omega}\in {\Omega}. \end{array} $$

Here, the left-hand side is a Stratonovich integral, while the right-hand side is a rough path integral. In this sense, we may write SPDE (3.1) as the RPDE

$$\begin{array}{@{}rcl@{}} du(t,x, {\omega})=f(t,x, {\omega}, u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u)\,dt+g(t,x,{\omega},u,{\partial}_{x} u)\, d{\omega}_{t},\, {\omega}\in {\Omega}. \end{array} $$
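The Stratonovich integral in (3.4) is the limit of midpoint-type Riemann sums. As a purely numerical illustration of how these differ from left-point (Itô-type) sums, one can integrate B against itself along a simulated Brownian path:

```python
import numpy as np

# Midpoint (Stratonovich-type) vs left-point (Ito-type) Riemann sums for
# int_0^1 B dB along one simulated Brownian path; purely illustrative.
rng = np.random.default_rng(0)
n = 1_000_000
dB = rng.normal(0.0, np.sqrt(1.0 / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

ito = np.sum(B[:-1] * dB)                    # approx B_1^2/2 - 1/2
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)  # telescopes to B_1^2/2 exactly

print(strat - 0.5 * B[-1] ** 2)  # zero up to rounding
print(strat - ito)               # approx 1/2 (half the quadratic variation)
```

The gap of 1/2 between the two sums is exactly the Itô–Stratonovich correction, which the rough path second level encodes pathwise.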

Remark 3.1

(i) If u is a classical solution of (3.1) with \(g(\cdot,x,B_{{\cdot }},u,{\partial }_{x} u) \in {\mathcal {C}}({\Omega })\) for all \(x\in {\mathbb {R}}\), then, by (3.4), RPDE (3.5) holds true for \({\mathbb {P}}_{0}\)-a.e. ωΩ.

(ii) In an earlier version of this paper (see arXiv:1501.06978v1), we studied pathwise viscosity solutions of SPDE (3.1) in the a.s. sense. In this version, we instead study the well-posedness of RPDE (3.5) for fixed ω, which is easier and more convenient. Moreover, the rough path framework allows us to prove crucial perturbation results such as Lemma 5.8.

(iii) If we have obtained a solution (in the classical or the viscosity sense) u(·,ω) of RPDE (3.5) for each ω, then, to go back to SPDE (3.1), one needs to verify the measurability and integrability of the mapping ω ↦ u(·,ω). To do so, one can, in principle, apply the strategy of Da Prato and Tubaro (1996, section 3), which relies on constructing solutions to SDEs via iteration so that adaptedness is preserved. This strategy applies in our setting and does not require f and g to be continuous in ω. Another possible approach is to follow the argument of Friz and Hairer (2014, section 9.1), which goes in the direction of stability and norm estimates but requires at least g to be continuous in ω. Since the paper is already very lengthy, we do not pursue these approaches here in detail.

From now on, we shall fix (α,β) and ω as in Section 2.1 and omit ω in f, g, and u. To be precise, the goal of this paper is to study the RPDE

$$\begin{array}{@{}rcl@{}} du(t,x)=f(t,x,u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u)\,dt+g(t,x,u,{\partial}_{x} u) d{\omega}_t \end{array} $$

with initial condition u(0,x)=u0(x). Note that u(t,x) implicitly depends on ω. In particular, \({\partial }^{{\omega }}_{t} u\) is different from tu in the standard PDE literature. Moreover, by Lemma 2.5, we may write (3.6) as the path-dependent PDE

$$\begin{array}{@{}rcl@{}} {\partial}^{{\omega}}_{t} u(t,x) = f(t,x,u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u),{\quad} {\partial}_{{\omega}} u(t,x) = g(t,x,u,{\partial}_{x} u) \end{array} $$

with initial condition u(0,x)=u0(x). We denote the arguments of f and g by f(t,x,y,z,γ) and g(t,x,y,z), respectively. Throughout this paper, the following assumptions are employed.

Assumption 3.2

Let \(g\in C^{k_{0}, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{3})\) for some sufficiently large regularity index \(k_{0}\in {\mathbb {N}}\).

(i) \({\partial }_{y} g \in C^{k_{0}-1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{3}), {\partial }_{z} g \in C^{k_{0}-1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{3})\).

(ii) For i=0, …, k0 and \((y,z) \in {\mathbb {R}}^{2}, {\partial }^{i}_{x} g({\cdot },y,z) \in C^{k_{0}-i}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) with

$$\begin{array}{@{}rcl@{}} \|{\partial}^{i}_{x} g({\cdot},y,z) \|_{k_{0}-i} \le C [1+|y|+|z|]. \end{array} $$

Note that, for any bounded set \(Q\subset {\mathbb {R}}^{2}, g \in C^{k_{0}}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}\times Q)\).

Assumption 3.3

(i) f is nondecreasing in γ.

(ii) \(f\in C^{0}({ [0,T]\times {\mathbb {R}}}^{4})\) and |f(t,x,0,0,0)|≤K0 for all \((t,x)\in { [0,T]\times {\mathbb {R}}}\).

(iii) f is uniformly Lipschitz in (y,z,γ) with Lipschitz constant L0.

Assumption 3.4

Let u0 be continuous with \(\|u_{0}\|_{\infty }\le K_{0}\).

We remark that for RPDE (3.6) there is no comparison principle in terms of g. Hence, a smooth approximation of g does not help for our purpose, and thus we require g to be smooth. By more careful arguments, one could pin down the precise value of k0, but that would make the paper less readable. In the rest of the paper, we use k to denote a generic regularity index, which may vary from line to line. We always assume that k is large enough so that we can freely apply all the results in Section 2, and that the regularity index k0 in Assumption 3.2 is large enough so that we have the desired k-regularity in the related results.

Definition 3.5

Let \(u\in C^{2,loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\). We say that u is a classical solution (resp., subsolution, supersolution) of RPDE (3.6) if

$$\begin{array}{@{}rcl@{}} {}{\partial}_{{\omega}} u(t,x) &\,\,=& g(t,x,u,{\partial}_{x} u),\\ {} {\mathcal{L}} u(t,x) &:=& {\partial}^{{\omega}}_{t} u (t,x) - f(t,x,u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u) = ~(\text{resp.} \le, \ge) ~ 0. \end{array} $$

Again, note that there is no comparison principle in terms of g. So the first line in (3.8) is an equality even for sub/super-solutions.

Classical solutions of rough PDEs

We establish the well-posedness of classical solutions for RPDE (3.6). To this end, we must require the coefficients f, g and the initial value u0 to be sufficiently smooth. For general RPDEs, most results are valid only locally in time; however, this suffices for our study of viscosity solutions in the next sections.

The characteristic equations

Our main tool is the method of characteristics (see Kunita (1997) for the stochastic setting). It will be used to get rid of the diffusion term g and to transform the RPDE into a standard PDE. Given \({\theta }:= (x, y,z)\in {\mathbb {R}}^{3}\), consider the coupled system of RDEs

$$\begin{array}{@{}rcl@{}} X_{t}&=&x- \int_{0}^{t} {\partial}_{z} g(s,\Theta_s)\,d\omega_{s},\\ Y_{t}&=&y + \int_{0}^{t} \big[g(s,\Theta_s)-Z_{s}{\partial}_{z} g(s,\Theta_s)\big]\,d\omega_{s},\\ Z_t&=&z +\int_{0}^{t} \big[{\partial}_{x} g(s,\Theta_s)+ Z_{s} {\partial}_yg(s,\Theta_s) \big]\,d\omega_s. \end{array} $$

Its solution is denoted by Θt(θ):=(Xt(θ),Yt(θ),Zt(θ)). Fix K0>0 and put

$$\begin{array}{@{}rcl@{}} Q := {\mathbb{R}} \times Q_{2},{\quad} Q_{2}:= \{(y,z)\in{\mathbb{R}}^{2}: { \max\{|y|, |z|\}}\le K_{0} + 1\} \subset {\mathbb{R}}^{2}. \end{array} $$

Proposition 4.1

Let Assumption 3.2 hold and let K0≥0 be a constant. Then there exist constants δ0>0 and C0, depending only on K0 and the k0-th norm of g (in the sense of Definition 2.10 (i)) on \([0,T]\times Q\), such that, for all \({\theta }\in Q\), the system (4.1) has a unique solution Θ(θ) satisfying

$$\begin{array}{@{}rcl@{}} \tilde {\Theta} \in C^{k_{0}}_{{\alpha},\beta}([0,\delta_{0}]\times Q; {\mathbb{R}}^{3}),\, \|\tilde {\Theta}\|_{k_{0}}\le C_{0},\, \text{where }\tilde {\Theta}_{t}({\theta}) := {\Theta}_{t}({\theta}) - {\theta}. \end{array} $$

Proof
Uniqueness follows directly from an appropriate multidimensional extension of Lemma 2.12 for each \({\theta }\in Q\). To prove existence, we note that the main difficulty here is that some coefficients in (4.1) are not bounded. To deal with this difficulty, we introduce, for each N>0, a smooth truncation function \(\iota ^{N}:{\mathbb {R}}\to {\mathbb {R}}\) with ιN(z)=z for |z|≤N and ιN(z)=0 for |z|>N+1, and consider gN(t,θ):=g(t,x,ιN(y),ιN(z)). Then, by Assumption 3.2, \(g^{N}\in C^{k_{0}}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{3})\). Next, for each \({\theta }\in {\mathbb {R}}^{3}\), consider the system

$$\begin{array}{@{}rcl@{}} X_{t}^{N}&=&x- \int_{0}^{t} {\partial}_{z} g^{N} (s, \Theta_{s}^{N})\,d{\omega}_{s},\\ Y_{t}^{N}&=&y+ \int_{0}^{t} \big[g^{N}(s, \Theta^{N}_{s})- \iota^{N}(Z^{N}_{s}){\partial}_{z} g^{N}(s, \Theta^{N}_{s})\big]\,d{\omega}_{s},\\ Z^{N}_{t}&=&z+ \int_{0}^{t} \big[{\partial}_{x} g^{N}(s, \Theta^{N}_{s})+ \iota^{N}(Z_{s}^{N}){\partial}_{y} g^{N}(s,\Theta^{N}_{s}) \big]\,d{\omega}_{s}. \end{array} $$

By Lemma 2.15, extended to the multidimensional case (using the extended Lemma 2.13, as indicated in Remark 2.14), the RDE above has a unique solution \({\Theta }^{N}({\theta })=(X^{N},Y^{N},Z^{N})({\theta })\in C^{k_{0}}_{{\alpha },\beta }([0,T]; {\mathbb {R}}^{3})\), which satisfies (4.3) with a constant \(C_{N}:= C(N, T, \|g^{N}\|_{k_{0}})\). Now set N:=K0+1. For \((t,{\theta })\in [0,T]\times Q\), it follows from (2.8) that

$$ \lvert Y^{N}_{t}({\theta})\rvert \le K_{0} +C_{N} t^{{\alpha}},{\quad} \lvert Z^{N}_{t}({\theta})\rvert \le K_{0} +C_{N} t^{{\alpha}}. $$

Set \(\delta _{0}:=C_{N}^{-1/\alpha }\wedge T\). Then, for \(t\le {\delta }_{0}\), we have \(|Y^{N}_{t}({\theta })|\le N\) and \(|Z^{N}_{t}({\theta })|\le N\). Thus \(g^{N}({\Theta }^{N}_{t}) = g({\Theta }^{N}_{t})\). Therefore, ΘN solves the original untruncated equation (4.1) on [0,δ0]. □

Next, we linearize system (4.1). To this end, put

$$\begin{array}{@{}rcl@{}} U := [{\partial}_{x} X, {\partial}_{y} X, {\partial}_{z} X], \, V := [{\partial}_{x} Y, {\partial}_{y} Y, {\partial}_{z} Y],\, W:= [{\partial}_{x} Z, {\partial}_{y} Z, {\partial}_{z} Z]. \end{array} $$

Differentiating system (4.1) with respect to θ=(x,y,z) yields
$$\begin{array}{@{}rcl@{}} U_{t} &=&[1, 0, 0] - \int_{0}^{t} \left[ {\partial}_{{xz}} g U_{s}+{\partial}_{{yz}} g V_{s} + {\partial}_{{zz}} g\, W_{s}\right](s, {\Theta}_{s})\,d{\omega}_{s},\\ V_{t} &=&[0,1,0] +\int_{0}^{t} \left[ \left[{\partial}_{x} g - {\partial}_{{xz}} g\,Z_{s}\right] U_{s}+ \left[{\partial}_{y}g-Z_{s}{\partial}_{{yz}}g\right]\right.\,V_{s}\\ &&\left.- Z_{s}{\partial}_{{zz}}g\,W_{s} \right](s, {\Theta}_{s})\,d\omega_{s},\\ W_{t}&=&[0,0,1] +\int_{0}^{t} \left[ \left[ {\partial}^{2}_{{xx}} g+Z_{s} {\partial}_{{xy}} g\right]\right.\,U_{s}+ [{\partial}_{{xy}} g+{\partial}_{{yy}} g\,Z_{s}] V_{s}\\ &&\left. + \left[{\partial}_{{xz}} g+Z_{s}{\partial}_{{yz}} g+{\partial}_{y} g\right]\,W_{s} \right](s, {\Theta}_{s})\,d{\omega}_{s}. \end{array} $$

The next result is due to Peter Baxendale. It is a slight generalization of Kunita (1997, (14), p. 291) (which corresponds to (4.15) below).

Lemma 4.2

Let Assumption 3.2 hold and let K0, δ0 be as in Proposition 4.1. For every \((t,{\theta })\in [0,{\delta }_{0}]\times Q\) with θ=(x,y,z) and every \(h=(h_{1},h_{2},h_{3})\in {\mathbb {R}}^{3}\),

$$\begin{array}{@{}rcl@{}} V_{t}({\theta})\cdot h -Z_{t} U_{t}({\theta})\cdot h=(h_{2}-z\cdot h_{1}) \exp\Big\{ \int_{0}^{t} {\partial}_{y} g(s,\Theta_{s}({\theta}))\,d{\omega}_{s} \Big\}. \end{array} $$

Proof
Fix \({\theta }\in Q\) and \(h\in {\mathbb {R}}^{3}\). Put Γt:=Vt·h−ZtUt·h. By Lemma 2.8,

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} {\Gamma}_{t} &=& \Big[ \big[{\partial}_{x} g-{\partial}_{{xz}}g\,Z_{t}\big] U_t+\Bigl[{\partial}_{y} g-Z_{t}{\partial}_{{yz}} g\Bigr]\,V_t- Z_{t}{\partial}_{{zz}} g\,W_{t} \Big] \cdot h \\ &&- [{\partial}_{x} g + Z_{t} {\partial}_{y} g] U_{t} \cdot h + Z_{t} \Big[ {\partial}_{{xz}} g U_t+{\partial}_{{yz}} gV_{t} + {\partial}_{{zz}} g\, W_{t}\Big]\cdot h\\ &=& {\partial}_{y} g V_{t} \cdot h -Z_{t} {\partial}_{y} g U_{t} \cdot h = {\partial}_{y} g {\Gamma}_t. \end{array} $$

Clearly, \({\partial }^{{\omega }}_{t} {\Gamma }_{t} = 0\) and Γ0=h2zh1. Then Lemma 2.13 yields (4.6). □
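Lemma 4.2 can be sanity-checked numerically for a smooth driver, where all the integrals are classical. The sketch below takes the illustrative choice g(t,x,y,z)=sin(x+y+z) (so every first-order partial of g equals cos(x+y+z) and every second-order partial equals −sin(x+y+z)) and solves (4.1) and the linearized system by an Euler scheme:

```python
import numpy as np

# Euler-scheme check of the identity in Lemma 4.2 for a smooth driver
# and the illustrative coefficient g(t,x,y,z) = sin(x+y+z).
T, n = 0.2, 50_000
ts = np.linspace(0.0, T, n + 1)
dom = np.diff(0.5 * np.sin(3.0 * ts))   # increments of the smooth driver

x, y, z = 0.2, -0.1, 0.4
X, Y, Z = x, y, z
U = np.array([1.0, 0.0, 0.0])
V = np.array([0.0, 1.0, 0.0])
W = np.array([0.0, 0.0, 1.0])
I = 0.0  # running integral of partial_y g(s, Theta_s) d omega_s

for k in range(n):
    s = X + Y + Z
    c, m = np.cos(s), -np.sin(s)                 # first / second partials of g
    dX = -c * dom[k]                             # -partial_z g
    dY = (np.sin(s) - Z * c) * dom[k]            # g - Z partial_z g
    dZ = (c + Z * c) * dom[k]                    # partial_x g + Z partial_y g
    dU = -(m * U + m * V + m * W) * dom[k]
    dV = ((c - m * Z) * U + (c - Z * m) * V - Z * m * W) * dom[k]
    dW = ((m + Z * m) * U + (m + m * Z) * V + (m + Z * m + c) * W) * dom[k]
    I += c * dom[k]
    X, Y, Z = X + dX, Y + dY, Z + dZ
    U, V, W = U + dU, V + dV, W + dW

h = np.array([0.7, -0.2, 0.5])
lhs = V @ h - Z * (U @ h)
rhs = (h[1] - z * h[0]) * np.exp(I)
print(abs(lhs - rhs))  # small discretization error
```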

RPDEs and PDEs

Our goal is to associate RPDE (3.6) with a function v satisfying

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} v(t, x) = 0, \end{array} $$

which would imply that v solves a standard PDE. To illustrate this idea, let us first derive the PDE for v heuristically. Assume that u is a classical solution of RPDE (3.6) with sufficient regularity. Recall (4.1). We want to find v satisfying (4.7) and

$$\begin{array}{@{}rcl@{}} & u\big(t,X_{t}(\theta_{t}(x))\big)=Y_{t}(\theta_{t}(x)),{\quad} \partial_{x} u\big(t,X_{t}(\theta_{t}(x))\big)=Z_{t}(\theta_{t}(x)),&\\ &\text{where}~ {\theta}_{t}(x) := (x, v(t,x), {\partial}_{x} v(t,x)).& \end{array} $$

In fact, recall (4.4) and write

$$\begin{array}{@{}rcl@{}} \hat \Phi_{t}(x) := \Phi(t,{\theta}_{t}(x)) {\quad}\text{for}\ \Phi = {\Theta},\ {U},\ {V},\ {W}. \end{array} $$

Applying the operator \({\partial }^{{\omega }}_{t}\) to both sides of the first equality of (4.8), together with Lemma 2.8, yields

$$\begin{array}{@{}rcl@{}} 0&=&{\partial}^{{\omega}}_{t}\Big[u(t,\hat X_{t}) - \hat Y_{t}\Big] = {\partial}^{{\omega}}_{t} u (t, \hat X_{t}) + {\partial}_{x} u (t,\hat X_{t}) \hat U_{t} {\cdot} {\partial}_{t} {\theta}_{t}(x)- \hat V_{t} {\cdot} {\partial}_{t} {\theta}_{t}(x) \\ &=& f(t, \hat X_{t}, u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u) - \big[ V_{t}({\theta}_{t}(x)) - Z_{t}({\theta}_{t}(x)) U_{t}({\theta}_{t}(x))\big] {\cdot} {\partial}_{t} {\theta}_{t}. \end{array} $$

By Lemma 4.2 with \(h:={\partial }_{t} {\theta }_{t}(x)=[0,{\partial }_{t} v(t,x),{\partial }_{t}{\partial }_{x} v(t,x)]\) and \(z={\partial }_{x} v(t,x)\),

$$\begin{array}{@{}rcl@{}} && \big[ V_{t}({\theta}_{t}(x)) - Z_{t}({\theta}_{t}(x)) U_{t}({\theta}_{t}(x))\big] {\cdot} {\partial}_{t} {\theta}_{t}\\ && = [h_{2} - zh_{1}] e^{\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}({\theta}_{t}(x))) \,d{\omega}_{s}} ={\partial}_{t} v(t,x) e^{\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}({\theta}_{t}(x))) \,d{\omega}_{s}}. \end{array} $$

We emphasize that the variable θt(x) above is fixed when Lemma 4.2 is applied, while the variable t in Vt is viewed as the running time. In particular, in the last term above Θs(θt(x)) involves both times s and t. Then, by (4.10),

$$\begin{array}{@{}rcl@{}} {\partial}_{t} v(t,x) \exp\big(\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}({\theta}_{t}(x)))d{\omega}_{s}\big) &=& f(t, \hat X_{t}, u, {\partial}_{x} u, {\partial}^{2}_{{xx}} u). \end{array} $$

By (4.8), \(u(t, \hat X_{t})\) and \({\partial }_{x} u(t, \hat X_{t})\) are functions of (t,θt(x)). Moreover, applying the operator \({\partial }_{x}\) to both sides of the second equality of (4.8) yields

$$\begin{array}{@{}rcl@{}} {\partial}^{2}_{{xx}} u(t, \hat X_t) \hat U_{t} {\cdot} {\partial}_{x} {\theta}_{t} (x) = \hat W_{t} {\cdot} {\partial}_{x} {\theta}_{t}(x). \end{array} $$

Note that \({\partial }_{x} {\theta }_{t}(x) = [1, {\partial }_{x} v, {\partial }^{2}_{{xx}} v](t,x)\). Then, provided \(\hat U_{t} {\cdot } {\partial }_{x} {\theta }_{t} (x)\neq 0\),

$$\begin{array}{@{}rcl@{}} {\partial}^{2}_{{xx}} u(t, \hat X_t) = {\hat W_{t} {\cdot} {\partial}_{x} {\theta}_{t}(x) \over \hat U_{t} {\cdot} {\partial}_{x} {\theta}_{t} (x)} = {\partial_{x} Z_{t}({\theta}_t)+\partial_{y} Z_{t}({\theta}_t)\, {\partial}_{x} v +\partial_{z} Z_{t}({\theta}_t)\,{\partial}^{2}_{{xx}} v\over \partial_{x} X_{t}({\theta}_t)+\partial_{y} X_{t}({\theta}_t)\, {\partial}_{x} v +\partial_{z} X_{t}({\theta}_t)\,{\partial}^{2}_{{xx}} v}. \end{array} $$

Therefore, formally v should satisfy the PDE

$$\begin{array}{@{}rcl@{}} {\partial}_{t} v (t,x) = F(t,x, v(t,x), {\partial}_{x} v(t,x), {\partial}^{2}_{{xx}} v(t,x)),{\quad} v(0,x) = u_{0}(x), \end{array} $$

where, for θ=(x,y,z),

$$ F(t,{\theta},{\gamma}) := f\Big(t, {\Theta}_{t}({\theta}), {W_{t}({\theta}){\cdot} [1, z, {\gamma}] \over U_{t}({\theta}){\cdot} [1, z, {\gamma}]}\Big) e^{-\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}({\theta})) d{\omega}_{s}}. $$
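As a worked special case (our own illustration, assuming transport noise), take g(t,x,y,z)=z. Then the characteristics and the effective coefficient F can be computed in closed form:

```latex
% Transport noise: g(t,x,y,z) = z, so \partial_z g = 1 and
% \partial_x g = \partial_y g = 0. System (4.1) is solved explicitly by
X_t(\theta) = x - \omega_t, \qquad
Y_t(\theta) = y + \int_0^t (Z_s - Z_s)\,d\omega_s = y, \qquad
Z_t(\theta) = z.
% Hence U_t = [1,0,0], W_t = [0,0,1], \partial_y g = 0, and the formula
% for F reduces to
F(t,\theta,\gamma) = f\big(t,\, x-\omega_t,\, y,\, z,\, \gamma\big),
% so v solves the standard PDE
\partial_t v(t,x) = f\big(t,\, x-\omega_t,\, v,\, \partial_x v,\, \partial^2_{xx} v\big),
% and u(t,x) = v(t, x+\omega_t) recovers the classical change of variables
% that removes the noise term \partial_x u\,d\omega_t.
```

This matches the classical space-shift transformation used for transport-type stochastic PDEs.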

Now, we carry out the analysis above rigorously. We start from PDE (4.10) and derive the solution for RPDE (3.6). Recall (2.20) and that k is a generic, sufficiently large regularity index that may vary from line to line.

Lemma 4.3

Let Assumption 3.2 hold. Let \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k. Put \(K_{0}:=\|v\|_{\infty }\vee \|{\partial }_{x} v\|_{\infty }\). Let δ0 be determined by Proposition 4.1. Then there exists a constant \({\delta }\in (0,{\delta }_{0}]\) such that the following holds:

(i) For every \((t, x)\in [0, {\delta }] \times {\mathbb {R}}, {\partial }_{x} \hat X_{t}(x)=U_{t}({\theta }_{t}(x)) {\cdot } {\partial }_{x} {\theta }_{t}(x) \ge {1/2}\).

(ii) For every \(t\in [0,{\delta }]\), \(\hat {X}_{t}:{\mathbb {R}}\to {\mathbb {R}}\) is a C1-diffeomorphism and \(\hat {S}_{t}(x):=\hat {X}^{-1}_{t}(x)\) belongs to \(C^{k, loc}_{\alpha,\beta } ([0,\delta ]\times {\mathbb {R}})\) (for a possibly different k) and satisfies

$$\begin{array}{@{}rcl@{}} \hat{S}_{t}(x)&=&x-{\int}_{0}^{t} \frac{[\hat U_{s} {\cdot} {\partial}_{t} {\theta}_{s} ](\hat S_{s}(x))}{(\partial_{x}\hat{X}_{s})\,(\hat{S}_{s}(x))}\,ds + \int_{0}^{t} \frac{{\partial}_{z} g(s,\hat{\Theta}_{s}(\hat{S}_{s}(x)))}{(\partial_{x}\hat{X}_{s})\,(\hat{S}_{s}(x))}\,d{\omega}_{s}. \end{array} $$

Proof
(i) Note that \({\theta }_{t}(x)\in Q\) for all \((t,x)\in { [0,T]\times {\mathbb {R}}}\). By (4.3), it is clear that \(U \in C^{k}_{{\alpha },\beta }([0, {\delta }_{0}]\times Q; {\mathbb {R}}^{3})\). Recall that, by Definition 2.10 (i), the regularity here is uniform in x. Thus, together with the regularity of v, we have

$$\begin{array}{@{}rcl@{}} &| U_{t}({\theta}_{t}(x))- U_{0}({\theta}_{0}(x)) | \\ &\qquad\le |U_{t}({\theta}_{t}(x)) - U_{t}({\theta}_{0}(x))|+|U_{t}({\theta}_{0}(x)) - U_{0}({\theta}_{0}(x))| \le C t^{\alpha}. \end{array} $$

Since \({\partial }_{x} {\theta }_{t}(x) = [1, {\partial }_{x} v, {\partial }^{2}_{{xx}} v](t,x)\) is bounded,

$$\begin{array}{@{}rcl@{}} | U_{t}({\theta}_{t}(x)) {\cdot} {\partial}_{x} {\theta}_{t}(x) - U_{0}({\theta}_{0}(x)) {\cdot} {\partial}_{x} {\theta}_{0}(x) | \le C t^{\alpha}. \end{array} $$

Note that \(U_{0}({\theta }_{0}(x)) {\cdot } {\partial }_{x} {\theta }_{0}(x) = [1, 0, 0] {\cdot } [1, {\partial }_{x} v, {\partial }^{2}_{{xx}} v](0,x) = 1\). Hence, there exists a \({\delta }\le {\delta }_{0}\) such that, for every \((t,x)\in [0,\delta ]\times {\mathbb {R}}\), \(\partial _{x}\hat {X}_{t}(x) = U_{t}({\theta }_{t}(x)) {\cdot } {\partial }_{x} {\theta }_{t}(x) \ge 1/2\).

(ii) First, by (i), we see that \(\hat X_{t}\) is one-to-one for \(t\in [0,{\delta }]\). Choose \(\iota \in C^{\infty }_{b}({\mathbb {R}})\) with ι(y)=1/y for y≥1/4 and ι(y)=0 for y≤1/5. Define functions \(\hat {a}, \hat {b}:[0,\delta ]\times {\mathbb {R}}\to {\mathbb {R}}\) by

$$\begin{array}{@{}rcl@{}} \hat{a}(t,x)&&:=-\iota\big(\partial_{x}\hat{X}_{t}(x)\big)~ \hat U_{t}(x) {\cdot} {\partial}_{t} {\theta}_{t}(x),\\ \hat{b}(t,x)&&:=\iota\big(\partial_{x}\hat{X}_{t}(x)\big)~ {\partial}_{z} g(t,\hat{\Theta}_{t}(x)). \end{array} $$

Note that \(\hat {a}, \hat {b}\in C^{k}_{{\alpha },\beta } ([0,\delta ]\times {\mathbb {R}})\). Then, by Lemma 2.15, the RDE

$$\begin{array}{@{}rcl@{}} \tilde{S}_{t}(x)=x+\int_{0}^{t} \hat{a}(s,\tilde{S}_{s}(x))\,ds +\int_{0}^{t} \hat{b}(s,\tilde{S}_{s}(x))\,d{\omega}_{s},\quad (t,x)\in [0,\delta]\times{\mathbb{R}}, \end{array} $$

has a unique solution \(\tilde {S}\in C^{k,loc}_{{\alpha },\beta }([0,\delta ]\times {\mathbb {R}})\). Now, by (i), we see that \(\tilde S\) actually satisfies RDE (4.12).

It remains to verify that \(\hat {X}_{t}\circ \tilde {S}_{t}=\text {id}\) for \(t\in [0,{\delta }]\). Indeed, note that

$$\begin{array}{@{}rcl@{}} \hat{X}_{t}\circ\tilde{S}_{t} = X_{t}({\theta}_{t}(\tilde S_{t}(x))) {\quad}\text{and}{\quad} {\partial}_{{\omega}} v=0,{\quad} {\partial}^{{\omega}}_{t} X =0. \end{array} $$

Then, by (4.1) and (4.12),

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} [\hat X_{t}(\tilde S_{t}(x))] &&= {\partial}_{{\omega}} X_{t}({\theta}_{t}(\tilde S_{t}(x))) + U_{t}({\theta}_{t}(\tilde S_{t}(x))) {\cdot} {\partial}_{x} {\theta}_{t}(\tilde S_{t}(x)) {\partial}_{{\omega}} \tilde S_{t}(x)\\ && = - {\partial}_{z} g(t, \hat {\Theta}_{t}(\tilde S_{t}(x))) + {\partial}_{x} \hat X_{t}(\tilde S_{t}(x)) {{\partial}_{z} g(t,\hat{\Theta}_{t}(\tilde {S}_{t}(x))) \over (\partial_{x}\hat{X}_{t})\,(\tilde {S}_{t}(x))} =0;\\ {\partial}^{{\omega}}_{t}[\hat X_{t}(\tilde S_{t}(x))] &&= U_{t}({\theta}_{t}(\tilde S_{t}(x))) {\cdot} \Big[{\partial}_{t} {\theta}_{t} (\tilde S_{t}(x)) + {\partial}_{x} {\theta}_{t}(\tilde S_{t}(x)) {\partial}^{{\omega}}_{t} \tilde S_{t}(x)\Big]\\ &&= \hat U_{t}(\tilde S_{t}(x)) {\cdot} {\partial}_{t} {\theta}_{t} (\tilde S_{t}(x)) +{\partial}_{x} \hat X_{t}(\tilde S_{t}(x)) \Big[- { \hat U_{t}(\tilde S_{t}(x)) {\cdot} {\partial}_{t} {\theta}_{t} (\tilde S_{t}(x))\over (\partial_{x}\hat{X}_{t})\,(\tilde {S}_{t}(x))}\Big]\\ &&=0. \end{array} $$

Thus \(\hat X_{t}(\tilde S_{t}(x))= \hat X_{0}(\tilde S_{0}(x)) = x\). This concludes the proof. □

Theorem 4.4

Let Assumption 3.2 hold and v and δ be as in Lemma 4.3. Assume further that v is a classical solution (resp., subsolution, supersolution) of PDE (4.10). Put \(u(t,x) := \hat Y_{t} \circ \hat X_{t}^{-1}(x)\). Then \(u\in C^{k}_{{\alpha },\beta }([0,\delta ]\times {\mathbb {R}})\) is a classical solution (resp., subsolution, supersolution) of RPDE (3.6).

Proof
It is clear that \(u\in C^{2, loc}_{{\alpha },\beta }([0,\delta ]\times {\mathbb {R}})\). To show the uniform properties in terms of x, first define \(\check S_{t}(x) := \hat S_{t}(x) - x\) and \(\check Y_{t}({\theta }) := Y_{t}({\theta })-y\). Then, by Lemma 2.15, \(\check S \in C^{k}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}})\) and \(\check Y \in C^{k}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}}^{3})\). Note that

$$\begin{array}{@{}rcl@{}} u(t,x) &=& \hat Y_{t}(\hat S_{t}(x)) \\ &=& \check Y_{t}\big(\check S_{t}(x)+x, v_{t}(\check S_{t}(x)+x), {\partial}_{x} v_{t}(\check S_{t}(x)+x)\big) + v_{t}(\check S_{t}(x)+x). \end{array} $$

Since \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\), it is clear that \(u\in C^{k}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}})\).

We prove only the subsolution case. The other statements can be proved similarly. Note that \(u(t,x) = Y_{t}\big ({\theta }_{t}(\hat S_{t}(x))\big)\). Then, denoting \(\hat x := \hat S_{t}(x)\),

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} u(t,x) &=& {\partial}_{{\omega}} Y_{t}\big({\theta}_{t}(\hat x)\big) + V_{t}\big({\theta}_{t}(\hat x)\big) {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) ~ {\partial}_{{\omega}} \hat S_{t}(x) \\ &=& \big[g(t, \hat {\Theta}_{t}(\hat x)) - \hat Z_{t}(\hat x) {\partial}_{z} g(t, \hat {\Theta}_{t}(\hat x))\big] + V_{t}\big({\theta}_{t}(\hat x)\big) {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) {{\partial}_{z} g(t,\hat{\Theta}_{t}(\hat x)) \over (\partial_{x}\hat{X}_{t})\,(\hat x)}\\ &=& g(t, \hat {\Theta}_{t}(\hat x)) + {{\partial}_{z} g(t,\hat{\Theta}_{t}(\hat x)) \over (\partial_{x}\hat{X}_{t})\,(\hat x)}\big[\hat V_{t}(\hat x) - \hat Z_{t}(\hat x) \hat U_{t}(\hat x)\big] {\cdot} {\partial}_{x} {\theta}_{t}(\hat x). \end{array} $$

Note that, for \((x,y,z):= {\theta}_{t}(x)=[x, v, {\partial}_{x} v]\) and \(h:= {\partial }_{x} {\theta }_{t}(x) = [1, {\partial }_{x} v, {\partial }^{2}_{{xx}} v]\), we have \(h_{2}-z h_{1}={\partial}_{x} v - {\partial}_{x} v=0\). Then, by Lemma 4.2, we have

$$\begin{array}{@{}rcl@{}} \Big[\hat V_{t}(\hat x) - \hat Z_{t}(\hat x) \hat U_{t}(\hat x)\Big] {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) =0, \end{array} $$

and thus

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} u(t,x) &=& g(t, \hat {\Theta}_{t}(\hat x)). \end{array} $$

Similarly, noting that \({\partial}_{t} {\theta}_{t}(x)=[0,{\partial}_{t} v,{\partial}_{t}{\partial}_{x} v]\),

$$\begin{array}{@{}rcl@{}} {\partial}^{{\omega}}_{t} u(t,x) &=& V_{t}\big({\theta}_{t}(\hat x)\big) {\cdot} \Big[{\partial}_{t} {\theta}_{t}(\hat x) + {\partial}_{x} {\theta}_{t}(\hat x) ~ {\partial}^{{\omega}}_{t} \hat S_{t}(x) \Big]\\ &=&\hat V_{t}(\hat x) {\cdot} {\partial}_{t} {\theta}_{t}(\hat x) + \hat Z_{t}(\hat x)(\partial_{x}\hat{X}_{t})\,(\hat x)\Big[- \frac{\hat U_{t}(\hat x) {\cdot} {\partial}_{t} {\theta}_{t} (\hat x)}{(\partial_{x}\hat{X}_{t})\,(\hat x)}\Big]\\ &=&\!\! \big[\hat V_{t}(\hat x) - \hat Z_{t}(\hat x) \hat U_{t}(\hat x)\big] {\cdot} {\partial}_{t} {\theta}_{t}(\hat x) \,=\, {\partial}_{t} v(t, \hat x) \exp\Big(\int_{0}^{t} {\partial}_{y} g(s, \hat {\Theta}_{s}(\hat x)) d{\omega}_{s}\Big). \end{array} $$

Since v is a classical subsolution of (4.10)–(4.11), the definition of F yields

$$\begin{array}{@{}rcl@{}} {\partial}^{{\omega}}_{t} u(t,x) \le f\Big(t, \hat {\Theta}_{t}(\hat x), {\hat W_{t}(\hat x){\cdot} [1, {\partial}_{x} v(t, \hat x), {\partial}^{2}_{{xx}} v(t, \hat x)] \over \hat U_{t}(\hat x){\cdot} [1, {\partial}_{x} v(t, \hat x), {\partial}^{2}_{{xx}} v(t, \hat x)]}\Big). \end{array} $$

Now, we identify the functions inside g and f in (4.16) and (4.17). First, by definition,

$$\begin{array}{@{}rcl@{}} \hat X_{t} (\hat S_{t}(x)) = x\text{ and }\hat Y_{t} (\hat S_{t}(x)) = u(t,x). \end{array} $$

Next, differentiating (4.18) with respect to x, we have

$$\begin{array}{@{}rcl@{}} 1 &=& {\partial}_{x} \big[X_{t}({\theta}_{t}(\hat x))\big] = U_{t}({\theta}_{t}(\hat x)) {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) ~ {\partial}_{x} \hat S_{t}(x),\\ &&{\partial}_{x} u(t,x) ={\partial}_{x} \big[Y_{t}({\theta}_{t}(\hat S_{t}(x)))\big] = V_{t}({\theta}_{t}(\hat x)) {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) ~ {\partial}_{x} \hat S_{t}(x).\end{array} $$

Thus, by (4.15),

$$ {\partial}_{x} u(t,x) - \hat Z_{t} (\hat x) = \Big[\hat V_{t}(\hat x) - \hat Z_{t} (\hat x) \hat U_{t}(\hat x)\Big] {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) =0. $$


$$\begin{array}{@{}rcl@{}} {} {\partial}^{2}_{{xx}} u(t,x) &=& {\partial}_{x}[{\partial}_{x} u(t,x)] = {\partial}_{x} \Big[ Z_{t} ({\theta}_{t}(\hat S_{t}(x)))\Big]\\ {}&=& W_{t} ({\theta}_{t}(\hat x)) {\cdot} {\partial}_{x} {\theta}_{t}(\hat x) ~ {\partial}_{x} \hat S_{t}(x)= {\hat W_{t}(\hat x) {\cdot} [1, {\partial}_{x} v(t, \hat x), {\partial}^{2}_{{xx}} v(t, \hat x)] \over \hat U_{t}(\hat x) {\cdot} [1, {\partial}_{x} v(t, \hat x), {\partial}^{2}_{{xx}} v(t,\hat x)] }. \end{array} $$

Plugging (4.18)–(4.20) into (4.16)–(4.17), we see that u satisfies the desired subsolution properties. □

Now, we proceed in the opposite direction, namely deriving v from u. Assume that \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k and define \(K_{0}:= \|u\|_{\infty} \vee \|{\partial}_{x} u\|_{\infty}\). Let Q2 and Q be as in (4.2) and δ0 as in Proposition 4.1. For any fixed \((t,x) \in [0, {\delta }_{0}]\times {\mathbb {R}}\), consider the mapping

$$ (y, z) \mapsto [ Y - u(t, X), Z - {\partial}_{x} u(t, X)]\, (t,x,y,z) $$

from Q2 to \({\mathbb {R}}^{2}\). The Jacobian matrix of this mapping is given by

$$\begin{array}{@{}rcl@{}} J(t,x,y,z) := \left[ \begin{array}{lll} {\partial}_{y} Y - {\partial}_{x} u(t, X) {\partial}_{y} X & {\partial}_{y} Z - {\partial}^{2}_{{xx}} u(t, X) {\partial}_{y} X \\ {\partial}_{z} Y - {\partial}_{x} u(t, X) {\partial}_{z} X & {\partial}_{z} Z - {\partial}^{2}_{{xx}} u(t, X) {\partial}_{z} X \end{array}\right]\,{(t,x,y,z)}. \end{array} $$

Note that \(\det (J(0,x,y,z))=1\). Thus, noting also that \({\partial}_{x} u\) and \({\partial }^{2}_{{xx}} u\) are bounded, one can see, similarly to (4.13), that there exists a \({\delta}\le {\delta}_{0}\) such that \(\det (J(t,x,y,z))\ge 1/2\) for all \((t,x,y,z)\in [0,{\delta}]\times Q\). This implies that the mapping (4.21) is one-to-one and the inverse mapping has sufficient regularity. Denote by R(t,x) the range of the mapping (4.21). Then

$$\begin{array}{@{}rcl@{}} R(0,x) = \{(y-u(0,x), z- {\partial}_{x} u(0,x)): (y,z) \in Q_{2}\} \supset {\mathbb{R}} \times (-1, 1). \end{array} $$

Thus, by (4.13) and the boundedness of \({\partial}_{x} u\) and \({\partial }^{2}_{{xx}} u\) again, and by choosing a smaller δ if necessary, we may assume that \((0,0)\in R(t,x)\) for all \((t,x) \in [0, {\delta }] \times {\mathbb {R}}\). Therefore, for any \((t,x) \in [0, {\delta }] \times {\mathbb {R}}\), there exists a unique \((v(t,x),w(t,x))\in Q_{2}\) such that, denoting \(\tilde {\theta }_{t}(x):= (x, v(t,x), w(t,x))\),

$$\begin{array}{@{}rcl@{}} Y_{t}(\tilde {\theta}_{t}(x)) = u(t, X_{t}(\tilde {\theta}_{t}(x))),{\qquad} Z_{t}(\tilde {\theta}_{t}(x)) = {\partial}_{x} u(t, X_{t}(\tilde {\theta}_{t}(x))){.} \end{array} $$

Differentiating the first equation in (4.22) with respect to x and applying the second, we obtain

$$\begin{array}{@{}rcl@{}} 0 &=& {\partial}_{x} \Big[ Y_{t}(\tilde {\theta}_{t}(x)) - u(t, X_{t}(\tilde {\theta}_{t}(x)))\Big] \\ &=& \Big[V_{t}(\tilde {\theta}_{t}(x)) - {\partial}_{x} u (t, X_{t}(\tilde {\theta}_{t}(x))) U_{t}(\tilde {\theta}_{t}(x))\Big] {\cdot} {\partial}_{x} \tilde {\theta}_{t}(x) \\ &=& \Big[V_{t}(\tilde {\theta}_{t}(x)) - Z_{t}(\tilde {\theta}_{t}(x)) U_{t}(\tilde {\theta}_{t}(x))\Big] {\cdot} [1, {\partial}_{x} v(t,x), {\partial}_{x} w(t,x)] \\ &=& \big[{\partial}_{x} v(t,x) - w(t,x)\big] \exp\big(\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}(\tilde {\theta}_{t}(x)))d{\omega}_{s}\big), \end{array} $$

where the last equality holds true thanks to Lemma 4.2. Then \(w(t,x)={\partial}_{x} v(t,x)\) and thus (4.8) holds. In particular, we may again use the notation \({\theta}_{t}(x)\) from (4.8) in place of \(\tilde {\theta }_{t}(x)\).

We verify now that v indeed satisfies PDE (4.10).

Theorem 4.5

Let Assumption 3.2 hold, let \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k, and let δ and v be determined as above. Assume further that u is a classical solution (resp., subsolution, supersolution) of RPDE (3.6). Then, for a possibly smaller δ>0, we have \(U_{t}({\theta}_{t}(x)) {\cdot} {\partial}_{x} {\theta}_{t}(x)\ge 1/2\) for all \((t, x)\in [0, {\delta }] \times {\mathbb {R}}\), and \(v \in C^{k,0}_{{\alpha },\beta }([0,{\delta }]\times {\mathbb {R}})\) is a classical solution (resp., subsolution, supersolution) of PDE (4.10) on \([0, {\delta }] \times {\mathbb {R}}\).


The regularity of v is straightforward. We prove only the case that u is a classical subsolution. The other cases can be proved similarly.

Recall the notations in (4.9). Differentiating the first equality of (4.8) with respect to ω and applying the second equality, we obtain

$$\begin{array}{@{}rcl@{}} 0 &= {\partial}_{{\omega}} \Big[Y_{t}({\theta}_{t}(x)) - u(t, X_{t}({\theta}_{t}(x)))\Big] = {\partial}_{{\omega}} Y_{t}({\theta}_{t}(x)) + \hat V_{t} {\cdot} {\partial}_{{\omega}} {\theta}_{t}(x)\\&\qquad - {\partial}_{{\omega}} u(t, \hat X_{t}) - {\partial}_{x} u (t, \hat X_{t})\big[{\partial}_{{\omega}} X(t, {\theta}_{t}(x)) + \hat U_{t} {\cdot} {\partial}_{{\omega}} {\theta}_{t}(x)\big]. \end{array} $$

By (3.8) and (4.8), \( {\partial }_{{\omega }} u(t, \hat X_{t}) = g(t, \hat X_{t}, u(t, \hat X_{t}), {\partial }_{x} u(t, \hat X_{t})) = g(t, \hat {\Theta }_{t}). \) Then, by (4.1) and Lemma 4.2,

$$\begin{array}{@{}rcl@{}} 0&= [g(t, \hat {\Theta}_{t}) - \hat Z_{t} {\partial}_{z} g(t, \hat {\Theta}_{t}) ] + \hat V_{t} {\cdot} {\partial}_{{\omega}} {\theta}_{t}(x) - g(t, \hat {\Theta}_{t}) - \hat Z_{t} [- {\partial}_{z} g(t, \hat {\Theta}_{t})]\\&\qquad - \hat Z_{t} \hat U_{t} {\cdot} {\partial}_{{\omega}} {\theta}_{t}(x)\\ &= \big[\hat V_{t} - \hat Z_{t} \hat U_{t} \big] {\cdot} [0, {\partial}_{{\omega}} v(t,x), {\partial}_{{\omega}}{\partial}_{x} v(t,x) ] = {\partial}_{{\omega}} v(t,x) e^{\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}({\theta}_{t}(x))) d{\omega}_{s}}. \end{array} $$

Thus, \({\partial}_{{\omega}} v(t,x)=0\) and Lemma 4.3 can be applied. In particular, for a possibly smaller δ>0, \(U_{t}({\theta}_{t}(x)) {\cdot} {\partial}_{x} {\theta}_{t}(x)\ge 1/2\) for all \((t, x)\in [0, {\delta }] \times {\mathbb {R}}\).

Finally, following exactly the same arguments as for deriving (4.10), one can complete the proof that v is a classical subsolution of PDE (4.10). □

Remark 4.6

We shall investigate the case of semilinear g in detail in Section 7 below. Here, we consider the special case

$$\begin{array}{@{}rcl@{}} g = {\sigma}(z){,} \end{array} $$

which has received strong attention in the literature. Let \({\sigma}'\) and \({\sigma}''\) denote the first- and second-order derivatives of σ, respectively. In this case, the system of characteristic equations (4.1) becomes

$$\begin{array}{@{}rcl@{}} X_{t}=x- \int_{0}^{t} {\sigma}'(Z_{s})\,d\omega_{s},\, Y_{t}=y + \int_{0}^{t} \big[{\sigma}(Z_{s}) -Z_{s}{\sigma}'(Z_{s})\big]\,d\omega_{s},\, Z_{t}=z, \end{array} $$

which has the explicit global solution

$$\begin{array}{@{}rcl@{}} X_{t} = x - {\sigma}'(z) {\omega}_{t},{\quad} Y_{t} = y + [{\sigma}(z) - z{\sigma}'(z)]{\omega}_{t},{\quad} Z_{t} = z. \end{array} $$

Moreover, in this case, (4.11) becomes

$$ F(t,x,y,z,{\gamma}) := f\big(t, x- {\sigma}'(z){\omega}_{t}, ~ y+ [{\sigma}(z)-z{\sigma}'(z)]{\omega}_{t}, ~z, ~ {{\gamma} \over 1 - {\sigma}^{\prime\prime}(z){\omega}_{t} {\gamma}}\big). $$
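As a concrete sanity check, the explicit solution (4.24) can be compared with an Euler discretization of the characteristic system; the coefficient σ(z) = sin z and the smooth driving path ω below are illustrative choices made only for this check, not part of the setting above.

```python
import math

# sigma and the driving path omega are illustrative choices for this check
sigma, dsigma = math.sin, math.cos            # sigma(z) = sin z, sigma'(z) = cos z

def omega(t):                                 # smooth stand-in for the rough path
    return 0.3 * math.sin(5.0 * t) + 0.1 * t

x0, y0, z0 = 1.0, 0.5, 0.7
N, T = 1000, 1.0

# Euler scheme for dX = -sigma'(Z) domega, dY = [sigma(Z) - Z sigma'(Z)] domega, dZ = 0
X, Y, Z = x0, y0, z0
for k in range(N):
    dw = omega((k + 1) * T / N) - omega(k * T / N)
    X -= dsigma(Z) * dw
    Y += (sigma(Z) - Z * dsigma(Z)) * dw      # Z stays constant along characteristics

# explicit global solution (4.24)
wT = omega(T)
X_exp = x0 - dsigma(z0) * wT
Y_exp = y0 + (sigma(z0) - z0 * dsigma(z0)) * wT
print(abs(X - X_exp), abs(Y - Y_exp))         # both vanish up to rounding
```

Since Z is constant, the Euler increments telescope, so the discretization reproduces (4.24) exactly up to floating-point rounding.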

Local wellposedness of PDE (4.10)

To study the wellposedness of PDE (4.10) and hence that of RPDE (3.6), we first establish a PDE result. Let K0>0 and, similar to (4.2), consider

$$\begin{array}{@{}rcl@{}} Q_3:= \{(y,z,{\gamma}) \in {\mathbb{R}}^3: { \max\{|y|, |z|, |{\gamma}|\}}\le K_0+1\}. \end{array} $$

Lemma 4.7

Let k≥2 and δ0>0.

(i) Suppose that \(u_{0} \in C^{k+1+\beta }_{b}({\mathbb {R}})\) with \(|u_{0}|, |{\partial}_{x} u_{0}|, |{\partial }^{2}_{{xx}}u_{0}|\le K_{0}\).

(ii) Suppose that \(F\in C^{k+1}_{{\alpha },\beta }([0, {\delta }_{0}]\times {\mathbb {R}}\times Q_{3})\) and \({\partial }_{{\gamma }} F\ge c_{0}>0\).

Then there exists a constant \({\delta}\le {\delta}_{0}\), depending on \(K_{0}\), \(c_{0}\), and the norm \(\|F\|_{2}\) on \([0, {\delta }_{0}]\times {\mathbb {R}}\times Q_{3}\), such that PDE (4.10) has a classical solution \(v\in C^{k,0}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}})\) on \([0, {\delta }] \times {\mathbb {R}}\).


It suffices to prove \(v\in C_{b}^{2+\beta }([0,\delta ]\times {\mathbb {R}})\). The further regularity of v when k≥2 follows from standard bootstrap arguments (Gilbarg and Trudinger 1983, Lemma 17.16) together with Remark 2.11. Since the proof is very similar to that of Lunardi (1995, Theorem 8.5.4), which treats a comparable boundary-value problem, we present only the main ideas for the more involved existence part of the lemma. The first step is to linearize the equation and set up an appropriate fixed-point problem. To this end, let δ>0 and define an operator \(A: C_{b}^{2+\beta }([0,\delta ]\times {\mathbb {R}})\to C_{b}^{\beta }([0,\delta ]\times {\mathbb {R}})\) by

$$\begin{array}{*{20}l} (A v)\,(t,x)&:= \partial_{y} F(\hat{\theta}_{0}(x))\, v(t,x)+\partial_{z} F(\hat{\theta}_{0}(x))\, \partial_{x} v(t,x)\\ &\qquad +\partial_{\gamma} F(\hat{\theta}_{0}(x))\, \partial^{2}_{{xx}}v (t,x), \end{array} $$

where \(\hat {\theta }_{0}(x) := \big (0,x,u_{0}(x),\partial _{x} u_{0}(x),\partial _{{xx}}^{2}u_{0}(x)\big)\). Next, define

$$\begin{array}{@{}rcl@{}} B_{1} := \{ v\in C_{b}^{2+\beta}([0,\delta]\times{\mathbb{R}}): v(0,{\cdot})=u_{0}, \|v- u_{0}\|_{C_{b}^{2+\beta}} \le 1 \}. \end{array} $$

Now given \(v\in B_{1}\), consider the solution w of the linear PDE

$$\begin{array}{@{}rcl@{}} \partial_{t} w =Aw +[F(t,x,v,\partial_{x} v,\partial^{2}_{{xx}} v)-Av]\text{ on } [0,{\delta}]\times{\mathbb{R}} \end{array} $$

with w(0,·)=u0. Following the arguments of Lunardi (1995, Theorem 8.5.4), when δ>0 is small enough, PDE (4.29) has a unique solution \(w\in B_{1}\). This defines a mapping Γ(v):=w for \(v\in B_{1}\). Moreover, when δ>0 is small enough, Γ is a contraction mapping, and hence there exists a unique fixed point \(v\in B_{1}\). Then v=Γ(v) and, by (4.29), v solves (4.10) on \([0, {\delta }]\times {\mathbb {R}}\). □
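The linearize-and-iterate scheme in the proof can be illustrated numerically. The following sketch runs the map Γ on a toy fully nonlinear example; the choice \(F(x,y,{\gamma})={\gamma}-y^{3}\), the periodic finite-difference grid, and the explicit Euler time-stepper are all assumptions of this illustration, not the function spaces used above.

```python
import math

# Toy instance of the contraction scheme: solve d_t v = F(v) := d2_xx v - v**3
# on a periodic grid over [0, delta], by repeatedly solving the linear problem
#   d_t w = A w + [F(v) - A v],   w(0, .) = u0,
# where A w := d2_xx w + dF/dy(initial data) * w freezes the linearization at t = 0.
M, steps, delta = 64, 200, 0.01
dx, dt = 2 * math.pi / M, delta / steps
u0 = [math.sin(2 * math.pi * j / M) for j in range(M)]
coef_y = [-3.0 * u**2 for u in u0]           # dF/dy frozen at the initial data

def lap(v):                                  # periodic second difference
    return [(v[(j + 1) % M] - 2 * v[j] + v[j - 1]) / dx**2 for j in range(M)]

def A(v):
    lv = lap(v)
    return [lv[j] + coef_y[j] * v[j] for j in range(M)]

def F(v):
    lv = lap(v)
    return [lv[j] - v[j]**3 for j in range(M)]

def Gamma(v_path):                           # one linearized solve (explicit Euler)
    w, path = list(u0), [list(u0)]
    for n in range(steps):
        Aw, Fv, Av = A(w), F(v_path[n]), A(v_path[n])
        w = [w[j] + dt * (Aw[j] + Fv[j] - Av[j]) for j in range(M)]
        path.append(w)
    return path

v = [list(u0) for _ in range(steps + 1)]     # initial guess: frozen at u0
diffs = []
for _ in range(3):                           # Picard iterations v <- Gamma(v)
    w = Gamma(v)
    diffs.append(max(abs(a - b) for wa, va in zip(w, v) for a, b in zip(wa, va)))
    v = w
print(diffs)                                 # successive differences shrink
```

On the short horizon δ the successive differences decrease rapidly, mirroring the contraction property of Γ used in the proof.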

We now turn back to PDE (4.10)–(4.11) and RPDE (3.6).

Theorem 4.8

Let Assumption 3.2 hold and let k≥2, δ0>0.

(i) Suppose that \(u_{0} \in C^{k+1+\beta }_{b}({\mathbb {R}})\) with \(|u_{0}|, |{\partial }_{x} u_{0}|, |{\partial }^{2}_{{xx}}u_{0}|\le K_{0}\).

(ii) Suppose that \(f\in C^{k+1}_{{\alpha },\beta }([0, {\delta }_{0}]\times {\mathbb {R}}\times Q_{3})\) and \({\partial }_{{\gamma }} f\ge c_{0}>0\).

Then there exists a constant \({\delta}\le {\delta}_{0}\), depending on \(K_{0}\), \(c_{0}\), the regularity of f on \([0,{\delta}_{0}]\times {\mathbb {R}}\times Q_{3}\), and the regularity of g on \([0,{\delta}_{0}]\times Q\), such that PDE (4.10)–(4.11) has a classical solution \(v\in C^{k,0}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}})\) on \([0, {\delta }]\times {\mathbb {R}}\), and consequently, for a possibly smaller δ>0, RPDE (3.6) has a classical solution \(u\in C^{k}_{{\alpha },\beta }([0,{\delta }]\times {\mathbb {R}})\).


Recall (4.11). By the uniform regularity of Θ in Proposition 4.1, one can verify straightforwardly that, for δ>0 small enough, F satisfies the conditions in Lemma 4.7 (ii). Then, by Lemma 4.7, PDE (4.10)–(4.11) has a classical solution \(v\in B_{1}\) for a possibly smaller δ. Finally, it follows from Theorem 4.4 that RPDE (3.6) has a local classical solution. □

The first-order case

We consider the case where f is of first order, i.e.,

$$\begin{array}{@{}rcl@{}} f = f(t,{\theta}) = f(t,x,y,z). \end{array} $$

This case is completely degenerate in γ and is not covered by Theorem 4.8. However, in this case, PDE (4.10)–(4.11) is also of first order, i.e.,

$$ F(t,{\theta}) := f\big(t, {\Theta}_{t}({\theta})\big)\, {\exp\big(-\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}({\theta})) d{\omega}_{s}\big)}. $$

When f is smooth, so is F. Thus, we can modify the characteristic equations (4.1) to solve PDE (4.10)–(4.31) explicitly. Put \(\tilde {\Theta } = (\tilde X, \tilde Y, \tilde Z)\) and consider

$$\begin{array}{@{}rcl@{}} \tilde X_t&=&x- \int_{0}^{t} {\partial}_{z} F(s,\tilde \Theta_s)\,ds,\\ \tilde Y_t&=&y + \int_{0}^{t} \big[F(s,\tilde \Theta_s)-\tilde Z_{s}{\partial}_{z} F(s,\tilde \Theta_s)\big]\,ds,\\ \tilde Z_t&=&z +\int_{0}^{t} \big[{\partial}_{x} F(s, \tilde \Theta_s)+ \tilde Z_{s} {\partial}_yF(s, \tilde \Theta_s) \big]\,ds. \end{array} $$

Similar to (4.8), let \(\tilde v\) be determined (locally in time) by

$$\begin{array}{@{}rcl@{}} \tilde v\big(t, \tilde X_{t}(\tilde \theta_{t}(x))\big)=\tilde Y_{t}(\tilde \theta_{t}(x)),~ \partial_{x}\tilde v\big(t,\tilde X_{t}(\tilde \theta_{t}(x))\big)=\tilde Z_{t}(\tilde \theta_{t}(x)), \end{array} $$

where \(\tilde {\theta }_{t}(x) := (x, \tilde v(t,x), {\partial }_{x} \tilde v(t,x))\). Then one can see that (4.7) should be replaced with \({\partial }_{t} \tilde v =0\), and thus \(\tilde v(t,x) = u_{0}(x)\). By arguments similar to (in fact easier than) those in the previous subsections, one can prove the following statement.

Theorem 4.9

Let Assumption 3.2 hold, f take the form (4.30) with \(f\in C^{k+1}_{{\alpha },\beta }([0, T]\times Q)\), and \(u_{0} \in C^{k+1+\beta }_{b}({\mathbb {R}})\) for some large k with \(|u_{0}|, |{\partial}_{x} u_{0}|, |{\partial }^{2}_{{xx}}u_{0}|\le K_{0}\). Then there is a constant δ>0 such that the following holds:

(i) The system of ODEs (4.32) is well-posed on [0,δ] for all \({\theta}\in Q\).

(ii) For each \(t\in [0,{\delta}]\), the mapping \(x\in {\mathbb {R}}\mapsto \tilde X_{t}(x, u_{0}(x), {\partial }_{x} u_{0}(x))\in {\mathbb {R}}\) is invertible; its inverse function is denoted by \(\tilde S_{t}\).

(iii) The map v defined by \(v(t,x) := \tilde Y_{t}(\tilde {\theta }_{t}(\tilde S_{t}(x)))\) belongs to \(C^{k,0}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}})\) and is a classical solution to PDE (4.10)-(4.31). Consequently, RPDE (3.6)–(4.30) has a classical solution \(u\in C^{k}_{{\alpha },\beta }([0, {\delta }]\times {\mathbb {R}})\).
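The construction behind Theorem 4.9 can be illustrated by integrating the modified characteristic system (4.32) numerically. The specific choice \(F(t,x,y,z)=-z^{2}/2\) (for which the characteristics admit closed forms), the finite-difference partials, and the Euler scheme below are illustrative assumptions.

```python
import math

# Sample first-order driver (an assumption): F = -z**2/2, i.e. d_t v = -(d_x v)**2/2
def F(t, x, y, z):
    return -0.5 * z * z

def d(f, i, args, h=1e-6):        # central difference in the i-th argument of f
    a = list(args); a[i] += h; up = f(*a)
    a[i] -= 2 * h; dn = f(*a)
    return (up - dn) / (2 * h)

def characteristics(x, y, z, T=0.2, N=2000):
    """Euler scheme for the modified characteristic system (4.32)."""
    X, Y, Z = x, y, z
    dt = T / N
    for n in range(N):
        t = n * dt
        Fz = d(F, 3, (t, X, Y, Z))
        Fx = d(F, 1, (t, X, Y, Z))
        Fy = d(F, 2, (t, X, Y, Z))
        X, Y, Z = (X - Fz * dt,
                   Y + (F(t, X, Y, Z) - Z * Fz) * dt,
                   Z + (Fx + Z * Fy) * dt)
    return X, Y, Z

u0, du0 = math.sin, math.cos      # sample initial condition u0 = sin
x, T = 0.4, 0.2
X, Y, Z = characteristics(x, u0(x), du0(x), T=T)
# For this F the characteristics have closed forms:
# X = x + z*T,  Y = u0(x) + z**2*T/2,  Z = z   with z = u0'(x)
print(X - (x + du0(x) * T), Y - (u0(x) + du0(x)**2 * T / 2))
```

Because Z is conserved for this driver, the Euler increments are constant along each characteristic, so the scheme matches the closed forms up to rounding; for a genuinely (t,x,y)-dependent F the same loop applies with only discretization error.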

Viscosity solutions of rough PDEs: definitions and basic properties

We introduce a notion of viscosity solution for RPDE (3.6) and study its basic properties. For any \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\) and \({\delta}\in (0,t_{0})\), define

$$\begin{array}{@{}rcl@{}} D_{{\delta}}(t_{0},x_0) := [t_0-{\delta}, t_{0}]\times O_{{\delta}}(x_0) := [t_0-{\delta}, t_{0}]\times \{x\in {\mathbb{R}}: |x-x_0|\le {\delta}\}. \end{array} $$

The definition

For \(u\in C({ [0,T]\times {\mathbb {R}}})\) and \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\), put

$$\begin{array}{@{}rcl@{}} \mathcal{A}_{g}^{0} u(t_{0},x_{0}; {\delta}) &:=& \left\{\varphi\in C^{2}_{\alpha,\beta}(D_{{\delta}}(t_{0},x_{0})): \varphi(t_{0},x_{0})=u(t_{0},x_{0}) \right.\\ &&\left.\text{and}{\quad} \partial_{\omega} \varphi=g(\cdot,\varphi,\partial_{x} \varphi) ~\text{on} ~ D_{{\delta}}(t_{0},x_{0})\right\},\\ \overline{\mathcal{A}}_{g} u(t_{0},x_{0}) &:=& \bigcup_{0<{\delta}\le t_{0}}\Big\{ {\varphi} \in \mathcal{A}_{g}^{0} u(t_{0},x_{0}; {\delta}): {\varphi} \le u ~\text{on} ~ D_{\delta}(t_{0}, x_{0})\Big\},\\ \underline{\mathcal{A}}_{g} u(t_{0},x_{0}) &:=& \bigcup_{0<{\delta}\le t_{0}}\Big\{ {\varphi} \in \mathcal{A}_{g}^{0} u(t_{0},x_{0};{\delta}): {\varphi} \ge u ~\text{on} ~ D_{\delta}(t_{0}, x_{0})\Big\}. \end{array} $$

Definition 5.1

Let \(u\in C({ [0,T]\times {\mathbb {R}}})\) and recall the operator \({\mathcal {L}}\) in (3.8).

(i) We say that u is a viscosity supersolution (resp., subsolution) of RPDE (3.6) if, for every \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\) and for every \(\varphi \in \overline {\mathcal {A}}_{g} u(t_{0},x_{0})\) (resp., \(\underline {\mathcal {A}}_{g} u(t_{0},x_{0})\)), we have \({\mathcal {L}}{\varphi }(t_{0},x_{0})\ge (\text {resp.,}~\le) 0.\)

(ii) We say that u is a viscosity solution of RPDE (3.6) if it is both a viscosity supersolution and a viscosity subsolution of (3.6).

We remark that it is possible to consider semi-continuous viscosity solutions as in the standard literature. However, for simplicity, in this paper we restrict ourselves to continuous solutions only.

Proposition 5.2

(Consistency) Let Assumptions 3.2 and 3.3 hold and let \(u\in C^{2}_{\alpha,\beta }({ [0,T]\times {\mathbb {R}}})\). Then u is a classical subsolution (resp., classical supersolution) of RPDE (3.6) if and only if it is a viscosity subsolution (resp., viscosity supersolution) of (3.6).


We prove only the subsolution case. The supersolution case can be proved similarly.

First, assume that u is a viscosity subsolution. By choosing u itself as a test function, we can immediately infer that u is a classical subsolution.

Next, assume that u is a classical subsolution. Let \((t,x)\in (0,T]\times {\mathbb {R}}\) and \(\varphi \in \underline {\mathcal {A}}_{g} u(t,x)\) with corresponding \({\delta}_{0}\in (0,t]\). Then, at (t,x),

$$\begin{array}{@{}rcl@{}} &u-\varphi=0,\quad\partial_{x} [u-\varphi]=0,\qquad \partial^{2}_{{xx}} [u-\varphi]\le 0, \\ &~~{\partial}_{{\omega}} [u-{\varphi}] =0;{\quad} {\partial}_{x{\omega}} [u-{\varphi}] = c~ {\partial}^{2}_{{xx}} [u-{\varphi}], \end{array} $$

where \(c:={\partial}_{z} g(t,x,u,{\partial}_{x} u)\). For any \((\delta,h)\in [0,\delta _{0}]\times O_{\delta _{0}}(x)\), by Lemma 2.9,

$$\begin{array}{@{}rcl@{}} 0&\ge& [u-\varphi](t-\delta,x+h)\\ &=&-\partial_{t}^{{\omega}} [u-{\varphi}](t,x)~\delta+\frac{1}{2}\partial^{2}_{{xx}} [u-{\varphi}](t,x) |h- c~\omega_{t-\delta,t}|^{2} +R^{2,u-{\varphi}}_{\delta,h}, \end{array} $$

where \(R^{2,u-{\varphi}}_{\delta,h} = O\big((\delta^{\alpha}+|h|)^{2+\beta}\big)\). Fix a number \({\delta}_{1}\in (0,{\delta}_{0}]\) such that, for every \({\delta}\in (0,{\delta}_{1}]\), we have \(|c\, {\omega}_{t-{\delta},t}|<{\delta}_{0}\). From now on, let \({\delta}\in (0,{\delta}_{1}]\). Setting \(h:=c\, {\omega}_{t-{\delta},t}\) in (5.3) yields

$$\begin{array}{@{}rcl@{}} -\partial_{t}^{{\omega}} [u-{\varphi}](t,x)\,\delta\le -R^{2, u-{\varphi}}_{\delta, h}\le C(\delta^{\alpha}+|c\omega_{t-\delta,t}|)^{2+\beta}\le C\,\delta^{\alpha(2+\beta)}. \end{array} $$

Recall (2.2). Dividing the inequality above by δ and sending δ to 0, we have \(\partial _{t}^{{\omega }} u(t,x)\ge \partial _{t}^{{\omega }} \varphi (t,x)\). By Assumption 3.3 (i) and by (5.2),

$$\begin{array}{@{}rcl@{}} \Bigl[\partial_{t}^{{\omega}} \varphi-f(\cdot,\varphi,\partial_{x}\varphi, \partial^{2}_{{xx}}\varphi)\Bigr](t,x)&\le \Bigl[\partial_{t}^{{\omega}} u-f(\cdot,u,\partial_{x} u, \partial^{2}_{{xx}} u)\Bigr](t,x)\le 0, \end{array} $$

i.e., u is a viscosity subsolution at (t,x). □

Equivalent definition through semi-jets

As in the standard PDE case (Crandall et al. 1992), viscosity solutions can also be defined via semi-jets. To see this, we first note that, for \({\varphi }\in \mathcal {A}^{0}_{g}u(t_{0}, x_{0};{\delta })\), our second-order Taylor expansion (Lemma 2.9) yields

$$\begin{array}{@{}rcl@{}} {\varphi}(t,x) &=& {\varphi}(t_{0}, x_{0}) + {\partial}^{{\omega}}_{t} {\varphi}(t_{0}, x_{0}) (t-t_{0}) + {\partial}_{{\omega}} {\varphi}(t_{0},x_{0}) {\omega}_{t_{0},t} \\ & +& {\partial}_{x} {\varphi}(t_{0},x_{0}) (x-x_{0})+ {\partial}^{2}_{{\omega}{\omega}}{\varphi}(t_{0},x_{0}) \underline {\omega}_{t_{0},t} + \frac{1}{2} {\partial}^{2}_{{xx}} {\varphi}(t_{0},x_{0}) |x-x_{0}|^{2}\\ &+& {\partial}_{x{\omega}} {\varphi}(t_{0}, x_{0}) {\omega}_{t_{0}, t} (x-x_{0}) + R(t,x), \end{array} $$

where \((t,x)\in D_{{\delta}}(t_{0},x_{0})\). Since \({\partial}_{{\omega}} {\varphi}(t,x)=g(t,x,{\varphi},{\partial}_{x} {\varphi})\), we have

$$\begin{array}{@{}rcl@{}} {\partial}_{x{\omega}} {\varphi} &=& {\partial}_{x} g + {\partial}_{y} g {\partial}_{x} {\varphi} + {\partial}_{z} g {\partial}^{2}_{{xx}} {\varphi},\\ {\partial}_{{\omega}{\omega}} {\varphi} &=& {\partial}_{{\omega}} g + {\partial}_{y} g {\partial}_{{\omega}} {\varphi} + {\partial}_{z} g {\partial}_{{\omega} x} {\varphi} \\&=& {\partial}_{{\omega}} g + {\partial}_{y} g g + {\partial}_{z} g [{\partial}_{x} g + {\partial}_{y} g {\partial}_{x} {\varphi} + {\partial}_{z} g {\partial}^{2}_{{xx}}{\varphi} ]. \end{array} $$

Motivated by this, we define semi-jets as follows. Given \(u\in C({ [0,T]\times {\mathbb {R}}}), (t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\), and \((a,z,\gamma)\in {\mathbb {R}}^{3}\), put

$$\begin{array}{@{}rcl@{}} \psi^{a, z,{\gamma}}_{g,u, t_{0}, x_{0}} (t,x) &:=& y + a[t-t_{0}] +b\, {\omega}_{t_{0},t} + z [x-x_{0}] \\ &&+c\,\underline {\omega}_{t_{0},t} + \frac{1}{2} {\gamma} |x-x_{0}|^{2} + q\, {\omega}_{t_{0}, t} [x-x_{0}],{\quad}\text{where}\\ y&:=&u(t_{0},x_{0}),~ b:=g(t_{0},x_{0},y,z),\\ q&:=& [\partial_{x} g+\partial_{y} g z +\partial_{z} g \gamma](t_{0},x_{0},y,z), \\ c&:=&[\partial_{\omega} g+\partial_{y} g g+ \partial_{z} g (\partial_{x} g+\partial_{y} g z +\partial_{z} g \gamma)] (t_{0},x_{0},y,z). \end{array} $$

We then define the g-superjet \(\overline {\mathcal {J}}_{g} u(t_{0},x_{0})\) and the g-subjet \(\underline {\mathcal {J}}_{g} u(t_{0},x_{0})\) by

$$\begin{array}{@{}rcl@{}} \overline{\mathcal{J}}_{g} u(t_{0},x_{0}) &:=\bigcup\limits_{0<{\delta}\le t_{0}} \Big\{(a,z,{\gamma}) \in {\mathbb{R}}^{3}: \psi^{a, z,{\gamma}}_{g, u, t_{0}, x_{0}} \le u ~\text{on}~ D_{{\delta}}(t_{0},x_{0})\Big\},\\ \underline{\mathcal{J}}_{g} u(t_{0},x_{0}) &:=\bigcup\limits_{0<{\delta}\le t_{0}} \Big\{(a,z,{\gamma}) \in {\mathbb{R}}^{3}: \psi^{a, z,{\gamma}}_{g, u, t_{0}, x_{0}} \ge u~\text{on}~ D_{{\delta}}(t_{0},x_{0})\Big\}. \end{array} $$

Note that \({\partial }_{{\omega }} \psi ^{a, z,{\gamma }}_{g, u, t_{0}, x_{0}} = g({\cdot }, \psi ^{a, z,{\gamma }}_{g, u, t_{0}, x_{0}}, {\partial }_{x} \psi ^{a, z,{\gamma }}_{g, u, t_{0}, x_{0}})\) holds true only at (t0,x0), but not in Dδ(t0,x0), so in general \( \psi ^{a, z,{\gamma }}_{g, u, t_{0}, x_{0}} \notin \mathcal {A}^{0}_{g} u(t_{0},x_{0}; {\delta })\). Nevertheless, we still have the following equivalence.
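Once g and its partial derivatives are available, the coefficients in (5.5) are mechanical to compute. The following sketch spells this out; the dictionary `dg` of partials and the sample ω-independent g are assumptions of this illustration.

```python
import math

def jet_coefficients(g, dg, t0, x0, y, z, gamma):
    """Coefficients b, q, c of the semi-jet test function psi in (5.5)."""
    b = g(t0, x0, y, z)
    gy, gz = dg['y'](t0, x0, y, z), dg['z'](t0, x0, y, z)
    q = dg['x'](t0, x0, y, z) + gy * z + gz * gamma
    c = dg['w'](t0, x0, y, z) + gy * b + gz * q    # 'w' holds the partial in omega
    return b, q, c

def psi(t, x, *, t0, x0, y, a, z, gamma, b, q, c, omega1, omega2):
    """psi^{a,z,gamma}_{g,u,t0,x0}(t,x); omega1 = omega_{t0,t}, omega2 = underline omega_{t0,t}."""
    return (y + a * (t - t0) + b * omega1 + z * (x - x0)
            + c * omega2 + 0.5 * gamma * (x - x0) ** 2
            + q * omega1 * (x - x0))

# sample omega-independent g and its partials -- an illustrative assumption
g = lambda t, x, y, z: math.sin(x) * y + z
dg = {'x': lambda t, x, y, z: math.cos(x) * y,
      'y': lambda t, x, y, z: math.sin(x),
      'z': lambda t, x, y, z: 1.0,
      'w': lambda t, x, y, z: 0.0}

t0, x0, y, a, z, gamma = 0.5, 0.2, 1.0, 0.3, -0.4, 0.8
b, q, c = jet_coefficients(g, dg, t0, x0, y, z, gamma)
val = psi(t0, x0, t0=t0, x0=x0, y=y, a=a, z=z, gamma=gamma,
          b=b, q=q, c=c, omega1=0.0, omega2=0.0)
print(val)   # at (t0, x0) the test function returns y = u(t0, x0)
```

At the base point, where both path increments vanish, ψ reduces to y, matching the requirement \(\psi(t_{0},x_{0})=u(t_{0},x_{0})\).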

Proposition 5.3

Let Assumptions 3.2 and 3.3 be in force and let \(u\in C({ [0,T]\times {\mathbb {R}}})\). Then u is a viscosity supersolution (resp., subsolution) of (3.6) at \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\) if and only if, for every \((a,z,\gamma)\in \overline {\mathcal {J}}_{g} u(t_{0},x_{0})\) (resp., \(\underline {\mathcal {J}}_{g} u(t_{0},x_{0})\)),

$$\begin{array}{@{}rcl@{}} a-f(t_{0},x_{0},u(t_{0},x_{0}),z,\gamma)\ge 0 {\quad} (\text{resp.,}~\le 0). \end{array} $$


We prove only the supersolution case. The subsolution case can be proved similarly.

First, we prove the if part. Assume that (5.6) holds for every \((a,z,\gamma)\in \overline {\mathcal {J}}_{g} u(t_{0},x_{0})\). Let \(\varphi \in \overline {\mathcal {A}}_{g} u(t_{0},x_{0})\). Then there exists a \({\delta}_{0}\in (0,t_{0}\wedge 1]\) such that, whenever \(0\le {\delta}\le {\delta}_{0}\) and \(|h|\le {\delta}_{0}\),

$$\begin{array}{@{}rcl@{}} u(t_{0}-{\delta},x_{0}+h) - u(t_{0},x_{0})\ge \varphi(t_{0}-{\delta},x_{0}+h) -\varphi(t_{0},x_{0}){.} \end{array} $$

By Lemma 2.9, there exists a C>0 such that

$$\begin{array}{@{}rcl@{}} &\varphi(t_{0}-{\delta},x_{0}+h) -\varphi(t_{0},x_{0})\ge \left[ - \partial_{t}^{{\omega}} \varphi~ {\delta} +\partial_{x} \varphi ~h +\frac{1}{2} \partial_{{xx}}^{2}\varphi~|h|^{2} \right.\\ &\left.+\partial_{\omega} \varphi~\omega_{t_{0},t}+\partial^{2}_{\omega\omega}\varphi ~ \underline{\omega}_{t_{0},t} +\partial_{x\omega} \varphi~h~\omega_{t_{0},t}\right.](t_{0},x_{0})- C\big[{\delta}^{\alpha}+|h|\big]^{(2+\beta)}. \end{array} $$

For any ε>0, by (2.2), there exists a \({\delta}_{\varepsilon}\in (0,{\delta}_{0})\) such that, for every \(0\le {\delta}\le {\delta}_{\varepsilon}\) and \(|h|\le {\delta}_{\varepsilon}\),

$$\begin{array}{@{}rcl@{}} &u(t_{0}-{\delta},x_{0}+h) - u(t_{0},x_{0})\ge \left[ - \partial_{t}^{{\omega}} \varphi~ {\delta} +\partial_{x} \varphi ~h +\frac{1}{2} \partial_{{xx}}^{2}\varphi~|h|^{2} \right.\\ &\left.+\partial_{\omega} \varphi~\omega_{t_{0},t}+\partial^{2}_{\omega\omega}\varphi ~ \underline{\omega}_{t_{0},t} +\partial_{x\omega} \varphi~h~\omega_{t_{0},t}\right](t_{0},x_{0}) -{\varepsilon}~{\delta} -\frac{1}{2}{\varepsilon}~|h|^{2}{.} \end{array} $$

By (5.4), the above inequality implies \((\partial _{t}^{{\omega }} \varphi +{\varepsilon },\partial _{x} \varphi,\partial _{{xx}}^{2} \varphi -{\varepsilon })(t_{0},x_{0}) \in \overline {\mathcal {J}}_{g} u(t_{0},x_{0})\). Thus, by (5.6), \([\partial _{t}^{{\omega }} \varphi + {\varepsilon } -f(\cdot,\varphi,\partial _{x}\varphi, \partial _{{xx}}^{2}\varphi -{\varepsilon }) ](t_{0},x_{0})\ge 0\). Sending ε→0 yields \({\mathcal {L}}{\varphi }(t_{0},x_{0})\ge 0\), i.e., u is a viscosity supersolution at (t0,x0).

Next, we prove the only if part. Assume u is a viscosity supersolution at \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\). Let \((a,z,\gamma)\in \overline {\mathcal {J}}_{g} u(t_{0},x_{0})\), fix \({\varepsilon}>0\), and consider the RPDE

$$\begin{array}{@{}rcl@{}} \varphi(t,x) &= u(t_{0},x_{0})+[a+{\varepsilon}]\,[t-t_{0}] + z\,[x-x_{0}]+\frac{1}{2}[\gamma-{\varepsilon}]\,\lvert x- x_{0}\rvert^{2}\\ &\qquad - \int_{t}^{t_{0}} g(\cdot,\varphi,\partial_{x}\varphi)(s,x)\,d{\omega}_{s}. \end{array} $$

By Theorem 4.9, the RPDE above has a classical solution \( {\varphi }\in C^{2}_{{\alpha },\beta }(D_{{\delta }}(t_{0},x_{0}))\) for some \({\delta}\in (0,{\delta}_{0}]\). It is clear that \({\varphi } \in \mathcal {A}^{0}_{g} u(t_{0},x_{0};{\delta })\). Moreover, by using our Taylor expansion (Lemma 2.9), one may easily verify that

$$\begin{array}{@{}rcl@{}} {\varphi} = \psi^{a+{\varepsilon}, z, {\gamma}-{\varepsilon}}_{g, u, t_{0},x_{0}} + R~\text{on}~ D_{{\delta}}(t_{0},x_{0}), \end{array} $$

where \(|R(t,x)|\le C\big[|t-t_{0}|^{\alpha}+|x-x_{0}|\big]^{2+\beta}\). Then, by choosing δ>0 small enough, we have \({\varphi } \le \psi ^{a, z, {\gamma }}_{g, u, t_{0},x_{0}} \le u~\text {on} ~ D_{{\delta }}(t_{0},x_{0}), \) where the second inequality is due to the assumption \((a,z,\gamma)\in \overline {\mathcal {J}}_{g} u(t_{0},x_{0})\). This implies \({\varphi } \in \overline {\mathcal {A}}_{g} u(t_{0},x_{0})\). Thus \(0 \le {\mathcal {L}}{\varphi }(t_{0}, x_{0}) = a+{\varepsilon } - f(t_{0}, x_{0}, u(t_{0},x_{0}), z, {\gamma }-{\varepsilon })\). Sending ε→0 yields (5.6). □

Remark 5.4

By Proposition 5.3 and its proof, we can see that, depending on the regularity order k0 of g as specified in Assumption 3.2, it is equivalent to use test functions of class \(C^{k}_{{\alpha },\beta }(D_{{\delta }}(t_{0},x_{0}))\) for any k between 2 and k0. This is crucial for Theorem ?? below.

Change of variables formula

Let \(\lambda\in C([0,T])\) and let n≥2 be an even integer. For any \(u:{ [0,T]\times {\mathbb {R}}}\to {\mathbb {R}}\), define

$$\begin{array}{@{}rcl@{}} \tilde u(t,x):={e^{\eta_{t}}\over 1+x^{n}} u(t,x),\text{ where}\ \eta_{t} := \int_{0}^{t} \lambda_{s}\,ds. \end{array} $$

If \(u\in C^{2}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\), then

$$\begin{array}{@{}rcl@{}} \left.\begin{array}{c} \qquad u = e^{-\eta_{t}}(1+x^{n}) \tilde u,{\quad} {\partial}_{x} u = e^{-\eta_{t}} \big[(1+x^{n}) {\partial}_{x} \tilde u + n x^{n-1} \tilde u\big],\\ {\partial}^{2}_{{xx}} u = e^{-\eta_{t}}\big[(1+x^{n}) {\partial}^{2}_{{xx}} \tilde u + 2n x^{n-1} {\partial}_{x} \tilde u + n(n-1) x^{n-2} \tilde u\big],\\ \,\,\,\,{\partial}_{{\omega}} u = e^{-\eta_{t}}(1+x^{n}) {\partial}_{{\omega}} \tilde u,{\quad} {\partial}^{{\omega}}_{t} u = e^{-\eta_{t}} (1+x^{n}) [{\partial}^{{\omega}}_{t} \tilde u - \lambda_{t} \tilde u]. \end{array}\right. \end{array} $$
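The identities for the spatial derivatives above can be checked against finite differences; ũ, η, the exponent n, and the evaluation point below are arbitrary smooth samples used only for this check.

```python
import math

# Numerical check of the change-of-variables identities:
# u = e^{-eta}(1 + x**n) * u_tilde, with illustrative smooth samples for u_tilde, eta.
n = 4
eta = lambda t: 0.3 * t
ut    = lambda t, x: math.sin(x) * math.exp(-0.1 * t)      # u_tilde
ut_x  = lambda t, x: math.cos(x) * math.exp(-0.1 * t)      # d_x u_tilde
ut_xx = lambda t, x: -math.sin(x) * math.exp(-0.1 * t)     # d2_xx u_tilde

u = lambda t, x: math.exp(-eta(t)) * (1 + x**n) * ut(t, x)

def ux_claim(t, x):       # stated formula for d_x u
    return math.exp(-eta(t)) * ((1 + x**n) * ut_x(t, x) + n * x**(n - 1) * ut(t, x))

def uxx_claim(t, x):      # stated formula for d2_xx u
    return math.exp(-eta(t)) * ((1 + x**n) * ut_xx(t, x)
                                + 2 * n * x**(n - 1) * ut_x(t, x)
                                + n * (n - 1) * x**(n - 2) * ut(t, x))

t0, x0, h = 0.7, 1.3, 1e-5
ux_num  = (u(t0, x0 + h) - u(t0, x0 - h)) / (2 * h)
uxx_num = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / h**2
print(abs(ux_num - ux_claim(t0, x0)), abs(uxx_num - uxx_claim(t0, x0)))
```

Both discrepancies are at the level of the finite-difference error, confirming that the formulas are just the product rule applied to \(e^{-\eta_{t}}(1+x^{n})\tilde u\).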

Define \(\tilde {f}:{ [0,T]\times {\mathbb {R}}}^{4}\to {\mathbb {R}}\) and \(\tilde {g}:{ [0,T]\times {\mathbb {R}}}^{3}\to {\mathbb {R}}\) by

$$\begin{array}{@{}rcl@{}} &\tilde{f}(t,x,y,z,\gamma):=\lambda_{t}\,y+{e^{\eta_{t}}\over 1+x^{n}} \, f\left(t,x,{(1+x^{n})y\over e^{\eta_{t}}},\right.\\ &\qquad \left.{(1+x^{n}) z + n x^{n-1} y\over e^{\eta_{t}}}, {(1+x^{n}) {\gamma} + 2n x^{n-1} z + n(n-1) x^{n-2} y\over e^{\eta_{t}}}\right),\\ &\tilde{g}(t,x,y,z):= {e^{\eta_{t}}\over 1+x^{n}}\, g\Big(t,x,{(1+x^{n})y\over e^{\eta_{t}}}, {(1+x^{n}) z + n x^{n-1} y\over e^{\eta_{t}}}\Big). \end{array} $$

Clearly, \(\tilde f\) and \(\tilde g\) inherit the regularity of f and g. Whenever they are smooth,

$$\begin{array}{@{}rcl@{}} &{\partial}_{y} \tilde f = \lambda_{t} + {\partial}_{y} f + {\partial}_{z} f {nx^{n-1}\over 1+x^{n}} + {\partial}_{{\gamma}} f {n(n-1)x^{n-2}\over 1+x^{n}},\\ &{\partial}_{z} \tilde f ={\partial}_{z} f + {\partial}_{{\gamma}} f {2 nx^{n-1}\over 1+x^{n}};{\quad} {\partial}_{{\gamma}} \tilde f = {\partial}_{{\gamma}} f;{\quad} {\partial}^{2}_{{\gamma}{\gamma}} \tilde f = e^{-\eta_{t}} (1+x^{n}) {\partial}^{2}_{{\gamma}{\gamma}} f. \end{array} $$

Then it is straightforward to verify that \(\tilde f\) and \(\tilde g\) inherit most desired properties of f and g that we utilize later.

Lemma 5.5

(i) If g is of the form of (7.1) or (7.26), then so is \(\tilde g\); and if f is of the form of (7.29), then so is \(\tilde f\).

(ii) If f is convex in γ, then so is \(\tilde f\).

(iii) If f is uniformly parabolic, then so is \(\tilde f\).

(iv) If f is uniformly Lipschitz continuous in y, z, γ, then so is \(\tilde f\).

(v) If \(\|f({\cdot }, y,z,{\gamma })\|_{C^{{\alpha }}({ [0,T]\times {\mathbb {R}}})} \le C[1+|y|+|z|+|{\gamma }|]\), then \(\tilde f\) satisfies a bound of the same type.

In particular, if f and g satisfy Assumptions 3.2 and 3.3, then so do \(\tilde f\) and \(\tilde g\). However, we remark that \(\tilde g\) does not inherit the same form when g is in the form of (4.23). Now consider the RPDE for \(\tilde u\):

$$\begin{array}{@{}rcl@{}} \tilde{u}(t,x)&=&u_{0}(x)+\int_{0}^{t} \tilde{f}(s,x,\tilde{u}(s,x),\partial_{x}\tilde{u}(s,x),\partial^{2}_{{xx}}\tilde{u}(s,x))\,ds\\ &+& \int_{0}^{t} \tilde{g}(s,x,\tilde{u}(s,x),\partial_{x}\tilde{u}(s,x))\,d{\omega}_{s},\quad (t,x)\in { [0,T]\times{\mathbb{R}}}. \end{array} $$

Proposition 5.6

Let Assumptions 3.2 and 3.3 be in force, \(\lambda \in C([0,T])\), n≥2 even, and \(u\in C({ [0,T]\times {\mathbb {R}}})\). Then u is a viscosity subsolution (resp., classical subsolution) of RPDE (3.6) if and only if \(\tilde {u}\) is a viscosity subsolution (resp., classical subsolution) of RPDE (5.11).


The equivalence of the classical solution properties is straightforward. Regarding the viscosity solution properties, we prove the if part; the only if part can be proved similarly.

Assume that \(\tilde u\) is a viscosity subsolution of RPDE (5.11). For any \((t_{0}, x_{0})\in (0, T]\times {\mathbb {R}}\) and \({\varphi } \in \underline {\mathcal {A}}_{g} u(t_{0}, x_{0})\), put \(\tilde {\varphi }(t,x) := {e^{\eta _{t}} \over 1+x^{n}} {\varphi }(t,x)\). It is straightforward to check that \(\tilde {\varphi } \in \underline {\mathcal {A}}_{\tilde g} \tilde u(t_{0}, x_{0})\). Then, by the viscosity subsolution property of \(\tilde u\) at (t0,x0),

$$\begin{array}{@{}rcl@{}} 0 \ge {\partial}^{{\omega}}_{t}\tilde {\varphi} - \tilde f(t_{0}, x_{0}, \tilde {\varphi}, {\partial}_{x} \tilde{\varphi}, {\partial}^{2}_{{xx}} \tilde {\varphi}) = {e^{\eta_{t_{0}}}\over 1+x_{0}^{n}} \Big[{\partial}^{{\omega}}_{t} {\varphi} - f(t_{0}, x_{0}, {\varphi}, {\partial}_{x} {\varphi}, {\partial}^{2}_{{xx}}{\varphi})\Big]. \end{array} $$

This implies that u is a viscosity subsolution of RPDE (3.6). □

Remark 5.7

Let (f,g) satisfy Assumptions 3.2 and 3.3 and let u be a viscosity semi-solution of RPDE (3.6).

(i) If u has polynomial growth, by choosing n large enough, we have

$$\begin{array}{@{}rcl@{}} {\lim}_{|x|\to \infty} \sup_{0\le t\le T}|\tilde u(t,x)| = 0. \end{array} $$

(ii) If f is uniformly Lipschitz continuous in y, by choosing λ sufficiently large (resp., small), we have

$$\begin{array}{@{}rcl@{}} \tilde{f} \text{ is strictly increasing (resp., decreasing) in } {y}. \end{array} $$

In particular, \(\tilde f\) will be proper in the sense of Crandall et al. (1992).
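The requisite size of λ can be read off from the derivative formula for \(\partial_{y}\tilde f\) above. The following is a sketch under the assumption that f is uniformly L-Lipschitz in (y,z,γ), using \(|x|^{n-1}\le 1+x^{n}\) and \(|x|^{n-2}\le 1+x^{n}\) for even n≥2:

```latex
\partial_y \tilde f
  = \lambda_t + \partial_y f + \partial_z f\,\frac{n x^{n-1}}{1+x^{n}}
    + \partial_\gamma f\,\frac{n(n-1)\,x^{n-2}}{1+x^{n}}
  \;\ge\; \lambda_t - L\big[1 + n + n(n-1)\big]
  \;=\; \lambda_t - L\big[1 + n^{2}\big].
```

Thus \(\lambda_{t} \ge L(1+n^{2})+1\) forces \(\partial_{y}\tilde f \ge 1 > 0\), and the decreasing case follows analogously by taking \(\lambda_{t} \le -L(1+n^{2})-1\).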


The following technical lemma is for the stability result. Given \((t_{0},x_{0})\in [0,T)\times {\mathbb {R}}\) and ε>0 small, put

$$\begin{array}{@{}rcl@{}} D^{+}_{{\varepsilon}}(t_{0}, x_{0}) &:=& [t_{0}, t_{0}+{\varepsilon}^{3}) \times O_{{\varepsilon}}(x_{0}),\\ {\partial} D^{+}_{{\varepsilon} }(t_{0}, x_{0}) &:=& \big\{(t,x): t\in [t_{0}, t_{0}+{\varepsilon}^{3}], |x-x_{0}| = {\varepsilon}~\text{or}~ t= t_{0}+{\varepsilon}^{3}, |x-x_{0}|\le {\varepsilon}\big\}. \end{array} $$
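In coordinates, the parabolic cylinder and its boundary can be encoded literally. The toy check below (with assumed values of t0, x0, ε, and taking \(O_{\varepsilon}(x_{0})\) to be the open interval, an assumption) simply confirms which faces belong to the parabolic boundary; in particular the bottom face t=t0 does not.

```python
# Toy parameters (assumptions): the corner (t0, x0) and radius eps.
t0, x0, eps = 0.0, 0.0, 0.1

def in_D_plus(t, x):
    # D^+_eps(t0, x0) = [t0, t0 + eps^3) x O_eps(x0); O_eps(x0) is taken to be
    # the open interval (x0 - eps, x0 + eps) here (an assumption).
    return t0 <= t < t0 + eps**3 and abs(x - x0) < eps

def on_parabolic_boundary(t, x):
    lateral = t0 <= t <= t0 + eps**3 and abs(x - x0) == eps
    top = t == t0 + eps**3 and abs(x - x0) <= eps
    return lateral or top

assert in_D_plus(t0, x0)                        # the corner point is interior
assert on_parabolic_boundary(t0, x0 + eps)      # lateral face
assert on_parabolic_boundary(t0 + eps**3, x0)   # top face
assert not in_D_plus(t0 + eps**3, x0)           # top face is excluded from D^+
assert not on_parabolic_boundary(t0, x0)        # bottom face is not boundary
```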

Lemma 5.8

Let Assumption 3.2 hold, let \((t_{0},x_{0})\in [0,T)\times {\mathbb {R}}\), and let \({\delta }_{0}\in (0,T-t_{0}]\). Assume that \({\varphi }\in C^{k}_{{\alpha },\beta }(D^{+}_{{\delta }_{0}^{1/3}}(t_{0},x_{0}))\) for some large k and \({\partial }_{{\omega }}{\varphi }=g(\cdot,{\varphi },{\partial }_{x}{\varphi })\) in \(D^{+}_{{\delta }_{0}^{1/3}}(t_{0},x_{0})\). Define

$$\begin{array}{@{}rcl@{}} g^{\varphi}(t,x,y,z):=[g(\cdot,\varphi+y,\partial_{x}\varphi+z)- g(\cdot,\varphi,\partial_{x}\varphi)](t,x). \end{array} $$

Then there exists an \({\varepsilon }_{0}\in (0,1]\) such that, for every \({\varepsilon }\in (0,{\varepsilon }_{0}]\), there exists a function \(\psi ^{{\varepsilon }}\in C^{4}_{{\alpha },\beta } (D^{+}_{{\varepsilon }}(t_{0},x_{0}))\) that satisfies the following properties:

$$\begin{array}{*{20}l} & \partial_{\omega} \psi^{{\varepsilon}} =g^{\varphi}(\cdot,\psi^{{\varepsilon}},\partial_{x}\psi^{{\varepsilon}}),{\quad} {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}} = {\varepsilon};& \end{array} $$
$$\begin{array}{*{20}l} & \lvert\psi^{{\varepsilon}}\rvert+ \lvert\partial_{x}\psi^{{\varepsilon}}\rvert+\lvert\partial^{2}_{{xx}}\psi^{{\varepsilon}}\rvert \le C{\varepsilon}^2~ \text{in}~ D^+_{{\varepsilon}}(t_{0},x_0);& \end{array} $$
$$\begin{array}{*{20}l} &\psi^{{\varepsilon}}(t_{0},x_0)<0<\inf_{(t,x)\in\partial D^+_{{\varepsilon}} (t_{0},x_0)} \psi^{{\varepsilon}}(t,x).& \end{array} $$


Without loss of generality, we let (t0,x0)=(0,0). Moreover, since our results are local, we may assume that \({\varphi }\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\). Let \(\iota \in C^{\infty }({\mathbb {R}})\) be such that \(\iota (x)=x^{4}\) for |x|≤1 and ι(x)=0 for |x|≥2. For any ε>0 small, consider the RPDE

$$\begin{array}{@{}rcl@{}} \psi^{{\varepsilon}}(t,x) = \iota(x) - {\varepsilon}^{5} + {\varepsilon} t + \int_{0}^{t} g^{{\varphi}}(s,x, \psi^{{\varepsilon}}, {\partial}_{x} \psi^{{\varepsilon}}) d{\omega}_s. \end{array} $$

By Theorem 4.9, there exists a \({\delta }_{1}\le {\delta }_{0}\) such that \(\psi ^{{\varepsilon }} \in C^{4}_{{\alpha },\beta }([0,{\delta }_{1}]\times {\mathbb {R}})\) for all ε≤1 and

$$\begin{array}{@{}rcl@{}} \sup_{{\varepsilon}\le 1} \|\psi^{{\varepsilon}}\|_{4} <\infty. \end{array} $$

The equalities in (5.15) are obvious. Now, we verify that ψε satisfies (5.16) and (5.17). Recall the fourth-order Taylor expansion in Lemma 2.9:

$$\begin{array}{@{}rcl@{}} \psi^{{\varepsilon}}(t,x) = \sum_{\ell+|\nu|\le 4} {1\over \ell!} ({\partial}^{\ell}_{x}{\mathcal{D}}_{\nu} \psi^{{\varepsilon}})(0,0) x^{\ell}\mathcal{I}^{\nu}_{0,t} + R_{4}(t,x). \end{array} $$

We claim that

$$\begin{array}{@{}rcl@{}} & {\partial}^{4}_{x}\psi^{{\varepsilon}}(0,0) = 24,{\quad} {\partial}^{\ell_{1}}_{x} {\partial}^{\ell_{2}}_{{\omega}} {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = O(1){\quad}\text{for}~ \ell_{1} + \ell_{2} = 3,\\ & {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}}(0,0) = {\varepsilon},{\quad} {\partial}^{{\omega}}_{t} {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = O({\varepsilon}),{\quad} {\partial}^{{\omega}}_{t} {\partial}^{2}_{{\omega}} \psi^{{\varepsilon}}(0,0) = O({\varepsilon}),\\ & {\partial}^{\ell}_{x}{\mathcal{D}}_{\nu}\psi^{{\varepsilon}}(0,0) = O({\varepsilon}^{5}) ~ \text{for all other terms such that}~ \ell + |\nu|\le 4. \end{array} $$

Further, note that, by (5.19), for every \((t,x) \in \overline D^{+}_{{\varepsilon }}(0,0)\),

$$\begin{array}{@{}rcl@{}} | R_{4}(t,x) | \le C[t^{{\alpha}} + |x|]^{4+\beta} \le C[ {\varepsilon}^{3{\alpha}} + {\varepsilon}]^{4+\beta} \le C{\varepsilon}^{4+\beta}. \end{array} $$

Then, for \((t,x) \in \overline D^{+}_{{\varepsilon }}(0,0)\), plugging (5.21) and (5.22) into (5.20), we obtain

$$\begin{array}{@{}rcl@{}} | \psi^{{\varepsilon}}(t,x)| &\le& C \Big[\sum_{\ell_1+\ell_2=4} |x|^{\ell_{1}} t^{{\alpha}\ell_{2}}+ {\varepsilon} [t + t^{1+{\alpha}} + t^{1+2{\alpha}}] + {\varepsilon}^5+{\varepsilon}^{4+\beta}\Big]\\ &\le& C\Big[ \sum_{\ell_1+\ell_2=4} {\varepsilon}^{\ell_1+ 3{\alpha} \ell_{2}} + {\varepsilon}^{4}\Big] \le C{\varepsilon}^4. \end{array} $$

Similarly, applying the third-order Taylor expansion of xψε and the second-order Taylor expansion of \({\partial }^{2}_{{xx}} \psi ^{{\varepsilon }}\), we obtain

$$\begin{array}{@{}rcl@{}} |{\partial}_{x} \psi^{{\varepsilon}}(t,x)| \le C{\varepsilon}^{3},{\quad} |{\partial}^{2}_{{xx}} \psi^{{\varepsilon}}(t,x)|\le C{\varepsilon}^{2},{\quad} (t,x) \in \overline D^+_{{\varepsilon}}(0,0). \end{array} $$

Thus we have proved (5.16).

To prove (5.17), let \((t,x) \in {\partial } D^{+}_{{\varepsilon }}(0,0)\). By (5.20), (5.21), and (5.22),

$$\begin{array}{@{}rcl@{}} \psi^{{\varepsilon}}(t,x) \ge {\varepsilon} t + {24\over 4!} x^{4} - C\big[\sum_{\ell_{1} + \ell_{2} = 3} |x|^{\ell_{1}} t^{{\alpha} (\ell_{2}+1)} +{\varepsilon} [ t^{1+{\alpha}} + t^{1+2{\alpha}}]+{\varepsilon}^{5} + {\varepsilon}^{4+\beta}\big]. \end{array} $$

Note that in both cases, namely \(t\in [0,{\varepsilon }^{3}]\) with |x|=ε and \(t={\varepsilon }^{3}\) with |x|≤ε, we have \( {\varepsilon } t + {24\over 4!} x^{4} \ge {\varepsilon }^{4}. \) Hence

$$\begin{array}{@{}rcl@{}} \psi^{{\varepsilon}}(t,x) &\ge& {\varepsilon}^{4} - C\sum_{\ell_{1} + \ell_{2} = 3} {\varepsilon}^{\ell_{1} + 3{\alpha} (\ell_2+1)} - C{\varepsilon}^{1+3(1+{\alpha})} - C{\varepsilon}^{5} - C {\varepsilon}^{4+\beta}\\ &\ge& {\varepsilon}^{4} - C\big[{\varepsilon}^{3 + 3{\alpha}} + {\varepsilon}^{4+\beta}] \ge \frac{1}{2} {\varepsilon}^{4}, \end{array} $$

when ε is small, thanks to the assumption that 3α>1. Moreover, it is clear that ψε(0,0)=−ε5<0. Thus we have proved (5.17).
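The two elementary facts used here, that \({\varepsilon } t + x^{4} \ge {\varepsilon }^{4}\) on the boundary pieces and that the error terms are of strictly higher order than ε⁴ exactly because 3α>1, can be checked numerically; the values of ε, α, β, C below are toy assumptions.

```python
# Toy constants (assumptions), chosen so that 3*alpha > 1 as in the text.
eps, alpha, beta, C = 1e-8, 0.4, 0.5, 10.0

# Lateral boundary |x| = eps: x^4 alone equals eps^4 (for any t >= 0).
# Top boundary t = eps^3: eps * t alone equals eps^4.
assert eps**4 >= eps**4
assert eps * eps**3 >= eps**4

# 3*alpha > 1 makes both error exponents exceed 4, so eps^4 / 2 survives:
assert 3 + 3 * alpha > 4 and 4 + beta > 4
assert C * (eps**(3 + 3 * alpha) + eps**(4 + beta)) <= 0.5 * eps**4
```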

It remains to prove (5.21). First, by (5.18) it is clear that

$$\begin{array}{@{}rcl@{}} \psi^{{\varepsilon}}(0,x) = \iota(x) -{\varepsilon}^{5},\, {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}}(t,x) = {\varepsilon},\, {\partial}_{{\omega}} \psi^{{\varepsilon}}(t,x) = g^{{\varphi}}(t,x, \psi^{{\varepsilon}}, {\partial}_{x} \psi^{{\varepsilon}}). \end{array} $$

By the first two equalities of (5.23), one can see that

$$\begin{array}{@{}rcl@{}} & \psi^{{\varepsilon}}(0,0) = -{\varepsilon}^{5} <0, \, {\partial}^{4}_{x}\psi^{{\varepsilon}}(0,0) = 24,\, {\partial}^{\ell}_{x} \psi^{{\varepsilon}}(0,0)=0~\text{for}~ 1\le \ell\le 3,\\ & {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}}(0,0) = {\varepsilon}, \, {\partial}^{\ell}_{x} {\mathcal{D}}_{\nu} {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}}(0,0)=0~\text{for}~ 1\le \ell + |\nu|\le 2,\\ & {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = g\big(0,0, {\varphi}(0,0) - {\varepsilon}^{5}, {\partial}_{x}{\varphi}(0,0)\big)\\ &\qquad\qquad\quad - g\big(0,0, {\varphi}(0,0), {\partial}_{x}{\varphi}(0,0)\big) = O({\varepsilon}^{5}). \end{array} $$

All terms above satisfy (5.21). The remaining derivatives involved in (5.20) take the form

$$ {\partial}^{\ell}_{x} {\mathcal{D}}_{\nu} {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = {\partial}^{\ell}_{x} {\mathcal{D}}_{\nu}\big[g^{{\varphi}}({\cdot}, \psi^{{\varepsilon}}, {\partial}_{x} \psi^{{\varepsilon}})\big]\big|_{(t,x) = (0,0)},\,1\le \ell + |\nu|\le 3. $$

Note that

$$\begin{array}{@{}rcl@{}} [{\partial}^{\ell}_{x} {\mathcal{D}}_{\nu} g^{{\varphi}}]({\cdot}, \psi^{{\varepsilon}}, {\partial}_{x} \psi^{{\varepsilon}})\big|_{(t,x) = (0,0)} = [{\partial}^{\ell}_{x} {\mathcal{D}}_{\nu} g^{{\varphi}}](0, 0, -{\varepsilon}^{5}, 0) = O({\varepsilon}^{5}). \end{array} $$

Then (5.25) becomes

$$\begin{array}{@{}rcl@{}} {\partial}^{\ell}_{x} {\mathcal{D}}_{\nu} {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = {\partial}^{\ell}_{x} {\mathcal{D}}_{\nu}\big[g^{{\varphi}}(0, 0, \psi^{{\varepsilon}}(0,x), {\partial}_{x} \psi^{{\varepsilon}}(0,x))\big]\big|_{x = 0} + O({\varepsilon}^{5}) \\ = {\partial}^{\ell}_{x} {\mathcal{D}}_{\nu}\big[g\big(0, 0, {\varphi}(0,0)+ \psi^{{\varepsilon}}(0,x), {\partial}_{x}{\varphi}(0,0)+ {\partial}_{x} \psi^{{\varepsilon}}(0,x)\big)\big]\big|_{x = 0}+ O({\varepsilon}^{5}). \end{array} $$

Thus the derivatives are combinations of terms involving the derivatives of g with respect to (y,z), which are all bounded by our assumption, and the derivatives of \((\psi ^{{\varepsilon }},{\partial }_{x}\psi ^{{\varepsilon }})\). By a tedious but straightforward computation of the derivatives, we obtain from (5.24) and (5.27), with the abbreviation η(0):=η(0,0,φ(0,0),∂xφ(0,0)) for a generic function η, that

$$\begin{array}{@{}rcl@{}} & {\partial}^{{\omega}}_{t} {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = {\partial}_{y} g(0) {\varepsilon} + O({\varepsilon}^{5}),{\quad} {\partial}^{{\omega}}_{t} {\partial}^{2}_{{\omega}} \psi^{{\varepsilon}}(0,0) = |{\partial}_{y} g(0)|^{2} {\varepsilon} + O({\varepsilon}^{5}),\\ &\qquad {\partial}^{\ell_{1}}_{x} {\partial}^{\ell_{2}}_{{\omega}} {\partial}_{{\omega}} \psi^{{\varepsilon}}(0,0) = 24 [{\partial}_{z} g(0)]^{\ell_{2} + 1} + O({\varepsilon}^{5}){\quad}\text{for}~ \ell_{1} + \ell_{2} = 3, \end{array} $$

and all other terms either contain \({\partial }^{\ell }_{x} \psi ^{{\varepsilon }}(0,0)=0\) for some \(1\le \ell \le 3\), or \({\partial }^{\ell }_{x} {\mathcal {D}}_{\nu } {\partial }^{{\omega }}_{t} \psi ^{{\varepsilon }}(0,0)=0\) for some \(1\le \ell +|\nu |\le 2\), or

$$\begin{array}{@{}rcl@{}} [{\partial}^{\ell}_{x} {\mathcal{D}}_{\nu} g^{{\varphi}}]({\cdot}, \psi^{{\varepsilon}}, {\partial}_{x} \psi^{{\varepsilon}})\big|_{(t,x) = (0,0)} = O({\varepsilon}^{5}) \end{array} $$

for some \(1\le \ell +|\nu |\le 3\). This proves (5.21) in all cases and hence completes the proof of the lemma. □

Theorem 5.9

(Stability) Let Assumption 3.2 hold and (fn)n≥1 be a sequence of functions satisfying Assumption 3.3. For each n≥1, let un be a viscosity subsolution of RPDE (3.6) with generator (fn,g). Assume further that, for some functions f and u,

$$\begin{array}{@{}rcl@{}} {\lim}_{n\to\infty} [f_{n}-f](t,x,y,z,\gamma)=0\quad\text{and}\quad {\lim}_{n\to\infty} [u_{n}-u](t,x)=0 \end{array} $$

locally uniformly in \((t,x,y,z,\gamma)\in { [0,T]\times {\mathbb {R}}}^{4}\). Then u is a viscosity subsolution of (3.6).


By the locally uniform convergence, f and u are continuous. Let \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\) and \(\varphi \in \underline {\mathcal {A}}_{g} u(t_{0},x_{0})\). We apply Lemma 5.8 at (t0,x0), but in the left neighborhood

$$\begin{array}{@{}rcl@{}} D^{-}_{{\varepsilon}}(t_{0}, x_{0}) &:=& (t_{0}-{\varepsilon}^{3}, t_{0}] \times O_{{\varepsilon}}(x_{0}),\\ {\partial} D^{-}_{{\varepsilon} }(t_{0}, x_{0}) &:=& \big\{(t,x): t\in [t_{0}-{\varepsilon}^{3}, t_{0}], |x-x_{0}| = {\varepsilon}~\text{or}~ t= t_{0}-{\varepsilon}^{3}, |x-x_{0}|\le {\varepsilon}\big\}. \end{array} $$

We emphasize that, while for notational simplicity we established Lemma 5.8 in the right neighborhood \(D^{+}_{{\varepsilon }}(t_{0}, x_{0})\), we may easily reformulate it to the left neighborhood by using the backward rough paths introduced in (2.12). By Remark 5.4, we may assume without loss of generality that \({\varphi }\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k. Then, for any ε>0 small, by Lemma 5.8, there exists \(\psi ^{{\varepsilon }}\in C^{4}_{{\alpha },\beta } (D^{-}_{{\varepsilon }}(t_{0},x_{0}))\) such that the following holds:

$$\begin{array}{@{}rcl@{}} \left.\begin{array}{c} \partial_{\omega} \psi^{{\varepsilon}} =g^{\varphi}(\cdot,\psi^{{\varepsilon}},\partial_{x}\psi^{{\varepsilon}}),{\quad} {\partial_{t}^{\omega}} \psi^{{\varepsilon}} = {\varepsilon};\\ \lvert\psi^{{\varepsilon}}\rvert+ \lvert\partial_{x}\psi^{{\varepsilon}}\rvert+ \lvert\partial^{2}_{{xx}}\psi^{{\varepsilon}}\rvert \le C{\varepsilon}^2~ \text{in}~ D^-_{{\varepsilon}}(t_{0},x_0);\\ \psi^{{\varepsilon}}(t_{0},x_0)<0<\inf_{(t,x)\in\partial D^-_{{\varepsilon}} (t_{0},x_0)} \psi^{{\varepsilon}}(t,x). \end{array}\right. \end{array} $$

Setting φε:=φ+ψε, this yields

$$\begin{array}{@{}rcl@{}} \sup_{(t,x)\in\partial D^{-}_{{\varepsilon}} (t_{0},x_{0})} \big[[u-\varphi^{{\varepsilon}}](t,x)\big]<0<[u-\varphi^{{\varepsilon}}](t_{0},x_{0}). \end{array} $$

Since un converges to u locally uniformly, we have, for n=n(ε) large enough,

$$\begin{array}{@{}rcl@{}} \sup_{(t,x)\in\partial D^{-}_{{\varepsilon}} (t_{0},x_{0})} \big[[u_{n}-\varphi^{{\varepsilon}}](t,x)\big]<0<[u_{n}-\varphi^{{\varepsilon}}](t_{0},x_{0}). \end{array} $$

Then there exists \((t_{{\varepsilon }}, x_{{\varepsilon }}) = (t^{n}_{{\varepsilon }}, x^{n}_{{\varepsilon }})\in D^{-}_{{\varepsilon }} (t_{0},x_{0})\) such that

$$\begin{array}{@{}rcl@{}} [u_{n}-\varphi^{{\varepsilon}}](t_{{\varepsilon}},x_{{\varepsilon}}) =0= \max_{[t_{0}-{\varepsilon}^{3},t_{{\varepsilon}}]\times\bar{O}_{{\varepsilon}}(x_{0})} \big[[u_{n}-\varphi^{{\varepsilon}}](t,x)\big]. \end{array} $$

Note that

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} {\varphi}^{{\varepsilon}} &=& {\partial}_{{\omega}} {\varphi} + {\partial}_{{\omega}} \psi^{{\varepsilon}} = g({\cdot}, {\varphi}, {\partial}_{x}{\varphi}) \\ &+& \big[g({\cdot}, {\varphi} + \psi^{{\varepsilon}}, {\partial}_{x} {\varphi} + {\partial}_{x} \psi^{{\varepsilon}}) - g({\cdot}, {\varphi}, {\partial}_{x}{\varphi})\big] = g({\cdot}, {\varphi}^{{\varepsilon}}, {\partial}_{x}{\varphi}^{{\varepsilon}}). \end{array} $$

Then \({\varphi }^{{\varepsilon }}\in \underline {\mathcal {A}}_{g} u_{n}(t_{{\varepsilon }},x_{{\varepsilon }})\). By the viscosity subsolution property of un,

$$\begin{array}{@{}rcl@{}} \partial_{t}^{{\omega}}\varphi^{{\varepsilon}}(t_{{\varepsilon}},x_{{\varepsilon}})- f_{n}(\cdot,\varphi^{{\varepsilon}},\partial_{x}{\varphi^{{\varepsilon}}}, \partial^{2}_{{xx}}\varphi^{{\varepsilon}}) (t_{{\varepsilon}},x_{{\varepsilon}})\le 0. \end{array} $$

Fix n and send ε→0. Then, by the convergence of ψε and its derivatives,

$$\begin{array}{@{}rcl@{}} \partial_{t}^{{\omega}}\varphi(t_{0},x_0)-f_{n}(\cdot,\varphi,\partial_{x}\varphi,\partial^{2}_{{xx}}\varphi) (t_{0},x_0)\le 0. \end{array} $$

Now, by sending \(n\to \infty \), we get \( \partial _{t}^{{\omega }}\varphi (t_{0},x_{0})- f(\cdot,\varphi,\partial _{x}\varphi,\partial ^{2}_{{xx}}\varphi) (t_{0},x_{0})\le 0, \) i.e., u is a viscosity subsolution of (3.6). □

Viscosity solutions of rough PDEs: comparison principle


Throughout this section, we let

$$\begin{array}{@{}rcl@{}} &u_{1} \text{ be a viscosity subsolution of RPDE (3.6),}\\ &u_{2} \text{ be a viscosity supersolution of RPDE (3.6),}\\ &u_{1}(0,{\cdot}) \le u_{2}(0,{\cdot}),\text{ and } u_{1}, u_{2} \text{ have polynomial growth in } {x}. \end{array} $$

Our goal is to show that \(u_{1}\le u_{2}\) on \({ [0,T]\times {\mathbb {R}}}\).

When both u1 and u2 are smooth, u:=u1−u2 solves a linear RPDE. The function F corresponding to this linear RPDE is then linear; see (7.30) below. Thus, by using the representation formula (7.31)–(7.32) below, one can show that \(u_{1}\le u_{2}\).
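The linearization behind this remark is a standard mean-value computation; the sketch below uses our own coefficient names (a, b, c, σ, ν and the interpolation \(\Xi^{\theta}\) are not notation from the text). With u := u1 − u2,

```latex
du = \big[a_{t}(x)\,\partial^{2}_{xx}u + b_{t}(x)\,\partial_{x}u + c_{t}(x)\,u\big]\,dt
     + \big[\sigma_{t}(x)\,\partial_{x}u + \nu_{t}(x)\,u\big]\,d\omega_{t},
\qquad
a_{t}(x) := \int_{0}^{1} \partial_{\gamma} f\big(t,x,\Xi^{\theta}_{t}(x)\big)\,d\theta,
```

where \(\Xi^{\theta} := \theta\,(u_{1},\partial_{x}u_{1},\partial^{2}_{xx}u_{1}) + (1-\theta)\,(u_{2},\partial_{x}u_{2},\partial^{2}_{xx}u_{2})\), and b, c (resp. σ, ν) are defined analogously from \(\partial_{z}f, \partial_{y}f\) (resp. \(\partial_{z}g, \partial_{y}g\)).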

Partial comparison principle

Here, we assume that at least one of the functions u1 and u2 is smooth. We need the following result (cf. Lemma 5.8).

Lemma 6.1

Let Assumption 3.2 be in force. Let \({\varphi }\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k. Let \({\partial }_{{\omega }}{\varphi }=g(\cdot,{\varphi },{\partial }_{x}{\varphi })\) on \({ [0,T]\times {\mathbb {R}}}\). For any 0≤t0<T, \(0<{\delta }\le T-t_{0}\), and ε>0, recall (5.14), and consider the RPDE

$$\begin{array}{@{}rcl@{}} \psi^{{\varepsilon}}(t,x) = {\varepsilon} + t - t_{0}+ \int_{t_{0}}^{t} g^{{\varphi}}(s, x, \psi^{{\varepsilon}}, {\partial}_{x} \psi^{{\varepsilon}}) d{\omega}_{s},\qquad (t,x)\in [t_{0}, t_{0}+{\delta}] \times {\mathbb{R}}. \end{array} $$

Then \(\psi ^{{\varepsilon }}\in C^{2}_{{\alpha },\beta }([t_{0}, t_{0}+{\delta }] \times {\mathbb {R}})\) with \(\|\psi ^{{\varepsilon }}\|_{C^{2}_{{\alpha },\beta }([t_{0}, t_{0}+{\delta }] \times {\mathbb {R}})} \le C\), where C depends only on g and φ, but not on t0, ε, and δ. Moreover, ψε satisfies

$$\begin{array}{@{}rcl@{}} \left.\begin{array}{c} \partial_{\omega} \psi^{{\varepsilon}} =g^{\varphi}(\cdot,\psi^{{\varepsilon}},\partial_{x}\psi^{{\varepsilon}}),{\quad} {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}} = 1,\\ \lvert\psi^{{\varepsilon}}\rvert+ \lvert\partial_{x}\psi^{{\varepsilon}}\rvert+\lvert\partial^{2}_{{xx}}\psi^{{\varepsilon}}\rvert \le C[{\varepsilon}+{\delta}^{{\alpha}\beta}]~ \text{in}~ [t_{0}, t_0+{\delta}]\times {\mathbb{R}},\\ \inf\limits_{x\in {\mathbb{R}}} \psi^{{\varepsilon}}(t,x) >0 {\quad}\text{for all}~ t\in (t_{0}, t_0+{\delta}]. \end{array}\right. \end{array} $$


The uniform regularity of ψε and the first line of (6.3) are clear. Note that ψε(t0,x)=ε, xψε(t0,x)=0, \({\partial }^{2}_{{xx}} \psi ^{{\varepsilon }}(t_{0},x) = 0\). The second line of (6.3) follows from the Hölder continuity of the functions in terms of t. Moreover, since gφ(t,x,0,0)=0, we may write it as gφ(t,x,ψε,xψε)=σ(t,x)ψε+b(t,x)xψε, where σ and b depend on ψε. Then we may view (6.2) as a linear RPDE with coefficients σ and b. Thus, by (7.31)–(7.32), we have a representation formula for ψε. The uniform regularity of ψε implies the uniform regularity of σ and b, which leads to the third line of (6.3). □

Theorem 6.2

Let Assumptions 3.2 and 3.3 and (6.1) be in force. If one of u1 and u2 is in \(C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k, then u1u2.


Toward a contradiction, assume that \([u_{1}-u_{2}](t_{0},x_{0})>0\) for some \((t_{0}, x_{0}) \in (0, T]\times {\mathbb {R}}\). By Remark 5.7 (i), without loss of generality we may assume that both u1 and u2 satisfy (5.12). Put

$$\begin{array}{@{}rcl@{}} c_{{\delta}} := \sup_{(t,x) \in [0, {\delta}] \times {\mathbb{R}}} [u_{1}-u_{2}](t,x),{\quad} {\delta}_{0} := \inf\{{\delta} \ge 0: c_{{\delta}} >0\}. \end{array} $$

Then cδ is nondecreasing in δ, \(c_{0} \le 0 < c_{t_{0}}\), and thus δ0<t0. For any \(0<{\delta }\le t_{0}-{\delta }_{0}\), \(c_{{\delta }_{0} + {\delta }} >0\). By (5.12) and since \([u_{1}-u_{2}](0,{\cdot })\le 0\), there exists \((t_{{\delta }}, x_{{\delta }}) \in ({\delta }_{0}, {\delta }_{0} + {\delta }]\times {\mathbb {R}}\) such that \((u_{1}-u_{2})(t_{{\delta }}, x_{{\delta }}) = c_{{\delta }_{0}+{\delta }}\). Set \({\varepsilon } := c_{{\delta }_{0}+{\delta }}\wedge {\delta }^{{\alpha }\beta }\). Applying Lemma 6.1 with φ:=u2 on [δ0,tδ], but again backwardly in time, we have ψε satisfying

$$\begin{array}{@{}rcl@{}} \left.\begin{array}{c} \psi^{{\varepsilon}}(t_{{\delta}},x) = {\varepsilon},{\quad} {\partial}^{{\omega}}_{t} \psi^{{\varepsilon}} = 1,{\quad} \inf\limits_{x\in {\mathbb{R}}} \psi^{{\varepsilon}}({\delta}_{0},x) >0,\\ \lvert\psi^{{\varepsilon}}\rvert+ \lvert\partial_{x}\psi^{{\varepsilon}}\rvert+\lvert\partial^{2}_{{xx}}\psi^{{\varepsilon}}\rvert \le C{\delta}^{{\alpha}\beta}~ \text{in}~ [{\delta}_{0}, t_{{\delta}}]\times {\mathbb{R}}. \end{array}\right. \end{array} $$

Define φε:=u2+ψε. Note that \( [u_{1}- {\varphi }^{{\varepsilon }}](t_{{\delta }}, x_{{\delta }}) \ge 0 > \sup _{x\in {\mathbb {R}}} [u_{1} - {\varphi }^{{\varepsilon }}]({\delta }_{0}, x). \) Then there exists \((t_{{\delta }}^{*}, x_{{\delta }}^{*}) \in ({\delta }_{0}, t_{{\delta }}]\times {\mathbb {R}}\) such that

$$\begin{array}{@{}rcl@{}} [u_1- {\varphi}^{{\varepsilon}}](t^{*}_{{\delta}}, x^{*}_{{\delta}}) = 0 = \sup_{(t,x) \in [{\delta}_{0}, t^{*}_{{\delta}}]\times {\mathbb{R}}} [u_1- {\varphi}^{{\varepsilon}}](t,x). \end{array} $$

By the definition of gφ, it is clear that \({\partial }_{{\omega }}{\varphi }^{{\varepsilon }}=g({\cdot },{\varphi }^{{\varepsilon }},{\partial }_{x}{\varphi }^{{\varepsilon }})\). Then \({\varphi }^{{\varepsilon }} \in \underline {\mathcal {A}}_{g} u_{1}(t^{\ast }_{{\delta }}, x^{\ast }_{{\delta }})\). Thus, by using the classical supersolution property of u2 and the viscosity subsolution property of u1, we have

$$\begin{array}{@{}rcl@{}} &&\Bigl[\partial_{t}^{{\omega}} u_{2}-f(\cdot,u_{2},\partial_{x} u_{2}, \partial^{2}_{{xx}} u_{2}) \Bigr](t^{\ast}_{{\delta}},x^{\ast}_{{\delta}})\\&&\ge 0 \ge \Bigl[ \partial_{t}^{{\omega}} {\varphi}^{{\varepsilon}} - f(\cdot, {\varphi}^{{\varepsilon}}, \partial_{x} {\varphi}^{{\varepsilon}}, \partial^{2}_{{xx}} {\varphi}^{{\varepsilon}})\Bigr](t^{\ast}_{{\delta}}, x^{\ast}_{{\delta}}). \end{array} $$

Now, at \((t^{*}_{{\delta }}, x^{*}_{{\delta }})\), we have

$$\begin{array}{@{}rcl@{}} 1 &=& \partial_{t}^{{\omega}} {\varphi}^{{\varepsilon}} - \partial_{t}^{{\omega}} u_{2} \le f(\cdot, {\varphi}^{{\varepsilon}}, \partial_{x} {\varphi}^{{\varepsilon}}, \partial^{2}_{{xx}} {\varphi}^{{\varepsilon}}) - f(\cdot,u_{2},\partial_{x} u_{2}, \partial^{2}_{{xx}} u_2)\\ &\le& C\Big[ \lvert\psi^{{\varepsilon}}\rvert+ \lvert\partial_{x}\psi^{{\varepsilon}}\rvert+\lvert\partial^{2}_{{xx}}\psi^{{\varepsilon}}\rvert\Big] \le C{\delta}^{{\alpha}\beta}, \end{array} $$

which is an obvious contradiction when δ is small. □

Remark 6.3

When g is independent of y, we can prove Theorem 6.2 much more easily, without invoking Lemma 6.1. Indeed, in this case, assume to the contrary that \((u_{1}-u_{2})(t_{0},x_{0})>0\) for some \((t_{0},x_{0})\in { [0,T]\times {\mathbb {R}}}\). Then

$$\begin{array}{@{}rcl@{}} c:=\sup_{(t,x)\in [0,t_{0}]\times{\mathbb{R}}} [u_{1}-u_{2}](t,x)\ge [u_{1}-u_{2}](t_{0},x_{0})>0. \end{array} $$

By (5.12) and [u1u2](0,·)≤0, there exists \((t^{*}, x^{*}) \in (0, t_{0}] \times {\mathbb {R}}\) such that

$$\begin{array}{@{}rcl@{}} [u_{1}-u_{2}](t^{*}, x^{*}) = c = \sup_{(t,x)\in [0,t_{0}]\times{\mathbb{R}}} [u_{1}-u_{2}](t,x)\ge [u_{1}-u_{2}](t_{0},x_{0})>0. \end{array} $$

Define φ=u2+c. Since g is independent of y, we have

$$\begin{array}{@{}rcl@{}} {\partial}_{{\omega}} {\varphi} = {\partial}_{{\omega}} u_{2} = g(t,x, {\partial}_{x} u_2) = g(t,x,{\partial}_{x} {\varphi}). \end{array} $$

Then one can easily verify that \(\varphi \in \underline {\mathcal {A}}_{g} u_{1}(t^{\ast },x^{\ast })\). Moreover, by Remark 5.7 (ii), we can assume without loss of generality that f is strictly decreasing in y. Now it follows from the classical supersolution property of u2 and the viscosity subsolution property of u1 that, evaluating at \((t^{*},x^{*})\),

$$\begin{array}{@{}rcl@{}} \partial_{t}^{{\omega}} u_{2}&-&f(\cdot,u_{2},\partial_{x} u_{2}, \partial^{2}_{{xx}} u_{2}) ~\ge~ 0~\ge~ \partial_{t}^{{\omega}} {\varphi} - f(\cdot, {\varphi}, \partial_{x} {\varphi}, \partial^{2}_{{xx}} {\varphi})\\ && = \partial_{t}^{{\omega}} u_{2}-f(\cdot,u_{2}+c,\partial_{x} u_{2}, \partial^{2}_{{xx}} u_{2}), \end{array} $$

which is the desired contradiction since f is strictly decreasing in y.

The following comparison result follows immediately from Theorem 6.2.

Corollary 6.4

Let Assumptions 3.2 and 3.3 and (6.1) be in force. If RPDE (3.6) has a classical solution \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) for some large k and u1(0,·)≤u(0,·)≤u2(0,·), then \(u_{1}\le u\le u_{2}\). In particular, u is the unique viscosity solution.

Full comparison

We shall follow the approach of Ekren et al. (2014). For this purpose, we strengthen Assumption 3.2 slightly by imposing a uniform property of g with respect to y.

Assumption 6.5

The diffusion coefficient g belongs to \(C^{k_{0}, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{3})\) for some k0 large enough, and

(i) \({\partial }_{z} g \in C^{k_{0}-1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{3})\),

(ii) for i=0, …, k0 and \(z \in {\mathbb {R}}, {\partial }^{i}_{x} g({\cdot },z), {\partial }^{i}_{y} g({\cdot },z) \in C^{k_{0}-i}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{2})\) with \(\|{\partial }^{i}_{x} g({\cdot },z) \|_{k_{0}-i} + \|{\partial }^{i}_{y} g({\cdot },z) \|_{k_{0}-i} \le C [1+|z|]\).

We remark that, under Assumption 3.2, all the results in this subsection hold true if we assume instead that T is small enough.

Given an initial condition u0, motivated by the partial comparison, we fix a large k and define

$$\begin{array}{@{}rcl@{}} \overline u(t,x) &:= \inf\big\{ {\varphi}(t,x): {\varphi} \in \overline{\mathcal{U}}\big\},{\quad} \underline u(t,x) := \sup\big\{ {\varphi}(t,x): {\varphi} \in \underline{\mathcal{U}}\big\}, \end{array} $$


$$\begin{array}{@{}rcl@{}} &&\mathcal{U} := \left\{{\varphi} \in \mathbb{L}^{0}({ [0,T]\times{\mathbb{R}}}): \text{{c\`{a}gl\`{a}d~} in } {t}, \text{ with polynomial growth in } {x},\right.\\ &&{\qquad}{{\varphi}(0,{\cdot}) = u_{0},} \text{ and } \exists~ 0=t_{0}<{\cdots}< t_{n}=T \text{ such that}\\ &&{\qquad}\left. {\varphi} \in C^{k}_{{\alpha},\beta}((t_{i-1}, t_{i}]\times {\mathbb{R}}) \text{ for } i=1, \ldots, {n} \right\},\\ &&\overline{\mathcal{U}} \,:=\! \left\{{\varphi} \in\mathcal{U}: {\Delta} {\varphi}_{t_{i}} \ge 0, ~{\varphi} \text{ is a classical supersolution of }\right.\\ &&{\qquad}\left.\text{RPDE (3.6) on each } (t_{i-1}, t_{i}] \right\},\\ &&\underline{\mathcal{U}} \,:=\left\{{\varphi} \in\mathcal{U}: {\Delta} {\varphi}_{t_{i}} \le 0, ~{\varphi} \text{ is a classical subsolution of } \right.\\ &&{\qquad}\left.\text{RPDE (3.6) on each } (t_{i-1}, t_{i}] \right\}. \end{array} $$

Lemma 6.6

Let Assumptions 6.5, 3.3, and 3.4 hold. Then \(\overline {\mathcal {U}}, \underline {\mathcal {U}}\neq \emptyset \).


We prove \(\overline {\mathcal {U}} \neq \emptyset \) in several steps. The proof for \(\underline {\mathcal {U}}\) is similar.

Step 1. Put \(Q_{1} := {\mathbb {R}}^{2} \times \{z\in {\mathbb {R}}: |z|\le 1\}\). Then \(g \in C^{k_{0}}_{{\alpha },\beta }([0, T]\times Q_{1})\); let N0 denote its k0-norm. Under our strengthened conditions in Assumption 6.5, it follows from the arguments in Proposition 4.1 that there exist \({\delta }_{0}, C_{0}>0\), depending only on N0, such that

$$\begin{array}{@{}rcl@{}} {\Theta}^{0} \in C^{k_{0}}_{{\alpha},\beta}([0,\delta_{0}]\times Q_{1}; {\mathbb{R}}^{3})~ \text{with}~ \| {\Theta}^{0}\|_{k_{0}}\le C_{0}, \end{array} $$

where \({\Theta }^{0}_{t}({\theta }) := {\Theta }_{t}({\theta }) - {\theta }\). Moreover, for a possibly smaller δ0>0, again depending only on N0, we have

$$\begin{array}{@{}rcl@{}} {\partial}_{x} X_{t}(x,y,0) \ge {1/ 2} {\quad}\text{for all}~ (t,x,y) \in [0, {\delta}_{0}]\times {\mathbb{R}}^2. \end{array} $$

Step 2. Recall (4.11) and put

$$\begin{array}{@{}rcl@{}} \overline{F}(t,y) &:=& \sup_{x\in {\mathbb{R}}} F(t,x,y,0,0)\\ &=& \sup_{x\in {\mathbb{R}}} f\big(t, {\Theta}_{t}(x,y,0), {{\partial}_{x} Z_{t}(x,y,0) \over {\partial}_{x} X_{t}(x,y,0)}\big)\, \exp\big(-\int_{0}^{t} {\partial}_{y} g(s, {\Theta}_{s}(x,y,0))d{\omega}_{s}\big). \end{array} $$

By Assumption 3.3, (6.6) and (6.7) yield, for all \((t,x,y)\in [0, {\delta }_{0}] \times {\mathbb {R}}^{2}\),

$$\begin{array}{@{}rcl@{}} |F(t,x,y,0, 0)| &\le C_{1}\Big[1+ |Y_{t}(x,y,0)|+ |Z_{t}(x,y,0)| + \big|{{\partial}_{x} Z_{t}(x,y,0) \over {\partial}_{x} X_{t}(x,y,0)}\big|\Big]\\ &\le C_{1}[1+|y|], \end{array} $$

where C1 depends on N0 and the K0 and L0 in Assumption 3.3. Then \(|\overline {F}(t,y)|\le C_{1}[1+|y|]\). Moreover, it is clear that F(t,x,y,0,0) is differentiable in y. However, due to the exponential term outside of f, in general yF(t,x,y,0,0) may not be bounded, and we can only claim that |yF(t,x,y,0,0)|≤C1[1+|y|]. By the regularity of f, it is clear that \(\overline {F}\) is continuous in t. Let \(\hat F\) be a smooth mollifier of \(\overline {F}\) such that

$$\begin{array}{@{}rcl@{}} \overline{F} \le \hat F \le \overline{F}+1, {\quad} |\hat F(t,y)| \le C_{1}[1+|y|],{\quad} |{\partial}_{y} \hat F(t,y)| \le C_{1}[1+|y|]. \end{array} $$

Set \(K_{1} := \|u_{0}\|_{\infty } e^{C_{1} T}\) for the above C1. We see that \(\hat F(t,y)\) is uniformly Lipschitz in y on [0,δ0]×[−K1−1,K1+1]. Let ι be a smooth truncation function such that ι(x)=x for |x|≤K1, ι(x)=sign(x)[K1+1] for |x|≥K1+1, and |ι(x)|≤|x| for all x. Now consider the ODE

$$\begin{array}{@{}rcl@{}} \overline{\psi}_{t} = \|u_{0}\|_{\infty} + \int_{0}^{t} \hat F(s, \iota(\overline{\psi}_{s}))ds,{\quad} 0\le t\le {\delta}_{0}. \end{array} $$

Clearly, (6.9) has a solution \(\overline {\psi } \in C^{\infty }([0, {\delta }_{0}])\). Since \(|\hat F(s, \iota (y))|\le C_{1}[1+|\iota (y)|] \le C_{1}[1+|y|]\), we have \(|\overline {\psi }_{t}| \le \|u_{0}\|_{\infty } e^{C_{1} t} \le K_{1}\) for \(t\le {\delta }_{0}\). Thus \(\iota (\overline {\psi }_{t}) = \overline {\psi }_{t}\) and

$$\begin{array}{@{}rcl@{}} \overline{\psi}_{t} = \|u_{0}\|_{\infty} + \int_{0}^{t} \hat F(s, \overline{\psi}_s)ds,{\quad} 0\le t\le {\delta}_0. \end{array} $$
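To illustrate why the truncation ι is harmless, the following sketch integrates the ODE by forward Euler for toy data; the choices of \(\|u_{0}\|_{\infty}\), \(\hat F\), K1, the horizon, and the clipping stand-in for the smooth truncation are all assumptions. The solution never reaches the truncation level, so ι(ψ̄)=ψ̄ along the whole trajectory.

```python
import math

# Toy data (assumptions): ||u0||_inf = 0.5, F_hat(t, y) = y (1-Lipschitz),
# K1 = 2, horizon 1; iota clips at K1 + 1 as a stand-in for the smooth
# truncation in the text.
K1, dt, T = 2.0, 1e-4, 1.0

def iota(y):
    return max(-(K1 + 1.0), min(K1 + 1.0, y))

def F_hat(t, y):
    return y

psi, t = 0.5, 0.0                    # psi_0 = ||u0||_inf
while t < T - 1e-12:
    psi += dt * F_hat(t, iota(psi))  # forward Euler step for the truncated ODE
    t += dt

# The truncation never activates (|psi| stays below K1), so psi also solves
# the untruncated equation psi' = psi, whose value at T = 1 is 0.5 * e.
assert abs(psi) <= K1
assert abs(psi - 0.5 * math.e) < 1e-2
```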

Abusing the notation by letting \(\overline {\psi }(t,x) := \overline {\psi }(t)\), we have

$$\begin{array}{@{}rcl@{}} \qquad\qquad {\partial}_{{\omega}} \overline{\psi}_{t} &=&0,{\quad} {\partial}_{x} \overline{\psi}_{t} = 0,{\quad} {\partial}^{2}_{{xx}} \overline{\psi}_{t} = 0, \\ {\partial}_{t} \overline{\psi}(t,x) &=& \hat F(t, \overline{\psi}_{t})\ge \overline{F}(t, \overline{\psi}_{t}) \ge F(t,x, \overline{\psi}, {\partial}_{x} \overline{\psi}, {\partial}^{2}_{{xx}} \overline{\psi}), \end{array} $$

i.e., \(\overline {\psi }\) is a classical supersolution of PDE (4.10)–(4.11). Thus (4.9) becomes

$$\begin{array}{@{}rcl@{}} \hat {\Theta}_{t}(x) := {\Theta}_{t}(x, \overline{\psi}(t,x), {\partial}_{x}\overline{\psi}(t,x)) = {\Theta}_{t}(x, \overline{\psi}_{t}, 0) \end{array} $$

in this case. By (6.7), for any \(t\in [0, {\delta }_{0}]\), \(x\mapsto \hat X_{t}(x)\) is invertible. Put

$$\begin{array}{@{}rcl@{}} \overline{\varphi}(0,x) := u_{0}(x),{\quad} \overline{\varphi}(t,x):= \hat Y_{t}(\hat X_{t}^{-1}(x)), ~ t\in (0, {\delta}_{0}]. \end{array} $$

Then \({\Delta } \overline {\varphi }_{0}(x) \ge 0, \overline {\varphi } \in C^{k}_{{\alpha },\beta }((0, {\delta }_{0}]\times {\mathbb {R}})\), and by Theorem 4.4, \(\overline {\varphi }\) is a classical supersolution of PPDE (3.7) on \((0, {\delta }_{0}] \times {\mathbb {R}}\).

Step 3. Let δ0 be as in Step 1. We emphasize that δ0 depends only on N0, in particular not on u0. Let 0=t0<<tn=T be a partition such that titi−1δ0, i=1, …, n. By Step 2, we have a desired function \(\overline {\varphi }\) on \([t_{0}, t_{1}]\times {\mathbb {R}}\). In particular, \(\overline {\varphi }(t_{1},{\cdot })\) is bounded. Now consider RPDE (3.6) on [t1,t2] with initial condition \(\overline {\varphi }(t_{1}, {\cdot })\). Following the same arguments, we may extend \(\overline {\varphi }\) to [t0,t2] such that \({\Delta } \overline {\varphi }(t_{1},{\cdot }) \ge 0, \overline {\varphi } \in C^{k}_{{\alpha },\beta }((t_{1}, t_{2}]\times {\mathbb {R}})\), and \(\overline {\varphi }\) is a classical supersolution of RPDE (3.6) on \((t_{1}, t_{2}] \times {\mathbb {R}}\). Repeating the arguments yields the desired \(\overline {\varphi }\) on \({ [0,T]\times {\mathbb {R}}}\), i.e., \(\overline {\varphi }\in \overline {\mathcal {U}}\). □

Now, by Theorem 6.2 and Proposition ??, it is clear that

$$\begin{array}{@{}rcl@{}} \underline{u} \le \overline{u}. \end{array} $$

We next establish the viscosity solution property of \(\overline {u}\) and \(\underline {u}\). We shall follow the arguments in Theorem ??, which rely on the crucial Lemma 5.8.

Lemma 6.7

Let Assumptions 6.5, 3.3, and 3.4 hold.

(i) \(\overline {u}\) (resp. \(\underline {u}\)) is bounded and upper (resp. lower) semi-continuous.

(ii) Moreover, if \(\overline {u}\) (resp., \(\underline {u}\)) is continuous, then \(\overline {u}\) (resp., \(\underline {u}\)) is a viscosity supersolution (resp., viscosity subsolution) of RPDE (3.6).

We remark that it is possible to extend our definition of viscosity supersolutions to lower semi-continuous functions. However, here (i) shows that \(\overline {u}\) is upper semi-continuous. So it seems that the continuity of \(\overline {u}\) in (ii) is intrinsically required in this approach.


Proof
By the proof of Lemma 6.6, \(\overline {u}\) is bounded from above. Similarly, \(\underline {u}\) is bounded from below. Then it follows from (6.11) that \(\overline {u}\) and \(\underline {u}\) are bounded.

We establish next the upper semicontinuity for \(\overline {u}\). The regularity for \(\underline {u}\) can be proved similarly. Fix \((\overline {t}, \overline {x})\in { [0,T]\times {\mathbb {R}}}\). For any ε>0, there exists \({\varphi }_{{\varepsilon }}\in \overline {\mathcal {U}}\) such that \({\varphi }_{{\varepsilon }}(\overline {t}, \overline {x}) <\overline {u}(\overline {t}, \overline {x}) +{\varepsilon }\). By the structure of \(\overline {\mathcal {U}}\), it is clear that \({\varphi }_{{\varepsilon }} \ge \overline {u}\) on \({ [0,T]\times {\mathbb {R}}}\). Assume that \({\varphi }_{{\varepsilon }}\in \mathcal {U}\) corresponds to the partition 0=t0<<tn=T as in (6.5). We distinguish between two cases.

Case 1. Assume \(\overline {t} \in (t_{i-1}, t_{i})\) for some i=1, …, n. Since φε is continuous in \((t_{i-1}, t_{i}) \times {\mathbb {R}}\), there exists δ>0 such that \(|{\varphi }_{{\varepsilon }}(t,x) - {\varphi }_{{\varepsilon }}(\overline { t}, \overline { x})|\le {\varepsilon }\) whenever \(|t-\overline { t}|+|x-\overline { x}|\le {\delta }\). Then, for such (t,x),

$$\begin{array}{@{}rcl@{}} \overline{ u} (t,x) \le {\varphi}_{{\varepsilon}}(t,x) \le {\varphi}_{{\varepsilon}}(\overline{ t}, \overline{ x})+{\varepsilon} \le \overline{ u}(\overline{ t}, \overline{ x}) + 2{\varepsilon}. \end{array} $$

This implies that \(\overline { u}\) is upper semi-continuous at \((\overline { t}, \overline { x})\).

Case 2. Assume \(\overline { t} = t_{i}\) for some i=0, …, n. By the same arguments as in Case 1, for any ε>0, there exists δ>0 such that \(\overline { u}(t, x) \le \overline { u}(\overline { t}, \overline { x}) + 2{\varepsilon }\) for all \((t,x) \in (t_{i}-{\delta }, t_{i}]\times O_{{\delta }}(\overline { x})\). To see the regularity in the right neighborhood, assume for notational simplicity that \(\overline { t}=0\). Let \(\hat F\) be as in the proof of Lemma 6.6. Consider the following ODE (with parameter x):

$$\begin{array}{@{}rcl@{}} v(t,x) = {\varphi}_{{\varepsilon}}(0, x) + \int_{0}^{t} \hat F(s,v(s, x)) ds. \end{array} $$

By the arguments in Section 4.4, there exists a δ0>0 such that (6.12) has a classical solution \(v \in C^{k,0}_{{\alpha },\beta }([0, {\delta }_{0}]\times {\mathbb {R}})\), which clearly leads to a classical supersolution \(u\in C^{k}_{{\alpha },\beta }([0, {\delta }_{0}]\times {\mathbb {R}})\) of the original RPDE (3.6) with initial condition φε(0,x). Now consider RPDE (3.6) on [δ0,T] with initial condition u(δ0,·). By the arguments in Lemma 6.6, there exists a \(\tilde {\varphi }_{{\varepsilon }} \in \overline { \mathcal {U}}\) such that \(\tilde {\varphi }_{{\varepsilon }}(t,x) = u(t,x)\) for \((t,x) \in [0, {\delta }_{0}]\times {\mathbb {R}}\). Then \(\overline { u} \le u\) on \([0, {\delta }_{0}]\times {\mathbb {R}}\). Now by the continuity of u, there exists a \({\delta }\le {\delta }_{0}\) such that, whenever \(|t|+|x-\overline { x}|\le {\delta }\),

$$\begin{array}{@{}rcl@{}} \overline{ u}(t,x) \le u(t,x) \le u(0, \overline{ x}) + {\varepsilon} = {\varphi}_{{\varepsilon}}(0, \overline{ x}) +{\varepsilon}\le \overline{ u}(0, \overline{ x}) + 2{\varepsilon}, \end{array} $$

implying the regularity in the right neighborhood, and thus \(\overline { u}\) is upper semicontinuous.
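As an illustration, the parameter-dependent ODE (6.12) can be solved independently for each x. Below is a minimal explicit-Euler sketch; the driver `F_hat` and the toy data are hypothetical stand-ins (for a linear toy driver the exact solution is known, so the scheme can be checked), not the paper's \(\hat F\):

```python
import numpy as np

def solve_parametric_ode(F_hat, v0, t_grid):
    """Explicit Euler for v'(t, x) = F_hat(t, v(t, x)), advanced for all
    grid points x at once; v0 is the initial slice v(0, x)."""
    v = np.array(v0, dtype=float)
    out = [v.copy()]
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        v = v + dt * F_hat(t_grid[k], v)
        out.append(v.copy())
    return np.stack(out)

# Toy driver F_hat(t, v) = -v, for which v(t, x) = v0(x) * exp(-t).
t_grid = np.linspace(0.0, 1.0, 2001)
v0 = np.array([1.0, 2.0, -0.5])
sol = solve_parametric_ode(lambda t, v: -v, v0, t_grid)
```

Since x enters only through the initial condition, the whole slice is propagated as one vector; this mirrors the fact that (6.12) is an ODE in t with x as a frozen parameter.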

We finally show that \(\underline {u}\) is a viscosity subsolution provided it is continuous. The viscosity supersolution property of \(\overline { u}\) follows from similar arguments. Fix \((t_{0}, x_{0})\in (0, T]\times {\mathbb {R}}\). Let \({\varphi }\in \underline {\mathcal {A}}_{g} \underline {u}(t_{0}, x_{0})\). For any ε>0, let \(({\partial } D_{{\varepsilon }}^{-}(t_{0},x_{0}),\psi ^{{\varepsilon }})\) be as in (5.29)–(5.30). By definition, there exists a \(u_{{\varepsilon }}\in \underline {\mathcal {U}}\) with \(\underline {u}(t_{0}, x_{0}) -u_{{\varepsilon }}(t_{0}, x_{0}) \le - \psi ^{{\varepsilon }}(t_{0}, x_{0})\). Let φε:=φ+ψε, \({\partial } D_{{\varepsilon }}^{-} := {\partial } D_{{\varepsilon }}^{-}(t_{0},x_{0})\). Then

$$\begin{array}{@{}rcl@{}} &&{\varphi}^{{\varepsilon}}(t_{0}, x_{0}) = \underline{u}(t_{0}, x_{0}) + \psi^{{\varepsilon}}(t_{0}, x_{0}) \le u_{{\varepsilon}}(t_{0}, x_{0})\text{ and} \\ && \inf_{(t,x)\in {\partial} D_{{\varepsilon}}^{-}} [{\varphi}^{{\varepsilon}} - u_{{\varepsilon}}] (t, x) \ge \inf_{(t,x)\in{\partial} D_{{\varepsilon}}^{-}} [\underline{u} + \psi^{{\varepsilon}} - u_{{\varepsilon}}] (t, x)\ge \inf_{(t,x) \in{\partial} D_{{\varepsilon}}^{-}} \psi^{{\varepsilon}}(t, x) >0 \end{array} $$

follow. Thus there exists a \((t_{{\varepsilon}},x_{{\varepsilon}})\in (t_{0}-{\varepsilon}^{3},t_{0}]\times O_{{\varepsilon}}(x_{0})\) such that

$$\begin{array}{@{}rcl@{}} [{\varphi}^{{\varepsilon}} - u_{{\varepsilon}}] (t_{{\varepsilon}}, x_{{\varepsilon}}) = 0 = \inf_{(t,x)\in [t_0-{\varepsilon}^{3}, t_{{\varepsilon}}]\times O_{{\varepsilon}}(x_0)} [{\varphi}^{{\varepsilon}} - u_{{\varepsilon}}] (t, x). \end{array} $$

Then \({\varphi }^{{\varepsilon }} \in \underline {\mathcal {A}}_{g} u_{{\varepsilon }} (t_{{\varepsilon }}, x_{{\varepsilon }})\). Since uε is a classical subsolution, hence a viscosity subsolution, \({\mathcal {L}} {\varphi }^{{\varepsilon }}(t_{{\varepsilon }},x_{{\varepsilon }}) \le 0\). Sending ε→0 yields \({\mathcal {L}}{\varphi }(t_{0},x_{0}) \le 0\), i.e., \(\underline {u}\) is a viscosity subsolution. □

Theorem 6.8

Let Assumptions 6.5, 3.3, and 3.4 hold. Let (6.1) be in force with \(u_{1}(0,{\cdot })\le u_{0}\le u_{2}(0,{\cdot })\). Assume further that

$$\begin{array}{@{}rcl@{}} \overline{ u} = \underline{u}. \end{array} $$

Then \(u_{1} \le \overline { u }= \underline {u} \le u_{2}\) and \(\overline { u}\) is the unique viscosity solution of RPDE (3.6).


Proof
By Lemma 6.7 and (6.13), it is clear that \(\overline { u} = \underline {u}\) is continuous and is a viscosity solution of RPDE (3.6). By Theorem 6.2 (partial comparison), \(u_{1} \le \overline { u}\) and \(\underline {u} \le u_{2}\). Thus (6.13) leads to the comparison principle immediately. □

Remark 6.9

The introduction of \(\overline { u}\) and \(\underline {u}\) is motivated by Perron’s approach in PDE viscosity theory. However, there are several differences.

(i) In Perron’s approach, the functions in \(\overline {\mathcal {U}}\) are viscosity supersolutions, rather than classical supersolutions. So our \(\overline { u}\) is in principle larger than the counterpart in PDE theory. Similarly, our \(\underline {u}\) is smaller than the counterpart in PDE theory. Consequently, it is more challenging to verify the condition (6.13).

(ii) The standard Perron approach is mainly used to establish the existence of viscosity solutions when the PDE satisfies the comparison principle. Here we use \(\overline { u}\) and \(\underline {u}\) to prove both the comparison principle and the existence.

(iii) In the standard Perron’s approach, one shows directly that \(\overline { u}\) is a viscosity solution, while in Lemma 6.7 we are only able to show \(\overline { u}\) is a viscosity supersolution.

The condition (6.13) is in general quite challenging to verify. In the next section, we establish the complete result when the diffusion coefficient g is semilinear.

Rough PDEs with semilinear diffusion

We study RPDE (3.6) and PPDE (3.7) in the case that g is semilinear, i.e.,

$$\begin{array}{@{}rcl@{}} g(t,x,y,z)=\sigma(t,x)\,z+g_{0}(t,x,y). \end{array} $$

We employ the following assumption.

Assumption 7.1

\({\sigma } \in C^{k_{0}}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) and \(g_{0}\in C^{k_{0}}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{2})\) for some large k0.

Clearly, Assumption 7.1 implies Assumption 6.5. Note that in this section, we obtain a global result. Thus, we require that g0 and its derivatives are uniformly bounded in y as well.

Global equivalence with the PDE

Here, (4.1) becomes

$$\begin{array}{@{}rcl@{}} X_{t}(x) &=& x-\int_{0}^{t} \sigma(s,X_{s}(x))\,d{\omega}_{s},\\ Y_{t}(x,y)&=&y+\int_{0}^{t} g_{0}\big(s,X_{s}(x),Y_{s}(x,y)\big)\,d{\omega}_{s}, \end{array} $$

where X (resp., Y) depends only on x (resp., (x,y)), and

$$\begin{array}{@{}rcl@{}} Z_{t}({\theta}) &=& z+\int_{0}^{t} \left[Z_{s}({\theta}) \left[{\partial}_{x}\sigma(s,X_{s}(x))+{\partial}_{y} g_{0}\left(s,X_{s}(x),Y_{s}(x,y)\right)\right]\right.\\ &&\left. +{\partial}_{x} g_{0}\left(s,X_{s}(x),Y_{s}(x,y)\right) \right]\,d{\omega}_{s}{,} \end{array} $$

where θ=(x,y,z). By Lemma 2.13, we have, omitting the variable θ,

$$\begin{array}{@{}rcl@{}} Z_{t} = {\Gamma}_{t} z + \int_{0}^{t} {{\Gamma}_{t}\over {\Gamma}_{s}}{\partial}_{x} g_{0}(s,X_{s},Y_{s})\, d{\omega}_{s},\, \end{array} $$

where \({\Gamma }_{t}:= \exp \big (\int _{0}^{t} [{\partial }_{x}\sigma (s,X_{s})+{\partial }_{y} g_{0}(s,X_{s},Y_{s})]\,d{\omega }_{s}\big)\).
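The representation of Z above via Γ is a rough variation-of-constants formula; it can be checked formally (using the first-order chain rule valid for geometric rough paths, and writing \(h_{s} := {\partial }_{x}\sigma (s,X_{s})+{\partial }_{y} g_{0}(s,X_{s},Y_{s})\) and \(k_{s} := {\partial }_{x} g_{0}(s,X_{s},Y_{s})\), so that \(d{\Gamma }_{t} = {\Gamma }_{t} h_{t}\,d{\omega }_{t}\)):

$$ d\Big({Z_{t}\over {\Gamma}_{t}}\Big) = {dZ_{t}\over {\Gamma}_{t}} - {Z_{t} h_{t}\over {\Gamma}_{t}}\,d{\omega}_{t} = {[Z_{t} h_{t} + k_{t}]\over {\Gamma}_{t}}\,d{\omega}_{t} - {Z_{t} h_{t}\over {\Gamma}_{t}}\,d{\omega}_{t} = {k_{t}\over {\Gamma}_{t}}\,d{\omega}_{t}, $$

so \(Z_{t}/{\Gamma }_{t} = z + \int _{0}^{t} {\Gamma }_{s}^{-1} k_{s}\,d{\omega }_{s}\), which is the displayed formula after multiplying through by Γt.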

Lemma 7.2

Let Assumption 7.1 hold.

(i) RDE (7.2) has a classical solution (X,Y) satisfying

$$\begin{array}{@{}rcl@{}} X - x \in C^{k}_{{\alpha},\beta}({ [0,T]\times{\mathbb{R}}}),{\quad} Y-y \in C^{k}_{{\alpha},\beta}({ [0,T]\times{\mathbb{R}}}^2){.} \end{array} $$

(ii) There exists a c>0 such that

$$\begin{array}{@{}rcl@{}} \left.\begin{array}{l} \quad{\partial}_{x} X_{t}(x) = \exp\Big(-\int_{0}^{t} {\partial}_{x} {\sigma}(s, X_{s}(x)) \,d{\omega}_{s}\Big) \ge c,\\ {\partial}_{y} Y_{t}(x,y) \,= \exp\Big(\int_{0}^{t} {\partial}_{y} g_{0}(s, X_{s}(x), Y_{s}(x, y)) \,d{\omega}_{s}\Big)\ge c. \end{array}\right. \end{array} $$

(iii) For each t, the mapping \(x\mapsto X_{t}(x)\) has an inverse function \(X^{-1}_{t}({\cdot })\); and for each (t,x), the mapping \(y\mapsto Y_{t}(x,y)\) has an inverse function \(Y^{-1}_{t}(x, {\cdot })\).

We remark that the proof below uses (7.5). One can also use the backward rough path in (2.12) to construct the inverse functions directly. This argument works in multidimensional settings as well (Keller and Zhang 2016).


Proof
(i) follows directly from Lemma 2.15, which also implies

$$\begin{array}{@{}rcl@{}} {\partial}_{x} X_{t}(x) &=& 1 - \int_{0}^{t} {\partial}_{x} {\sigma}(s, X_{s}(x)) {\partial}_{x} X_{s}(x) \,d{\omega}_{s};\\ {\partial}_{y} Y_{t}(x, y) &=& 1 + \int_{0}^{t} {\partial}_{y} g_{0}\big(s, X_{s}(x), Y_{s}(x,y)\big) {\partial}_{y} Y_{s}(x,y) \,d{\omega}_s.\end{array} $$

Then the representations in (7.5) follow from Lemma 2.13. Moreover, set \(\check X := X-x\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) and \(\tilde {\sigma }(t, x):= {\sigma }(t, X_{t}(x)) = {\sigma }(t, x + \check X_{t}(x))\). Then, by the uniform regularity of σ, \(\sup _{x\in {\mathbb {R}}} \|\tilde {\sigma }({\cdot }, x)\|_{k} \le C\). This implies that \( \int _{0}^{t} {\partial }_{x} {\sigma }_{s}(X_{s}(x)) {\partial }_{x} X_{s}(x) \,d{\omega }_{s}\) is bounded uniformly in (t,x). Therefore, we obtain the first estimate in (7.5), for \({\partial }_{x} X\). The second estimate, for \({\partial }_{y} Y\), follows from similar arguments.

Finally, for each t, the fact that \({\partial }_{x} X_{t}(x)\ge c\) implies that \(x\mapsto X_{t}(x)\) is one-to-one with range the whole real line \({\mathbb {R}}\). Thus \(X^{-1}_{t}: {\mathbb {R}}\to {\mathbb {R}}\) exists. Similarly, one can show that \(Y^{-1}_{t}(x, {\cdot })\) exists. □

One can easily check, omitting (x,y,z) in Xt(x),Yt(x,y), Zt(x,y,z),

$$\begin{array}{@{}rcl@{}} &{\partial}_{y} X_{t} = 0;{\quad} {\partial}_{z} X_{t} = 0;{\quad} Z_{t} = {{\partial}_{y} Y_{t} \over {\partial}_{x} X_{t}} z + {{\partial}_{x} Y_{t} \over {\partial}_{x} X_{t}}; {\quad} {\partial}_{z} Z_{t} = {{\partial}_{y} Y_{t} \over {\partial}_{x} X_{t}};&\\ &{\partial}_{y} Z_{t} = {{\partial}_{{yy}} Y_{t} \over {\partial}_{x} X_{t}} z + {{\partial}_{x y}Y_{t} \over {\partial}_{x} X_{t}},~ {\partial}_{x} Z_{t} = {{\partial}_{{xy}}Y_{t}{\partial}_{x} X_{t} - {\partial}_{y} Y_{t} {\partial}^{2}_{{xx}} X_{t} \over ({\partial}_{x} X_t)^{2}} z + {{\partial}_{x x}Y_{t} {\partial}_{x} X_{t} - {\partial}_{x} Y_{t} {\partial}^{2}_{{xx}} X_{t}\over ({\partial}_{x} X_t)^{2}},&\end{array} $$

and then (4.11) becomes

$$\begin{array}{@{}rcl@{}} F(t,x,y,z,{\gamma}) := {1\over {\partial}_{y} Y_{t}}~ f\left(t, X_{t}, Y_{t}, {{\partial}_{y} Y_{t} \over {\partial}_{x} X_{t}} z + {{\partial}_{x} Y_{t} \over {\partial}_{x} X_{t}}, {{\partial}_{y} Y_{t} \over ({\partial}_{x} X_{t})^{2}}{\gamma} \right.\qquad \\ +\left. {{\partial}_{{yy}} Y_{t} \over ({\partial}_{x} X_{t})^{2}} z^{2}+ {2{\partial}_{{xy}} Y_{t} {\partial}_{x} X_{t} - {\partial}_{y} Y_{t} {\partial}^{2}_{{xx}} X_{t}\over ({\partial}_{x}X_{t})^{3}} z+ {{\partial}^{2}_{{xx}} Y_{t} {\partial}_{x} X_{t}- {\partial}_{x} Y_{t} {\partial}^{2}_{{xx}} X_{t}\over ({\partial}_{x}X_{t})^{3}}\right). \end{array} $$

Under our conditions, F typically has quadratic growth in z and is not uniformly Lipschitz continuous in y. Moreover, the first equality of (4.8) becomes

$$ v(t,x) = Y^{-1}_{t}\big(x, u(t, X_{t}(x))\big)\text{ or}~ u(t,x) = Y_{t}\big(X^{-1}_{t}(x), v(t, X^{-1}_{t}(x))\big). $$
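Since Lemma 7.2 gives \({\partial }_{x} X_{t} \ge c\) and \({\partial }_{y} Y_{t} \ge c\), both inversions in (7.7) reduce to monotone root finding. A small numerical sketch (the maps `X` and `Y` below are hypothetical strictly increasing stand-ins for \(X_{t}\) and \(Y_{t}(x,{\cdot })\), and the bracket \([-50,50]\) is an assumed search interval, not part of the paper):

```python
import numpy as np

def invert_increasing(h, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisection inverse of a strictly increasing scalar map h; the
    bracket [lo, hi] is an assumed search interval for this illustration."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical strictly increasing stand-ins for X_t and Y_t(x, .).
X = lambda x: x + 0.3 * np.tanh(x)
Y = lambda x, y: y * np.exp(0.1 * x) + 0.2

def u_from_v(v, x):
    """u(t, x) = Y_t(X_t^{-1}(x), v(t, X_t^{-1}(x))), cf. (7.7)."""
    xi = invert_increasing(X, x)
    return Y(xi, v(xi))

def v_from_u(u, x):
    """v(t, x) = Y_t^{-1}(x, u(t, X_t(x))), the other half of (7.7)."""
    return invert_increasing(lambda y: Y(x, y), u(X(x)))
```

The round trip v ↦ u ↦ v recovers v up to the bisection tolerance, mirroring the equivalence of the two formulas in (7.7).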

Using arguments similar to those in Section 4.2, we obtain the following result, which is global in this semilinear case.

Theorem 7.3

Let Assumptions 7.1 and 3.3 hold. Let \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) and let \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) satisfy (7.7). Then u is a classical solution (resp., subsolution, supersolution) of RPDE (3.6)–(7.1) if and only if v is a classical solution (resp., subsolution, supersolution) of PDE (4.10)–(7.6).

The next result establishes equivalence in the viscosity sense.

Theorem 7.4

Let Assumptions 7.1 and 3.3 hold. Assume that u, \(v\in C({ [0,T]\times {\mathbb {R}}})\) satisfy (7.7). Then u is a viscosity solution (resp., subsolution, supersolution) of RPDE (3.6)–(7.1) at \((t_{0},x_{0})\in (0,T]\times {\mathbb {R}}\) if and only if v is a viscosity solution (resp., subsolution, supersolution) of PDE (4.10)–(7.6) at \((t_{0}, X^{-1}_{t_{0}}(x_{0}))\).


Proof
We prove the statement only for supersolutions. First, we prove the “if” part. Let \(\tilde x_{0} := X^{-1}_{t_{0}}(x_{0})\) and let v be a viscosity supersolution of PDE (4.10)–(7.6) at \((t_{0}, \tilde x_{0})\). Let \({\varphi } \in \overline {\mathcal {A}}_{g} u(t_{0},x_{0})\) with corresponding δ0. Define

$$\begin{array}{@{}rcl@{}} \psi (t, x) := Y^{-1}_{t}\big(x, {\varphi}(t, X_{t}(x))\big),\text{ i.e.,} Y_{t}(x, \psi(t,x)) = {\varphi}(t, X_{t}(x)). \end{array} $$

It is clear that \(\psi (t_{0}, \tilde x_{0}) = v(t_{0}, \tilde x_{0})\). By the continuity of X, there exists a δ>0 such that \(\phantom {\dot {i}\!}X_{t}(x) \in O_{{\delta }_{0}}(x_{0})\) for all \((t,x) \in D_{{\delta }}(t_{0}, \tilde x_{0})\). By the same arguments as for (4.7), \({\partial }_{{\omega }}\psi =0\). Moreover, for \((t,x) \in D_{{\delta }}(t_{0}, \tilde x_{0})\), since \({\varphi } \in \overline {\mathcal {A}}_{g} u(t_{0},x_{0})\), we have φ(t,Xt(x))≤u(t,Xt(x)). By Lemma 7.2, the mapping \(y\mapsto Y_{t}(x,y)\) is increasing. Thus \(Y^{-1}_{t}(x,{\cdot })\) is also increasing and ψ(t,x)≤v(t,x), i.e., ψ is a test function for v at \((t_{0},\tilde x_{0})\) and

$$\begin{array}{@{}rcl@{}} {\partial}_{t} \psi(t_{0}, \tilde x_0) \ge F\big(t_{0}, \tilde x_{0}, \psi(t_{0},\tilde x_0), {\partial}_{x} \psi(t_{0}, \tilde x_0), {\partial}^{2}_{xx}\psi(t_{0}, \tilde x_0)\big). \end{array} $$

By the derivation of F, this implies

$$\begin{array}{@{}rcl@{}} {\partial}^{{\omega}}_{t} {\varphi}(t_{0}, x_0) \ge f\big(t_{0}, x_{0}, {\varphi}(t_{0},x_0), {\partial}_{x} {\varphi}(t_{0}, x_0), {\partial}^{2}_{{xx}}{\varphi}(t_{0}, x_0)\big), \end{array} $$

i.e., u is a viscosity supersolution at (t0,x0).

For the opposite direction, assume that u is a viscosity supersolution of RPDE (3.6) at (t0,x0). For \(\psi \in \overline {\mathcal {A}}_{0} v(t_{0},\tilde x_{0})\) corresponding to g=0, define \({\varphi } (t, x) := Y_{t}\big (X^{-1}_{t}(x), \psi (t, X^{-1}_{t}(x))\big)\), which still implies Yt(x,ψ(t,x))=φ(t,Xt(x)). By similar arguments, (7.9) follows from (7.10). □

Remark 7.5

In the general case, there are two major differences:

(i) The transformation determined by (4.8) involves both u and \({\partial }_{x} u\), i.e., to extend Theorem 7.4, one has to assume that the candidate viscosity solution u is differentiable in x.

(ii) The transformation is local; in particular, the δ in Theorem 4.5 depends on \(\|{\partial }^{2}_{{xx}} u\|_{\infty }\). That is, unless \({\partial }^{2}_{{xx}} u\) is bounded and the solution is essentially classical, it is difficult to extend Theorem 7.4 to the general case, even in just a local sense.

Some a priori estimates

Here, we establish uniform a priori estimates for v that will be crucial for the comparison principle of viscosity solutions in the next subsection. First, we estimate the \(\mathbb {L}^{\infty }\)-norm of v.

Proposition 7.6

Let Assumptions 7.1, 3.3, and 3.4 hold and let f be smooth. Assume further that \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) is a classical solution of PDE (4.10)–(7.6). Then there exists a constant C, which depends only on the constants K0 and L0 in Assumption 3.3 and the regularity of σ and g0 in Assumption 7.1, but not on u0 or the further regularity of f, such that

$$\begin{array}{@{}rcl@{}} |v(t,x)| \le e^{Ct} \big[\|u_{0}\|_{\infty} + Ct\big]. \end{array} $$


Proof
First, we write (4.10)–(7.6) as

$$\begin{array}{@{}rcl@{}} {\partial}_{t} v &=& a(t,x) {\partial}^{2}_{{xx}} v + b(t,x) {\partial}_{x} v +F(t,x,v, 0, 0), {\quad}\text{where} \\ a(t,x) &:=& \tilde a(t,x, v(t,x), {\partial}_{x} v(t,x), {\partial}^{2}_{{xx}} v(t,x)),\\ b(t,x) &:=& \tilde b(t,x, v(t,x), {\partial}_{x} v(t,x), {\partial}^{2}_{{xx}} v(t,x)), \\ \tilde a(t,x,y,z,{\gamma}) &:=& \int_{0}^{1} {\partial}_{{\gamma}} F(t,x,y,{\lambda} z, {\lambda} {\gamma})\, d{\lambda},\\ \tilde b(t,x,y,z,{\gamma}) &:=& \int_{0}^{1} {\partial}_{z} F(t,x,y,{\lambda} z, {\lambda} {\gamma})\, d{\lambda}. \end{array} $$

Since v is a classical solution, a and b are smooth functions. Reversing the time by setting \(\hat {\varphi }(t,x) := {\varphi }(T-t, x)\) for φ=v, a, b, F, we have

$$\begin{array}{@{}rcl@{}} {\partial}_{t} \hat v + \hat a(t,x) {\partial}^{2}_{{xx}}\hat v + \hat b(t,x) {\partial}_{x}\hat v +\hat F(t,x,\hat v, 0, 0) =0,{\quad} \hat v(T,x) = u_{0}(x). \end{array} $$

Let B be a standard Brownian motion. Consider the SDE

$$\begin{array}{@{}rcl@{}} \hat X_{t} = x + \int_{0}^{t} \hat b(s, \hat X_s) ds + \int_{0}^{t} \sqrt{2\hat a}(s, \hat X_s) dB_s. \end{array} $$

Then \(\hat Y_{t} := \hat v(t, \hat X_{t})\) solves the BSDE

$$\begin{array}{@{}rcl@{}} \hat Y_{t} = u_{0}(\hat X_T) + \int_{t}^{T} \hat F(s, \hat X_{s}, \hat Y_{s},0,0) ds - \int_{t}^{T} \hat Z_{s} dB_s. \end{array} $$

Since \(F(t,x,y,0,0) = {1\over {\partial }_{y} Y_{t}}f\big (t, X_{t}, Y_{t}, {{\partial }_{x} Y_{t} \over {\partial }_{x} X_{t}}, ~ {{\partial }^{2}_{{xx}} Y_{t} {\partial }_{x} X_{t}- {\partial }_{x} Y_{t} {\partial }^{2}_{{xx}} X_{t}\over ({\partial }_{x}X_{t})^{3}}\big)\), we have

$$\begin{array}{@{}rcl@{}} |F(t,x,y, 0, 0)|\le C[1+ |y|] \end{array} $$

by Lemma 7.2. Then, by standard BSDE estimates,

$$\begin{array}{@{}rcl@{}} |v(T,x)| = |\hat v(0, x)| = |\hat Y_0| \le e^{CT} \big[\|u_{0}\|_{\infty} + CT \big], \end{array} $$

which yields (7.11) for t=T. Along the same lines, one can prove (7.11) for all t>0. □
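The “standard BSDE estimates” invoked in the last step amount to Gronwall’s inequality; a sketch, under the linear-growth bound \(|\hat F(t,x,y,0,0)|\le C[1+|y|]\): taking conditional expectations in the BSDE and setting \(y(t) := \sup _{x}|\hat v(t,x)|\), we obtain

$$ y(t) \le \|u_{0}\|_{\infty} + \int_{t}^{T} C\big[1+ y(s)\big]\, ds \le \|u_{0}\|_{\infty} + CT + C\int_{t}^{T} y(s)\,ds, $$

and the backward Gronwall inequality yields \(y(t)\le e^{C(T-t)}\big [\|u_{0}\|_{\infty }+CT\big ]\); at t=0 this is exactly the bound used for \(|\hat Y_{0}|\) above.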

Remark 7.7

(i) We are not able to establish similar a priori estimates for \({\partial }_{x} v\). Besides the possibly insufficient regularity of u0, we emphasize that the main difficulty here is not that F has quadratic growth in z, but that F is not uniformly Lipschitz continuous in y. Nevertheless, we obtain a local estimate for \({\partial }_{x} v\) in Proposition 7.9, which will be crucial for the comparison principle of viscosity solutions later.

(ii) To overcome the difficulty above and apply standard techniques, Lions and Souganidis (2000a, (1.12)) imposed technical conditions on f in the case f=f(z,γ):

$$\begin{array}{@{}rcl@{}} {\gamma} {\partial}_{{\gamma}} f + z {\partial}_{z} f - f ~\text{is either bounded from above or from below}{.} \end{array} $$

This is essentially satisfied when f is convex or concave in (z,γ). Our f in (7.15) below does not satisfy (7.14), in particular, we do not require f to be convex or concave in z. See also Remark 7.13.

The next result relies on a representation of v and BMO estimates for BSDEs with quadratic growth. For this purpose, we restrict f to be of Bellman–Isaacs type, with Hamiltonian

$$\begin{array}{@{}rcl@{}} f(t,x,y,z,{\gamma}) &= \sup\limits_{e_{1} \in E_{1}} \inf\limits_{e_{2}\in E_{2}} \left[\frac{1}{2} {\sigma}^{2}_{f}(t, x, e) {\gamma} + b_{f}(t, x, e) z \right.\\ &\qquad\qquad \left.+f_{0}\left(t,x,y, {\sigma}_{f}(t, x,e) z, e\right)\right], \end{array} $$

where \(E:= E_{1} \times E_{2} \subset {\mathbb {R}}^{2}\) is the control set and e=(e1,e2).

Assumption 7.8

(i) σf, \(b_{f} \in C^{0}({ [0,T]\times {\mathbb {R}}}\times E)\) are bounded by K0, uniformly Lipschitz continuous in x with Lipschitz constant L0, and σf≥0;

(ii) \(f_{0}\in C^{0}({ [0,T]\times {\mathbb {R}}}^{3}\times E)\) is uniformly Lipschitz continuous in (x,y,z) with Lipschitz constant L0, and f0(t,x,0,0,e) is bounded by K0.

Assumption 7.8 obviously implies Assumption 3.3.

Proposition 7.9

Let Assumptions 7.1, 7.8, and 3.4 hold, and let (g,f) take the form (7.1)–(7.15). Assume that \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) is a classical solution of PDE (4.10)–(7.6). Then there exist constants δ0>0 and C0, which depend only on K0 and L0 in Assumption 7.8, the regularity of σ and g0 in Assumption 7.1, and u0, but not on the further regularity of f and u0, such that

$$\begin{array}{@{}rcl@{}} |{\partial}_{x} v(t,x)|\le C_{0}[1+ \|{\partial}_{x} u_{0}\|_{\infty}]{\quad}\text{for all}~(t,x) \in [0, {\delta}_{0}]\times {\mathbb{R}}. \end{array} $$


Proof
Under (7.1) and (7.15), (4.11) and the equivalent (7.6) become

$$\begin{array}{@{}rcl@{}} F(t,x,y,z,{\gamma}) &= \sup\limits_{e_{1}\in E_{1}} \inf\limits_{e_{2}\in E_{2}} \left[\frac{1}{2} \hat {\sigma}^{2}_{f}(t, x, e) {\gamma} + \hat b_{f}(t,x,e) z \right.\\ &\qquad\qquad \left.+ F_{0}\left(t,x,y, \hat {\sigma}_{f}(t,x,e) z,e\right)\right], \end{array} $$

where, omitting (x,y) in Xt(x) and Yt(x,y),

$$\begin{array}{@{}rcl@{}} \hat {\sigma}_{f}(t,x,e) &:=&{ {\sigma}_{f}(t, X_{t}, e)\over {\partial}_{x} X_{t}},{\quad} \hat b_{f}(t,x,e) := { b_{f}(t, X_{t}, e)\over {\partial}_{x} X_{t}},\\ F_{0}(t,x,y,z,e) &:=& \frac{1}{2}\hat {\sigma}^{2}_{f}(t, x, e) {{\partial}^{2}_{{yy}} Y_{t} \over {\partial}_{y} Y_{t}} z^{2} +\hat {\sigma}^{2}_{f}(t, x, e) [{{\partial}^{2}_{{xy}} Y_{t} \over {\partial}_{y} Y_{t}} - { {\partial}^{2}_{{xx}} X_{t}\over 2{\partial}_{x} X_{t}}] z\\ {\qquad} &+& {1\over {\partial}_{y} Y_{t}} f_{0}\left(t,X_{t}, Y_{t}, {\partial}_{y} Y_{t} z + \hat{\sigma}_{f} (t, x, e) {\partial}_{x} Y_{t}, e\right)\\ {\qquad} &+&\hat {\sigma}^{2}_{f}(t, x, e) {{\partial}^{2}_{{xx}} Y_{t} {\partial}_{x} X_{t} - {\partial}_{x} Y_{t} {\partial}^{2}_{{xx}} X_{t} \over 2 {\partial}_{y} Y_{t} {\partial}_{x}X_{t}}+\hat b_{f}(t, x, e) {{\partial}_{x} Y_{t} \over {\partial}_{y} Y_{t}}. \end{array} $$

By (7.5), we have, again omitting (x,y) in Xt(x),Yt(x,y),

$$\begin{array}{@{}rcl@{}} {\partial}^{2}_{{xx}} X_{t} &=& -{\partial}_{x} X_{t}\int_{0}^{t} {\partial}^{2}_{{xx}} {\sigma}(s, X_{s}) {\partial}_{x} X_{s} d{\omega}_{s},\\ {\partial}^{2}_{{yy}} Y_{t} &=&{\partial}_{y} Y_{t} \int_{0}^{t} {\partial}^{2}_{{yy}} g_{0}(s, X_{s}, Y_{s}) {\partial}_{y} Y_{s} d{\omega}_{s},\\ {\partial}^{2}_{{xy}} Y_{t} &=& {\partial}_{y} Y_{t} \int_{0}^{t} [{\partial}^{2}_{{xy}} g_{0}(s, X_{s}, Y_{s}){\partial}_{x} X_{s} + {\partial}^{2}_{{yy}} g_{0}(s, X_{s}, Y_{s}) {\partial}_{x} Y_{s}] d{\omega}_{s}. \end{array} $$

Then, by (7.4) we can easily verify that

$$ \left. \begin{array}{c} \hat{\sigma}_{f}, \hat b_{f}, \text{ and } F_{0}({\cdot}, 0, 0,{\cdot}) \text{ are bounded,}{\quad} |{\partial}_{x} \hat{\sigma}_{f} |\le C,{\quad} |{\partial}_{x} \hat b_{f}|\le C,\\ |{\partial}_{z} F_{0}(t,x,y,z)| \le C[1+\rho(t) |z|],\\ |{\partial}_{x} F_{0}(t,x,y,z)| + |{\partial}_{y} F_{0}(t,x,y,z)|\le C[1+|y|+|z|+ \rho(t)|z|^{2}]{,} \end{array} \right. $$

where ρ≥0 is a continuous function with ρ(0)=0. Here, for notational simplicity, we are assuming the relevant functions are differentiable, but actually we only need their uniform Lipschitz continuity.

Now, let \(\bar B\) be a standard Brownian motion and \({\mathcal {E}} = {\mathcal {E}}_{1}\times {\mathcal {E}}_{2}\) be the set of \({\mathbb {F}}^{\bar B}\)-progressively measurable E-valued processes. Fix δ>0 and define \(\bar {\varphi }(t,x,y,z,e) := {\varphi }({\delta }-t, x, y,z, e)\) for \({\varphi } = \hat {\sigma }_{f}, \hat b_{f}, F_{0}\). For any \(e\in {\mathcal {E}}\), introduce the following decoupled FBSDE on [0,δ]:

$$ \begin{aligned} {\mathcal{X}}^{e}_{t} &= x + \int_{0}^{t} \bar b_{f}(s, {\mathcal{X}}^{e}_{s}, e_s) ds +\int_{0}^{t} \bar {\sigma}_{f}(s, {\mathcal{X}}^{e}_{s}, e_s) d\bar B_{s};\\ {\mathcal{Y}}^{e}_{t} &= u_{0}({\mathcal{X}}^{e}_{{\delta}}) + \int_{t}^{{\delta}} \bar F_{0}(s, {\mathcal{X}}^{e}_{s}, {\mathcal{Y}}^{e}_{s}, {\mathcal{Z}}^{e}_{s}, e_s) ds - \int_{t}^{{\delta}} {\mathcal{Z}}^{e}_{s} d\bar B_s. \end{aligned} $$

By Zhang (2017, Theorems 7.2.1, 7.2.3), there exist constants c0, C0, depending on u0 and f(·,0,0,·) (the bound of |f(t,x,0,0,e)|), such that

$$\begin{array}{@{}rcl@{}} \|{\mathcal{Y}}^{e}\|_{\infty} \le C_{0},{\quad} {\mathbb{E}}\Big[\exp\big(c_{0}\int_{0}^{{\delta}} |{\mathcal{Z}}^{e}_s|^{2} ds\big)\Big] \le C_{0} <\infty. \end{array} $$

Differentiating (7.19) with respect to x yields

$$\begin{array}{@{}rcl@{}} {\nabla} {\mathcal{X}}^{e}_{t} &=& 1 + \int_{0}^{t} {\partial}_{x} \bar b_{f} {\nabla} {\mathcal{X}}^{e}_{s} ds +\int_{0}^{t} {\partial}_{x} \bar {\sigma}_{f}{\nabla} {\mathcal{X}}^{e}_{s} d\bar B_{s};\\ {\nabla} {\mathcal{Y}}^{e}_{t} &=& {\partial}_{x} u_{0}({\mathcal{X}}^{e}_{{\delta}}){\nabla} {\mathcal{X}}^{e}_{{\delta}} + \int_{t}^{{\delta}} \Big[{\partial}_{x} \bar F_{0} {\nabla} {\mathcal{X}}^{e}_{s} + {\partial}_{y} \bar F_{0} {\nabla} {\mathcal{Y}}^{e}_{s} + {\partial}_{z} \bar F_{0} {\mathcal{Z}}^{e}_{s}\Big] ds \\ &&- \int_{t}^{{\delta}} {\nabla} {\mathcal{Z}}^{e}_{s} d\bar B_{s}. \end{array} $$

This implies

$$\begin{array}{@{}rcl@{}} {\nabla} {\mathcal{Y}}^{e}_{0} = {\mathbb{E}}\Big[ {\Gamma}^{e}_{{\delta}} {\partial}_{x} u_{0}({\mathcal{X}}_{{\delta}}){\nabla} {\mathcal{X}}^{e}_{{\delta}} + \int_{0}^{{\delta}} {\Gamma}^{e}_{t} {\partial}_{x} \bar F_{0} {\nabla} {\mathcal{X}}^{e}_{t} dt \Big]{,} \end{array} $$

where \({\Gamma }^{e}_{t}:= \exp \big (\int _{0}^{t} {\partial }_{z} \bar F_{0} d\bar B_{s} + \int _{0}^{t} [{\partial }_{y} \bar F_{0} - \frac {1}{2} |{\partial }_{z} \bar F_{0}|^{2}] ds \big)\). By (7.18),

$$\begin{array}{@{}rcl@{}} {\mathbb{E}}[ |{\Gamma}^{e}_{t}|^{4}] &=& {\mathbb{E}}\left[\exp\left(4\int_{0}^{t} {\partial}_{z} \bar F_{0} d\bar B_{s} + \int_{0}^{t} [4{\partial}_{y} \bar F_{0} - 2 |{\partial}_{z} \bar F_{0}|^{2}] ds \right)\right] \\ &=& {\mathbb{E}}\left[\!\exp\left(4\!\int_{0}^{t} {\partial}_{z} \bar F_{0} d\bar B_{s} \,-\, 16 \int_{0}^{t} |{\partial}_{z} \bar F_{0}|^{2} ds + \int_{0}^{t} [4{\partial}_{y} \bar F_{0} +14 |{\partial}_{z} \bar F_{0}|^{2}] ds \right)\right]\\ &\le& \left({\mathbb{E}}\left[\exp\left(8\int_{0}^{t} {\partial}_{z} \bar F_{0} d\bar B_{s} - 32\int_{0}^{t} |{\partial}_{z} \bar F_{0}|^{2} ds \right)\right]\right.\\ &&\times \left.\mathbb{E}\left[\exp\left(\int_{0}^{t} [8{\partial}_{y} \bar F_{0} +28 |{\partial}_{z} \bar F_{0}|^{2}] ds \right)\right]\right)^{\frac{1}{2}}\\ &=&\left({\mathbb{E}}\left[\exp\left(\int_{0}^{t} [8{\partial}_{y} \bar F_{0} +28 |{\partial}_{z} \bar F_{0}|^{2}] ds \right)\right]\right)^{\frac{1}{2}}\\ &=& \left({\mathbb{E}}\left[\exp\left(C\int_{0}^{{\delta}} [ 1+ |{\mathcal{Y}}^{e}_{s}|+ |{\mathcal{Z}}^{e}_{s}| + \rho(t) |{\mathcal{Z}}^{e}_{s}|^{2} + \rho(t)^{2} |{\mathcal{Z}}^{e}_{s}|^{2} ] ds \right)\right]\right)^{\frac{1}{2}}. \end{array} $$

Set δ0>0 small enough so that \(C[\rho ({\delta }_{0}) + \rho ({\delta }_{0})^{2}] \le {c_{0}\over 2}\). Then, for δδ0, by (7.20), we obtain \({\mathbb {E}}\Big [ |{\Gamma }^{e}_{t}|^{4}\Big ] \le C_{0}\), and, by the second line of (7.18), it is clear that \({\mathbb {E}}\Big [\sup _{0\le t\le {\delta }} |{\nabla } {\mathcal {X}}^{e}_{t}|^{4}\Big ] \le C_{0}\). Thus

$$\begin{array}{@{}rcl@{}} |{\nabla} {\mathcal{Y}}^{e}_{0}|&\le& C_{0} {\mathbb{E}}\left[\|{\partial}_{x} u_{0} \|_{\infty} |{\Gamma}^{e}_{{\delta}}| |{\nabla} {\mathcal{X}}^{e}_{{\delta}}|\right.\\ &&\left. + \int_{0}^{{\delta}} |{\Gamma}^{e}_{t}||{\nabla} {\mathcal{X}}^{e}_{t}|[1+|{\mathcal{Y}}^{e}_{t}| +|{\mathcal{Z}}^{e}_{t}|^{2}]dt \right] \le C_{0}[1+\|{\partial}_{x} u_{0}\|_{\infty}]. \end{array} $$

Finally, we remark that, since we know a priori that \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\), by standard truncation arguments we may assume without loss of generality that F0 is uniformly Lipschitz continuous in (x,y,z) (with the Lipschitz constant possibly depending on the regularity of v). Then, by Buckdahn and Li (2008),

$$\begin{array}{@{}rcl@{}} v({\delta},x) = \inf_{{\mathcal{S}}}\sup_{e_{1} \in {\mathcal{E}}_{1}} {\mathcal{Y}}^{(e_{1}, {\mathcal{S}}(e_1))}_{0}, \end{array} $$

where the infimum is taken over the so-called non-anticipating strategies \({\mathcal {S}}: {\mathcal {E}}_{1} \to {\mathcal {E}}_{2}\). This implies

$$\begin{array}{@{}rcl@{}} |{\partial}_{x} v({\delta}, x)| \le \sup_{{\mathcal{S}}}\sup_{e_{1} \in {\mathcal{E}}_{1}} |{\nabla} {\mathcal{Y}}^{(e_{1}, {\mathcal{S}}(e_1))}_0| \le C_{0}[1+\|{\partial}_{x} u_{0}\|_{\infty}]. \end{array} $$

Since \({\delta }\le {\delta }_{0}\) is arbitrary, the proof is complete. □

Remark 7.10

(i) We reverse the time in (7.19). Hence, in the spirit of the backward rough path in (2.12), \(\overline { B}\) and the rough path ω (or the original B in (3.1)) have opposite directions of time evolution. Thus (7.19) is in the line of the backward doubly SDEs of Pardoux and Peng (1994). When E2 is a singleton, Matoussi et al. (2018) provide a representation for the corresponding SPDE (3.1) in the context of second-order backward doubly SDEs. We remark, though, that while the well-posedness of backward doubly SDEs holds for random coefficients, their representation of solutions of SPDEs requires a Markovian structure, i.e., the f and g in (3.1) may depend only on Bt (instead of the path B·). The stochastic characteristic approach used in this paper does not have this constraint. Note again that our f and g in RPDE (3.6) and PPDE (3.7) are allowed to depend on the (fixed) rough path ω.

(ii) For (7.22), from a game-theoretic point of view, it is more natural to use the so-called weak formulation (Pham and Zhang 2014). However, as we are mainly concerned with regularity here, the strong formulation used by Buckdahn and Li (2008) is more convenient.

The global comparison principle and existence of viscosity solution

We need the following PDE result from Safonov (1988) (Mikulevicius and Pragarauskas (1994) have a corresponding statement for bounded domains and Safonov (1989) has one for the elliptic case).

Theorem 7.11

Consider PDE (4.10). Assume that, for some β>0,

(i) F is convex in γ and uniformly parabolic, i.e., \({\partial }^{2}_{{\gamma }{\gamma }} F \ge 0\) and \({\partial}_{{\gamma}} F \ge c_{0}>0\),

(ii) F is uniformly Lipschitz continuous in (y,z,γ),

(iii) \(\|F({\cdot }, y,z,{\gamma })\|_{C_{b}^{\beta }({ [0,T]\times {\mathbb {R}}})} \le C[1+|y|+|z|+|{\gamma }|]\),

(iv) \(u_{0} \in C^{2+\beta }_{b}({\mathbb {R}})\).

Then there exists \(\beta_{0}\in (0,1)\), depending only on c0, such that, whenever \(\beta \in (0,\beta_{0}]\), PDE (4.10) has a classical solution \( v\in C^{2+\beta }_{b}({ [0,T]\times {\mathbb {R}}})\).

Theorem 7.12

Let Assumptions 7.1, 7.8, and 3.4 hold, and let (g,f) take the form (7.1)–(7.15). Assume further that

(i) f is either convex or concave in γ, namely, either E1 or E2 in (7.15) is a singleton,

(ii) \({\sigma}_{f} \ge c_{0}>0\),

(iii) \(u_{0}\in C_{b}^{k+1+\beta }({\mathbb {R}})\) and \(f\in C^{k+1,loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{4})\).

Then there exists a δ0>0, depending on K0, L0 in Assumption 7.8, the regularity of σ, g0 in Assumption 7.1, and u0, but independent of the further regularity of u0 and f, such that PDE (4.10)–(7.6) has a classical solution \(v\in C^{k,0}_{{\alpha },\beta }([0, {\delta }_{0}]\times {\mathbb {R}})\).


Proof. We prove only the convex case, i.e., E2 is a singleton. When f is concave, one can use the standard transformation: \(\tilde f(t,x,y,z,{\gamma }) := -f(t, x, -y, -z, -{\gamma })\) is convex and \(\tilde v(t,x) := -v(t,x)\) corresponds to \(\tilde f\). Let δ0 be determined by Proposition 7.9.

First, it is clear that the F in (7.6) (or the equivalent (7.17)) satisfies the requirements in Theorem 7.11 (i). Recall (7.11), (7.16), and the K0 in Assumption 7.8 (ii). Put

$$\begin{array}{@{}rcl@{}} C_1:= e^{CT}[\|u_{0}\|_{\infty} + CT K_{0}] + C_{0}[1+\|{\partial}_{x} u_{0}\|_{\infty}]. \end{array} $$

Introduce a truncation function \(\iota \in C^{\infty }({\mathbb {R}})\) such that ι(x)=x for |x|≤C1, and ι(x)=0 for |x|≥C1+1. Define

$$\begin{array}{@{}rcl@{}} \tilde F(t,x,y,z,{\gamma}) := F(t,x, \iota(y), \iota(z), {\gamma}). \end{array} $$

Then \(\tilde F\) satisfies all the conditions in Theorem 7.11. Thus, PDE (7.23) below has a classical solution \( \tilde {v}\in C^{2+(\beta \wedge \beta _{0})}_{b}({ [0,T]\times {\mathbb {R}}})\):

$$\begin{array}{@{}rcl@{}} {\partial}_{t} \tilde v=\tilde F(t,x,\tilde v, {\partial}_{x} \tilde v, {\partial}^{2}_{{xx}} \tilde v),{\quad} \tilde v(0,{\cdot}) = u_0. \end{array} $$

Applying Propositions 7.6 and 7.9 to the above PDE yields \(|\tilde v|\le C_{1}, |{\partial }_{x} \tilde v|\le C_{1}\) on \([0, {\delta }_{0}] \times {\mathbb {R}}\), i.e., \(v:= \tilde v\) solves PDE (4.10)–(7.6) on \([0, {\delta }_{0}] \times {\mathbb {R}}\).

Finally, the further regularity of v follows from standard bootstrap arguments (Gilbarg and Trudinger 1983, Lemma 17.16) together with Remark 2.11. □

Remark 7.13

The requirement that f is convex or concave is mainly to ensure the existence of classical solutions for PDE (7.23). Theorem 7.11 holds true for the multidimensional case as well. When the dimension of x is 1 or 2, Bellman–Isaacs equations may have classical solutions as well; see Lieberman (1996, Theorem 14.24) for d=1 and Pham and Zhang (2014, Lemma 6.5) for d=2 for bounded domains, and also Gilbarg and Trudinger (1983, Theorem 17.12) for elliptic equations in bounded domains when d=2. We believe such results can be extended to the whole space, and thus the theorem above as well as Theorem 7.14 will hold true when f is indeed of Bellman–Isaacs type. However, when the dimension is high, the Bellman–Isaacs equation, in general, does not have a classical solution (Nadirashvili and Vladut (2007) provide a counterexample).

Theorem 7.14

Let (g,f) take the form (7.1)–(7.15). Let Assumptions 7.1, 7.8, and 3.4 hold. Assume that, for any ε>0, there exist \(\overline { f}^{{\varepsilon }}, \underline f^{{\varepsilon }}\) such that

(i) \(\overline { f}^{{\varepsilon }}, \underline f^{{\varepsilon }}\) satisfy Assumption 7.8 uniformly, i.e., with the same K0,L0 for all ε>0,

(ii) for each ε>0, \(\overline { f}^{{\varepsilon }}, \underline f^{{\varepsilon }}\) satisfy all the requirements in Theorem 7.12,

(iii) for each ε>0, \(f-{\varepsilon } \le \underline f^{{\varepsilon }} \le f\le \overline { f}^{{\varepsilon }} \le f+{\varepsilon }\).

Then RPDE (3.6) satisfies the comparison principle and has a unique viscosity solution.


Proof. By Lemma 6.7, \(\overline { u}\) and \(\underline {u}\) are bounded by some C0.

Step 1. First, we prove (6.13) locally. Let δ0>0 be determined by Proposition 7.9, corresponding to K0, L0, but with u0 replaced with the global bound C0 of \(\overline { u}\) and \(\underline {u}\). For any ε>0, let \(\overline { f}^{{\varepsilon }}, \underline f^{{\varepsilon }}\) be as in the assumption of the theorem, and let \(\overline { F}^{{\varepsilon }}, \underline F^{{\varepsilon }}\) correspond to \(\overline { f}^{{\varepsilon }}, \underline f^{{\varepsilon }}\) as in (7.17). In the spirit of Remark 5.7 (i), we may assume without loss of generality that u0 is uniformly continuous. Then u0 has standard smooth mollifications \(\overline { u}^{{\varepsilon }}_{0}, \underline {u}^{{\varepsilon }}_{0}\) such that \(u_{0} - {\varepsilon } \le \underline {u}_{0}^{{\varepsilon }} \le u_{0} \le \overline { u}^{{\varepsilon }}_{0} \le u_{0}+{\varepsilon }\). By Theorem 7.12, let \(\overline { v}^{{\varepsilon }}\) (resp., \(\underline {v}^{{\varepsilon }}\)) be the classical solution to PDE (4.10)–(7.17) with coefficients \((\overline { F}^{{\varepsilon }}, g)\) and initial condition \(\overline { u}^{{\varepsilon }}_{0}\) (resp., coefficients \((\underline F^{{\varepsilon }}, g)\) and initial condition \(\underline {u}^{{\varepsilon }}_{0}\)) on [0,δ0]. Then, by (6.4), \(\underline {v}^{{\varepsilon }} \le \underline {v} \le \overline { v} \le \overline { v}^{{\varepsilon }}\), where \(\underline {v} := Y^{-1}_{t}\big (x, \underline {u}(t, X_{t}(x))\big)\) as in (7.7), and similarly for \(\overline { v}\). By (4.11) it is clear that \(0 \le \overline { F}^{{\varepsilon }} - \underline F^{{\varepsilon }} \le C{\varepsilon }\). Define \({\Delta } v^{{\varepsilon }} := \overline { v}^{{\varepsilon }} - \underline {v}^{{\varepsilon }}, {\Delta } u^{{\varepsilon }}_{0} := \overline { u}^{{\varepsilon }}_{0} - \underline {u}^{{\varepsilon }}_{0}, {\Delta } F^{{\varepsilon }} := \overline { F}^{{\varepsilon }} - \underline F^{{\varepsilon }}\). Then

$$\begin{array}{@{}rcl@{}} &{\partial}_{t} {\Delta} v^{{\varepsilon}} = F^{{\varepsilon}}(t,x, {\Delta} v^{{\varepsilon}}, {\partial}_{x}{\Delta} v^{{\varepsilon}}, {\partial}^{2}_{{xx}} {\Delta} v^{{\varepsilon}}), {\quad} {\Delta} v^{{\varepsilon}}(0,{\cdot}) = {\Delta} u^{{\varepsilon}}_{0},\\ &\text{where } F^{{\varepsilon}}(t,x,y,z,{\gamma}) := \overline{ F}^{{\varepsilon}}(t,x, \underline{v}^{{\varepsilon}} + y, {\partial}_{x} \underline{v}^{{\varepsilon}} + z, {\partial}^{2}_{{xx}} \underline{v}^{{\varepsilon}} + {\gamma})\\ &\qquad\qquad\qquad\qquad\qquad - \underline F^{{\varepsilon}}(t,x, \underline{v}^{{\varepsilon}}, {\partial}_{x} \underline{v}^{{\varepsilon}}, {\partial}^{2}_{{xx}} \underline{v}^{{\varepsilon}}). \end{array} $$

Now following the arguments of Proposition 7.6, we see that there exists a constant C, independent of ε, such that, for every \((t,x) \in [0, {\delta }_{0}]\times {\mathbb {R}}\),

$$\begin{array}{@{}rcl@{}} \overline{ v}(t,x) - \underline{v}(t,x)\!\!\! &\le&\!\!\! {\Delta} v^{{\varepsilon}}(t,x) \le e^{Ct}\Big[\|{\Delta} v^{{\varepsilon}}_{0}\|_{\infty} + Ct \|F^{{\varepsilon}}({\cdot}, 0,0,0)\|_{\infty}\Big] \\ \!\!\! &\le&\!\!\! C\Big[\|{\Delta} u^{{\varepsilon}}_{0}\|_{\infty} + \|{\Delta} F^{{\varepsilon}}({\cdot}, \underline{v}^{{\varepsilon}}, {\partial}_{x} \underline{v}^{{\varepsilon}}, {\partial}^{2}_{{xx}} \underline{v}^{{\varepsilon}})\|_{\infty}\Big] \le C{\varepsilon}. \end{array} $$

This implies that \(\overline { v}(t,x) = \underline {v}(t,x)\). Thus (6.13) holds on [0,δ0]. Therefore, by Theorem 6.8, \(u_{1} \le \overline { u}\le u_{2}\) and \(u:= \overline { u}\) is the unique viscosity solution of RPDE (3.6)–(7.15) on [0,δ0].

Step 2. Now, we prove the global result. Let \(0=t_{0}<\cdots<t_{n}=T\) be such that \(t_{i} - t_{i-1}\le {\delta}_{0}\) for each i=1, …, n. By Step 1, u1(t1,·)≤u(t1,·)≤u2(t1,·). Now consider RPDE (3.6)–(7.15) on [t1,t2] with initial condition u(t1,·). Note that \(|u(t_{1},{\cdot})| \le C_{0}\) for the same global bound C0. Since δ0 corresponds to this C0, following the same arguments, we see that the comparison principle holds on [t1,t2]. Repeating the arguments establishes the result on the whole interval [0,T]. □

When f is semilinear, i.e., linear in γ, it clearly satisfies the requirements in Theorem 7.14 under natural conditions. We next provide a simple fully nonlinear example.

Example 7.15

Let \(\overline { a} > \underline a >0\) be two constants. Then

$$\begin{array}{@{}rcl@{}} f({\gamma}) := \frac{1}{2} \sup_{\underline a \le a \le \overline{ a}} [a {\gamma}] = \frac{1}{2}[\overline{ a} {\gamma}^+ - \underline a {\gamma}^-] \end{array} $$

satisfies the requirements in Theorem 7.14.


Proof. Let η be a smooth symmetric density function with support (−1,1). For any ε>0, introduce a smooth mollifier of f:

$$\begin{array}{@{}rcl@{}} f_{{\varepsilon}}({\gamma}) := \int_{-1}^{1} f({\gamma} - {\varepsilon} x) \eta(x) dx = \frac{1}{2} \underline a {\gamma} + {\overline{ a}-\underline a\over 2}\int_{-1}^{1} ({\gamma}-{\varepsilon} x)^+ \eta(x) dx. \end{array} $$

It is clear that

$$\begin{array}{@{}rcl@{}} |f_{{\varepsilon}} - f|\le \big[{\overline{ a}\over 2}\int_{-1}^{1} |x|\eta(x)dx\big]~ {\varepsilon} =: c{\varepsilon}. \end{array} $$

We next consider the Legendre conjugate of fε:

$$\begin{array}{@{}rcl@{}} h_{{\varepsilon}}(a) := \sup_{{\gamma} \in {\mathbb{R}}} [\frac{1}{2} a{\gamma} - f_{{\varepsilon}}({\gamma})], ~a \in [\underline a, \overline{ a}]. \end{array} $$

By straightforward calculation, we have \(h_{{\varepsilon }}(a)=\infty\) when \(a \notin [\underline a, \overline { a}]\), and

$$\begin{array}{@{}rcl@{}} h_{{\varepsilon}}(a) = {{\varepsilon}\over 2}[\overline{ a} - \underline a] \int_{-1}^{\Phi^{-1}({a-\underline a\over \overline{ a}-\underline a})} x \eta(x) dx,{\quad} a\in [\underline a, \overline{ a}], \end{array} $$

where \(\Phi (x) := \int _{-1}^{x} \eta (y) dy\), \(x\in [-1,1]\). Note that

$$f_{{\varepsilon}}({\gamma}) = \sup_{\underline a \le a \le \overline{ a}} [\frac{1}{2} a{\gamma} - h_{{\varepsilon}}(a)].$$

Then \(\overline {f}^{{\varepsilon }} := f_{{{\varepsilon }\over 2c}} + {{\varepsilon }\over 2}\) and \(\underline {f}^{{\varepsilon }} := f_{{{\varepsilon }\over 2c}} -{{\varepsilon }\over 2}\) are the desired approximations. □
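As a purely numerical illustration (not part of the proof), the mollification bound \(|f_{\varepsilon}-f|\le c{\varepsilon}\) and the resulting ε-bracketing of f can be checked directly; the bump density η and the constants \(\underline a = 1\), \(\overline a = 3\) below are arbitrary choices.

```python
import numpy as np

# Illustrative constants (hypothetical choices, not from the paper).
a_lo, a_hi = 1.0, 3.0

def f(g):
    # f(gamma) = (1/2)[a_hi * gamma^+ - a_lo * gamma^-]
    return 0.5 * (a_hi * np.maximum(g, 0.0) - a_lo * np.maximum(-g, 0.0))

# Smooth symmetric density eta with support (-1, 1): a normalized bump.
xs = np.linspace(-0.999, 0.999, 4001)
dx = xs[1] - xs[0]
eta = np.exp(-1.0 / (1.0 - xs**2))
eta /= eta.sum() * dx                            # enforce  int eta dx = 1

def f_eps(g, eps):
    # f_eps(gamma) = int_{-1}^{1} f(gamma - eps x) eta(x) dx  (Riemann sum)
    return np.array([np.sum(f(gi - eps * xs) * eta) * dx for gi in g])

c = 0.5 * a_hi * np.sum(np.abs(xs) * eta) * dx   # constant in |f_eps - f| <= c*eps
gammas = np.linspace(-5.0, 5.0, 201)
for eps in (0.5, 0.1, 0.02):
    # mollification error bound
    assert np.max(np.abs(f_eps(gammas, eps) - f(gammas))) <= c * eps
    # f_{eps/(2c)} +- eps/2 bracket f inside the band [f - eps, f + eps]
    mid = f_eps(gammas, eps / (2.0 * c))
    assert np.all(mid - eps / 2.0 <= f(gammas))
    assert np.all(f(gammas) <= mid + eps / 2.0)
print("mollification bounds verified")
```

The bound has room to spare: the actual mollification error is of size \(({\overline a - \underline a})\,{\varepsilon}/2 \cdot \int |x|\eta(x)dx\), which is strictly smaller than cε.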

Remark 7.16

(i) As pointed out in Remark 7.5, for general g=g(t,x,y,z), the transformation is local and the δ in Theorem 4.5 depends on \(\|{\partial }^{2}_{{xx}} u\|_{\infty }\). Then the connection between RPDE (3.6) and PDE (4.10) exists only for local classical solutions, but is not clear for viscosity solutions. Since our current approach relies heavily on the PDE, we have difficulty in extending Theorem 7.4 to the general case, even in just the local sense. We will investigate this challenging problem by exploring other approaches in our future research.

(ii) When f is of first-order, i.e., σf=0 in (7.15), then (7.17) becomes

$$\begin{array}{@{}rcl@{}} & F(t,x,y,z,{\gamma}) = \sup\limits_{e_{1}\in E_{1}} \inf\limits_{e_{2}\in E_{2}} \Big[ \hat b_{f}(t,x,e) z + F_{0}\big(t,x,y,e\big)\Big], \\ &\text{where}\quad \hat b_{f}(t,x,e) := { b_{f}(t, X_{t}, e)\over {\partial}_{x} X_{t}}, \\ &F_{0}(t,x,y,e) := {1\over {\partial}_{y} Y_{t}} f_{0}\big(t,X_{t}, Y_{t}, 0, e\big)+\hat b_{f}(t, x, e) {{\partial}_{x} Y_{t}\over {\partial}_{y} Y_{t}}. \end{array} $$

Under Assumption 7.8, F0 is uniformly Lipschitz continuous in y, and thus the main difficulty mentioned in Remark 7.7 (i) does not exist here. Then, following similar arguments as in this subsection, we can show that the results of Theorems 7.12 and 7.14 still hold true if we replace the uniform nondegeneracy condition \({\sigma}_{f} \ge c_{0}>0\) with \({\sigma}_{f}=0\).

The case that g is linear

In this subsection, we study the special case when g is linear in (y,z) (abusing the notation \(g_{0}\)):

$$\begin{array}{@{}rcl@{}} g(t,x,y,z) = {\sigma}(t,x) z + h(t,x) y + g_{0}(t,x). \end{array} $$

We remark that strictly speaking this case does not satisfy Assumption 7.1 because g0(t,x,y):=h(t,x)y+g0(t,x) is not bounded in y. However, similar to the situation in Lemma 2.13, the linear structure allows us to extend all of the results in Section 7 to this case.

First, for X given by (7.2), we have

$$\begin{array}{@{}rcl@{}} Y_{t}(x,y) = e^{H_{t}(x)} \Big[y+ \int_{0}^{t} e^{-H_{s}(x)} g_{0}(s, X_{s}(x)) d{\omega}_{s}\Big], \end{array} $$

where \(H_{t} (x):= \int _{0}^{t} h(s, X_{s}(x))d{\omega }_{s}\). By straightforward calculation, we see that (7.6) becomes, omitting (x,y) in (X,Y,H),

$$\begin{array}{@{}rcl@{}} &&F(t,x,y,z,{\gamma}) := e^{-H_{t}} f\left(t, X_{t}, Y_{t}, {e^{H_{t}}\over {\partial}_{x} X_{t}} z + {{\partial}_{x} Y_{t}\over {\partial}_{x} X_{t}}, \right.\\ &&{\qquad}{\qquad} \left. {e^{H_{t}} \over ({\partial}_{x} X_t)^{2}}\left[{\gamma}+ [{\partial}_{x} H_{t} - {{\partial}^{2}_{{xx}} X_{t}\over {\partial}_xX_{t}}] z+ e^{-H_{t}} {\partial}^{2}_{{xx}} Y_{t} - { {\partial}^{2}_{{xx}} X_{t}\over {\partial}_xX_{t}}\right]\right). \end{array} $$
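For a smooth driving path ω the rough integrals above reduce to ordinary Riemann–Stieltjes integrals, so the variation-of-constants formula for Y can be sanity-checked numerically. The sketch below freezes the x-dependence (treating h and g0 as functions of time only, an illustrative simplification; the concrete h, g0, and ω are arbitrary choices) and compares an Euler scheme for \(dY_{s} = [h\,Y_{s} + g_{0}]\,d{\omega}_{s}\) with the closed form.

```python
import numpy as np

# Illustrative data: a smooth "rough path" and frozen coefficients.
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n + 1)
omega = np.sin(t)
domega = np.diff(omega)

h = lambda s: np.cos(3.0 * s)       # hypothetical coefficient h(t, X_t(x))
g0 = lambda s: 1.0 + s**2           # hypothetical coefficient g_0(t, X_t(x))
y = 0.7                             # initial value Y_0 = y

# Euler scheme for dY_s = [h(s) Y_s + g0(s)] d omega_s.
Y = y
for i in range(n):
    Y += (h(t[i]) * Y + g0(t[i])) * domega[i]

# Closed form Y_T = e^{H_T} [ y + int_0^T e^{-H_s} g0(s) d omega_s ],
# with H_s = int_0^s h d omega; both integrals as left-point sums.
H = np.concatenate([[0.0], np.cumsum(h(t[:-1]) * domega)])
integral = np.sum(np.exp(-H[:-1]) * g0(t[:-1]) * domega)
Y_closed = np.exp(H[-1]) * (y + integral)

print(abs(Y - Y_closed))            # small: first order in the step size
assert abs(Y - Y_closed) < 1e-3
```

For a genuinely rough ω the same formula holds with the integrals read as rough integrals; the smooth case only serves to make the identity checkable by elementary means.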

We now provide some sufficient conditions for the existence of classical solutions to PDE (4.10)–(7.28).

Theorem 7.17

Let all the conditions in Theorem 7.12 hold and let g take the form (7.26). Then PDE (4.10)–(7.28) has a classical solution \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\). Consequently, RPDE (3.6)–(7.26) has a classical solution \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\).


Proof. As in Theorem 7.12, we only prove the convex case. By the regularity of f and g, it is clear that F is smooth. Omitting the variables,

$$\begin{array}{@{}rcl@{}} {\partial}_{{\gamma}} F &=& {1 \over ({\partial}_{x} X)^{2}} {\partial}_{{\gamma}} f,{\quad} {\partial}^{2}_{{\gamma}{\gamma}} F = { e^{H_{t}}\over ({\partial}_{x} X)^{4}}{\partial}^{2}_{{\gamma}{\gamma}} f, \\ {\partial}_{z} F &=& {1\over {\partial}_{x} X} {\partial}_{z} f + [{{\partial}_{x} H \over ({\partial}_{x} X)^{2}} - {{\partial}^{2}_{{xx}} X \over ({\partial}_{x} X)^{3}}] {\partial}_{{\gamma}} f,\\ {\partial}_{y} F &=&{\partial}_{y} f + {\partial}_{z} f {{\partial}_{x} H\over {\partial}_{x} X} +{\partial}_{{\gamma}} f {({\partial}_{x} H)^{2} +{\partial}^{2}_{{xx}} H\over ({\partial}_{x} X)^{2}}. \end{array} $$

Then one can verify that F satisfies all the conditions in Theorem 7.11, thus we obtain \(v\in C^{k,0}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\). Finally the existence of the corresponding function \(u\in C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}})\) follows from Theorem 4.4. □

We now assume further that f is also linear, i.e.,

$$\begin{array}{@{}rcl@{}} f(t,x,y,z,{\gamma}) = a(t,x) {\gamma} + b(t,x) z + c(t,x) y + f_{0}(t,x). \end{array} $$

This case is well understood in the literature. By a straightforward calculation,

$$\begin{array}{@{}rcl@{}} F(t,x,y,z,{\gamma}) = A(t,x) {\gamma} + B(t,x) z + C(t,x) y + F_{0}(t,x), \end{array} $$

where, for the H defined by (7.27) and again omitting the variable x,

$$ \begin{aligned} A(t,x) &:={a(t,X_{t}) \over ({\partial}_{x} X_{t})^{2}},\\ B(t,x) &:= a(t,X_{t}) \Big[ {2{\partial}_{x} H_{t} \over ({\partial}_{x}X_{t})^{2}} - {{\partial}^{2}_{{xx}} X_{t}\over ({\partial}_{x}X_{t})^{3}} \Big] +{b(t,X_{t}) \over {\partial}_{x} X_{t}}, \\ C(t,x) &:= a(t,X_{t}) {({\partial}_{x} H_{t})^{2} + {\partial}^{2}_{{xx}} H_{t}\over ({\partial}_{x} X_{t})^{2}} + \Big[{b(t,X_{t})\over {\partial}_{x} X_{t}} - {a(t,X_{t}) {\partial}^{2}_{{xx}} X_{t} \over ({\partial}_{x} X_{t})^{3}}\Big]{\partial}_{x} H_{t} + c(t,X_{t}),\\ F_{0}(t,x) &:=\Big[ {a(t,X_{t})\over ({\partial}_{x} X_{t})^{2}} \big[({\partial}_{x} H_{t})^{2} + {\partial}^{2}_{{xx}} H_{t}\big] +\Big[{b(t,X_{t})\over {\partial}_{x} X_{t}} - {a(t,X_{t}) {\partial}^{2}_{{xx}} X_{t} \over ({\partial}_{x} X_{t})^{3}}\Big] {\partial}_{x} H_{t} + c(t,X_{t})\Big] \int_{0}^{t} e^{-H_{s}} g_{0}(s, X_{s}) d{\omega}_{s} \\ &\quad+\Big[2{a(t,X_{t})\over ({\partial}_{x} X_{t})^{2}}{\partial}_{x} H_{t} +{b(t,X_{t})\over {\partial}_{x} X_{t}} - {a(t,X_{t}) {\partial}^{2}_{{xx}} X_{t} \over ({\partial}_{x} X_{t})^{3}}\Big]\int_{0}^{t} {\partial}_{x} \big(e^{-H_{s}} g_{0}(s, X_{s})\big)\, d{\omega}_{s} \\ &\quad+ {a(t,X_{t})\over ({\partial}_{x} X_{t})^{2}} \int_{0}^{t} {\partial}^{2}_{{xx}} \big(e^{-H_{s}} g_{0}(s, X_{s})\big) \,d{\omega}_{s} + f_{0}(t, X_{t}) e^{-H_{t}}. \end{aligned} $$

Thus PDE (4.10) is linear and we have the representation formula

$$ v(t,x) = {\mathbb{E}}\Big[e^{\int_{0}^{t} C(t-r, {\mathcal{X}}^{t,x}_{r}) dr} u_{0}({\mathcal{X}}^{t,x}_{t}) + \int_{0}^{t} e^{\int_{0}^{s} C(t-r, {\mathcal{X}}^{t,x}_{r}) dr} F_{0}(t-s, {\mathcal{X}}^{t,x}_{s}) ds\Big], $$

where, for fixed \((t,x) \in { [0,T]\times {\mathbb {R}}}\) and for a Brownian motion \(\bar B\),

$${\mathcal{X}}^{t,x}_{s} = x + \int_{0}^{s} B(t-r, {\mathcal{X}}^{t,x}_{r})\, dr +\int_{0}^{s} \sqrt{2A(t-r, {\mathcal{X}}^{t,x}_{r})}\, d\bar B_{r},~ 0\le s\le t. $$
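The representation formula lends itself to plain Monte Carlo. The sketch below uses constant coefficients with C=0 and u0(x)=x (hypothetical choices, so that the exact value \(v(t,x) = x + (B+F_{0})t\) is available for comparison), simulating \({\mathcal{X}}\) with drift B and diffusion \(\sqrt{2A}\), which matches the generator \(A{\partial}^{2}_{{xx}} + B{\partial}_{x}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen illustrative coefficients (assumptions for this sketch only).
A, B, C, F0 = 0.5, 0.3, 0.0, 0.2
u0 = lambda x: x
t, x, n_steps, n_paths = 1.0, 1.5, 200, 100_000
dt = t / n_steps

X = np.full(n_paths, x)
disc = np.ones(n_paths)              # e^{int_0^s C dr}; constant 1 since C = 0
running = np.zeros(n_paths)          # int_0^t e^{int_0^s C dr} F0 ds
for _ in range(n_steps):
    running += disc * F0 * dt
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += B * dt + np.sqrt(2.0 * A) * dW   # Euler-Maruyama step for X
    disc *= np.exp(C * dt)

v_mc = np.mean(disc * u0(X) + running)
v_exact = x + (B + F0) * t
print(v_mc, v_exact)
assert abs(v_mc - v_exact) < 0.05
```

With time-dependent A, B, C, F0 one would evaluate them at t−r along each step, exactly as in the formula above; the constant case merely makes the answer checkable in closed form.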

Appendix: proofs for some results in Section 2

For notational simplicity, we may often write ut(x):=u(t,x).

Proof of Lemma 2.7. First, we prove (2.13). For \(u \in C^{2, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\),

$$\begin{array}{@{}rcl@{}} u_{s, t}(x) = {\partial}_{{\omega}} u_{s}(x) {\omega}_{s,t} + R^{1,u}_{s,t}(x). \end{array} $$

Differentiating both sides with respect to x yields

$$\begin{array}{@{}rcl@{}} {\partial}_{x} u_{s, t}(x) ={\partial}_{x} {\partial}_{{\omega}} u_{s}(x) {\omega}_{s,t} +{\partial}_{x} R^{1,u}_{s,t}(x). \end{array} $$

By Definition 2.6 (iii), \({\partial }_{x} {\partial }_{{\omega }} u({\cdot }, x) \in C^{0, loc}_{{\alpha },\beta }([0, T])\) and [xR1,u(x)]α(1+β)<. Thus, (2.13) follows directly from (2.4). Next, we prove (2.15). Assume that d=1 for simplicity. Put \(v_{t}(x) := \int _{0}^{x} u_{t}(y) dy\). Then

$$\begin{array}{@{}rcl@{}} v_{s,t}(x) = \int_{0}^{x} u_{s, t}(y) dy = \int_{0}^{x} {\partial}_{{\omega}} u_{s}(y)dy ~{\omega}_{s, t} + \int_{0}^{x} R^{1,u}_{s,t}(y) dy. \end{array} $$

By continuity of \(x\mapsto {\partial }_{{\omega }} u({\cdot }, x)\in C^{0}_{{\alpha },\beta }([0, T])\) and \(x\mapsto u({\cdot }, x)\in C^{1}_{{\alpha },\beta }([0, T])\), it is clear that \( \int _{0}^{x} {\partial }_{{\omega }} u({\cdot }, y)dy \in C^{0}_{{\alpha },\beta }([0, T])\) and \([\int _{0}^{x} R^{1,u}(y)dy]_{{\alpha }(1+\beta)} <\infty \). Then \({\partial }_{{\omega }} v_{t}(x) = \int _{0}^{x} {\partial }_{{\omega }} u_{t}(y)dy\). For the partition \(t_{i}=i\,2^{-n} t\), \(i=0, \dots, 2^{n}\),

$$\begin{array}{@{}rcl@{}} &\int_{0}^{t} \int_{0}^{x} u_{s}(y) \,dy \,d{\omega}_{s} = \int_{0}^{t} v_{s}(x) \,d{\omega}_{s} \\ &= \lim\limits_{n\to \infty} \sum_{i=0}^{2^{n}-1} \big[v_{t_{i}}(x) {\omega}_{t_{i}, t_{i+1}} + {\partial}_{{\omega}} v_{t_{i}}(x) \underline{\omega}_{t_{i}, t_{i+1}}\big]\\ &= \lim\limits_{n\to \infty} \int_{0}^{x} \sum_{i=0}^{2^{n}-1} \big[u_{t_{i}}(y) {\omega}_{t_{i}, t_{i+1}} + {\partial}_{{\omega}} u_{t_{i}}(y) \underline{\omega}_{t_{i}, t_{i+1}}\big] \,dy. \end{array} $$

By standard estimates (Keller and Zhang 2016, Lemma 2.5),

$$\begin{array}{@{}rcl@{}} \Big|\int_{t_{i}}^{t_{i+1}} u_{s}(y) d{\omega}_{s} - u_{t_{i}}(y) {\omega}_{t_{i}, t_{i+1}} - {\partial}_{{\omega}} u_{t_{i}}(y) \underline{\omega}_{t_{i}, t_{i+1}}\Big| \le {C\|u({\cdot}, y)\|_{1} \over 2^{{\alpha}(2+\beta)n}}. \end{array} $$

Since O is bounded, by continuity of u again, \(\sup _{y\in O}\|u({\cdot },y)\|_{1}<\infty \). Then, by (2.2) and the dominated convergence theorem, we immediately obtain

$$\begin{array}{@{}rcl@{}} \int_{0}^{t} \int_{0}^{x} u_{s}(y) \,dy \,d{\omega}_{s} &=\lim\limits_{n\to \infty} \int_{0}^{x} \sum_{i=0}^{2^{n}-1} \int_{t_{i}}^{t_{i+1}} u_{s}(y)\,d{\omega}_{s} \,dy \\ &= \int_{0}^{x} \int_{0}^{t} u_{s}(y) \,d{\omega}_{s} \,dy. \end{array} $$

This yields (2.15) for general s, t, and O.
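The compensated Riemann sums used above can be illustrated numerically. For a smooth path (here \({\omega}_{t} = \sin t\), an arbitrary choice) one has \(\underline{\omega}_{s,t} = \frac12 {\omega}_{s,t}^{2}\), and for the integrand \(u_{t} = {\omega}_{t}^{2}\) with Gubinelli derivative \({\partial}_{{\omega}} u_{t} = 2{\omega}_{t}\), the compensated sums converge at second order in the mesh while plain left-point sums converge only at first order.

```python
import numpy as np

def sums(n, T=1.0):
    # Exact value: int_0^T omega^2 d omega = (omega_T^3 - omega_0^3)/3.
    t = np.linspace(0.0, T, n + 1)
    w = np.sin(t)                     # smooth path omega
    dw = np.diff(w)                   # increments omega_{t_i, t_{i+1}}
    exact = (w[-1]**3 - w[0]**3) / 3.0
    plain = np.sum(w[:-1]**2 * dw)    # left-point Riemann sum
    # compensated sum: add (d_omega u_{t_i}) * underline-omega_{t_i,t_{i+1}}
    comp = np.sum(w[:-1]**2 * dw + (2.0 * w[:-1]) * (dw**2 / 2.0))
    return abs(plain - exact), abs(comp - exact)

e_plain_1, e_comp_1 = sums(100)
e_plain_2, e_comp_2 = sums(200)
# Halving the mesh: plain error drops by ~2, compensated error by ~4.
assert e_plain_1 / e_plain_2 < 3.0
assert e_comp_1 / e_comp_2 > 3.0
assert e_comp_1 < e_plain_1
print("second-order convergence of compensated sums observed")
```

For a genuinely α-Hölder rough path with 2α<1 the plain sums need not converge at all, which is precisely why the compensator \({\partial}_{{\omega}} u\, \underline{\omega}\) is built into the integral.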

Finally, we prove (2.14). Let \(u \in C^{3, loc}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\). By Lemma 2.5,

$$\begin{array}{@{}rcl@{}} u_{s,t}(x) = \int_{s}^{t} {\partial}^{{\omega}}_{t} u_{r}(x) \,dr + \int_{s}^{t} \Big[{\partial}_{{\omega}} u_{r}(0) + \int_{0}^{x} {\partial}_{x}{\partial}_{{\omega}} u_{r}(y) \,dy\Big]\,d{\omega}_r. \end{array} $$

Differentiating both sides with respect to x and applying (2.15) to the last term above, we obtain

$$\begin{array}{@{}rcl@{}} {\partial}_{x} u_{s,t}(x) = \int_{s}^{t} {\partial}_{x} {\partial}^{{\omega}}_{t} u_{r}(x) \,dr + \int_{s}^{t} {\partial}_{x}{\partial}_{{\omega}} u_{r}(x) \,d{\omega}_r. \end{array} $$

Then (2.14) follows from Lemma 2.5. □

Proof of Lemma 2.9. We proceed in two steps.

Step 1. Let h=0. For notational simplicity, we omit the variable x in this step. We shall prove by induction that

$$\begin{array}{@{}rcl@{}} u_{t} = \sum_{\|\nu\|\le k} {\mathcal{D}}_{\nu} u_{s} \mathcal{I}^{\nu}_{s, t} + R^{k,u}_{s,t} ~\text{and}~ |R^{k,u}_{s, t}|\le C\|u({\cdot},x)\|_{k} \,\vert t-s\vert^{{\alpha}(k+\beta)}. \end{array} $$

When k=0, 1, 2, (7.33) follows directly from the definitions of the derivatives. Let m≥2 and assume that (7.33) holds for all \(k\le m\). Now, let \(u \in C^{m+1, loc}_{{\alpha },\beta }([0, T]\times {\mathbb {R}})\). By (2.11),

$$\begin{array}{@{}rcl@{}} u_{s,t} = \int_{s}^{t} {\partial}^{{\omega}}_{t} u_{r} dr + \int_{s}^{t} {\partial}_{{\omega}} u_{r} d{\omega}_r. \end{array} $$

Note that \({\partial }^{{\omega }}_{t} u \in C^{m-1, loc}_{{\alpha },\beta }([0, T]\times {\mathbb {R}})\). By the induction hypothesis,

$$\begin{array}{@{}rcl@{}} &{\partial}^{{\omega}}_{t} u_{r} = \sum_{\|\nu\|\le m-1} \!\!\! {\mathcal{D}}_{\nu} {\partial}^{{\omega}}_{t} u_{s} \mathcal{I}^{\nu}_{s,r} + R^{m-1, {\partial}^{{\omega}}_{t} u}_{s,r}, \\ &\text{with}~|R^{m-1, {\partial}^{{\omega}}_{t} u}_{s,r}|\le C\|{\partial}^{{\omega}}_{t} u({\cdot}, x)\|_{m-1} \vert r-s\vert^{{\alpha}(m-1+\beta)}. \end{array} $$

Then, since 2α<1,

$$\begin{array}{@{}rcl@{}} &\Big| \int_{s}^{t} {\partial}^{{\omega}}_{t} u_{r}\,dr - \sum_{\|\nu\|\le m-1} {\mathcal{D}}_{\nu} {\partial}^{{\omega}}_{t} u_{s} \int_{s}^{t}\mathcal{I}^{\nu}_{s,r}\,dr \Big| = \Big| \int_{s}^{t} R^{m-1, {\partial}^{{\omega}}_{t} u}_{s,r} \,dr \Big|\\ &\le C\|{\partial}^{{\omega}}_{t} u({\cdot}, x)\|_{m-1}\, \Big| \int_{s}^{t} (r- s)^{{\alpha}(m-1+\beta)} \,dr \Big|\\ &\le C\|u({\cdot}, x)\|_{m+1} \, \vert t-s\vert^{{\alpha}(m+1+\beta)}. \end{array} $$

Thus, it remains to show that

$$\begin{array}{@{}rcl@{}} &\Big|\int_{s}^{t} {\partial}_{{\omega}} u_{r} \,d{\omega}_{r} - \sum_{\|\nu\|\le m} {\mathcal{D}}_{\nu}{\partial}_{{\omega}} u_{s} \int_{s}^{t} \mathcal{I}^{\nu}_{s,r} \,d{\omega}_{r} \Big| \\ &\le C\|u({\cdot}, x)\|_{m+1} \vert t-s\vert^{{\alpha}(m+1+\beta)}. \end{array} $$

Note that \(v := {\partial }_{{\omega }} u \in C^{m, loc}_{{\alpha },\beta }([0, T]\times {\mathbb {R}})\). Put \(U_{s,t} := \sum _{\|\nu \|\le m} {\mathcal {D}}_{\nu } v_{s} \int _{s}^{t} \mathcal {I}^{\nu }_{s,r} \,d{\omega }_{r}\). For s<r<t,

$$\begin{array}{@{}rcl@{}} & U_{s,r} +U_{r,t} - U_{s,t} = \sum_{\|\nu\|\le m} {\mathcal{D}}_{\nu} v_{r} \int_{r}^{t} \mathcal{I}^{\nu}_{r,l}\, d{\omega}_{l} ~ - ~ \sum_{\|\nu\|\le m} {\mathcal{D}}_{\nu} v_{s} \int_{r}^{t} \mathcal{I}^{\nu}_{s,l} \,d{\omega}_{l} \\ &= \sum_{\|\nu\|\le m} \Big[\sum_{\|\nu^{\prime}\|\le m-\|\nu\|} {\mathcal{D}}_{\nu^{\prime}} {\mathcal{D}}_{\nu} v_{s} \mathcal{I}^{\nu^{\prime}}_{s, r} + R^{m-\|\nu\|, {\mathcal{D}}_{\nu} v}_{s, r}\Big] \int_{r}^{t} \mathcal{I}^{\nu}_{r,l}\, d{\omega}_{l} \\ &{\qquad}{\qquad} - \sum_{\|\nu\|\le m} {\mathcal{D}}_{\nu} v_{s} \int_{r}^{t} \mathcal{I}^{\nu}_{s,l} d{\omega}_{l} \\ &= \!\!\sum_{\|\nu\|\le m} {\mathcal{D}}_{\nu} v_{s} \int_{r}^{t} \Big[\sum_{(\nu^{\prime}, \tilde \nu)= \nu} \mathcal{I}^{\nu^{\prime}}_{s, r}\, \mathcal{I}^{\tilde \nu}_{r, l} \!\,-\,\! \mathcal{I}^{\nu}_{s,l}\Big] d {\omega}_{l}\! + \!\sum_{\|\nu\|\le m} R^{m-\|\nu\|, {\mathcal{D}}_{\nu} v}_{s, r} \int_{r}^{t} \mathcal{I}^{\nu}_{r,l}\, d{\omega}_{l}. \end{array} $$

By induction, one can verify that

$$\begin{array}{@{}rcl@{}} \sum_{(\nu^{\prime}, \tilde \nu)= \nu} \mathcal{I}^{\nu^{\prime}}_{s, r}\, \mathcal{I}^{\tilde \nu}_{r, l} = \mathcal{I}^{\nu}_{s,l}{\quad}\text{and}{\quad} \Big|\int_{r}^{t} \mathcal{I}^{\nu}_{r,l} \,d{\omega}_{l} \Big| \le C(t-r)^{(\|\nu\|+1){\alpha}}. \end{array} $$

We remark that the former identity here is actually Chen’s relation. Then, since \({\mathcal {D}}_{\nu } v \in C^{m-\|\nu \|, loc}_{{\alpha },\beta }([0, T]\times {\mathbb {R}})\), by our induction assumption,

$$\begin{array}{@{}rcl@{}} |R^{m-\|\nu\|, {\mathcal{D}}_{\nu} v}_{s, r}| &\le C\|{\mathcal{D}}_{\nu} v({\cdot}, x)\|_{m-\|\nu\|}(r-s)^{{\alpha}(m-\|\nu\| + \beta)} \\ &\le C\|u({\cdot}, x)\|_{m+1} (r-s)^{{\alpha}(m-\|\nu\| + \beta)}. \end{array} $$


$$\begin{array}{@{}rcl@{}} \big|U_{s,r} +U_{r,t} - U_{s,t}\big| &\le& C\|u({\cdot}, x)\|_{m+1} (r-s)^{{\alpha}(m-\|\nu\| + \beta)}(t-r)^{(\|\nu\|+1){\alpha}} \\ &\le& C\|u({\cdot}, x)\|_{m+1} (t-s)^{{\alpha}(m+1+ \beta)}. \end{array} $$

Applying the Sewing Lemma (Friz and Hairer 2014, Lemma 4.2) yields (7.34). Hence (7.33) follows.

Step 2. We now prove the general case. By standard Taylor expansion in x and by Step 1,

$$\begin{array}{@{}rcl@{}} &u(t+{\delta}, x+h) = [u(t+{\delta}, x+h) - u(t, x+h)] + u(t,x+h) \\ &= \sum_{1\le \|\nu\|\le k} {\mathcal{D}}_{\nu} u(t, x+h)\, \mathcal{I}^{\nu}_{t, t+{\delta}} + R^{k, u({\cdot}, x+h)}_{t, t+{\delta}} + \sum_{m=0}^{k} \frac{1}{m!}{\partial}^{m}_{x} u(t,x) h^{m} \\ &{\qquad}{\qquad}+ O(|h|^{k+\beta}) \\ &= \sum_{1\le \|\nu\|\le k} \Big[\sum_{m=0}^{k-\|\nu\|} \frac{1}{m!} {\partial}^{m}_{x} {\mathcal{D}}_{\nu} u(t, x) h^{m} + O(|h|^{k-\|\nu\|+\beta}) \Big]\, \mathcal{I}^{\nu}_{t, t+{\delta}} + R^{k, u({\cdot}, x+h)}_{t, t+{\delta}}\\ &{\qquad} + \sum_{m=0}^{k} \frac{1}{m!} {\partial}^{m}_{x} u(t,x) h^{m} + O(|h|^{k+\beta}) \\ &=\!\! \sum_{m+\|\nu\|\le k} \frac{1}{m!} {\mathcal{D}}_{\nu} {\partial}^{m}_{x} u(t, x) h^{m}\, \mathcal{I}^{\nu}_{t, t+{\delta}} +\!\! \sum_{1\le \|\nu\|\le k}\!\! O(|h|^{k-\|\nu\|+\beta})\,\mathcal{I}^{\nu}_{t, t+{\delta}} \\ &{\qquad}{\qquad}+ R^{k, u({\cdot}, x+h)}_{t, t+{\delta}}+ O(|h|^{k+\beta}), \end{array} $$

where the last equality follows from Lemma 2.7. Note that \(|\mathcal {I}^{\nu }_{t, t+{\delta }}| \le C|{\delta }|^{{\alpha } \|\nu \|}\) and, by Step 1, |Rt,t+δk,u(·,x+h)|≤Cu(·,x+h)k |δ|α(k+β). Since u(·,x)k is continuous in x and thus locally bounded, we obtain (2.18) immediately. □

Proof of Lemma 2.12. The wellposedness for a classical solution \(u\in C^{1}_{\alpha,\beta }([0,T])\) with the corresponding estimate is standard. Then, by Lemma 2.5, \(u\in C^{2}_{\alpha,\beta }([0,T])\), again with the corresponding estimate. Note that \({\partial }_{{\omega }} u_{t} = g(t, u_{t}), {\partial }^{{\omega }}_{t} u_{t} = f(t, u_{t})\). Applying Lemma 2.8 repeatedly, one can easily prove by induction that \(u\in C^{k+2}_{{\alpha },\beta }([0, T])\) and that

$$\begin{array}{@{}rcl@{}} \|u\|_{k+2} \le C(T, \|f\|_{k}, \|g\|_{k+1}, u_0). \end{array} $$

Moreover, with \(\tilde u_{t} := u_{t} - u_{0}, \tilde f(t, x) := f(t, x+u_{0}), \tilde g(t,x) := g(t, x+u_{0})\),

$$\begin{array}{@{}rcl@{}} \tilde u_{t} = \int_{0}^{t} \tilde f(s, \tilde u_s) \,ds + \int_{0}^{t}\tilde g(s, \tilde u_s) \,d{\omega}_s. \end{array} $$

Thus, by (7.35), \(\|\tilde u\|_{k+2} \le C(T, \|\tilde f\|_{k}, \|\tilde g\|_{k+1}, 0) = C(T, \|f\|_{k}, \|g\|_{k+1})\). □

Proof of Lemma 2.15. First, we assume that \(u_{0} \in C^{k+\beta }_{b}({\mathbb {R}})\), namely u0 and all its related derivatives are bounded. Applying Lemma 2.12 we see that \(u({\cdot }, x) \in C^{k+1}_{{\alpha },\beta }([0, T])\) and

$$\begin{array}{@{}rcl@{}} \sup_{x\in {\mathbb{R}}} \|u({\cdot}, x)\|_{k+1} \le C(\|u_{0}\|_{\infty}, \|f\|_{k}, \|g\|_k)<\infty. \end{array} $$

Recalling Definition 2.10 and in particular Remark 2.11, we note that our space \(C^{k}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{d})\) requires stronger conditions than those in Keller and Zhang (2016). Thus, one may follow the arguments of Keller and Zhang (2016, Theorem 6.1) to obtain Eq. (2.26). Hence

$$\begin{array}{@{}rcl@{}} {\partial}_{x} R^{1,u}_{s,t}(x) &&= {\partial}_{x} \big[u_{s,t}(x) - g(s,x, u_{s}(x)) {\omega}_{s,t} \big]\\ &&= {\partial}_{x} u_{s,t}(x) - \big[{\partial}_{x} g(s, x, u_{s}(x)) + {\partial}_{y} g(s, x, u_{s}(x)){\partial}_{x} u_{s}(x)\big]{\omega}_{s,t}\\ &&= \int_{s}^{t} [{\partial}_{x} f + {\partial}_{y} f {\partial}_{x} u_{r}(x)](r, x, u_{r}(x)) dr \\ &&+ \int_{s}^{t} [{\partial}_{x} g + {\partial}_{y} g {\partial}_{x} u_{r}(x)](r, x, u_{r}(x)) d{\omega}_{r}\\ &&- \big[{\partial}_{x} g(s, x, u_{s}(x)) + {\partial}_{y} g(s, x, u_{s}(x)){\partial}_{x} u_{s}(x)\big]{\omega}_{s,t}. \end{array} $$

Applying Lemma 2.13 to (2.26) yields a representation formula for xu and

$$\begin{array}{@{}rcl@{}} \sup_{x\in {\mathbb{R}}} \|{\partial}_{x} u({\cdot}, x)\|_{k} \le C(\|{\partial}_{x} u_{0}\|_{\infty}, \|f\|_{k}, \|g\|_k)<\infty, \end{array} $$

which, together with (7.36), implies further that

$$\begin{array}{@{}rcl@{}} \sup_{x\in{\mathbb{R}}}[{\partial}_{x} R^{1,u}(x)]_{{\alpha}(1+\beta)} \le C(\|u_{0}\|_{\infty}, \|{\partial}_{x} u_{0}\|_{\infty}, \|f\|_{k}, \|g\|_k). \end{array} $$

Together with the representation for xu and the arguments in Lemma 2.8,

$$\begin{array}{@{}rcl@{}} {\partial}^{2}_{{xx}} u_{t}(x) &&= {\partial}^{2}_{{xx}} u_{0}(x) \\ &&+ \int_{0}^{t} \big[{\partial}^{2}_{{xx}} f + 2{\partial}^{2}_{{xy}} f {\partial}_{x} u_{s}(x) + {\partial}^{2}_{{yy}} f ({\partial}_{x} u_{s}(x))^{2} + {\partial}_{y} f {\partial}^{2}_{{xx}} u_{s}(x)\big](s, x, u_{s}(x)) ds \\ &&+ \int_{0}^{t} \big[{\partial}^{2}_{{xx}} g + 2{\partial}^{2}_{{xy}} g {\partial}_{x} u_{s}(x) + {\partial}^{2}_{{yy}} g ({\partial}_{x} u_{s}(x))^{2} + {\partial}_{y} g {\partial}^{2}_{{xx}} u_{s}(x)\big](s, x, u_{s}(x)) d{\omega}_{s}, \end{array} $$

follows. Then one can easily see that \(\|u\|_{2}<\infty \) and thus \(u\in C^{1}_{{\alpha }, \beta }({ [0,T]\times {\mathbb {R}}})\) in the sense of Definition 2.10 (cf. Remark 2.11). Repeating the arguments (up to order k+1 to ensure Hölder continuity with respect to x of \({\partial }^{k}_{x} u\), etc.), one can show that \(\|u\|_{k+1}<\infty \), which implies that \(u\in C^{k}_{{\alpha }, \beta }({ [0,T]\times {\mathbb {R}}})\).

Next, suppose that u0 is not bounded but all of its related derivatives are bounded. Put \(\tilde u := u - u_{0}, \tilde {\varphi }(t,x,y) := {\varphi }(t,x, y+u_{0}(x))\) for φ=f, g. One can easily see that \(\tilde f\in C^{k+1}_{{\alpha },\beta }({ [0,T]\times {\mathbb {R}}}^{2})\) with \(\|\tilde f\|_{k}\) being dominated by fk+1 and the derivatives of u0, similarly for \(\tilde g\). Note that \(\tilde u\) satisfies RDE (2.25) with coefficients \((\tilde f, \tilde g)\) and initial condition 0. Thus \(\tilde u\in C^{k}_{{\alpha }, \beta }({ [0,T]\times {\mathbb {R}}})\).

Finally, if \(u_{0}\in C^{k+\beta }({\mathbb {R}})\), let \(\iota \in C^{\infty }({\mathbb {R}})\) such that ι(x)=1 when |x|≤1 and ι(x)=0 when |x|≥2. Let \(u^{{\varepsilon }}\in C^{k}_{{\alpha }, \beta }({ [0,T]\times {\mathbb {R}}})\) be the solution to RDE (2.25) with coefficients (f,g) and initial condition \(u^{{\varepsilon }}_{0}(x) = u_{0}(\iota ({\varepsilon } x)\,x)\). Note that \(u^{{\varepsilon }}_{0}(x) = u_{0}(x)\) and hence \(u^{{\varepsilon }}_{t}(x) = u_{t}(x)\) whenever |x|≤1/ε. Since ε>0 is arbitrary, we see that \(u\in C^{k,loc}_{{\alpha }, \beta }({ [0,T]\times {\mathbb {R}}})\). □

Availability of data and materials

Not applicable.



Abbreviations

a.e.: almost everywhere

a.s.: almost sure

BMO: bounded mean oscillation

BSDE: backward stochastic differential equation

ODE: ordinary differential equation

PDE: partial differential equation

PPDE: path-dependent partial differential equation

RDE: rough differential equation

RPDE: rough partial differential equation

SDE: stochastic differential equation

SPDE: stochastic partial differential equation


References

  1. Buckdahn, R., Bulla, I., Ma, J.: On Pathwise Stochastic Taylor Expansions. Math. Control Relat. Fields 1(4), 437–468 (2011).


  2. Buckdahn, R., Li, J.: Stochastic differential games and viscosity solutions of Hamilton-Jacobi-Bellman-Isaacs equations. SIAM J. Control Optim. 47(1), 444–475 (2008).


  3. Buckdahn, R., Ma, J.: Stochastic viscosity solutions for nonlinear stochastic partial differential equations. I. Stoch. Process. Appl. 93(2), 181–204 (2001).


  4. Buckdahn, R., Ma, J.: Stochastic viscosity solutions for nonlinear stochastic partial differential equations. II. Stoch. Process. Appl. 93(2), 205–228 (2001).


  5. Buckdahn, R., Ma, J.: Pathwise stochastic Taylor expansions and stochastic viscosity solutions for fully nonlinear stochastic PDEs. Ann. Probab. 30(3), 1131–1171 (2002).


  6. Buckdahn, R., Ma, J.: Pathwise stochastic control problems and stochastic HJB equations. SIAM J. Control Optim. 45(6), 2224–2256 (2007).


  7. Buckdahn, R., Ma, J., Zhang, J.: Pathwise Taylor expansions for random fields on multiple dimensional paths. Stoch. Process. Appl. 125, 2820–2855 (2015).


  8. Caruana, M., Friz, P., Oberhauser, H.: A (rough) pathwise approach to a class of non-linear stochastic partial differential equations. Ann. Inst. H. Poincaré Anal. Non Linéaire. 28, 27–46 (2011).


  9. Crandall, M. G., Ishii, H., Lions, P. -L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc. (N.S.) 27(1), 1–67 (1992).


  10. Da Prato, G., Tubaro, L.: Fully nonlinear stochastic partial differential equations. SIAM J. Math. Anal. 27(1), 40–55 (1996).


  11. Davis, M., Burstein, G.: A Deterministic Approach To Stochastic Optimal Control With Application To Anticipative Control. Stochast. Stoch. Rep. 40(3-4), 203–256 (1992).


  12. Diehl, J., Friz, P.: Backward stochastic differential equations with rough drivers. Ann. Prob. 40, 1715–1758 (2012).


  13. Diehl, J., Friz, P., Gassiat, P.: Stochastic control with rough paths. Appl. Math. Optim. 75(2), 285–315 (2017).


  14. Diehl, J., Friz, P., Oberhauser, H.: Regularity theory for rough partial differential equations and parabolic comparison revisited. In: Stochastic Analysis and Applications 2014, pp. 203–238. Springer, Cham (2014).


  15. Diehl, J., Oberhauser, H., Riedel, S.: A Lévy area between Brownian motion and rough paths with applications to robust nonlinear filtering and rough partial differential equations. Stoch. Process. Appl. 125(1), 161–181 (2015).


  16. Dupire, B.: Functional Itô calculus. Quant. Finan. 19(5), 721–729 (2019).


  17. Ekren, I., Keller, C., Touzi, N., Zhang, J.: On viscosity solutions of path dependent PDEs. Ann. Probab. 42, 204–236 (2014).


  18. Ekren, I., Touzi, N., Zhang, J.: Viscosity Solutions of Fully Nonlinear Parabolic Path Dependent PDEs: Part I. Ann. Probab. 44, 1212–1253 (2016).


  19. Ekren, I., Touzi, N., Zhang, J.: Viscosity Solutions of Fully Nonlinear Parabolic Path Dependent PDEs: Part II. Ann. Probab. 44, 2507–2553 (2016).


  20. Friz, P., Gassiat, P., Lions, P. L., Souganidis, P. E.: Eikonal equations and pathwise solutions to fully non-linear SPDEs. Stochast. Partial Differ. Equ. Anal. Comput. 5, 256–277 (2017).


  21. Friz, P., Hairer, M.: A course on rough paths: With an introduction to regularity structures, Universitext. Springer, Cham (2014).


  22. Friz, P., Oberhauser, H.: On the splitting-up method for rough (partial) differential equations. J. Differ. Equ. 251(2), 316–338 (2011).


  23. Friz, P., Oberhauser, H.: Rough path stability of (semi-)linear SPDEs. Probab. Theory Relat. Fields. 158, 401–434 (2014).


  24. Gilbarg, D., Trudinger, N.: Elliptic Partial Differential Equations of Second Order, second edition. Springer-Verlag, Germany (1983).


  25. Gubinelli, M.: Controlling rough paths. J. Funct. Anal. 216(1), 86–140 (2004).


  26. Gubinelli, M., Tindel, S., Torrecilla, I.: Controlled viscosity solutions of fully nonlinear rough PDEs (2014). arXiv preprint, arXiv:1403.2832.

  27. Keller, C., Zhang, J.: Pathwise Itô calculus for rough paths and rough PDEs with path dependent coefficients. Stoch. Process. Appl. 126, 735–766 (2016).


  28. Krylov, N. V.: An analytic approach to SPDEs. In: Stochastic Partial Differential Equations: Six Perspectives, Math. Surveys Monogr., vol. 64, pp. 185–242. Amer. Math. Soc., Providence, RI (1999).


  29. Kunita, H.: Stochastic flows and stochastic differential equations. In: Cambridge Studies in Advanced Mathematics, vol. 24. Cambridge University Press, Cambridge (1997).


  30. Lieberman, G.: Second order parabolic differential equations. World Scientific Publishing Co., Inc., River Edge (1996).


  31. Lions, P. -L., Souganidis, P. E.: Fully nonlinear stochastic partial differential equations. C. R. Acad. Sci. Paris Sér. I Math. 326(9), 1085–1092 (1998).


  32. Lions, P. -L., Souganidis, P. E.: Fully nonlinear stochastic partial differential equations: non-smooth equations and applications. C. R. Acad. Sci. Paris Sér. I Math. 327(8), 735–741 (1998).


  33. Lions, P. -L., Souganidis, P. E.: Fully nonlinear stochastic PDE with semilinear stochastic dependence. C. R. Acad. Sci. Paris Sér. I Math. 331(8), 617–624 (2000).


  34. Lions, P. -L., Souganidis, P. E.: Uniqueness of weak solutions of fully nonlinear stochastic partial differential equations. C. R. l’Acad. Sci.-Ser. I-Math. 331(10), 783–790 (2000).


  35. Lunardi, A.: Analytic semigroups and optimal regularity in parabolic problems, Progress in Nonlinear Differential Equations and their Applications 16. Birkhäuser Verlag, Basel (1995).


  36. Lyons, T.: Differential equations driven by rough signals. Rev. Mat. Iberoam. 14(2), 215–310 (1998).


  37. Matoussi, A., Possamai, D., Sabbagh, W.: Probabilistic interpretation for solutions of Fully Nonlinear Stochastic PDEs. Probab. Theory Relat. Fields (2018).

  38. Mikulevicius, R., Pragarauskas, G.: Classical solutions of boundary value problems for some nonlinear integro-differential equations. Lithuanian Math. J. 34(3), 275–287 (1994).


  39. Musiela, M., Zariphopoulou, T.: Stochastic partial differential equations and portfolio choice, Contemporary Quantitative Finance. Springer, Berlin (2010).


  40. Nadirashvili, N., Vladut, S.: Nonclassical solutions of fully nonlinear elliptic equations. Geom. Funct. Anal. 17(4), 1283–1296 (2007).


  41. Pardoux, E., Peng, S.: Backward doubly stochastic differential equations and systems of quasilinear SPDEs. Probab. Theory Relat. Fields. 98, 209–227 (1994).


  42. Peng, S.: Stochastic Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim. 30(2), 284–304 (1992).


  43. Pham, T., Zhang, J.: Two Person Zero-sum Game in Weak Formulation and Path Dependent Bellman-Isaacs Equation. SIAM J. Control. Optim. 52, 2090–2121 (2014).


  44. Rozovskii, B. L.: Stochastic Evolution Systems: Linear Theory and Applications to Non-linear Filtering. Kluwer Academic Publishers, Boston (1990).


  45. Safonov, M. V.: Boundary value problems for second-order nonlinear parabolic equations, (Russian). Funct. Numer. Methods Math. Phys. “Naukova Dumka” Kiev. 274, 99–203 (1988).


  46. Safonov, M. V.: Classical solution of second-order nonlinear elliptic equations. Math. USSR-Izv. 33(3), 597–612 (1989).


  47. Seeger, B.: Perron’s method for pathwise viscosity solutions. Commun. Partial Differ. Equ. 43(6), 998–1018 (2018).


  48. Seeger, B.: Homogenization of pathwise Hamilton-Jacobi equations. J. Math. Pures Appl. 110, 1–31 (2018).


  49. Seeger, B.: Approximation schemes for viscosity solutions of fully nonlinear stochastic partial differential equations. Ann. Appl. Probab. 30(4), 1784–1823 (2020).


  50. Souganidis, P. E.: Pathwise solutions for fully nonlinear first- and second-order partial differential equations with multiplicative rough time dependence, Singular random dynamics, Lecture Notes in Math. vol. 2253. Springer, Cham (2019).


  51. Zhang, J.: Backward Stochastic Differential Equations — from linear to fully nonlinear theory. Springer, New York (2017).




Acknowledgements

The authors would like to thank Peter Baxendale and Remigijus Mikulevicius for very helpful discussions.


Funding

Rainer Buckdahn is supported in part by the PGMO–Gaspard Monge Program for Optimisation and operational research–Fondation Mathématiques Jacques Hadamard. Jin Ma and Jianfeng Zhang are supported in part by NSF grant DMS-1908665.

Author information




Authors' contributions

All authors have read and approved the manuscript.

Corresponding author

Correspondence to Christian Keller.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

An earlier version of this paper was entitled “Pathwise Viscosity Solutions of Stochastic PDEs and Forward Path-Dependent PDEs.”

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit



Cite this article

Buckdahn, R., Keller, C., Ma, J. et al. Fully nonlinear stochastic and rough PDEs: Classical and viscosity solutions. Probab Uncertain Quant Risk 5, 7 (2020).

Download citation


Keywords

  • Stochastic PDEs
  • path-dependent PDEs
  • rough PDEs
  • rough paths
  • viscosity solutions
  • comparison principle
  • functional Itô formula
  • characteristics
  • rough Taylor expansion

AMS 2000 subject classifications

  • 60H07
  • 60H15
  • 60H30
  • 35R60
  • 34F05