Proofs of the results from Sections “Strictly pathwise hedging of exotic derivatives” and “Absence of pathwise arbitrage”
Proof of Proposition 2.1.
We first consider the case i=j. Then,
$$\begin{array}{*{20}l} \boldsymbol{\xi}^{ii}(s)\cdot(\mathbf{S}(s')-\mathbf{S}(s)) &= 2\left(S_{i}(s)-K_{ii}\right)(S_{i}(s')-S_{i}(s))\\ &=\left(S_{i}(s')-K_{ii}\right)^{2}-\left(S_{i}(s)-K_{ii}\right)^{2}-(S_{i}(s')-S_{i}(s))^{2}. \end{array} $$
Summing over \(s\in \mathbb {T}_{n}\) yields
$$ \sum_{s\in\mathbb{T}_{n},\,s\le t}\boldsymbol{\xi}^{ii}(s)\cdot(\mathbf{S}(s')-\mathbf{S}(s))=(S_{i}(t_{n})-K_{ii})^{2}-(S_{i}(0)-K_{ii})^{2}-\sum_{s\in{\mathbb{T}}_{n},\,s\le t}(S_{i}(s')-S_{i}(s))^{2}, $$
(5.1)
where \(t_{n}=\max \{s'\mid s\in \mathbb {T}_{n},\, s\le t\}\searrow t\) as n↑∞. Clearly, the limit of the left-hand side exists if and only if the limit of the right-hand side exists, which implies the result for i=j. In the case i≠j, the result follows just as above by using the already established existence of \({\langle }S_{k},S_{k}{\rangle }(t)\) for all k and t and by noting that \(\sum _{k,\ell \in \{i,j\}}{\langle }S_{k},S_{\ell }{\rangle }={\left \langle S_{i}+S_{j},S_{i}+S_{j} \right \rangle }\). □
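The polarization step used for i≠j rests on a purely algebraic identity that holds along any finite partition; the following minimal sketch (plain Python, with hypothetical discretized paths `S1`, `S2` standing in for \(S_i\), \(S_j\)) verifies at the level of discrete sums that \(\langle S_{i}+S_{j},S_{i}+S_{j}\rangle =\langle S_{i},S_{i}\rangle +2\langle S_{i},S_{j}\rangle +\langle S_{j},S_{j}\rangle \):

```python
# Algebraic identity behind the polarization argument: along any finite partition,
#   sum dS1*dS2 = ( sum (d(S1+S2))^2 - sum (dS1)^2 - sum (dS2)^2 ) / 2.
# S1, S2 are hypothetical discretized paths; the identity holds for any sequences.
import random

random.seed(0)
n = 1000
S1, S2 = [100.0], [50.0]
for _ in range(n):
    S1.append(S1[-1] * (1.0 + random.gauss(0.0, 0.01)))
    S2.append(S2[-1] * (1.0 + random.gauss(0.0, 0.02)))

def qv(X):
    """Discrete quadratic variation of a path along the partition."""
    return sum((X[k + 1] - X[k]) ** 2 for k in range(len(X) - 1))

cross = sum((S1[k + 1] - S1[k]) * (S2[k + 1] - S2[k]) for k in range(n))
S12 = [a + b for a, b in zip(S1, S2)]
polarized = 0.5 * (qv(S12) - qv(S1) - qv(S2))
assert abs(cross - polarized) < 1e-8
```

Since the identity is exact for each summand, the existence of the limits of \(\langle S_{i},S_{i}\rangle\), \(\langle S_{j},S_{j}\rangle\), and \(\langle S_{i}+S_{j},S_{i}+S_{j}\rangle\) along the partition sequence forces the existence of the limit defining \(\langle S_{i},S_{j}\rangle\).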
Proof of Proposition 2.3.
The pathwise Itô formula yields that for ,
This immediately yields that (b) implies (a) and that (2.9) must hold.
Let us now assume that (a) holds. Then
Since the right-hand side has zero quadratic variation (Sondermann 2006, Proposition 2.2.2), the same must be true of the left-hand side. By (Schied 2014, Proposition 12), the quadratic variation of the left-hand side is given by
$${\int_{r}^{t}}\left(\boldsymbol{\xi}^{\mathbf{S}}(s)-\nabla_{\mathbf{x}} v(s,\mathbf{S}(s))\right)^{\top} a(s,\mathbf{S}(s))\left(\boldsymbol{\xi}^{\mathbf{S}}(s)-\nabla_{\mathbf{x}} v(s,\mathbf{S}(s))\right){\,\mathrm{d}} s $$
in case of . Taking the derivative with respect to t gives
$$\left(\boldsymbol{\xi}^{\mathbf{S}}(t)-\nabla_{\mathbf{x}} v(t,\mathbf{S}(t))\right)^{\top} a(t,\mathbf{S}(t))\left(\boldsymbol{\xi}^{\mathbf{S}}(t)-\nabla_{\mathbf{x}} v(t,\mathbf{S}(t))\right)=0 $$
for all t, and the fact that the matrix a(t,S(t)) is positive definite yields that (2.9) must hold. For , the matrix a(s,S(s)) needs to be replaced by the matrix with components \(a_{ij}(s,\mathbf{S}(s))S_{i}(s)S_{j}(s)\), and we arrive at (2.9) by the same arguments as in the case of . Plugging (2.9) into (5.2) and using (a) implies that the rightmost integral in (5.2) vanishes identically, which establishes (b) by again taking the derivative with respect to t. □
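The step from the vanishing quadratic form to (2.9) uses the elementary fact that a positive definite matrix A satisfies \(w^{\top }Aw\ge \lambda _{\min }(A)\,\|w\|^{2}>0\) for w≠0, so the quadratic form vanishes only at w=0. A minimal numerical sketch (the matrix A below is a hypothetical 2×2 example, not from the text):

```python
# For positive definite A, w^T A w >= lambda_min * |w|^2, so w^T A w = 0 forces w = 0.
import math

def quad_form(A, w):
    """Compute w^T A w."""
    d = len(w)
    return sum(w[i] * A[i][j] * w[j] for i in range(d) for j in range(d))

A = [[2.0, 0.5], [0.5, 1.0]]  # symmetric, positive definite (trace and det positive)
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam_min = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0  # smaller eigenvalue (2x2 case)
assert lam_min > 0.0

for w in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]):
    assert quad_form(A, w) >= lam_min * (w[0] ** 2 + w[1] ** 2) - 1e-12
assert quad_form(A, [0.0, 0.0]) == 0.0
```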
Now we prepare for the proof of Theorem 2.4 (b). The following lemma can be proved by means of a straightforward computation.
Lemma 5.1.
For \(\mathbf {x} = \left (x_{1},\dots, x_{d}\right)^{\top }\in {\mathbb {R}}^{d}\) let \(\exp (\mathbf {x}):=\left (e^{x_{1}},\dots, e^{x_{d}}\right)^{\top }\in {\mathbb {R}}^{d}_{+}\). Then v(t,x) solves (TVP^{+}) if and only if \(\widetilde {v}(t,\mathbf {x}):=v(t,\exp (\mathbf {x}))\) solves
where \(\widetilde {f}(\mathbf {x})=f(\exp (\mathbf {x}))\) and
for \(\widetilde {a}_{ij}(t,\mathbf {x}):=a_{ij}(t,\exp (\mathbf {x}))\) and \(\widetilde {b}_{i}(t,\mathbf {x}):=-\frac {1}{2}a_{ii}(t,\exp (\mathbf {x}))\).
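For concreteness, here is a sketch of the computation behind Lemma 5.1, under the assumption (consistent with the replacement of coefficients described in the proof of Proposition 2.3) that the second-order coefficients in (TVP^{+}) are \(a_{ij}(t,\mathbf{y})y_{i}y_{j}\); writing \(\mathbf{y}=\exp(\mathbf{x})\):

```latex
\begin{align*}
\partial_{x_i}\widetilde v(t,\mathbf x) &= y_i\,\partial_{y_i} v(t,\mathbf y),\\
\partial_{x_i x_j}\widetilde v(t,\mathbf x) &= y_i y_j\,\partial_{y_i y_j} v(t,\mathbf y)
  + \delta_{ij}\,y_i\,\partial_{y_i} v(t,\mathbf y),
\end{align*}
so that
\begin{align*}
\frac12\sum_{i,j} a_{ij}(t,\mathbf y)\,y_i y_j\,\partial_{y_i y_j} v
 = \frac12\sum_{i,j} \widetilde a_{ij}(t,\mathbf x)\,\partial_{x_i x_j}\widetilde v
   - \frac12\sum_{i} \widetilde a_{ii}(t,\mathbf x)\,\partial_{x_i}\widetilde v ,
\end{align*}
```

which produces the first-order drift coefficient \(-\frac12\widetilde a_{ii}\) of the log-price transformation.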
Next, the terminal-value problem \({(\widetilde {\text {TVP}})}\) will once again be transformed into another auxiliary terminal-value problem. To this end, we need another transformation lemma, whose proof is also left to the reader.
Lemma 5.2.
For p>0 let \(g(\mathbf {x}):=1+\sum _{i=1}^{d}e^{p x_{i}}\). Then \(\widetilde {v}(t,\mathbf {x})\) solves \((\widetilde {\text {TVP}})\) if and only if \(\widehat v(t,\mathbf {x}):=g(\mathbf {x})^{-1}\widetilde {v}(t,\mathbf {x})\) solves
where \(\widehat f(\mathbf {x})=\widetilde {f}(\mathbf {x})/g(\mathbf {x})\) and
for
$$\begin{array}{*{20}l}\widehat b_{i}(t,\mathbf{x})&=\widetilde{b}_{i}(t,\mathbf{x})+pg(\mathbf{x})^{-1}\sum\limits_{j=1}^{d}e^{px_{j}}\widetilde{a}_{ij}(t,\mathbf{x}),\\ \widehat c(t,\mathbf{x})&=\frac{p(p-1)}{2 g(\mathbf{x})}\sum\limits_{i=1}^{d} \widetilde{a}_{ii}(t,\mathbf{x})e^{px_{i}}. \end{array} $$
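The coefficients \(\widehat b_{i}\) and \(\widehat c\) come from the product rule applied to \(\widetilde v = g\,\widehat v\); a sketch of that computation (our reconstruction, using the drift \(\widetilde b_i=-\frac12\widetilde a_{ii}\) from the log-price transformation):

```latex
\begin{align*}
\partial_{x_i}\widetilde v &= g\,\partial_{x_i}\widehat v + p\,e^{p x_i}\,\widehat v,\\
\partial_{x_i x_j}\widetilde v &= g\,\partial_{x_i x_j}\widehat v
  + p\,e^{p x_i}\,\partial_{x_j}\widehat v + p\,e^{p x_j}\,\partial_{x_i}\widehat v
  + \delta_{ij}\,p^{2} e^{p x_i}\,\widehat v .
\end{align*}
```

Dividing the equation for \(\widetilde v\) by \(g\) and collecting first-order terms gives \(\widehat b_i=\widetilde b_i+p\,g^{-1}\sum_j e^{p x_j}\widetilde a_{ij}\), while the zero-order terms combine to \(\widehat c=\frac1g\big(\frac{p^{2}}{2}\sum_i \widetilde a_{ii}e^{p x_i}-\frac{p}{2}\sum_i \widetilde a_{ii}e^{p x_i}\big)=\frac{p(p-1)}{2g}\sum_i \widetilde a_{ii}\,e^{p x_i}\).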
Proof of Theorem 2.4.
We will show that \({(\widetilde {\text {TVP}})}\) admits a solution \(\widetilde {v}\) if \(|\widetilde {f}(\mathbf {x})|\le c(1+\sum _{i=1}^{d}e^{p x_{i}})\) for some p>0 and that \(\widetilde {v}\) is unique in the class of functions that satisfy a similar estimate uniformly in t. To this end, note that the coefficients of \({(\widehat {\text {TVP}})}\) satisfy the conditions of (Janson and Tysk 2004, Theorem A.14), i.e., \(\widehat a(t,\mathbf {x})=\widetilde {a}(t,\mathbf {x})\) is positive definite, there are constants \(c_{1}\), \(c_{2}\), \(c_{3}\) such that for all t, x, and i,j we have \(|\widetilde {a}_{ij}(t,\mathbf {x})|\le c_{1}(1+\|\mathbf {x}\|^{2})\), \(|\widehat b_{i}(t,\mathbf {x})|\le c_{2}(1+\|\mathbf {x}\|)\), \(\widehat c(t,\mathbf {x})\le c_{3}\), and \(\widetilde {a}_{ij}\), \(\widehat b_{i}\), and \(\widehat c\) are locally Hölder continuous in \([0,T)\times {\mathbb {R}}^{d}\). It therefore follows that \({(\widehat {\text {TVP}})}\) admits a unique bounded solution \(\widehat v\) whenever \(\widehat f\) is bounded and continuous. But then \(\widetilde {v}(t,\mathbf {x}):=g(\mathbf {x})\widehat v(t,\mathbf {x})\) solves \({(\widetilde {\text {TVP}})}\) with terminal condition \(\widetilde {f}(\mathbf {x}):=g(\mathbf {x})\widehat f(\mathbf {x})\). Hence, \({(\widetilde {\text {TVP}})}\) admits a solution whenever \(|\widetilde {f}(\mathbf {x})|\le c\left (1+\sum _{i=1}^{d}e^{p x_{i}}\right)\) for some p>0. Lemma 5.1 now establishes the existence of solutions to (TVP^{+}) if the terminal condition is continuous and has at most polynomial growth. □
Remark 5.3.
It follows from the preceding argument that, if \(|f(\mathbf {x})|\le c(1+\|\mathbf {x}\|^{p})\), then the corresponding solution v of (TVP^{+}) satisfies \(|v(t,\mathbf {x})|\le \widetilde {c}(1+\|\mathbf {x}\|^{p})\) for a certain constant \(\widetilde {c}\) and with the same exponent p.
Proof of Theorem 2.5.
We first prove the result in the case of (TVP). The function \(v_{k}\) will be well-defined if \(f_{k+1}\) is continuous and has at most polynomial growth. It is easy to see that these two properties will follow if \(v_{k+1}\) satisfies the following three conditions:

(i) \((\mathbf{x}_{0},\dots,\mathbf{x}_{k+1},\mathbf{x})\mapsto v_{k+1}(t,\mathbf{x}_{0},\dots,\mathbf{x}_{k+1},\mathbf{x})\) has at most polynomial growth;

(ii) \(\mathbf{x}\mapsto v_{k+1}(t_{k+1},\mathbf{x}_{0},\dots,\mathbf{x}_{k+1},\mathbf{x})\) is continuous for all \(\mathbf{x}_{0},\dots,\mathbf{x}_{k+1}\);

(iii) \((\mathbf{x}_{0},\dots,\mathbf{x}_{k+1})\mapsto v_{k+1}(t,\mathbf{x}_{0},\dots,\mathbf{x}_{k+1},\mathbf{x})\) is locally Lipschitz continuous, uniformly in t and locally uniformly in x, with a Lipschitz constant that grows at most polynomially. More precisely, there exist p≥0 and L≥0 such that, for \(\|\mathbf{x}\|,\|\mathbf{x}_{i}\|,\|\mathbf{y}_{i}\|\le m\) and \(t\in [t_{k+1},t_{k+2}]\),

$$\left| v_{k+1}(t,\mathbf{x}_{0},\dots, \mathbf{x}_{k+1},\mathbf{x})-v_{k+1}(t,\mathbf{y}_{0},\dots, \mathbf{y}_{k+1},\mathbf{x})\right|\le (1+m^{p})L\sum\limits_{i=0}^{k+1}\|\mathbf{x}_{i}-\mathbf{y}_{i}\|. $$
We will now show that \(v_{k}\) inherits properties (i), (ii), and (iii) from \(v_{k+1}\). Since these properties are obviously satisfied by \(v_{N}\), the assertion will then follow by backward induction.
To establish (i), let p,c>0 be such that \(\widetilde {f}_{k+1}(\mathbf {x}):=c\left (\|\mathbf {x}_{0}\|^{p}+\cdots +\|\mathbf {x}_{k}\|^{p}+\|\mathbf {x}\|^{p}+\|\mathbf {x}\|^{p}\right)\) satisfies \(-\widetilde {f}_{k+1}\le f_{k+1}\le \widetilde {f}_{k+1}\). Then let \(\widetilde {v}_{k}(t,\mathbf {x}_{0},\dots, \mathbf {x}_{k},\mathbf {x})\) be the solution of (TVP) with terminal condition \(\widetilde {f}_{k+1}\) at time \(t_{k+1}\). Theorem 2.4, (Janson and Tysk 2004, Theorem A.7), and the linearity of solutions imply that \((\mathbf {x}_{0},\dots, \mathbf {x}_{k},\mathbf {x})\mapsto \widetilde {v}_{k}(t,\mathbf {x}_{0},\dots, \mathbf {x}_{k},\mathbf {x})\) has at most polynomial growth, while the maximum principle in the form of (Janson and Tysk 2004, Theorem A.5) implies that \(-\widetilde {v}_{k}\le v_{k}\le \widetilde {v}_{k}\). This establishes (i).
Condition (ii) is satisfied automatically, as solutions to (TVP) are continuous by construction.
To obtain (iii), let p and L be as in (iii) and \(\mathbf{x}_{i},\mathbf{y}_{i}\) be given. We take m so that \(m\ge \|\mathbf{x}_{i}\|\vee \|\mathbf{y}_{i}\|\) for i=0,…,k and let \(\delta :=L\sum _{i=0}^{k}\|\mathbf {x}_{i}-\mathbf {y}_{i}\|\). Then

$$\begin{array}{*{20}l} -\left(1+m^{p}+\|\mathbf{x}\|^{p}\right)\delta&\le v_{k+1}\left(t_{k+1},\mathbf{x}_{0},\dots, \mathbf{x}_{k},\mathbf{x},\mathbf{x}\right)-v_{k+1}\left(t_{k+1},\mathbf{y}_{0},\dots, \mathbf{y}_{k},\mathbf{x},\mathbf{x}\right)\\ &\le \left(1+m^{p}+\|\mathbf{x}\|^{p}\right)\delta. \end{array} $$
Now we define u(t,x) as the solution of (TVP) with terminal condition \(u(t_{k+1},\mathbf{x})=\|\mathbf{x}\|^{p}\) at time \(t_{k+1}\). Theorem 2.4 implies that u is well defined, and the maximum principle and (Janson and Tysk 2004, Theorem A.7) imply that \(0\le u(t,\mathbf{x})\le c\|\mathbf{x}\|^{p}\) for some constant c≥0. Another application of the maximum principle yields that

$$-(1+m^{p}+u(t,\mathbf{x}))\delta\le v_{k}(t,\mathbf{x}_{0},\dots, \mathbf{x}_{k},\mathbf{x})-v_{k}(t,\mathbf{y}_{0},\dots, \mathbf{y}_{k},\mathbf{x})\le (1+m^{p}+u(t,\mathbf{x}))\delta $$

for all t and x, which establishes that (iii) holds for \(v_{k}\) with the same p and the new Lipschitz constant (1+c)L.
Now we turn to the proof in case of (TVP^{+}). It is clear from our proof of Theorem 2.4 (b) that (TVP^{+}) inherits the maximum principle from \((\widehat {\text {TVP}})\). Moreover, Remark 5.3 shows that \(v_{k}\) inherits property (i) from \(v_{k+1}\). So Remark 5.3 can replace (Janson and Tysk 2004, Theorem A.7) in the preceding argument. Therefore, the proof for (TVP^{+}) can be carried out in the same way as for (TVP). □
Proof of Theorem 3.3.
We first prove the result in case of . Let us suppose by way of contradiction that there exists an admissible arbitrage opportunity in , and let \(0=t_{0}<t_{1}<\cdots <t_{N}=t_{N+1}=T\) and \(v_{k}\) denote the corresponding time points and functions as in Definition 3.1.
Under our assumptions, the martingale problem for the operator is well-posed (Stroock and Varadhan 1969). Let \({\mathbb {P}}_{t,\mathbf {x}}\) denote the corresponding Borel probability measures on \(C([\!t,T],{\mathbb {R}}^{d})\) under which the coordinate process \((\mathbf{X}(u))_{t\le u\le T}\) is a diffusion process with generator and satisfies X(t)=x \({\mathbb {P}}_{t,\mathbf {x}}\)-a.s. In particular, \(X_{i}\) is a continuous local \({\mathbb {P}}_{t,\mathbf {x}}\)-martingale for i=1,…,d. Moreover, the support theorem (Stroock and Varadhan 1972, Theorem 3.1) states that the law of \((\mathbf{X}(u))_{t\le u\le T}\) under \({\mathbb {P}}_{t,\mathbf {x}}\) has full support on \(C_{\mathbf {x}}([\!t,T],{\mathbb {R}}^{d}):=\{\boldsymbol {\omega } \in C([\!t,T],{\mathbb {R}}^{d})\mid \boldsymbol {\omega }(t)=\mathbf {x}\}\).
In a first step, we now use these facts to show that all functions \(v_{k}\) are nonnegative. To this end, we note first that the support theorem implies that the law of \((\mathbf{X}(t_{1}),\dots,\mathbf{X}(t_{N}))\) under \({\mathbb {P}}_{0,\mathbf {x}}\) has full support on \(({\mathbb {R}}^{d})^{N}\). Since \({\mathbb {P}}_{0,\mathbf {x}}\)-a.e. trajectory in \(C_{\mathbf {x}}([\!0,T],{\mathbb {R}}^{d})\) belongs to , it follows that the set is dense in \(({\mathbb {R}}^{d})^{N}\). Condition (a) of Definition 3.2 and the continuity of \(v_{N}\) thus imply that \(v_{N}(T,\mathbf{x}_{0},\dots,\mathbf{x}_{N+1})\ge 0\) for all \(\mathbf{x}_{0},\dots,\mathbf{x}_{N+1}\). In the same way, we get from the admissibility of the arbitrage opportunity that \(v_{k}(t,\mathbf{x}_{0},\dots,\mathbf{x}_{k},\mathbf{x})\ge -c\) for all k, \(t\in [t_{k},t_{k+1}]\), and \(\mathbf {x}_{0},\dots,\mathbf {x}_{k},\mathbf {x}\in {\mathbb {R}}^{d}\).
For the moment, we fix \(\mathbf{x}_{0},\dots,\mathbf{x}_{N-1}\) and consider the function \(u(t,\mathbf{x}):=v_{N-1}(t,\mathbf{x}_{0},\dots,\mathbf{x}_{N-1},\mathbf{x})\). Let \(Q\subset {\mathbb {R}}^{d}\) be a bounded domain whose closure is contained in \({\mathbb {R}}^{d}\) and let \(\tau :=\inf \{s\mid \mathbf{X}(s)\notin Q\}\) be the first exit time from Q. By Itô's formula and the fact that u solves (TVP) we have \({\mathbb {P}}_{t,\mathbf {x}}\)-a.s. for \(t\in [t_{N-1},T)\) that
$$ u(T\wedge\tau,\mathbf{X}({T\wedge\tau}))=u(t,\mathbf{x})+\int_{t}^{T\wedge\tau}\nabla_{\mathbf{x}}u(s,\mathbf{X}(s))\,\mathrm{d} \mathbf{X}(s). $$
(5.5)
Since \(\nabla_{\mathbf{x}} u\) and the coefficients of are bounded in the closure of Q, the stochastic integral on the right-hand side is a true martingale. Therefore,
$$ u(t,\mathbf{x})={\mathbb{E}}_{t,\mathbf{x}}[\,u(T\wedge\tau,\mathbf{X}({T\wedge\tau}))\,]. $$
(5.6)
Now let us take an increasing sequence \(Q_{1}\subset Q_{2}\subset \cdots\) of bounded domains exhausting \({\mathbb {R}}^{d}\) and whose closures are contained in \({\mathbb {R}}^{d}\). By \(\tau_{n}\) we denote the exit time from \(Q_{n}\). Then, an application of (5.6) for each \(\tau_{n}\), Fatou's lemma in conjunction with the fact that u≥−c, and the already established nonnegativity of u(T,·) yield
$$ u(t,\mathbf{x})={\lim}_{n\uparrow\infty}{\mathbb{E}}_{t,\mathbf{x}}[\,u(T\wedge\tau_{n},\mathbf{X}({T\wedge\tau_{n}}))\,]\ge {\mathbb{E}}_{t,\mathbf{x}}[\,u(T,\mathbf{X}({T}))\,]\ge0. $$
(5.7)
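The Fatou step here can be spelled out as follows (a sketch; the point is that Fatou's lemma is applied to the nonnegative random variables \(u(T\wedge \tau _{n},\mathbf {X}(T\wedge \tau _{n}))+c\)):

```latex
\begin{align*}
\lim_{n}{\mathbb E}_{t,\mathbf x}\big[u(T\wedge\tau_n,\mathbf X(T\wedge\tau_n))\big]
&\ge {\mathbb E}_{t,\mathbf x}\Big[\liminf_{n}\, u(T\wedge\tau_n,\mathbf X(T\wedge\tau_n))\Big]\\
&= {\mathbb E}_{t,\mathbf x}\big[u(T,\mathbf X(T))\big],
\end{align*}
```

where the additive constant c cancels on both sides, and the final equality uses the continuity of u and of the sample paths together with \(T\wedge \tau _{n}\nearrow T\).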
This establishes the nonnegativity of \(v_{N-1}\) and in particular of the terminal condition \(f_{N-1}\) for \(v_{N-2}\). We may therefore repeat the preceding argument for \(v_{N-2}\) and so forth. Hence, \(v_{k}\ge 0\) for all k.
Now let and \(T_{0}\) be such that \(V^{\mathbf {S}}_{\boldsymbol {\xi }}(0)\le 0\) and \(V^{\mathbf {S}}_{\boldsymbol {\xi }}(T_{0})>0\), which exists according to the assumption made at the beginning of this proof. If k is such that \(t_{k}<T_{0}\le t_{k+1}\) and \(\mathbf{x}_{0}:=\mathbf{S}(0)\), then \(v_{0}(0,\mathbf{x}_{0})=0\) and \(v_{k}(T_{0},\mathbf{S}(t_{0}),\dots,\mathbf{S}(t_{k}),\mathbf{S}(T_{0}))>0\). By continuity, we actually have \(v_{k}(T_{0},\cdot)>0\) in an open neighborhood \(U\subset C_{\mathbf {x}}([0,T],{\mathbb {R}}^{d})\) of the path S.
Since \({\mathbb {P}}_{0,\mathbf {x}_{0}}\)-a.e. sample path belongs to , Itô's formula gives that \({\mathbb {P}}_{0,\mathbf {x}_{0}}\)-a.s.,
$$v_{k}(T_{0},\mathbf{X}(t_{0}),\dots,\mathbf{X}(t_{k}), \mathbf{X}(T_{0}))=v_{0}(0,\mathbf{x}_{0})+\int_{0}^{T_{0}} \boldsymbol{\xi}^{\mathbf{X}}(t){\,\mathrm{d}}\mathbf{X}(t). $$
Localization as in (5.7) and the fact that \(v_{\ell}\ge 0\) for all ℓ imply that
$$0=v_{0}(0,\mathbf{x}_{0})\ge{\mathbb{E}}_{0,\mathbf{x}_{0}} \left[\,v_{k}(T_{0},\mathbf{X}(t_{0}),\dots,\mathbf{X}(t_{k}),\mathbf{X}(T_{0}))\,\right]\ge0. $$
Applying once again the support theorem now yields a contradiction to the fact that \(v_{k}(T_{0},\cdot)>0\) in the open set U. This completes the proof for .
Now we turn to the proof for . In this case, the martingale problem for the operator defined in (5.3) is well-posed since the coefficients of are again bounded and continuous (Stroock and Varadhan 1969). These properties of the coefficients also guarantee that the support theorem holds (Stroock and Varadhan 1972, Theorem 3.1). If \((\widetilde {\mathbb {P}}_{s,\mathbf {x}},\widetilde {\mathbf {X}})\) is a corresponding diffusion process, we can consider the laws of \(\mathbf {X}(t):=\exp (\widetilde {\mathbf {X}}(t))\) and, by Lemma 5.1, obtain a solution to the martingale problem for , which satisfies the support theorem with state space \({\mathbb {R}}^{d}_{+}\). We can now simply repeat the arguments from the proof for to also get the result for . □
Proofs of the results from Section “Extension to functionally dependent strategies”
Proof of Proposition 4.2.
The proof is analogous to that of Proposition 2.3. For , all that is needed in addition to the arguments of Proposition 2.3 is the fact that the quadratic variation of
$${\int_{r}^{t}}\left(\boldsymbol{\xi}^{\mathbf{S}}(s)-\nabla_{\mathbf{x}} F_{s}(\mathbf{S}_{[r,u],s}) \right)\,\mathrm{d} \mathbf{S}(s) $$
is given by
$${\int_{r}^{t}}\left(\boldsymbol{\xi}^{\mathbf{S}}(s)-\nabla_{\mathbf{x}} F_{s}(\mathbf{S}_{[r,u],s})\right)^{\top} a(s, \mathbf{S}(s))\left(\boldsymbol{\xi}^{\mathbf{S}}(s)-\nabla_{\mathbf{x}} F_{s}(\mathbf{S}_{[r,u],s})\right) {\,\mathrm{d}} s; $$
see (Schied and Voloshchenko 2016, Proposition 2.1). For , the matrix a(s,S(s)) has to be replaced by the matrix with components \(a_{ij}(s,\mathbf{S}(s))S_{i}(s)S_{j}(s)\). □
To prove Theorem 4.4 and Theorem 4.7 we need the following lemma, which is a straightforward extension of Lemma 5.1 to the functional setting. Its proof is therefore left to the reader. For X in the Skorohod space \(D([0,T],{\mathbb {R}}^{d})\) we set \(\left (\exp (\mathbf {X})\right)_{t} =\exp (\mathbf {X}_{t}):=\left (\exp (\mathbf {X}(u))\right)_{0\le u\le t}\in D([\!0,t],{\mathbb {R}}^{d}_{+})\).
Lemma 5.4.
The functional \(F_{t}(\mathbf{X}_{t})\) solves (FTVP^{+}) if and only if \(\widetilde {F}_{t}(\mathbf {X}_{t}):=F_{t}(\exp (\mathbf {X}_{t}))\) solves
where \(\widetilde {H}(\mathbf {X}_{T})=H(\exp (\mathbf {X}_{T}))\) and
where, as in (Cont and Fournié 2010, Eq. (15)), \(\partial_{i}\) are the partial vertical derivatives, \(\widetilde {a}_{ij}(t,\mathbf {X}(t)):=a_{ij}(t,\exp (\mathbf {X}(t)))\), and \(\widetilde {b}_{i}(t,\mathbf {X}(t)):=-\frac {1}{2} a_{ii}(t,\exp (\mathbf {X}(t)))\).
Note that the chain rule for functional derivatives (see (Dupire 2009, p.6)) implies the equivalence of the PDEs in \((\widetilde {\text {FTVP}})\) and (FTVP).
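On the vertical derivatives, this chain rule acts exactly as the classical chain rule in Lemma 5.1; schematically (a sketch in our notation, not a statement from the cited references):

```latex
\begin{align*}
\partial_i \widetilde F_t(\mathbf X_t) &= e^{X_i(t)}\,(\partial_i F_t)(\exp(\mathbf X_t)),\\
\partial_{ij} \widetilde F_t(\mathbf X_t) &= e^{X_i(t)}e^{X_j(t)}\,(\partial_{ij} F_t)(\exp(\mathbf X_t))
  + \delta_{ij}\,e^{X_i(t)}\,(\partial_i F_t)(\exp(\mathbf X_t)),
\end{align*}
```

while the horizontal derivative transforms without extra terms, since extending a stopped path in time commutes with the pointwise application of \(\exp\).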
Regarding the regularity conditions in Definition 4.1, we note that \(\widetilde {F}\) will be regular enough if and only if F is regular enough (because exp(X(t)) is a sufficiently regular functional).
Proof of Theorem 4.4.
Part (a) directly follows from (Ji and Yang 2013, Theorem 20).
To prove part (b), note that the coefficients of satisfy the conditions of (Ji and Yang 2013, Theorem 20), i.e., \(\widetilde {a}(t,\mathbf {X}(t))\) is positive definite and can be written as \(\widetilde {\sigma }(t,\mathbf {X}(t))\widetilde {\sigma }(t,\mathbf {X}(t))^{\top }\) with a Lipschitz continuous volatility coefficient \(\widetilde {\sigma },\) and \(\widetilde {b}_{i}\) is also Lipschitz. It therefore follows that \({(\widetilde {\text {FTVP}})}\) admits a unique solution \(\widetilde {F}\in \mathbb {C}^{1,2}([\!0,T))\) if \(\widetilde {H}\in C^{2}_{l,lip}(D([\!0,T],{\mathbb {R}}^{d})).\) Lemma 5.4 now establishes the existence of solutions to (FTVP^{+}) if the terminal condition is of class \(C^{2}_{l,lip}(D([\!0,T],{\mathbb {R}}_{+}^{d}))\). □
Proof of Theorem 4.7.
The proof is similar to the one of Theorem 3.3. We first consider the case of . Let X and \({\mathbb {P}}_{t,\mathbf {x}}\) (0≤t≤T, \(\mathbf {x}\in {\mathbb {R}}^{d}\)) be as in the proof of Theorem 3.3. For a path \(\mathbf {Y}\in C([\!0,T],{\mathbb {R}}^{d})\), we define \(\overline {\mathbb {P}}_{t,\mathbf {Y}_{t}}\) as that probability measure on \(C([\!0,T],{\mathbb {R}}^{d})\) under which the coordinate process X satisfies X(s)=Y(s) for 0≤s≤t \(\overline {\mathbb {P}}_{t,\mathbf {Y}_{t}}\)-a.s. and under which the law of \((\mathbf{X}(u))_{t\le u\le T}\) is equal to \({\mathbb {P}}_{t,\mathbf {Y}(t)}\). The support theorem (Stroock and Varadhan 1972, Theorem 3.1) then states that the law of \((\mathbf{X}(u))_{0\le u\le T}\) under \(\overline {\mathbb {P}}_{t,\mathbf {Y}_{t}}\) has full support on \(C_{\mathbf {Y}_{t}}([\!0,T],{\mathbb {R}}^{d}):=\{\boldsymbol {\omega } \in C([\!0,T],{\mathbb {R}}^{d})\mid \boldsymbol {\omega }_{t}=\mathbf {Y}_{t}\}\).
Now suppose by way of contradiction that there exists an admissible arbitrage opportunity arising from a non-anticipative functional F as in Definition 4.6. In a first step, we show that F is nonnegative on \([0,T]\times C([0,T],{\mathbb {R}}^{d})\). As in the proof of Theorem 3.3, the support theorem implies that is dense in \(C_{\mathbf {x}}([\!0,T],{\mathbb {R}}^{d})\). Condition (a) of Definition 3.2 and the left-continuity of F in the sense of (Cont and Fournié 2010, Definition 3) thus imply that \(F_{T}(\mathbf{Y})\ge 0\) for all \(\mathbf {Y}\in C([\!0,T],{\mathbb {R}}^{d})\). In the same way, we get from the admissibility of the arbitrage opportunity that \(F_{t}(\mathbf{Y}_{t})\ge -c\) for all \(t\in [0,T]\) and \(\mathbf {Y}\in C([\!0,T],{\mathbb {R}}^{d})\). To show that actually \(F_{t}(\mathbf{Y}_{t})\ge 0\), let \(Q\subset {\mathbb {R}}^{d}\) be a bounded domain whose closure is contained in \({\mathbb {R}}^{d}\) and let \(\tau :=\inf \{s\mid \mathbf{X}(s)\notin Q\}\) be the first exit time from Q. By the functional change of variables formula, in conjunction with the fact that F solves (FTVP) (on continuous paths), we obtain \(\overline {\mathbb {P}}_{t,\mathbf {Y}_{t}}\)-a.s. for t∈[0,T) that
$$ F_{T\wedge\tau}(\mathbf{X}_{T\wedge\tau})=F_{t}(\mathbf{Y}_{t})+\int_{t}^{T\wedge\tau}\nabla_{\mathbf{x}}F_{s}(\mathbf{X}_{s})\,\mathrm{d} \mathbf{X}(s). $$
(5.9)
By (Schied and Voloshchenko 2016, Proposition 2.1), we have
$$\left \langle\int_{t}^{\cdot\wedge\tau}\nabla_{\mathbf{x}}F_{s}(\mathbf{X}_{s})\,\mathrm{d} \mathbf{X}(s) \right\rangle(T)= \int_{t}^{T\wedge\tau}\nabla_{\mathbf{x}}F_{s}(\mathbf{X}_{s})^{\top} a(s, \mathbf{X}(s))\nabla_{\mathbf{x}}F_{s}(\mathbf{X}_{s}) {\,\mathrm{d}} s. $$
Since \(\nabla_{\mathbf{x}} F\) and the coefficients of are bounded in the closure of Q, the stochastic integral on the right-hand side of (5.9) is a true martingale. Therefore,
$$ F_{t}(\mathbf{Y}_{t})=\overline{\mathbb{E}}_{t,\mathbf{Y}_{t}}[\,F_{T\wedge\tau}(\mathbf{X}_{T\wedge\tau})\,]. $$
(5.10)
Now let us take an increasing sequence \(Q_{1}\subset Q_{2}\subset \cdots\) of bounded domains exhausting \({\mathbb {R}}^{d}\) and whose closures are contained in \({\mathbb {R}}^{d}\). By \(\tau_{n}\) we denote the exit time from \(Q_{n}\). Then, an application of (5.10) for each \(\tau_{n}\), Fatou's lemma in conjunction with the fact that F≥−c, and the already established nonnegativity of \(F_{T}(\cdot)\) yield
$$ F_{t}(\mathbf{Y}_{t})={\lim}_{n\uparrow\infty}\overline{\mathbb{E}}_{t,\mathbf{Y}_{t}}\left[\,F_{T\wedge\tau_{n}}(\mathbf{X}_{T\wedge\tau_{n}})\,\right]\ge \overline{\mathbb{E}}_{t,\mathbf{Y}_{t}}[\,F_{T}(\mathbf{X}_{T})\,]\ge0. $$
(5.11)
This establishes the nonnegativity of F on \([\!0,T]\times C([\!0,T],{\mathbb {R}}^{d})\).
Now let and \(T_{0}\) be such that \(V^{\mathbf {S}}_{\boldsymbol {\xi }}(0)\le 0\) and \(V^{\mathbf {S}}_{\boldsymbol {\xi }}(T_{0})>0\). Since \(V^{\mathbf {S}}_{\boldsymbol {\xi }}(t)=F_{t}(\mathbf {S}_{t})\) by Proposition 4.2, we have \(F_{0}(\mathbf{S}_{0})=0\) and \(F_{T_{0}}(\mathbf {S}_{T_{0}})>0\). By left-continuity of F, we actually have \(F_{T_{0}}(\cdot)>0\) in an open neighborhood \(U\subset C_{\mathbf {S}(0)}([0,T],{\mathbb {R}}^{d})\) of the path S.
Since \({\mathbb {P}}_{0,\mathbf {S}(0)}\)-a.e. sample path belongs to , the functional change of variables formula gives that \({\mathbb {P}}_{0,\mathbf {S}(0)}\)-a.s.,
$$F_{T_{0}}(\mathbf{X}_{T_{0}})=F_{0}(\mathbf{S}_{0})+\int_{0}^{T_{0}}\boldsymbol{\xi}^{\mathbf{X}}(t)\,\mathrm{d}\mathbf{X}(t). $$
Localization as in (5.11) and the fact that F≥0 imply that
$$0=F_{0}(\mathbf{S}_{0})\ge{\mathbb{E}}_{0,\mathbf{S}(0)}\left[\,F_{T_{0}}(\mathbf{X}_{T_{0}})\,\right]\ge0. $$
Applying once again the support theorem now yields a contradiction to the fact that \(F_{T_{0}}(\cdot)>0\) in the open set U. This completes the proof for .
The proof for is completed by an exponential transformation, as in the proof of Theorem 3.3. □