
Piecewise constant martingales and lazy clocks

Abstract

Conditional expectations (like, e.g., discounted prices in financial applications) are martingales under an appropriate filtration and probability measure. When the information flow arrives in a punctual way, a reasonable assumption is to suppose the latter to have piecewise constant sample paths between the random times of information updates. Providing a way to find and construct piecewise constant martingales evolving in a connected subset of \(\mathbb {R}\) is the purpose of this paper. After a brief review of possible standard techniques, we propose a construction scheme based on the sampling of latent martingales \(\tilde {Z}\) with lazy clocks θ. These θ are time-change processes staying in arrears of the true time but that can synchronize at random times to the real (calendar) clock. This specific choice makes the resulting time-changed process \(Z_{t}=\tilde {Z}_{\theta _{t}}\) a martingale (called a lazy martingale) without any assumption on \(\tilde {Z}\), and in most cases, the lazy clock θ is adapted to the filtration of the lazy martingale Z, so that sample paths of Z on [0,T] only require sample paths of \(\left (\theta, \tilde {Z}\right)\) up to T. This would not be the case if the stochastic clock θ could be ahead of the real clock, as is typically the case using standard time-change processes. The proposed approach yields an easy way to construct analytically tractable lazy martingales evolving on (intervals of) \(\mathbb {R}\).

1 Introduction

Martingales play a central role in probability theory, but also in many applications. This is specifically true in mathematical finance, where they are used to model Radon—Nikodym derivative processes or discounted prices in arbitrage-free market models (Jeanblanc et al. 2007). More generally, it is very common to deal with conditional expectation processes \(Z = (Z_{t})_{t \in [0,T]},\; Z_{t} := \mathbb {E}[Z_{T}|\mathcal {F}_{t}]\), where \(\mathbb {F}:=(\mathcal {F}_{t})_{t \in [0,T]}\) is a reference filtration and \(\mathbb {E}\) stands for the expectation operator associated with a given probability measure \(\mathbb {P}\). Many different modeling setups have been proposed to represent the dynamics of Z (e.g., random walk, Brownian motion, Geometric Brownian motion, jump diffusion, etc.) depending on some assumptions about its range, pathwise continuity, or continuous versus discrete-time setting. In many circumstances, however, information can be considered to arrive at random times, or in a partial (punctual) way.

An interesting application in that respect is the modeling of quoted recovery rates. The recovery rate r of a firm corresponds to the ratio of the debt that will be recovered after the firm’s default during an auction process. It is also a major factor driving the price of corporate bonds or other derivative instruments like credit default swaps or credit-linked notes. In many standard models (like those suggested by the International Swaps and Derivatives Association (ISDA)), the recovery rate process is assumed constant (see e.g., Markit (2004)). Many studies stressed the fact that r is not a constant: it cannot be observed prior to the firm’s default τ; r is an \(\mathcal {F}_{\tau }\)-measurable random variable in [0,1]. This simple observation can have serious consequences in terms of pricing and risk-management of credit-sensitive products, and explains the development of stochastic recovery models (Amraoui et al. 2012; Andersen and Sidenius 2004). A further development in credit risk modeling is to take into account the fact that recovery rates can be “dynamized” (Gaspar and Slinko 2008). Quoted recovery rates, for instance, can thus be modeled as a stochastic process R=(Rt)t≥0 that gives the “market’s view” of a firm’s recovery rate as seen from time t. Hence, \(R_{t} := \mathbb {E}[r| \mathcal {F}_{t}]\) can be seen as a martingale evolving in the unit interval. By correlating R with the creditworthiness of the firm, it becomes possible to account for a well-known fact in finance: recovery rate and default probability are statistically linked (Altman et al. 2003). However, observations for the process R are limited: updates in recovery rate quotes arrive in a scarce and random way. Therefore, in contrast with the common setup, it is more realistic to represent R as a martingale whose trajectories remain constant for long periods of time, but change only occasionally, upon arrival of related information (e.g., when a dealer updates its view to specialized data providers). More generally, such types of martingales could be used to model discounted price processes of financial instruments observed under partial (punctual) information, e.g., at some random times, but also to represent price processes of illiquid products. Indeed, without additional information, a reasonable approach may consist of assuming that discounted prices remain constant between arrivals of market quotes, and jump to the level given by the new quote when a new trade is done.

Whereas discrete-time and continuous martingales have been extensively studied in the literature, very little work has been done with respect to martingales having piecewise constant sample paths. In this paper, we propose a methodology to find and construct such types of martingales. The special case of step-martingales (which are martingales with piecewise constant sample paths, but restricted to a finite number of jumps in any finite interval) has been studied in Boel et al. (1975a, b), with emphasis on representation theorems and applications to communication and control problems. In Herdegen and Herrmann (2016), the authors investigate a single jump case, in which the first part of the path (before the unique jump) is supposed to be deterministic. We extend this research in several ways. First, we relax the (strong) step-martingale restriction and deal with the broader class of processes featuring possibly infinitely many jumps in a time interval. Second, our approach allows one to build martingales that evolve in a bounded interval, a problem that received little attention so far and whose relevance is stressed with the above recovery example, but could also be of interest for modeling stochastic probabilities or correlations. This is achieved by introducing a new class of time-change processes called lazy clocks. Finally, we provide and study numerous examples and propose some construction algorithms.

The paper is organized as follows. In Section 2 we formally introduce the concept of piecewise constant martingales Z and present several routes to construct these processes. We then introduce a different approach in Section 3, where (Zt)t≥0 is modeled as a time-changed process \(\left (\tilde {Z}_{\theta _{t}}\right)_{t \geq 0}\), where θ is a lazy clock. The latter are time-change processes built in such a way that the stochastic clock always stays in arrears of the real clock (θt≤t a.s.). This condition is motivated by computational considerations: it guarantees that sampling Z over a fixed time horizon [0,T] only requires the sampling of \(\left (\tilde {Z},\theta \right)\) over the same period. Finally, as our objective is to provide a workable methodology, we derive the analytical expression for the distributions and moments in some particular cases, and provide efficient sampling algorithms for the simulations of such martingales.

2 Piecewise constant martingales

In the literature, pure jump processes defined on a filtered probability space \((\Omega,\mathcal {F}, \mathbb {F}, \mathbb {P})\), where \(\mathbb {F}=(\mathcal {F}_{t})_{t\in [0,T]}\) and \(\mathcal {F}:=\mathcal {F}_{T}\), are often referred to as stochastic processes having no diffusion part. In this paper we are interested in a subclass of pure jump processes: piecewise constant (PWC) martingales defined as follows.

Definition 1

(Piecewise constant martingale) A piecewise constant \(\mathbb {F}\)-martingale Z is a càdlàg \(\mathbb {F}\)-martingale whose jumps \(\Delta Z_{s}=Z_{s}-Z_{s^{-}}\) are summable (i.e. \(\sum _{s\leq T} |\Delta Z_{s}|<+\infty \) a.s.) and such that for every t≥0 :

$$Z_{t} =Z_{0} + \sum_{s\leq t} \Delta Z_{s}\;. $$

In particular, the sample paths of Z(ω) for ω∈Ω belong to the class of piecewise constant functions of time.

Note that an immediate consequence of this definition is that a PWC martingale has finite variation. Such types of processes may be used to represent martingales observed under partial (punctual) information, e.g. at some (random) times. One possible field of application is mathematical finance, where discounted price processes are martingales under an equivalent measure. Without additional information, a reasonable approach may consist in assuming that discounted prices remain constant between arrivals of market quotes, and jump to the level given by the new quote when a new trade is done. More generally, this could represent conditional expectation processes (i.e. “best guess”) where information arrives in a discontinuous way.

Most of the “usual” martingales with no diffusion term fail to have piecewise constant sample paths. For example, Azéma’s first martingale M=(Mt)t≥0 defined as

$$ M_t:=\text{sign}(W_t)\sqrt{\frac{\pi}{2}}\sqrt{t-g_{t}^{0}(W)} \;, g_{t}^{0}(W):=\sup \{s\leq t, W_s=0\}\;, $$
(1)

where W is a Brownian motion, is essentially piecewise square-root. Interestingly, one can show that \(M_{t}=\mathbb {E}\left [W_{t}|\mathcal {F}_{g_{t}^{0}(W)+} \right ]\), so that M actually corresponds to the projection of W onto its slow filtration, see e.g. Dellacherie et al. (1992); Mansuy and Yor (2006) and Chapter IV, section 8 of Protter (2005) for a detailed analysis of this process. Similarly, the Geometric Poisson Process \(\mathrm {e}^{N_{t}\log (1+\sigma)-\lambda \sigma t}\) is a positive martingale with piecewise negative exponential sample paths (Shreve 2004, Ex 11.5.2).

However, finding such types of processes is not difficult. We provide below three different methods to construct some of them. Yet, not all are equally powerful in terms of tractability. The last method proves to be quite appealing in that it yields analytically tractable PWC martingales whose range can be any connected set.

2.1 An autoregressive construction scheme

We start by looking at a subset of PWC martingales, namely step-martingales. These are martingales whose paths belong to the space of step functions on any bounded interval, i.e. whose paths are a finite linear combination of indicator functions of intervals. As a consequence, a step martingale Z admits a finite number of jumps on [0,T] taking place at, say (τk, k≥1), and may be decomposed as (with τ0=0)

$$Z_{t} = Z_{0}+\sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}} - Z_{\tau_{k-1}}\right)1_{\{\tau_{k} \leq t\}}\;. $$

Looking at such decomposition, we see that step martingales may easily be constructed by an autoregressive scheme.

Proposition 1

Let Z be a càdlàg process with integrable variation starting from Z0. We assume that \(\mathbb {E}[|Z_{0}|]<+\infty \). Then, the following are equivalent:

  • Z is a step martingale with respect to its natural filtration \(\mathbb {F}\),

  • there exists a strictly increasing sequence of random times (τk, k≥0) starting from τ0=0, taking values in [0,+∞] and with no point of accumulation such that

    $$ Z_{t} := Z_{0}+\sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right)1_{\{\tau_{k} \leq t\}} $$
    (2)

    and which satisfies for any 0≤s≤t :

    $$ \sum\limits_{k=1}^{+\infty} \mathbb{E}\left[\left. \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right) 1_{\{s< \tau_{k}\leq t\}} \right|\mathcal{F}_{s}\right]=0\;. $$
    (3)

    Furthermore, the filtration \(\mathbb {F}\) is given for s≥0 by \(\mathcal {F}_{s} = \sigma \left (\left (Z_{\tau _{k}}, \tau _{k}\right) \text { for } k\geq 0 \text { such that } \tau _{k}\leq s\right)\).

Proof

i) →ii) Let Z be a step-martingale with respect to its natural filtration \(\mathbb {F}\), and denote by (τk,k≥0) the sequence of its successive jumps, with τ0=0. If Z only admits a finite number of jumps n0, then we set τn=+∞ for n>n0. This choice of the sequence (τk,k≥0) implies that the filtration \(\mathbb {F}\) equals

$$\mathcal{F}_{s} = \sigma(Z_{u}, u\leq s) = \sigma \left(\left(Z_{\tau_{k}}, \tau_{k}\right) \text{ for }\tau_{k}\leq s\right),\qquad s\geq0 $$

and that we have the representation :

$$Z_{t} =Z_{0}+ \sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right)1_{\{\tau_{k} \leq t\}}\;. $$

Taking the expectation with respect to \(\mathcal {F}_{s}\) with 0≤s≤t on both sides and applying Fubini’s theorem (since Z is of integrable variation), we deduce that :

$$\begin{aligned} Z_{s} &= Z_{0}+ \sum\limits_{k=1}^{+\infty} \mathbb{E} \left[ \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right) 1_{\{\tau_{k} \leq t\} } | \mathcal{F}_{s} \right]\\ &=Z_{0} + \sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right) 1_{\{\tau_{k} \leq s\} } + \sum\limits_{k=1}^{+\infty} \mathbb{E} \left[\left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right)1_{\{s< \tau_{k} \leq t\}} |\mathcal{F}_{s}\right] \end{aligned} $$

which implies that the second sum on the right-hand side is null.

ii) →i) Define

$$Z_{t} =Z_{0}+ \sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right)1_{\{\tau_{k} \leq t\}} \;, $$

and observe that since the sequence (τk,k≥0) has no point of accumulation, Z is clearly a step process. Furthermore, since Z is of integrable variation, we have

$$ \mathbb{E} [|Z_{t}|] \leq \mathbb{E} [|Z_{0}|] + \mathbb{E} \left[ \left| \sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}}\,-\,Z_{\tau_{k-1}}\right) 1_{\{\tau_{k} \leq t\} } \right| \right] \leq \mathbb{E} [|Z_{0}|] + \mathbb{E} \left[ \sum\limits_{0< s\leq t} \left|Z_{s}\,-\,Z_{s^{-}} \right| \right]\! < \!+\infty $$

which proves that Zt is integrable for any t≥0. Finally, as in the first part of the proof, taking the expectation with respect to \(\mathcal {F}_{s}\) in (2) and using (3), we deduce that :

$$\mathbb{E} [Z_{t}|\mathcal{F}_{s}] = Z_{0} + \sum\limits_{k=1}^{+\infty} \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right) 1_{\{\tau_{k} \leq s \} } + \sum\limits_{k=1}^{+\infty} \mathbb{E} \left[ \left(Z_{\tau_{k}}-Z_{\tau_{k-1}}\right) 1_{\{s< \tau_{k} \leq t\} } | \mathcal{F}_{s}\right] = Z_{s} $$

which proves that Z is indeed a martingale. □

Corollary 1

Let M be a martingale of integrable variation, and let (τk,k≥0) be an increasing sequence of random times starting from τ0=0, taking values in (0,+∞], with no point of accumulation and which is independent of M. Then, if Z0 is an integrable random variable, the process

$$Z_{t} = Z_{0} + \sum\limits_{k=1}^{+\infty} \left(M_{\tau_{k}}-M_{\tau_{k-1}}\right) 1_{\{ \tau_{k} \leq t \} },\qquad t\geq0 $$

is a step martingale with respect to the filtration \(\mathbb {F}\) given, for s≥0, by \(\mathcal {F}_{s} = \sigma \left (\left (M_{\tau _{k}}, \tau _{k}\right) \text { for }k\geq 1 \text { such that } \tau _{k}\leq s\right)\).

Proof

We only need to check that (3) is satisfied, which is a consequence of the tower property of conditional expectations. Indeed, define the larger filtration \(\mathcal {G}_{s}=\sigma ((\tau _{i}, i\geq 1),\, (M_{u}, u\leq s))\) and observe that on the set {s<τk≤t} :

$$\mathcal{F}_{s} \subset \mathcal{F}_{\tau_{k-1}} \subset \mathcal{G}_{\tau_{k-1}}. $$

Then, since M is a \(\mathbb {G}\)-martingale :

$$\small{\begin{aligned} \mathbb{E} \left[ \left(M_{\tau_{k}}-M_{\tau_{k-1}} \right) 1_{\{s< \tau_{k}\leq t \} } |\mathcal{F}_{s}\right] &= \mathbb{E} \left[ \mathbb{E} \left[ \left. \left(M_{\tau_{k}}-M_{\tau_{k-1}}\right) \right| \mathcal{G}_{\tau_{k-1}} \right] 1_{\{s < \tau_{k}\leq t\}} |\mathcal{F}_{s}\right] \\ & = \mathbb{E} \left[ \mathbb{E} \left[ \left. \left(M_{\tau_{k-1}}-M_{\tau_{k-1}}\right) \right| \mathcal{G}_{\tau_{k-1}} \right] 1_{\{ s \leq \tau_{k-1} < \tau_{k} \leq t \} } | \mathcal{F}_{s}\right] =0\;. \end{aligned}} $$

Remark 1

Observe that the natural filtration of Z in Proposition 1 satisfies the identity \(\mathcal {F}_{t} = \mathcal {F}_{\tau _{k}}\) on the set {τkt<τk+1}. Since the random times (τk,k≥0) are stopping times in the filtration \(\mathbb {F}\), this implies that \(\mathbb {F}\) is a jumping filtration, following the definition of Jacod-Skorokhod (1994).

Example 1

Let N be a counting process and let \(\tau _{1},\ldots,\tau _{N_{t}}\) be the sequence of jump times of N on [0,t] with τ0:=0. If (Yk,k≥1) is a family of independent and centered random variables, independent from N, then

$$Z_{t}: = Z_{0} + \sum\limits_{k=1}^{\infty} Y_{k}1_{\{ \tau_{k}\leq t \} } = Z_{0} + \sum\limits_{k=1}^{N_{t}} Y_{k}, \;\; Z_{0}\in\mathbb{R} $$

is a PWC martingale. Note that we may choose the range of such a PWC martingale by taking bounded random variables. For instance, if Z0=0 and for any k≥1,

$$\mathbb{P} \left(\frac{6a}{\pi^{2} k^{2}} \leq Y_{k} \leq \frac{6b}{\pi^{2} k^{2}} \right) =1 $$

with a<0<b, then for any t≥0, we have Zt∈[a,b] a.s.

The above corollary provides us with a simple method to construct PWC martingales. Yet, it suffers from two restrictions. First, the distribution of Zt requires averaging the conditional distribution with respect to the counting process, which may be an infinite sum. Second, a control on the range of the resulting martingale requires strong assumptions. One might try to relax the i.i.d. assumption on the Yk’s. In Example 1 the Yk’s are independent but their support decreases as 1/k². One could also draw Yk from a distribution whose support is state dependent like \(\left [ a-Z_{\tau _{k-1}}, b-Z_{\tau _{k-1}} \right ]\); then Zt∈[a,b] for all t∈[0,T]. In the sequel, we address these drawbacks by proposing another construction scheme.
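To fix ideas, the following minimal sketch (in Python) samples one path of the martingale of Example 1. It assumes a homogeneous Poisson process for N and a centered two-point law for the Yk’s supported on \([6a/(\pi^{2}k^{2}),\, 6b/(\pi^{2}k^{2})]\); both choices are illustrative and not imposed by Corollary 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_example1_path(T, lam, a, b):
    """One path (jump times and levels) of the PWC martingale of Example 1 on [0, T].

    Assumptions (for illustration only): N is a homogeneous Poisson process of
    intensity lam, and Y_k takes the two extreme values 6a/(pi^2 k^2) and
    6b/(pi^2 k^2) with probabilities chosen so that E[Y_k] = 0.  With Z_0 = 0
    this keeps the path in [a, b] since sum_k 1/k^2 = pi^2/6.
    """
    n_jumps = rng.poisson(lam * T)
    taus = np.sort(rng.uniform(0.0, T, size=n_jumps))   # Poisson jump times on [0, T]
    k = np.arange(1, n_jumps + 1)
    lo = 6.0 * a / (np.pi**2 * k**2)
    hi = 6.0 * b / (np.pi**2 * k**2)
    p_lo = b / (b - a)                                   # p_lo*lo + (1-p_lo)*hi = 0 (centering)
    Y = np.where(rng.uniform(size=n_jumps) < p_lo, lo, hi)
    times = np.concatenate(([0.0], taus))
    levels = np.concatenate(([0.0], np.cumsum(Y)))       # Z_t = levels[i] on [times[i], times[i+1])
    return times, levels

times, Z = sample_example1_path(T=10.0, lam=3.0, a=-1.0, b=1.0)
assert np.all((Z >= -1.0) & (Z <= 1.0))
```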

2.2 PWC martingales from pure jump martingales with vanishing compensator

Pure jump martingales can easily be obtained by taking the difference of a pure jump increasing process with a predictable, grounded, right-continuous process of bounded variation (called the dual predictable projection or predictable compensator). The simplest example is probably the compensated Poisson process of parameter λ defined by (Mt=Nt−λt, t≥0). This process is a pure jump martingale with piecewise linear sample paths, hence is not a PWC martingale as \(\sum _{s\leq t} \Delta M_{s} = N_{t} \neq M_{t}\). However, it is easy to see that the difference of two Poisson processes with the same intensity is a PWC martingale (in fact a step-martingale), and we shall generalize this idea in the following proposition.

Proposition 2

A \(\mathbb {F}\)-martingale of integrable variation Z (with Z0=0) is PWC if and only if there exist two \(\mathbb {F}\)-adapted, integrable and increasing pure jump processes A and B having the same dual predictable projections (i.e Ap=Bp) such that Z=AB.

Proof

Assume first that Z is a PWC martingale of integrable variation. We define

$$A_{t}= \frac{1}{2} \sum_{s\leq t} \left(|\Delta Z_{s}| + \Delta Z_{s}\right) \qquad \text{ and }\qquad B_{t}=\frac{1}{2} \sum_{s\leq t} \left(|\Delta Z_{s}| - \Delta Z_{s}\right) \;. $$

Then A and B are two increasing pure jump processes such that Z=AB. They are integrable since Z is of integrable variation, and they satisfy ApBp=(AB)p=Zp=0 which proves the first implication.

Assume now that A and B are pure jump increasing processes. We then have the representation

$$Z_{t}=A_{t}-B_{t}= \sum_{s\leq t} \Delta A_{s}- \sum_{s\leq t} \Delta B_{s} = \sum_{s\leq t} \Delta(A_{s} - B_{s}) = \sum_{s\leq t}\Delta Z_{s} \;. $$

By the triangular inequality, we may check that

$$|Z_{t}|\leq \sum_{s\leq t} | \Delta Z_{s}| \leq \sum_{s\leq t} |\Delta A_{s}| + \sum_{s\leq t}| \Delta B_{s} | = A_{t}+B_{t} $$

hence Z is integrable and of integrable variation. Finally, by definition of the predictable dual projections, the processes AAp and BBp are martingales, hence by difference, so is Z since Ap=Bp. □

An easy application of this result is the case of Lévy processes, for which the compensators are deterministic functions.

Corollary 2

Let A,B be two Lévy processes having the same Lévy measure ν, and consider a measurable function f such that f(0)=0 and \(\int _{\mathbb {R}} |f(x)| \nu (dx)<+\infty \). Then the process

$$Z_{t}=\sum_{s\leq t} \left(f(\Delta A_{s}) - f(\Delta B_{s})\right) $$

is a PWC martingale.

Proof

The proof follows from the fact that the compensator of \( \left (\sum _{s\leq t} f(\Delta A_{s}), t \geq 0 \right)\) is the deterministic process \(\left (t\int _{\mathbb {R}} f(x) \nu (dx), t \geq 0 \right) \). □

Remark 2

A centered Lévy process Z is a PWC martingale if and only if it has no drift, no Brownian component and its Lévy measure ν satisfies the integrability condition \(\int _{\mathbb {R}} |x| \nu (dx)<\infty \).

As obvious examples, one can mention the difference of two independent Gamma or Poisson processes of same parameters. Note that stable subordinators are not allowed here, as they do not fulfill the integrability condition required on A and B. We give below the PDF of these two examples :

Example 2

Let N1,N2 be two independent Poisson processes with parameter λ. Then, Z:=N1N2 is a step martingale taking integer values, with marginal laws given by the Skellam distribution with parameters μ1=μ2=λt :

$$ f_{Z_{t}}(k) = e^{-2\lambda t}I_{|k|}(2\lambda t),\qquad k\in \mathbb{Z} \;, $$
(4)

where Ik is the modified Bessel function of the first kind.
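As a quick sanity check, the following sketch compares a Monte Carlo estimate of the law of Zt=N1t−N2t with the Skellam probability mass function (4); the parameter values are arbitrary and scipy’s exponentially scaled Bessel function ive is used to avoid overflow.

```python
import numpy as np
from scipy.special import ive  # ive(k, x) = I_k(x) * exp(-x)

rng = np.random.default_rng(1)
lam, t, n_paths = 1.5, 2.0, 200_000

# Marginal only: N1_t and N2_t are independent Poisson(lam*t) variables.
Z_t = rng.poisson(lam * t, n_paths) - rng.poisson(lam * t, n_paths)

# Skellam pmf of Eq. (4): f(k) = exp(-2*lam*t) * I_|k|(2*lam*t) = ive(|k|, 2*lam*t).
k = np.arange(-10, 11)
pmf_theory = ive(np.abs(k), 2.0 * lam * t)
pmf_mc = np.array([(Z_t == kk).mean() for kk in k])
print(np.max(np.abs(pmf_theory - pmf_mc)))   # small, up to Monte Carlo error
```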

Example 3

Let γ1,γ2 be two independent Gamma processes with parameters a,b>0. Then, Z:=γ1γ2 is a PWC martingale with marginals given by

$$ f_{Z_{t}}(z) = \frac{b}{\sqrt{\pi}\Gamma(at)} \left| \frac{bz}{2} \right|^{at-\frac{1}{2}} K_{\frac{1}{2}-at} \left(b|z| \right)\;, $$
(5)

where Kβ denotes the modified Bessel function of the second kind with parameter \(\beta \in \mathbb {R}\).

Proof

The PDF of Zt is given, for 2at>1, by the inverse Fourier transform, see Gradshteyn and Ryzhik (2007, p. 349 Formula 3.385(9)) :

$$f_{Z_{t}}(z) = \frac{1}{2\pi} \int_{\mathbb{R}} \frac{e^{-iu z}} {\left(1+i \frac{u}{b} \right)^{at} \left(1-i \frac{u}{b} \right)^{at}} du \;. $$

The result then follows by analytic continuation. □
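Similarly, a minimal sketch checking the marginal density (5): since γit is exactly Gamma(at,b)-distributed, the time-t marginal of Z can be sampled without time discretization and compared with (5), where scipy’s kv is the modified Bessel function Kβ. Parameter values are illustrative.

```python
import numpy as np
from scipy.special import kv, gamma as gamma_fn

rng = np.random.default_rng(2)
a, b, t, n = 1.2, 2.0, 1.0, 200_000

# gamma^i_t ~ Gamma(shape a*t, rate b) exactly, hence no grid is needed for the marginal.
Z_t = rng.gamma(a * t, 1.0 / b, n) - rng.gamma(a * t, 1.0 / b, n)

def pdf_eq5(z):
    """Density (5) of Z_t (a symmetric Bessel-function / variance-gamma type law)."""
    z = np.abs(z)
    return (b / (np.sqrt(np.pi) * gamma_fn(a * t))
            * (b * z / 2.0) ** (a * t - 0.5) * kv(0.5 - a * t, b * z))

hist, edges = np.histogram(Z_t, bins=80, range=(-2.0, 2.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - pdf_eq5(mid))))   # small, up to Monte Carlo/binning error
```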

We conclude this section with an example of a PWC martingale which does not belong to the family of Lévy processes but has the interesting feature of evolving in a time-dependent range.

Corollary 3

Let R1,R2 be two independent squared Bessel processes of dimension δ∈(0,2). For i=1,2 set

$$g^{0}_{t}\left(R^{i}\right): = \sup \left\{ s\leq t,\; R^{i}_{s} = 0 \right\} \;. $$

Then, Z:=g0(R1)−g0(R2) is a 1-self-similar PWC martingale which evolves in the cone {[−t,t],t≥0}.

Proof

Let R be a squared Bessel process of dimension δ∈(0,2) and denote by L0(R) its local time at 0 as given by Tanaka’s formula. Set

$$Y_{t} = \left(t-g^{0}_{t}(R) \right)^{1-\frac{\delta}{2}}, \qquad t\geq0 \;. $$

In Rainer (1996, Prop. 4.1 and 6.2.1), it is proven that the process \( X = Y - \frac {1}{2^{2-\frac {\delta }{2}} \Gamma \left (2-\frac {\delta }{2} \right)} L^{0}(R)\) is a martingale with respect to the slow filtration \(\left (\mathcal {F}_{g_{t}^{0}+}, t \geq 0 \right)\). We shall prove that

$$\left(\frac{2}{2-\delta} g_{t}^{0}(R) -t,\, t \geq 0 \right) $$

is also a martingale in the same filtration. Notice first that since the random variable \(g_{t}^{0}(R)\) follows the generalized Arcsine law (see Section 3.1.2 below), the expectation of this process is constant and equal 0. We then apply Itô’s formula to Y with the function \(f(y) = y^{\frac {2}{2-\delta }}\) :

$$t-g_{t}^{0}(R) = {\int\nolimits}_{0}^{t} \frac{2}{2-\delta} Y_{s-}^{\frac{\delta}{2-\delta}}dY_{s} + \sum_{s\leq t} Y_{s}^{\frac{2}{2-\delta}} - Y_{s-}^{\frac{2}{2-\delta}} - \frac{2}{2-\delta} \left(s-g_{s-}^{0}(R) \right)^{\frac{\delta}{2}} \Delta Y_{s}\;. $$

Observe next that the instants of jumps of Y are the same as those of g0(R), i.e. \(\left \{s; Y_{s}\neq Y_{s^{-}} \right \} = \left \{ s; \, g_{s}^{0}(R) \neq g_{s^{-}}^{0}(R) \right \}\). But, the jumps of g0(R) only happen at times s when Rs=0, in which case \(g_{s}^{0}(R)=s\) or equivalently Ys=0. This yields the simplifications :

$$\footnotesize{\begin{aligned} t-g_{t}^{0}(R) &= \frac{2}{2-\delta} \int_{0}^{t}Y_{s-}^{\frac{\delta}{2-\delta}}dY_{s} + \sum_{s\leq t} - \left(s-g_{s-}^{0}(R) \right) + \frac{2}{2-\delta} \left(s-g_{s-}^{0}(R) \right)^{\frac{\delta}{2}} \left(s-g_{s-}^{0}(R) \right)^{1-\frac{\delta}{2}}\\ & = \frac{2}{2-\delta} \int_{0}^{t} Y_{s-}^{\frac{\delta}{2-\delta}}dY_{s} + \left(\frac{2}{2-\delta} -1\right) \sum_{s\leq t} \left(g_{s}^{0}(R)-g_{s-}^{0}(R) \right) \\ & = \frac{2}{2-\delta} \int_{0}^{t} Y_{s-}^{\frac{\delta}{2-\delta}}dY_{s} + \left(\frac{2}{2-\delta} -1\right) g_{t}^{0}(R) \end{aligned}} $$

and it remains to prove that the stochastic integral is a martingale. Since the support of dL0(R) is included in \(\{s;\, R_{s}=0\} = \{s;\, Y_{s}=0\}\), and L0(R) is continuous, we deduce that

$$\int_{0}^{t} Y_{s-}^{\frac{\delta}{2-\delta}} dL^{0}_{s}(R) = \int_{0}^{t} Y_{s}^{\frac{\delta}{2-\delta}} dL_{s}^{0}(R)=0 $$

hence the process

$$t-\frac{2}{2-\delta} g_{t}^{0}(R)= \frac{2}{2-\delta}\int_{0}^{t} Y_{s-}^{\frac{\delta}{2-\delta}} dY_{s} = \frac{2}{2-\delta}\int_{0}^{t} Y_{s-}^{\frac{\delta}{2-\delta}} dX_{s} $$

is a local martingale. To prove that it is a true martingale, choose a horizon T and observe that the process

$$\left(t-\frac{2}{2-\delta} g_{t}^{0}(R) + \frac{2}{2-\delta} T, \, 0 \leq t \leq T \right) $$

is now a positive local martingale, hence a supermartingale with constant expectation, hence a true martingale. Finally, the self-similarity of \(g_{t}^{0}(R)\) comes from that of R (see Revuz and Yor 1999, Proposition 1.6, p. 443). Indeed, for any fixed t>0 :

$$g_{t}^{0}(R) \,=\, \sup\{s\leq t,\; R_{s}=0\} \mathop{=}\limits^{(\text{law})} \sup\{s\leq t,\; t R_{s/t}=0\} = t \sup\{u\leq 1,\; R_{u}=0\} = t\,g_{1}^{0}(R) \;. $$

Remark 3

When δ=1, we have \(R=W^{2}\) where W is a standard Brownian motion. Using Lévy’s Arcsine law, the PDF of Z1 is given by the convolution, for z∈[0,1] :

$$f_{Z_{1}}(z) = \frac{1}{\pi^{2}} \int_{0}^{1-z} \frac{1}{\sqrt{x(1-x)}} \frac{1}{\sqrt{(z+x)(1-z-x)}} dx = \frac{2}{\pi^{2}} F \left(\frac{\pi}{2}, \sqrt{1-z^{2}} \right) \;, $$

where F denotes the incomplete elliptic integral of the first kind, see Gradshteyn and Ryzhik (2007, p. 275, Formula 3.147(5)). This yields, by symmetry and scaling :

$$f_{Z_{t}}(z)= \frac{2}{\pi^{2}} \int_{0}^{\frac{\pi}{2}} \frac{dx}{\sqrt{t^{2} \cos^{2}(x) + z^{2} \sin^{2}(x)}}\; 1_{\{0<|z|\leq t\}} \;. $$

Both the recursive and the vanishing-compensator approaches are rather restrictive in terms of attainable range and analytical tractability. In the next subsection, we provide a more general method that can be used to build PWC martingales valued in any connected subset of \(\mathbb {R}\) in a simple and tractable way.

2.3 PWC martingales using time-changed techniques

In this section, we construct a PWC martingale Z by time-changing a latent (\(\mathbb {P}, \mathbb {F}\))-martingale \(\tilde {Z}=\left (\tilde {Z}_{t}\right)_{t\geq 0}\) with the help of a suitable time-change process θ.

Definition 2

(time change process) A \(\mathbb {F}\)-time change process θ=(θt)t∈[0,T] is a stochastic process satisfying

  • θ0=0,

  • for any t∈[0,T], θt is \(\mathcal {F}_{t}\)-measurable (i.e. θ is adapted to the filtration \(\mathbb {F}\)),

  • the map t↦θt is càdlàg and a.s. non-decreasing.

Under mild conditions stated below, \(Z:=\left (\tilde {Z}_{\theta _{t}}\right)_{t\geq 0}\) is proven to be a martingale with respect to its own filtration, with the desired piecewise constant behavior. Most results regarding time-changed martingales deal with continuous martingales time-changed with a continuous process (Cont and Tankov 2004; Jeanblanc et al. 2007; Revuz and Yor 1999). This does not provide a satisfactory solution to our problem as the resulting martingale will obviously have continuous sample paths. On the other hand, it is obvious that not all time-changed martingales remain martingales, so that conditions are required on \(\tilde {Z}\) and/or on θ.

Remark 4

Every \(\mathbb {F}\)-martingale time-changed with a \(\mathbb {F}\)-adapted process remains a semi-martingale but not necessarily a martingale. For instance, setting \(\tilde {Z} = W\) and θt= inf{s:Ws>t} then \(\tilde {Z}_{\theta _{t}} = t\). Also, if θ is independent from \(\tilde {Z}\), then the martingale property is always satisfied, but Z may fail to be integrable. For example if \(\tilde {Z}=W\) and θ is an independent α-stable subordinator with α=1/2 then the time-changed process Z is not integrable: \(\mathbb {E} \left [ \left |\tilde {Z}_{\theta _{t}} \right |~|\theta _{t} \right ] = \sqrt {\frac {2}{\pi }} \sqrt {\theta _{t}}\) and \(\mathbb {E} \left [ \sqrt {\theta _{t}} \right ] \) is infinite. The proposition below gives sufficient conditions for Z to be integrable.

Proposition 3

Let \(\tilde {Z}\) be a martingale, and θ be a time-change process independent from \(\tilde {Z}\). We assume that θ has PWC paths and that one of the following assumptions hold :

  1. 1.

    \(\tilde {Z}\) is a positive martingale,

  2. 2.

    \(\tilde {Z}\) is uniformly integrable,

  3. 3.

    there exists an increasing function k such that θt≤k(t) a.s. for all t.

Then \(Z: = \left (\tilde {Z}_{\theta _{t}}\right)_{t\geq 0}\) is a martingale with respect to its natural filtration.

Proof

We first check that Z is integrable.

  1. 1.

    When \(\tilde {Z}\) is a positive martingale, we have \(\mathbb {E} [ |Z_{t} | ] = \mathbb {E} \left [ \tilde {Z}_{\theta _{t}} \right ] = \mathbb {E}[Z_{0}] < + \infty \).

  2. 2.

    When \(\tilde {Z}\) is uniformly integrable, we have

    \(\mathbb {E}[|Z_{t}|] = \mathbb {E} \left [ \left | \tilde {Z}_{\theta _{t}} \right | \right ] \leq \mathbb {E} \left [\left |\tilde {Z}_{\infty }\right |\right ] < + \infty \).

  3. 3.

    When θt≤k(t) a.s. for all t, we have \(\mathbb {E}[|Z_{t}|] = \mathbb {E} \left [ \left | \tilde {Z}_{\theta _{t}} \right | \right ] \leq \mathbb {E} \left [ \left | \tilde {Z}_{k(t)} \right | \right ] < +\infty.\)

Next, to prove the martingale property, define the larger filtration \(\mathbb {G}\) given for s≥0 by \(\mathcal {G}_{s} = \sigma \left (\left (\theta _{u}, u\geq 0 \right), \left (\tilde {Z}_{u}, u\leq \theta _{s}\right)\right)\). Applying the tower property of conditional expectation with 0≤s≤t, we obtain :

$$\mathbb{E} [Z_{t}| \mathcal{F}_{s}] = \mathbb{E} \left[ \left. \mathbb{E} \left[ \tilde{Z}_{\theta_{t}} | \mathcal{G}_{s} \right] \right| \mathcal{F}_{s} \right] = \mathbb{E} \left[ \tilde{Z}_{\theta_{s}} | \mathcal{F}_{s} \right] = \mathbb{E} \left[Z_{s}| \mathcal{F}_{s} \right] = Z_{s} $$

where the second equality follows from the independence between \(\tilde {Z}\) and θ. Finally, since θ has PWC paths, so does Z :

$$Z_{t} = \tilde{Z}_{\theta_{t}} = \tilde{Z}_{\theta_{0}} + \sum_{s\leq t} \left(\tilde{Z}_{\theta_{s}} - \tilde{Z}_{\theta_{s^{-}}}\right) = Z_{0}+ \sum_{s\leq t} \left(Z_{s} -Z_{s^{-}}\right) $$

which ends the proof. □

From a practical point of view, general time-change processes θ that are unbounded on [0,T] may cause some problems. Indeed, to simulate sample paths of Z on [0,T], one needs to simulate sample paths of \(\tilde {Z}\) on [0,θT]. This is annoying as θT can take arbitrarily large values. Hence, the class of time-change processes θ that are bounded by some function k on [0,T] for any T<∞, whilst preserving analytical tractability, proves to be quite interesting. This requirement is of course violated by most standard time-change processes (e.g. integrated CIR, Poisson, Gamma, or Compound Poisson subordinators). A naive alternative consists in capping the latter, but this would trigger some difficulties. For instance, using θt=Nt∧t where N is a Poisson process would mean that Z=Z0 before the first jump of N, but the resulting process may then have linear pieces (hence not be piecewise constant). There exist, however, simple time-change processes θ satisfying sups∈[0,t]θs≤k(t) for some function k bounded on any closed interval, while being piecewise constant, having stochastic jumps, and having a non-zero probability of jumping in any time set of non-zero measure. Building PWC martingales using such types of processes is the purpose of the next section.

3 Lazy martingales

We first present a stochastic time-change process that satisfies this condition in the sense that the calendar time is always ahead of the stochastic clock, that is, it satisfies the boundedness requirement of Proposition 3 with the linear boundary k(t)=t. We then use the latter to create PWC martingales.

3.1 Lazy clocks

We would like to define stochastic clocks that keep time frozen almost everywhere, can jump occasionally, but cannot go ahead of the real clock. Those stochastic clocks would then exhibit the desired piecewise constant paths, and the last constraint has the nice feature that any stochastic process Z adapted to \(\mathbb {F}\) is also adapted to \(\mathbb {F}\) enlarged with the filtration generated by θ. In particular, we do not need to know the value of Z after the real time t. As far as Z is concerned, only the sample paths of Z (in fact \(\tilde {Z}\)) up to θt≤t matter. In the sequel, we consider a specific class of such processes, called lazy clocks hereafter, that have the specific property that the stochastic clock typically “sleeps” (i.e. is “on hold”), but gets synchronized to the calendar time at some random times.

Definition 3

(lazy clock) The stochastic process \(\theta : \mathbb {R}^{+} \rightarrow \mathbb {R}^{+},~t \mapsto \theta _{t}\) is a \(\mathbb {F}\)-lazy clock if it satisfies the following properties

  • it is an \(\mathbb {F}\)-time change process: in particular, it is grounded (θ0=0), càdlàg and non-decreasing;

  • it has piecewise constant sample paths : \(\theta _{t} = \sum _{s\leq t} \Delta \theta _{s}\);

  • it can jump at any time and, when it does, it synchronizes to the calendar clock, i.e. there is the equality \(\{s>0 ;\, \theta _{s}\neq \theta _{s^{-}}\} = \{s>0; \,\theta _{s} =s\}.\)

In the sense of this definition, Poisson and Compound Poisson processes are examples of subordinators that keep time frozen almost everywhere, but they are not lazy clocks, as nothing constrains them to reach the calendar time at each jump time (i.e., they do not satisfy θτ=τ at every jump time τ). Neither are their capped versions, as there are some intervals during which θ cannot jump or grows linearly.

Remark 5

Note that for each t>0, the random variable θt is a priori not a \(\mathbb {F}\)-stopping time. By contrast, if \(\mathbb {F}\) is right-continuous, the first passage time of the stochastic process θ beyond a given level is a stopping time. More precisely, the sequence (Ct, t≥0),

$$C_{t} := \inf\{s~;~\theta_{s}>t\} $$

is an increasing family of \(\mathbb {F}\)-stopping times. Conversely, the lazy clock θ=(θt, t≥0) is a family of \((\mathcal {F}_{C_{s}},\,s\geq 0)\)-stopping times, see Revuz and Yor (1999, Chapter V, Prop. (1.1)).

In the following, we show that lazy clocks are essentially linked with last passage times, as illustrated in the next proposition.

Proposition 4

A process θ is a \(\mathbb {F}\)-lazy clock if and only if there exists a càdlàg process A starting from 0, adapted to \(\mathbb {F}\), such that the set \(\mathcal {Z}:=\{s;\, A_{s^{-}}=0\text { or }A_{s}=0\}\) has a.s. zero Lebesgue measure and θ=g with

$$g_{t}:=\sup\{s\leq t;\, A_{s^{-}}=0\text{ or }A_{s}=0\},\qquad t\geq0 \;. $$

Proof

If θ is a lazy clock, then the result is immediate by taking At=θt−t, which is càdlàg and whose set of zeroes coincides with the jumps of θ, hence is countable. Conversely, fix a scenario ω∈Ω. Since A is càdlàg, the set \(\mathcal {Z}(\omega)=\{s;\, A_{s^{-}}(\omega)=0 \text { or } A_{s}(\omega)=0\}\) is closed, hence its complementary may be written as a countable union of disjoint intervals. We claim that

$$ \mathcal{Z}^{c}(\omega) = \bigcup_{s\geq0} ]g_{s^{-}}(\omega), g_{s}(\omega)[ \;. $$
(6)

Indeed, observe first that since s↦gs(ω) is increasing, it has a countable number of discontinuities, hence the union on the right hand side is countable. Furthermore, the intervals which are not empty are such that As(ω)=0 or \(A_{s^{-}}(\omega)=0\) and gs(ω)=s. In particular, if s1<s2 are associated with non empty intervals, then \(g_{s_{1}}(\omega)=s_{1} \leq g_{s_{2}^{-}}(\omega)\) which proves that the intervals are disjoint.

Now, let \(u\in \mathcal {Z}^{c}(\omega)\). Then Au(ω)≠0. Define \(d_{u}(\omega) = \inf \{s\geq u, \,A_{s^{-}}(\omega)=0 \text { or } A_{s}(\omega)=0\}\). By right-continuity, du(ω)>u. We also have \(A_{u^{-}}(\omega)\neq 0\) which implies that gu(ω)<u. Therefore, u∈]gu(ω),du(ω)[ which is non empty, and this may also be written \(u\in ]g_{d_{u}^{-}(\omega)}(\omega),\, g_{d_{u}(\omega)}(\omega)[\) which proves the first inclusion. Conversely, it is clear that if \(u\in ]g_{s^{-}}(\omega), g_{s}(\omega)[\), then Au(ω)≠0 and \(A_{u^{-}}(\omega)\neq 0\). Otherwise, we would have \(u=g_{u}(\omega)\leq g_{s^{-}}(\omega)\) which would be a contradiction. Equality (6) is thus proved. Finally, it remains to write :

$$g_{t} = {\int\nolimits}_{0}^{g_{t}} 1_{\mathcal{Z}} ds + \int_{0}^{g_{t}} 1_{\mathcal{Z}^{c}} ds = {\sum\nolimits}_{s\leq t} \Delta g_{s} $$

since \(\mathcal {Z}\) has zero Lebesgue measure. □

Remark 6

  1. 1.

    Note that lazy clocks are naturally involved with PWC martingales. Indeed, if M is a PWC martingale, then \(M_{t} = M_{g_{t}(M)}\) where gt(M)= sup{s≤t, ΔMs≠0} is a lazy clock.

  2. 2.

    If \(\mathbb {G}\) denotes the natural filtration of the process A, then, following the definition in Dellacherie-Meyer (Dellacherie et al. 1992, Chapter XX, section 28), we see that θ is adapted to the slow filtration \((\mathcal {G}_{g_{t}+})_{t\geq 0}\).

  3. 3.

    It was observed in Remark 5 that lazy clocks are, in general, not stopping times. \(\mathbb {F}\)-lazy clocks are however \(\mathbb {F}\)-honest times, see e.g. Aksamit and Jeanblanc (2017); Mansuy and Yor (2006). To see this, observe first that when s≥t, θt is obviously \(\mathcal {F}_{s}\)-measurable. Consider now the case s<t. Conditionally on the event {gt<s}, we have gt<s<t. By definition, the lazy clock takes a constant value on [gt,t), leading to gt=gs. Therefore gt is (conditionally) \(\mathcal {F}_{s}\)-measurable in this case as well. This shows that gt is an honest time. Observe that honest times are known to be closely linked with last passage times. In this specific context, the connection is given in Proposition 4.

  4. 4.

    The natural filtration of a lazy clock is called a lazy filtration, by extension of the slow filtration.

We give below a few examples of lazy clocks related to last passage times prior to a given time t, whose PDF is known explicitly. Whereas some of these random variables (and corresponding distributions) have been studied in the literature, we use last passage times as clocks, i.e. in a dynamic way, as stochastic processes evolving with t.

3.1.1 Poisson lazy clocks

Let (Xn,n≥1) be strictly positive random variables and consider the counting process N:=(Nt)t≥0 defined as

$$N_{t} := \sum\limits_{k=1}^{+\infty} 1_{\left\{ {\sum\nolimits}_{i=1}^{k} X_{i}\leq t \right\}}, \, t\geq0 \;. $$

Then the process (gt(N),t≥0) defined as the last jump time of N prior to t or zero if N did not jump by time t:

$$ g_{t}(N):=\sup\{s\leq t,\; N_{s}\neq N_{s^-}\} = \sum\limits_{k=1}^{+\infty} X_{k} 1_{\left\{ {\sum\nolimits}_{i=1}^{k} X_{i}\leq t \right\}} \;. $$
(7)

is a lazy clock. Its cumulative distribution function (CDF) is easily given, for s≤t, by \(\mathbb {P}(g_{t}(N) \leq s) = \mathbb {P}(N_{t}=N_{s})\). If N is a Poisson process with intensity λ, i.e. when the random variables (Xk, k≥1) are i.i.d. with an exponential distribution of parameter λ, we obtain in particular \(\mathbb {P}(g_{t}(N)\leq s) = e^{-\lambda (t-s)}\), see Vrins (2016) for similar computations. Sample paths are shown in Fig. 1.

Fig. 1 (Sample paths of lazy clocks): a Poisson lazy clock (λ=3/2, see Section 3.1.1); b Brownian lazy clock (see Section 3.1.2)

3.1.2 Diffusion lazy clock

Another simple example is given by the last passage time \(g_{t}^{a}(X)\) of a diffusion X to some level a before time t. Its CDF may be written, applying the Markov property :

$$\mathbb{P}\left(g_{t}^{a}(X)\leq s \right) = \mathbb{E}\left[ \mathbb{P}_{X_{s}}(T_{a}>t-s)\right] $$

where Ta= inf{u≥0: Xu=a}.

  • Let \(b\in \mathbb {R}\) and consider the drifted Brownian motion (Xt)t≥0, Xt:=Bt−bt. Then, the probability density function (PDF) of \(g_{t}^{a}(B-b)\) is given by (see for instance Salminen (1988) or Kahale (2008)) :

    $$f_{g^{a}_{t}(B-b)}(s)\! =\! \frac{\phi\left(\frac{a+bs}{\sqrt{s}}\right)} {\sqrt{s}}\! \left(\frac{2}{\sqrt{t-s}} \phi \left(b\sqrt{t-s} \right) + 2b \Phi \left(b\sqrt{t-s} \right)\! - b \right)\,, \; 0< s< t $$

    where Φ denotes the standard Normal CDF and Φ′=ϕ its PDF. Note that when a≠0, the distribution of \(g_{t}^{a}(B-b)\) may have a mass at 0, see Shreve (2004, Corollary 7.2.2).

  • Let R be a Bessel process with dimension δ∈(0,2) and set \(\nu =\frac {\delta }{2}-1\). Then, the PDF of \(g_{t}^{0}(R)\) is given by the generalized Arcsine law (see Gradinaru et al. (1999)) :

    $$f_{g_{t}^{0}(R)}(s) = \frac{1}{\Gamma(|\nu|)\Gamma(1+\nu)} (t-s)^{\nu} s^{-1-\nu}~~,~~ 0< s< t \;. $$

3.1.3 Stable lazy clock

The generalized Arcsine law also appears when dealing with stable Lévy processes L with parameter α∈(1,2]. Then, from Bertoin (1996, Chapter VIII, Theorem 12), the PDF of \(g_{t}^{0}(L)\) is given by :

$$f_{g_{t}^{0}(L)}(s) = \frac{\sin(\pi/\alpha)} {\pi} \, s^{-\frac{1}{\alpha}} \, (t-s)^{\frac{1}{\alpha}-1} \;, \; 0< s< t \;. $$
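Both generalized Arcsine densities above are, after rescaling by t, Beta densities, so the marginal law of these lazy clocks can be sampled exactly. A minimal sketch (illustrative parameters, standard numpy only):

```python
import numpy as np

rng = np.random.default_rng(3)

def bessel_lazy_clock_marginal(t, delta, size):
    """Marginal of g_t^0(R) for a Bessel process of dimension delta in (0,2).

    With nu = delta/2 - 1 < 0, the density (t-s)^nu s^(-1-nu)/(Gamma(|nu|)Gamma(1+nu))
    on (0, t) is that of t * Beta(|nu|, 1+nu)."""
    nu = delta / 2.0 - 1.0
    return t * rng.beta(-nu, 1.0 + nu, size)

def stable_lazy_clock_marginal(t, alpha, size):
    """Marginal of g_t^0(L) for an alpha-stable Levy process, alpha in (1,2]:
    the density sin(pi/alpha)/pi * s^(-1/alpha) (t-s)^(1/alpha - 1) is that of
    t * Beta(1 - 1/alpha, 1/alpha)."""
    return t * rng.beta(1.0 - 1.0 / alpha, 1.0 / alpha, size)

# delta = 1 recovers the Brownian case and Levy's classical Arcsine law (mean t/2).
g = bessel_lazy_clock_marginal(t=2.0, delta=1.0, size=100_000)
print(g.mean())   # close to 1.0
```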

3.2 Time-changed martingales with lazy clocks

In this section we introduce lazy martingales. A lazy martingale Z is defined as a stochastic process obtained by time-changing a latent martingale \(\tilde {Z}\) with an independent lazy clock θ. Lazy martingales \(Z=\left (\tilde {Z}_{\theta _{t}}\right)_{t\geq 0}\) are expected to be PWC martingales; this is proven in Theorem 1 below. Note that from Point 3) of Proposition 3, the process Z is always a martingale, i.e. no assumptions are needed on \(\tilde {Z}\).

We first show that (in most situations) the lazy clock is adapted to the filtration generated by Z. This is done by observing that the knowledge of θ amounts to the knowledge of its jump times, since the sizes of the jumps are always obtained as a difference with the calendar time. In particular, the properties of the lazy clocks allow one to reconstruct the trajectories of Z on [0,t] only from past values of \(\tilde {Z}\) and θ; no information about the future (measured according to the real clock) is required. We then provide the resulting distribution when the clock g(N) is governed by Poisson, inhomogeneous Poisson or Cox processes.

Theorem 1

Let \(\tilde {Z}\) be a martingale independent from the lazy clock θ. Then \(Z = \tilde {Z}_{\theta }\) is a PWC martingale in its natural filtration \(\mathbb {F}\). If furthermore \(\mathbb {F}\) is assumed to be complete and if \(\forall u\neq v,\; \mathbb {P} \left (\tilde {Z}_{u} =\tilde {Z}_{v} \right) = 0\), then θ is adapted to the filtration of Z.

Proof

Since by definition θtt for any t≥0, we first deduce from Point 3) of Proposition 3 that the process Z is a PWC martingale. Then, the fact that θ is adapted to the natural filtration of Z follows from the identity

$$\{0< s\leq t ; \,\theta_{s} \neq \theta_{s^{-}}\} = \{0< s\leq t;\, Z_{s}\neq Z_{s^{-}}\} \cup \{0< s\leq t; \,Z_{s}= Z_{s^{-}} \text{ and } \theta_{s}=s\} \;. $$

Indeed, observe that the set \(\mathcal {N} = \{0< s\leq t; \,Z_{s}= Z_{s^{-}} \text { and } \theta _{s}=s\}\) is of measure zero since, using the independence between Z and θ,

$$\mathbb{P}(\mathcal{N}) \,=\, \mathbb{P} \left(\left\{ 0\!<\! s\leq t; \, \tilde{Z}_{\theta_{s}}\,=\, \tilde{Z}_{\theta_{s^{-}}} \text{ and } \theta_{s}\,=\,s \right\} \right) \leq \mathbb{E} \left[ \sum_{0< s\leq t, \theta_{s}=s} \mathbb{P} \left(Z_{\theta_{s}} \,=\,Z_{\theta_{s^{-}}} \right) \!\right]\! \,=\, 0 $$

thanks to the assumption \(\forall u\neq v,\; \mathbb {P} \left (\tilde {Z}_{u} =\tilde {Z}_{v} \right) = 0\). Therefore, we have

$$\{0< s\leq t ; \,\theta_{s} \neq \theta_{s^{-}}\} = \{0< s\leq t;\, Z_{s}\neq Z_{s^{-}}\}\qquad \text{a.s.} $$

and taking the supremum on both sides and using Point 3) in the definition of a lazy clock, we deduce that \(\theta _{t} = \sup \left \{s\leq t;\;Z_{s}\neq Z_{s^{-}} \right \}\) a.s., which proves that θ is adapted to the natural filtration of Z since \(\mathbb {F}\) is complete. □

Example 4

Let \(\tilde {Z}\) be a continuous martingale and N an independent Poisson process with intensity λ. Then, Z=(Zt)t≥0 defined as \(Z_{t}:=\tilde {Z}_{g_{t}(N)}\) is a right-continuous PWC martingale in its natural filtration with the same range as \(\tilde {Z}\). Moreover, its CDF is given by

$$ F_{Z_{t}}(z) = \mathbb{P}(Z_{t}\leq z) = e^{-\lambda t} \left(1_{\{Z_{0}\leq z\}}+ \lambda \int_{0}^{t} F_{\tilde{Z}_{u}}(z) e^{\lambda u} du \right) \;. $$
(8)

This result follows from the example of Subsection 3.1.1, using the independence assumption between \(\tilde {Z}\) and N :

$$ F_{Z_{t}}(z)= \int_{0}^{\infty} F_{\tilde{Z}_{u}}(z) \mathbb{P}(g_{t}(N)\in du) \;. $$
(9)
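As an illustration of (8), the following sketch evaluates the CDF by quadrature and checks it against a direct Monte Carlo simulation of \(Z_{t}=\tilde {Z}_{g_{t}(N)}\). The latent martingale is taken, purely for illustration, to be a Brownian motion started at Z0, so that \(F_{\tilde {Z}_{u}}(z)=\Phi ((z-Z_{0})/\sqrt {u})\); all parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

rng = np.random.default_rng(4)
lam, t, Z0, z = 2.0, 1.5, 0.0, 0.3

# Illustrative latent martingale: a Brownian motion started at Z0.
def F_latent(u):
    return norm.cdf((z - Z0) / np.sqrt(u)) if u > 0 else float(Z0 <= z)

# Eq. (8): F_{Z_t}(z) = exp(-lam*t) * ( 1_{Z0<=z} + lam * int_0^t F_latent(u) exp(lam*u) du ).
integral, _ = quad(lambda u: F_latent(u) * np.exp(lam * u), 0.0, t)
cdf_formula = np.exp(-lam * t) * (float(Z0 <= z) + lam * integral)

# Monte Carlo check: the marginal of g_t(N) has CDF exp(-lam*(t-s)) on [0, t]
# (atom of size exp(-lam*t) at 0), sampled here by inverse transform.
n = 200_000
g = np.maximum(t + np.log(rng.uniform(size=n)) / lam, 0.0)
Z_t = np.where(g > 0, Z0 + np.sqrt(g) * rng.standard_normal(n), Z0)
print(cdf_formula, (Z_t <= z).mean())   # the two numbers should be close
```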

A similar result applies when N is a Cox process, i.e. an inhomogeneous Poisson process whose intensity λ:=(λt)t≥0 is an independent (positive) stochastic process.

Corollary 4

Let N be a Cox process independent from \(\tilde {Z}\) and define \(P(s,t):=\mathbb {E} \left [ e^{-(\Lambda _{t}-\Lambda _{s})} \right ]\) where \(\Lambda _{t}: = \int _{0}^{t}\lambda _{u} du\). Then,

$$ F_{Z_{t}}(z) = \left(1_{\{Z_{0}\leq z\}}P(0,t)+ \int_{0}^{t} F_{\tilde{Z}_{s}}(z) d_{s}P(s,t)\right)\;. $$
(10)

Proof

If λ is deterministic, i.e. in the inhomogeneous Poisson case, a direct adaptation of Example 4 yields the expression

$$F_{Z_{t}}(z)=e^{-\Lambda(t)} \left(1_{\{ Z_{0}\leq z \}} + \int_{0}^{t} \lambda(u) F_{\tilde{Z}_{u}}(z) e^{\Lambda(u)} du \right) \;. $$

Now, the Cox case may be obtained from the inhomogeneous Poisson case by conditioning with respect to the (independent) stochastic intensity. Indeed, applying the tower property of conditional expectation:

$$\begin{aligned} F_{Z_{t}}(z) &= \mathbb{E} \left[ \mathbb{E} \left[ \mathbb{P}(Z_{t} \leq z) | \lambda_{u},~0 \leq u \leq t \right] \right]\\ & = 1_{\{Z_{0} \leq z\}} \mathbb{E} \left[ e^{-\Lambda_{t}} \right] + \mathbb{E} \left[ \int_{0}^{t} \lambda_{s} \mathbb{P} \left(\tilde{Z}_{s} \leq z \right) e^{-(\Lambda_{t}-\Lambda_{s})} ds \right]\\ & =1_{\{Z_{0}\leq z\}}P(0,t)+ \int_{0}^{t} F_{\tilde{Z}_{s}}(z) \mathbb{E} \left[ \lambda_{s} e^{-(\Lambda_{t}-\Lambda_{s})} \right] ds \end{aligned} $$

where in the second line we have used the independence between λ and \(\tilde {Z}\), and in the last equality Tonelli’s theorem to exchange the integral and expectation operators when applied to non-negative functions. Finally, from Leibniz rule, \(\lambda _{s} e^{-(\Lambda _{t}-\Lambda _{s})}=\frac {d}{ds} e^{-(\Lambda _{t}-\Lambda _{s})}\) so

$$ \mathbb{E} \left[ \lambda_{s} e^{-(\Lambda_{t}-\Lambda_{s})} \right] = \frac{d}{ds} \mathbb{E} \left[ e^{-(\Lambda_t-\Lambda_s)} \right] = \frac{d}{ds}P(s,t)\;. $$
(11)

Remark 7

Notice that P(s,t) does not correspond to the expectation of \(e^{-\int _{s}^{t} \lambda _{u} du}\) conditional upon \(\mathcal {F}_{s}\), the filtration generated by λ up to s, as is often the case e.g. in mathematical finance. It is an unconditional expectation that can be evaluated with the help of the tower law. In the specific case where λ is an affine process, for example if \(\mathbb {E} \left [e^{-\int _{s}^{t} \lambda _{u} du}| \lambda _{s} = x \right ]\) takes the form A(s,t)eB(s,t)x for some deterministic functions A, B, then

$$P(s,t) = \mathbb{E} \left[ e^{-\int_{s}^{t} \lambda_{u} du} \right] = \mathbb{E} \left[ \mathbb{E} \left[ A(s,t)e^{-B(s,t)\lambda_{s}} \right] \right] = A(s,t) \varphi_{\lambda_{s}}(iB(s,t)) \;. $$

where \(\varphi _{\lambda _{s}}(u): = \mathbb {E} \left [ e^{iu\lambda _{s}} \right ]\) denotes the characteristic function of the random variable λs.

Example 5

In the case λ follows a CIR process, i.e. if \(d \lambda _{t} = k (\theta -\lambda _{t}) dt + \sigma \sqrt {\lambda _{t}} dW_{t}\) with λ0>0, then λs has the same law as rs/cs where cs=ν/(θ(1−eks)) and rs is a non-central chi-squared random variable with ν=4kθ/σ2 degrees of freedom and non-centrality parameter κs=csλ0eks. In this case, \(\varphi _{\lambda _{s}}(u) = \mathbb {E} \left [ \mathrm {e}^{i(u/c_{s}) r_{s}} \right ] = \varphi _{r_{s}} (u/c_{s})\) where \(\varphi _{r_{s}}(v) = \frac {\exp \left (\frac {\kappa_{s} iv}{1-2iv} \right)} {(1-2iv)^{\nu/2}}\).

3.3 Some lazy martingales without independence assumption

We have seen that when \(\tilde {Z}\) is a martingale and θ an independent lazy clock, then \(\left (Z_{t}=\tilde {Z}_{\theta _{t}}, \, t \geq 0\right)\) is a PWC martingale in its natural filtration. We now give an example where the lazy clock θ is not independent from the latent process \(\tilde {Z}\).

Proposition 5

Let B and W be two correlated Brownian motions with coefficient ρ and f a continuous function. Define the lazy clock :

$$g_{t}^{f}(W) := \sup\{s\leq t,\; W_{s}=f(s)\} \;. $$

Let h(W) be a progressively measurable process with respect to the natural filtration of W and such that \(\mathbb {E} \left [ \int _{0}^{t} h^{2}_{u}(W) du \right ] <+\infty \) a.s. for any t≥0. Assume that there exists a deterministic function ψ such that:

$$\int_{0}^{g_{t}^{f}(W)} h_{u}(W) dW_{u} = \psi\left(g_{t}^{f}(W)\right) \;. $$

Then, the process \(Z = \left (\int _{0}^{g_{t}^{f}(W)} h_{u}(W) dB_{u}- \rho \psi \left (g_{t}^{f}(W) \right),\; t \geq 0 \right)\) is a lazy martingale in its natural filtration.

Proof

Let β be a Brownian motion independent from W such that \(B = \rho W+\sqrt {1-\rho ^{2}}\,\beta \). We first write:

$$\small{\begin{aligned} Z_{t} = \int_{0}^{g_{t}^{f}(W)} h_{u}(W) dB_{u} - \rho \psi\left(g_{t}^{f}(W)\right) &= \int_{0}^{g_{t}^{f}(W)} h_{u}(W) dB_{u} - \rho \int_{0}^{g_{t}^{f}(W)} h_{u}(W) dW_{u} \\ & = \sqrt{1-\rho^{2}}\int_{0}^{g_{t}^{f}(W)} h_{u}(W) d\beta_{u} \;. \end{aligned}} $$

Observe now that Z is integrable, since from Itô’s isometry :

$$\mathbb{E} \left[ |Z_{t}| \right]^{2}\leq \mathbb{E} \left[ |Z_{t}|^{2} \right] = (1-\rho^{2}) \mathbb{E} \left[ \int_{0}^{g_{t}^{f}(W)} h^{2}_{u}(W) du \right] \leq (1-\rho^{2}) \mathbb{E} \left[\int_{0}^{t} h^{2}_{u}(W) du \right] < + \infty \;. $$

Define next the larger filtration \(\mathbb {G}=(\mathcal {G}_{t})_{t\geq 0}\) given by \(\mathcal {G}_{t}=\sigma \left ((W_{u}, u \geq 0), \left (\beta _{u}, u\leq g_{t}^{f}(W)\right)\right)\). Using the tower property of conditional expectations :

$$\mathbb{E} \left[Z_{t}| \mathcal{F}_{s} \right] \,=\, \sqrt{1-\rho^{2}} \!\int_{0}^{g_{s}^{f}(W)} \! h_{u}(W) d\beta_{u} + \sqrt{1\,-\,\rho^{2}} \mathbb{E} \left[ \!\left. \mathbb{E} \left[ \!\int_{g_{s}^{f}(W)}^{g_{t}^{f}(W)} h_{u}(W) d \beta_{u}| \mathcal{G}_{s} \right] \right|\mathcal{F}_{s}\right] = Z_{s} $$

since, conditionally on some scenario ω (hence with t↦Wt(ω) some fixed continuous path), the random variable \(\int _{g_{s}^{f}(W(\omega))}^{g_{t}^{f}(W(\omega))} h_{u}(W(\omega)) d\beta _{u}\) is a centered Gaussian random variable with variance \(\int _{g_{s}^{f}(W(\omega))}^{g_{t}^{f}(W(\omega))} h^{2}_{u}(W(\omega)) du\) independent from \(\left (\beta _{u}, u\leq g_{s}^{f}(W(\omega)) \right)\), hence

$$\mathbb{E} \left[ \int_{g_{s}^{f}(W)}^{g_{t}^{f}(W)} h_{u}(W) d \beta_{u}| \mathcal{G}_{s} \right] = 0 \;. $$

It is interesting to point out here that the latent process \(\tilde {Z}_{t} = \int _{0}^{t} h_{u}(W) dB_{u} - \rho \psi (t)\) is, in general, not a martingale (not even a local martingale). One obtains a martingale thanks to the lazy time-change.

Example 6

We give below several examples of application of this proposition.

  1. 1.

    Take hu=1. Then, ψ=f and \(\left (B_{g_{t}^{f}(W)} -\rho f\left (g_{t}^{f}(W)\right), \,t\geq 0\right)\) is a PWC martingale.

    More generally, we may observe from the proof above that if H is a space-time harmonic function (i.e. (t,z)→H(t,z) is \(\mathcal {C}^{1,2}\) and such that \(\frac {\partial H}{\partial t} + \frac {1}{2} \frac {\partial ^{2} H}{\partial z^{2}} =0\)), then the process

    $$\left(H\left(B_{g_{t}^{f}(W)}-\rho f \left(g_{t}^{f}(W) \right),\, \left(1-\rho^{2}\right) g_{t}^{f}(W)\right),\; t \geq 0 \right) $$

    is a PWC martingale. Notice in particular that the latent process here is not, in itself, a martingale.

  2. 2.

    Following the same idea, take \(h_{u}(W) = \frac {\partial H}{\partial z}(W_{u},u)\) for some harmonic function H. Then

    $$\int_{0}^{g_{t}^{f}(W)}\! \frac{\partial H}{\partial z}(W_{u},u) dW_{u}\! =\! H\! \left(\! W_{g_{t}^{f}(W)}, g_{t}^{f} (W)\! \right) - H(0,0)\! =\! H \!\! \left(\! f \! \left(\! g_{t}^{f}(W)\! \right), g_{t}^{f}(W)\! \right)\,-\, H(0,0) $$

    and the process

    $$\left(\! \int_{0}^{g_{t}^{f}(W)} \frac{\partial H}{\partial z}(W_{u},u) dB_{u} -\rho H \left(\! f \left(g_{t}^{f}(W)\! \right), g_{t}^{f}(W) \right), \, t \geq 0 \right) $$

    is a PWC martingale.

  3. 3.

    Consider the stochastic process \(\tilde {Z}\) whose time-t value is defined as the stochastic integral of any \(\mathcal {C}^{1}\)-function of the local time of W at 0 with respect to B up to time t. Then, the time-changed integral (Zt)t≥0, \(Z_{t}:= \tilde {Z}_{g^{0}_{t}(W)}\), is a PWC martingale in its natural filtration. To see this, take f=0 and \(h_{u}=r\left (L^{0}_{u}\right)\) where r is a \(\mathcal {C}^{1}\) function and L0 denotes the local time of W at 0. Then, integrating by parts :

    $$\int_{0}^{g_{t}^{f}(W)} r \left(L^{0}_{u}\right) dW_{u} = r \left(L_{g_{t}^{f}(W)} \right) W_{g_{t}^{f}(W)} - \int_{0}^{g_{t}^{f}(W)} W_{u} r^{\prime} \left(L^{0}_{u}\right) dL^{0}_{u} = 0 $$

    since the support of dL is included in {u,Wu=0}. Therefore, the process (Zt, t≥0), \(Z_{t}:=\int _{0}^{g_{t}^{f}(W)} r\left (L^{0}_{u}\right)dB_{u}\) is a PWC martingale.

4 Numerical simulations

In this section, we briefly sketch the construction schemes to sample paths of the lazy clocks discussed above. These procedures have been used to generate Fig. 1. Finally, we illustrate sample paths and distributions of a specific martingale in [0,1] time-changed with a Poisson lazy clock.

4.1 Sampling of lazy clock and lazy martingales

By definition, the number of jumps of a lazy clock θ on [0,T] is countable, but may be infinite. Therefore, except in some specific cases (such as the Poisson lazy clock), an exact simulation is impossible. Using a discretization grid, the simulated trajectories of a lazy clock θ on [0,T] will take the form

$$\theta_{t}:=\sup\{\tau_{i},\tau_{i}\leq t\} $$

where τ0:=0 and τ1,τ2,… are (some of) the synchronization times of the lazy clock up to time T. We can thus focus on sampling the times τ1,τ2,… whose values are no greater than T.

4.1.1 Poisson lazy clock

Trajectories of a Poisson lazy clock θt(ω)=gt(N(ω)) on a fixed interval [0,T] are very easy to obtain thanks to the properties of Poisson jump times.
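A minimal sketch of this step (Python, illustrative parameters): conditionally on NT, the Poisson jump times are sorted uniforms on [0,T], and θt=gt(N) is the largest jump time not exceeding t.

```python
import numpy as np

rng = np.random.default_rng(5)

def poisson_lazy_clock_path(T, lam, grid):
    """Evaluate one path of theta_t = g_t(N) (Section 3.1.1) on the points of `grid`."""
    # Conditionally on N_T = n, the jump times are n sorted uniforms on [0, T].
    jumps = np.sort(rng.uniform(0.0, T, size=rng.poisson(lam * T)))
    jumps = np.concatenate(([0.0], jumps))             # theta = 0 before the first jump
    idx = np.searchsorted(jumps, grid, side="right") - 1
    return jumps[idx]                                   # last synchronization time <= t

grid = np.linspace(0.0, 10.0, 1001)
theta = poisson_lazy_clock_path(T=10.0, lam=1.5, grid=grid)
assert np.all(theta <= grid) and np.all(np.diff(theta) >= 0)
```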

4.1.2 Brownian lazy clock

Sampling a trajectory for a Brownian lazy clock requires the last zero of a Brownian bridge. This is the purpose of the following lemma.

Lemma 1

Let Wx,y,t be a Brownian bridge on [0,t], t≤T, starting at \(W_{0}^{{x,y,t}}=x\) and ending at \(W_{t}^{{x,y,t}}=y\), and define its last passage time at 0 :

$$g_{t}\left(W^{\text{\textit{x,y,t}}}\right):= \sup \left\{s\leq t,\; W^{\text{\textit{x,y,t}}}_{s}=0 \right\} \;. $$

Then, the CDF F(x,y,t;s) of gt(Wx,y,t) is given, for s∈[0,t], by :

$$\begin{array}{@{}rcl@{}} \mathbb{P} \left(g_{t} \left(W^{\text{\textit{x,y,t}}}\right)\! \leq s \right)\! \,=\, F (x,y,t;s)\! &:\! =\! 1 \,-\, \mathrm{e}^{-\frac{xy}{t}} \left(d_{+} (x,y,t;s) \,+\, d_{-}(x,y,t;s) \right) \;, \end{array} $$
(12)
$$\begin{array}{@{}rcl@{}} \text{where } \qquad d_{\pm}(x,y,t;s) &: = \mathrm{e}^{\frac{\pm|xy|}{t}} \Phi \left(\mp|x|\sqrt{\frac{t-s}{st}}-|y| \sqrt{\frac{s}{t(t-s)}} \right) \;. \end{array} $$
(13)

In particular, the probability that Wx,y,t does not hit 0 during [0,t] equals:

$$\mathbb{P} \left(g_{t} \left(W^{\text{\textit{x,y,t}}} \right) = 0 \right) = F(x,y,t;0) = 1-e^{-\frac{xy + |xy|}{t}} \;. $$

Note also the special case when y=0 :

$$\mathbb{P} \left(g_{t} \left(W^{x,0,t} \right) = t \right) = 1 \;. $$

Proof

Using time reversal and the absolute continuity formula of the Brownian bridge with respect to the free Brownian motion (see Salminen (1997)), the PDF of \(g_{t}\left(W^{x,y,t}\right)\) is given, for y≠0, by:

$$\mathbb{P} \left(g_{t} \left(W^{x,y,t} \right)\in ds \right) = \frac{|y|\sqrt{t}} {\sqrt{2\pi}}\, e^{\frac{(y-x)^{2}}{2t}}\, \frac{1}{\sqrt{s}\, (t-s)^{3/2}}\, e^{-\frac{x^{2}}{2s}}\, e^{-\frac{y^{2}} {2(t-s)}} \, ds \;. $$

Integrating over [0,t], we first deduce that

$$ \frac{|y|\sqrt{t}} {\sqrt{2\pi}} \int_{0}^{t} \frac{e^{-\frac{x^{2}}{2s}}} {\sqrt{s}} \frac{e^{-\frac{y^{2}} {2(t-s)}}} {(t-s)^{3/2}} \, ds = \exp \left(-\frac{(|y|+|x|)^{2}}{2t} \right) \;. $$
(14)
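
As a quick numerical sanity check of (14) (not part of the proof), one may compare both sides by quadrature; the sketch below is in Python with SciPy, and the chosen values of x, y, t are arbitrary.

import numpy as np
from scipy.integrate import quad

def lhs_of_14(x, y, t):
    # Left-hand side of (14), computed by numerical integration over s in (0, t).
    integrand = lambda s: np.exp(-x**2 / (2 * s)) / np.sqrt(s) \
                          * np.exp(-y**2 / (2 * (t - s))) / (t - s)**1.5
    val, _ = quad(integrand, 0.0, t)
    return abs(y) * np.sqrt(t) / np.sqrt(2 * np.pi) * val

x, y, t = 0.7, -1.2, 3.0
print(lhs_of_14(x, y, t), np.exp(-(abs(x) + abs(y))**2 / (2 * t)))  # the two values should agree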

We shall now compute a modified Laplace transform of F, and then invert it. Integrating by parts and using (14), we deduce that:

$$\lambda \int_{0}^{t} \frac{e^{-\frac{\lambda}{2s}}}{2s^{2}} F(x,y,t; s) ds = e^{-\frac{\lambda}{2t}} - e^{-\frac{\lambda}{2t}} \exp\left(-\frac{xy}{t} -\frac{|y| \sqrt{\lambda+x^{2}}}{t}\right)\;. $$

Observe next that, by a change of variable:

$$\lambda \int_{0}^{t} \frac{e^{-\frac{\lambda}{2s}}}{2s^{2}} F(x,y,t; s) ds = \lambda e^{-\frac{\lambda}{2t}}\int_{0}^{+\infty} e^{-\lambda v} F\left(x,y,t; \frac{1}{2v+1/t}\right) dv $$

hence

$$\int_{0}^{+\infty} e^{-\lambda v} F\left(x,y,t; \frac{1}{2v+1/t}\right) dv = \frac{1}{\lambda} - \frac{1}{\lambda} \exp\left(-\frac{xy}{t} -\frac{|y| \sqrt{\lambda+x^{2}}}{t}\right) $$

and the result follows by inverting this Laplace transform thanks to the formulae, valid for a>0 and b>0:

$$\frac{1}{\lambda}\exp\left(-a\sqrt{\lambda+x^{2}}\right) = \frac{a}{2\sqrt{\pi}} \int_{0}^{+\infty} e^{-\lambda v} \int_{0}^{v} e^{-ux^{2}} \frac{1}{u^{3/2}} e^{-\frac{a^{2}}{4u}}du\, dv $$

and

$$\int_{0}^{z} e^{-au - b/u}\, \frac{du}{u^{3/2}} = \frac{\sqrt{\pi}}{2\sqrt{b}}\left(e^{2\sqrt{ab}}\,\text{Erfc}\left(\sqrt{\frac{b}{z}} + \sqrt{az}\right)+e^{-2\sqrt{ab}}\,\text{Erfc}\left(\sqrt{\frac{b}{z}} - \sqrt{az}\right) \right) \,. $$

Simulating a continuous trajectory of a Brownian lazy clock θ exactly is an impossible task: the path \((W_{t})_{t\geq s}\) hits the level \(W_{s}\) infinitely many times during any future period starting from s; in particular, \(t\mapsto W_{t}(\omega)\) crosses 0 infinitely many times in the time interval [0,ε] for every ε>0 (see, e.g., Baldi (2017, p. 58-59, Remark 4) or Karatzas and Shreve (2005, p. 94, Problem 7.18)). Just as for the Brownian motion itself, one can only hope to sample trajectories on a discrete time grid, where the maximum stepsize provides some control over the approximation and corresponds to a basic unit of time. By doing so, we disregard the specific jump times of θ and focus instead on the supremum of the zeroes of the Brownian motion within each grid interval which, conditionally on the values of W at the grid points, is distributed as the last zero of a Brownian bridge and can therefore be drawn from the law given in Lemma 1.
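
The following sketch illustrates one such grid scheme in Python (reusing bridge_last_zero_cdf defined after Lemma 1); the names and the inverse-transform step are ours and do not necessarily reflect the authors' exact implementation. On each grid interval we sample the Brownian increment and, conditionally on the endpoints, draw the last zero of the corresponding bridge by numerically inverting the CDF of Lemma 1; if the bridge has no zero on that interval, the clock value is carried over.

import numpy as np
from scipy.optimize import brentq

# bridge_last_zero_cdf(x, y, t, s) is the function given after Lemma 1.

def sample_bridge_last_zero(x, y, dt, rng):
    # Last zero of a Brownian bridge from x to y on [0, dt], drawn by inverse transform
    # from F(x, y, dt; .). Returns None when the bridge does not hit zero.
    u = rng.uniform()
    if u <= bridge_last_zero_cdf(x, y, dt, 0.0):
        return None
    return brentq(lambda s: bridge_last_zero_cdf(x, y, dt, s) - u, 0.0, dt)

def brownian_lazy_clock_path(T, n_steps, rng):
    # Values of theta_t = g_t(W) on the grid 0 = s_0 < s_1 < ... < s_n = T.
    dt = T / n_steps
    w, theta = 0.0, np.zeros(n_steps + 1)
    for i in range(n_steps):
        w_next = w + np.sqrt(dt) * rng.standard_normal()
        z = sample_bridge_last_zero(w, w_next, dt, rng)
        theta[i + 1] = theta[i] if z is None else i * dt + z
        w = w_next
    return theta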

Example 7

(PWC martingale on (0,1)) Let N be a Poisson process with intensity λ and \(\tilde {Z}\) be the Φ-martingale (Jeanblanc and Vrins 2018) with constant diffusion coefficient η,

$$ \tilde{Z}_{t}:=\Phi\left(\Phi^{-1}(Z_{0})\,e^{\frac{\eta^{2}}{2}t}+\eta \int_{0}^{t}e^{\frac{\eta^{2}}{2}(t-s)}dW_{s}\right)\;. $$
(15)

Then, the stochastic process Z defined as \(Z_{t}:=\tilde {Z}_{g_{t}(N)}\), t≥0, is a pure jump martingale on (0,1) with CDF

$$ F_{Z_{t}} (z) = e^{-\lambda t} \left(1_{\{Z_{0}\leq z\}}+ \lambda \int_{0}^{t} \Phi \left(\frac{\Phi^{-1}(z) - \Phi^{-1} (Z_{0})\, e^{\frac{\eta^{2}}{2}u}} {\sqrt{e^{\eta^{2}u}-1}} \right) e^{\lambda u}\, du \right) \;. $$
(16)

Some sample paths of \(\tilde {Z}\) and Z are drawn in Fig. 2. Notice that all the martingales \(\tilde {Z}\) given above can be simulated without error using their exact solution.
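
To illustrate this remark, note that writing \(X_{t}:=\Phi^{-1}\left(\tilde{Z}_{t}\right)\), (15) implies the exact Gaussian recursion \(X_{t+\Delta}=X_{t}\,e^{\eta^{2}\Delta/2}+\sqrt{e^{\eta^{2}\Delta}-1}\,\varepsilon\) with ε∼N(0,1) independent of the past, so \(\tilde{Z}\) only needs to be simulated at the synchronization times. A minimal sketch in Python follows (reusing the Poisson lazy clock functions of Section 4.1.1; names and parameter values are ours).

import numpy as np
from scipy.stats import norm

def phi_martingale_at_times(z0, eta, times, rng):
    # Exact values of the Phi-martingale (15) at the increasing times 0 = t_0 < t_1 < ...
    x = norm.ppf(z0)
    vals = [z0]
    for dt in np.diff(times):
        x = x * np.exp(0.5 * eta**2 * dt) + np.sqrt(np.exp(eta**2 * dt) - 1.0) * rng.standard_normal()
        vals.append(norm.cdf(x))
    return np.array(vals)

# Lazy martingale Z_t = Ztilde_{g_t(N)} on a display grid (illustrative parameters).
rng = np.random.default_rng(1)
T, lam, eta, z0 = 15.0, 0.2, 0.25, 0.5
sync = np.concatenate(([0.0], poisson_jump_times(lam, T, rng)))   # tau_0 = 0, then the jump times of N
ztilde = phi_martingale_at_times(z0, eta, sync, rng)
grid = np.linspace(0.0, T, 601)
Z = ztilde[np.searchsorted(sync, grid, side="right") - 1]         # piecewise constant path of Z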

Fig. 2 (Sample paths of Z): four sample paths of \(\tilde {Z}\) (circles) and Z (no marker) up to T=15 years, where \(\tilde {Z}\) is the Φ-martingale with Z0=0.5. a (η=25%, λ=20%) and b (η=15%, λ=50%)

Figure 3 shows the CDF of Z and of \(\tilde {Z}\), where the latter is the Φ-martingale. The main differences between these two sets of curves result from the fact that \(\mathbb {P} \left (\tilde {Z}_{t}=Z_{0} \right) = 0 \) for all t>0, whereas \(\mathbb {P} \left (Z_{t}=Z_{0} \right) = \mathbb {P} \left (\tilde {Z}_{g_{t}(N)} = Z_{0} \right) = \mathbb {P} (N_{t}=0)>0\), and from the delay induced by the fact that Zt corresponds to some past value of \(\tilde {Z}\).

Fig. 3 (CDF of \(\tilde {Z}_{t}\) and Zt): CDF of \(\tilde {Z}_{t}\) (circles) and Zt (no marker), where \(\tilde {Z}\) is the Φ-martingale and t is 0.5 (blue), 5 (red), and 40 (magenta) years. a (Z0=50%, η=25%, λ=20%), b (Z0=50%, η=15%, λ=50%), c (Z0=35%, η=15%, λ=50%), and d (Z0=35%, η=25%, λ=5%)

5 Conclusion and future research

Many applications, in particular in mathematical finance, rely extensively on martingales. In this context, discrete- or continuous-time processes are commonly considered. However, in some specific cases, such as when working under partial information or when market quotes arrive in a scarce way, it is more realistic to assume that conditional expectations evolve in a piecewise constant fashion. Such processes have received little attention so far, and our paper aims at filling this gap. We focused on the construction of piecewise constant martingales, that is, martingales whose trajectories are piecewise constant. Such processes are indeed good candidates to model the dynamics of conditional expectations of random variables under partial (punctual) information. The time-change approach proves to be quite powerful: starting with a martingale in a given range, we obtain a PWC martingale by using a piecewise constant time-change process. Among those time-change processes, lazy clocks are specifically appealing: they stay always in arrears of the real clock and synchronize to the calendar time at some random times. This ensures that \(\theta_{t}\leq t\), which is a convenient feature when one needs to sample trajectories of the time-change process. Such random times can typically be characterized as last passage times, and enjoy appealing tractability properties. The last jump time of a Poisson process before the current time, for instance, has a very simple distribution. Other lazy clocks have been proposed as well, based on Brownian motions and Bessel processes, some of which rule out the probability mass at zero. We provided several martingales time-changed with lazy clocks (called lazy martingales) whose range can be any interval of \(\mathbb {R}\) (depending on the range of the latent martingale) and showed that the corresponding distributions can easily be obtained in closed form. Finally, we presented algorithms to sample Poisson and Brownian lazy clocks, thereby providing the reader with a workable toolbox to efficiently use piecewise constant martingales in practice.

This paper paves the way for further research in both probability theory and mathematical finance. Tractability and, even more importantly, the martingale property result from the independence assumption between the latent martingale and the time-change process. It might be interesting, however, to consider cases where the sampling frequency (the synchronization rate of the lazy clock θ to the real clock) depends on the level of the latent martingale \(\tilde {Z}\). Finding a tractable model allowing for this coupling remains an open question at this stage. On the other hand, it is as yet unclear how dealing with more realistic processes, such as piecewise constant ones, would impact hedging strategies and model completeness in finance. Investigating this route is the purpose of a research project that we are about to initiate.

Notes

  1. We are grateful to an anonymous referee for pointing this out.

Abbreviations

CDF:

Cumulative distribution function

CIR:

Cox–Ingersoll–Ross

PWC:

Piecewise constant

PDF:

Probability density function

References

  • Aksamit, A., Jeanblanc, M.: Enlargement of Filtrations with Finance in View. Springer, Switzerland (2017).


  • Altman, E., Brady, B., Resti, A., Sironi, A.: The link between defaults and recovery rates: theory, empirical evidence, and implications. Technical report, Stern School of Business (2003).

  • Amraoui, S., Cousot, L., Hitier, S., Laurent, J. -P.: Pricing CDOs with state-dependent stochastic recovery rates. Quant. Finan. 12(8), 1219–1240 (2012).


  • Andersen, L., Sidenius, J.: Extensions to the Gaussian copula: random recovery and random factor loadings. J. Credit Risk. 1(1), 29–70 (2004).


  • Baldi, P.: Stochastic Calculus. Universitext. Springer, Switzerland (2017).


  • Bertoin, J.: Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge (1996).


  • Boel, R., Varaiya, P., Wong, E.: Martingales on jump processes. I. Representation results. SIAM J. Control. 13(5), 999–1021 (1975).


  • Boel, R., Varaiya, P., Wong, E.: Martingales on jump processes. II. Applications. SIAM J. Control. 13(5), 1022–1061 (1975).


  • Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman & Hall, USA (2004).


  • Dellacherie, C., Maisonneuve, B., Meyer, P. -A.: Probabilités et Potentiel - Processus de Markov. Hermann, France (1992).


  • Gaspar, R., Slinko, I.: On recovery and intensity’s correlation - a new class of credit models. J. Credit Risk. 4(2), 1–33 (2008).


  • Gradinaru, M., Roynette, B., Vallois, P., Yor, M.: Abel transform and integrals of Bessel local times. Ann. Inst. H. Poincaré Probab. Statist. 35(4), 531–572 (1999).


  • Gradshteyn, I. S., Ryzhik, I. M.: Table of Integrals, Series, and Products. Seventh edition. Elsevier/Academic Press, Amsterdam (2007).


  • Herdegen, M., Herrmann, S.: Single jump processes and strict local martingales. Stoch. Process. Appl. 126(2), 337–359 (2016).


  • Jacod, J., Skorohod, A. V.: Jumping filtrations and martingales with finite variation. In: Séminaire de Probabilités, XXVIII, volume 1583 of Lecture Notes in Math, pp. 21–35. Springer, Berlin (1994).


  • Jeanblanc, M., Vrins, F.: Conic martingales from stochastic integrals. Math. Financ. 28(2), 516–535 (2018).


  • Jeanblanc, M., Yor, M., Chesney, M.: Martingale Methods for Financial Markets. Springer Verlag, Berlin (2007).


  • Kahale, N.: Analytic crossing probabilities for certain barriers by Brownian motion. Ann. Appl. Probab. 18(4), 1424–1440 (2008).


  • Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Springer, New York (2005).


  • Mansuy, R., Yor, M.: Random Times and Enlargement of Filtrations in a Brownian Setting. Lecture Notes in Mathematics. Springer, Berlin Heidelberg (2006).


  • Markit: ISDA CDS Standard Model. Technical report (2004). http://www.cdsmodel.com/cdsmodel/.

  • Protter, P.: Stochastic Integration and Differential Equations. Second edition. Springer, Berlin (2005).


  • Rainer, C.: Projection d’une diffusion sur sa filtration lente. In: Séminaire de Probabilités, XXX, volume 1626 of Lecture Notes in Math, pp. 228–242. Springer, Berlin (1996).


  • Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Springer-Verlag, New York (1999).


  • Salminen, P.: On the first hitting time and the last exit time for a Brownian motion to/from a moving boundary. Adv. Appl. Probab. 20(1), 411–426 (1988).


  • Salminen, P.: On last exit decompositions of linear diffusions. Studia. Sci. Math. Hungar. 33(1–3), 251–262 (1997).


  • Shreve, S. E.: Stochastic Calculus for Finance vol. II - Continuous-time models. Springer, New York (2004).


  • Vrins, F.: Characteristic function of time-inhomogeneous Lévy-driven Ornstein-Uhlenbeck processes. Stat. Probab. Lett. 116, 55–61 (2016).



Acknowledgements

The authors are grateful to M. Jeanblanc, D. Brigo, K. Yano, and G. Zhengyuan for stimulating discussions about an earlier version of this manuscript. We wish to thank the two anonymous referees and the AE whose many suggestions helped to improve the presentation of this paper.

Funding

This research benefitted from the support of the Chaire Marchés en Mutation from the Fédération Bancaire Française. The project “Dynamic Modeling of Recovery Rates” receives the support of Wallonie-Bruxelles International and of the Fonds de la Recherche Scientifique, of the Ministère Français des Affaires étrangères et européennes, of the Ministère de l’Enseignement supérieur et de la Recherche via the instrument Partenariats Hubert Curien. This work was supported by the Fonds de la Recherche Scientifique -FNRS under Grant J.0037.18.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Author information



Contributions

The authors contributed equally to this paper. They both read and approved the final manuscript and jointly bear full responsibility regarding potential remaining errors.

Corresponding author

Correspondence to Frédéric Vrins.

Ethics declarations

Authors’ information

C.P. is with the Laboratoire de Mathématiques et Modélisation d'Évry. He is a co-author, with F. Hirsch, B. Roynette, and M. Yor, of the monograph Peacocks and Associated Martingales, with Explicit Constructions. F.V. served as a quantitative analyst on the trading floor of a major European bank before moving back to academia. He is Research Director of the Louvain Finance Center (LFIN) and Faculty Member of the Center for Operations Research and Econometrics (CORE).

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Profeta, C., Vrins, F. Piecewise constant martingales and lazy clocks. Probab Uncertain Quant Risk 4, 2 (2019). https://doi.org/10.1186/s41546-019-0036-4


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s41546-019-0036-4

Keywords