Piecewise Constant Martingales and Lazy Clocks

This paper discusses how to find and construct \textit{piecewise constant martingales}, that is, martingales with piecewise constant sample paths evolving in a connected subset of $\mathbb{R}$. After a brief review of standard techniques, we propose a construction based on the sampling of latent martingales $\tilde{Z}$ with \textit{lazy clocks} $\theta$. These $\theta$ are time-change processes staying in arrears of the true time but that can synchronize at random times to the real clock. This specific choice makes the resulting time-changed process $Z_t=\tilde{Z}_{\theta_t}$ a martingale (called a \textit{lazy martingale}) without any assumption on $\tilde{Z}$, and in most cases the lazy clock $\theta$ is adapted to the filtration of the lazy martingale $Z$. This would not be the case if the stochastic clock $\theta$ could be ahead of the real clock, as is typically the case with standard time-change processes. The proposed approach yields an easy way to construct analytically tractable lazy martingales evolving on (intervals of) $\mathbb{R}$.


Introduction
In the literature, pure jump processes defined on a filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, \Pr)$, where $\mathbb{F} := (\mathcal{F}_t,\ 0 \le t \le T)$ and $\mathcal{F} := \mathcal{F}_T$, are often referred to as stochastic processes having no diffusion part. In this paper we are interested in a subclass of pure jump (PJ) processes: piecewise constant (PWC) martingales, defined as follows.

Definition 1.1 (Piecewise constant martingale). A piecewise constant $\mathbb{F}$-martingale $Z$ is a càdlàg $\mathbb{F}$-martingale whose jumps $\Delta Z_s = Z_s - Z_{s^-}$ are summable (i.e. $\sum_{s \le T} |\Delta Z_s| < +\infty$ a.s.) and such that for every $t \in [0, T]$:
$$Z_t = Z_0 + \sum_{0 < s \le t} \Delta Z_s.$$
In particular, the sample paths $Z(\omega)$, $\omega \in \Omega$, belong to the class of piecewise constant functions of time.
Note that an immediate consequence of this definition is that a PWC martingale has finite variation. Such processes may be used to represent martingales observed under partial (punctual) information, e.g. at some (random) times. One possible field of application is mathematical finance, where discounted price processes are martingales under an equivalent measure. Without additional information, a reasonable approach may consist in assuming that discounted prices remain constant between arrivals of market quotes, and jump to the level given by the new quote when a new trade is done. More generally, this could represent conditional expectation processes (i.e. "best guesses") where information arrives in a random and discontinuous way. An interesting application in that respect is the modeling of quoted recovery rates. They correspond to the market's view of a firm's recovery rate $R$ upon default. Being conditional expectations of random variables in $[0,1]$ associated to remote events, they are martingales evolving in the unit interval, whose trajectories remain constant for long periods of time, but jump from time to time, when dealers update their views to specialized data providers.
Pure jump martingales can easily be obtained by taking the difference of a pure jump increasing process with a predictable, grounded, right-continuous process of bounded variation (called the compensator). The simplest example is probably the compensated Poisson process of parameter $\lambda$, defined by $(M_t = N_t - \lambda t,\ t \ge 0)$. This process is a pure jump martingale with piecewise linear sample paths, hence is not a PWC martingale, as $\sum_{s \le t} \Delta M_s = N_t \ne M_t$. Clearly, not all martingales having no diffusion term are piecewise linear. For example, the Azéma martingale $M$ defined as
$$M_t = \operatorname{sgn}(W_t)\,\sqrt{\frac{\pi}{2}\,\big(t - g^0_t(W)\big)}, \qquad g^0_t(W) := \sup\{s \le t;\ W_s = 0\}, \tag{1.1}$$
where $W$ is a Brownian motion, is essentially piecewise square-root (see e.g. Section 8 of [8] for a detailed analysis of this process). Similarly, the geometric Poisson process $e^{N_t \log(1+\sigma) - \lambda\sigma t}$ is a positive martingale with piecewise negative-exponential sample paths [12, Ex. 11.5.2].
In Section 2, we present several routes to construct PWC martingales. We then introduce a different approach in Section 3, based on a time-change technique. This method proves to be very flexible, as the time-changed and the latent processes have the same range (when the latter is not time-dependent).

Piecewise constant martingales
Most of the "usual" martingales with no diffusion term fail to have piecewise constant sample paths. However, finding such processes is not difficult. We provide below three different methods to construct them. Yet, not all are equally powerful in terms of tractability. The last method proves to be quite appealing in that it yields PWC martingales whose range can be any connected set.

An autoregressive construction scheme
We start by looking at a subset of PWC martingales, namely step martingales. These are martingales whose paths belong to the space of step functions on any bounded interval. As a consequence, a step martingale $Z$ admits a finite number of jumps on $[0, T]$, taking place at, say, $(\tau_k,\ k \ge 1)$, and may be decomposed as (with $\tau_0 := 0$)
$$Z_t = \sum_{k \ge 0} Z_{\tau_k}\, 1_{\{\tau_k \le t < \tau_{k+1}\}}.$$
Looking at this decomposition, we see that step martingales may easily be constructed by an autoregressive scheme.
Proposition 1. Let $M = (M_k,\ k \ge 0)$ be a discrete-time martingale, let $(\tau_k,\ k \ge 1)$ be an increasing sequence of random times, independent from $M$, and set $A_t := \sum_{k=1}^{+\infty} 1_{\{\tau_k \le t\}}$. We assume that $E[A_t] < +\infty$. Then, the process $Z_t := M_{A_t}$ is a step martingale with respect to its natural filtration.
Proof. We first have $\sum_{k \ge 1} \Pr(\tau_k \le t) = E[A_t] < +\infty$, which proves that $Z_t$ is integrable. The martingale property is then an immediate consequence of the increasing time change $A$.

Example 2.1. Let $N$ be a Cox process with intensity $\lambda = (\lambda_t)_{t \ge 0}$ and let $\tau_1, \dots, \tau_{N_t}$ be the sequence of jump times of $N$ on $[0, t]$, with $\tau_0 := 0$. If $(Y_k,\ k \ge 1)$ is a family of independent and centered random variables, then
$$Z_t := Z_0 + \sum_{k=1}^{N_t} Y_k$$
is a PWC martingale. Note that we may choose the range of such a PWC martingale by taking bounded random variables: for instance, if $Z_0 = 0$ and, for any $k \ge 1$, the support of $Y_k$ shrinks like $1/k^2$, the martingale evolves in a bounded interval.

The above construction scheme provides us with a simple method to construct PWC martingales. Yet, it suffers from two restrictions. First, the distribution of $Z_t$ requires averaging the conditional distribution with respect to the Poisson distribution of rate $\lambda$, i.e. an infinite sum. Second, controlling the range of the resulting martingale requires strong assumptions: in Example 2.1, the $Y_i$'s are independent, but their support decreases as $1/k^2$. One might try to relax the independence assumption by drawing $Y_i$ from a distribution whose support depends on the current state. By doing so, however, we typically lose the tractability of the distribution. In Example 2.1 for instance, the characteristic function can be found in closed form, but it features an infinite sum (over the Poisson states) of products (of increasing size) of characteristic functions associated to the random variables $(Y_i)$. In the sequel, we address these drawbacks by proposing another construction scheme that provides more tractable expressions.
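A minimal simulation sketch of Example 2.1 with a plain Poisson clock: the specific choice $Y_k \sim U[-1/k^2, 1/k^2]$ is ours (bounded, centered, support shrinking like $1/k^2$), so the martingale stays in $(-\pi^2/6, \pi^2/6)$ and its empirical mean should match $Z_0$.

```python
import numpy as np

def step_martingale_samples(lam, T, n_paths, z0=0.0, rng=None):
    """Sample Z_T = z0 + sum_{k<=N_T} Y_k with N_T ~ Poisson(lam*T)
    and Y_k uniform on [-1/k^2, 1/k^2] (a bounded, centered choice)."""
    rng = np.random.default_rng(rng)
    zT = np.full(n_paths, z0)
    n_jumps = rng.poisson(lam * T, size=n_paths)
    for i, n in enumerate(n_jumps):
        if n > 0:
            k = np.arange(1, n + 1)
            zT[i] += np.sum(rng.uniform(-1.0, 1.0, size=n) / k**2)
    return zT

zT = step_martingale_samples(lam=2.0, T=1.0, n_paths=20000, rng=1)
print(zT.mean())  # martingale property at T: close to z0 = 0
```

The bound $|Z_T| < \sum_k 1/k^2 = \pi^2/6$ holds path by path, illustrating how bounded jumps control the range.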

PWC martingales from PJ martingales with vanishing compensator
As hinted in the introduction, PWC martingales can easily be obtained by taking the difference of two pure jump processes whose compensators cancel out. We start by looking at subordinators.

Lemma 2.1 (Pure jump martingales constructed from subordinators). Let $J^1$ and $J^2$ be two i.i.d. driftless subordinators with characteristic exponent
$$\phi(u) = \int_0^{+\infty} \big(e^{iux} - 1\big)\,\nu(dx).$$
We assume that the Lévy measure $\nu$ satisfies the integrability condition $\int_0^{+\infty} x\,\nu(dx) < +\infty$. Then, $Z := J^1 - J^2$ is a PWC symmetric martingale whose characteristic function is given by
$$E\big[e^{iuZ_t}\big] = \exp\left(2t\int_0^{+\infty}\big(\cos(ux) - 1\big)\,\nu(dx)\right).$$

Proof. Observe first that the assumption $\int_1^{+\infty} x\,\nu(dx) < +\infty$ implies that $J^1$ is integrable, while $\int_0^1 x\,\nu(dx) < +\infty$ implies that $J^1$ admits the decomposition $J^1_t = \sum_{s \le t} \Delta J^1_s$, see [1, p. 15]. The result then follows from the independence of $J^1$ and $J^2$, which gives $E[e^{iuZ_t}] = E[e^{iuJ^1_t}]\,E[e^{-iuJ^2_t}] = e^{t(\phi(u) + \phi(-u))}$.

As obvious examples, one can mention the difference of two independent Gamma or Poisson processes with the same parameters. Note that stable subordinators are not allowed here, as they do not fulfill the integrability condition. We give below the probability laws of these two examples.

Example 2.2. Let $N^1, N^2$ be two independent Poisson processes with parameter $\lambda$. Then, $Z := N^1 - N^2$ is a step martingale taking integer values, with marginals given by the Skellam distribution with parameters $\mu_1 = \mu_2 = \lambda t$:

$$\Pr(Z_t = k) = e^{-2\lambda t}\, I_{|k|}(2\lambda t), \qquad k \in \mathbb{Z}, \tag{2.1}$$
where $I_k$ is the modified Bessel function of the first kind.
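SciPy ships both the Skellam distribution and the modified Bessel function, so the marginal law above can be cross-checked numerically (the values of $\lambda$ and $t$ below are arbitrary):

```python
import numpy as np
from scipy.special import iv      # modified Bessel function of the first kind
from scipy.stats import skellam

# Pr(Z_t = k) = exp(-2*lam*t) * I_|k|(2*lam*t): Skellam with mu1 = mu2 = lam*t
lam, t = 1.5, 2.0
mu = lam * t
ks = np.arange(-5, 6)
bessel_form = np.exp(-2 * mu) * iv(np.abs(ks), 2 * mu)
max_err = np.max(np.abs(skellam.pmf(ks, mu, mu) - bessel_form))
print(max_err)  # agreement up to numerical precision
```

The symmetry $\Pr(Z_t = k) = \Pr(Z_t = -k)$ is visible in the formula through $|k|$.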
Example 2.3. Let $\gamma^1, \gamma^2$ be two independent Gamma processes with parameters $a, b > 0$. Then, $Z := \gamma^1 - \gamma^2$ is a PWC martingale with marginals given by
$$f_{Z_t}(z) = \frac{b^{at + \frac12}}{\sqrt{\pi}\,\Gamma(at)\,2^{at - \frac12}}\,|z|^{at - \frac12}\,K_{at - \frac12}(b|z|),$$
where $K_\beta$ denotes the modified Bessel function of the second kind with parameter $\beta \in \mathbb{R}$.
Proof. The probability density of $Z_t$ is given, for $2at > 1$, by the inverse Fourier transform of the characteristic function $u \mapsto (1 + u^2/b^2)^{-at}$, see [4, p. 349, Formula 3.385(9)]. The result then follows by analytic continuation.
Note that, more generally, a similar proof allows one to characterize the centered Lévy processes which are PWC martingales.
Proposition 2. A centered Lévy process $L$ is a PWC martingale if and only if it has no drift, no Brownian component, and its Lévy measure $\nu$ satisfies the integrability condition $\int_{\mathbb{R}} |x|\,\nu(dx) < \infty$, i.e. its Lévy triplet is $(0, 0, \nu)$ with $\nu$ integrable as above.
We conclude this section with an example of PWC martingale which does not belong to the family of Lévy processes but has the interesting feature to evolve in a time-dependent range.
Lemma 2.2. Let $W^1, W^2$ be two independent Brownian motions and, for $i = 1, 2$, set $g^0_t(W^i) := \sup\{s \le t;\ W^i_s = 0\}$. Then $Z := g^0(W^1) - g^0(W^2)$ is a PWC martingale evolving in the time-dependent range $[-t, t]$. Its Laplace transform admits a series expansion, and its cumulative distribution function (for $t > 0$) is given in closed form, for $-t \le z \le t$, in terms of the incomplete elliptic integral of the first kind.

Proof. By [8, Theorem 87], the processes $(g^0_t(W^i) - \frac{t}{2},\ t \ge 0)$ are martingales, hence so is $Z$. Denoting by $M$ the Azéma martingale (1.1), the PWC property follows from the fact that $g^0(W^i)$ increases only on the zero set of $W^i$, which has zero Lebesgue measure. Next, the self-similarity of $Z$ comes from that of $g^0(W)$, which further implies that $Z_t \stackrel{d}{=} t\,Z_1$ for $t \ge 0$. Finally, since $g^0_1(W)$ follows the arcsine law, we deduce on the one hand, using a Cauchy product, the series expansion of the Laplace transform. On the other hand, the density of $Z_1$ is given by a convolution which, for $z \in [0, 1]$, can be expressed through the incomplete elliptic integral of the first kind $F$, see [4, p. 275, Formula 3.147(5)]. This yields the density by symmetry and scaling, and the resulting cumulative distribution function is obtained upon integration in $z$.
Both the recursive and the vanishing-compensator approaches are rather restrictive in terms of attainable range and analytical tractability. In the next section, we provide a more general method that can be used to build PWC martingales on any connected subset of $\mathbb{R}$ (compatible with the martingale property, i.e. non-decreasing w.r.t. time) in a simple and tractable way.

PWC martingales using time-changed techniques
In this section, we construct a PWC martingale $Z$ by time-changing a latent $(\Pr, \mathbb{F})$-martingale $\tilde Z = (\tilde Z_t)_{t \in [0,T]}$ with the help of a suitable time-change process $\theta$.
Under mild conditions stated below, $Z := (\tilde Z_{\theta_t})_{t \ge 0}$ is proven to be a martingale on $[0, T]$ with respect to its own filtration, with the desired piecewise constant behavior. Most results regarding time-changed martingales deal with continuous martingales time-changed with a continuous process [6, 9]. This does not provide a satisfactory solution to our problem, as the resulting martingale will obviously have continuous sample paths. On the other hand, it is obvious that not all time-changed martingales remain martingales, so that conditions are required on $\tilde Z$ and/or on $\theta$.
Remark 2.1. Every F-semimartingale time-changed with an F-adapted process remains a semimartingale, but not necessarily a martingale: for instance, setting $\tilde Z = W$ and $\theta_t = \inf\{s;\ W_s > t\}$, then $\tilde Z_{\theta_t} = t$. Also, even if $\theta$ is independent from $\tilde Z$, $Z$ may fail to be a martingale in the above filtration because of integrability issues. For example, if $\tilde Z = W$ and $\theta$ is an independent $\alpha$-stable subordinator with $\alpha = 1/2$, then the time-changed process $Z$ is not integrable:
$$E[|Z_t|] = E\big[E[|W_{\theta_t}|\,\big|\,\theta_t]\big] = \sqrt{\tfrac{2}{\pi}}\,E\big[\sqrt{\theta_t}\big] = +\infty,$$
since a $1/2$-stable subordinator satisfies $E[\theta_t^p] = +\infty$ for $p \ge 1/2$.

A sufficient condition ensuring that the time-changed martingale remains a martingale is to constrain $\tilde Z$ to be positive and independent from $\theta$. Taking as $\theta$ a time-change process independent from $\tilde Z > 0$, this result allows one to construct piecewise constant martingales having the same range as $\tilde Z$. This is shown in the next lemma [2, Lemma 15.2].

Lemma 2.3. Let $\tilde Z$ be a positive martingale (in its own filtration) and $\theta$ be an independent time-change process. Then, the time-changed process $Z$ is again a martingale in the filtration generated by the time-changed process $Z$ and the stochastic clock $\theta$.
As suggested in [2], one way to relax the positivity constraint on $\tilde Z$ is to impose an integrability condition on $\tilde Z$ only. For instance, uniform integrability of $\tilde Z$ is enough in that respect.
Lemma 2.4. Let $\tilde Z$ be a uniformly integrable martingale relative to its natural filtration. Then $Z_\cdot := \tilde Z_{\theta_\cdot}$ is a martingale in the filtration generated by the time-changed process $Z$ and the stochastic clock $\theta$.
Proof. It is enough to discuss the integrability of $Z$ (the argument for the conditional expectation is the same as above). The martingale property of $\tilde Z$ forces $|\tilde Z|$ to be a submartingale, so that $u \mapsto E[|\tilde Z_u|]$ is non-decreasing and, by uniform integrability, bounded by some constant $M$. Hence,
$$E[|Z_t|] = E\Big[E\big[|\tilde Z_u|\big]\big|_{u = \theta_t}\Big] \le M < +\infty.$$
Note that the requirement that $\tilde Z$ be integrable on $[0, \infty)$ is needed in the case where $\theta$ is unbounded. One can weaken the condition on $\tilde Z$ by moving the integrability requirement to the time-change process $\theta$, as shown in the lemma below.
Lemma 2.5. Let $\theta$ be a bounded time-change process (i.e. there exists an increasing function $k$ such that $\theta_t \le k(t)$ for all $t$, and thus $\theta_t \le k(T)$) and let $\tilde Z$ be a martingale (in its own filtration) on $[0, k(T)]$, independent from $\theta$. Then, $Z$ is a martingale on $[0, T]$ in the filtration generated by the time-changed process $Z$ and the stochastic clock $\theta$.
From a practical point of view, time-change processes $\theta$ that are unbounded on $[0, T]$ may cause some problems, especially when the transition densities of $\tilde Z$ are not explicitly known. In such cases indeed (or when $\tilde Z$ needs to be simulated jointly with other processes), sampling paths of $\tilde Z$ calls for a discretization scheme whose error typically increases with the time step. Hence, sampling $Z$ on $[0, T]$ typically requires a fine sampling of $\tilde Z$ on $[0, \theta_T]$, leading to prohibitive computational times if $\theta_T$ is allowed to take very large values. Hence, the class of time-change processes $\theta$ that are bounded by some function $k$ on $[0, T]$ for any $T < \infty$, whilst preserving analytical tractability, proves to be quite interesting. This requirement is of course violated by most of the standard time-change processes (e.g. integrated CIR, Poisson, Gamma, or compound Poisson subordinators). A naive alternative consists in capping the latter, but this would trigger some difficulties: using $\theta_t = N_t \wedge t$ would mean that $Z = Z_0$ on $[0, 1]$, whilst choosing $\theta_t = J_t \wedge t$ may produce linear pieces (hence not a piecewise constant process). There exist, however, simple time-change processes $\theta$ satisfying $\sup_{s \in [0,t]} \theta_s \le k(t)$ for some function $k$ bounded on any closed interval, that are piecewise constant, have stochastic jumps, and have a non-zero probability of jumping in any time set of non-zero measure. Building PWC martingales using such processes is the purpose of the next section.

Lazy martingales
We first present a stochastic time-change process satisfying this condition, in the sense that the calendar time is always ahead of the stochastic clock; that is, it satisfies the boundedness requirement of the above lemma with the linear boundary $k(t) = t$. We then use the latter to create PWC martingales.

Lazy clocks
We would like to define stochastic clocks that keep time frozen almost everywhere, can jump occasionally, but cannot go ahead of the real clock. Such stochastic clocks exhibit piecewise constant paths, and the last constraint has the nice feature that any stochastic process $Z$ adapted to $\mathbb{F}$ (i.e. $Z_t \in \mathcal{F}_t$) remains adapted to $\mathbb{F}$ enlarged with the filtration generated by $\theta$. In particular, we do not need to know the value of $Z$ after the real time $t$: as far as $Z$ is concerned, only the sample path of $\tilde Z$ up to $\theta_t \le t$ matters. In the sequel, we consider a specific class of such processes, called lazy clocks hereafter, with the specific property that the stochastic clock typically "sleeps" (i.e. is "on hold") but gets synchronized to the calendar time at some random times.
Definition 3.1. The stochastic process $\theta: \mathbb{R}_+ \to \mathbb{R}_+$, $t \mapsto \theta_t$, is an F-lazy clock if it satisfies the following properties: i) it is an F-time-change process: in particular, it is grounded ($\theta_0 = 0$), F-adapted, càdlàg and non-decreasing; ii) it has piecewise constant sample paths: $\theta_t = \sum_{s \le t} \Delta\theta_s$; iii) it can jump at any time and, when it does, it synchronizes to the calendar clock: $\Delta\theta_t > 0 \Rightarrow \theta_t = t$.

Remark 3.1. Note that for each $t > 0$, the random variable $\theta_t$ is a priori not an $(\mathcal{F}_s,\ s \ge 0)$-stopping time. In fact, defining $C_t := \inf\{s;\ \theta_s > t\}$, then $(C_t,\ t \ge 0)$ is an increasing family of F-stopping times. Conversely, for every $t \ge 0$, $\theta_t$ is an $(\mathcal{F}_{C_s},\ s \ge 0)$-stopping time, see Revuz-Yor [9, Chapter V].
In the following, we shall show that lazy clocks are essentially linked with last passage times, as illustrated in the next proposition.

Proposition 3. A process $\theta$ is a lazy clock if and only if it can be written as $\theta_t = \sup\{s \le t;\ A_{s^-} = 0 \text{ or } A_s = 0\}$ for some càdlàg adapted process $A$ with $A_0 = 0$ and countable zero set.

Proof. If $\theta$ is a lazy clock, the result is immediate by taking $A_t = \theta_t - t$, which is càdlàg and whose set of zeroes coincides with the jump times of $\theta$, hence is countable. Conversely, fix a path $\omega$. Since $A$ is càdlàg, the set $\mathcal{Z}(\omega) = \{s;\ A_{s^-}(\omega) = 0 \text{ or } A_s(\omega) = 0\}$ is closed, hence its complement may be written as a countable union of disjoint intervals, and we set $g_s(\omega) := \sup\{u \le s;\ u \in \mathcal{Z}(\omega)\}$. Observe first that since $s \mapsto g_s(\omega)$ is increasing, it has a countable number of discontinuities, hence the union on the right-hand side is countable. Furthermore, the non-empty intervals are such that $A_s(\omega) = 0$ or $A_{s^-}(\omega) = 0$ and $g_s(\omega) = s$; in particular, if $s_1 < s_2$ are associated with non-empty intervals, then $g_{s_1}(\omega) = s_1 \le g_{s_2^-}(\omega)$.
We give below examples of lazy clocks admitting simple closed-form distributions.

Poisson Lazy clock
Example 3.1. Let $(X_k,\ k \ge 1)$ be strictly positive random variables and consider the counting process
$$N_t := \sum_{k=1}^{+\infty} 1_{\{X_1 + \cdots + X_k \le t\}}, \qquad t \ge 0.$$
Then the process $(g_t(N),\ t \ge 0)$, defined as the last jump time of $N$ prior to $t$, or zero if $N$ did not jump by time $t$:
$$g_t(N) := \sup\{s \le t;\ N_s \ne N_{s^-}\} \qquad (\sup\emptyset := 0),$$
is a lazy clock.
In the case where $N$ is a Poisson process of intensity $\lambda$, i.e. when the r.v.'s $(X_k,\ k \ge 1)$ are i.i.d. with an exponential distribution of parameter $\lambda$, the law of $g_t(N)$ may easily be computed as follows.

Lemma 3.1. Let $N$ be a Poisson process with intensity $\lambda$. Then $g_t(N)$ has a probability mass $\Pr(g_t(N) = 0) = e^{-\lambda t}$ at zero and a density part
$$f_{g_t(N)}(s) = \lambda e^{-\lambda(t-s)}, \qquad 0 < s \le t,$$
and is zero elsewhere. Hence, the cumulative distribution function is $\Pr(g_t(N) \le s) = e^{-\lambda(t-s)}$ for $0 \le s \le t$, and the first moment, for instance, is $E[g_t(N)] = t - (1 - e^{-\lambda t})/\lambda$.

Proof. This result may be proven adopting a similar strategy as in Proposition 3 of [13], but we shall take here a shorter route. We merely have to show that (i) $\Pr(g_t(N) = 0) = e^{-\lambda t}$, (ii) $f_{g_t(N)}(s) = \lambda e^{-\lambda(t-s)}$ for all $0 < s < t$, and (iii) $\Pr(g_t(N) \le s) = 1$ if $s \ge t$. The event $\{g_t(N) = 0\}$ is equivalent to $\{N_t = 0\}$, whose probability is $e^{-\lambda t}$, proving (i); and $g_t(N) \le t$ $\Pr$-a.s., justifying (iii). The central point is to notice that the stochastic clock synchronizes to the real clock at each jump. When $t > s$, the event $\{g_t(N) \le s\}$ is equivalent to saying that no synchronization took place after $s$, i.e. $\{N_t = N_s\}$, whose probability is $\Pr(N_{t-s} = 0) = e^{-\lambda(t-s)}$. Hence, $g_t(N)$ has a mixed distribution: a probability mass of $e^{-\lambda t}$ at $s = 0$ and a density part of $\lambda e^{-\lambda(t-s)}$ for $s \in (0, t]$; the proof is complete.
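The mixed law above is easy to verify by Monte Carlo. The sampler below uses the standard fact that, given $N_t = n$, the jump times are distributed as the order statistics of $n$ uniforms on $[0, t]$, so the last jump is $t \cdot \max_i U_i$; parameter values are arbitrary.

```python
import numpy as np

def poisson_lazy_clock(lam, t, n_paths, rng=None):
    """Sample g_t(N): last jump time of a Poisson(lam) process before t (0 if none)."""
    rng = np.random.default_rng(rng)
    g = np.zeros(n_paths)
    n_jumps = rng.poisson(lam * t, size=n_paths)
    for i, n in enumerate(n_jumps):
        if n > 0:
            g[i] = t * np.max(rng.uniform(size=n))  # jump times are sorted uniforms
    return g

lam, t = 1.0, 2.0
g = poisson_lazy_clock(lam, t, 100000, rng=42)
print((g == 0).mean())  # ~ exp(-lam*t) ≈ 0.1353
print(g.mean())         # ~ t - (1 - exp(-lam*t))/lam ≈ 1.1353
```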

Brownian Lazy clock
Another simple example is given by the last passage time of a Brownian motion at zero, i.e. $(g^0_t(W),\ t \ge 0)$ with $g^0_t(W) := \sup\{s \le t;\ W_s = 0\}$. The initial value of the process is $g^0_0(W) = 0$ and the density of $g^0_t(W)$ is given by Lévy's arcsine law (see e.g. [8, p. 230]):
$$f_{g^0_t(W)}(s) = \frac{1}{\pi\sqrt{s(t-s)}}, \qquad 0 < s < t,$$
and zero otherwise. It is also possible to consider several extensions, like the last passage time of $W$ at an affine barrier, $\tilde g_t(W) := \sup\{s \le t;\ W_s = a + bs\}$. The corresponding density, expressed in integral form, can be found in [10], but can be further simplified with the help of the standard Normal cumulative distribution function $\Phi$ and its density $\varphi = \Phi'$, see [7]. Observe that $\tilde g_t(W)$ is not always well-defined: when $a \ne 0$, one needs to specify $\tilde g_t(W)$ in the cases where $W$ never reaches the barrier before $t$. We set, as is usual, $\sup\emptyset := 0$. By doing so, $\tilde g$ is adapted to the natural filtration of $W$. In contrast with $g^0_t(W)$, $\tilde g_t(W)$ may have a probability mass at zero, corresponding to the probability of $W$ not reaching the affine barrier prior to $t$. Suppose for instance that $a > 0$. Then the event $\{\tilde g_t(W) = 0\}$ is equivalent to the event $\{W_s < a + bs\ \forall s \in (0, t]\}$, itself equivalent to $\{\max_{s \in (0,t]}\{W_s - bs\} < a\}$. Hence, the probability mass of $\tilde g_t(W)$ at $0$ corresponds to the probability for a Brownian motion with drift $-b$ to stay below the threshold $a$, which is known to be (see e.g. [12, Corollary 7.2.2])
$$\Pr(\tilde g_t(W) = 0) = \Phi\left(\frac{a + bt}{\sqrt{t}}\right) - e^{-2ab}\,\Phi\left(\frac{-a + bt}{\sqrt{t}}\right).$$
Observe that this probability vanishes when $a = b = 0$. Hence, one can use $g^0(W)$ or $\tilde g(W)$ as a lazy clock, depending on whether we want $\Pr(\theta_t = 0)$ to be zero or strictly positive for $t > 0$. The moments $E[\tilde g_t(W)^k]$, $k \in \{1, 2, \dots\}$, can also be written in closed form, with a simpler expression in the $a = 0$ case.
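A quick Monte Carlo check of the arcsine law, approximating the last zero before $t = 1$ by the last sign change of a discretized Brownian path. This proxy is our choice for illustration (the grid misses some zeroes, so tolerances are deliberately loose); the arcsine law gives $\Pr(g^0_1(W) \le 1/4) = \frac{2}{\pi}\arcsin(\frac12) = \frac13$.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps, t = 5000, 1000, 1.0
dt = t / n_steps
# discretized Brownian paths (W_0 = 0 implicit before the first column)
W = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
sign_change = W[:, 1:] * W[:, :-1] < 0
# last column index with a sign change, or -1 if none
last = np.where(sign_change.any(axis=1),
                sign_change.shape[1] - 1 - np.argmax(sign_change[:, ::-1], axis=1),
                -1)
g = (last + 1) * dt  # g = 0 when no sign change was detected
print((g <= 0.25).mean())  # ≈ 1/3 by the arcsine law
```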

Bessel lazy clock
Lemma 3.2. Let $R$ denote a Bessel process with index $\nu \in (-1, 0)$ started from $0$. The probability density of the lazy clock $g^0_t(R) = \sup\{s \le t;\ R_s = 0\}$ is given, for $0 < s < t$, by the generalized arcsine law:
$$f_{g^0_t(R)}(s) = \frac{\sin(\pi|\nu|)}{\pi}\,s^{|\nu| - 1}\,(t - s)^{-|\nu|}.$$
Its moments are given via Beta functions:
$$E\big[(g^0_t(R))^k\big] = t^k\,\frac{B(|\nu| + k,\ 1 - |\nu|)}{B(|\nu|,\ 1 - |\nu|)}, \qquad k \ge 1.$$
Note that this lazy clock is 1-self-similar.
Proof. Using the Markov property and applying Fubini's theorem (see [3]), the cumulative distribution function of $g^0_t(R)$ can be computed in closed form after the change of variable $u = rs$. The result then follows by differentiation.

Time-changed martingales with lazy clocks
In this section we consider a martingale $\tilde Z$ whose time is changed with an independent lazy clock to obtain a PWC martingale $Z$. We first show that (in most situations) the lazy clock is adapted to the filtration generated by $Z$. This is done by observing that the knowledge of $\theta$ amounts to the knowledge of its jump times, since the sizes of the jumps are always obtained as a difference with the calendar time.
In particular, the properties of the lazy clock allow one to reconstruct the trajectories of $Z$ on $[0, t]$ only from past values of $\tilde Z$ and $\theta$; no information about the future (measured according to the real clock) is required. We then provide the resulting distribution when the clock $g(N)$ is governed by Poisson, inhomogeneous Poisson or Cox processes.
Lemma 3.3. Let $\tilde Z$ be a stochastic process independent from the lazy clock $\theta$ and assume that, for all $u \ne v$, $\Pr(\tilde Z_u = \tilde Z_v) = 0$. Then, $\theta$ is adapted to the filtration $(\mathcal{F}^Z_t,\ t \ge 0)$.
Proof. Observe first that the countable union of the events $\{\tilde Z_u = \tilde Z_v\}$ over the pairs of jump times of $\theta$ is of measure zero, since $\tilde Z$ and $\theta$ are independent. This implies that a.s. the sample paths of $\theta$ (both the jump times and the jump sizes) can be recovered from the sample paths of $Z$ up to $\theta_t$, hence up to $t$. Indeed, the set of jump times of $\theta$ on $[0, t]$ is given by $\{s \in [0, t]:\ \theta_s = s\}$. Moreover, the "synchronization events" $\{\theta_s = s\}$ coincide with the "jump events" $\{Z_s - Z_{s^-} \ne 0\}$, so that all jump times of $\theta$ are identified by the jumps of $Z$. But $\theta$ is constant between two jumps and jumps to a known value (the calendar time) each time $Z$ jumps, so we have the a.s. representation $\theta_t = \sup\{s \le t;\ Z_s \ne Z_{s^-}\}$ (with $\sup\emptyset := 0$). This means that both $\theta_t$ and $\tilde Z_{\theta_t}$ are revealed in $\mathcal{F}^Z_{\theta_t}$ and, in particular, $\mathcal{F}^\theta_t \subseteq \mathcal{F}^Z_{\theta_t}$. The proof is concluded by noting that $\theta_t \le t$, leading to $\mathcal{F}^Z_{\theta_t} \subseteq \mathcal{F}^Z_t$.
Lemma 3.4. Let $\tilde Z$ be a martingale and $N$ an independent Poisson process with intensity $\lambda$. Then $Z_t := \tilde Z_{g_t(N)}$ is a PWC martingale with the same range as $\tilde Z$. Its cumulative distribution function is given by:
$$\Pr(Z_t \le z) = e^{-\lambda t}\,1_{\{\tilde Z_0 \le z\}} + \int_0^t \lambda e^{-\lambda(t-s)}\,\Pr(\tilde Z_s \le z)\,ds.$$

Proof. This result is obvious from the independence assumption between $\tilde Z$ and $N$ (i.e. $\theta = g(N)$), by conditioning on the mixed law of $g_t(N)$ derived above.

In the case where $N$ is an inhomogeneous Poisson process with stochastic intensity (i.e. a Cox process) independent from $\tilde Z$,
$$\Pr(Z_t \le z) = P(0, t)\,1_{\{\tilde Z_0 \le z\}} + \int_0^t \Pr(\tilde Z_s \le z)\,\partial_s P(s, t)\,ds, \tag{3.14}$$
where we have set $P(s, t) := E[e^{-(\Lambda_t - \Lambda_s)}]$ with $\Lambda_t := \int_0^t \lambda_u\,du$ the integrated intensity process.

Proof. We start from the inhomogeneous Poisson case: take as hazard rate function $\lambda(u)$, for all $u \in [0, T]$, a sample path $\lambda_u(\omega)$ of the stochastic intensity, which amounts to replacing $\lambda(u)$ by $\lambda_u$ (hence $\Lambda(u)$ by $\Lambda_u$) in the cumulative distribution function derived above, and then take the expectation with respect to the intensity paths. In the last step we use Tonelli's theorem to exchange the integral and expectation operators (applied to non-negative functions) as well as the independence between $\lambda$ and $\tilde Z$.
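As a sanity check of the distribution formula, take the latent martingale $\tilde Z = W$, a Brownian motion started at $0$ (allowed here since the lazy clock is bounded by $t$). Since $\Pr(W_s \le 0) = 1/2$ for $s > 0$, the formula gives $\Pr(Z_t \le 0) = e^{-\lambda t} + \frac12(1 - e^{-\lambda t}) = (1 + e^{-\lambda t})/2$, which Monte Carlo reproduces (the max-of-uniforms sampling of $g_t(N)$ is a standard trick; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, n_paths = 1.0, 2.0, 200000
# lazy clock: given N_t = n > 0, the last jump time is t * max of n uniforms,
# whose law can be sampled as t * U**(1/n); g = 0 when there is no jump
n_jumps = rng.poisson(lam * t, size=n_paths)
u = rng.uniform(size=n_paths)
g = np.where(n_jumps > 0, t * u ** (1 / np.maximum(n_jumps, 1)), 0.0)
# given g, Z_t = W_g ~ N(0, g); g = 0 gives Z_t = W_0 = 0
Z = np.sqrt(g) * rng.normal(size=n_paths)
print((Z <= 0).mean())  # ~ (1 + exp(-lam*t)) / 2 ≈ 0.568
```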
Remark 3.2. Notice that $P(s, t)$ does not correspond to the expectation of $e^{-\int_s^t \lambda_u\,du}$ conditional upon $\mathcal{F}_s$, the filtration generated by $\lambda$ up to $s$, as is often the case e.g. in mathematical finance. It is an unconditional expectation, which can be evaluated with the help of the tower law. In the specific case where $\lambda$ is an affine process, for example, $E\big[e^{-\int_s^t \lambda_u\,du}\,\big|\,\lambda_s = x\big]$ takes the form $A(s, t)e^{-B(s, t)x}$ for some deterministic functions $A, B$, so that
$$P(s, t) = E\big[e^{-\int_s^t \lambda_u\,du}\big] = E\big[A(s, t)\,e^{-B(s, t)\lambda_s}\big] = A(s, t)\,\varphi_{\lambda_s}(iB(s, t)),$$
where $\varphi_{\lambda_s}$ denotes the characteristic function of $\lambda_s$.

Some Lazy martingales without independence assumption
We have seen that when $\tilde Z$ is a martingale and $\theta$ an independent lazy clock, $(Z_t = \tilde Z_{\theta_t},\ t \ge 0)$ is a PWC martingale. We now give an example where the lazy time change $\theta$ is not independent from the latent process $\tilde Z$.

Sampling of lazy clock and lazy martingales
By definition, the number of jumps of a lazy clock $\theta$ on $[0, T]$ is countable, but may be infinite. Therefore, except in some specific cases (such as the Poisson lazy clock), an exact simulation is impossible. Using a discretization grid, the simulated trajectories of a lazy clock $\theta$ on $[0, T]$ take the form $\theta_t := \sup\{\tau_i;\ \tau_i \le t\}$, where $\tau_0 := 0$ and $\tau_1, \tau_2, \dots$ are (some of) the synchronization times of the lazy clock up to time $T$. We can thus focus on the sampling of the times $\tau_1, \tau_2, \dots$ whose values are no greater than $T$.

Poisson lazy clock
Trajectories of a Poisson lazy clock θ t (ω) = g t (N (ω)) on a fixed interval [0, T ] are very easy to obtain thanks to the properties of Poisson jump times.
Algorithm 1 (Sampling of a Poisson lazy clock).
1. Draw a sample $n = N_T(\omega)$ for the number of jump times of $N$ up to $T$: $N_T \sim Poi(\lambda T)$.
2. Given $N_T = n$, draw the jump times $\tau_1 < \cdots < \tau_n$ as the order statistics of $n$ i.i.d. uniform random variables on $[0, T]$.
3. Set $\theta_t = \sup\{\tau_i;\ \tau_i \le t\}$ (with $\sup\emptyset := 0$).
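Algorithm 1 can be implemented in a few lines (function and variable names are ours):

```python
import numpy as np

def sample_poisson_lazy_clock_path(lam, T, rng=None):
    """Sample a Poisson lazy clock trajectory on [0, T].
    Returns the synchronization times and theta(t) = largest one <= t."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(lam * T)                     # step 1: number of jumps
    taus = np.sort(rng.uniform(0.0, T, size=n))  # step 2: jump times as sorted uniforms
    def theta(t):
        past = taus[taus <= t]
        return past[-1] if past.size else 0.0    # step 3: theta_t = sup{tau_i <= t}
    return taus, theta

taus, theta = sample_poisson_lazy_clock_path(lam=2.0, T=5.0, rng=0)
```

By construction $\theta_t \le t$: the clock stays in arrears and jumps exactly to the calendar time at each synchronization.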

Brownian lazy clock
Sampling a trajectory of a Brownian lazy clock requires the last zero of a Brownian bridge; this is the purpose of the following lemma. Denote by $W^{x,y,t}$ a Brownian bridge from $x$ to $y$ over $[0, t]$ and by $g_t(W^{x,y,t})$ its last zero before $t$ (set to $0$ if the bridge does not vanish). In particular, the probability that $W^{x,y,t}$ does not hit $0$ during $[0, t]$ equals, for $xy > 0$:
$$\Pr\big(g_t(W^{x,y,t}) = 0\big) = 1 - e^{-2xy/t}.$$
Note also the special case when $y = 0$.

Proof. Using time reversal and the absolute continuity formula of the Brownian bridge with respect to the free Brownian motion (see Salminen [11]), one obtains the density of $g_t(W^{x,y,t})$ for $y \ne 0$. Integrating over $[0, t]$, we first deduce the cumulative distribution function $F$. We shall then compute a modified Laplace transform of $F$ and invert it: integrating by parts and using (4.1), then performing a change of variable, the result follows by inverting this Laplace transform thanks to standard formulae valid for $a > 0$ and $b > 0$.

Simulating a continuous trajectory of a Brownian lazy clock $\theta$ in a perfect way is an impossible task. The reason is that when a Brownian motion reaches zero at some time $s$, it does so infinitely many times on $(s, s + \varepsilon]$ for all $\varepsilon > 0$; consequently, it is impossible to depict such trajectories exactly. Just as for the Brownian motion itself, one can only hope to sample trajectories on a discrete time grid, where the maximum stepsize provides some control on the approximation and corresponds to a basic unit of time. By doing so, we disregard the exact jump times of $\theta$, but focus on the supremum of the zeroes of the Brownian motion in these intervals. To do this, we proceed as follows.

Algorithm 2 (Sampling of a Brownian lazy clock).
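A discretized sketch of this procedure (grid size and interface are our choices): simulate $W$ on a grid and, on each cell with same-sign endpoints $x, y$, decide whether the Brownian bridge between them hits zero, which happens with probability $e^{-2xy/\Delta}$. The clock value is approximated by the right end of the last cell containing a zero.

```python
import numpy as np

def brownian_lazy_clock_path(T, n_steps, rng=None):
    """Discretized Brownian lazy clock on a grid of step dt = T/n_steps.
    A cell is declared to contain a zero if the endpoints change sign, or
    with the bridge-hitting probability exp(-2*x*y/dt) otherwise."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    W = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(dt), size=n_steps))])
    theta = np.zeros(n_steps + 1)
    last = 0.0
    for i in range(n_steps):
        x, y = W[i], W[i + 1]
        if x * y <= 0 or rng.uniform() < np.exp(-2 * x * y / dt):
            last = (i + 1) * dt  # a zero occurred in this cell
        theta[i + 1] = last
    return W, theta

W, theta = brownian_lazy_clock_path(T=1.0, n_steps=1000, rng=11)
```

The returned clock is non-decreasing and stays below the grid time, as required of a lazy clock.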
Example 4.1 (PWC martingale on $(0,1)$). Let $N$ be a Poisson process with intensity $\lambda$ and let $\tilde Z$ be the $\Phi$-martingale [5] with constant diffusion coefficient $\eta$, where $\Phi$ denotes, as before, the standard Normal CDF. Then, the stochastic process $Z$ defined as $Z_t := \tilde Z_{g_t(N)}$, $t \ge 0$, is a PWC martingale on $(0,1)$. Some sample paths of $\tilde Z$ and $Z$ are drawn in Fig. 2; notice that this martingale $\tilde Z$ can be simulated without error using the exact solution. Figure 3 gives the cumulative distribution functions of $Z$ and $\tilde Z$, where the latter is a $\Phi$-martingale. The main differences between these two sets of curves result from the fact that $\Pr(\tilde Z_t = Z_0) = 0$ for all $t > 0$ while $\Pr(Z_t = Z_0) = \Pr(\tilde Z_{g_t(N)} = Z_0) = \Pr(N_t = 0) > 0$, and from the delay induced by $Z_t$ corresponding to some past value of $\tilde Z$.
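A simulation sketch of this example, assuming (as in the conic-martingale construction of [5]) that the $\Phi$-martingale can be written $\tilde Z_t = \Phi(X_t)$ with $dX_t = \frac{\eta^2}{2}X_t\,dt + \eta\,dW_t$, so that $X_t$ is Gaussian with mean $X_0 e^{\eta^2 t/2}$ and variance $e^{\eta^2 t} - 1$ and can be sampled exactly; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# Exact sampling of the lazy Phi-martingale Z_t = Phi(X_{g_t(N)}) at a fixed t
eta, lam, t, n_paths = 0.5, 1.0, 2.0, 200000
rng = np.random.default_rng(5)
x0 = norm.ppf(0.3)                  # so that Z_0 = Phi(x0) = 0.3
n_jumps = rng.poisson(lam * t, size=n_paths)
u = rng.uniform(size=n_paths)
g = np.where(n_jumps > 0, t * u ** (1 / np.maximum(n_jumps, 1)), 0.0)
mean = x0 * np.exp(eta**2 * g / 2)  # exact Gaussian law of X at time g
var = np.exp(eta**2 * g) - 1.0
Z = norm.cdf(mean + np.sqrt(var) * rng.normal(size=n_paths))
print(Z.mean())                     # martingale: ~ Z_0 = 0.3
```

All sampled values lie strictly inside $(0,1)$, consistently with the range of the latent $\Phi$-martingale.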

Conclusion and future research
In this paper, we focused on the construction of piecewise constant martingales, that is, martingales whose trajectories are piecewise constant. Such processes are good candidates to model the dynamics of conditional expectations of random variables under partial (punctual) information. The time-changed approach proves to be quite powerful: starting with a martingale in a given range, we obtain a PWC martingale by using a piecewise constant time-change process. Among those time-change processes, lazy clocks are specifically appealing: these are time-change processes staying always in arrears of the real clock, but that synchronize to the calendar time at some random times. This ensures that $\theta_t \le t$, a convenient feature when one needs to sample trajectories of the time-change process. Such random times can typically be characterized as last passage times, and enjoy appealing tractability properties. The last jump time of a Poisson process before the current time, for instance, exhibits a very simple distribution. Other lazy clocks have been proposed as well, based on Brownian motions and Bessel processes, some of which rule out the probability mass at zero. Finally, we provided several martingales time-changed with lazy clocks, called lazy martingales, whose range can be any interval of $\mathbb{R}$ (depending on the range of the latent martingale), and showed that the corresponding distributions can easily be obtained from the law of iterated expectations.
Yet, tractability and, even more importantly, the martingale property result from the independence assumption between the latent martingale and the time-change process. In practice, however, it might be more realistic to consider cases where the sampling frequency (the synchronization rate of the lazy clock $\theta$ to the real clock) depends on the level of the latent martingale $\tilde Z$. Finding a tractable model allowing for this coupling remains an open question and is the purpose of future research.