Open Access

On the compensator of the default process in an information-based model

Probability, Uncertainty and Quantitative Risk (2017) 2:10

https://doi.org/10.1186/s41546-017-0017-4

Received: 18 November 2016

Accepted: 18 April 2017

Published: 11 September 2017

Abstract

This paper provides sufficient conditions for the time of bankruptcy (of a company or a state) to be a totally inaccessible stopping time and gives an explicit computation of its compensator in a framework where the flow of market information about the default is modelled explicitly by a Brownian bridge between 0 and 0 on a random time interval.

Keywords

Default time · Totally inaccessible stopping time · Brownian bridge on random intervals · Local time · Credit risk · Compensator process

Introduction

One of the most important objects in a mathematical model for credit risk is the time τ (called default time) at which a certain company (or state) bankrupts. Modelling the flow of market information concerning a default time is crucial and in this paper we consider a process, β=(β t , t≥0), whose natural filtration \(\mathbb {F}^{\beta }\) describes the flow of information available for market agents about the time at which the default occurs. For this reason, the process β will be called the information process. In the present paper, we define β to be a Brownian bridge between 0 and 0 of random length τ:
$$\beta_{t}:=W_{t}-\frac{t}{\tau\vee t}W_{\tau\vee t},\quad t \geq 0, $$
where W=(W t , t≥0) is a Brownian motion independent of τ.
In this paper, the focus is on the classification of the default time with respect to the filtration \(\mathbb {F}^{\beta }\) and our main result is the following: If the distribution of the default time τ admits a continuous density f with respect to the Lebesgue measure, then τ is a totally inaccessible stopping time and its compensator K=(K t , t≥0) is given by
$$K_{t}=\int_{0}^{t\wedge\tau}\frac{f(s)}{\int_{s}^{\infty} v^{\frac{1}{2}}\,(2\pi s\,\left(v-s\right))^{-\frac{1}{2}}\,f(v)\mathrm{d} v}\,\mathrm{d} L^{\beta}\left(s,0\right), $$
where L β (t,0) is the local time of the information process β at level 0 up to time t.

Knowing whether the default time is a predictable, accessible, or totally inaccessible stopping time is very important in a mathematical credit risk model. A predictable default time is typical of structural credit risk models, while totally inaccessible default times are one of the most important features of reduced-form credit risk models. In the former framework, market agents know when the default is about to occur, while in the latter the default occurs by surprise. The fact that financial markets cannot foresee the time of default of a company makes the reduced-form models well accepted by practitioners. In this sense, totally inaccessible default times seem to be the best candidates for modelling times of bankruptcy. We refer, among others, to the papers of Jarrow and Protter (2004) and of Giesecke (2006) on the relations between financial information and the properties of the default time, and also to the series of papers of Jeanblanc and Le Cam (2008, 2009, 2010). It is remarkable that in our setting the default time is a totally inaccessible stopping time under the common assumption that it admits a continuous density with respect to the Lebesgue measure. Both the hypothesis that the default time admits a continuous density and its consequence that the default occurs by surprise are standard in mathematical credit risk models, but in the information-based approach there is the additional feature of an explicit model for the flow of information, which is more sophisticated than the standard approach. There, the available information on the default is modelled through \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\), the single-jump process occurring at τ, meaning that people only know whether the default has or has not occurred. Financial reality can be more complex and there are actually periods in which default is more likely to happen than in others.
In the information-based approach, periods of fear of an imminent default correspond to situations where the information process is close to 0, while periods when investors are relatively sure that the default is not going to occur immediately correspond to situations where β t is far from 0.

The paper is organized as follows. In the section “The information process and its basic properties”, we recall the definition and the main properties of the information process. In the section “The compensator of the default time”, we state and prove Theorem 3.2, which is the main result of the paper. In Appendix A, we provide the properties of the local time associated with the information process. In Appendix B, we give the proofs of some auxiliary lemmas. Finally, in Appendix C, for the sake of easy reference, we recall the so-called Laplacian approach of Meyer (see, e.g., his book (Meyer 1966)) for computing the compensator of a right-continuous potential of class (D). It is an important ingredient of the approach adopted in this note to determine the compensator of the \(\mathbb {F}^{\beta }\)-submartingale \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\).

The idea of modelling the information about the default time with a Brownian bridge defined on a stochastic interval was introduced in the thesis (Bedini 2012). The definition of the information process β, the study of its basic properties, and an application to the problem of pricing a Credit Default Swap (one of the most traded derivatives in the credit market) have also recently appeared in the paper (Bedini et al. 2016).

Non-trivial sufficient conditions for the default time to be a predictable stopping time will be considered in another paper (Bedini and Hinz 2017). Other topics related to Brownian bridges on stochastic intervals (which will not be considered in this paper) concern the progressive enlargement of a reference filtration \(\mathbb {F}\) by the filtration \(\mathbb {F}^{\beta }\) generated by the information process, and further applications to Mathematical Finance.

The information process and its basic properties

We start by recalling the definition and the basic properties of a Brownian bridge between 0 and 0 of random length. The material in this section gives a résumé of some of the results obtained in the paper (Bedini et al. 2016), to which we shall refer for the proofs and more details on the basic properties of such a process.

If \(A\subseteq \mathbb {R}\) (where \(\mathbb {R}\) denotes the set of real numbers), then the set A + is defined as \(A_{+}:=A\cap \{x\in \mathbb {R}:x\geq 0\}\). If E is a topological space, then \(\mathcal {B}(E)\) denotes the Borel σ-algebra over E. The indicator function of a set A will be denoted by \(\mathbb {I}_{A}\). A function \(f:\mathbb {R}\rightarrow \mathbb {R}\) will be said to be càdlàg if it is right-continuous with limits from the left.

Let \(\left (\Omega,\mathcal {F},\mathbf {P}\right)\) be a complete probability space. We denote by \(\mathcal {N}_{P}\) the collection of P-null sets of \(\mathcal {F}\). If \(\mathcal {L}\) is the law of the random variable ξ we shall write \(\xi \sim \mathcal {L}\). Unless otherwise specified, all filtrations considered in the following are supposed to satisfy the usual conditions of right continuity and completeness.

Let τ:Ω→(0,+∞) be a strictly positive random time, whose distribution function is denoted by F: \(F(t):=\mathbf {P}\left (\tau \leq t\right),\;t\in \mathbb {R}_{+}\). The time τ models the random time at which some default occurs and, hereinafter, it will be called default time.

Let W=(W t , t≥0) be a Brownian motion defined on \(\left (\Omega,\mathcal {F},\mathbf {P}\right)\) and starting from 0. We shall always make use of the following assumption:

Assumption 2.1

The random time τ and the Brownian motion W are independent.

Given W and a strictly positive real number r, a standard Brownian bridge \(\beta ^{r}=\left (\beta _{t}^{r},\,t\geq 0\right)\) between 0 and 0 of length r is defined by
$$\beta^{r}_{t}=W_{t}-\frac{t}{r\vee t}W_{r\vee t},\quad t\geq 0\,. $$
For further references on Brownian bridges, see, e.g., Section 5.6.B of the book (Karatzas and Shreve 1991) by Karatzas and Shreve.

Now, we are going to introduce the definition of the Brownian bridge of random length (see (Bedini et al. 2016), Definition 3.1).

Definition 2.2

The process β=(β t , t≥0) given by
$$ \beta_{t}:=W_{t}-\frac{t}{\tau\vee t}W_{\tau\vee t},\quad t\geq 0\,, $$
(1)

will be called Brownian bridge of random length τ. We will often say that β=(β t , t≥0) is the information process (for the random time τ based on W).
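Definition (1) discretises directly. The following Python sketch (ours, not part of the paper; the time grid, the seed, and the illustrative choice τ ~ Exp(1) are assumptions) samples one path of the information process, refining the time grid by τ so that \(W_{\tau\vee t}\) is evaluated exactly:

```python
import numpy as np

def information_process(tau, t_grid, rng):
    """Sample beta_t = W_t - (t / (tau ∨ t)) * W_{tau ∨ t} on t_grid.

    For t >= tau the weight t/(tau ∨ t) equals 1, so beta_t = 0 (the bridge
    has been pinned); for t < tau, beta_t = W_t - (t / tau) * W_tau.
    """
    times = np.union1d(t_grid, [tau])            # refine the grid so W_tau is exact
    dW = rng.normal(0.0, np.sqrt(np.diff(times)))
    W = np.concatenate([[0.0], np.cumsum(dW)])   # Brownian path with W_0 = 0
    W_tau = W[np.searchsorted(times, tau)]
    W_grid = W[np.searchsorted(times, t_grid)]
    return np.where(t_grid >= tau, 0.0, W_grid - t_grid / tau * W_tau)

rng = np.random.default_rng(0)
tau = rng.exponential(1.0)                       # independent default time (assumed Exp(1))
t_grid = np.linspace(0.0, 3.0, 601)
beta = information_process(tau, t_grid, rng)
```

A sampled path starts at 0, fluctuates like a Brownian bridge conditioned on its (unknown) length, and is identically 0 from time τ on, in line with Lemma 2.4 below.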

The natural filtration of β will be denoted by \(\mathbb {F}^{\beta }=(\mathcal {F}_{t}^{\beta })_{t\geq 0}\):
$$\mathcal{F}_{t}^{\beta}:=\sigma\left(\beta_{s},\,0\leq s\leq t\right)\vee\mathcal{N}_{P}\,. $$
Note that according to (Bedini et al. 2016), Corollary 6.1, the filtration \(\mathbb {F}^{\beta }\) (denoted therein by \(\mathbb {F}^{P}\)) satisfies the usual conditions of right-continuity and completeness.

Remark 2.3

The law of β, conditional on τ=r, is the same as that of a standard Brownian bridge between 0 and 0 of length r (see (Bedini et al. 2016), Lemma 2.4 and Corollary 2.2). In particular, if 0<t<r, the law of β t , conditional on τ=r, is Gaussian with expectation zero and variance \(\frac {t\left (r-t\right)}{r}\):
$$\mathbf{P}\left(\beta_{t}\in\cdot\;\big|\tau=r\right)=\mathcal{N}\left(0,\frac{t\left(r-t\right)}{r}\right), $$
where \(\mathcal {N}\left (\mu,\sigma ^{2}\right)\) denotes the Gaussian law of mean μ and variance σ 2.
By p(t,·,y), we denote the density of a Gaussian random variable with mean \(y\in \mathbb {R}\) and variance t>0:
$$ p\left(t,x,y\right):=\frac{1}{\sqrt{2\pi t}}\exp\left[-\frac{\left(x-y\right)^{2}}{2t}\right],\,x\in\mathbb{R}. $$
(2)
For later use, we also introduce the functions φ t (t>0):
$$ \varphi_{t}\left(r,x\right):=\left\{ \begin{array}{ll} p\left(\frac{t\left(r-t\right)}{r},x,0\right), & 0<t<r,\ x\in\mathbb{R},\\ 0, & r\leq t,\ x\in\mathbb{R}. \end{array}\right. $$
(3)

We notice that, for 0<t<r, the density of β t conditional on τ=r is just the density φ t (r,·) at time t of a standard Brownian bridge \(\beta ^{r}\) of length r.
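The densities (2) and (3) translate directly into code. A minimal sketch (our own; the function names are not from the paper):

```python
import numpy as np

def p(t, x, y):
    """Gaussian density (2) with mean y and variance t > 0, evaluated at x."""
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def phi(t, r, x):
    """phi_t(r, x) from (3): the marginal density at time t of a Brownian
    bridge of length r, i.e. N(0, t(r - t)/r) for 0 < t < r, and 0 for r <= t."""
    if r <= t:
        return 0.0
    return p(t * (r - t) / r, x, 0.0)
```

For instance, phi(0.5, 1.0, 0.0) returns the density at 0 of a bridge of length 1 at its midpoint, where the conditional variance is 1/4.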

We proceed with the property that the default time τ is nonanticipating with respect to the filtration \(\mathbb {F}^{\beta }\) and the Markov property of the information process β.

Lemma 2.4

For all t>0, {β t =0}={τ≤t}, P-a.s. In particular, τ is an \(\mathbb {F}^{\beta }\)-stopping time.

Proof

See (Bedini et al. 2016), Proposition 3.1 and Corollary 3.1. □

Theorem 2.5

The information process β is a Markov process with respect to the filtration \(\mathbb {F}^{\beta }\): For all 0≤t<u and all measurable real functions g such that g(β u ) is integrable,
$$\mathbf{E}[g(\beta_{u})|\mathcal{F}^{\beta}_{t}]=\mathbf{E}[g(\beta_{u})|\beta_{t}],\quad \mathbf{P}\text{-a.s.} $$

Proof

See Theorem 6.1 in (Bedini et al. 2016). □

As the following theorem combined with Theorem 2.5 shows, the function ϕ t defined by
$$ \phi_{t}\left(r,x\right):=\frac{\varphi_{t}\left(r,x\right)}{{\int_{\left(t,+\infty\right)}\varphi_{t}\left(v,x\right)\,\mathrm{d} F(v)}}, $$
(4)

\(\left (r,t\right)\in \left (0,+\infty \right)\times \mathbb {R}_{+}\), \(x\in \mathbb {R}\), is, for t<r, the a posteriori density function of τ on {τ>t}, conditional on β t =x.

Theorem 2.6

Let t>0 and let \(g:\mathbb {R}_{+}\rightarrow \mathbb {R}\) be a Borel function such that E[|g(τ)|]<+∞. Then, P-a.s.
$$ \mathbf{E}\left[g(\tau)|\mathcal{F}_{t}^{\beta}\right]=g(\tau)\mathbb{I}_{\left\{ \tau\leq t\right\} }+{\int_{\left(t,+\infty\right)}g(r)}\,\phi_{t}\left(r,\beta_{t}\right)\,\mathrm{d} F(r)\mathbb{I}_{\left\{ t<\tau\right\} }. $$
(5)

Proof

See Theorem 4.1, Corollary 4.1 and Corollary 6.1 in (Bedini et al. 2016). □
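On {t<τ}, formula (5) is a Bayes rule: the prior density f of τ is reweighted by the likelihood φ t (·,β t ). A rough numerical sketch (ours; the Riemann-sum quadrature, the truncation point v_max, and the Exp(1) example prior are illustrative assumptions) of the second term of (5):

```python
import numpy as np

def posterior_expectation(g, t, x, f, v_max=50.0, n=200_000):
    """Approximate the second term of (5), i.e. E[g(tau) | beta_t = x] on {t < tau}:
    ∫_(t,∞) g(r) phi_t(r, x) f(r) dr / ∫_(t,∞) phi_t(v, x) f(v) dv,
    by a Riemann sum on the truncated interval (t, v_max]."""
    v = np.linspace(t, v_max, n + 1)[1:]      # open at v = t
    var = t * (v - t) / v                     # variance of beta_t given tau = v > t
    w = np.exp(-x * x / (2.0 * var)) / np.sqrt(2.0 * np.pi * var) * f(v)
    return np.sum(g(v) * w) / np.sum(w)

# Example: posterior mean of tau under an assumed Exp(1) prior, given beta_1 = 0
mean_tau = posterior_expectation(lambda v: v, 1.0, 0.0, lambda v: np.exp(-v))
```

Consistently with the discussion in the Introduction, an observation β t far from 0 shifts the posterior mass of τ away from t, while β t near 0 concentrates it close to t.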

Before stating the next result, which is concerned with the semimartingale decomposition of the information process, let us give the following definition:

Definition 2.7

Let B be a continuous process, \(\mathbb {F}\) a filtration, and T an \(\mathbb {F}\)-stopping time. Then, B is called an \(\mathbb {F}\)-Brownian motion stopped at T if B is an \(\mathbb {F}\)-martingale with square variation process 〈B,B〉 t =t∧T, t≥0.

Now, we introduce the real-valued function u defined by
$$ u\left(s,x\right):=\mathbf{E}\left[\frac{\beta_{s}}{\tau-s}\mathbb{I}_{\left\{ s<\tau\right\} }\big|\beta_{s}=x\right],\quad s\in\mathbb{R}_{+},\;x\in\mathbb{R}. $$
(6)

Theorem 2.8

The process b defined by
$$b_{t}:=\beta_{t}+\int_{0}^{t}u(s,\beta_{s})\,\mathrm{d} s,\quad t\geq 0\,, $$
is an \(\mathbb {F}^{\beta }\)-Brownian motion stopped at τ. The information process β is therefore an \(\mathbb {F}^{\beta }\)-semimartingale with decomposition
$$ \beta_{t}=b_{t}-\int_{0}^{t\wedge\tau}u\left(s,\beta_{s}\right)\,\mathrm{d} s,\quad t\geq 0\,. $$
(7)

Proof

See Theorem 7.1 in (Bedini et al. 2016). □

Remark 2.9

The quadratic variation of the information process β is given by
$$ \left\langle \beta,\beta\right\rangle_{t}=\left\langle b,b\right\rangle_{t}=t\wedge\tau, \quad t\geq 0\,. $$
(8)

The compensator of the default time

In this section, we explicitly compute the compensator of the single-jump process with the jump occurring at τ, which will be denoted by H= (H t , t≥0):
$$ H_{t}:=\mathbb{I}_{\left\{ \tau\leq t\right\} },\quad t\geq 0. $$
(9)

The process H, called the default process, is an \(\mathbb {F}^{\beta }\)-submartingale and its compensator is also known as the compensator of the \(\mathbb {F}^{\beta }\)-stopping time τ. Our main goal consists in providing a representation of the compensator of H. As we shall see below, this representation involves the local time L β (t,0) of the information process β (see Appendix A for properties of local times of continuous semimartingales and, in particular, of β). From this representation we immediately obtain that the compensator of H is continuous, and from this continuity it follows that the default time τ is a totally inaccessible \(\mathbb {F}^{\beta }\)-stopping time.

In this section, the following assumption will always be in force.

Assumption 3.1

(i) The distribution function F of τ admits a continuous density function f with respect to the Lebesgue measure λ + on \(\mathbb {R}_{+}\).

(ii) F(t)<1 for all t≥0.

The following theorem is the main result of this paper:

Theorem 3.2

Suppose that Assumption 3.1 is satisfied.

(i) The process K=(K t , t≥0) defined by
$$ K_{t}:=\int_{0}^{t\wedge\tau}\frac{f(s)}{\int_{s}^{\infty}\varphi_{s}\left(v,0\right)f(v)dv}\,\mathrm{d} L^{\beta}(s,0),\quad t\geq 0\,, $$
(10)

is the compensator of the default process H. Here, L β (t,x) denotes the local time of the information process β up to t at level x.

(ii) The default time τ is a totally inaccessible stopping time with respect to the filtration \(\mathbb {F}^{\beta }\).

Proof

First, we verify statement (ii) under the supposition that (i) is true. Obviously, as L β (s,0) is continuous in s (see Lemma A.4), the process K given by (10) is continuous. Consequently, because of the well-known equivalence between the total inaccessibility of a stopping time and the continuity of its compensator (see, e.g., (Kallenberg 2002), Corollary 25.18), we can conclude that the default time τ is a totally inaccessible stopping time with respect to \(\mathbb {F}^{\beta }\).

Now, we prove statement (i) of the theorem. For every h>0 we define the process \(K^{h}=\left (K_{t}^{h},\,t\geq 0\right)\) by
$$\begin{array}{@{}rcl@{}} K_{t}^{h}&:=&\frac{1}{h}\int_{0}^{t}\left(\mathbb{I}_{\left\{ s<\tau\right\} }-\mathbf{E}\left[\mathbb{I}_{\left\{ s+h<\tau\right\} }|\mathcal{F}_{s}^{\beta}\right]\right)\,\mathrm{d} s\\ &=&\int_{0}^{t}\frac{1}{h}\mathbf{P}\left(s<\tau<s+h|\mathcal{F}_{s}^{\beta}\right)\,\mathrm{d} s\,,\quad\mathbf{P}\textrm{-a.s.} \end{array} $$
(11)

The proof is divided into two parts. In the first part, we prove that \(K_{t}-K_{t_{0}}\) is the P-a.s. limit of \(K_{t}^{h}-K^{h}_{t_{0}}\) as h↓0, for all t 0,t such that 0<t 0<t. In the second part of the proof, we show that the process K is indistinguishable from the compensator of H. Auxiliary results used throughout the proof are postponed to Appendix B.

For the first part of the proof, we fix t 0,t such that 0<t 0<t and notice that
$$ \begin{aligned} K_{t}^{h}-K_{t_{0}}^{h} & =\int_{t_{0}}^{t}\frac{1}{h}\mathbf{P}\left(s<\tau<s+h|\mathcal{F}_{s}^{\beta}\right)\,\mathrm{d} s\\ &=\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\left(\frac{\int_{s}^{s+h}\varphi_{s}\left(r,\beta_{s}\right)f(r)\,\mathrm{d} r}{\int_{s}^{\infty}\varphi_{s}\left(v,\beta_{s}\right)f(v)\,\mathrm{d} v}\right)\,\mathrm{d} s, \end{aligned} $$
(12)
where the last equality is a consequence of Theorem 2.6 and Definition (4) of the a posteriori density function of τ. Later, we shall verify that
$$ {\lim}_{h\downarrow 0} \int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\left(\frac{\int_{s}^{s+h}\varphi_{s}\left(r,\beta_{s}\right)\left[f(r)-f(s)\right]\,\mathrm{d} r}{\int_{s}^{\infty} \varphi_{s}\left(v,\beta_{s}\right)f(v)\,\mathrm{d} v}\right)\,\mathrm{d} s=0\quad \mathbf{P}\text{-a.s.} $$
(13)
So, we have to deal with the limit behaviour as h↓0 of
$$\begin{array}{@{}rcl@{}} &&\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\left(\frac{\int_{s}^{s+h}\varphi_{s}\left(r,\beta_{s}\right)\,\mathrm{d} r}{\int_{s}^{\infty} \varphi_{s}\left(v,\beta_{s}\right)f(v)\,\mathrm{d} v}\right)\,f(s)\,\mathrm{d} s\\ &&=\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\left(\frac{\int_{0}^{h}\varphi_{s}\left(s+u,\beta_{s}\right)\,\mathrm{d} u}{\int_{s}^{\infty} \varphi_{s}\left(v,\beta_{s}\right)f(v)\,\mathrm{d} v}\right)\,f(s)\mathrm{d} s\\ &&=\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(\frac{su}{s+u},\beta_{s},0\right)\,\mathrm{d} u\, g(s,\beta_s)\,f(s)\,\mathrm{d} s, \end{array} $$
(14)
where we have introduced the function \(g:\left (0,+\infty \right)\times \mathbb {R}\rightarrow \mathbb {R}_{+}\) by
$$ g\left(s,x\right):=\left(\int_{s}^{\infty}\varphi_{s}\left(v,x\right)f(v)\,\mathrm{d} v\right)^{-1},\quad s>0,\,x\in\mathbb{R}\,. $$
(15)
In (14), we want to replace \(p\left (\frac {su}{s+u},\beta _{s},0\right)\) with p(u,β s ,0). To this end, we estimate the absolute value of the difference:
$$\begin{array}{@{}rcl@{}} &&\left\vert p\left(\frac{su}{s+u},x,0\right)-p(u,x,0)\right\vert\\ &&=p(u,x,0)\,\left\vert\left(\frac{s+u}{s}\right)^{\frac{1}{2}} \exp\left(-\frac{x^{2}}{2s}\right)-1\right\vert\\ &&\le{p}(u,x,0)\left[\left(\frac{s+u}{s}\right)^{\frac{1}{2}} \left\vert\exp\left(-\frac{x^{2}}{2s}\right)-1\right\vert +\left\vert\left(\frac{s+u}{s}\right)^{\frac{1}{2}} -1\right\vert\right]\\ &&\le\left((2\pi)^{\frac{1}{2}}|x|\right)^{-1}\exp\left(-\frac{1}{2}\right)\left(\frac{s+1}{s}\right)^{\frac{1}{2}}\left(\frac{x^{2}}{2s}\right)+(2\pi u)^{-\frac{1}{2}} \left(\frac{u}{2s}\right)\\ &&\le{c}_1|x|+c_{2}\,u^{\frac{1}{2}}\,, \end{array} $$
(16)
with some constants c 1 and c 2, for 0≤u≤h≤1 and s∈[t 0,t], where for the estimate of the first summand we have used that the function u↦p(u,x,0) attains its unique maximum at u=x^2, the standard estimate 1−e^{−z}≤z for all z≥0, and that \((s+1)s^{-1}\le 1+t_{0}^{-1}\); for the estimate of the second summand we have used the inequalities \(p(u,x,0)\le (2\pi u)^{-\frac {1}{2}}\) and \(|\sqrt {\frac {s+u}{s}}-1|\le \frac {u}{2s}\). Putting x=β s , integrating from 0 to h, and dividing by h, for 0≤u≤h≤1 and s∈[t 0,t], from (16) we obtain
$$\begin{array}{@{}rcl@{}} &&\frac{1}{h}\int_{0}^{h}\left\vert p\left(\frac{su}{s+u},\beta_{s},0\right) -p\left(u,\beta_{s},0\right)\right\vert\,\mathrm{d} u\, g(s,\beta_s)\,f(s)\\ &&\quad\le\left(c_1|\beta_s|+c_{2}\,h^{\frac{1}{2}}\right)\,g(s,\beta_s)\,f(s)\\ &&\quad\le\left(c_1|\beta_s|+c_{2}\right)\,c_{3}\,C(t_0,t,\beta_s), \end{array} $$
where C(t 0,t,x) is an upper bound of g(s,x) on [t 0,t], continuous in x (see Lemma B.2), and c 3 is an upper bound for the continuous density function f on [t 0,t]. The right-hand side is integrable over [t 0,t] with respect to the Lebesgue measure λ +. On the other hand, by the fundamental theorem of calculus, we have that, for every x≠0,
$$ {\lim}_{h\downarrow 0}\frac{1}{h}\int_{0}^h p(u,x,0)\,\mathrm{d} u=0,\quad {\lim}_{h\downarrow 0}\frac{1}{h}\int_{0}^h p\left(\frac{su}{s+u},x,0\right)\,\mathrm{d} u=0\,. $$
(17)
For this, we notice that p(0,x,0):=0 provides a continuous extension of the function u↦p(u,x,0) if x≠0. By Corollary A.3, the set {0≤s≤t∧τ: β s =0} has Lebesgue measure zero. Then, using Lebesgue’s theorem on dominated convergence, we can conclude that P-a.s.
$$ {\lim}_{h\downarrow 0}\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}\left\vert p\left(\frac{su}{s+u},\beta_{s},0\right)-p(u,\beta_s,0)\right\vert\,\mathrm{d} u\, g(s,\beta_s)\,f(s)\,\mathrm{d} s=0\,. $$
(18)

This completes the first step of the proof of the first part, meaning that in (14) we can replace \(p\left (\frac {su}{s+u},\beta _{s},0\right)\) with p(u,β s ,0) for identifying the limit.

The second step of the first part is to prove that
$$\begin{array}{@{}rcl@{}} {\lim}_{h\downarrow 0} \int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(u,\beta_{s},0\right)\,\mathrm{d} u\, g(s,\beta_s)\,f(s)\,\mathrm{d} s=K_t-K_{t_0}\,,\quad \mathbf{P}\text{-a.s.} \end{array} $$
Setting
$$ q(h,x):=\frac{1}{h}\int_{0}^h p(u,x,0)\,\mathrm{d} u,\quad 0<h\le 1, \ x\in\mathbb{R}\,, $$
(19)
an application of the occupation time formula (see Corollary A.3) yields
$$\begin{array}{@{}rcl@{}} &&\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(u,\beta_{s},0\right)\,\mathrm{d} u\, g(s,\beta_s)\,f(s)\,\mathrm{d} s\\ &&=\int_{t_{0}\wedge\tau}^{t\wedge\tau}\,q(h,\beta_s)\, g(s,\beta_s)\,f(s)\,\mathrm{d} s\\ &&=\int_{-\infty}^{+\infty}\left(\int_{t_0}^{t}g\left(s,x\right)f(s)\,\mathrm{d} L^\beta(s,x)\right)q\left(h,x\right)\,\mathrm{d} x\,,\quad \mathbf{P}\text{-a.s.} \end{array} $$
(20)
For every h>0, q(h,·) is a probability density function with respect to the Lebesgue measure on \(\mathbb {R}\). According to Lemma B.1, the probability measures Q h with density q(h,·) converge weakly to the Dirac measure δ 0 at 0. On the other hand, Lemma B.4 shows that the function \(x\mapsto \int _{t_{0}}^{t}g\left (s,x\right)f(s)\,\mathrm {d} L^{\beta }(s,x)\) is continuous and bounded. Hence, in (20) we can pass to the limit and obtain the following
$$\begin{array}{@{}rcl@{}} &&{\lim}_{h\downarrow0}\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(u,\beta_{s},0\right)\,\mathrm{d} u\, g(s,\beta_s)\,f(s)\,\mathrm{d} s\\ &&=\int_{t_0}^{t}g\left(s,0\right)f(s)\,\mathrm{d} L^\beta(s,0),\quad \mathbf{P}\text{-a.s.} \end{array} $$
(21)
In the third step of the proof of the first part, we must show that (13) holds. Note that the function f is uniformly continuous on [t 0,t+1]. We fix ε>0 and choose 0<δ≤1 such that |f(s+u)−f(s)|≤ε for all s∈[t 0,t] and 0≤u<δ. Proceeding similarly as above, we obtain the following
$$\begin{array}{@{}rcl@{}} \limsup_{h\downarrow 0}\!\!\! &&\left\vert\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\left(\frac{\int_{s}^{s+h}\varphi_{s}\left(r,\beta_{s}\right)\left[f(r)-f(s)\right]\,\mathrm{d} r}{\int_{s}^{\infty} \varphi_{s}\left(v,\beta_{s}\right)f(v)\,\mathrm{d} v}\right)\,\mathrm{d} s\right\vert\\ &\le&\limsup_{h\downarrow 0}\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(\frac{su}{s+u},\beta_{s},0\right)\,\big|f(s+u)-f(s)\big|\,\mathrm{d} u\, g(s,\beta_s)\,\mathrm{d} s\\ &\le&\varepsilon\,\limsup_{h\downarrow 0}\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(\frac{su}{s+u},\beta_{s},0\right)\,\mathrm{d} u\, g(s,\beta_s)\,\mathrm{d} s\\ &=&\varepsilon\,\limsup_{h\downarrow 0}\int_{t_{0}\wedge\tau}^{t\wedge\tau}\frac{1}{h}\int_{0}^{h}p\left(u,\beta_{s},0\right)\,\mathrm{d} u\, g(s,\beta_s)\,\mathrm{d} s\\ &=&\varepsilon\,\int_{t_0}^{t}g\left(s,0\right)\,\mathrm{d} L^\beta(s,0),\quad \mathbf{P}\text{-a.s.} \end{array} $$

Since ε>0 was chosen arbitrarily and the integral above is P-a.s. finite, we conclude that (13) holds.

The first part of the proof is complete.

The second part of the proof relies on the so-called Laplacian approach of P.-A. Meyer; for the sake of easy reference, the related results are recalled in Appendix C. Let us denote by K w the compensator of the default process H introduced in (9). We first show that \(K^{h}_{t}\) converges to \(K^{w}_{t}\) as h↓0 in the sense of the weak topology σ(L 1,L ∞) (see Definition C.3), for every t≥0. We then prove that the process K is actually indistinguishable from K w .

For the sake of simplicity of the notation, if a sequence of integrable random variables \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) converges to an integrable random variable ξ in the sense of the weak topology σ(L 1,L ), we will write
$$\xi_{n}\xrightarrow[n\rightarrow+\infty]{{\!~\!}^{\sigma\left(L^{1},L^{\infty}\right)}}\xi. $$
Furthermore, we will denote by G the right-continuous potential of class (D) (cf. the beginning of Appendix C) given by
$$ G_{t}:=1-H_{t}=\mathbb{I}_{\left\{ t<\tau\right\}},\quad t\geq 0\,. $$
(22)
By Corollary C.5, we know that there exists a unique integrable predictable increasing process \(K^{w}=\left (K_{t}^{w},\,t\geq 0\right)\) which generates, in the sense of Definition C.1, the potential G given by (22) and, for every \(\mathbb {F}^{\beta }\)-stopping time T, we have that
$$K_{T}^{h}\xrightarrow[h\downarrow0]{{\!~\!}^{\sigma\left(L^{1},L^{\infty}\right)}}K_{T}^{w}. $$
The process K w is actually the compensator of H. Indeed, it is a well-known fact that the process H admits a unique decomposition
$$ H=M+A $$
(23)
into the sum of a right-continuous martingale M and an adapted, natural, increasing, integrable process A. The process A is then called the compensator of H. On the other hand, from the definition of the potential generated by an increasing process (see Definition C.1), the process
$$ L:=G+K^{w} $$
(24)
is a martingale. By combining the definition (22) of the process G and (24), we obtain the following decomposition of H:
$$H=1-L+K^{w}. $$

Hence, by the uniqueness of the decomposition (23), we can identify the martingale M with 1−L and obtain that A=K w , up to indistinguishability. Since the submartingale H and the martingale 1−L are right-continuous, the process K w is also right-continuous.

By applying Lemma C.8, we see that \(K_{t}-K_{t_{0}}\) is a modification of \(K^{w}_{t}-K^{w}_{t_{0}}\), for all t 0,t such that 0<t 0<t. Passing to the limit as t 0↓0, we get \(K_{t}=K_{t}^{w}\) P-a.s. for all t≥0. Since both processes have right-continuous sample paths, they are indistinguishable.

The theorem is proved. □

Remark 3.3

We close this part of the present paper with the following observations.

(1) Note that \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\) does not admit an intensity with respect to the filtration \(\mathbb {F}^{\beta }\) (hence, it is not possible to apply, for example, Aven’s Lemma for computing the compensator (see, e.g., (Aven 1985))).

(2) Assumption 3.1(ii), requiring that F(t)<1 for all t≥0, ensures that the denominator of the integrand on the right-hand side of (10) is always strictly positive. However, it can be removed. Indeed, if the density function f of τ is continuous (as required by Assumption 3.1(i)), then exactly as above we can show that relation (10) is satisfied for all t≤t 1:= sup{t>0: F(t)<1}. On the other hand, it is obvious that τ≤t 1 P-a.s. (hence, the right-hand side of (10) is constant on [t 1,+∞)) and also that the compensator K=(K t , t≥0) of \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\) is constant on [t 1,+∞). Altogether, it follows that relation (10) is satisfied for all t≥0.
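To get a feel for the integrand of (10), note that at level 0 one has φ s (v,0)=(2πs(v−s)/v)^{−1/2}, which recovers the expression displayed in the Introduction. The following rough numerical sketch (ours; the quadrature grid, the truncation point v_max, and the Exp(1) example density are assumptions; recall that the measure dL β (s,0) itself would still have to be obtained from a simulated path) evaluates this factor:

```python
import numpy as np

def intensity_factor(s, f, v_max=50.0, n=400_000):
    """Integrand of (10) at level 0: f(s) / ∫_s^∞ phi_s(v, 0) f(v) dv, with
    phi_s(v, 0) = (2*pi*s*(v - s)/v)**(-1/2).  The singularity at v = s is of
    order (v - s)^(-1/2), hence integrable; we use a Riemann sum on (s, v_max]."""
    v = np.linspace(s, v_max, n + 1)[1:]     # drop v = s, where phi_s blows up
    phi0 = 1.0 / np.sqrt(2.0 * np.pi * s * (v - s) / v)
    dv = v[1] - v[0]
    return f(s) / (np.sum(phi0 * f(v)) * dv)

# Example with the (assumed) Exp(1) density f(v) = exp(-v)
lam_1 = intensity_factor(1.0, lambda v: np.exp(-v))
```

The factor is strictly positive under Assumption 3.1, reflecting that the compensator K in (10) increases only when β visits 0, at a rate set by this ratio.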

Appendix A

On the local time of the information process

In this section, we introduce and study the local time process associated with the information process.

For any continuous semimartingale X=(X t , t≥0) and for any real number x, it is possible to define the (right) local time L X (t,x) associated with X at level x up to time t using Tanaka’s formula (see, e.g., (Revuz and Yor 1999), Theorem VI.(1.2)) as follows:
$$ L^{X}(t,x):=|X_{t}-x|-|X_{0}-x|-\int_{0}^{t}\text{sign}\left(X_{s}-x\right)\,\mathrm{d} X_{s},\quad t\geq 0, $$
(25)

where sign(x):=1 if x>0 and sign(x):=−1 if x≤0. The process L X (·,x)=(L X (t,x), t≥0) appearing in relation (25) is called the (right) local time of X at level x.
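As a complementary, informal view (not used in the proofs), the local time at 0 can also be approximated as an occupation density, L(t,0)≈(2ε)^{−1} Leb{s≤t: |X_s|≤ε} for small ε, whenever d〈X,X〉 s =ds; by (8) this applies to β up to time τ. A Monte Carlo sketch for a standard Brownian path, with discretisation parameters of our own choosing:

```python
import numpy as np

def local_time_at_zero(path, dt, eps):
    """Occupation-density estimate of the local time at level 0:
    L(t, 0) ≈ (1 / (2*eps)) * Leb{s <= t : |X_s| <= eps}."""
    return np.count_nonzero(np.abs(path) <= eps) * dt / (2.0 * eps)

rng = np.random.default_rng(0)
n_steps, T = 1_000_000, 1.0
dt = T / n_steps
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
L_hat = local_time_at_zero(W, dt, eps=0.01)
# For Brownian motion, L(1, 0) has the law of |W_1| (Lévy), so E[L(1, 0)] = sqrt(2/pi)
```

The estimate is random (it approximates the local time of one path, not its expectation) and its accuracy depends on taking dt small relative to ε.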

Now, we recall the occupation time formula for local times of continuous semimartingales which is given in a form convenient for our applications. By 〈X,X〉, we denote the square variation process of a continuous semimartingale X.

Lemma A.1

Let X=(X t , t≥0) be a continuous semimartingale. There is a P-negligible set outside of which
$$\int_{0}^{t}h\left(s,X_{s}\right)\mathrm{d} \left\langle X,X\right\rangle_{s}=\int_{-\infty}^{+\infty}\left(\int_{0}^{t}h\left(s,x\right)\,\mathrm{d} L^{X}\left(s,x\right)\right)\,\mathrm{d} x\,, $$
for every t≥0 and every non-negative Borel function h on \(\mathbb {R}_{+}\times \mathbb {R}\).

Proof

See Corollary VI.(1.6) from the book by Revuz and Yor (1999) for the case when h is a non-negative Borel function defined on \(\mathbb {R}\) (i.e., it does not depend on time). The statement of the lemma is then proved by first considering the case in which h has the form \( h\left (t,x\right)=\mathbb {I}_{\left [u,v\right ]}(t)\gamma (x) \) for 0≤u<v<+∞ and a non-negative Borel function γ on \(\mathbb {R}\), and then using monotone class arguments (see Revuz and Yor (1999), Exercise VI.(1.15) or Rogers and Williams (2000), Theorem IV.(45.4)). □

Concerning continuity properties of local times, there is the following result.

Lemma A.2

Let X=(X t , t≥0) be a continuous semimartingale with canonical decomposition X=M+A, where M is a local martingale and A is a finite variation process. Then, there exists a modification of the local time process \(\left (L^{X}\left (t,x\right),t\geq 0,\,x\in \mathbb {R}\right)\) of X such that the map (t,x)↦L X (t,x) is continuous in t and càdlàg in x, P-a.s. Moreover,
$$ L^{X}\left(t,x\right)-L^{X}\left(t,x-\right) =2\int_{0}^{t}\mathbb{I}_{\{x\}}(X_s)\, \mathrm{d} A_{s}, $$
(26)

for all \(t\geq 0,\,x\in \mathbb {R}\), P-a.s.

Proof

See, e.g., (Revuz and Yor 1999), Theorem VI.(1.7). □

The information process β is a continuous semimartingale (cf. Theorem 2.8), hence the local time L β (t,x) of β at level \(x\in \mathbb {R}\) up to time t≥0 is well defined. The occupation time formula takes the following form.

Corollary A.3

We have
$$\intop_{0}^{t\wedge\tau}h\left(s,\beta_{s}\right)\mathrm{d} s=\intop_{0}^{t}h\left(s,\beta_{s}\right)\mathrm{d} \left\langle \beta,\beta\right\rangle_{s}=\intop_{-\infty}^{+\infty}\left(\intop_{0}^{t}h\left(s,x\right)\,\mathrm{d} L^{\beta}\left(s,x\right)\right)\,\mathrm{d} x\,, $$
for all t≥0 and all non-negative Borel functions h on \(\mathbb {R}_{+}\times \mathbb {R}\), P-a.s.

Proof

The first equality follows from relation (8) and the second is an application of Lemma A.1. □

An important property of the local time L β is the existence of a bicontinuous version.

Lemma A.4

There is a version of L β such that the map \((t,x)\in \mathbb {R}_{+}\times \mathbb {R}\mapsto L^{\beta }\left (t,x\right)\) is continuous, P-a.s.

Proof

We choose a version of the local time L β according to Lemma A.2. Using (26), we have that
$$L^{\beta}\left(t,x\right)-L^{\beta}\left(t,x-\right)=-2\int_{0}^{t\wedge\tau} \mathbb{I}_{\{x\}}(\beta_{s})\,u\left(s,\beta_{s}\right)\mathrm{d} s, $$
for all \(t\geq 0,\,x\in \mathbb {R}\), P-a.s., where u is the function defined by (6). Applying Corollary A.3 to the right-hand side of the last equality above, we see that
$$2\intop_{0}^{t\wedge\tau}\mathbb{I}_{\{x\}}(\beta_{s})\,u\left(s,\beta_{s}\right)\,\mathrm{d} s=2\intop_{-\infty}^{+\infty}\mathbb{I}_{\{x\}}(y)\left(\intop_{0}^{t}u\left(s,y\right)\,\mathrm{d} L^{\beta}\left(s,y\right)\right)\,\mathrm{d} y=0, $$
and hence L β (t,x)−L β (t,x−)=0, for all \(t\geq 0,\,x\in \mathbb {R}\), P-a.s., because {x} has Lebesgue measure zero. This completes the proof. □

We also make use of the boundedness of the local time with respect to the space variable.

Lemma A.5

The function x↦L β (t,x) is bounded for all \(t\in \mathbb {R}_{+}\), P-a.s. (the bound may depend on t and ω).

Proof

It follows from the occupation time formula (or from Revuz and Yor (1999), Corollary VI.(1.9)) that the local time L β (t,·) vanishes outside of the compact interval [−M t (ω),M t (ω)] where
$$ M_{t}(\omega) :=\sup_{s\in\left[0,t\right]}\left|\beta_{s}(\omega)\right|,\quad t\geq0,\;\omega\in\Omega\,, $$
(27)

which together with the continuity of L β (t,·) (see Lemma A.4) yields the boundedness of this function, P-a.s. □

Outside a negligible set, for fixed \(x\in \mathbb {R}\), the local time L β (·,x) is a positive continuous increasing function, and we can associate with it a random measure on \(\mathbb {R}_{+}\):
$$L^{\beta}\left(B,x\right):=\int_{B}\mathrm{d} L^{\beta}\left(s,x\right),\quad B\in\mathcal{B}\left(\mathbb{R}_{+}\right). $$

Lemma A.6

Outside a negligible set, for any sequence \(\left (x_{n}\right)_{n\in \mathbb {N}}\) in \(\mathbb {R}\) converging to \(x\in \mathbb {R}\), the sequence \(\left (L^{\beta }\left (\cdot,x_{n}\right)\right)_{n\in \mathbb {N}}\) converges weakly to L β (·,x), i.e.,
$$\int_{\mathbb{R}_{+}} g(s)L^{\beta}\left(\mathrm{d} s,x_{n}\right)\xrightarrow[n\rightarrow \infty]{}\int_{\mathbb{R}_{+}} g(s)L^{\beta}\left(\mathrm{d} s,x\right), $$
for all bounded and continuous functions \(g:\mathbb {R}_{+}\mapsto \mathbb {R}\).

Proof

We fix a negligible set outside of which L β is bicontinuous (cf. Lemma A.4); we work outside of this set in what follows. The measures \(\left (L^{\beta }\left (\cdot,x_{n}\right)\right)_{n\in \mathbb {N}}\) are finite measures on \(\mathbb {R}_{+}\) and they are supported by [0,τ]. By the continuity of L β (s,·), we have that \(L^{\beta }\left (s,x_{n}\right)\xrightarrow [n\rightarrow \infty ]{}L^{\beta }\left (s,x\right),\,s\geq 0\), from which it follows that
$$ L^{\beta}\left(\left[0,s\right],x_{n}\right)\xrightarrow[n\rightarrow \infty]{}L^{\beta}\left(\left[0,s\right],x\right),\quad s\geq0\,. $$
(28)
We also have this convergence for the whole space \(\mathbb {R}_{+}\):
$$L^{\beta}\left(\mathbb{R}_{+},x_{n}\right)=L^{\beta}\left(\left[0,\tau\right],x_{n}\right) \xrightarrow[n\rightarrow\infty]{}L^{\beta}\left(\left[0,\tau\right],x\right)=L^{\beta}\left(\mathbb{R}_{+},x\right)\,. $$
From (28) and the convergence of the total masses, we conclude that the measures L β (·,x n ) converge weakly to L β (·,x). □

Appendix B

Auxiliary results

In (19), we introduced the function q by
$$q(h,x):=\frac{1}{h}\int_{0}^{h} p(u,x,0)\,\mathrm{d} u,\quad 0<h\le 1, \ x\in\mathbb{R}\,, $$
where p(t,·,y) is the density of the normal distribution with variance t and expectation y (see (2)).

Lemma B.1

The functions q(h,·) are probability density functions with respect to the Lebesgue measure on \(\mathbb {R}\). The probability measures \(\mathbb {Q}_{h}\) on \(\mathbb {R}\) associated with the densities q(h,·) converge weakly, as h↓0, to the Dirac measure δ 0 at 0.

Proof

The first statement of the lemma is obvious. For verifying the second statement, let f be a bounded continuous function on \(\mathbb {R}\). Using Fubini’s theorem, we obtain
$$\begin{array}{@{}rcl@{}} \int_{\mathbb{R}} f(x)\,\mathbb{Q}_{h}(\mathrm{d} x)&=&\int_{\mathbb{R}} f(x)\,q_{h}(x)\,\mathrm{d} x\\ &=&\int_{\mathbb{R}} f(x)\,\left(\frac{1}{h}\int_{0}^hp(u,x,0)\,\mathrm{d} u\right)\,\mathrm{d} x\\ &=&\frac{1}{h}\int_{0}^{h}\left(\int_{\mathbb{R}} f(x)\,p(u,x,0)\,\mathrm{d} x\right)\,\mathrm{d} u\\ &=&\frac{1}{h}\int_{0}^{h}\left(\int_{\mathbb{R}} f(x)\,\mathcal{N}(0,u)(\mathrm{d} x)\right)\,\mathrm{d} u\,. \end{array} $$
Since the function \(u\in [0,1]\mapsto \mathcal {N}(0,u)\), which associates with every u∈[0,1] the centered Gaussian law \(\mathcal {N}(0,u)\), is continuous with respect to weak convergence of probability measures (note that \(\mathcal {N}(0,0)=\delta _{0}\)), the function \(u\in [0,1]\mapsto \int _{\mathbb {R}} f(x)\,\mathcal {N}(0,u)(\mathrm {d} x)\) is continuous. An application of the fundamental theorem of calculus yields that the right-hand side converges to \(\int _{\mathbb {R}} f(x)\,\delta _{0}(\mathrm {d} x)\) as h↓0, and hence
$${\lim}_{h\downarrow0}\int_{\mathbb{R}} f(x)\,\mathbb{Q}_{h}(\mathrm{d} x)=f(0)\,, $$
proving the second statement of the lemma. □
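Lemma B.1 lends itself to a quick numerical check. For the test function f(x)=cos(x), one has ∫cos(x) 𝒩(0,u)(dx)=e^{−u/2}, so ∫cos dQ h = (2/h)(1−e^{−h/2}), which tends to cos(0)=1 as h↓0. The sketch below (with illustrative choices of h and sample size) also cross-checks this against Monte Carlo sampling from Q h , using that Q h is the law of √U·Z with U uniform on (0,h) and Z standard normal, independent.

```python
import numpy as np

def q_expectation_cos(h):
    """Integral of cos against Q_h via Fubini:
    (1/h) * int_0^h E[cos(N(0,u))] du = (1/h) * int_0^h exp(-u/2) du."""
    return (2.0 / h) * (1.0 - np.exp(-h / 2.0))

# Monte Carlo cross-check: Q_h is the law of sqrt(U)*Z, U ~ Uniform(0,h), Z ~ N(0,1)
rng = np.random.default_rng(1)
h = 0.5
U = rng.uniform(0.0, h, 200_000)
Z = rng.normal(0.0, 1.0, 200_000)
mc = np.cos(np.sqrt(U) * Z).mean()

print(q_expectation_cos(h), mc)   # the two values should be close
print(q_expectation_cos(1e-4))    # close to cos(0) = 1, illustrating Q_h -> delta_0
```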
Now, we consider the function g introduced in (15):
$$g\left(s,x\right):=\Big(\int_{s}^{\infty}\varphi_{s}\left(v,x\right)f(v)\,\mathrm{d} v\Big)^{-1},\quad s>0,\,x\in\mathbb{R}\,. $$

Lemma B.2

(1) For all \(x\in \mathbb {R}\) and 0<t 0<t, the function \(g\left (\cdot,x\right): [t_{0},t]\mapsto \mathbb {R}\) is bounded, i.e., there exists a real constant C(t 0,t,x) such that
$$\sup_{s\in[t_{0},t]}g\left(s,x\right)\leq C(t_{0},t,x)\,. $$
(2) For all \(x\in \mathbb {R}\) and 0<t 0<t, the function \(g(\cdot,x): [t_{0},t]\mapsto \mathbb {R}\) is continuous, i.e., for all s n , s ∈ [t 0,t] such that s n → s,
$${\lim}_{s_{n}\rightarrow s}g(s_{n},x)=g(s,x)\,. $$
(3) Let \(\left (x_{n}\right)_{n\in \mathbb {N}}\) be a sequence converging monotonically to \(x\in \mathbb {R}\). Then, for all 0<t 0<t,
$$\sup_{s\in\left[t_{0},t\right]}\left|g(s,x_{n})-g(s,x)\right| \xrightarrow[n\rightarrow \infty]{}0\,. $$

Proof

Let us define, for every s ∈ [t 0,t] and \(x\in \mathbb {R}\),
$$ D\left(s,x\right):=\int_{s}^{\infty} \sqrt{\frac{v}{2\pi s\,(v-s)}}\exp\left(-\frac{v\,x^{2}}{2s\,(v-s)}\right)f(v)\,\mathrm{d} v\,, $$
(29)
and rewrite g as
$$ g(s,x)=\frac{1}{D\left(s,x\right)},\quad s\in\left[t_{0},t\right],\;x\in\mathbb{R}\,. $$
(30)
In order to prove statement (1), it suffices to verify that there exists a constant \(\tilde {C}\left (t_{0},t,x\right)\) such that
$$ 0<\tilde{C}\left(t_{0},t,x\right)\leq D\left(s,x\right),\quad s\in[t_{0},t],\; x\in\mathbb{R}\,. $$
(31)
Such a constant can be found by setting
$$ \tilde{C}\left(t_{0},t,x\right):=\int_{t}^{\infty}\sqrt{\frac{1}{2\pi t}}\exp\left(-\frac{v\,x^{2}}{2t_{0}(v-t)}\right)f(v)\,\mathrm{d} v\,, $$
(32)

proving the first statement of the lemma.

In order to prove statement (2) of the lemma, it suffices to verify that the function s↦D(s,x), s ∈ [t 0,t], is continuous, a fact that can be proved using Lebesgue's dominated convergence theorem. Indeed, let s n , s ∈ [t 0,t] be such that s n → s as n→∞. Rewriting (29), we get
$$ D\left(s_{n},x\right)=\int_{t_{0}}^{\infty} \mathbb{I}_{(s_{n},+\infty)}(v)\sqrt{\frac{v}{2\pi s_{n}\,(v-s_{n})}}\exp\left(-\frac{v\,x^{2}}{2s_{n}\,(v-s_{n})}\right)f(v)\,\mathrm{d} v\,. $$
First, we consider the integral from t to ∞: For v≥t, we can bound the integrand from above by \(\sqrt {\frac {v}{2\pi t_{0}\,(v-t)}}\, f(v)\), which is integrable over [t,+∞). For the second part of the integral, from t 0 to t, we estimate the integrand by \(\mathbb {I}_{(s_{n},+\infty)}(v)\sqrt {\frac {t}{2\pi t_{0}\,(v-s_{n})}}\,c\), where c is an upper bound of f on [t 0,t], and by integrating we observe that
$${\lim}_{n\rightarrow\infty}\int_{t_{0}}^{t}\!\mathbb{I}_{(s_{n},+\infty)}(v) \sqrt{\frac{t}{2\pi t_{0}\,(v-s_{n})}}\,\mathrm{d} v \,=\,\int_{t_{0}}^{t}\!\mathbb{I}_{(s,+\infty)}(v)\sqrt{\frac{t}{2\pi t_{0}\,(v-s)}}\,\mathrm{d} v\,. $$
As the integrands are nonnegative, we get convergence in L 1([t 0,t]) and hence uniform integrability (cf. Theorem C.7). This means that the sequence
$$\mathbb{I}_{(s_{n},+\infty)}(v)\sqrt{\frac{v}{2\pi s_{n}\,(v-s_{n})}}\exp\left(-\frac{v\,x^{2}}{2s_{n}\,(v-s_{n})}\right)f(v) $$
is uniformly integrable on [t 0,t] and we can apply Lebesgue’s theorem (cf. Theorem C.7) to conclude
$$\begin{array}{@{}rcl@{}} \lefteqn{{\lim}_{n\rightarrow\infty}\int_{t_0}^t \mathbb{I}_{(s_n,+\infty)}(v)\sqrt{\frac{v}{2\pi s_{n}\,(v-s_n)}}\exp\left(-\frac{v\,x^{2}}{2s_{n}\,(v-s_n)}\right)f(v)\,\mathrm{d} v}\\ &=&\int_{t_0}^t \mathbb{I}_{(s,+\infty)}(v)\sqrt{\frac{v}{2\pi s\,(v-s)}}\exp\left(-\frac{v\,x^{2}}{2s\,(v-s)}\right)f(v)\,\mathrm{d} v\,. \end{array} $$
Summarizing, we get
$${\lim}_{n\rightarrow\infty}D\left(s_{n},x\right)=D\left(s,x\right) $$
and the proof of statement (2) of the lemma is finished.
We turn to the proof of statement (3) of the lemma. Using relation (30), we see that
$$\left|g\left(s,x_{n}\right)-g\left(s,x\right)\right| =\frac{\left|D\left(s,x_{n}\right)-D\left(s,x\right)\right|} {D\left(s,x_{n}\right)D\left(s,x\right)} $$
and from inequality (31) we get that
$$\sup_{s\in\left[t_{0},t\right]}\left|g\left(s,x_{n}\right)-g\left(s,x\right)\right| \leq\frac{\sup_{s\in\left[t_{0},t\right]}\left|D\left(s,x_{n}\right)-D\left(s,x\right)\right|} {\tilde{C}\left(t_{0},t,x_{n}\right)\,\tilde{C}\left(t_{0},t,x\right)}, $$
where \(\tilde {C}\left (t_{0},t,x\right)\) is defined by (32). It is easy to see that
$${\lim}_{n\rightarrow \infty} \frac{1} {\tilde{C}\left(t_{0},t,x_{n}\right)\tilde{C}\left(t_{0},t,x\right)}=\frac{1}{\tilde{C}\left(t_{0},t,x\right)^{2}}<+\infty\,. $$
Hence, it remains to prove that
$$\sup_{s\in\left[t_{0},t\right]}\left|D\left(s,x_{n}\right)-D\left(s,x\right)\right|\xrightarrow[n\rightarrow \infty]{}0. $$

By assumption, the sequence x n converges monotonically to x. In this case, it is easy to see that the sequence of functions D(·,x n ) is monotone. Furthermore, using Lebesgue's dominated convergence theorem, we verify that D(s,x n ) converges to D(s,x) for all s ∈ [t 0,t]. Since the function s↦D(s,x) is also continuous on [t 0,t], according to Dini's theorem, D(·,x n ) converges uniformly to D(·,x) on [t 0,t]. This implies the third statement of the lemma and the proof is finished. □
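As a numerical companion to Lemma B.2, the function D of (29) can be evaluated by quadrature once a density f is fixed. The choice f(v)=e^{−v} (an Exp(1) default time) below is purely illustrative; the substitution v = s + w² removes the integrable 1/√(v−s) singularity at the lower endpoint.

```python
import numpy as np
from scipy.integrate import quad

def f(v):
    return np.exp(-v)            # illustrative density of tau: Exp(1)

def D(s, x):
    """D(s,x) = int_s^inf sqrt(v/(2 pi s (v-s))) exp(-v x^2/(2 s (v-s))) f(v) dv,
    computed after the substitution v = s + w^2 (dv = 2w dw), which removes
    the integrable 1/sqrt(v-s) singularity at v = s."""
    def integrand(w):
        v = s + w * w
        return (2.0 * np.sqrt(v / (2.0 * np.pi * s))
                * np.exp(-v * x * x / (2.0 * s * w * w)) * f(v))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

def g(s, x):
    return 1.0 / D(s, x)         # the function g of (15)

print(D(1.0, 0.0), g(1.0, 0.0))  # D(s,x) is strictly positive, so g is finite
```

Evaluating D on a grid of s and x values also illustrates statements (1) and (2) of the lemma: D is bounded away from 0 on [t 0,t] and varies continuously.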

Lemma B.3

Let h,h n be bounded and continuous functions on a metric space E, and μ,μ n be finite measures on \(\left (E,\mathcal {B}(E)\right)\). Suppose that the following two conditions are satisfied:
  1. The sequence of functions h n converges uniformly to h.

  2. The sequence of measures μ n converges weakly to μ.

Then, \({\lim }_{n\uparrow +\infty }\int _{E}h_{n}\,\mathrm {d}\mu _{n}=\int _{E}h \,\mathrm {d}\mu \).

Proof

It can immediately be verified that
$$\left|\int_{E}h_{n}\,\mathrm{d}\mu_{n}-\int_{E}h\,\mathrm{d}\mu\right| \leq\sup_{x\in E}\left|h(x)-h_{n}(x)\right|\,\mu_{n}(E)+\left|\int_{E}h\,\mathrm{d}\mu_{n}-\int_{E}h\,\mathrm{d}\mu\right|, $$
which converges to 0 as n→+∞. □
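A minimal discrete illustration of Lemma B.3 (with hypothetical choices of h n and μ n ): take μ n = (1/n)∑ k=1,…,n δ k/n , which converges weakly to Lebesgue measure on [0,1], and h n (x) = x² + sin(x)/n, which converges uniformly to h(x) = x²; then ∫h n dμ n → ∫₀¹ x² dx = 1/3.

```python
import numpy as np

def integral_hn_dmun(n):
    """Integral of h_n against mu_n, where mu_n = (1/n) sum of point masses
    at k/n (weak limit: Lebesgue on [0,1]) and h_n(x) = x^2 + sin(x)/n
    (uniform limit: h(x) = x^2)."""
    x = np.arange(1, n + 1) / n
    return np.mean(x ** 2 + np.sin(x) / n)

limit = 1.0 / 3.0                # integral of x^2 over [0,1], i.e. of h against mu
print(integral_hn_dmun(10), integral_hn_dmun(10_000), limit)
```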

Lemma B.4

Let 0<t 0<t. The function \(k:\,\mathbb {R}\rightarrow \mathbb {R}_{+}\) given by
$$k(x):=\int_{t_{0}}^{t}g\left(s,x\right)\,f(s)\,\mathrm{d} L^{\beta}(s,x),\quad x\in \mathbb{R}\,, $$
is bounded and continuous, where the function g is given by (15).

Proof

Let us first restrict ourselves to a compact subset E of \(\mathbb {R}\) and prove the right- and left-continuity, hence the continuity, of the function k. Let x n be a sequence from E converging monotonically to x∈E. From Lemma B.2, we know that the bounded and continuous functions \(g\left (\cdot,x_{n}\right):\,[t_{0},t]\rightarrow \mathbb {R}\) converge uniformly, as n→∞, to the bounded and continuous function \(g\left (\cdot,x\right):\,[t_{0},t]\rightarrow \mathbb {R}\). From Lemma A.6, we obtain that the sequence of measures L β (·,x n ) converges weakly to L β (·,x) as n→∞. Applying Lemma B.3, we have that
$$\begin{array}{@{}rcl@{}} {\lim}_{n\rightarrow\infty}k\left(x_{n}\right)&=&{\lim}_{n\rightarrow\infty}\int_{t_0}^{t}g\left(s,x_{n}\right)f(s)\,\mathrm{d} L^{\beta} \left(s,x_{n}\right)\\ &=&\int_{t_0}^{t}g\left(s,x\right)f(s)\,\mathrm{d} L^{\beta} \left(s,x\right)=k(x). \end{array} $$

Consequently, the function k is continuous on E. The boundedness of k on E now follows from the compactness of E. In order to show that the statement also holds on all of \(\mathbb {R}\), let us choose E=[−M t −1,M t +1] (see (27) for notation). As L β (s,x)=0 for s ∈ [0,t] and x ∉ [−M t ,M t ] (see the proof of Lemma A.5), the statement follows. □

Appendix C

The Meyer approach to the compensator

Below, we briefly recall the approach developed by Meyer (1966) for computing the compensator of a right-continuous potential of class (D). In this section, \(\mathbb {F}=\left (\mathcal {F}_{t}\right)_{t\geq 0}\) denotes a filtration satisfying the usual hypotheses of right-continuity and completeness.

We begin with the definition of a right-continuous potential of class (D). Let X=(X t , t≥0) be a right-continuous \(\mathbb {F}\)-supermartingale and let \(\mathcal {T}\) be the collection of all finite \(\mathbb {F}\)-stopping times. The process X is said to belong to the class (D) if the collection of random variables \(X_{T},\,T\in \mathcal {T}\), is uniformly integrable. We say that the right-continuous supermartingale X is a potential if the random variables X t are non-negative and if
$${\lim}_{t\rightarrow+\infty}\mathbf{E}\left[X_{t}\right]=0. $$

Definition C.1

Let C=(C t , t≥0) be an integrable \(\mathbb {F}\)-adapted right-continuous increasing process, and let L=(L t , t≥0) be a right-continuous modification of the martingale \(\left (\mathbf {E}\left [C_{\infty }|\mathcal {F}_{t}\right ],\,t\geq 0\right)\); the process Y=(Y t , t≥0) given by
$$Y_{t}:=L_{t}-C_{t} $$
is called the potential generated by C.

The following result establishes a connection between potentials generated by an increasing process and potentials of class (D). Let h be a strictly positive real number, let X=(X t , t≥0) be a potential of class (D), and denote by (p h X t , t≥0) the right-continuous modification of the supermartingale \(\left (\mathbf {E}\left [X_{t+h}|\mathcal {F}_{t}\right ],\,t\geq 0\right)\).

Theorem C.2

Let X=(X t , t≥0) be a potential of class (D), let h>0 and \(A^{h}=\left (A_{t}^{h},\,t\geq 0\right)\) be the process defined by
$$ A_{t}^{h}:=\frac{1}{h}\intop_{0}^{t}\left(X_{s}-p_{h}X_{s}\right)\mathrm{d} s. $$
(33)
Then, A h is an integrable increasing process which generates a potential \(X^{h}=\left (X_{t}^{h},\,t\geq 0\right)\) of class (D) dominated by X, i.e., the process X−X h is a potential. It holds that
$$X_{t}^{h}=\frac{1}{h}\mathbf{E}\left[\int_{0}^{h}X_{t+s}\,\mathrm{d} s|\mathcal{F}_{t}\right], \quad \mathbf{P}\text{-a.s.},\; t\geq 0\,. $$

Proof

See, e.g., (Meyer 1966), VII.T28. □
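The approximation (33) can be made explicit in the textbook case of an exponential default time observed through its own indicator filtration; this is an illustrative example, not the model of the paper. For τ ∼ Exp(λ) and X t = 1 {τ>t} , memorylessness gives p h X t = 1 {τ>t} e^{−λh}, so A t h = h^{−1}(1−e^{−λh})(t∧τ), which converges as h↓0 to the well-known compensator λ(t∧τ). A short numerical check (λ, t, and τ(ω) below are hypothetical values):

```python
import numpy as np

lam = 0.5                        # hypothetical default intensity lambda
t, tau = 2.0, 1.3                # evaluation time t and a realized default time tau

def A_h(h):
    # Formula (33) evaluated in closed form: with X_s = 1_{tau > s} one has
    # p_h X_s = E[X_{s+h} | F_s] = 1_{tau > s} * exp(-lam*h)  (memorylessness),
    # hence A_t^h = (1/h) * (1 - exp(-lam*h)) * min(t, tau).
    return (1.0 - np.exp(-lam * h)) / h * min(t, tau)

compensator = lam * min(t, tau)  # the limit lam * (t ^ tau) as h -> 0
for h in (1.0, 0.1, 0.001):
    print(h, A_h(h))
print(compensator)
```

Here the convergence even holds pathwise and monotonically; in the general setting of Theorem C.4 below, one only obtains convergence in the weak topology σ(L 1,L ∞).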

An increasing process A=(A t , t≥0) is called natural (with respect to the filtration \(\mathbb {F}\)) if, for every bounded right-continuous \(\mathbb {F}\)-martingale M=(M t , t≥0), we have
$$\mathbf{E}\left[\intop_{\left(0,t\right]}M_{s}\,\mathrm{d} A_{s}\right]=\mathbf{E}\left[\intop_{\left(0,t\right]}M_{s-}\,\mathrm{d} A_{s}\right],\quad t>0\,. $$

It is well known that an increasing process A is natural with respect to \(\mathbb {F}\) if and only if it is \(\mathbb {F}\)-predictable.

For the following definition of convergence in the sense of the weak topology σ(L 1,L ∞), see (Meyer 1966), II.10.

Definition C.3

Let \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) be a sequence of integrable real-valued random variables. The sequence \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is said to converge to an integrable random variable ξ in the weak topology σ(L 1,L ∞) if
$${\lim}_{n\rightarrow+\infty}\mathbf{E}\left[\xi_{n}\eta\right]=\mathbf{E}\left[\xi\eta\right],\;\textrm{for all }\eta\in L^{\infty}\left(\mathbf{P}\right). $$

Theorem C.4

Let X=(X t , t≥0) be a right-continuous potential of class (D). Then, there exists an integrable natural increasing process A=(A t , t≥0) which generates X, and this process is unique. For every stopping time T we have
$$A_{T}^{h}\xrightarrow[h\downarrow0]{{\!~\!}^{\sigma\left(L^{1},L^{\infty}\right)}}A_{T}. $$

Proof

See, e.g., (Meyer 1966), VII.T29. □

In the framework of the information-based approach, the process H=(H t , t≥0), given by (9), is a bounded increasing process which is \(\mathbb {F}^{\beta }\)-adapted. It is a submartingale and it can be immediately seen that the process G=(G t , t≥0), given by (22), is a right-continuous potential of class (D). By Theorem C.2, the processes K h , h>0, defined by (11), generate a family of potentials G h dominated by G.

Corollary C.5

There exists a unique integrable natural increasing process \(K^{w}=\left (K_{t}^{w},\,t\geq 0\right)\) which generates the process G, defined by (22) and, for every \(\mathbb {F}^{\beta }\)-stopping time T, we have that
$$K_{T}^{h}\xrightarrow[h\downarrow0]{{\!~\!}^{\sigma\left(L^{1},L^{\infty}\right)}}K_{T}^{w}, $$
where K h is the process defined by (11).

Proof

See Theorem C.4. □

Theorem C.6

(Dunford–Pettis compactness criterion) Let \(\mathcal {A}\) be a subset of the space L 1(P). The following two properties are equivalent:
  1. \(\mathcal {A}\) is uniformly integrable;

  2. \(\mathcal {A}\) is relatively compact in the weak topology σ(L 1,L ∞).

Proof

See (Meyer 1966), II.T23. □

Theorem C.7

Let \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) be a sequence of integrable random variables converging in probability to a random variable ξ. Then, ξ n converges to ξ in L 1(P) if and only if \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable. If the random variables ξ n are non-negative, then \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable if and only if
$${\lim}_{n\rightarrow+\infty}\mathbf{E}\left[\xi_{n}\right]=\mathbf{E}[\xi]<+\infty\,. $$

Proof

See (Meyer 1966), II.T21. □
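The necessity of uniform integrability in Theorem C.7 can be illustrated with the classical example ξ n = n·1 {U<1/n} , U uniform on (0,1): ξ n → 0 a.s. (hence in probability), yet E[ξ n ] = 1 for all n, so ξ n does not converge to 0 in L 1 and the family is not uniformly integrable. A Monte Carlo sketch (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.uniform(0.0, 1.0, 1_000_000)

for n in (10, 1_000, 100_000):
    xi_n = n * (U < 1.0 / n)     # xi_n = n * indicator(U < 1/n): -> 0 a.s.
    # E[xi_n] = 1 for every n, so E[xi_n] does not converge to E[0] = 0:
    # the family (xi_n) is not uniformly integrable, no L^1 convergence
    print(n, xi_n.mean(), (xi_n != 0).mean())
```

The empirical mean stays near 1 while the fraction of nonzero samples shrinks like 1/n, exhibiting the escaping mass that uniform integrability rules out.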

Lemma C.8

Let \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) be a sequence of random variables and ξ,ηL 1(P) such that:
  1. \(\xi _{n}\xrightarrow [\:n\rightarrow +\infty ]{\sigma \left (L^{1},L^{\infty }\right)}\eta ;\)

  2. ξ n → ξ, P-a.s.

Then, η=ξ, P-a.s.

Proof

From condition (1), we see that \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is relatively compact in the weak topology σ(L 1,L ∞). By Theorem C.6, it follows that the family \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable. By condition (2), ξ n → ξ, P-a.s., and hence also in probability. Therefore, by Theorem C.7, ξ n → ξ in the L 1-norm and, consequently, \(\xi _{n}\xrightarrow [\:n\rightarrow +\infty ]{\sigma \left (L^{1},L^{\infty }\right)}\xi \). The statement of the lemma then follows from the uniqueness of the limit. □

Declarations

Funding

This work has been financially supported by the European Community's FP 7 Program under contract PITN-GA-2008-213841, Marie Curie ITN "Controlled Systems".

Authors’ contributions

The three authors worked together on the manuscript and approved its final version.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Numerix
(2)
Université de Bretagne Occidentale
(3)
School of Mathematics, Shandong University
(4)
Friedrich-Schiller-Universität, Fakultät für Mathematik und Informatik, Institut für Stochastik

References

  1. Aven, T: A theorem for determining the compensator of a counting process. Scand. J. Stat. 12, 62–72 (1985).
  2. Bedini, ML: Information on a Default Time: Brownian Bridges on Stochastic Intervals and Enlargement of Filtrations. Dissertation, Friedrich-Schiller-Universität (2012).
  3. Bedini, ML, Buckdahn, R, Engelbert, HJ: Brownian bridges on random intervals. Teor. Veroyatnost. i Primenen. 61(1), 129–157 (2016).
  4. Bedini, ML, Hinz, M: Credit default prediction and parabolic potential theory. Stat. Probab. Lett. 124, 121–125 (2017).
  5. Giesecke, K: Default and information. J. Econ. Dyn. Control 30, 2281–2303 (2006).
  6. Jarrow, R, Protter, P: Structural versus reduced form models: a new information based perspective. J. Invest. Manag. 2(2), 1–10 (2004).
  7. Jeanblanc, M, Le Cam, Y: Reduced form modelling for credit risk. SSRN (2008). http://ssrn.com/abstract=1021545. Accessed 14 November 2016.
  8. Jeanblanc, M, Le Cam, Y: Progressive enlargement of filtrations with initial times. Stoch. Proc. Appl. 119(8), 2523–2543 (2009).
  9. Jeanblanc, M, Le Cam, Y: Immersion property and credit risk modelling. In: Delbaen, F, Miklós, R, Stricker, C (eds.) Optimality and Risk - Modern Trends in Mathematical Finance, pp. 99–132. Springer, Berlin Heidelberg (2010).
  10. Jeanblanc, M, Yor, M, Chesney, M: Mathematical Methods for Financial Markets. Springer-Verlag, London (2009).
  11. Kallenberg, O: Foundations of Modern Probability. Second edition. Springer-Verlag, New York (2002).
  12. Karatzas, I, Shreve, S: Brownian Motion and Stochastic Calculus. Second edition. Springer-Verlag, Berlin (1991).
  13. Meyer, PA: Probability and Potentials. Blaisdell Publishing Company, London (1966).
  14. Revuz, D, Yor, M: Continuous Martingales and Brownian Motion. Third edition. Springer-Verlag, Berlin (1999).
  15. Rogers, LCG, Williams, D: Diffusions, Markov Processes and Martingales. Vol. 2: Itô Calculus. Second edition. Cambridge University Press, Cambridge (2000).

Copyright

© The Author(s) 2017