# On the compensator of the default process in an information-based model

- Matteo Ludovico Bedini^{1}
- Rainer Buckdahn^{2,3}
- Hans-Jürgen Engelbert^{4}

**2**:10

https://doi.org/10.1186/s41546-017-0017-4

© The Author(s) 2017

**Received: **18 November 2016

**Accepted: **18 April 2017

**Published: **11 September 2017

## Abstract

This paper provides sufficient conditions for the time of bankruptcy (of a company or a state) to be a totally inaccessible stopping time and gives an explicit computation of its compensator in a framework where the flow of market information about the default is modelled explicitly by a Brownian bridge between 0 and 0 on a random time interval.

### Keywords

Default time · Totally inaccessible stopping time · Brownian bridge on random intervals · Local time · Credit risk · Compensator process

## Introduction

This paper is concerned with the random time *τ* (called *default time*) at which a certain company (or state) bankrupts. Modelling the flow of market information concerning a default time is crucial, and in this paper we consider a process *β*=(*β*_{ t }, *t*≥0) whose natural filtration \(\mathbb {F}^{\beta }\) describes the flow of information available to market agents about the time at which the default occurs. For this reason, the process *β* will be called the *information process*. In the present paper, we define *β* to be a Brownian bridge between 0 and 0 of random length *τ*:

\(\beta _{t}:=W_{t\wedge \tau }-\frac {t\wedge \tau }{\tau }W_{\tau },\quad t\geq 0,\)

where *W*=(*W*_{ t }, *t*≥0) is a Brownian motion independent of *τ*.
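
For illustration, the following sketch simulates one path of the information process, assuming the definition \(\beta _{t}=W_{t\wedge \tau }-\frac {t\wedge \tau }{\tau }W_{\tau }\) recalled above; the exponential law for *τ*, the time grid, and all names are illustrative choices, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def information_process(T=2.0, n=2000, rng=rng):
    """Simulate beta_t = W_{t ∧ τ} - ((t ∧ τ)/τ) W_τ on a uniform grid."""
    tau = rng.exponential(1.0)          # assumed (illustrative) law of the default time
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    # Brownian motion sampled on the grid
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    W_tau = np.interp(tau, t, W)        # W_τ by linear interpolation (grid approximation)
    s = np.minimum(t, tau)              # t ∧ τ
    W_s = np.interp(s, t, W)            # W_{t ∧ τ}
    beta = W_s - (s / tau) * W_tau
    return t, beta, tau

t, beta, tau = information_process()
```

Note that the simulated path starts at 0 and is identically 0 from time *τ* onwards, in line with Lemma 2.4 below.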

Our main result states that if *τ* admits a continuous density *f* with respect to the Lebesgue measure, then *τ* is a totally inaccessible stopping time and its compensator *K*=(*K*_{ t }, *t*≥0) admits an explicit representation (see Theorem 3.2 below) in terms of *L*^{ β }(*t*,0), the local time of the information process *β* at level 0 up to time *t*.

Knowing whether the default time is a predictable, accessible, or totally inaccessible stopping time is very important in a mathematical credit risk model. A predictable default time is typical of structural credit risk models, while totally inaccessible default times are one of the most important features of reduced-form credit risk models. In the former framework, market agents know when the default is about to occur, while in the latter the default occurs by surprise. The fact that financial markets cannot foresee the time of default of a company makes the reduced-form models well accepted by practitioners. In this sense, totally inaccessible default times seem to be the best candidates for modelling times of bankruptcy. We refer, among others, to the papers of Jarrow and Protter (2004) and of Giesecke (2006) on the relations between financial information and the properties of the default time, and also to the series of papers of Jeanblanc and Le Cam (2008, 2009, 2010). It is remarkable that in our setting the default time is a totally inaccessible stopping time under the common assumption that it admits a continuous density with respect to the Lebesgue measure. Both the hypothesis that the default time admits a continuous density and its consequence that the default occurs by surprise are standard in mathematical credit risk models, but the information-based approach has the additional feature of an explicit model for the flow of information, which is more sophisticated than the standard approach. There, the available information on the default is modelled through \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\), the single-jump process occurring at *τ*, meaning that people only know whether or not the default has occurred. Financial reality is more complex: there are periods in which a default is more likely to happen than in others.
In the information-based approach, periods of fear of an imminent default correspond to situations where the information process is close to 0, while periods when investors are relatively sure that the default is not going to occur immediately correspond to situations where *β*_{ t } is far from 0.

The paper is organized as follows. In the section “The information process and its basic properties”, we recall the definition and the main properties of the information process. In the section “The compensator of the default time”, we state and prove Theorem 3.2, which is the main result of the paper. In Appendix A, we provide the properties of the local time associated with the information process. In Appendix B, we give the proofs of some auxiliary lemmas. Finally, in Appendix C, for the sake of easy reference, we recall the so-called Laplacian approach developed by Meyer (see his book (Meyer 1966)) for computing the compensator of a right-continuous potential of class (D). It is an important ingredient of the approach adopted in this note to determine the compensator of the \(\mathbb {F}^{\beta }\)-submartingale \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\).

The idea of modelling the information about the default time with a Brownian bridge defined on a stochastic interval was introduced in the thesis (Bedini 2012). The definition of the information process *β*, the study of its basic properties, and an application to the problem of pricing a Credit Default Swap (one of the most traded derivatives in the credit market) have also recently appeared in the paper (Bedini et al. 2016).

Non-trivial sufficient conditions for the default time to be a predictable stopping time will be considered in another paper (Bedini and Hinz 2017). Other topics related to Brownian bridges on stochastic intervals (which will not be considered in this paper) concern the progressive enlargement of a reference filtration \(\mathbb {F}\) by the filtration \(\mathbb {F}^{\beta }\) generated by the information process and further applications to Mathematical Finance.

## The information process and its basic properties

We start by recalling the definition and the basic properties of a Brownian bridge between 0 and 0 of random length. The material in this section gives a résumé of some of the results obtained in the paper (Bedini et al. 2016), to which we shall refer for the proofs and more details on the basic properties of such a process.

If \(A\subseteq \mathbb {R}\) (where \(\mathbb {R}\) denotes the set of real numbers), then the set *A*_{+} is defined as \(A_{+}:=A\cap \{x\in \mathbb {R}:x\geq 0\}\). If *E* is a topological space, then \(\mathcal {B}(E)\) denotes the Borel *σ*-algebra over *E*. The indicator function of a set *A* will be denoted by \(\mathbb {I}_{A}\). A function \(f:\mathbb {R}\rightarrow \mathbb {R}\) will be said to be *càdlàg* if it is right-continuous with limits from the left.

Let \(\left (\Omega,\mathcal {F},\mathbf {P}\right)\) be a complete probability space. We denote by \(\mathcal {N}_{P}\) the collection of **P**-null sets of \(\mathcal {F}\). If \(\mathcal {L}\) is the law of the random variable *ξ* we shall write \(\xi \sim \mathcal {L}\). Unless otherwise specified, all filtrations considered in the following are supposed to satisfy the usual conditions of right continuity and completeness.

Let *τ*:*Ω*→(0,+*∞*) be a strictly positive random time, whose distribution function is denoted by *F*: \(F(t):=\mathbf {P}\left (\tau \leq t\right),\;t\in \mathbb {R}_{+}\). The time *τ* models the random time at which some default occurs and, hereinafter, it will be called *default time*.

Let *W*=(*W*_{ t }, *t*≥0) be a Brownian motion defined on \(\left (\Omega,\mathcal {F},\mathbf {P}\right)\) and starting from 0. We shall always make use of the following assumption:

###
**Assumption 2.1**

The random time *τ* and the Brownian motion *W* are independent.

Given the Brownian motion *W* and a strictly positive real number *r*, a standard Brownian bridge \(\beta ^{r}=\left (\beta _{t}^{r},\,t\geq 0\right)\) between 0 and 0 of length *r* is defined by

\(\beta _{t}^{r}:=W_{t\wedge r}-\frac {t\wedge r}{r}W_{r},\quad t\geq 0.\)

Now, we are going to introduce the definition of the Brownian bridge of random length (see (Bedini et al. 2016), Definition 3.1).

###
**Definition 2.2**

The process *β*=(*β*_{ t }, *t*≥0) given by

\(\beta _{t}:=W_{t\wedge \tau }-\frac {t\wedge \tau }{\tau }W_{\tau },\quad t\geq 0,\)

will be called Brownian bridge of random length *τ*. We will often say that *β*=(*β*_{ t }, *t*≥0) is the information process (for the random time *τ* based on *W*).

The completed natural filtration of *β* will be denoted by \(\mathbb {F}^{\beta }=(\mathcal {F}_{t}^{\beta })_{t\geq 0}\): \(\mathcal {F}_{t}^{\beta }:=\sigma \left (\beta _{s},\,s\leq t\right)\vee \mathcal {N}_{P},\ t\geq 0.\)

###
**Remark 2.3**

The law of the process *β*, conditional on *τ*=*r*, is the same as that of a standard Brownian bridge between 0 and 0 of length *r* (see (Bedini et al. 2016), Lemma 2.4 and Corollary 2.2). In particular, if 0<*t*<*r*, the law of *β*_{ t }, conditional on *τ*=*r*, is Gaussian with expectation zero and variance \(\frac {t\left (r-t\right)}{r}\):

\(\mathbf {P}\left (\beta _{t}\in \mathrm {d}x\,|\,\tau =r\right)=\mathcal {N}\left (0,\frac {t(r-t)}{r}\right)(\mathrm {d}x),\)

where \(\mathcal {N}(\mu,\sigma ^{2})\) denotes the Gaussian law with mean *μ* and variance *σ*^{2}.

By *p*(*t*,·,*y*), we denote the density of a Gaussian random variable with mean \(y\in \mathbb {R}\) and variance *t*>0:

\(p(t,x,y):=\frac {1}{\sqrt {2\pi t}}\exp \left (-\frac {(x-y)^{2}}{2t}\right),\quad x\in \mathbb {R}.\)

For 0<*t*<*r*, we denote by *φ*_{ t }(*r*,·) the density of a standard Brownian bridge of length *r* at time *t* (*t*>0):

\(\varphi _{t}(r,x):=p\left (\frac {t(r-t)}{r},x,0\right),\quad x\in \mathbb {R}.\)

We notice that for 0<*t*<*r* the conditional density of *β*_{ t }, conditional on *τ*=*r*, is just equal to the density *φ*_{ t }(*r*,·) of a standard Brownian bridge \(\beta ^{r}_{t}\) of length *r* at time *t*.
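
As a quick Monte Carlo sanity check of this conditional law (a sketch: the values of *t*, *r*, the sample size, and the seed are arbitrary), one can sample \((W_{t},W_{r})\) of a Brownian motion and form the bridge value \(\beta _{t}^{r}=W_{t}-\frac {t}{r}W_{r}\) for *t*≤*r*:

```python
import numpy as np

rng = np.random.default_rng(1)
t, r, n = 0.6, 1.5, 200_000

# Sample (W_t, W_r) jointly: W_r = W_t + independent increment
Z1 = rng.normal(size=n)
Z2 = rng.normal(size=n)
W_t = np.sqrt(t) * Z1
W_r = W_t + np.sqrt(r - t) * Z2
bridge_t = W_t - (t / r) * W_r          # β_t^r = W_t − (t/r) W_r for t ≤ r

print(bridge_t.var(), t * (r - t) / r)  # both ≈ 0.36
```

The empirical variance matches \(\frac {t(r-t)}{r}=0.36\) up to Monte Carlo error.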

We proceed with the property that the default time *τ* is nonanticipating with respect to the filtration \(\mathbb {F}^{\beta }\) and the Markov property of the information process *β*.

###
**Lemma 2.4**

For all *t*>0, {*β*_{ t }=0}={*τ*≤*t*}, **P**-a.s. In particular, *τ* is an \(\mathbb {F}^{\beta }\)-stopping time.

###
*Proof*

See (Bedini et al. 2016), Proposition 3.1 and Corollary 3.1. □

###
**Theorem 2.5**

The process *β* is a Markov process with respect to the filtration \(\mathbb {F}^{\beta }\): For all 0≤*t*<*u* and measurable real functions *g* such that *g*(*β*_{ u }) is integrable,

\(\mathbf {E}\left [g\left (\beta _{u}\right)|\mathcal {F}_{t}^{\beta }\right ]=\mathbf {E}\left [g\left (\beta _{u}\right)|\beta _{t}\right ],\quad \mathbf {P}\text {-a.s.}\)

###
*Proof*

See Theorem 6.1 in (Bedini et al. 2016). □

The function *ϕ*_{ t } defined by

\(\phi _{t}(r,x):=\frac {\varphi _{t}(r,x)f(r)}{\int _{t}^{+\infty }\varphi _{t}(v,x)f(v)\,\mathrm {d}v},\)

\(\left (r,t\right)\in \left (0,+\infty \right)\times \mathbb {R}_{+}\), \(x\in \mathbb {R}\), is, for *t*<*r*, the a posteriori density function of *τ* on {*τ*>*t*}, conditional on *β*_{ t }=*x*.
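
Numerically, the Bayes computation behind such an a posteriori density can be sketched as follows. This is an illustration only: it assumes that the posterior is the normalized product of the bridge density \(\varphi _{t}(r,x)=p\left (\frac {t(r-t)}{r},x,0\right)\) and a prior density *f*, here taken exponential for concreteness; the observation point and grid are arbitrary.

```python
import numpy as np

def p(t, x, y=0.0):
    """Gaussian density with mean y and variance t."""
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def phi(t, r, x):
    """Density of a standard Brownian bridge of length r at time t < r."""
    return p(t * (r - t) / r, x)

t, x = 0.5, 0.3                              # observation: beta_t = x (illustrative)
f = lambda r: np.exp(-r)                     # assumed prior density of tau
r = np.linspace(t + 1e-6, 30.0, 200_000)     # numeric grid for r > t
dr = r[1] - r[0]
w = phi(t, r, x) * f(r)                      # unnormalized posterior on {tau > t}
posterior = w / (w.sum() * dr)
```

By construction `posterior` integrates to 1 over (*t*,∞); its shape shows how an observed value of *β*_{ t } away from 0 pushes posterior mass away from an imminent default.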

###
**Theorem 2.6**

Let *t*>0 and let \(g:\mathbb {R}_{+}\rightarrow \mathbb {R}\) be a Borel function such that **E**[|*g*(*τ*)|]<+*∞*. Then, on {*t*<*τ*},

\(\mathbf {E}\left [g(\tau)|\mathcal {F}_{t}^{\beta }\right ]=\int _{t}^{+\infty }g(r)\,\phi _{t}\left (r,\beta _{t}\right)\mathrm {d}r,\)

**P**-a.s.

###
*Proof*

See Theorem 4.1, Corollary 4.1 and Corollary 6.1 in (Bedini et al. 2016). □

Before stating the next result, which is concerned with the semimartingale decomposition of the information process, let us give the following definition:

###
**Definition 2.7**

Let *B* be a continuous process, \(\mathbb {F}\) a filtration, and *T* an \(\mathbb {F}\)-stopping time. Then, *B* is called an \(\mathbb {F}\)-Brownian motion stopped at *T* if *B* is an \(\mathbb {F}\)-martingale with square variation process 〈*B*,*B*〉_{ t }=*t*∧*T*, *t*≥0.

In the following, *u* denotes the function defined by (6).

###
**Theorem 2.8**

The process *b* defined in (Bedini et al. 2016), Theorem 7.1, is an \(\mathbb {F}^{\beta }\)-Brownian motion stopped at *τ*. The information process *β* is therefore an \(\mathbb {F}^{\beta }\)-semimartingale, its finite variation part being expressed through the function *u*.

###
*Proof*

See Theorem 7.1 in (Bedini et al. 2016). □

###
**Remark 2.9**

The square variation of *β* is given by \(\left \langle \beta,\beta \right \rangle _{t}=t\wedge \tau \), *t*≥0.

## The compensator of the default time

We introduce the single-jump process associated with *τ*, which will be denoted by *H*= (*H*_{ t }, *t*≥0):

\(H_{t}:=\mathbb {I}_{\left \{ \tau \leq t\right \} },\quad t\geq 0.\)

The process *H*, called default process, is an \(\mathbb {F}^{\beta }\)-submartingale and its compensator is also known as the compensator of the \(\mathbb {F}^{\beta }\)-stopping time *τ*. Our main goal consists in providing a representation of the compensator of *H*. As we shall see below, this representation involves the local time *L*^{ β }(*t*,0) of the information process *β* (see Appendix A for properties of local times of continuous semimartingales and, in particular, of *β*). From this representation we immediately obtain that the compensator of the default process *H* is continuous. As a result, from the continuity of the compensator of *H* it follows that the default time *τ* is a totally inaccessible \(\mathbb {F}^{\beta }\)-stopping time.

In this section, the following assumption will always be in force.

###
**Assumption 3.1**

*(i)* The distribution function *F* of *τ* admits a continuous density function *f* with respect to the Lebesgue measure *λ*_{+} on \(\mathbb {R}_{+}\).

*(ii)* *F*(*t*)<1 for all *t*≥0.

The following theorem is the main result of this paper:

###
**Theorem 3.2**

Suppose that Assumption 3.1 is satisfied.

*(i)* The process *K*=(*K*_{ t }, *t*≥0) defined by (10) is the compensator of the default process *H*. Here, *L*^{ β }(*t*,*x*) denotes the local time of the information process *β* up to *t* at level *x*.

*(ii)* The default time *τ* is a totally inaccessible stopping time with respect to the filtration \(\mathbb {F}^{\beta }\).

###
*Proof*

First, we verify statement (ii) under the supposition that (i) is true. Obviously, as *L*^{ β }(*s*,0) is continuous in *s* (see Lemma A.4), the process *K* given by (10) is continuous. Consequently, because total inaccessibility of a stopping time is equivalent to the continuity of its compensator (see, e.g., (Kallenberg 2002), Corollary 25.18), we can conclude that the default time *τ* is a totally inaccessible stopping time with respect to \(\mathbb {F}^{\beta }\).

For *h*>0, we define the process \(K^{h}=\left (K_{t}^{h},\,t\geq 0\right)\) by (11).

The proof is divided into two parts. In the first part, we prove that \(K_{t}-K_{t_{0}}\) is the **P**-a.s. limit of \(K_{t}^{h}-K^{h}_{t_{0}}\) as *h*↓0, for every *t*_{0},*t* such that 0<*t*_{0}<*t*. In the second part of the proof, we show that the process *K* is indistinguishable from the compensator of *H*. Auxiliary results used throughout the proof are postponed to Appendix B.

Let *t*_{0},*t* be such that 0<*t*_{0}<*t*. The conditional expectation appearing in \(K^{h}_{t}-K^{h}_{t_{0}}\) is computed by conditioning on *τ*; later, we shall verify that (13) holds. As a first step, we show that, for identifying the limit as *h*↓0, we may replace \(p\left (\frac {su}{s+u},\beta _{s},0\right)\) by *p*(*u*,*β*_{ s },0). To this end, we estimate the absolute value of the difference:

the difference is bounded by a sum of two terms with constants *c*_{1} and *c*_{2}, for 0≤*u*≤*h*≤1 and *s*∈[*t*_{0},*t*]. For the estimate of the first summand, we have used that the function *u*↦*p*(*u*,*x*,0) has its unique maximum at *u*=*x*^{2}, the standard estimate 1−*e*^{−z }≤*z* for all *z*≥0, and that \((s+1)s^{-1}\le 1+t_{0}^{-1}\); for the estimate of the second summand, we have used the inequalities \(p(u,x,0)\le (2\pi u)^{-\frac {1}{2}}\) and \(|\sqrt {\frac {s+u}{s}}-1|\le \frac {u}{2s}\). Putting *x*=*β*_{ s }, integrating from 0 to *h*, and dividing by *h*, for 0≤*u*≤*h*≤1 and *s*∈[*t*_{0},*t*], from (16) we obtain

an estimate whose right-hand side involves *C*(*t*_{0},*t*,*x*), an upper bound of *g*(*s*,*x*) on [*t*_{0},*t*] continuous in *x* (see Lemma B.2), and *c*_{3}, an upper bound for the continuous density function *f* on [*t*_{0},*t*]. The right-hand side is integrable over [*t*_{0},*t*] with respect to the Lebesgue measure *λ*_{+}. On the other side, by the fundamental theorem of calculus, we have that, for every *x*≠0,

\(\lim _{h\downarrow 0}\frac {1}{h}\int _{0}^{h}p(u,x,0)\,\mathrm {d}u=p(0,x,0)=0,\)

since *p*(0,*x*,0):=0 is a continuous extension of the function *u*↦*p*(*u*,*x*,0) if *x*≠0. By Corollary A.3, we have that the set {0≤*s*≤*t*∧*τ*: *β*_{ s }=0} has Lebesgue measure zero. Then, using Lebesgue’s theorem on dominated convergence, we can conclude that the corresponding integral vanishes in the limit *h*↓0, **P**-a.s.

This completes the first step of the proof of the first part, meaning that in (14) we can replace \(p\left (\frac {su}{s+u},\beta _{s},0\right)\) with *p*(*u*,*β*_{ s },0) for identifying the limit.

For every *h*>0, *q*(*h*,·) is a probability density function with respect to the Lebesgue measure on \(\mathbb {R}\). According to Lemma B.1, the probability measures **Q**_{ h } with density *q*(*h*,·) converge weakly to the Dirac measure *δ*_{0} at 0. On the other hand, Lemma B.4 shows that the function \(x\mapsto \int _{t_{0}}^{t}g\left (s,x\right)f(s)\,\mathrm {d} L^{\beta }(s,x)\) is continuous and bounded. Hence, we can pass to the limit in (20).

Finally, note that *f* is uniformly continuous on [*t*_{0},*t*+1]. We fix *ε*>0 and choose 0<*δ*≤1 such that |*f*(*s*+*u*)−*f*(*s*)|≤*ε* for every 0≤*u*<*δ*. Proceeding similarly as above, we obtain a corresponding estimate.

Since *ε*>0 was chosen arbitrarily and the integral above is **P**-a.s. finite, we conclude that (13) holds.

The first part of the proof is complete.

The second part of the proof relies on the so-called Laplacian approach of P.-A. Meyer and, for the sake of easy reference, related results are recalled in Appendix C. Let us denote by *K*^{ w } the compensator of the default process *H* introduced in (9): \(H_{t}:=\mathbb {I}_{\left \{ \tau \leq t\right \}},\ t\geq 0\). We first show that \(K^{h}_{t}\) converges to \(K^{w}_{t}\) as *h*↓0 in the sense of the weak topology *σ*(*L*^{1},*L*^{ ∞ }) (see Definition C.3), for every *t*≥0. We then prove that the process *K* is actually indistinguishable from *K*^{ w }.

If integrable random variables *ξ*_{ n } converge to an integrable random variable *ξ* in the sense of the weak topology *σ*(*L*^{1},*L*^{ ∞ }), we will write \(\xi _{n}\xrightarrow {\sigma \left (L^{1},L^{\infty }\right)}\xi \).

Denote by *G* the right-continuous potential of class (D) (cf. beginning of Appendix C) given by (22): \(G_{t}:=1-H_{t}=\mathbb {I}_{\left \{ t<\tau \right \} },\ t\geq 0\). By Corollary C.5, the potentials generated by the processes *K*^{ h } converge to *G* given by (22) and, for every \(\mathbb {F}^{\beta }\)-stopping time *T*, we have that \(K^{h}_{T}\xrightarrow {\sigma \left (L^{1},L^{\infty }\right)}K^{w}_{T}\) as *h*↓0. It remains to verify that *K*^{ w } is actually the compensator of *H*. Indeed, it is a well-known fact that the process *H* admits a unique decomposition (23) into a martingale *M* and an adapted, natural, increasing, integrable process *A*; the process *A* is then called the compensator of *H*. On the other hand, denoting by *L* a right-continuous modification of the martingale \(\left (\mathbf {E}\left [K^{w}_{\infty }|\mathcal {F}^{\beta }_{t}\right ],\,t\geq 0\right)\), from the definition of the potential generated by an increasing process (see Definition C.1) applied to the process *G* and (24), we obtain the following decomposition of *H*:

\(H_{t}=\left (1-L_{t}\right)+K^{w}_{t},\quad t\geq 0.\)

However, by the uniqueness of the decomposition (23), we can identify the martingale *M* with 1−*L*, and we have that *A*=*K*^{ w }, up to indistinguishability. Since the submartingale *H* and the martingale 1−*L* appearing in the above proof are right-continuous, the process *K*^{ w } is also right-continuous.

By applying Lemma C.8, we see that \(K_{t}-K_{t_{0}}\) is a modification of \(K^{w}_{t}-K^{w}_{t_{0}}\), for all *t*_{0},*t* such that 0<*t*_{0}<*t*. Passing to the limit as *t*_{0}↓0, we get \(K_{t}=K_{t}^{w}\) **P**-a.s. for all *t*≥0. Since both processes have right-continuous sample paths, they are indistinguishable.

The theorem is proved. □

###
**Remark 3.3**

We close this part of the present paper with the following observations.

*(1)* Note that \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\) does not admit an intensity with respect to the filtration \(\mathbb {F}^{\beta }\) (hence, it is not possible to apply, for example, Aven’s Lemma for computing the compensator (see, e.g., (Aven 1985))).

*(2)* Assumption 3.1(ii) on the distribution function *F*, namely that *F*(*t*)<1 for all *t*≥0, ensures that the denominator of the integrand on the right-hand side of (10) is always strictly positive. However, it can be removed. Indeed, if the density function *f* of *τ* is continuous (as required by Assumption 3.1(i)), then exactly as above we can show that relation (10) is satisfied for all *t*≤*t*_{1}:= sup{*t*>0: *F*(*t*)<1}. On the other hand, it is obvious that *τ*≤*t*_{1} **P**-a.s. (hence, the right-hand side of (10) is constant for *t*∈[*t*_{1},*∞*)) and also that the compensator *K*=(*K*_{ t }, *t*≥0) of \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\) is constant on [*t*_{1},*∞*). Altogether, it follows that relation (10) is satisfied for all *t*≥0.

## Appendix A

### On the local time of the information process

In this section, we introduce and study the local time process associated with the information process.

For a continuous semimartingale *X*=(*X*_{ t }, *t*≥0) and any real number *x*, it is possible to define the (right) local time *L*^{ X }(*t*,*x*) associated with *X* at level *x* up to time *t* using Tanaka’s formula (see, e.g., (Revuz and Yor 1999), Theorem VI.(1.2)) as follows:

\(\left |X_{t}-x\right |=\left |X_{0}-x\right |+\int _{0}^{t}\text {sign}\left (X_{s}-x\right)\mathrm {d}X_{s}+L^{X}\left (t,x\right),\quad t\geq 0,\)

where sign(*x*):=1 if *x*>0 and sign(*x*):=−1 if *x*≤0. The process *L*^{ X }(·,*x*)=(*L*^{ X }(*t*,*x*), *t*≥0) appearing in relation (25) is called the (right) *local time of X at level x*.

Now, we recall the occupation time formula for local times of continuous semimartingales which is given in a form convenient for our applications. By 〈*X*,*X*〉, we denote the square variation process of a continuous semimartingale *X*.

###
**Lemma A.1**

Let *X*=(*X*_{ t }, *t*≥0) be a continuous semimartingale. There is a **P**-negligible set outside of which

\(\int _{0}^{t}h\left (s,X_{s}\right)\mathrm {d}\left \langle X,X\right \rangle _{s}=\int _{\mathbb {R}}\left (\int _{0}^{t}h\left (s,x\right)\mathrm {d}L^{X}\left (s,x\right)\right)\mathrm {d}x\)

for every *t*≥0 and every non-negative Borel function *h* on \(\mathbb {R}_{+}\times \mathbb {R}\).

###
*Proof*

See Corollary VI.(1.6) from the book by Revuz and Yor (1999) for the case when *h* is a non-negative Borel function defined on \(\mathbb {R}\) (i.e., it does not depend on time). The statement of the lemma is then proved by first considering the case in which *h* has the form \( h\left (t,x\right)=\mathbb {I}_{\left [u,v\right ]}(t)\gamma (x) \) for 0≤*u*<*v*<*∞* and a non-negative Borel function *γ* on \(\mathbb {R}\), and then using monotone class arguments (see Revuz and Yor (1999), Exercise VI.(1.15) or Rogers and Williams (2000), Theorem IV.(45.4)). □
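
The occupation time formula also suggests a simple numerical approximation of local time, namely the ε-occupation estimate \(L^{X}(t,x)\approx \frac {1}{2\varepsilon }\,\lambda \{s\leq t: |X_{s}-x|\leq \varepsilon \}\). The following sketch (all parameters illustrative) checks it for a Brownian motion at level 0 against the identity \(\mathbf {E}\left [L^{W}(1,0)\right ]=\mathbf {E}|W_{1}|=\sqrt {2/\pi }\), which follows from Tanaka’s formula:

```python
import numpy as np

rng = np.random.default_rng(2)
t_end, n, eps, paths = 1.0, 4000, 0.02, 1500
dt = t_end / n

# Brownian paths on a grid, then the epsilon-occupation estimate of L^W(1, 0)
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)
occupation = (np.abs(W) <= eps).sum(axis=1) * dt   # time spent in [-eps, eps]
L0 = occupation / (2.0 * eps)

print(L0.mean(), np.sqrt(2.0 / np.pi))  # both ≈ 0.80
```

The agreement is up to Monte Carlo and discretization error; shrinking ε together with the grid step sharpens the estimate.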

Concerning continuity properties of local times, there is the following result.

###
**Lemma A.2**

Let *X*=(*X*_{ t }, *t*≥0) be a continuous semimartingale with canonical decomposition *X*=*M*+*A*, where *M* is a local martingale and *A* a finite variation process. Then, there exists a modification of the local time process \(\left (L^{X}\left (t,x\right),t\geq 0,\,x\in \mathbb {R}\right)\) of *X* such that the map (*t*,*x*)↦*L*^{ X }(*t*,*x*) is continuous in *t* and càdlàg in *x*, **P**-a.s. Moreover,

\(L^{X}\left (t,x\right)-L^{X}\left (t,x-\right)=2\int _{0}^{t}\mathbb {I}_{\left \{ X_{s}=x\right \} }\mathrm {d}A_{s}\)

for all \(t\geq 0,\,x\in \mathbb {R}\), **P**-a.s.

###
*Proof*

See, e.g., (Revuz and Yor 1999), Theorem VI.(1.7). □

The information process *β* is a continuous semimartingale (cf. Theorem 2.8), hence the local time *L*^{ β }(*t*,*x*) of *β* at level \(x\in \mathbb {R}\) up to time *t*≥0 is well defined. The occupation time formula takes the following form.

###
**Corollary A.3**

*t*≥0 and all non-negative Borel functions

*h*on \(\mathbb {R}_{+}\times \mathbb {R}\),

**P**-a.s.

###
*Proof*

The first equality follows from relation (8) and the second is an application of Lemma A.1. □

An important property of the local time *L*^{ β } is the existence of a bicontinuous version.

###
**Lemma A.4**

There is a version of *L*^{ β } such that the map \((t,x)\in \mathbb {R}_{+}\times \mathbb {R}\mapsto L^{\beta }\left (t,x\right)\) is continuous, **P**-a.s.

###
*Proof*

Consider the modification of the local time *L*^{ β } according to Lemma A.2. Using (26), the jump *L*^{ β }(*t*,*x*)−*L*^{ β }(*t*,*x*−) can be written as an integral of \(\mathbb {I}_{\left \{ \beta _{s}=x\right \} }\) with respect to the finite variation part of *β*, which is absolutely continuous with a density expressed through the function *u* defined by (6). Applying Corollary A.3 to the right-hand side of this equality, we see that *L*^{ β }(*t*,*x*)−*L*^{ β }(*t*,*x*−)=0 for all \(t\geq 0,\,x\in \mathbb {R}\), **P**-a.s., because {*x*} has Lebesgue measure zero. This completes the proof. □

We also make use of the boundedness of the local time with respect to the space variable.

###
**Lemma A.5**

The function *x*↦*L*^{ β }(*t*,*x*) is bounded for all \(t\in \mathbb {R}_{+}\), **P**-a.s. (the bound may depend on *t* and *ω*).

###
*Proof*

The function *L*^{ β }(*t*,·) vanishes outside of the compact interval [−*M*_{ t }(*ω*),*M*_{ t }(*ω*)], where

\(M_{t}:=\sup _{0\leq s\leq t}\left |\beta _{s}\right |,\)

which together with the continuity of *L*^{ β }(*t*,·) (see Lemma A.4) yields the boundedness of this function, **P**-a.s. □

For every \(x\in \mathbb {R}\), *L*^{ β }(·,*x*) is a positive continuous increasing function, and we can associate with it a random measure d*L*^{ β }(·,*x*) on \(\mathbb {R}_{+}\).

###
**Lemma A.6**

Let \(x_{n}\rightarrow x\) in \(\mathbb {R}\). Then, **P**-a.s., the measures *L*^{ β }(·,*x*_{ n }) converge weakly to the measure *L*^{ β }(·,*x*), i.e., \(\int _{0}^{\infty }\psi (s)\,\mathrm {d}L^{\beta }\left (s,x_{n}\right)\xrightarrow [n\rightarrow \infty ]{}\int _{0}^{\infty }\psi (s)\,\mathrm {d}L^{\beta }\left (s,x\right)\) for every bounded continuous function *ψ* on \(\mathbb {R}_{+}\).

###
*Proof*

We work outside the **P**-negligible set off which *L*^{ β } fails to be bicontinuous (cf. Lemma A.4). The measures \(\left (L^{\beta }\left (\cdot,x_{n}\right)\right)_{n\in \mathbb {N}}\) are finite and they are supported by [0,*τ*]. By the continuity of *L*^{ β }(*t*,·), we have that \(L^{\beta }\left (s,x_{n}\right)\xrightarrow [n\rightarrow \infty ]{}L^{\beta }\left (s,x\right),\,s\geq 0\), i.e., the associated distribution functions converge pointwise, from which it follows that the measures *L*^{ β }(·,*x*_{ n }) converge weakly to *L*^{ β }(·,*x*). □

## Appendix B

### Auxiliary results

We define the function *q* by

\(q(h,x):=\frac {1}{h}\int _{0}^{h}p(u,x,0)\,\mathrm {d}u,\quad h>0,\;x\in \mathbb {R},\)

where *p*(*t*,·,*y*) is the density of the normal distribution with variance *t* and expectation *y* (see (2)).

###
**Lemma B.1**

The functions *q*(*h*,·), *h*>0, are probability density functions with respect to the Lebesgue measure on \(\mathbb {R}\). The probability measures \(\mathbb {Q}_{h}\) on \(\mathbb {R}\) associated with the densities *q*(*h*,·) converge weakly as *h*↓0 to the Dirac measure *δ*_{0} at 0.

###
*Proof*

Let *f* be a bounded continuous function on \(\mathbb {R}\). Using Fubini’s theorem, we obtain

\(\int _{\mathbb {R}}f(x)\,q(h,x)\,\mathrm {d}x=\frac {1}{h}\int _{0}^{h}\left (\int _{\mathbb {R}}f(x)\,\mathcal {N}(0,u)(\mathrm {d}x)\right)\mathrm {d}u.\)

Since, for *u*∈[0,1], the centered Gaussian law \(\mathcal {N}(0,u)\) depends continuously on *u* with respect to weak convergence of probability measures (note that \(\mathcal {N}(0,0)=\delta _{0}\)), the function \(u\in [0,1]\mapsto \int _{\mathbb {R}} f(x)\,\mathcal {N}(0,u)(\mathrm {d} x)\) is continuous. An application of the fundamental theorem of calculus yields that the right-hand side converges to \(\int _{\mathbb {R}} f(x)\,\delta _{0}(\mathrm {d} x)=f(0)\) as *h*↓0, and hence the assertion follows. □
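
For a concrete sense of Lemma B.1, the convergence \(\int f\,\mathrm {d}\mathbb {Q}_{h}\rightarrow f(0)\) can be observed numerically. This is a sketch assuming \(q(h,x)=\frac {1}{h}\int _{0}^{h}p(u,x,0)\,\mathrm {d}u\) as above; the grid sizes and the test function are arbitrary.

```python
import numpy as np

def p(u, x):
    """Gaussian density with mean 0 and variance u."""
    return np.exp(-x ** 2 / (2.0 * u)) / np.sqrt(2.0 * np.pi * u)

def integral_against_q(f, h, nx=8_000, nu=400, x_max=5.0):
    """Approximate ∫ f(x) q(h, x) dx with q(h, x) = (1/h)∫_0^h p(u, x) du."""
    x = np.linspace(-x_max, x_max, nx)
    dx = x[1] - x[0]
    u = (np.arange(nu) + 0.5) * (h / nu)          # midpoint rule in u
    q = p(u[:, None], x[None, :]).mean(axis=0)    # averaging over u gives q(h, x)
    return (f(x) * q).sum() * dx

f = np.cos                                        # a bounded continuous test function
values = [integral_against_q(f, h) for h in (1.0, 0.1, 0.01)]
print(values)  # increases towards f(0) = 1 as h decreases
```

For *f*=cos, the exact value is \(\frac {2}{h}\left (1-e^{-h/2}\right)\), which the numerics reproduce.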

The next lemma is concerned with the function *g* introduced in (15).

###
**Lemma B.2**

*(1)* For all \(x\in \mathbb {R}\) and 0<*t*_{0}<*t*, the function \(g\left (\cdot,x\right): [t_{0},t]\rightarrow \mathbb {R}\) is bounded, i.e., there exists a real constant *C*(*t*_{0},*t*,*x*), which can be chosen continuous in *x*, such that \(g(s,x)\leq C\left (t_{0},t,x\right)\) for all *s*∈[*t*_{0},*t*].

*(2)* For all \(x\in \mathbb {R}\) and 0<*t*_{0}<*t*, the function \(g(\cdot,x): [t_{0},t]\rightarrow \mathbb {R}\) is continuous, i.e., for all *s*_{ n },*s*∈[*t*_{0},*t*] such that *s*_{ n }→*s*, \(g\left (s_{n},x\right)\xrightarrow [n\rightarrow \infty ]{}g\left (s,x\right)\).

*(3)* For all sequences *x*_{ n } converging monotonically to *x* and 0<*t*_{0}<*t*, the functions *g*(·,*x*_{ n }) converge uniformly on [*t*_{0},*t*] to *g*(·,*x*).

###
*Proof*

For all *s*∈[*t*_{0},*t*] and \(x\in \mathbb {R}\), the quantity *g*(*s*,*x*) can be estimated from above by a constant *C*(*t*_{0},*t*,*x*) which is continuous in *x*, proving the first statement of the lemma.

It remains to verify that the function *s*↦*D*(*s*,*x*), *s*∈[*t*_{0},*t*], is continuous, a fact that can be proved using Lebesgue’s dominated convergence theorem. Indeed, let *s*_{ n },*s*∈[*t*_{0},*t*] be such that *s*_{ n }→*s* as *n*→*∞*, and rewrite (29) accordingly. We split the integral into two parts. For the part from *t* to *∞*: For *v*≥*t*, we can bound the integrand from above by \(\sqrt {\frac {v}{2\pi t_{0}\,(v-t)}}\, f(v)\), which is integrable over [*t*,+*∞*). For the second part of the integral, from *t*_{0} to *t*, we estimate the integrand by \(\mathbb {I}_{(s_{n},+\infty)}(v)\sqrt {\frac {t}{2\pi t_{0}\,(v-s_{n})}}\,c\), where *c* is an upper bound of *f* on [*t*_{0},*t*]; by integrating, we observe that these bounds converge in *L*^{1}([*t*_{0},*t*]) and are hence uniformly integrable (cf. Theorem C.7). This means that the sequence of integrands is uniformly integrable on [*t*_{0},*t*], and we can apply Lebesgue’s theorem (cf. Theorem C.7) to conclude that \(D\left (s_{n},x\right)\xrightarrow [n\rightarrow \infty ]{}D\left (s,x\right)\).

By assumption, the sequence *x*_{ n } converges monotonically to *x*. In such a case, it is easy to see that the sequence of functions *D*(·,*x*_{ n }) is monotone. Furthermore, using Lebesgue’s dominated convergence theorem, we verify that *D*(*s*,*x*_{ n }) converges to *D*(*s*,*x*) for all *s*∈[*t*_{0},*t*]. Since the function *s*↦*D*(*s*,*x*) is also continuous on [*t*_{0},*t*], according to Dini’s theorem, *D*(·,*x*_{ n }) converges uniformly to *D*(·,*x*) on [*t*_{0},*t*]. This implies the third statement of the lemma and the proof is finished. □

###
**Lemma B.3**

Let *h*, *h*_{ n } be bounded and continuous functions on a metric space *E*, and *μ*, *μ*_{ n } be finite measures on \(\left (E,\mathcal {B}(E)\right)\). Suppose that the following two conditions are satisfied:

- 1. The sequence of functions *h*_{ n } converges uniformly to *h*.
- 2. The sequence of measures *μ*_{ n } converges weakly to *μ*.

Then, \({\lim }_{n\uparrow +\infty }\int _{E}h_{n}\,\mathrm {d}\mu _{n}=\int _{E}h \,\mathrm {d}\mu \).

###
*Proof*

We write

\(\left |\int _{E}h_{n}\,\mathrm {d}\mu _{n}-\int _{E}h\,\mathrm {d}\mu \right |\leq \int _{E}\left |h_{n}-h\right |\mathrm {d}\mu _{n}+\left |\int _{E}h\,\mathrm {d}\mu _{n}-\int _{E}h\,\mathrm {d}\mu \right |.\)

The first term on the right-hand side tends to 0 by the uniform convergence of *h*_{ n } (and the boundedness of the total masses *μ*_{ n }(*E*), which follows from the weak convergence), the second term by the weak convergence of *μ*_{ n }. Letting *n*↑+*∞* completes the proof. □
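
A toy numerical illustration of this lemma (all choices arbitrary): on *E*=[0,1], take *μ*_{ n } the empirical measure of *n* midpoints, which converges weakly to the Lebesgue measure *μ*, and *h*_{ n }=*h*+1/*n*, which converges uniformly to *h*.

```python
import numpy as np

h = np.cos                          # limit function h on E = [0, 1]

vals = []
for n in (10, 100, 10_000):
    x = (np.arange(n) + 0.5) / n    # atoms of mu_n, each carrying mass 1/n
    h_n = h(x) + 1.0 / n            # uniform perturbation of size 1/n
    vals.append(h_n.mean())         # ∫ h_n d(mu_n)
print(vals)                         # → ∫_0^1 cos(x) dx = sin(1) ≈ 0.8415
```

The successive values approach \(\int _{0}^{1}\cos (x)\,\mathrm {d}x=\sin (1)\), as the lemma predicts.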

###
**Lemma B.4**

Let 0<*t*_{0}<*t*. The function \(k:\,\mathbb {R}\rightarrow \mathbb {R}_{+}\) given by

\(k(x):=\int _{t_{0}}^{t}g\left (s,x\right)f(s)\,\mathrm {d}L^{\beta }\left (s,x\right),\quad x\in \mathbb {R},\)

is bounded and continuous, where *g* is given by (15).

###
*Proof*

We first consider a compact subset *E* of \(\mathbb {R}\) and prove the right- and left-continuity, hence the continuity, of the function *k* on *E*. Let *x*_{ n } be a sequence from *E* converging monotonically to *x*∈*E*. From Lemma B.2, we know that the bounded and continuous functions \(g\left (\cdot,x_{n}\right):\,[t_{0},t]\rightarrow \mathbb {R}\) converge uniformly to the bounded and continuous function \(g\left (\cdot,x\right):\,[t_{0},t]\rightarrow \mathbb {R}\) as *n*→*∞*. From Lemma A.6, we obtain that the sequence of measures *L*^{ β }(·,*x*_{ n }) converges weakly to *L*^{ β }(·,*x*) as *n*→*∞*. Applying Lemma B.3, we have that \(k\left (x_{n}\right)\xrightarrow [n\rightarrow \infty ]{}k(x)\).

Consequently, the function *k* is continuous on *E*. The boundedness of *k* now follows from the compactness of *E*. In order to show that the statement also holds for \(\mathbb {R}\), let us choose *E*=[−*M*_{ t }−1,*M*_{ t }+1] (see (27) for notation). As *L*^{ β }(*s*,*x*)=0 for *s*∈[0,*t*] and *x*∉[−*M*_{ t },*M*_{ t }] (see the proof of Lemma A.5), the statement follows. □

## Appendix C

### The Meyer approach to the compensator

Below, we briefly recall the approach developed by Meyer (1966) for computing the compensator of a right-continuous potential of class (D). In this section, \(\mathbb {F}=\left (\mathcal {F}_{t}\right)_{t\geq 0}\) denotes a filtration satisfying the usual hypothesis of right-continuity and completeness.

Let *X*=(*X*_{ t }, *t*≥0) be a right-continuous \(\mathbb {F}\)-supermartingale and let \(\mathcal {T}\) be the collection of all finite \(\mathbb {F}\)-stopping times. The process *X* is said to *belong to the class* (D) if the collection of random variables \(X_{T},\,T\in \mathcal {T}\), is uniformly integrable. We say that the right-continuous supermartingale *X* is a *potential* if the random variables *X*_{ t } are non-negative and if \(\lim _{t\rightarrow +\infty }\mathbf {E}\left [X_{t}\right ]=0\).

###
**Definition C.1**

Let *C*=(*C*_{ t }, *t*≥0) be an integrable \(\mathbb {F}\)-adapted right-continuous increasing process, and let *L*=(*L*_{ t }, *t*≥0) be a right-continuous modification of the martingale \(\left (\mathbf {E}\left [C_{\infty }|\mathcal {F}_{t}\right ],\,t\geq 0\right)\). The process *Y*=(*Y*_{ t }, *t*≥0) given by

\(Y_{t}:=L_{t}-C_{t},\quad t\geq 0,\)

is called the *potential generated by C*.

The following result establishes a connection between potentials generated by an increasing process and potentials of class (D). Let *h* be a strictly positive real number and *X*=(*X*_{ t }, *t*≥0) be a potential of class (D), and denote by (*p*_{ h }*X*_{ t }, *t*≥0) the right-continuous modification of the supermartingale \(\left (\mathbf {E}\left [X_{t+h}|\mathcal {F}_{t}\right ],\,t\geq 0\right)\).

###
**Theorem C.2**

Let *X*=(*X*_{ t }, *t*≥0) be a potential of class (D), let *h*>0, and let \(A^{h}=\left (A_{t}^{h},\,t\geq 0\right)\) be the process defined by

\(A_{t}^{h}:=\frac {1}{h}\int _{0}^{t}\left (X_{s}-p_{h}X_{s}\right)\mathrm {d}s,\quad t\geq 0.\)

Then, *A*^{ h } is an integrable increasing process which generates a potential of class (D), \(X^{h}=\left (X_{t}^{h},\,t\geq 0\right)\), dominated by *X*, i.e., the process *X*−*X*^{ h } is a potential.

###
*Proof*

See, e.g., (Meyer 1966), VII.T28. □
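
To see this approximation at work in the simplest possible case, consider the deterministic potential \(X_{t}=e^{-t}\), for which \(p_{h}X_{t}=X_{t+h}\) and the generating increasing process is \(A_{t}=1-e^{-t}\). A numeric sketch (step sizes illustrative; it assumes the classical Laplacian approximation \(A^{h}_{t}=\frac {1}{h}\int _{0}^{t}\left (X_{s}-p_{h}X_{s}\right)\mathrm {d}s\)):

```python
import numpy as np

def A_h(t, h, n=100_000):
    """Laplacian approximation A^h_t = (1/h)∫_0^t (X_s - X_{s+h}) ds for X_s = e^{-s}."""
    s = (np.arange(n) + 0.5) * (t / n)     # midpoint rule on [0, t]
    return (np.exp(-s) - np.exp(-(s + h))).sum() * (t / n) / h

t = 2.0
exact = 1.0 - np.exp(-t)                   # the generating increasing process at t
approx = [A_h(t, h) for h in (1.0, 0.1, 0.001)]
print(approx, exact)                       # A^h_t increases to 1 - e^{-2} ≈ 0.8647
```

Here \(A^{h}_{t}=\frac {1-e^{-h}}{h}\left (1-e^{-t}\right)\), so the convergence \(A^{h}_{t}\uparrow A_{t}\) as *h*↓0 is explicit.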

An increasing process *A*=(*A*_{ t }, *t*≥0) is called *natural* (with respect to the filtration \(\mathbb {F}\)) if, for every bounded right-continuous \(\mathbb {F}\)-martingale *M*=(*M*_{ t }, *t*≥0), we have

\(\mathbf {E}\left [\int _{(0,t]}M_{s}\,\mathrm {d}A_{s}\right ]=\mathbf {E}\left [\int _{(0,t]}M_{s-}\,\mathrm {d}A_{s}\right ],\quad t>0.\)

It is well known that an increasing process *A* is natural with respect to \(\mathbb {F}\) if and only if it is \(\mathbb {F}\)-predictable.

For the following definition of convergence in the sense of the weak topology *σ*(*L*^{1},*L*^{ ∞ }), see (Meyer 1966), II.10.

###
**Definition C.3**

We say that integrable random variables *ξ*_{ n } *converge to an integrable random variable ξ in the weak topology σ*(*L*^{1},*L*^{ ∞ }) if \(\lim _{n\rightarrow \infty }\mathbf {E}\left [\xi _{n}\eta \right ]=\mathbf {E}\left [\xi \eta \right ]\) for every bounded random variable *η*.

###
**Theorem C.4**

Let *X*=(*X*_{ t }, *t*≥0) be a right-continuous potential of class (D). Then, there exists an integrable natural increasing process *A*=(*A*_{ t }, *t*≥0) which generates *X*, and this process is unique. For every stopping time *T*, we have

\(A_{T}^{h}\xrightarrow [h\downarrow 0]{\sigma \left (L^{1},L^{\infty }\right)}A_{T}.\)

###
*Proof*

See, e.g., (Meyer 1966), VII.T29. □

In the framework of the information-based approach, the process *H*=(*H*_{ t }, *t*≥0), given by (9), is a bounded increasing process which is \(\mathbb {F}^{\beta }\)-adapted. It is a submartingale, and it can be immediately seen that the process *G*=(*G*_{ t }, *t*≥0), given by (22), is a right-continuous potential of class (D). By Theorem C.2, the processes *K*^{ h }, *h*>0, defined by (11), generate a family of potentials *G*^{ h } dominated by *G*.

###
**Corollary C.5**

*For every h*>0, *the potential G*^{ h } *is dominated by G, defined by (22) and, for every* \(\mathbb {F}^{\beta }\)-*stopping time T, we have that*

$$G_{T}^{h}=\mathbf{E}\left[K_{\infty}^{h}-K_{T}^{h}\,|\,\mathcal{F}_{T}^{\beta}\right], $$

*where K*^{ h } *is the process defined by (11).*

###
*Proof*

See Theorem C.4. □

###
**Theorem C.6**

*Let* \(\mathcal {A}\) *be a subset of L*^{1}(**P**)*. The following two properties are equivalent:*

- 1.
\(\mathcal {A}\) is uniformly integrable;

- 2.
\(\mathcal {A}\) is relatively compact in the weak topology

*σ*(*L*^{1},*L*^{ ∞ }).

###
*Proof*

See (Meyer 1966), II.T23. □

###
**Theorem C.7**

*Let* \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) *be a sequence of integrable random variables converging* **P**-*a.s. to a random variable ξ. Then, ξ*_{ n } *converges to ξ in L*^{1}(**P**) *if and only if* \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) *is uniformly integrable. If the random variables ξ*_{ n } *are non-negative, then* \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) *is uniformly integrable if and only if*

$$\lim_{n\rightarrow+\infty}\mathbf{E}\left[\xi_{n}\right]=\mathbf{E}\left[\xi\right]<+\infty. $$

###
*Proof*

See (Meyer 1966), II.T21. □
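As a numerical illustration of Theorem C.7 (our sketch, not from the paper), the classical sequence \(\xi_{n}=n\,\mathbb{I}_{\{U<1/n\}}\), with *U* uniform on (0,1), converges to 0 almost surely while \(\mathbf{E}\left[\xi_{n}\right]=1\) for every *n*; by the theorem, the family is therefore not uniformly integrable and there is no *L*^{1} convergence:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=200_000)  # samples of U ~ Uniform(0, 1)

for n in (10, 100, 1000):
    # xi_n = n * 1_{U < 1/n}: converges to 0 a.s., since for each omega
    # with U(omega) > 0 we have xi_n(omega) = 0 for all n > 1/U(omega).
    xi_n = n * (U < 1 / n)
    # Monte Carlo estimate of E[xi_n]; the exact value is n * (1/n) = 1
    # for every n, so E[xi_n] does not tend to E[0] = 0: by Theorem C.7
    # the sequence is not uniformly integrable and does not converge to 0
    # in L^1, even though it converges almost surely.
    print(n, round(xi_n.mean(), 2))
```

The example shows why the uniform-integrability hypothesis cannot be dropped from the first part of the theorem.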

###
**Lemma C.8**

*Let* \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) *be a sequence of integrable random variables and let ξ*, *η*∈*L*^{1}(**P**) *be such that:*

- 1.
\(\xi _{n}\xrightarrow [\:n\rightarrow +\infty ]{\sigma \left (L^{1},L^{\infty }\right)}\eta ;\)

- 2.
*ξ*_{ n }→*ξ*,**P**-a.s.

Then, *η*=*ξ*, **P**-a.s.

###
*Proof*

From condition (1), we see that \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is relatively compact in the weak topology *σ*(*L*^{1},*L*^{ ∞ }). By Theorem C.6, it follows that the family \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable. We also know that *ξ*_{ n }→*ξ*, **P**-a.s. Hence, by Theorem C.7, we see that *ξ*_{ n }→*ξ* in the *L*^{1}-norm and, consequently, \(\xi _{n}\xrightarrow [\:n\rightarrow +\infty ]{\sigma \left (L^{1},L^{\infty }\right)}\xi \). The statement of the lemma then follows by the uniqueness of the limit. □

## Declarations

### Funding

This work has been financially supported by the European Community's FP7 Programme under contract PITN-GA-2008-213841, Marie Curie ITN «Controlled Systems».

### Authors’ contributions

The three authors worked together on the manuscript and approved its final version.

### Competing interests

The authors declare that they have no competing interests.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- Aven, T: A theorem for determining the compensator of a counting process. Scand. J. Stat. 12, 62–72 (1985).
- Bedini, ML: Information on a Default Time: Brownian Bridges on Stochastic Intervals and Enlargement of Filtrations. Dissertation, Friedrich-Schiller-Universität (2012).
- Bedini, ML, Buckdahn, R, Engelbert, HJ: Brownian bridges on random intervals. Teor. Veroyatnost. i Primenen. 61(1), 129–157 (2016).
- Bedini, ML, Hinz, M: Credit default prediction and parabolic potential theory. Statistics & Probability Letters 124, 121–125 (2017).
- Giesecke, K: Default and information. J. Econ. Dyn. Control 30, 2281–2303 (2006).
- Jarrow, R, Protter, P: Structural versus Reduced Form Models: A New Information Based Perspective. J. Invest. Manag. 2(2), 1–10 (2004).
- Jeanblanc, M, Le Cam, Y: Reduced form modelling for credit risk. SSRN (2008). http://ssrn.com/abstract=1021545. Accessed 14 November 2016.
- Jeanblanc, M, Le Cam, Y: Progressive enlargement of filtrations with initial times. Stoch. Proc. Appl. 119(8), 2523–2543 (2009).
- Jeanblanc, M, Le Cam, Y: Immersion property and Credit Risk Modelling. In: Delbaen, F, Miklós, R, Stricker, C (eds.) Optimality and Risk - Modern Trends in Mathematical Finance, pp. 99–132. Springer, Berlin Heidelberg (2010).
- Jeanblanc, M, Yor, M, Chesney, M: Mathematical Methods for Financial Markets. Springer-Verlag, London (2009).
- Kallenberg, O: Foundations of Modern Probability. Second edition. Springer-Verlag, New York (2002).
- Karatzas, I, Shreve, S: Brownian Motion and Stochastic Calculus. Second edition. Springer-Verlag, Berlin (1991).
- Meyer, PA: Probability and Potentials. Blaisdell Publishing Company, London (1966).
- Revuz, D, Yor, M: Continuous Martingales and Brownian Motion. Third edition. Springer-Verlag, Berlin (1999).
- Rogers, LCG, Williams, D: Diffusions, Markov Processes and Martingales. Vol. 2: Itô Calculus. Second edition. Cambridge University Press, Cambridge (2000).