 Open Access
On the compensator of the default process in an information-based model
Probability, Uncertainty and Quantitative Risk volume 2, Article number: 10 (2017)
Abstract
This paper provides sufficient conditions for the time of bankruptcy (of a company or a state) to be a totally inaccessible stopping time, and provides the explicit computation of its compensator, in a framework where the flow of market information on the default is modelled explicitly by a Brownian bridge between 0 and 0 on a random time interval.
Introduction
One of the most important objects in a mathematical model for credit risk is the time τ (called default time) at which a certain company (or state) goes bankrupt. Modelling the flow of market information concerning a default time is crucial, and in this paper we consider a process β=(β _{ t }, t≥0) whose natural filtration \(\mathbb {F}^{\beta }\) describes the flow of information available to market agents about the time at which the default occurs. For this reason, the process β will be called the information process. In the present paper, we define β to be a Brownian bridge between 0 and 0 of random length τ:

$$\beta_{t}:=W_{t\wedge\tau}-\frac{t\wedge\tau}{\tau}\,W_{\tau},\qquad t\geq 0,$$
where W=(W _{ t }, t≥0) is a Brownian motion independent of τ.
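The definition above is easy to simulate. The sketch below is our own illustration (the value of τ, the grid, and the seed are arbitrary choices, and we assume the representation β_t = W_{t∧τ} − ((t∧τ)/τ)·W_τ of the bridge of random length); it draws one path of the information process and confirms that the path starts at 0 and is pinned at 0 from τ onwards.

```python
import math
import random

def information_process_path(tau, t_max, n, rng):
    """Sample beta_t = W_{t ^ tau} - ((t ^ tau)/tau) * W_tau on a grid.

    Assumes the Brownian-bridge-of-random-length representation; tau is added
    to the grid so that W_tau is available without interpolation.
    """
    times = sorted({i * t_max / n for i in range(n + 1)} | {tau})
    # simulate the Brownian motion W by summing independent Gaussian increments
    w, W = 0.0, {0.0: 0.0}
    for s, t in zip(times, times[1:]):
        w += math.sqrt(t - s) * rng.gauss(0.0, 1.0)
        W[t] = w
    beta = [W[min(t, tau)] - (min(t, tau) / tau) * W[tau] for t in times]
    return times, beta

rng = random.Random(42)
tau = 0.7                       # one fixed default time, for illustration only
times, beta = information_process_path(tau, t_max=2.0, n=1000, rng=rng)
```

Before τ the path behaves like a Brownian bridge being pulled back toward 0; from τ on it stays at 0, which is the mechanism behind Lemma 2.4 below.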
In this paper, the focus is on the classification of the default time with respect to the filtration \(\mathbb {F}^{\beta }\), and our main result is the following: If the distribution of the default time τ admits a continuous density f with respect to the Lebesgue measure, then τ is a totally inaccessible stopping time and its compensator K=(K _{ t }, t≥0) is given by

$$K_{t}=\int_{0}^{t}\frac{f(s)}{\int_{s}^{+\infty}\sqrt{\frac{v}{2\pi\,s\,(v-s)}}\,f(v)\,\mathrm{d}v}\,\mathrm{d}L^{\beta}(s,0),\qquad t\geq 0,$$
where L ^{β}(t,0) is the local time of the information process β at level 0 up to time t.
Knowing whether the default time is a predictable, accessible, or totally inaccessible stopping time is very important in a mathematical credit risk model. A predictable default time is typical of structural credit risk models, while totally inaccessible default times are one of the most important features of reduced-form credit risk models. In the first framework, market agents know when the default is about to occur, while in the latter the default occurs by surprise. The fact that financial markets cannot foresee the time of default of a company makes the reduced-form models well accepted by practitioners. In this sense, totally inaccessible default times seem to be the best candidates for modelling times of bankruptcy. We refer, among others, to the papers of Jarrow and Protter (2004) and of Giesecke (2006) on the relations between financial information and the properties of the default time, and also to the series of papers of Jeanblanc and Le Cam (2008, 2009, 2010). It is remarkable that in our setting the default time is a totally inaccessible stopping time under the common assumption that it admits a continuous density with respect to the Lebesgue measure. Both the hypothesis that the default time admits a continuous density and its consequence that the default occurs by surprise are standard in mathematical credit risk models, but the information-based approach has the additional feature of an explicit model for the flow of information, which is more sophisticated than the standard approach. There, the available information on the default is modelled through \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\), the single-jump process occurring at τ, meaning that market agents only know whether or not the default has occurred. Financial reality can be more complex, and there are actually periods in which default is more likely to happen than in others.
In the information-based approach, periods of fear of an imminent default correspond to situations where the information process is close to 0, while periods when investors are relatively sure that the default is not going to occur immediately correspond to situations where β _{ t } is far from 0.
The paper is organized as follows. In the section “The information process and its basic properties”, we recall the definition and the main properties of the information process. In the section “The compensator of the default time”, we state and prove Theorem 3.2, which is the main result of the paper. In Appendix A, we provide the properties of the local time associated with the information process. In Appendix B, we give the proofs of some auxiliary lemmas. Finally, in Appendix C, for the sake of easy reference, we recall the so-called Laplacian approach developed by Meyer (see his book (Meyer 1966)) for computing the compensator of a right-continuous potential of class (D). It is an important ingredient of the approach adopted in this note to determine the compensator of the \(\mathbb {F}^{\beta }\)-submartingale \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\).
The idea of modelling the information about the default time with a Brownian bridge defined on a stochastic interval was introduced in the thesis (Bedini 2012). The definition of the information process β, the study of its basic properties, and an application to the problem of pricing a Credit Default Swap (one of the most traded derivatives in the credit market) have also recently appeared in the paper (Bedini et al. 2016).
Nontrivial sufficient conditions making the default time a predictable stopping time will be considered in another paper (Bedini and Hinz 2017). Other topics related to Brownian bridges on stochastic intervals (which will not be considered in this paper) concern the problem of studying the progressive enlargement of a reference filtration \(\mathbb {F}\) by the filtration \(\mathbb {F}^{\beta }\) generated by the information process, and further applications to Mathematical Finance.
The information process and its basic properties
We start by recalling the definition and the basic properties of a Brownian bridge between 0 and 0 of random length. The material in this section gives a résumé of some of the results obtained in the paper (Bedini et al. 2016), to which we shall refer for the proofs and more details on the basic properties of such a process.
If \(A\subseteq \mathbb {R}\) (where \(\mathbb {R}\) denotes the set of real numbers), then the set A _{+} is defined as \(A_{+}:=A\cap \{x\in \mathbb {R}:x\geq 0\}\). If E is a topological space, then \(\mathcal {B}(E)\) denotes the Borel σ-algebra over E. The indicator function of a set A will be denoted by \(\mathbb {I}_{A}\). A function \(f:\mathbb {R}\rightarrow \mathbb {R}\) will be said to be càdlàg if it is right-continuous with limits from the left.
Let \(\left (\Omega,\mathcal {F},\mathbf {P}\right)\) be a complete probability space. We denote by \(\mathcal {N}_{P}\) the collection of P-null sets of \(\mathcal {F}\). If \(\mathcal {L}\) is the law of the random variable ξ we shall write \(\xi \sim \mathcal {L}\). Unless otherwise specified, all filtrations considered in the following are supposed to satisfy the usual conditions of right-continuity and completeness.
Let τ:Ω→(0,+∞) be a strictly positive random time, whose distribution function is denoted by F: \(F(t):=\mathbf {P}\left (\tau \leq t\right),\;t\in \mathbb {R}_{+}\). The time τ models the random time at which some default occurs and, hereinafter, it will be called default time.
Let W=(W _{ t }, t≥0) be a Brownian motion defined on \(\left (\Omega,\mathcal {F},\mathbf {P}\right)\) and starting from 0. We shall always make use of the following assumption:
Assumption 2.1
The random time τ and the Brownian motion W are independent.
Given W and a strictly positive real number r, a standard Brownian bridge \(\beta ^{r}=\left (\beta _{t}^{r},\,t\geq 0\right)\) between 0 and 0 of length r is defined by

$$\beta_{t}^{r}:=W_{t\wedge r}-\frac{t\wedge r}{r}\,W_{r},\qquad t\geq 0.$$
For further references on Brownian bridges, see, e.g., Section 5.6.B of the book (Karatzas and Shreve 1991) by Karatzas and Shreve.
Now, we are going to introduce the definition of the Brownian bridge of random length (see (Bedini et al. 2016), Definition 3.1).
Definition 2.2
The process β=(β _{ t }, t≥0) given by

$$\beta_{t}:=W_{t\wedge\tau}-\frac{t\wedge\tau}{\tau}\,W_{\tau},\qquad t\geq 0,$$
will be called Brownian bridge of random length τ. We will often say that β=(β _{ t }, t≥0) is the information process (for the random time τ based on W).
The natural filtration of β will be denoted by \(\mathbb {F}^{\beta }=(\mathcal {F}_{t}^{\beta })_{t\geq 0}\):

$$\mathcal{F}_{t}^{\beta}:=\sigma\left(\beta_{s},\,0\leq s\leq t\right)\vee\mathcal{N}_{P},\qquad t\geq 0.$$
Note that according to (Bedini et al. 2016), Corollary 6.1, the filtration \(\mathbb {F}^{\beta }\) (denoted therein by \(\mathbb {F}^{P}\)) satisfies the usual conditions of right-continuity and completeness.
Remark 2.3
The law of β, conditional on τ=r, is the same as that of a standard Brownian bridge between 0 and 0 of length r (see (Bedini et al. 2016), Lemma 2.4 and Corollary 2.2). In particular, if 0<t<r, the law of β _{ t }, conditional on τ=r, is Gaussian with expectation zero and variance \(\frac {t\left (r-t\right)}{r}\):

$$\mathbf{P}\left(\beta_{t}\in\mathrm{d}x\mid\tau=r\right)=\mathcal{N}\left(0,\frac{t\left(r-t\right)}{r}\right)\left(\mathrm{d}x\right),$$
where \(\mathcal {N}\left (\mu,\sigma ^{2}\right)\) denotes the Gaussian law of mean μ and variance σ ^{2}.
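The conditional variance \(\frac{t(r-t)}{r}\) is easy to check by Monte Carlo. The sketch below is our own illustration (the values of r and t, the sample size, and the seed are arbitrary choices); it samples \(\beta^{r}_{t}=W_{t}-\frac{t}{r}W_{r}\) directly from two Gaussian increments of W.

```python
import math
import random

def bridge_sample(r, t, rng):
    """One sample of a standard Brownian bridge of length r at time t < r,
    built from W_t and W_r via beta^r_t = W_t - (t / r) * W_r."""
    w_t = math.sqrt(t) * rng.gauss(0.0, 1.0)
    w_r = w_t + math.sqrt(r - t) * rng.gauss(0.0, 1.0)
    return w_t - (t / r) * w_r

rng = random.Random(0)
r, t, n = 2.0, 0.5, 200_000
xs = [bridge_sample(r, t, rng) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n   # should be close to t (r - t) / r
```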
By p(t,·,y), we denote the density of a Gaussian random variable with mean \(y\in \mathbb {R}\) and variance t>0:

$$p\left(t,x,y\right):=\frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{\left(x-y\right)^{2}}{2t}\right),\qquad x\in\mathbb{R}.$$
For later use, we also introduce the functions φ _{ t } (t>0):

$$\varphi_{t}\left(r,x\right):=p\left(\frac{t\left(r-t\right)}{r},x,0\right),\qquad r>t,\;x\in\mathbb{R}.$$
We notice that for 0<t<r the density of β _{ t }, conditional on τ=r, is just equal to the density φ _{ t }(r,·) of a standard Brownian bridge \(\beta ^{r}\) of length r at time t.
We proceed with the property that the default time τ is nonanticipating with respect to the filtration \(\mathbb {F}^{\beta }\) and the Markov property of the information process β.
Lemma 2.4
For all t>0, {β _{ t }=0}={τ≤t}, P-a.s. In particular, τ is an \(\mathbb {F}^{\beta }\)-stopping time.
Proof
See (Bedini et al. 2016), Proposition 3.1 and Corollary 3.1. □
Theorem 2.5
The information process β is a Markov process with respect to the filtration \(\mathbb {F}^{\beta }\): For all 0≤t<u and measurable real functions g such that g(β _{ u }) is integrable,

$$\mathbf{E}\left[g\left(\beta_{u}\right)\mid\mathcal{F}_{t}^{\beta}\right]=\mathbf{E}\left[g\left(\beta_{u}\right)\mid\beta_{t}\right],\qquad\mathbf{P}\text{-a.s.}$$
Proof
See Theorem 6.1 in (Bedini et al. 2016). □
As the following theorem combined with Theorem 2.5 shows, the function ϕ _{ t } defined by

$$\phi_{t}\left(r,x\right):=\frac{\varphi_{t}\left(r,x\right)}{\int_{\left(t,+\infty\right)}\varphi_{t}\left(v,x\right)F\left(\mathrm{d}v\right)},$$

\(\left (r,t\right)\in \left (0,+\infty \right)\times \mathbb {R}_{+}\), \(x\in \mathbb {R}\), is, for t<r, the a posteriori density function of τ (with respect to the distribution of τ) on {τ>t}, conditional on β _{ t }=x.
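A Bayes-type reading of the a posteriori law can be illustrated numerically. The sketch below is our own construction (an Exp(1) prior for τ, a truncated midpoint rule, and the bridge density φ from Remark 2.3 are all our illustrative choices): the posterior mean of τ given β _{ t }=x increases with |x|, matching the interpretation that an information process far from 0 signals a longer interval.

```python
import math

def p(t, x):
    """Centered Gaussian density with variance t (set to 0 when t <= 0)."""
    if t <= 0.0:
        return 0.0
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def phi(t, r, x):
    """Density of a standard Brownian bridge of length r at time t < r."""
    return p(t * (r - t) / r, x)

def posterior_mean(t, x, f, upper=60.0, m=20_000):
    """Mean of the a-posteriori law of tau on {tau > t} given beta_t = x,
    using the Bayes weighting r -> phi_t(r, x) f(r) and a midpoint rule
    truncated at `upper` (prior f, truncation and grid are our choices)."""
    dr = (upper - t) / m
    rs = [t + (j + 0.5) * dr for j in range(m)]
    w = [phi(t, r, x) * f(r) for r in rs]
    z = sum(w) * dr
    return sum(r * wi for r, wi in zip(rs, w)) * dr / z

f = lambda r: math.exp(-r)            # Exp(1) prior for tau, for illustration
t = 1.0
m_near = posterior_mean(t, 0.1, f)    # information close to 0
m_far = posterior_mean(t, 1.5, f)     # information far from 0
```

The comparison `m_far > m_near` is the quantitative version of the remark in the Introduction that default is feared when β is near 0 and considered unlikely when β is far from 0.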
Theorem 2.6
Let t>0 and let \(g:\mathbb {R}_{+}\rightarrow \mathbb {R}\) be a Borel function such that E[|g(τ)|]<+∞. Then, P-a.s.,

$$\mathbf{E}\left[g(\tau)\mid\mathcal{F}_{t}^{\beta}\right]=g(\tau)\,\mathbb{I}_{\left\{\tau\leq t\right\}}+\mathbb{I}_{\left\{\tau>t\right\}}\int_{\left(t,+\infty\right)}g\left(r\right)\phi_{t}\left(r,\beta_{t}\right)F\left(\mathrm{d}r\right).$$
Proof
See Theorem 4.1, Corollary 4.1 and Corollary 6.1 in (Bedini et al. 2016). □
Before stating the next result, which is concerned with the semimartingale decomposition of the information process, let us give the following definition:
Definition 2.7
Let B be a continuous process, \(\mathbb {F}\) a filtration, and T an \(\mathbb {F}\)-stopping time. Then, B is called an \(\mathbb {F}\)-Brownian motion stopped at T if B is an \(\mathbb {F}\)-martingale with quadratic variation process 〈B,B〉_{ t }=t∧T, t≥0.
Now, we introduce the real-valued function u defined by
Theorem 2.8
The process b defined by
is an \(\mathbb {F}^{\beta }\)-Brownian motion stopped at τ. The information process β is therefore an \(\mathbb {F}^{\beta }\)-semimartingale with decomposition
Proof
See Theorem 7.1 in (Bedini et al. 2016). □
Remark 2.9
The quadratic variation of the information process β is given by

$$\left\langle\beta,\beta\right\rangle_{t}=t\wedge\tau,\qquad t\geq 0.$$
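Assuming the quadratic variation is t∧τ (as used in Corollary A.3 below), this can be checked on a discretised path: the realized quadratic variation of β up to any time beyond τ should be close to τ. The sketch below is our own illustration (τ, grid, and seed are arbitrary choices, with the grid chosen so that τ lies on it).

```python
import math
import random

rng = random.Random(7)
tau, t_end, n = 1.0, 1.5, 30_000
dt = t_end / n
# simulate W on the grid and build beta_t = W_{t ^ tau} - ((t ^ tau)/tau) W_tau
w = [0.0]
for _ in range(n):
    w.append(w[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))
k_tau = round(tau / dt)            # index of tau on the grid
beta = [w[min(i, k_tau)] - (min(i, k_tau) / k_tau) * w[k_tau]
        for i in range(n + 1)]
# realized quadratic variation over [0, t_end]; it stops growing at tau
realized_qv = sum((b - a) ** 2 for a, b in zip(beta, beta[1:]))
```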
The compensator of the default time
In this section, we explicitly compute the compensator of the single-jump process with the jump occurring at τ, which will be denoted by H=(H _{ t }, t≥0):

$$H_{t}:=\mathbb{I}_{\left\{\tau\leq t\right\}},\qquad t\geq 0.$$
The process H, called the default process, is an \(\mathbb {F}^{\beta }\)-submartingale, and its compensator is also known as the compensator of the \(\mathbb {F}^{\beta }\)-stopping time τ. Our main goal consists in providing a representation of the compensator of H. As we shall see below, this representation involves the local time L ^{β}(t,0) of the information process β (see Appendix A for properties of local times of continuous semimartingales and, in particular, of β). From this representation we immediately obtain that the compensator of the default process H is continuous, and from the continuity of the compensator it follows that the default time τ is a totally inaccessible \(\mathbb {F}^{\beta }\)-stopping time.
In this section, the following assumption will always be in force.
Assumption 3.1
(i) The distribution function F of τ admits a continuous density function f with respect to the Lebesgue measure λ _{+} on \(\mathbb {R}_{+}\).
(ii) F(t)<1 for all t≥0.
The following theorem is the main result of this paper:
Theorem 3.2
Suppose that Assumption 3.1 is satisfied.
(i) The process K=(K _{ t }, t≥0) defined by

$$K_{t}:=\int_{0}^{t}\frac{f(s)}{\int_{s}^{+\infty}\varphi_{s}\left(v,0\right)f(v)\,\mathrm{d}v}\,\mathrm{d}L^{\beta}(s,0),\qquad t\geq 0,$$

is the compensator of the default process H. Here, L ^{β}(t,x) denotes the local time of the information process β up to time t at level x.
(ii) The default time τ is a totally inaccessible stopping time with respect to the filtration \(\mathbb {F}^{\beta }\).
Proof
First, we verify statement (ii) under the supposition that (i) is true. Obviously, as L ^{β}(s,0) is continuous in s (see Lemma A.4), the process K given by (10) is continuous. Consequently, because of the well-known equivalence between the total inaccessibility of a stopping time and the continuity of its compensator (see, e.g., (Kallenberg 2002), Corollary 25.18), we can conclude that the default time τ is a totally inaccessible stopping time with respect to \(\mathbb {F}^{\beta }\).
Now, we prove statement (i) of the theorem. For every h>0, we define the process \(K^{h}=\left (K_{t}^{h},\,t\geq 0\right)\) by

$$K_{t}^{h}:=\frac{1}{h}\int_{0}^{t}\mathbf{P}\left(s<\tau\leq s+h\mid\mathcal{F}_{s}^{\beta}\right)\mathrm{d}s,\qquad t\geq 0.$$
The proof is divided into two parts. In the first part, we prove that \(K_{t}-K_{t_{0}}\) is the P-a.s. limit of \(K_{t}^{h}-K^{h}_{t_{0}}\) as h ↓0, for all t _{0},t such that 0<t _{0}<t. In the second part of the proof, we show that the process K is indistinguishable from the compensator of H. Auxiliary results used throughout the proof are postponed to Appendix B.
For the first part of the proof, we fix t _{0},t such that 0<t _{0}<t and notice that
where the last equality is a consequence of Theorem 2.6 and Definition (4) of the a posteriori density function of τ. Later, we shall verify that
So, we have to deal with the limit behaviour as h ↓0 of
where we have introduced the function \(g:\left (0,+\infty \right)\times \mathbb {R}\rightarrow \mathbb {R}_{+}\) by
In (14), we want to replace \(p\left (\frac {su}{s+u},\beta _{s},0\right)\) with p(u,β _{ s },0). To this end, we estimate the absolute value of the difference:
with some constants c _{1} and c _{2}, for 0≤u≤h≤1 and s∈[t _{0},t]. For the estimate of the first summand, we have used that the function u↦p(u,x,0) has its unique maximum at u=x ^{2}, the standard estimate 1−e ^{−z}≤z for all z≥0, and that \((s+1)s^{-1}\le 1+t_{0}^{-1}\); for the estimate of the second summand, we have used the inequalities \(p(u,x,0)\le (2\pi u)^{-\frac {1}{2}}\) and \(\sqrt {\frac {s+u}{s}}-1\le \frac {u}{2s}\). Putting x=β _{ s }, integrating from 0 to h, and dividing by h, we obtain from (16)
where C(t _{0},t,x) is an upper bound of g(s,x) on [t _{0},t], continuous in x (see Lemma B.2), and c _{3} is an upper bound for the continuous density function f on [t _{0},t]. The right-hand side is integrable over [t _{0},t] with respect to the Lebesgue measure λ _{+}. On the other hand, by the fundamental theorem of calculus, we have that, for every x≠0,

$$\lim_{h\downarrow 0}\frac{1}{h}\int_{0}^{h}p\left(u,x,0\right)\mathrm{d}u=0.$$

Here, we use that setting p(0,x,0):=0 provides a continuous extension of the function u↦p(u,x,0) when x≠0. By Corollary A.3, we have that the set {0≤s≤t∧τ: β _{ s }=0} has Lebesgue measure zero. Then, using Lebesgue’s theorem on dominated convergence, we can conclude that P-a.s.
This completes the first step of the proof of the first part, meaning that in (14) we can replace \(p\left (\frac {su}{s+u},\beta _{s},0\right)\) with p(u,β _{ s },0) for identifying the limit.
The second step of the first part is to prove that
Setting
an application of the occupation time formula (see Corollary A.3) yields
For every h>0, q(h,·) is a probability density function with respect to the Lebesgue measure on \(\mathbb {R}\). According to Lemma B.1, the probability measures Q _{ h } with density q(h,·) converge weakly to the Dirac measure δ _{0} at 0. On the other hand, Lemma B.4 shows that the function \(x\mapsto \int _{t_{0}}^{t}g\left (s,x\right)f(s)\,\mathrm {d} L^{\beta }(s,x)\) is continuous and bounded. Hence, in (20) we can pass to the limit and obtain the following
In the third step of the proof of the first part, we must show that (13) holds. Note that the function f is uniformly continuous on [t _{0},t+1]. We fix ε>0 and choose 0<δ≤1 such that |f(s+u)−f(s)|≤ε for every 0≤u<δ. Proceeding similarly as above, we obtain the following
Since ε>0 is chosen arbitrarily and the integral above is P-a.s. finite, we conclude that (13) holds.
The first part of the proof is complete.
The second part of the proof relies on the so-called Laplacian approach of P.A. Meyer; for the sake of easy reference, related results are recalled in Appendix C. Let us denote by K ^{w} the compensator of the default process H introduced in (9): \(H_{t}:=\mathbb {I}_{\left \{ \tau \leq t\right \}},\ t\geq 0\). We first show that \(K^{h}_{t}\) converges to \(K^{w}_{t}\) as h ↓0 in the sense of the weak topology σ(L ^{1},L ^{∞}) (see Definition C.3), for every t≥0. We then prove that the process K is actually indistinguishable from K ^{w}.
For the sake of simplicity of the notation, if a sequence of integrable random variables \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) converges to an integrable random variable ξ in the sense of the weak topology σ(L ^{1},L ^{∞}), we will write
Furthermore, we will denote by G the right-continuous potential of class (D) (cf. beginning of Appendix C) given by
By Corollary C.5, we know that there exists a unique integrable predictable increasing process \(K^{w}=\left (K_{t}^{w},\,t\geq 0\right)\) which generates, in the sense of Definition C.1, the potential G given by (22), and, for every \(\mathbb {F}^{\beta }\)-stopping time T, we have that
The process K ^{w} is actually the compensator of H. Indeed, it is a well-known fact that the process H admits a unique decomposition

$$H_{t}=M_{t}+A_{t},\qquad t\geq 0,$$

into the sum of a right-continuous martingale M and an adapted, natural, increasing, integrable process A. The process A is then called the compensator of H. On the other hand, from the definition of the potential generated by an increasing process (see Definition C.1), the process
is a martingale. By combining the definition (22) of the process G and (24), we obtain the following decomposition of H:
However, by the uniqueness of the decomposition (23), we can identify the martingale M with 1−L, and we have that A=K ^{w}, up to indistinguishability. Since the submartingale H and the martingale 1−L are right-continuous, the process K ^{w} is also right-continuous.
By applying Lemma C.8, we see that \(K_{t}-K_{t_{0}}\) is a modification of \(K^{w}_{t}-K^{w}_{t_{0}}\), for all t _{0},t such that 0<t _{0}<t. Passing to the limit as t _{0} ↓0, we get \(K_{t}=K_{t}^{w}\) P-a.s. for all t≥0. Since both processes have right-continuous sample paths, they are indistinguishable.
The theorem is proved. □
Remark 3.3
We close this part of the present paper with the following observations.
(1) Note that \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\) does not admit an intensity with respect to the filtration \(\mathbb {F}^{\beta }\) (hence, it is not possible to apply, for example, Aven’s Lemma for computing the compensator (see, e.g., (Aven 1985))).
(2) Assumption 3.1(ii), i.e., that F(t)<1 for all t≥0, ensures that the denominator of the integrand on the right-hand side of (10) is always strictly positive. However, it can be removed. Indeed, if the density function f of τ is continuous (as required by Assumption 3.1(i)), then exactly as above we can show that relation (10) is satisfied for all t≤t _{1}:= sup{t>0: F(t)<1}. On the other hand, it is obvious that τ≤t _{1} P-a.s. (hence, the right-hand side of (10) is constant on [t _{1},∞)) and also that the compensator K=(K _{ t }, t≥0) of \(\left (\mathbb {I}_{\left \{ \tau \leq t\right \} },\,t\geq 0\right)\) is constant on [t _{1},∞). Altogether, it follows that relation (10) is satisfied for all t≥0.
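Reading the integrand of (10) as f(s)/∫ _{ s }^{+∞} φ _{ s }(v,0) f(v) dv (our reconstruction from the limit obtained in the proof of Theorem 3.2), its strict positivity and finiteness are easy to check numerically. The sketch below is our own illustration (Exp(1) prior for τ, truncated midpoint quadrature, and the evaluation points are all our choices).

```python
import math

def phi(s, v, x=0.0):
    """Bridge density phi_s(v, x): centered Gaussian with variance s (v - s) / v."""
    var = s * (v - s) / v
    if var <= 0.0:
        return 0.0
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def integrand(s, f, upper=60.0, m=20_000):
    """f(s) / int_s^upper phi_s(v, 0) f(v) dv via a midpoint rule; the
    integrable square-root singularity at v = s is avoided by the midpoints."""
    dv = (upper - s) / m
    denom = sum(phi(s, s + (j + 0.5) * dv) * f(s + (j + 0.5) * dv)
                for j in range(m)) * dv
    return f(s) / denom

f = lambda s: math.exp(-s)                    # Exp(1) density for tau
values = [integrand(s, f) for s in (0.25, 0.5, 1.0, 2.0)]
```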
Appendix A
On the local time of the information process
In this section, we introduce and study the local time process associated with the information process.
For any continuous semimartingale X=(X _{ t }, t≥0) and for any real number x, it is possible to define the (right) local time L ^{X}(t,x) associated with X at level x up to time t using Tanaka’s formula (see, e.g., (Revuz and Yor 1999), Theorem VI.(1.2)) as follows:

$$L^{X}\left(t,x\right):=\left|X_{t}-x\right|-\left|X_{0}-x\right|-\int_{0}^{t}\mathrm{sign}\left(X_{s}-x\right)\mathrm{d}X_{s},$$
where sign(x):=1 if x>0 and sign(x):=−1 if x≤0. The process L ^{X}(·,x)=(L ^{X}(t,x), t≥0) appearing in relation (25) is called the (right) local time of X at level x.
Now, we recall the occupation time formula for local times of continuous semimartingales, which is given in a form convenient for our applications. By 〈X,X〉, we denote the quadratic variation process of a continuous semimartingale X.
Lemma A.1
Let X=(X _{ t }, t≥0) be a continuous semimartingale. There is a P-negligible set outside of which

$$\int_{0}^{t}h\left(s,X_{s}\right)\mathrm{d}\left\langle X,X\right\rangle_{s}=\int_{\mathbb{R}}\left(\int_{0}^{t}h\left(s,x\right)\mathrm{d}L^{X}\left(s,x\right)\right)\mathrm{d}x$$
for every t≥0 and every nonnegative Borel function h on \(\mathbb {R}_{+}\times \mathbb {R}\).
Proof
See Corollary VI.(1.6) from the book by Revuz and Yor (1999) for the case when h is a nonnegative Borel function defined on \(\mathbb {R}\) (i.e., it does not depend on time). The statement of the lemma is then proved by first considering the case in which h has the form \( h\left (t,x\right)=\mathbb {I}_{\left [u,v\right ]}(t)\gamma (x) \) for 0≤u<v<∞ and a nonnegative Borel function γ on \(\mathbb {R}\), and then using monotone class arguments (see Revuz and Yor (1999), Exercise VI.(1.15) or Rogers and Williams (2000), Theorem IV.(45.4)). □
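As a rough Monte Carlo cross-check of the two descriptions of local time used here — Tanaka's formula and the occupation-density picture — the sketch below compares both estimators at level 0 on one discretised Brownian path (our own illustration: the grid, the band width ε, and the seed are arbitrary choices, and both estimators are crude, so only coarse agreement is asserted).

```python
import math
import random

rng = random.Random(3)
t_end, n, eps = 1.0, 40_000, 0.02
dt = t_end / n
b = [0.0]
for _ in range(n):
    b.append(b[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))

# Tanaka's formula at level 0: L(t, 0) = |B_t| - int_0^t sign(B_s) dB_s,
# with the convention sign(0) := -1, discretised on the grid
sign = lambda x: 1.0 if x > 0.0 else -1.0
stoch_int = sum(sign(a) * (c - a) for a, c in zip(b, b[1:]))
L_tanaka = abs(b[-1]) - stoch_int

# occupation-density estimate: L(t, 0) ~ (1/(2 eps)) * Leb{s <= t : |B_s| < eps}
L_occ = sum(dt for x in b[:-1] if abs(x) < eps) / (2.0 * eps)
```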
Concerning continuity properties of local times, there is the following result.
Lemma A.2
Let X=(X _{ t }, t≥0) be a continuous semimartingale with canonical decomposition given by X=M+A, where M is a local martingale and A a finite variation process. Then, there exists a modification of the local time process \(\left (L^{X}\left (t,x\right),t\geq 0,\,x\in \mathbb {R}\right)\) of X such that the map (t,x)↦L ^{X}(t,x) is continuous in t and càdlàg in x, P-a.s. Moreover,

$$L^{X}\left(t,x\right)-L^{X}\left(t,x-\right)=2\int_{0}^{t}\mathbb{I}_{\left\{X_{s}=x\right\}}\,\mathrm{d}A_{s}$$

for all \(t\geq 0,\,x\in \mathbb {R}\), P-a.s.
Proof
See, e.g., (Revuz and Yor 1999), Theorem VI.(1.7). □
The information process β is a continuous semimartingale (cf. Theorem 2.8), hence the local time L ^{β}(t,x) of β at level \(x\in \mathbb {R}\) up to time t≥0 is well defined. The occupation time formula takes the following form.
Corollary A.3
We have

$$\int_{0}^{t\wedge\tau}h\left(s,\beta_{s}\right)\mathrm{d}s=\int_{0}^{t}h\left(s,\beta_{s}\right)\mathrm{d}\left\langle\beta,\beta\right\rangle_{s}=\int_{\mathbb{R}}\left(\int_{0}^{t}h\left(s,x\right)\mathrm{d}L^{\beta}\left(s,x\right)\right)\mathrm{d}x$$

for all t≥0 and all nonnegative Borel functions h on \(\mathbb {R}_{+}\times \mathbb {R}\), P-a.s.
Proof
The first equality follows from relation (8) and the second is an application of Lemma A.1. □
An important property of the local time L ^{β} is the existence of a bicontinuous version.
Lemma A.4
There is a version of L ^{β} such that the map \((t,x)\in \mathbb {R}_{+}\times \mathbb {R}\mapsto L^{\beta }\left (t,x\right)\) is continuous, P-a.s.
Proof
We choose a version of the local time L ^{β} according to Lemma A.2. Using (26), we have that
for all \(t\geq 0,\,x\in \mathbb {R}\), P-a.s., where u is the function defined by (6). Applying Corollary A.3 to the right-hand side of the last equality above, we see that
and hence L ^{β}(t,x)−L ^{β}(t,x−)=0, for all \(t\geq 0,\,x\in \mathbb {R}\), P-a.s., because {x} has Lebesgue measure zero. This completes the proof. □
We also make use of the boundedness of the local time with respect to the space variable.
Lemma A.5
The function x↦L ^{β}(t,x) is bounded for all \(t\in \mathbb {R}_{+}\), P-a.s. (the bound may depend on t and ω).
Proof
It follows from the occupation time formula (or from Revuz and Yor (1999), Corollary VI.(1.9)) that the local time L ^{β}(t,·) vanishes outside of the compact interval [−M _{ t }(ω),M _{ t }(ω)], where

$$M_{t}:=\sup_{0\leq s\leq t}\left|\beta_{s}\right|,$$
which together with the continuity of L ^{β}(t,·) (see Lemma A.4) yields the boundedness of this function, P-a.s. □
Outside a negligible set, for fixed \(x\in \mathbb {R}\), the local time L ^{β}(·,x) is a positive continuous increasing function, and we can associate with it a random measure on \(\mathbb {R}_{+}\):
Lemma A.6
Outside a negligible set, for any sequence \(\left (x_{n}\right)_{n\in \mathbb {N}}\) in \(\mathbb {R}\) converging to \(x\in \mathbb {R}\), the sequence \(\left (L^{\beta }\left (\cdot,x_{n}\right)\right)_{n\in \mathbb {N}}\) converges weakly to L ^{β}(·,x), i.e.,

$$\int_{\mathbb{R}_{+}}g\left(s\right)\mathrm{d}L^{\beta}\left(s,x_{n}\right)\xrightarrow[n\rightarrow\infty]{}\int_{\mathbb{R}_{+}}g\left(s\right)\mathrm{d}L^{\beta}\left(s,x\right)$$
for all bounded and continuous functions \(g:\mathbb {R}_{+}\rightarrow \mathbb {R}\).
Proof
We fix a negligible set outside of which L ^{β} is bicontinuous (cf. Lemma A.4) and outside of which we will be working now. The measures \(\left (L^{\beta }\left (\cdot,x_{n}\right)\right)_{n\in \mathbb {N}}\) are finite and supported by [0,τ]. By the continuity of L ^{β}(t,·), we have that \(L^{\beta }\left (s,x_{n}\right)\xrightarrow [n\rightarrow \infty ]{}L^{\beta }\left (s,x\right),\,s\geq 0\), from which it follows that
We also have this convergence for the whole space \(\mathbb {R}_{+}\):
From this, we can conclude that the measures L ^{β}(·,x _{ n }) converge weakly to L ^{β}(·,x). □
Appendix B
Auxiliary results
In (19), we introduced the function q by

$$q\left(h,x\right):=\frac{1}{h}\int_{0}^{h}p\left(u,x,0\right)\mathrm{d}u,\qquad h>0,\;x\in\mathbb{R},$$
where p(t,·,y) is the density of the normal distribution with variance t and expectation y (see (2)).
Lemma B.1
The functions q(h,·) are probability density functions with respect to the Lebesgue measure on \(\mathbb {R}\). The probability measures Q _{ h } on \(\mathbb {R}\) associated with the densities q(h,·) converge weakly, as h ↓0, to the Dirac measure δ _{0} at 0.
Proof
The first statement of the lemma is obvious. For verifying the second statement, let f be a bounded continuous function on \(\mathbb {R}\). Using Fubini’s theorem, we obtain
Since the function \(u\in [0,1]\mapsto \mathcal {N}(0,u)\), which associates with every u∈[0,1] the centered Gaussian law \(\mathcal {N}(0,u)\), is continuous with respect to weak convergence of probability measures (note that \(\mathcal {N}(0,0)=\delta _{0}\)), we observe that the function \(u\in [0,1]\mapsto \int _{\mathbb {R}} f(x)\,\mathcal {N}(0,u)(\mathrm {d} x)\) is continuous. An application of the fundamental theorem of calculus yields that the right-hand side converges to \(\int _{\mathbb {R}} f(x)\,\delta _{0}(\mathrm {d} x)\) as h ↓0 and hence
proving the second statement of the lemma. □
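Assuming q(h,x)=(1/h)∫ _{0}^{h} p(u,x,0) du as in (19), q(h,·) is the law of a centered Gaussian whose variance is uniform on (0,h); this mixture structure makes the weak convergence to δ _{0} easy to see by simulation. The sketch below is our own illustration (sample size, window δ, the values of h, and the seed are arbitrary choices).

```python
import random

def sample_q(h, rng):
    """One draw from q(h, .) = (1/h) int_0^h p(u, ., 0) du: pick a variance
    u uniformly on (0, h), then draw a centered Gaussian with that variance."""
    u = rng.uniform(0.0, h)
    return rng.gauss(0.0, u ** 0.5)

rng = random.Random(1)
n, delta = 100_000, 0.2
# estimated mass of Q_h near the origin, for decreasing values of h
mass = {h: sum(1 for _ in range(n) if abs(sample_q(h, rng)) <= delta) / n
        for h in (0.25, 0.01, 0.0004)}
```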
Now, we consider the function g introduced in (15):
Lemma B.2
(1) For all \(x\in \mathbb {R}\) and 0<t _{0}<t, the function \(g\left (\cdot,x\right): [t_{0},t]\rightarrow \mathbb {R}\) is bounded, i.e., there exists a real constant C(t _{0},t,x) such that
(2) For all \(x\in \mathbb {R}\) and 0<t _{0}<t, the function \(g(\cdot,x): [t_{0},t]\rightarrow \mathbb {R}\) is continuous, i.e., for all s _{ n },s∈[t _{0},t] such that s _{ n }→s,
(3) Let \(\left (x_{n}\right)_{n\in \mathbb {N}}\) be a sequence converging monotonically to \(x\in \mathbb {R}\). Then, for all 0<t _{0}<t,
Proof
Let us define, for every s∈[t _{0},t] and \(x\in \mathbb {R}\),
and rewrite g as
In order to prove statement (1), it suffices to verify that there exists a constant \(\tilde {C}\left (t_{0},t,x\right)\) such that
Such a constant can be found by setting
proving the first statement of the lemma.
In order to prove statement (2) of the lemma, it suffices to verify that the function s↦D(s,x), s∈[t _{0},t], is continuous, a fact that can be proved using Lebesgue’s dominated convergence theorem. Indeed, let s _{ n },s∈[t _{0},t] be such that s _{ n }→s as n→∞. Rewriting (29), we get
First, we consider the integral from t to ∞: for v≥t, we can bound the integrand from above by \(\sqrt {\frac {v}{2\pi t_{0}\,(v-t)}}\, f(v)\), which is integrable over [t,+∞). For the second part of the integral, from t _{0} to t, we bound the integrand by \(\mathbb {I}_{(s_{n},+\infty)}(v)\sqrt {\frac {t}{2\pi t_{0}\,(v-s_{n})}}\,c\), where c is an upper bound of f on [t _{0},t], and by integrating we observe that
As the integrands are nonnegative, we get convergence in L ^{1}([t _{0},t]) and hence uniform integrability (cf. Theorem C.7). This means that the sequence
is uniformly integrable on [t _{0},t] and we can apply Lebesgue’s theorem (cf. Theorem C.7) to conclude
Summarizing, we get
and the proof of statement (2) of the lemma is finished.
We turn to the proof of statement (3) of the lemma. Using relation (30), we see that
and from inequality (31) we get that
where \(\tilde {C}\left (t_{0},t,x\right)\) is defined by (32). It is easy to see that
Hence, it remains to prove that
By assumption, the sequence x _{ n } converges monotonically to x. In such a case, it is easy to see that the sequence of functions D(·,x _{ n }) is monotone. Furthermore, using Lebesgue’s dominated convergence theorem, we verify that D(s,x _{ n }) converges to D(s,x), for all s∈[t _{0},t]. Since the function s↦D(s,x) is also continuous on [t _{0},t], according to Dini’s theorem, D(·,x _{ n }) converges uniformly to D(·,x) on [t _{0},t]. This implies the third statement of the lemma and the proof is finished. □
Lemma B.3
Let h,h _{ n } be bounded and continuous functions on a metric space E, and μ,μ _{ n } be finite measures on \(\left (E,\mathcal {B}(E)\right)\). Suppose that the following two conditions are satisfied:

1. The sequence of functions h _{ n } converges uniformly to h.

2. The sequence of measures μ _{ n } converges weakly to μ.
Then, \({\lim }_{n\uparrow +\infty }\int _{E}h_{n}\,\mathrm {d}\mu _{n}=\int _{E}h \,\mathrm {d}\mu \).
Proof
It can immediately be verified that
which converges to 0 as n ↑+∞. □
Lemma B.4
Let 0<t _{0}<t. The function \(k:\,\mathbb {R}\rightarrow \mathbb {R}_{+}\) given by

$$k\left(x\right):=\int_{t_{0}}^{t}g\left(s,x\right)f\left(s\right)\mathrm{d}L^{\beta}\left(s,x\right),\qquad x\in\mathbb{R},$$
is bounded and continuous, where the function g is given by (15).
Proof
Let us first restrict ourselves to a compact subset E of \(\mathbb {R}\). First, we prove the right- and left-continuity, hence the continuity, of the function k. Let \(\left(x_{n}\right)_{n\in\mathbb{N}}\) be a sequence from E converging monotonically to x∈E. From Lemma B.2, we know that the bounded and continuous functions \(g\left (\cdot,x_{n}\right):\,[t_{0},t]\rightarrow \mathbb {R}\) converge uniformly to the bounded and continuous function \(g\left (\cdot,x\right):\,[t_{0},t]\rightarrow \mathbb {R}\) as n→∞. From Lemma A.6, we obtain that the sequence of measures L ^{β}(·,x _{ n }) converges weakly to L ^{β}(·,x) as n→∞. Applying Lemma B.3, we have that
Consequently, the function k is continuous on E. The boundedness of k now follows from the compactness of E. In order to show that the statement also holds for \(\mathbb {R}\), let us choose E=[−M _{ t }−1,M _{ t }+1] (see (27) for notation). As L ^{β}(s,x)=0, s∈[0,t], x∉[−M _{ t },M _{ t }] (see the proof of Lemma A.5), the statement follows. □
Appendix C
The Meyer approach to the compensator
Below, we briefly recall the approach developed by Meyer (1966) for computing the compensator of a right-continuous potential of class (D). In this section, \(\mathbb {F}=\left (\mathcal {F}_{t}\right)_{t\geq 0}\) denotes a filtration satisfying the usual conditions of right-continuity and completeness.
We begin with the definition of a right-continuous potential of class (D). Let X=(X _{ t }, t≥0) be a right-continuous \(\mathbb {F}\)-supermartingale and let \(\mathcal {T}\) be the collection of all finite \(\mathbb {F}\)-stopping times. The process X is said to belong to class (D) if the collection of random variables \(X_{T},\,T\in \mathcal {T}\), is uniformly integrable. We say that the right-continuous supermartingale X is a potential if the random variables X _{ t } are nonnegative and if

$$\lim_{t\rightarrow+\infty}\mathbf{E}\left[X_{t}\right]=0.$$
Definition C.1
Let C=(C _{ t }, t≥0) be an integrable \(\mathbb {F}\)-adapted right-continuous increasing process, and let L=(L _{ t }, t≥0) be a right-continuous modification of the martingale \(\left (\mathbf {E}\left [C_{\infty }\mid \mathcal {F}_{t}\right ],\,t\geq 0\right)\); the process Y=(Y _{ t }, t≥0) given by

$$Y_{t}:=L_{t}-C_{t},\qquad t\geq 0,$$
is called the potential generated by C.
The following result establishes a connection between potentials generated by an increasing process and potentials of class (D). Let h be a strictly positive real number, let X=(X _{ t }, t≥0) be a potential of class (D), and denote by (p _{ h } X _{ t }, t≥0) the right-continuous modification of the supermartingale \(\left (\mathbf {E}\left [X_{t+h}\mid \mathcal {F}_{t}\right ],\,t\geq 0\right)\).
Theorem C.2
Let X=(X _{ t }, t≥0) be a potential of class (D), let h>0, and let \(A^{h}=\left (A_{t}^{h},\,t\geq 0\right)\) be the process defined by
\(A_{t}^{h}=\frac {1}{h}\int _{0}^{t}\left (X_{s}-p_{h}X_{s}\right)ds,\quad t\geq 0.\)
Then, A ^{h} is an integrable increasing process which generates a potential of class (D), \(X^{h}=\left (X_{t}^{h},\,t\geq 0\right)\), dominated by X, i.e., the process X−X ^{h} is a potential. It holds that
\(X_{t}^{h}=\frac {1}{h}\int _{t}^{t+h}\mathbf {E}\left [X_{s}\mid \mathcal {F}_{t}\right ]ds,\quad t\geq 0.\)
Proof
See, e.g., (Meyer 1966), VII.T28. □
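To see Theorem C.2 at work, here is a toy computation (our illustration; the exponential example, the rate lam, and the function name A_h are our own choices, not from the paper). Take τ exponentially distributed with rate λ and the filtration generated by the single-jump process \(\mathbb {I}_{\left \{ \tau \leq t\right \} }\). Then X _{ s }=1 _{{τ>s}} is a potential of class (D), memorylessness gives p _{ h } X _{ s }=1 _{{τ>s}} e ^{−λ h}, and A ^{h} can be evaluated path by path; as h↓0 it converges to the classical compensator λ(t∧τ).

```python
import math

def A_h(t, tau, lam, h, n_grid=100_000):
    """Riemann sum for A_t^h = (1/h) * int_0^t (X_s - p_h X_s) ds,
    where X_s = 1_{tau > s} and p_h X_s = 1_{tau > s} * exp(-lam*h)."""
    ds = t / n_grid
    factor = (1.0 - math.exp(-lam * h)) / h
    # occupation of {tau > s} on [0, t], i.e. min(t, tau), by midpoint rule
    occupied = sum(ds for i in range(n_grid) if tau > (i + 0.5) * ds)
    return factor * occupied

lam, tau, t = 0.7, 1.3, 2.0
for h in (1.0, 0.1, 0.01):
    print(h, A_h(t, tau, lam, h))  # approaches lam * min(t, tau) = 0.91 as h -> 0
```

Here the limit can be checked by hand: A _{ t } ^{h}=h ^{−1}(1−e ^{−λ h})(t∧τ), and h ^{−1}(1−e ^{−λ h})→λ as h↓0.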
An increasing process A=(A _{ t }, t≥0) is called natural (with respect to the filtration \(\mathbb {F}\)) if, for every bounded right-continuous \(\mathbb {F}\)-martingale M=(M _{ t }, t≥0), we have
\(\mathbf {E}\left [\int _{(0,t]}M_{s}\,dA_{s}\right ]=\mathbf {E}\left [\int _{(0,t]}M_{s-}\,dA_{s}\right ],\quad t\geq 0.\)
It is well known that an increasing process A is natural with respect to \(\mathbb {F}\) if and only if it is \(\mathbb {F}\)predictable.
For the following definition of convergence in the sense of the weak topology σ(L ^{1},L ^{∞}), see (Meyer 1966), II.10.
Definition C.3
Let \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) be a sequence of integrable real-valued random variables. The sequence \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is said to converge to an integrable random variable ξ in the weak topology σ(L ^{1},L ^{∞}) if
\(\lim _{n\rightarrow \infty }\mathbf {E}\left [\xi _{n}\eta \right ]=\mathbf {E}\left [\xi \eta \right ]\)
for every bounded random variable η.
Theorem C.4
Let X=(X _{ t }, t≥0) be a right-continuous potential of class (D). Then, there exists an integrable natural increasing process A=(A _{ t }, t≥0) which generates X, and this process is unique. For every stopping time T we have
\(A_{T}^{h}\xrightarrow [\:h\downarrow 0]{\sigma \left (L^{1},L^{\infty }\right)}A_{T},\)
where A ^{h} is the process defined in Theorem C.2.
Proof
See, e.g., (Meyer 1966), VII.T29. □
In the framework of the information-based approach, the process H=(H _{ t }, t≥0), given by (9), is a bounded increasing process which is \(\mathbb {F}^{\beta }\)-adapted. It is a submartingale, and it can be immediately seen that the process G=(G _{ t }, t≥0), given by (22), is a right-continuous potential of class (D). By Theorem C.2, the processes K ^{h}, h>0, defined by (11), generate a family of potentials G ^{h} dominated by G.
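Since the compensator in the main theorem is expressed through the local time L ^{β}(t,0), it may help to see how this ingredient can be approximated numerically. The sketch below (our illustration; grid sizes, ε, and all names are our choices) estimates the level-0 local time of a Brownian bridge of length 1 by the normalized occupation time (1/(2ε))∫ _{0} ^{1} 1 _{{|b _{ s }|≤ε}} ds. Averaging over paths should give a value near \(\sqrt {\pi /2}\approx 1.2533\), the mean of the Rayleigh-distributed local time of the standard bridge at 0.

```python
import math
import random

# Monte Carlo sketch (our illustration): level-0 local time of a Brownian
# bridge from 0 to 0 on [0, 1], approximated on a grid by the normalized
# occupation time (1 / (2*eps)) * Leb{ s : |b_s| <= eps }.

random.seed(42)

def bridge_local_time_estimate(n_steps=2000, eps=0.1):
    """Simulate one bridge path b_s = W_s - s * W_1 on a grid and return
    the occupation-time estimate of its local time at level 0."""
    dt = 1.0 / n_steps
    sq = math.sqrt(dt)
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + sq * random.gauss(0.0, 1.0))
    w1 = w[-1]
    occ = sum(dt for i in range(n_steps + 1) if abs(w[i] - i * dt * w1) <= eps)
    return occ / (2.0 * eps)

n_paths = 1500
est = sum(bridge_local_time_estimate() for _ in range(n_paths)) / n_paths
print(est)  # should be close to sqrt(pi/2) ~ 1.2533
```

The same estimator applied on [0,τ] for a sampled random length τ would approximate L ^{β}(τ,0) for the information process itself; only the unit-length case is shown here so that the result can be checked against a known constant.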
Corollary C.5
There exists a unique integrable natural increasing process \(K^{w}=\left (K_{t}^{w},\,t\geq 0\right)\) which generates the process G defined by (22) and, for every \(\mathbb {F}^{\beta }\)-stopping time T, we have that
\(K_{T}^{h}\xrightarrow [\:h\downarrow 0]{\sigma \left (L^{1},L^{\infty }\right)}K_{T}^{w},\)
where K ^{h} is the process defined by (11).
Proof
See Theorem C.4. □
Theorem C.6
(Compactness Criterion of Dunford–Pettis) Let \(\mathcal {A}\) be a subset of the space L ^{1}(P). The following two properties are equivalent:

1. \(\mathcal {A}\) is uniformly integrable;

2. \(\mathcal {A}\) is relatively compact in the weak topology σ(L ^{1},L ^{∞}).
Proof
See (Meyer 1966), II.T23. □
Theorem C.7
Let \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) be a sequence of integrable random variables converging in probability to a random variable ξ. Then, ξ _{ n } converges to ξ in L ^{1}(P) if and only if \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable. If the random variables ξ _{ n } are nonnegative, then \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable if and only if
\(\lim _{n\rightarrow \infty }\mathbf {E}\left [\xi _{n}\right ]=\mathbf {E}\left [\xi \right ]<\infty.\)
Proof
See (Meyer 1966), II.T21. □
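A classical counterexample (ours, not from the paper) shows why uniform integrability cannot be dropped in Theorem C.7: on [0,1] with Lebesgue measure, ξ _{ n }=n 1 _{{U≤1/n}} converges to 0 in probability (even almost surely), yet E[ξ _{ n }]=1 for every n, so the family is not uniformly integrable and there is no L ^{1} convergence. A quick numerical check:

```python
import random

# Our toy example on ([0,1], Lebesgue): xi_n = n * 1_{U <= 1/n}.
# xi_n -> 0 in probability, but E[xi_n] = 1 for all n, so (xi_n)
# is not uniformly integrable and does not converge to 0 in L^1.

random.seed(0)
N = 200_000
U = [random.random() for _ in range(N)]

def mc(n):
    """Monte Carlo estimates of (E[xi_n], P(xi_n != 0))."""
    hits = sum(1 for u in U if u <= 1.0 / n)
    return n * hits / N, hits / N

for n in (10, 100, 1000):
    mean, prob = mc(n)
    print(n, mean, prob)  # prob ~ 1/n -> 0, while mean stays near 1
```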
Lemma C.8
Let \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) be a sequence of random variables and ξ,η∈L ^{1}(P) such that:

1. \(\xi _{n}\xrightarrow [\:n\rightarrow +\infty ]{\sigma \left (L^{1},L^{\infty }\right)}\eta ;\)

2. ξ _{ n }→ξ, P-a.s.

Then, η=ξ, P-a.s.
Proof
From condition (1), we see that \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is relatively compact in the weak topology σ(L ^{1},L ^{∞}). By Theorem C.6, it follows that the family \(\left (\xi _{n}\right)_{n\in \mathbb {N}}\) is uniformly integrable. We also know that ξ _{ n }→ξ, P-a.s. Hence, by Theorem C.7, ξ _{ n }→ξ in the L ^{1}-norm and, consequently, \(\xi _{n}\xrightarrow [\:n\rightarrow +\infty ]{\sigma \left (L^{1},L^{\infty }\right)}\xi \). The statement of the lemma then follows from the uniqueness of the limit. □
References
Aven, T: A theorem for determining the compensator of a counting process. Scand. J. Stat. 12, 62–72 (1985).
Bedini, ML: Information on a Default Time: Brownian Bridges on Stochastic Intervals and Enlargement of Filtrations. Dissertation, Friedrich-Schiller-Universität (2012).
Bedini, ML, Buckdahn, R, Engelbert, HJ: Brownian bridges on random intervals. Teor. Veroyatnost. i Primenen. 61(1), 129–157 (2016).
Bedini, ML, Hinz, M: Credit default prediction and parabolic potential theory. Stat. Probab. Lett. 124, 121–125 (2017).
Giesecke, K: Default and information. J. Econ. Dyn. Control. 30, 2281–2303 (2006).
Jarrow, R, Protter, P: Structural versus Reduced Form Models: A New Information Based Perspective. J. Invest. Manag. 2(2), 1–10 (2004). Second Quarter.
Jeanblanc, M, Le Cam, Y: Reduced form modelling for credit risk. SSRN (2008). http://ssrn.com/abstract=1021545. Accessed 14 November 2016.
Jeanblanc, M, Le Cam, Y: Progressive enlargement of filtrations with initial times. Stoch. Proc. Appl. 119(8), 2523–2543 (2009).
Jeanblanc, M, Le Cam, Y: Immersion Property and Credit Risk Modelling. In: Delbaen, F, Miklós, R, Stricker, C (eds.) Optimality and Risk – Modern Trends in Mathematical Finance, pp. 99–132. Springer, Berlin Heidelberg (2010).
Jeanblanc, M, Yor, M, Chesney, M: Mathematical Methods for Financial Markets. Springer-Verlag, London (2009).
Kallenberg, O: Foundations of Modern Probability. Second edition. Springer-Verlag, New York (2002).
Karatzas, I, Shreve, S: Brownian Motion and Stochastic Calculus. Second edition. Springer-Verlag, Berlin (1991).
Meyer, PA: Probability and Potentials. Blaisdell Publishing Company, London (1966).
Revuz, D, Yor, M: Continuous Martingales and Brownian Motion. Third edition. Springer-Verlag, Berlin (1999).
Rogers, LCG, Williams, D: Diffusions, Markov Processes and Martingales. Vol. 2: Itô Calculus. Second edition. Cambridge University Press, Cambridge (2000).
Funding
This work has been financially supported by the European Community's FP7 Program under contract PITN-GA-2008-213841, Marie Curie ITN «Controlled Systems».
Authors’ contributions
The three authors worked together on the manuscript and approved its final version.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Bedini, M., Buckdahn, R. & Engelbert, H. On the compensator of the default process in an information-based model. Probab Uncertain Quant Risk 2, 10 (2017). https://doi.org/10.1186/s41546-017-0017-4
Keywords
 Default time
 Totally inaccessible stopping time
 Brownian bridge on random intervals
 Local time
 Credit risk
 Compensator process