# Pricing formulae for derivatives in insurance using Malliavin calculus

## Abstract

In this paper, we provide a valuation formula, obtained by means of Malliavin calculus, for different classes of actuarial and financial contracts that depend on a general loss process. Similarly to the celebrated Black–Scholes formula, we aim to express the expected cash flow in terms of a building block. The cash flow depends on the loss process, which is a cumulated sum, indexed by a doubly stochastic Poisson process, of claims allowed to depend on the intensity and on the jump times of the counting process. For example, in the context of stop-loss contracts, the building block is given by the distribution function of the terminal cumulated loss, evaluated at the Value at Risk when computing the expected shortfall risk measure.

## Introduction

Risk analysis in the context of insurance or reinsurance is often based on the study of properties of a so-called cumulative loss process $$L:=(L_{t})_{t\in [0,T]}$$ over a period of time [0,T], where T>0 denotes the maturity of a contract. Usually, L takes the form

$$L_{t}:=\sum_{i=1}^{N_{t}} X_{i}, \quad t\in [0,T],$$

where $$N:=(N_{t})_{t\in [0,T]}$$ is a counting process, and the random variables $$(X_{i})_{i\in {\mathbb {N}}^{*}}$$ represent the amounts of the claims. A typical contract in reinsurance is the stop-loss contract, which offers protection against an increase in either (or both) the severity and the frequency of a company’s loss experience. More precisely, a stop-loss contract provides its buyer (another insurance company) with protection against losses larger than a given level K, and its payoff is given by a “call” function. In some cases, there is also an upper limit given by some real number M, which specifies the maximal reimbursement amount. Thus, the payoff of such a contract is given by

$$\Phi(L_{T})= \left\{\begin{array}{ll} 0, \; &\mathrm{ if~} L_{T}< K;\\ L_{T}-K, &\mathrm{ if~} K \leq L_{T} < M;\\ M-K, &\mathrm{ if~} L_{T}\geq M. \end{array}\right.$$
(1)
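In code, the capped call-spread payoff (1) is a one-liner; a minimal Python sketch (the function name and the sample values of K and M below are ours):

```python
def stop_loss_payoff(loss: float, K: float, M: float) -> float:
    """Payoff (1): (loss - K)_+ capped at M - K, i.e. a call spread."""
    return min(max(loss - K, 0.0), M - K)
```

For instance, with K=10 and M=15, a terminal loss of 12 pays 2, while any loss above 15 pays the cap M−K=5.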

Broadly, the risk carried out by the claims is neither hedgeable nor related to a financial market, hence the premium of the stop-loss contract is equal to $${\mathbb {E}}[\Phi (L_{T})]$$ which immediately re-writes as

$${\mathbb{E}}[\Phi(L_{T})] = {\mathbb{E}}\left[L_{T}\mathbf{1}_{\{L_{T} \in [K,M]\}}\right] - K {\mathbb{P}}\left[L_{T} \in [K,M]\right] + (M-K) {\mathbb{P}}\left[L_{T} \geq M\right].$$
(2)

There are a large number of papers describing how to approximate the compound distribution function of the cumulated loss $$L_{T}$$ and to compute the stop-loss premium. The aggregate claims distribution function can in some cases be calculated recursively, using, for example, the Panjer recursion formula; see Panjer (1981) and Gerber (1982). Various approximations of stop-loss reinsurance premiums are described in the literature, some of them assuming a specific dependence structure.
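For integer-valued claim amounts and a Poisson claim count, the Panjer recursion mentioned above can be implemented in a few lines, and the stop-loss premium then follows by summing the payoff against the aggregate probability mass function. The sketch below is illustrative (the function names and the truncation level `s_max` are ours); with degenerate claims equal to 1, the aggregate loss reduces to a Poisson variable, which gives an easy sanity check:

```python
import math

def panjer_poisson(lam, sev_pmf, s_max):
    """Panjer recursion for S = X_1 + ... + X_N with N ~ Poisson(lam) and
    i.i.d. severities with pmf sev_pmf (dict j -> P(X = j), j >= 1).
    Returns the list g with g[s] = P(S = s) for s = 0, ..., s_max."""
    g = [math.exp(-lam)] + [0.0] * s_max
    for s in range(1, s_max + 1):
        g[s] = (lam / s) * sum(j * sev_pmf.get(j, 0.0) * g[s - j]
                               for j in range(1, s + 1))
    return g

def stop_loss_premium(g, K, M):
    """E[Phi(S)] for the capped call-spread payoff (1), S having pmf g."""
    return sum(min(max(s - K, 0.0), M - K) * p for s, p in enumerate(g))
```

With `sev_pmf = {1: 1.0}` and `lam = 2.0`, the recursion reproduces the Poisson(2) pmf, and the uncapped premium with K=0 recovers the mean E[S]=2.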

Similarly to the celebrated Black–Scholes formula, we aim to express the first term on the right-hand side of (2) in terms of a building block which represents the distribution function of the terminal loss $$L_{T}$$. This feature is hidden in the Black–Scholes model, since the terminal value of the stock has an explicit log-normal distribution. More specifically, we aim at computing $${\mathbb {E}}\left [L_{T}\mathbf {1}_{\{L_{T} \in [K,M]\}}\right ]$$ by using the building block $$x\mapsto {\mathbb {P}}\left [L_{T} \in [K-x,M-x]\right ]$$. Note that, on the credit derivatives market, the payoff function (1) can also be related to Collateralized Debt Obligations (CDOs), which involve several tranches, and hence several levels K and M, expressed as proportions of the underlying, namely the loss of a given asset portfolio.

Stop-loss contracts are the paradigm of reinsurance contracts, but we aim at dealing with more general payoffs whose valuation involves the computation of the quantity

$${\mathbb{E}}\left[{\hat{L}}_{T} h\left(L_{T}\right)\right],$$
(3)

where $$h:{\mathbb {R}}_{+}\to {\mathbb {R}}_{+}$$ is a Borelian map and where $$\hat {L}$$ is of the form $${\hat {L}}_{T} := \sum _{i=1}^{N_{T}} {\hat {X}}_{i},$$ involving claims $${\hat {X}}_{i}$$ which are related to the claims $$X_{i}$$ of the original loss $$L_{T}$$. To be more precise, $${\hat {L}}_{T}$$ will be the effective loss covered by the reinsurance company, whereas $$L_{T}$$ is the loss quantity that activates the contract. Typical examples will be given in Section 2.1. Once again, this is similar to the valuation of CDO tranches, where the recovery rate is often supposed to be a beta-distributed random variable with mean 40%, whereas the realized rate, often revealed only after the formal bankruptcy, does not necessarily match this value.

In this paper, we provide an exact formula for (3) in terms of the building block $$x\mapsto {\mathbb {E}}\left [h(L_{T}+x)\right ]$$ (or of a related quantity for the more general situation (3); see (11) for a precise statement). This goal will be achieved by using the Malliavin calculus available for jump processes. Before turning to the exposition of the model, we emphasize that this methodology goes beyond pricing and finds application, for instance, in the computation of the expected shortfall of contingent claims in the realm of risk measures. Indeed, the expected shortfall is a useful risk measure that takes into account the size of the expected loss above the value at risk. Formally, it is defined as

$$ES_{\alpha}(-L_{T}) = {\mathbb{E}}\left[ -L_{T}\middle\vert -L_{T} > V@R_{\alpha}(-L_{T}) \right], \quad \alpha \in (0,1).$$

As it is well known, the expected shortfall coincides with Average Value at Risk (AV@R), that is

$$ES_{\alpha}(-L_{T}) = AV@R(-L_{T}):=\frac{1}{1-\alpha} \int_{\alpha}^{1} V@R_{s}(-L_{T}) ds,$$

if and only if $${\mathbb {P}}[-L_{T}\leq q_{-L_{T}}^{+}(t)]=t$$ for all $$t\in (0,1)$$, where $$q_{-L_{T}}^{+}(t)$$ denotes the quantile of level t of $$-L_{T}$$ (see Section 2.2.2 for a precise definition). However, already in the trivial example where the claim sizes $$X_{i}$$ are constant and equal to 1, this property fails, as $$L_{T}=N_{T}$$ is a Poisson random variable, whose distribution function is discontinuous. Our approach nevertheless gives an alternative explicit computation of $$\mathbb {E}\left [L_{T}\mathbf {1}_{\{L_{T}<\beta \}}\right ]$$, and thus of $$ES_{\alpha }(-L_{T})$$, as

$$ES_{\alpha}(-L_{T}) =\frac{-\mathbb{E}\left[L_{T}\mathbf{1}_{\{L_{T}<\beta\}}\right]}{\mathbb{P}(L_{T}<\beta)}, \quad \beta:=-V@R_{\alpha}(-L_{T}).$$

We conclude this section with some comments on the modeling of the claims $$X_{i}$$ and $${\hat {X}}_{i}$$. In the classic Cramér–Lundberg model, the claims are independent and identically distributed (i.i.d.) and, in addition, independent of the counting process N, which is an inhomogeneous Poisson process. In this work, we consider a doubly stochastic Poisson process N and we allow dependency between the size of the claims, their arrival times, and the intensity of N. In particular, we do not assume a Markovian setting. The impact of certain dependence structures on the stop-loss premium is studied in the reinsurance literature, such as in Albers (1999), Denuit et al. (2001), or De Lourdes Centeno (2005), but those works usually assume dependency between the successive claim sizes and the inter-arrival times. Nevertheless, in the ruin theory literature, some contributions already propose explicit dependencies among inter-arrival times and claim sizes, such as Albrecher and Boxma (2004), Boudreault et al. (2006), and related works. A general framework of dependencies is proposed by Albrecher et al. (2011), in which the dependence arises via mixing through a so-called frailty parameter. Recently, Albrecher et al. (The single server queue with mixing dependencies, submitted) extended duality results that relate survival and ruin probabilities in the insurance risk model to waiting time distributions in the “corresponding” queueing model. The risk processes have a counterpart in the workload models of queueing theory, and a similar mixing dependence structure is considered in a queueing context. Similarly, in credit risk modeling, one can also suppose that the recovery rate depends on the underlying default intensity, as in Bakshi et al. (2006).

This paper proposes a general framework for dependencies: we neither assume a Markovian setting nor specify explicit dependencies among inter-arrival times and claim sizes. Besides, our framework extends the mixing approach of Albrecher et al. (2011) and (Albrecher et al.: The single server queue with mixing dependencies, submitted) by allowing a non-exchangeable family of random variables for the claim amounts. In particular, the distribution of the claim arriving at time $$\tau _{i}$$ may depend on the random cumulative intensity along the time interval $$[0,\tau _{i}]$$: this situation cannot be handled by the mixing method over a frailty parameter of i.i.d. sequences, and new computation techniques are needed. The one we propose here relies on Malliavin calculus in order to provide a decomposition formula in terms of a building block.

In contrast with Malliavin calculus in a Gaussian framework, on the Poisson space one may consider several types of Malliavin derivative operators, each with its associated integration by parts formula (see Privault (2009) for a description of several Malliavin derivatives on the Poisson space). For instance, one can design a differential calculus with respect to the jump times of the counting process. However, for our analysis we choose an alternative Malliavin calculus involving the so-called difference operator (as presented in Picard (1996a,b)), which allows us to perform explicit computations in our setting. For instance, the Malliavin derivative of key quantities for our approach, such as the terminal loss, reduces to comparing the original terminal loss with a perturbation of it consisting in the adjunction of a jump at a deterministic time. The computation of this derivative is explicit, as seen in Lemma 3.5. In addition, this calculus is very natural in the context of insurance risk management, as the Malliavin derivative translates into probabilistic language the fact that one needs to analyse the impact of a new claim on the overall terminal loss in order to better understand the risk structure of the loss process. Before going further, we stress that the aforementioned structural account of the loss process provided by the Malliavin derivative calls for a precise description of the probability space on which the Cox process is defined. Surprisingly, very few explicit and complete descriptions appear in the literature. As a consequence, we propose in Section 3.1 a construction of the Cox process which makes the use of the Malliavin derivative transparent. As far as we know, albeit quite natural, this explicit construction of the Cox process using a time change is new.

We proceed as follows. In Section 2 we describe our model for the loss process and present the insurance contracts for which we will propose a pricing formula. The latter will be stated and proved as Theorem 3.6 in Section 3. Particular cases of this result for several types of contracts in insurance are also given in this section. Finally, explicit examples are presented in Section 4.

## Model setup

In this section, we describe the loss process and the associated reinsurance contracts we will study. Throughout this paper, T will denote a positive finite real number which represents the final horizon time.

### The loss process

We begin by introducing the loss process $$L:=(L_{t})_{t\in [0,T]}$$, where the sizes of the claims and their arrival times are correlated. Let $$(N_{t})_{t\in [0,T]}$$ be a Cox process (also called a doubly stochastic Poisson process) with random intensity $$(\lambda _{t})_{t\in [0,T]}$$, whose jump times, denoted by $$(\tau _{i})_{i\in {\mathbb {N}}^{*}}$$, model the arrival times of the claims. We suppose that the claim size $$X_{i}$$ depends both on the cumulated intensity, defined by $$\Lambda _{t}:=\int _{0}^{t} \lambda _{s} ds$$, and on the claim arrival time $$\tau _{i}$$. Moreover, it will also depend on some random variable $$\varepsilon _{i}$$, where we suppose that $$(\varepsilon _{i})_{i\in {\mathbb {N}}^{*}}$$ is a sequence of positive i.i.d. random variables independent of the Cox process N. More precisely, the loss is given by

$$L_{t} := \sum_{i=1}^{N_{t}} X_{i} \, e^{-\kappa (t-\tau_{i})}, \quad \mathrm{~with~} X_{i}:=f(\tau_{i}, \Lambda_{\tau_{i}},\varepsilon_{i}), \quad t\in [0,T],$$
(4)

where κ≥0 is the discount rate and $$f:{\mathbb {R}}_{+}^{3} \to {\mathbb {R}}_{+}$$ is a bounded deterministic function. We provide several examples below.

### Example 2.1

1. In classic ruin theory, the claim size is often supposed to be independent of the arrival times and of the intensity process. In this case, we have $$f(t,\ell,x)=x$$.

2. In the second example, we suppose that the dependence of f on the exogenous factor ε is linear, and the linear coefficient is a function of the cumulated intensity Λ rescaled by time, i.e., $$\frac {\Lambda _{t}}{t}$$, which stands for some mean level of the intensity. For instance, let

$$f(t,\ell,x)= \sqrt{\frac{\ell}{t}} x.$$

In this example, if $$\varepsilon _{i}$$ follows an exponential distribution with parameter 1, then $$X_{i}=f(\tau _{i},\Lambda _{\tau _{i}},\varepsilon _{i})$$ follows, conditionally on the vector $$(\tau _{i},\Lambda _{\tau _{i}})$$, an exponential distribution with parameter $$\sqrt {\frac {\tau _{i}}{\Lambda _{\tau _{i}}}}$$.
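The conditional scaling property used in this example is easy to check by simulation: multiplying a standard exponential by $$\sqrt {\ell /t}$$ yields an exponential with parameter $$\sqrt {t/\ell }$$, hence with mean $$\sqrt {\ell /t}$$. A minimal sketch of ours, for a fixed hypothetical conditional value of the pair $$(\tau _{i},\Lambda _{\tau _{i}})$$:

```python
import random

def claim_size(t, cum_intensity, eps):
    """Example 2.1(2): f(t, l, x) = sqrt(l / t) * x."""
    return (cum_intensity / t) ** 0.5 * eps

# Conditionally on (tau_i, Lambda_{tau_i}) = (t, l), X_i = sqrt(l/t) * eps_i
# with eps_i ~ Exp(1), so X_i ~ Exp(sqrt(t/l)), i.e. with mean sqrt(l/t).
random.seed(0)
t, l = 2.0, 8.0   # illustrative values: sqrt(l/t) = 2
sample = [claim_size(t, l, random.expovariate(1.0)) for _ in range(200_000)]
empirical_mean = sum(sample) / len(sample)   # should be close to 2
```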

#### Generalized loss process

We can also consider a more general case, where the realized claim sizes are not exactly the amounts $$(X_{i})_{i\in {\mathbb {N}}^{*}}$$ that are computed to activate the reinsurance contract. More precisely, assume that, in addition to the factors $$(\varepsilon _{i})_{i\in {\mathbb {N}}^{*}}$$, there exists a family of i.i.d. positive random variables $$(\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$ which may depend on the random variables $$\varepsilon _{i}$$. Let $$g:{\mathbb {R}}_{+}^{4}\to {\mathbb {R}}_{+}$$ be a deterministic bounded function. We can define a modified cumulative loss process as

$${\hat{L}}_{t} := \sum_{i=1}^{N_{t}} g(\tau_{i}, \Lambda_{\tau_{i}},\varepsilon_{i},\vartheta_{i}) e^{-\kappa(t-\tau_{i})}, \quad t\in [0,T].$$
(5)

More precisely, although the insurance contract is triggered by the loss process L, the compensation amount can depend on some other exogenous factors $$(\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$. For instance, the amounts $$\vartheta _{i}$$ may be much lower than the $$\varepsilon _{i}$$. A typical example is given by the housing insurance market on the east coast of the United States of America. Indeed, this region is seasonally exposed to hurricanes of different magnitudes. Most of the damage impacts the houses of the insured, who may as well buy contracts on other belongings, such as cars, which are much less valuable. After a hurricane episode, the reinsurance stop-loss contract will be activated on the basis of the total damages $$L_{T}$$ to the houses (which are represented by the claims $$\varepsilon _{i}$$), whereas the effective damages $$\hat L_{T}$$ will also include all other insured belongings (which are modeled by the $$\vartheta _{i}$$). In the special case where the function g does not depend on the fourth variable, the general loss $${\hat {L}}_{T}$$ reduces to the standard loss defined in (4). We give below some examples of the joint distribution of $$(\varepsilon _{i},\vartheta _{i})$$.
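On a single scenario (jump times, cumulated intensity, and factor values), both (4) and (5) are plain discounted sums and can be evaluated directly. The sketch below is ours, with hypothetical claim functions f and g chosen to mimic the hurricane example (houses trigger the contract; houses plus cars are reimbursed):

```python
import math

def losses(T, kappa, jumps, f, g):
    """Evaluate L_T from (4) and hat L_T from (5) on one scenario.
    jumps: list of tuples (tau_i, Lambda_{tau_i}, eps_i, theta_i), tau_i <= T."""
    L = sum(f(tau, lam, e) * math.exp(-kappa * (T - tau))
            for tau, lam, e, _ in jumps)
    L_hat = sum(g(tau, lam, e, v) * math.exp(-kappa * (T - tau))
                for tau, lam, e, v in jumps)
    return L, L_hat

f = lambda t, l, e: e           # trigger loss: houses only
g = lambda t, l, e, v: e + v    # effective loss: houses + cars
scenario = [(0.2, 0.4, 1.0, 0.1), (0.5, 1.1, 2.0, 0.3)]
L, L_hat = losses(1.0, 0.0, scenario, f, g)   # kappa = 0: L = 3.0, L_hat = 3.4
```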

### Example 2.2

1. The first natural case is that of independent $$\varepsilon _{i}$$ and $$\vartheta _{i}$$. For example, each of them can follow an exponential (or Erlang) distribution, with different positive parameters $$\theta _{1}$$ and $$\theta _{2}$$.

2. We can introduce dependence between $$\varepsilon _{i}$$ and $$\vartheta _{i}$$ by using the mixing method of Albrecher et al. (2011). For instance, let $$\varepsilon _{i}$$ and $$\vartheta _{i}$$ follow Pareto marginal distributions with a dependence structure given by a Clayton copula (according to Example 2.3 in Albrecher et al. (2011), this can be achieved by mixing the two Pareto marginal distributions with a mixing parameter following a Gamma distribution).

3. Explicit dependence: let $$\varepsilon _{i}$$ follow a Pareto distribution and $$\vartheta _{i}$$ follow a Weibull distribution whose shape or scale parameter depends on $$\varepsilon _{i}$$.

### Reinsurance contracts and related quantities

#### Generalized stop-loss contracts

In the introduction, we considered the stop-loss contract, whose payoff is given by $$\Phi (L_{T})$$, where Φ has been defined in (1) and corresponds to a call spread, that is, the difference of two call functions. Our approach allows us to go beyond the case of the stop-loss contract. Consider now a contract where the reinsurance company pays

$$\widetilde\Phi(L_{T},\hat L_{T})= \left\{\begin{array}{ll} 0, &\mathrm{~if~} L_{T}< K;\\ {\hat{L}}_{T}-K, &\mathrm{~if~} K \leq L_{T} < M;\\ M -K, &\mathrm{~if~} L_{T}\geq M, \end{array}\right.$$
(6)

with $$\hat L_{T}$$ defined in (5): the contract is activated when the a priori loss $$L_{T}$$ exceeds the level K, and the compensation is capped when $$L_{T}$$ exceeds M. Then, the price of such a contract is:

$${\mathbb{E}}\left[{\hat{L}}_{T}\mathbf{1}_{\{L_{T}>K\}}\right] - K {\mathbb{P}}\left[L_{T} \in [K,M]\right] + (M-K) {\mathbb{P}}\left[L_{T} \geq M\right].$$
(7)

#### Expected shortfall

The expected shortfall is a useful risk measure which takes into account the size of the expected loss above the value at risk. We recall that the expected shortfall at level α is given by

$$ES_{\alpha}(-L_{T}) = {\mathbb{E}}\left[ -L_{T}\middle\vert -L_{T} > V@R_{\alpha}(-L_{T}) \right], \quad \alpha \in (0,1),$$

where the definition of V@R is

$$V@R_{\alpha}(X)=-q_{X}^{+}(\alpha)=q_{-X}^{-}(1-\alpha)$$

with

$$q_{X}^{+}(t)=\inf\{x|\,{\mathbb{P}}[X\leq x]>t\}=\sup\{x|\,{\mathbb{P}}[X<x]\leq t\}$$
$$q_{X}^{-}(t)=\sup\{x|\,{\mathbb{P}}[X<x]<t\}=\inf\{x|\,{\mathbb{P}}[X\leq x]\geq t\}.$$
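For a distribution with finitely many atoms, both quantiles just defined can be computed by a single scan of the distribution function; a minimal sketch of ours, with an illustrative discrete example:

```python
def quantiles(pmf, t):
    """Return (q^-(t), q^+(t)) for a distribution given as a list of
    (value, probability) pairs sorted by value, with t in (0, 1)."""
    cdf, q_minus, q_plus = 0.0, None, None
    for x, p in pmf:
        cdf += p
        if q_minus is None and cdf >= t:
            q_minus = x   # q^-(t) = inf{x : P[X <= x] >= t}
        if q_plus is None and cdf > t:
            q_plus = x    # q^+(t) = inf{x : P[X <= x] > t}
    return q_minus, q_plus

# Uniform on {1,2,3,4}: F(2) = 0.5 exactly, so q^-(0.5) = 2 while q^+(0.5) = 3.
pmf = [(1, 0.25), (2, 0.25), (3, 0.25), (4, 0.25)]
```

The two quantiles differ exactly at levels t where the distribution function is flat, which is the kind of discontinuity phenomenon discussed in the surrounding text.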

It is well known that $$ES_{\alpha }(X)$$ is equal to $$AV@R(X):=\frac {1}{1-\alpha } \int _{\alpha }^{1} V@R_{s}(X) ds$$ if and only if $${\mathbb {P}}[X\leq q_{X}^{+}(t)]=t$$ for all $$t\in (0,1)$$, which, in particular, is satisfied if the distribution function of X is continuous (see, e.g., [Föllmer and Schied (2011), Relation (4.38)]). However, the latter property already fails in the case where the claim sizes $$X_{i}$$ are constant. Thus, one cannot rely on the above relation and must directly compute the conditional expectation $$ES_{\alpha }(-L_{T})$$.

We will provide an alternative expression for the expected shortfall. Setting $$\beta :=-V@R_{\alpha }(-L_{T})$$, we have

$$ES_{\alpha}(-L_{T}) =\frac{-{\mathbb{E}}\left[L_{T}\mathbf{1}_{\{L_{T}<\beta\}}\right]}{{\mathbb{P}}[L_{T}<\beta]},$$

where

$$\beta=q^{+}_{-L_{T}}(\alpha)=\inf\{x\,|\,{\mathbb{P}}[L_{T}>-x]>\alpha\}.$$

Once again the key term to compute turns out to be the expectation $${\mathbb {E}}\left [L_{T}\mathbf {1}_{\{L_{T}<\beta \}}\right ]$$.

### General payoffs

More generally, we are interested in computing quantities of the form

$${\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\right],$$

where $$h:{\mathbb {R}}_{+}\to {\mathbb {R}}_{+}$$ is a Borelian map with $${\mathbb {E}}[h(L_{T})]<\infty$$. Since, in our model, the counting process is a Cox process with stochastic intensity, the building block becomes the following mapping, defined through a conditional expectation:

$$x\mapsto {\mathbb{E}}\left[h(L_{T}+x)\vert (\lambda_{t})_{t\in[0,T]}\right].$$

Note that the examples of Section 2.2.1 (respectively, of Section 2.2.2) are contained in this setting by choosing $$h:=\mathbf {1}_{[K,M]}$$ for some $$-\infty <K<M\leq +\infty$$ (respectively, $$h:=\mathbf {1}_{(-\infty,\beta)}$$ and $${\hat {L}}_{T}=L_{T}$$).

Our approach calls for a short stochastic analysis review that we present in the next section.

## The pricing formulae using Malliavin calculus

In this section, we establish our main pricing formulae by using Malliavin calculus. To this end, we first make precise the Poisson space associated with the loss process. Then, we provide basic tools for Malliavin calculus.

### Construction of the Poisson space

#### The counting process and intensity process

We recall that the loss process involves the Cox process $$(N_{t})_{t\in [0,T]}$$, with its intensity and jump times, and the family of random variables $$(\varepsilon _{i})_{i\in {\mathbb {N}}^{*}}$$. We begin by introducing a general counting process which will be useful for the construction of $$(N_{t})_{t\in [0,T]}$$ on a suitable space. Let Ω1 be the set of (finite or infinite) strictly increasing sequences in $$]0,+\infty [$$. We define a continuous-time stochastic process $${\mathbb {C}}$$ on the set Ω1 as

$$\forall\,(t,\omega_{1})\in [0,+\infty[\times\Omega_{1},\quad {\mathbb{C}}_{t}(\omega_{1}):=\text{card}([0,t]\cap\omega_{1}).$$

Let $$\mathbb F^{{\mathbb {C}}}=\left ({\mathcal {F}}_{t}^{{\mathbb {C}}}\right)_{t\geq 0}$$ be the filtration generated by the process $${\mathbb {C}}$$, namely, $${\mathcal {F}}_{t}^{{\mathbb {C}}}:=\sigma ({\mathbb {C}}_{s},\,s\leq t)$$. It is known that there exists a unique probability measure $${\mathbb {P}}_{1}$$ on $$\left (\Omega _{1},{\mathcal {F}}_{\infty }^{{\mathbb {C}}}\right)$$ under which the process $${\mathbb {C}}$$ is a Poisson process with intensity 1, that is, for every $$(s,t)\in [0,+\infty)^{2}$$ with s<t, the random variable $${\mathbb {C}}_{t}-{\mathbb {C}}_{s}$$ is independent of $${\mathcal {F}}_{s}^{{\mathbb {C}}}$$ and Poisson distributed with parameter t−s.

We then consider a probability space $$(\Omega _{2},\mathcal {A},{\mathbb {P}}_{2})$$ on which are defined:

• A positive stochastic process (λ t )t[0,T] such that

$$\int_{0}^{T} \lambda_{s} ds<+\infty, \;\;\; {\mathbb{P}}_{2}\text{ - a.s.},$$
• A collection of i.i.d. $${\mathbb {R}}_{+}^{2}$$-valued bounded random variables $$(\varepsilon _{i},\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$ and an $${\mathbb {R}}_{+}^{2}$$-valued random variable $$(\overline \varepsilon,\overline \vartheta)$$ independent of $$(\varepsilon _{i},\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$, with $$(\overline {\varepsilon },\overline \vartheta) \overset {\mathcal {L}}{=} (\varepsilon _{1},\vartheta _{1})$$ (where $$\overset {\mathcal {L}}{=}$$ stands for equality of probability distributions). We denote by μ the law of the pair $$(\overline {\varepsilon },\overline \vartheta)$$.

### Assumption 3.1

We assume that λ is independent of $$(\varepsilon _{i},\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$, and of $$(\overline {\varepsilon },\overline {\vartheta })$$.

Let $$\mathbb F^{\lambda }=\left ({\mathbb {F}}_{t}^{\lambda }\right)_{t\in [0,T]}$$ be the right-continuous complete filtration generated by the stochastic process λ. Moreover, we set

$$\Lambda_{t}:=\int_{0}^{t} \lambda_{s} ds, \quad t\in [0,T].$$
(8)

Let $${\mathbb {F}}^{\varepsilon,\vartheta }$$ be the σ-algebra generated by $$(\varepsilon _{i})_{i\in {\mathbb {N}}^{*}}$$ and $$(\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$. Note that only $$(\varepsilon _{i})_{i\in {\mathbb {N}}^{*}}$$ and $$(\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$ will be involved in the loss process; $$\overline {\varepsilon }$$ and $$\overline {\vartheta }$$ are just independent copies which play an auxiliary role. Recall that μ denotes the probability law of the pair $$(\varepsilon _{i},\vartheta _{i})$$.

### Assumption 3.2

Throughout this paper, we assume that $$\Lambda _{T} <+\infty, \; {\mathbb {P}}_{2}\text{-a.s.}$$

#### The doubly stochastic Poisson process

We now consider the product space $$(\Omega :=\Omega _{1}\times \Omega _{2},{\mathcal {F}}:={\mathcal {F}}^{\mathcal {C}}_{\infty }\otimes \mathcal {A},{\mathbb {P}}:= {\mathbb {P}}_{1}\otimes {\mathbb {P}}_{2})$$. By abuse of notation, any random variable Y on Ω1 can be considered as a random variable on Ω which sends ω=(ω1,ω2) to Y(ω1). Similarly, any random variable Z on Ω2 can be considered as a random variable on Ω which sends ω=(ω1,ω2) to Z(ω2).

We define a counting process N:=(N t )t[0,T] on Ω by using a time change as

$$N_{t}(\omega_{1},\omega_{2}):=\mathcal{C}_{\Lambda_{t}(\omega_{2})}(\omega_{1}) = \mathcal{C}_{\int_{0}^{t} \lambda_{s}(\omega_{2}) ds}(\omega_{1}), \quad t\in [0,T], \; (\omega_{1},\omega_{2})\in \Omega.$$

Note that for any t, $$N_{t}$$ is an $${\mathcal {F}}_{\infty }^{{\mathbb {C}}}\otimes {\mathbb {F}}^{\lambda }_{T}$$-measurable random variable. Moreover, for any fixed ω2 in Ω2, $$N_{t}(\cdot,\omega _{2})$$ is an inhomogeneous Poisson process on Ω1 with intensity $$t\mapsto \lambda _{t}(\omega _{2})$$ with respect to the filtration $$\left ({\mathcal {F}}^{{\mathbb {C}}}_{\Lambda _{t}(\omega _{2})}\right)_{t\in [0,T]}$$, which reads as

$${\mathbb{E}}\left[e^{iu (N_{t}-N_{s})} \middle\vert {\mathcal{F}}^{\lambda}_{s} \right] = {\mathbb{E}}\left[\exp\left((e^{iu}-1)\int_{s}^{t} \lambda_{r} dr \right)\middle\vert {\mathcal{F}}^{\lambda}_{s} \right], \quad 0\leq s<t\leq T,$$

where $${\mathbb {E}}$$ denotes the expectation with respect to the measure $${\mathbb {P}}$$. For a process $$(u_{t})_{t\in [0,T]}$$ such that

$$\left\{ \begin{array}{l} u_{t} \mathrm{~is~} {\mathcal{F}}\mathrm{~-measurable~}, \quad t \in[0,T],\\ \mathrm{~for~a.e.~} \omega_{2} \in \Omega_{2}, (u_{t}(\cdot,\omega_{2}))_{t\in [0,T]} \mathrm{~is~} \left({\mathcal{F}}_{\Lambda_{t}(\omega_{2})}^{{\mathbb{C}}}\right)_{t\in [0,T]}\mathrm{~-predictable ~},\\ {\mathbb{E}}\left[\int_{0}^{T} |u_{t}| dt\right]<+\infty, \end{array} \right.$$
(9)

we denote by $$\left (\int _{0}^{T} u_{s} dN_{s}\right)(\omega _{1},\omega _{2})$$ the Lebesgue–Stieltjes integral of u(ω1,ω2) against the measure N(ω1,ω2).

For any $$i\in {\mathbb {N}}$$, we let $$\tau _{i}$$ be the i-th jump time of the process N, namely,

$$\forall\,\omega=(\omega_{1},\omega_{2})\in\Omega,\quad \tau_{i} (\omega):= \inf\{t>0, \; N_{t}={\mathcal{C}}_{\Lambda_{t}(\omega_{2})}(\omega_{1}) \geq i \},$$

with the convention τ0=0.
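The time-change construction above translates directly into code: given a realization ω1 of the unit-rate Poisson jump times and the cumulated intensity Λ, one counts the standard jump times falling below $$\Lambda _{t}$$, and the claim arrival times $$\tau _{i}$$ are obtained by inverting Λ. A minimal sketch of ours, with a deterministic intensity for illustration (for a random λ one would first simulate a path of Λ):

```python
import bisect

def cox_time_change(std_jumps, Lam, T):
    """N_T = C_{Lambda_T} and the jump times tau_i <= T of the Cox process.
    std_jumps: increasing jump times of the unit-rate Poisson process C;
    Lam: the nondecreasing cumulated intensity t -> Lambda_t.
    tau_i = inf{t : Lambda_t >= s_i} is found by bisection on [0, T]."""
    N_T = bisect.bisect_right(std_jumps, Lam(T))
    taus = []
    for s in std_jumps[:N_T]:
        lo, hi = 0.0, T
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if Lam(mid) < s else (lo, mid)
        taus.append(hi)
    return N_T, taus

# Deterministic illustration: lambda_t = 2, so Lambda_t = 2t and tau_i = s_i / 2.
N_T, taus = cox_time_change([0.5, 1.3, 2.7], lambda t: 2.0 * t, 1.0)
```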

### The Malliavin integration by parts formula

We can now state the Malliavin integration by parts formula on the product space. For any $$t\in [0,T]$$ and any $$\omega _{1}\in \Omega _{1}$$ which is of finite length or has a limit greater than t, we define $$\omega _{1}\cup \{t\}$$ as the element of Ω1 given by the increasing sequence whose underlying set is the union of ω1 and {t}. The effect of this operation is to add a jump at time t to the Poisson process N. Finally, for $$\omega :=(\omega _{1},\omega _{2})\in \Omega$$ and $$t\in [0,T]$$, we set

$$\omega \cup \{t\} :=(\omega_{1} \cup \{t\},\omega_{2}),$$

provided that $$\omega _{1}\cup \{t\}$$ is well defined. The following lemma is a direct extension of the one presented, for example, in [Picard (1996a), Corollaire 5] or in Picard (1996b) (see also Privault (2009)).

### Lemma 3.3

Let $$u:\Omega \times [0,T] \to {\mathbb {R}}$$ be a stochastic process which satisfies (9), and let $$F:\Omega \to {\mathbb {R}}$$ be a bounded $${\mathcal {F}}$$-measurable random variable. Then the stochastic process $$(\omega,t)\mapsto F(\omega \cup \{t\})$$ is well defined $${\mathbb {P}}\otimes dt$$-a.e. and

$${\mathbb{E}}\left[F \int_{0}^{T} u_{s} dN_{s} \middle\vert {\mathcal{F}}_{T}^{\lambda} \vee {\mathcal{F}}^{\varepsilon,\vartheta}\right] = {\mathbb{E}}\left[\int_{0}^{T} u_{t} \; F(\cdot\cup \{t\}) \lambda_{t} dt \middle\vert {\mathcal{F}}_{T}^{\lambda} \vee {\mathcal{F}}^{\varepsilon,\vartheta} \right].$$
(10)

### Remark 3.4

As mentioned, the proof of this result can be found in Picard (1996a,b) and Privault (2009), so we do not reproduce it here. However, to give a bit of intuition, let us just mention that this formula extends the classical integration by parts formula for a Poisson random variable N with parameter λ>0 on $$\mathbb {N}$$, namely, that

$$\forall g: \mathbb{N} \to \mathbb{R}_{+}, \quad {\mathbb{E}}[N g(N)] = \lambda {\mathbb{E}}[g(N+1)].$$
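This one-dimensional identity can be verified numerically with truncated sums over the Poisson distribution; a small sketch of ours (the test function g(n)=n² and the parameter value are arbitrary choices):

```python
import math

def poisson_expectation(lam, g, n_max=100):
    """Truncated E[g(N)] for N ~ Poisson(lam), building the pmf iteratively."""
    pmf, total = math.exp(-lam), 0.0
    for n in range(n_max + 1):
        total += g(n) * pmf
        pmf *= lam / (n + 1)
    return total

# Check E[N g(N)] = lam * E[g(N+1)] for g(n) = n^2 and lam = 1.5:
lam = 1.5
g = lambda n: n * n
lhs = poisson_expectation(lam, lambda n: n * g(n))
rhs = lam * poisson_expectation(lam, lambda n: g(n + 1))
```

Both sides equal the third raw moment λ³+3λ²+λ of the Poisson distribution, here 11.625.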

### The main result

In this section, we present our main result concerning the computation of the quantity

$${\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\right],$$

where $$h:{\mathbb {R}}_{+}\to {\mathbb {R}}_{+}$$ is a Borelian map with $${\mathbb {E}}[h(L_{T})]<\infty$$ and where L T and $$\hat L_{T}$$, respectively, are defined in (4) and (5). We set

$$\varphi_{\lambda}^{h}(x):={\mathbb{E}}\left[h(L_{T}+x)\vert {\mathcal{F}}_{T}^{\lambda}\right], \quad x\in {\mathbb{R}}_{+}.$$
(11)

It might be surprising at first glance to consider the conditional expectation given λ in the building block. In fact, as the intensity λ of N is random, the situation can be compared to a Black–Scholes model with independent stochastic volatility. In that context, the Black–Scholes formula would be written in terms of the conditional law of the terminal value of the stock given the volatility (which would simply be a log-normal distribution with variance given by the volatility). Recall that for the insurance contract presented in Section 2.2.1, $$h:=\mathbf {1}_{[K,M]}$$ and thus $$\varphi _{\lambda }^{h}$$ coincides with the conditional distribution function of $$L_{T}$$.

Before turning to the statement and the proof of the main result, note that

$${\hat{L}}_{T} =\int_{0}^{T} \hat Z_{s} dN_{s},$$
(12)

with

$${\hat{Z}}_{s}:=\sum_{i=1}^{+\infty} g(s,\Lambda_{s},\varepsilon_{i},\vartheta_{i}) e^{-\kappa (T-s)}{1}_{(\tau_{i-1},\tau_{i}]}(s), \quad s \in [0,T].$$
(13)

Moreover, on the set $$\{\Delta _{s} N=0\}$$, one has

$${\hat{Z}}_{s}=g(s,\Lambda_{s},\varepsilon_{1+N_{s}},\vartheta_{1+N_{s}})e^{-\kappa(T-s)}.$$
(14)

As Λ is a continuous process, $$\hat Z$$ satisfies Relation (9), provided that $${{\mathbb {E}}\left [\int _{0}^{T}|{\hat {Z}}_{t}|dt\right ]<+\infty }$$.

We start our analysis with the following lemma.

### Lemma 3.5

Under Assumptions 3.1 and 3.2, for any $$t\in [0,T]$$, it holds that

\begin{aligned} &\left(g(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}) e^{-\kappa(T-t)},L_{T}(\cdot \cup \{t\}),\lambda_{t}\right) \overset{\mathcal{L}}=\\&\quad \left(g(t,\Lambda_{t},\overline\varepsilon,\overline\vartheta) e^{-\kappa (T-t)}, L_{T} + f(t,\Lambda_{t},\overline \varepsilon) e^{-\kappa (T-t)},\lambda_{t}\right). \end{aligned}

### Proof

We set

\begin{aligned} L_{t} &:= \sum_{i=1}^{N_{t}} f(\tau_{i},\Lambda_{\tau_{i}},\varepsilon_{i}) e^{-\kappa (T-\tau_{i})}, \\ L^{+}_{t}\! &:= \sum_{i=1}^{N_{t}} f(\tau_{i},\Lambda_{\tau_{i}},\varepsilon_{i+1}) e^{-\kappa (T-\tau_{i})}, \quad t\in [0,T]. \end{aligned}

We first make precise the value of $$L_{T}(\omega \cup \{t\})$$ for a fixed element $$t\in (0,T)$$ and for $$\omega :=(\omega _{1},\omega _{2})$$ in Ω such that $$t\notin \omega _{1}$$ and $$\omega _{1}\cup \{t\}$$ is well defined (the set of such ω has probability 1). By definition, we have that

$$L_{T}(\omega \cup \{t\})= \sum_{i=1}^{N_{T}(\omega \cup \{t\})} f(\tau_{i}(\omega \cup \{t\}),\Lambda_{\tau_{i}(\omega \cup \{t\})}(\omega_{2}),\varepsilon_{i}(\omega_{2})) e^{-\kappa (T-\tau_{i}(\omega \cup \{t\}))}.$$

Note that one has

$$\forall\,i\in{\mathbb{N}},\quad \tau_{i}(\omega\cup\{t\})= \left\{\begin{array}{ll} \tau_{i}(\omega),& \mathrm{~if~}i\leq N_{t}(\omega),\\ t,&\mathrm{~if~}i=N_{t}(\omega)+1\\ \tau_{i-1}(\omega),&\mathrm{~if~}i>N_{t}(\omega)+1. \end{array}\right.$$

Therefore, we can write L T (ω{t}) as the sum of three terms as follows

\begin{aligned} L_{T}(\omega\cup\{t\})&=\sum_{i=1}^{N_{t}(\omega)} f(\tau_{i}(\omega_{1}),\Lambda_{\tau_{i}(\omega)}(\omega_{2}),\varepsilon_{i}(\omega_{2})) e^{-\kappa (T-\tau_{i}(\omega_{1}))}\\ &\quad +f(t,\Lambda_{t}(\omega_{2}),\varepsilon_{1+N_{t}(\omega)}(\omega_{2}))e^{-\kappa(T-t)}\\ &\quad+\sum_{i=N_{t}(\omega)+2}^{N_{T}(\omega)+1}f(\tau_{i-1}(\omega_{1}),\Lambda_{\tau_{i-1}(\omega)}(\omega_{2}),\varepsilon_{i}(\omega_{2}))e^{-\kappa(T-\tau_{i-1}(\omega_{1}))}. \end{aligned}
(15)

By definition, the first term in the sum is just L t (ω). Moreover, by a change of index, we can write the third term as

$$\sum_{i=N_{t}(\omega)+1}^{N_{T}(\omega)}f(\tau_{i}(\omega),\Lambda_{\tau_{i}(\omega)}(\omega_{2}), \varepsilon_{i+1}(\omega_{2}))\mathrm{e}^{-\kappa(T-\tau_{i}(\omega))}=L^{+}_{T}(\omega)-L^{+}_{t}(\omega).$$
(16)

Therefore, by (15), the following equality holds almost surely

$$f(t,\Lambda_{t},\varepsilon_{1+N_{t}}) e^{-\kappa(T-t)}=(L_{T}(\cdot\cup\{t\})-L_{t})-(L^{+}_{T}-L^{+}_{t}).$$
(17)

Moreover, from the decomposition formula (15), we also observe that $$\varepsilon _{1+N_{t}}$$ is independent of $$L_{t}+ L^{+}_{T}- L^{+}_{t}$$ given $${\mathbb {F}}_{\infty }^{{\mathbb {C}}}\otimes {\mathbb {F}}_{T}^{\lambda }$$. In addition, by Assumption 3.1, the conditional law of $$\varepsilon _{1+N_{t}}$$ given $${\mathbb {F}}_{\infty }^{{\mathbb {C}}}\otimes {\mathbb {F}}_{T}^{\lambda }$$ identifies with the law of $$\overline {\varepsilon }$$ since $${\mathbb {F}}^{\varepsilon }$$ is independent of $${\mathbb {F}}_{T}^{\lambda }$$.

We now compute the characteristic functions of the two random vectors of interest. Let χ be the characteristic function of the random vector

$$\left(g(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}) e^{-\kappa(T-t)},L_{T}(\cdot \cup \{t\}),\lambda_{t}\right).$$

Let $$(u_{1},u_{2},u_{3})\in \mathbb R^{3}$$. One has

\begin{aligned} \chi(u_{1},u_{2},u_{3})&:={\mathbb{E}}\left[e^{i u_{1} g\left(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}\right) e^{-\kappa(T-t)} + i u_{2} L_{T}(\cdot \cup \{t\}) + iu_{3} \lambda_{t}}\right]\\ &={\mathbb{E}}\left[e^{iu_{3} \lambda_{t}} e^{i u_{1} g\left(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}\right) e^{-\kappa(T-t)} + i u_{2} \left(L_{t} + f\left(t,\Lambda_{t}, \varepsilon_{1+N_{t}}\right) e^{-\kappa (T-t)} + L^{+}_{T} - L^{+}_{t}\right)}\right]\\ &={\mathbb{E}}\left[e^{iu_{3} \lambda_{t}} e^{i u_{2} \left(L_{t}+L^{+}_{T}-L^{+}_{t}\right)} e^{iu_{2} e^{-\kappa (T-t)} f\left(t,\Lambda_{t},\varepsilon_{1+N_{t}}\right)} e^{iu_{1} e^{-\kappa (T-t)} g\left(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}\right) }\right]. \end{aligned}

Since $$\varepsilon _{1+N_{t}}$$ and $$\vartheta _{1+N_{t}}$$ are independent of $$L_{t}+L^{+}_{T}-L^{+}_{t}$$ given $${\mathbb {F}}_{\infty }^{{\mathbb {C}}}\otimes {\mathbb {F}}_{T}^{\lambda }$$, we obtain that

\begin{aligned} \chi(u_{1},u_{2},u_{3})= &{\mathbb{E}}\left[e^{iu_{3} \lambda_{t}} e^{iu_{2} e^{-\kappa (T-t)} f(t,\Lambda_{t},\overline{\varepsilon})} e^{iu_{1}e^{-\kappa (T-t)} g(t,\Lambda_{t},\overline{\varepsilon},\overline{\vartheta})}\right.\\ &\left.{\mathbb{E}}\left[e^{i u_{2} \left(L_{t}+ L^{+}_{T}-L^{+}_{t}\right)}\,\middle|\,{\mathbb{F}}^{{\mathbb{C}}}_{\infty}\otimes{\mathbb{F}}_{T}^{\lambda}\right]\right], \end{aligned}

where we also use the fact that the probability law of $$(\varepsilon _{1+N_{t}},\vartheta _{1+N_{t}})$$ given $${\mathbb {F}}_{\infty }^{{\mathbb {C}}}\otimes {\mathbb {F}}^{\lambda }_{T}$$ coincides with μ (which, we recall, is the probability law of $$(\overline {\varepsilon },\overline {\vartheta })$$). Moreover, from (16), we observe that $$L_{t}+{L}^{+}_{T}-{L}^{+}_{t}$$ has the same law as L T conditioned on $${\mathbb {F}}_{\infty }^{{\mathbb {C}}}\otimes {\mathbb {F}}_{T}^{\lambda }$$. Therefore, we obtain

\begin{aligned} \chi(u_{1},u_{2},u_{3})&={\mathbb{E}}\left[e^{iu_{3}\lambda_{t}}e^{iu_{2} e^{-\kappa(T-t)}f(t,\Lambda_{t},\overline{\varepsilon})} e^{iu_{1}e^{-\kappa(T-t)}g(t,\Lambda_{t},\overline{\varepsilon},\overline{\vartheta})} e^{iu_{2}L_{T}}\right]\\ &={\mathbb{E}}\left[e^{iu_{2}e^{-\kappa(T-t)}f(t,\Lambda_{t},\overline{\varepsilon})}e^{iu_{1} \left(e^{-\kappa(T-t)}g(t,\Lambda_{t},\overline{\varepsilon},\overline{\vartheta})+L_{T}\right)}e^{iu_{3}\lambda_{t}}\right], \end{aligned}

which shows that χ coincides with the characteristic function of the vector

$$\left(g(t,\Lambda_{t},{\overline{\varepsilon}},{\overline{\vartheta}}) e^{-\kappa (T-t)}, L_{T} + f(t,\Lambda_{t},{\overline{\varepsilon}}) e^{-\kappa (T-t)},\lambda_{t}\right).$$

The lemma is thus proved. □

We now turn to the statement and the proof of the main result of this paper.

### Theorem 3.6

Recall that $$(\varepsilon _{i},\vartheta _{i})_{i \in \mathbb {N}^{*}}$$ and $$(\overline \varepsilon,\overline \vartheta)$$ are i.i.d. with common law μ. Under Assumptions 3.1 and 3.2, it holds that

\begin{aligned} & {\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\right] \\ &=\int_{0}^{T} e^{-\kappa (T-t)} {\mathbb{E}}\left[ g(t,\Lambda_{t},\overline \varepsilon, \overline \vartheta) \, \lambda_{t} \, \varphi_{\lambda}^{h}\left(f(t,\Lambda_{t},\overline \varepsilon) e^{-\kappa (T-t)}\right) \right] dt\\ &=\int_{\mathbb R_{+}^{2}}\int_{0}^{T} e^{-\kappa (T-t)} {\mathbb{E}}\left[ g(t,\Lambda_{t}, x,y) \, \lambda_{t} \, \varphi_{\lambda}^{h}\left(f(t,\Lambda_{t},x) e^{-\kappa (T-t)}\right) \right] \mu(dx,dy) \, dt, \end{aligned}
(18)

where $$\hat L_{T}$$ is defined in (5) and the mapping $$\varphi _{\lambda }^{h}(x):={\mathbb {E}}\left [h(L_{T}+x)\vert {\mathcal {F}}_{T}^{\lambda }\right ]$$ is defined in (11).

### Proof

Assumptions 3.1 and 3.2 are in force. Using relation (12) and the integration by parts formula on the Poisson space (10), it holds that

\begin{aligned} {\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right) \right]&={\mathbb{E}}\left[{\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\middle\vert {\mathcal{F}}^{\varepsilon,\vartheta} \vee {\mathcal{F}}_{T}^{\lambda} \right]\right]\\ &={\mathbb{E}}\left[{\mathbb{E}}\left[h\left(L_{T}\right) \int_{0}^{T} Z_{t} dN_{t}\middle\vert {\mathcal{F}}^{\varepsilon,\vartheta} \vee {\mathcal{F}}_{T}^{\lambda} \right]\right]\\&={\mathbb{E}}\left[\int_{0}^{T} Z_{t} h\left(L_{T}(\cdot \cup {\{t\}})\right) \lambda_{t} dt \right]. \end{aligned}

By Relation (14) and the fact that the set $$\{\Delta_{t} N\neq 0\}$$ is negligible, we obtain

\begin{aligned} {\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right) \right]&={\mathbb{E}}\left[\int_{0}^{T} g(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}) e^{-\kappa (T-t)} h\left(L_{T}(\cdot \cup \{t\})\right) \lambda_{t} dt \right]\\ &=\int_{0}^{T} {\mathbb{E}}\left[g(t,\Lambda_{t},\varepsilon_{1+N_{t}},\vartheta_{1+N_{t}}) e^{-\kappa (T-t)} h\left(L_{T}(\cdot \cup \{t\})\right) \lambda_{t}\right] dt. \end{aligned}

Finally, by Lemma 3.5, the above formula leads to

$${\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\right]=\int_{0}^{T}{\mathbb{E}}\left[ g(t,\Lambda_{t},\overline{\varepsilon},\overline{\vartheta}) e^{-\kappa (T-t)} h\left(L_{T} + f(t,\Lambda_{t},\overline{\varepsilon}) e^{-\kappa (T-t)}\right) \lambda_{t} \right] dt.$$

Since $$\overline {\varepsilon }$$ is independent of $${\mathbb {F}}^{\lambda }_{T}\vee {\mathbb {F}}^{\varepsilon }$$, one has

$${\mathbb{E}}\left[h\left(L_{T} + f(t,\Lambda_{t},\overline{\varepsilon}) e^{-\kappa (T-t)} \right)\,\middle\vert\,{\mathbb{F}}_{T}^{\lambda}\vee\sigma(\overline{\varepsilon})\right]=\varphi_{\lambda}^{h} \left(f(t,\Lambda_{t},\overline{\varepsilon})e^{-\kappa(T-t)}\right).$$

Therefore,

\begin{aligned} {\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right) \right] &=\int_{0}^{T} e^{-\kappa (T-t)}{\mathbb{E}}\left[ g(t,\Lambda_{t},\overline \varepsilon, \overline \vartheta) \, \lambda_{t} \, \varphi_{\lambda}^{h}\left(f(t,\Lambda_{t},\overline \varepsilon) e^{-\kappa (T-t)}\right) \right] dt\\ &=\int_{\mathbb R_{+}^{2}}\int_{0}^{T} e^{-\kappa (T-t)} {\mathbb{E}}\left[ g(t,\Lambda_{t}, x,y) \, \lambda_{t} \, \varphi_{\lambda}^{h}\left(f(t,\Lambda_{t},x) e^{-\kappa (T-t)}\right) \right] \mu(dx,dy)\, dt, \end{aligned}

as asserted by the theorem. □

### Remark 3.7

1. Note that from Equality (18), it is clear that our approach only requires the knowledge of the conditional law of $$L_{T}$$ given λ (via the mapping $$\varphi_{\lambda}^{h}$$) and not that of the pair $$(L_{T},\hat L_{T})$$. This seems particularly useful for the numerical approximation of the aforementioned expectation.

2. The theorem above makes explicit the dependence of the pricing formula on the intensity process $$(\lambda_{t})_{t\geq 0}$$ of the counting process.

Relation (18) allows us to give a lower (respectively, upper) bound on the price if h is assumed to be convex (respectively, concave).

### Corollary 3.8

Under the assumptions of Theorem 3.6, it holds that:

1. (i) If h is convex, then

\begin{aligned} &{\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\right] \\ &\geq \int_{\mathbb R_{+}^{2}}\int_{0}^{T} e^{-\kappa (T-t)} {\mathbb{E}}\left[ g(t,\Lambda_{t}, x,y) \, \lambda_{t} \, h\left({\mathbb{E}}\left[L_{T}\middle\vert {\mathbb{F}}_{T}^{\lambda}\right]\,+\,f(t,\Lambda_{t},x) e^{-\kappa (T-t)}\right) \right] \\&\quad \mu(dx,dy) \, dt; \end{aligned}
2. (ii) If h is concave, then

\begin{aligned} &{\mathbb{E}}\left[\hat L_{T} h\left(L_{T}\right)\right] \\ &\leq \int_{\mathbb R_{+}^{2}}\int_{0}^{T} e^{-\kappa (T-t)} {\mathbb{E}}\!\left[ g(t,\Lambda_{t}, x,y) \, \lambda_{t} \, h\left({\mathbb{E}}\left[L_{T}\middle\vert {\mathbb{F}}_{T}^{\lambda}\right]+f(t,\Lambda_{t},x) e^{-\kappa (T-t)}\right)\! \right]\\ &\qquad \mu(dx,dy) \, dt. \end{aligned}

### Proof

We prove (i); statement (ii) follows along the same lines. As h is assumed to be convex, Jensen’s inequality implies that

$$\varphi_{\lambda}^{h}(x) \geq h\left({\mathbb{E}}\left[L_{T}\middle\vert {\mathbb{F}}_{T}^{\lambda}\right]+x\right), \quad x \in {\mathbb{R}}_{+}.$$

The result is then obtained by plugging this estimate in Relation (18). □
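In the homogeneous case (constant intensity, so no conditioning is needed), the Jensen bound underlying the corollary can be illustrated numerically. Below is a minimal sketch, assuming a compound Poisson loss with Exp(1) claims and the convex choice h(u)=u²; all parameter values are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
lam0, T = 1.5, 2.0          # illustrative intensity and maturity
n = 100_000

# Compound Poisson loss L_T with Exp(1) claims (homogeneous case, kappa = 0).
N = rng.poisson(lam0 * T, size=n)
claims = rng.exponential(1.0, N.sum())
csum = np.concatenate(([0.0], np.cumsum(claims)))
ends = np.cumsum(N)
L = csum[ends] - csum[ends - N]       # one loss sample per path

h = lambda u: u ** 2                  # a convex payoff
x = 0.5
lhs = h(L + x).mean()                 # Monte Carlo estimate of E[h(L_T + x)]
rhs = h(L.mean() + x)                 # Jensen lower bound h(E[L_T] + x)
print(lhs >= rhs)
```

The gap between the two quantities is the variance of the loss, so the inequality is strict here; with a concave h the inequality would reverse, as in statement (ii).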

## Applications and examples

In this section, we provide some examples of the application of our main result, in particular for the (generalized) stop-loss contract. Such explicit computations are also useful for CDO tranches and the expected shortfall risk measure.

### Computation of the building block

We first focus on the building block $$\varphi _{\lambda }^{h}$$ (defined in (11)) when $$h:=\mathbf{1}_{[K,M]}$$:

$$\varphi_{\lambda}(x) := \varphi_{\lambda}^{h}(x) = {\mathbb{P}}\left[L_{T} \in [K-x,M-x]\vert {\mathcal{F}}_{T}^{\lambda}\right], \quad x\in {\mathbb{R}}_{+},$$

which corresponds to the payoff of a stop-loss contract or a CDO tranche. Let $${\mathcal {F}}^{\varepsilon }:=\sigma (\varepsilon _{i},\; i\in {\mathbb {N}}^{*})$$. For any $$i\in {\mathbb {N}}^{*}$$, we set $$X_{i}:=f(\tau _{i},\Lambda _{\tau _{i}},\varepsilon _{i})$$. For $$x\in {\mathbb {R}}_{+}$$, we have

\begin{aligned} &{\mathbb{P}}\left[L_{T} \in \left[K-x,M-x\right] \middle\vert {\mathcal{F}}^{\lambda}_{T} \vee {\mathcal{F}}^{\varepsilon}\right]\\ &={\mathbb{P}}\left[\sum_{i=1}^{N_{T}} X_{i} e^{\kappa \tau_{i}} \in \left[(K-x) e^{\kappa T},(M-x) e^{\kappa T}\right] \Bigg\vert {\mathcal{F}}^{\lambda}_{T} \vee {\mathcal{F}}^{\varepsilon}\right]\\ &=\sum_{k=1}^{+\infty} {\mathbb{P}}\left[ \sum_{i=1}^{k} X_{i} e^{\kappa \tau_{i}} \in \left[(K-x) e^{\kappa T},(M-x) e^{\kappa T}\right] \Bigg\vert N_{T}=k,{\mathcal{F}}^{\lambda}_{T} \vee {\mathcal{F}}^{\varepsilon} \right] \\&\qquad{\mathbb{P}}\left[N_{T}=k\middle\vert {\mathcal{F}}_{T}^{\lambda}\right]\\ &= \sum_{k=1}^{+\infty} e^{-\int_{0}^{T} \lambda_{s} ds} \int_{\mathcal S_{k}} {\mathbb{P}} \left[\sum_{i=1}^{k} X_{i} e^{\kappa t_{i}} \in \left[(K-x) e^{\kappa T},(M-x) e^{\kappa T} \right] \Bigg\vert {\mathcal{F}}^{\lambda}_{T} \vee {\mathcal{F}}^{\varepsilon}\right]\\&\qquad \lambda_{t_{1}} dt_{1}\cdots \lambda_{t_{k}} dt_{k}\\ &= \sum_{k=1}^{+\infty} e^{-\int_{0}^{T} \lambda_{s} ds} \int_{\mathcal S_{k}} \int_{{\mathbb{R}}_{+}^{k}} {\mathbf{1}}_{\left\{\sum_{i=1}^{k} x_{i} e^{\kappa t_{i}} \in \left[(K-x) e^{\kappa T},(M-x) e^{\kappa T} \right]\right\}}\\&\qquad \mathcal{L}_{X_{(1:k)}}^{\vert \lambda}(dx_{1},\ldots,dx_{k}) \lambda_{t_{1}} dt_{1}\cdots \lambda_{t_{k}} dt_{k}, \end{aligned}

where $$\mathcal S_{k}:=\{0<t_{1}<\cdots < t_{k} \leq T\}$$, $$X_{(1:k)}:=(X_{1},\ldots,X_{k})$$ and

$$\mathcal{L}_{X_{(1:k)}}^{\vert \lambda}(dx_{1},\ldots,dx_{k}):={\mathbb{P}}\left[X_{(1:k)} \in (dx_{1},\ldots,dx_{k})\middle\vert {\mathcal{F}}^{\lambda}_{T}\right].$$

It just remains to compute the joint distribution of the claims $$X_{(1:k)}$$ in different situations. In particular, we provide an explicit example below.
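The decomposition above (conditioning first on N_T, then on the jump times) also suggests a simulation scheme: given the intensity path, N_T is Poisson with mean Λ_T and the jump times are distributed as the order statistics of i.i.d. draws with density λ_s/Λ_T. A minimal sketch, with an assumed toy intensity path (not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy intensity path standing in for one realization of (lambda_s); assumed for illustration.
t = np.linspace(0.0, 2.0, 401)
lam = 1.0 + 0.5 * np.sin(t)
# Cumulated intensity Lambda by the trapezoidal rule.
Lam = np.concatenate(([0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t))))

def sample_jump_times(rng):
    """Given the path: N_T | F^lambda ~ Poisson(Lambda_T), and the jump times are
    the order statistics of i.i.d. draws with density lam_s / Lambda_T."""
    n = rng.poisson(Lam[-1])
    u = np.sort(rng.uniform(0.0, Lam[-1], size=n))
    return np.interp(u, Lam, t)          # invert Lambda by linear interpolation

draws = [sample_jump_times(rng) for _ in range(20_000)]
mean_jumps = np.mean([len(d) for d in draws])
print(mean_jumps, Lam[-1])               # both close to Lambda_T
```

The final equality of the display can then be estimated by averaging the indicator over such draws together with simulated claim amounts.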

Model on $$\varepsilon_{i}$$: We assume that $$(\varepsilon _{i})_{i\in {\mathbb {N}}^{*}}$$ are i.i.d. random variables with common Pareto distribution $$\mathcal {P}(\alpha _{\varepsilon },\beta _{\varepsilon })$$, $$(\alpha _{\varepsilon },\beta _{\varepsilon }) \in ({\mathbb {R}}^{*}_{+})^{2}$$, whose density $$\psi_{\varepsilon}$$ is defined as

$$\psi_{\varepsilon}(z)=\beta_{\varepsilon}\frac{\alpha_{\varepsilon}^{\beta_{\varepsilon}}}{z^{\beta_{\varepsilon}+1}}\,{\mathbf{1}}_{\{z \geq \alpha_{\varepsilon}\}}.$$

Choosing $$f(t,\ell,x):= \sqrt {\frac {\ell }{t}}x$$, the conditional distribution $$\mathcal {L}_{X_{(1:k)}}^{\vert \lambda }(dx_{1},\ldots,dx_{k})$$ in Relation (4.1) becomes

\begin{aligned} &\mathcal{L}_{X_{(1:k)}}^{\vert \lambda}(dx_{1},\ldots,dx_{k})\\ &={\mathbb{P}}\left[\left(\sqrt{\frac{\Lambda_{t_{1}}}{t_{1}}} \varepsilon_{1},\ldots,\sqrt{\frac{\Lambda_{t_{k}}}{t_{k}}} \varepsilon_{k}\right) \in (dx_{1},\ldots,dx_{k})\Bigg\vert {\mathcal{F}}^{\lambda}_{T}\right]\\ &=\prod_{i=1}^{k} {\mathbb{P}}\left[\sqrt{\frac{\Lambda_{t_{i}}}{t_{i}}} \varepsilon_{i} \in dx_{i}\Bigg\vert {\mathcal{F}}^{\lambda}_{T}\right]\\ &=\prod_{i=1}^{k} \left(\beta_{\varepsilon}\frac{\left(\sqrt{\frac{t_{i}}{\Lambda_{t_{i}}}}\alpha_{\varepsilon}\right)^{\beta_{\varepsilon}}}{x_{i}^{\beta_{\varepsilon}+1}}\right){\mathbf{1}}_{\left\{x_{i} \geq \sqrt{\frac{t_{i}}{\Lambda_{t_{i}}}} \alpha_{\varepsilon}\right\}}dx_{i}. \end{aligned}

The next step in computing the right-hand side of Relation (18) is to specify the joint law of $$(\varepsilon_{1},\vartheta_{1})$$.

Model on $$(\varepsilon_{i},\vartheta_{i})$$: We assume that $$(\varepsilon _{i},\vartheta _{i})_{i\in {\mathbb {N}}^{*}}$$ are i.i.d. random vectors whose marginal distributions are Pareto $$\mathcal {P}(\alpha _{\varepsilon },\beta _{\varepsilon })$$ and $$\mathcal {P}(\alpha _{\vartheta },\beta _{\vartheta })$$ (for parameters $$\alpha_{\varepsilon},\beta_{\varepsilon},\alpha_{\vartheta},\beta_{\vartheta}>0$$). The dependence structure is modeled through a Clayton copula with parameter θ>0. We recall that the Clayton copula is $$C(u,v):=\left (u^{-\theta }+v^{-\theta }-1\right)^{-\frac {1}{\theta }}$$ and that its density c is given by

$$c(u,v):= (1+\theta) (uv)^{-1-\theta}\left(u^{-\theta}+v^{-\theta}-1\right)^{-\frac{1}{\theta}-2}.$$

The joint distribution of (ε1,𝜗1) is then given by

$$\mu(dx, dy)= c\left(F_{\varepsilon}(x), F_{\vartheta}(y)\right) \psi_{\varepsilon}(x) \psi_{\vartheta}(y) dx dy,$$

with $$F_{\varepsilon }(z)=\left (1- \frac {\alpha _{\varepsilon }^{\beta _{\varepsilon }}}{z^{\beta _{\varepsilon }}}\right)$$ and $$F_{\vartheta }(z)=\left (1- \frac {\alpha _{\vartheta }^{\beta _{\vartheta }}}{z^{\beta _{\vartheta }}}\right)$$.
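For simulation purposes, a draw from μ can be obtained by conditional inversion of the Clayton copula followed by inversion of the Pareto marginals $$F_{\varepsilon}$$ and $$F_{\vartheta}$$. A minimal sketch; the parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                 # Clayton parameter; Kendall's tau = theta / (theta + 2) = 0.5
a_e, b_e = 1.0, 3.0         # Pareto parameters (alpha_eps, beta_eps), illustrative
a_v, b_v = 2.0, 4.0         # Pareto parameters (alpha_theta, beta_theta), illustrative
n = 20_000

# Step 1: (U, V) from the Clayton copula by conditional inversion: solving
# dC/du (v | u) = w for v gives V = (U^{-theta} (W^{-theta/(1+theta)} - 1) + 1)^{-1/theta}.
U = rng.uniform(size=n)
W = rng.uniform(size=n)
V = (U ** (-theta) * (W ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

# Step 2: Pareto marginals by inverting F(z) = 1 - (alpha / z)^beta.
eps = a_e * (1.0 - U) ** (-1.0 / b_e)
vth = a_v * (1.0 - V) ** (-1.0 / b_v)

# Sanity: Pareto(alpha, beta) has mean alpha * beta / (beta - 1) for beta > 1.
print(eps.mean(), a_e * b_e / (b_e - 1.0))
print(vth.mean(), a_v * b_v / (b_v - 1.0))
```

The conditional-inversion step uses only the copula itself, so the same sampler works for any choice of marginals.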

Joint law of $$(\lambda_{t},\Lambda_{t})$$: The final step in the computation of Relation (18) is to make precise the joint law of $$(\lambda_{t},\Lambda_{t})$$; namely, we need to compute

$${\mathbb{E}} \left[ g(t,\Lambda_{t},x, y) \, \lambda_{t} \, \varphi_{\lambda}\left(f(t,\Lambda_{t},x) e^{-\kappa (T-t)}\right) \right].$$

Assume that the intensity process $$(\lambda_{t})_{t\in[0,T]}$$ is given by

$$\lambda_{t}= \lambda_{0} \exp(2 \beta W_{t})$$

where W is a Brownian motion and β is a nonzero constant. Then the cumulated intensity is

$$\Lambda_{t}=\lambda_{0} \int_{0}^{t} \exp(2 \beta W_{s})ds.$$

By Borodin and Salminen (2002, page 169), the joint law of $$(\Lambda_{t},W_{t})$$ is given by

$${\mathbb{P}} \left(\Lambda_{t} \in dv, W_{t} \in dz \right) = \frac{\lambda_{0} |\beta|}{2v} \exp\left(- \frac{\lambda_{0} (1+ e^{2\beta z})}{2 \beta^{2} v}\right) i_{\beta^{2}t/2}\left(\frac{\lambda_{0} e^{\beta z}}{\beta^{2}v} \right) {\mathbf{1}}_{\{v>0\}} dv dz$$

where the function $$i_{y}$$ is defined by

$$i_{y}(z)=\frac{z e^{\frac{\pi^{2}}{4y}}}{\pi \sqrt{\pi y}} \int_{0}^{\infty} \exp\left(-z \cosh(x) -\frac{x^{2}}{4y} \right) \sinh(x) \sin\left(\frac{\pi x}{2y}\right) dx.$$

The expectation term on the right-hand side of (18) is then

$${\mathbb{E}} \left[ g(t,\Lambda_{t},x, y) \, \lambda_{t} \, \varphi_{\lambda}\left(f(t,\Lambda_{t},x) e^{-\kappa (T-t)}\right) \right]$$
\begin{aligned} &= \int_{\mathbb R^{2}} g(t,v,x, y)\, \varphi_{\lambda} \left(\sqrt{\frac{v}{t}}\,x\, e^{-\kappa (T-t)}\right) \frac{\lambda_{0}^{2} |\beta|}{2v} e^{2 \beta z} \\&\qquad\exp\left(- \frac{\lambda_{0} (1+ e^{2\beta z})}{2 \beta^{2} v}\right) i_{\beta^{2}t/2}\left(\frac{\lambda_{0} e^{\beta z}}{\beta^{2}v} \right) {\mathbf{1}}_{\{v>0\}} dv dz. \end{aligned}
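Alternatively to integrating against the closed-form density above, the pair $$(\lambda_{t},\Lambda_{t})$$ can be sampled directly by discretizing the Brownian path. Below is a minimal sketch with a left-point Riemann sum for Λ; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam0, beta, t = 1.0, 0.3, 2.0        # illustrative parameters
nsteps, npaths = 400, 20_000
dt = t / nsteps

# Sample Brownian paths and the intensity lambda_s = lam0 * exp(2 * beta * W_s).
dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((npaths, 1)), W[:, :-1]])   # left endpoint of each step
lam_path = lam0 * np.exp(2.0 * beta * W_left)

# Left-point Riemann sum for Lambda_t = lam0 * int_0^t exp(2 beta W_s) ds.
Lam_t = (lam_path * dt).sum(axis=1)
lam_t = lam0 * np.exp(2.0 * beta * W[:, -1])

# Sanity check: E[Lambda_t] = lam0 * (exp(2 beta^2 t) - 1) / (2 beta^2).
expected = lam0 * (np.exp(2.0 * beta ** 2 * t) - 1.0) / (2.0 * beta ** 2)
print(Lam_t.mean(), expected)
```

Each sampled pair `(lam_t, Lam_t)` can then be plugged into the expectation above in place of the density, at the cost of Monte Carlo and discretization error.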

### A Black–Scholes-type formula for generalized stop-loss contracts in the Cramer–Lundberg model

As an illustration, we conclude our analysis by specifying our result in the classic Cramer–Lundberg model. More precisely, we assume that the Cox process is a homogeneous Poisson process with constant intensity λ0>0, and we set $$h:=\mathbf{1}_{[K,M]}$$ with K<M. The building block reduces to the distribution function

$$\varphi_{\lambda_{0}}(x):=\varphi_{\lambda_{0}}^{h}(x)= {\mathbb{P}}\left[L_{T} \in [K-x,M-x] \right], \quad x\in {\mathbb{R}}_{+}.$$
(19)

In that case, we omit the dependence on Λ in the mappings f and g (as $$\Lambda_{t}=\lambda_{0} t$$).

### Corollary 4.1

Under the assumptions of Theorem 3.6, it holds that

\begin{aligned} & {\mathbb{E}}\left[\hat L_{T} {\mathbf{1}}_{\{L_{T}\in [K,M]\}}\right]=\lambda_{0} \int_{0}^{T} \int_{{\mathbb{R}}_{+}^{2}} e^{-\kappa (T-t)} g(t,x,y) \, \varphi_{\lambda_{0}}\left(f(t,x) e^{-\kappa (T-t)}\right) \mu(dx,dy)\,dt, \end{aligned}

(recall that $$\mu :=\mathcal {L}_{(\overline \varepsilon,\overline \vartheta)}$$).

If we assume, furthermore, that f(t,x)=g(t,x,y)=x and κ=0, then the loss process L T corresponds to the cumulated loss of the classic Cramer–Lundberg model. In this context, much of the literature deals with the computation of the ruin probability and related quantities, such as the discounted penalty function at ruin (the Gerber–Shiu function). Other papers are concerned with the pricing of stop-loss contracts. The pricing relies on the computation of a term of the form $$\int _{K}^{M} y dF(y)$$, with F the cumulative distribution function of the loss L T; the discussion in the literature mainly focuses on the derivation of the compound distribution function F, usually calculated recursively using the Panjer recursion formula and numerical approximations, cf. Panjer (1981) and Gerber (1982). Our Malliavin approach provides another formula, which reads as

$${\mathbb{E}}\left[\hat L_{T} {\mathbf{1}}_{\{L_{T}\in [K,M]\}}\right]=\lambda_{0} T \int_{{\mathbb{R}}_{+}} x \, (F(M-x)-F(K-x))\, \mu(dx).$$
(20)
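As a sanity check, Formula (20) can be verified by Monte Carlo simulation. The sketch below assumes Exp(1)-distributed claims (so that μ charges only the first coordinate) and estimates F empirically; the values of λ0, T, K, M are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam0, T, K, M = 1.5, 2.0, 1.0, 4.0   # illustrative parameters
n = 200_000

# Left-hand side: E[L_T 1_{L_T in [K, M]}] for a compound Poisson L_T with Exp(1) claims.
N = rng.poisson(lam0 * T, size=n)
claims = rng.exponential(1.0, N.sum())
csum = np.concatenate(([0.0], np.cumsum(claims)))
ends = np.cumsum(N)
L = csum[ends] - csum[ends - N]
lhs = np.mean(L * ((L >= K) & (L <= M)))

# Right-hand side of (20): lam0 * T * E[X (F(M - X) - F(K - X))] with X ~ mu and
# F the cdf of L_T, estimated here by the empirical cdf of the simulated losses.
Ls = np.sort(L)
F = lambda y: np.searchsorted(Ls, y, side="right") / n
X = rng.exponential(1.0, n)
rhs = lam0 * T * np.mean(X * (F(M - X) - F(K - X)))
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```

Note that the right-hand side only requires the distribution function F (the building block), in line with Remark 3.7.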

Note that our result coincides with the one obtained in Gerber (1982). Indeed, translating Formula (6) of Gerber (1982) into our setting (in Gerber (1982), μ is constrained to have finite support in $$\mathbb {N}$$), the distribution F satisfies

$$y dF(y)= \lambda_{0} T \int_{{\mathbb{R}}_{+}} x dF(y-x) \mu(dx),$$

from which one deduces that

\begin{aligned} \int_{K}^{M} y dF(y) &= \lambda_{0} T \int_{K}^{M} \int_{{\mathbb{R}}_{+}} x \mu(dx) dF(y-x) \\ &= \lambda_{0} T \int_{{\mathbb{R}}_{+}} x \int_{K}^{M} dF(y-x) \mu(dx)\\ &= \lambda_{0} T \int_{{\mathbb{R}}_{+}} x (F(M-x)-F(K-x)) \mu(dx), \end{aligned}

which is exactly (20).
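For comparison with the recursive route, the Panjer recursion mentioned above can be sketched for the Poisson frequency case. The claim distribution below is an illustrative pmf on {1,2,3}, not one taken from the paper:

```python
import numpy as np

def panjer_poisson(lam, f, kmax):
    """Pmf of a compound Poisson sum with frequency Poisson(lam) and integer-valued
    claim pmf f (f[j] = P(X = j)), via Panjer's recursion for the (a, b, 0) class:
    g_k = (lam / k) * sum_j j * f_j * g_{k-j}, with g_0 = exp(-lam * (1 - f_0))."""
    g = np.zeros(kmax + 1)
    g[0] = np.exp(-lam * (1.0 - f[0]))
    for k in range(1, kmax + 1):
        j = np.arange(1, min(k, len(f) - 1) + 1)
        g[k] = (lam / k) * np.sum(j * f[j] * g[k - j])
    return g

lam = 2.0                           # illustrative value of lam0 * T
f = np.array([0.0, 0.5, 0.3, 0.2])  # illustrative claim pmf on {1, 2, 3}
g = panjer_poisson(lam, f, kmax=60)
print(g.sum())                                   # ~1 (mass beyond kmax is negligible)
print((np.arange(61) * g).sum(), lam * 1.7)      # compound mean = lam * E[X]
```

The cumulative sums of `g` give the distribution function F appearing in (20) for arithmetic claim distributions.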

## Conclusion

This paper provides an original and efficient formula for the pricing of stop-loss contracts. The formula is efficient in that the computation is easy once the building block has been calculated. It allows one to handle general dependence frameworks that have not been studied in the literature, in particular between the claims and the intensity process. Note that for the standard Cramer–Lundberg model, our formula coincides with the evaluation formula of Gerber (1982). To obtain this formula, we rigorously modeled the dependence structure of a doubly stochastic Poisson process through a time change, and used the Malliavin calculus machinery.

1. By a slight abuse of notation, $${\mathbb {E}}\left [\cdot \middle \vert {\mathcal {F}}_{T}^{\lambda }\right ] := {\mathbb {E}}\left [\cdot \middle \vert {\mathcal {F}}_{0}^{{\mathbb {C}}}\otimes {\mathcal {F}}_{T}^{\lambda }\right ]$$ and $${\mathbb {E}}\left [\cdot \middle \vert {\mathcal {F}}_{T}^{\lambda } \vee {\mathcal {F}}^{\varepsilon,\vartheta }\right ] := {\mathbb {E}}\left [\cdot \middle \vert {\mathcal {F}}_{0}^{{\mathbb {C}}}\otimes \left ({\mathcal {F}}_{T}^{\lambda }\vee {\mathcal {F}}^{\varepsilon,\vartheta }\right)\right ]$$.

## References

1. Albers, W: Stop-loss premiums under dependence. Insur. Math. Econ. 24(3), 173–185 (1999).

2. Albrecher, H, Boxma, O: A ruin model with dependence between claim sizes and claim intervals. Insur. Math. Econ. 35(2), 245–254 (2004).

3. Albrecher, H, Constantinescu, C, Loisel, S: Explicit ruin formulas for models with dependence among risks. Insur. Math. Econ. 48(2), 265–270 (2011).

4. Bakshi, G, Madan, D, Zhang, F: Understanding the role of recovery in default risk models: empirical comparisons and implied recovery rates (2006). FDIC Center for Financial Research Working Paper No. 2006-06. https://doi.org/10.2139/ssrn.285940.

5. Borodin, A, Salminen, P: Handbook of Brownian motion: facts and formulae. 2nd edn. Probability and its Applications. Birkhäuser Verlag, Basel (2002).

6. Boudreault, M, Cossette, H, Landriault, D, Marceau, E: On a risk model with dependence between interclaim arrivals and claim sizes. Scand. Actuar. J. 2006(5), 265–285 (2006).

7. de Lourdes Centeno, M: Dependent risks and excess of loss reinsurance. Insur. Math. Econ. 37(2), 229–238 (2005).

8. Denuit, M, Dhaene, J, Ribas, C: Does positive dependence between individual risks increase stop-loss premiums? Insur. Math. Econ. 28(3), 305–308 (2001).

9. Föllmer, H, Schied, A: Stochastic finance: an introduction in discrete time. Extended edn. Walter de Gruyter & Co., Berlin (2011).

10. Gerber, HU: On the numerical evaluation of the distribution of aggregate claims and its stop-loss premiums. Insur. Math. Econ. 1(1), 13–18 (1982).

11. Panjer, HH: Recursive evaluation of a family of compound distributions. ASTIN Bull. J. IAA. 12(1), 22–26 (1981).

12. Picard, J: Formules de dualité sur l'espace de Poisson. Ann. Inst. H. Poincaré Probab. Statist. 32(4), 509–548 (1996a).

13. Picard, J: On the existence of smooth densities for jump processes. Probab. Theory Related Fields. 105(4), 481–511 (1996b).

14. Privault, N: Stochastic analysis in discrete and continuous settings with normal martingales, volume 1982 of Lecture Notes in Mathematics. Springer-Verlag, Berlin (2009).

## Acknowledgments

The authors thank the anonymous referees and the Associate Editor for comments and suggestions that have allowed us to improve the paper. The authors acknowledge the Projet PEPS égalité (part of the European project INTEGER-WP4) "Approximation de Stein : approche par calcul de Malliavin et applications à la gestion des risques financiers" for financial support.

## Author information


### Contributions

The three authors have contributed equally to obtaining the results and to the elaboration of the paper. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Anthony Réveillac.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.
