Information uncertainty related to marked random times and optimal investment

We study an optimal investment problem under default risk where related information such as loss or recovery at default is considered as an exogenous random mark added at default time. Two types of agents who have different levels of information are considered. We first make precise the insider's information flow by using the theory of enlargement of filtrations and then obtain explicit logarithmic utility maximization results to compare optimal wealth for the insider and the ordinary agent. MSC: 60G20, 91G40, 93E20


Introduction
The optimization problem in presence of uncertainty on a random time is an important subject in finance and insurance, notably for risk and asset management when it concerns a default event or a catastrophe occurrence. Another related source of risk is the information associated to the random time concerning resulting payments, the price impact, the loss given default or the recovery rate etc. Measuring these random quantities is in general difficult since the relevant information on the underlying firm is often not accessible to investors on the market. For example, in the credit risk analysis, modelling the recovery rate is a subtle task (see e.g. Duffie and Singleton [12, Section 6], Bakshi et al. [4] and Guo et al. [16]). In this paper, we study the optimal investment problem with a random time and consider the information revealed at the random time as an exogenous factor of risk. We suppose that all investors on the market can observe the arrival of the random time such as the occurrence of a default event. However, for the associated information such as the recovery rate, there are two types of investors: the first one is an informed insider and the second one is an ordinary investor. For example, the insider has private information on the loss or recovery value of a distressed firm at the default time and the ordinary investor has to wait for the legitimate procedure to be finished to know the result. Both investors aim at maximizing the expected utility on the terminal wealth and each of them will determine the investment strategy based on the corresponding information set. Following Amendinger et al. [2,3], we will compare the optimization results and deduce the additional gain of the insider.
Let the financial market be described by a probability space (Ω, A, P) equipped with a reference filtration F = (F t ) t≥0 which satisfies the usual conditions. In the literature, the theory of enlargements of filtrations provides essential tools for the modelling of different information flows. In general, the observation of a random time, in particular a default time, is modelled by the progressive enlargement of filtration, as proposed by Elliott et al. [13] and Bielecki and Rutkowski [5]. The knowledge of insider information is usually studied by using the initial enlargement of filtration as in [2,3] and Grorud and Pontier [15]. In this paper, we suppose that the filtration F represents the market information known by all investors including the default information. Let τ be an F-stopping time which represents the default time. The information flow associated to τ is modelled by a random variable G on (Ω, A) valued in a measurable space (E, E). In the classic setting of insider information, G is added to F at the initial time t = 0, while in our model, the information is added punctually at the random time τ . Therefore, we need to specify the corresponding filtration which is a mixture of the initial and the progressive enlargements. Let the insider's filtration G = (G t ) t≥0 be a punctual enlargement of F by adding the information of G at the random time τ . In other words, G is the smallest filtration which contains F and such that the random variable G is G τ -measurable. We shall make precise the adapted and predictable processes in the filtration G in order to describe investment strategy and wealth processes. As usual, we suppose the density hypothesis of Jacod [17] that the F-conditional law of G admits a density with respect to its probability law. By adapting arguments in Föllmer and Imkeller [14] and in [15], we deduce the insider martingale measure Q which plays an important role in the study of (semi)martingale processes in the filtration G. 
We give the decomposition formula of an F-martingale as a semimartingale in G, which shows that Jacod's (H')-hypothesis holds and allows us to characterize the G-portfolio wealth processes as in [3].
In the optimization problem with random default times, it is often supposed that the random time satisfies the intensity hypothesis (e.g. Lim and Quenez [24] and Kharroubi et al. [23]) or the density hypothesis (e.g. Blanchet-Scalliet et al. [6], Jeanblanc et al. [19] and Jiao et al. [21]), so that it is a totally inaccessible stopping time in the market filtration. In particular, in [21], we consider marked random times where the random mark represents the loss at default, and we suppose that the couple of default time and mark admits a conditional density. In the current paper, the random time τ we consider need satisfy neither the intensity nor the density hypothesis: it is a general F-stopping time and may also contain a predictable part. We obtain the optimal strategy and wealth for the two types of investors with a logarithmic utility function and deduce the additional gain due to the extra information. As a concrete case, we consider a hybrid default model similar to that of Campi et al. [8], where the filtration F is generated by a Brownian motion and a Poisson process, and the default time is the minimum of two random times: the first hitting time of a Brownian diffusion and the first jump time of the Poisson process. In this model we compute the additional expected logarithmic utility of wealth.
The rest of the paper is organized as follows. In Section 2 we model the filtration which represents the default time together with the random mark and study its theoretical properties. Section 3 focuses on the logarithmic utility optimization problem for the insider and compares the result with that of the ordinary investor. In Section 4, we present the optimization results for an explicit hybrid default model. Section 5 concludes the paper.

Model framework
In this section, we present our model setup. In particular, we study the enlarged filtration including the random mark which is a mixture of the initial and the progressive enlargements of filtrations.

The enlarged filtration and martingale processes
Let (Ω, A, P) be a probability space equipped with a filtration F = (F t ) t≥0 which satisfies the usual conditions, and let τ be an F-stopping time. Let G be a random variable valued in a measurable space (E, E) and let G = (G t ) t≥0 be the smallest filtration containing F such that G is G τ -measurable. By definition, one has (2.1). In particular, similarly to Jeulin [20] (see also Callegaro et al. [7]), a stochastic process Z is G-adapted if and only if it can be written in the form where Y is an F-adapted process and Y (·) is an F ⊗ E-adapted process on Ω × E, where F ⊗ E denotes the filtration (F t ⊗ E) t≥0 . The following proposition characterizes the G-predictable processes: they admit an analogous decomposition in which Y is an F-predictable process and Y (·) is a P(F) ⊗ E-measurable function. The proof combines the techniques of [20].
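The decomposition alluded to above can be made explicit; the following display is a reconstruction of the omitted formula, consistent with the form used later in the proof of Proposition 2.8:

```latex
% A process Z is G-adapted if and only if it splits at \tau as
Z_t \;=\; \mathbf{1}_{\{\tau > t\}}\, Y_t \;+\; \mathbf{1}_{\{\tau \le t\}}\, Y_t(G),
\qquad t \ge 0,
% with Y an F-adapted process and Y(\cdot) an F \otimes E-adapted process.
```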
We study the martingale processes in the filtrations F and G. One basic martingale in F is related to the random time τ . Let D = (1l {τ ≤t} , t ≥ 0) be the indicator process of the F-stopping time τ . Recall that the F-compensator of τ is the F-predictable increasing process Λ such that N := D − Λ is an F-martingale. In particular, if τ is a predictable F-stopping time, then Λ coincides with D.
To study G-martingales, we assume the following hypothesis for the random variable G with respect to the filtration F (c.f. [15] in the initial enlargement setting, see also [17] for comparison).
Assumption 2.2 For any t ≥ 0, the F t -conditional law of G is equivalent to the probability law η of G, i.e., P(G ∈ ·|F t ) ∼ η(·) a.s. Moreover, we denote by p t (·) the corresponding conditional density. As pointed out in [17, Lemma 1.8], we can choose a version of the conditional density p(·) such that p t (·) is F t ⊗ E-measurable for any t ≥ 0 and (p t (x), t ≥ 0) is a positive càdlàg (F, P)-martingale for any x ∈ E. In the following we fix such a version of the conditional density.
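In symbols, Assumption 2.2 states (a reconstruction of the omitted display, following Jacod [17]):

```latex
\mathbb{P}(G \in \mathrm{d}x \mid \mathcal{F}_t) \;=\; p_t(x)\,\eta(\mathrm{d}x),
\qquad t \ge 0,\ x \in E .
```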

Remark 2.3
We assume the hypothesis of Jacod, which is widely adopted in the study of initial and progressive enlargements of filtrations. Compared to the standard initial enlargement of F by G, the information of the random variable G is added at the random time τ rather than at the initial time; compared to the progressive enlargement, the random variable added here is the associated information G instead of the random time τ . In particular, the behavior of G-martingales is quite different from the classic settings and is worth examining in detail. As in [14] and [15], we introduce the insider martingale measure Q, which will be useful in the sequel.

Proposition 2.4
There exists a unique probability measure Q on F ∞ ∨ σ(G) which verifies the following conditions: (1) the probability measures Q and P are equivalent; (2) Q identifies with P on F and on σ(G); (3) G is independent of F under the probability Q.
Moreover, the Radon-Nikodym density of Q with respect to P on G t is given by Proof. Let Q be defined by Since (F t ∨ σ(G)) t≥0 is the initial enlargement of F by G, we obtain by [14,15] that Q is the unique equivalent probability measure on F ∞ ∨ σ(G) which satisfies conditions (1)–(3). Moreover, the Radon-Nikodym density dQ/dP on G t is given by which leads to (2.5).
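The omitted formula (2.5) can be recovered from the second proof of Theorem 2.10, which states that the derivative of P with respect to Q on G t equals 1l {τ >t} + 1l {τ ≤t} p t (G); accordingly, (2.5) should read:

```latex
\left.\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\right|_{\mathcal{G}_t}
\;=\; \mathbf{1}_{\{\tau > t\}} \;+\; \mathbf{1}_{\{\tau \le t\}}\,\frac{1}{p_t(G)} .
```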
The following proposition shows that the filtration G also satisfies the usual conditions under the F-density hypothesis on the random variable G. The idea follows [1, Proposition 3.3].

Proposition 2.5 Under Assumption 2.2, the enlarged filtration G is right continuous.
Proof. The statement does not involve the underlying probability measure. Hence we may assume without loss of generality (by Proposition 2.4) that G is independent of F under the probability P. Let t ≥ 0 and ε > 0. Let X t+ε be a bounded G t+ε -measurable random variable. We write it in the form where Y t+ε and Y t+ε (·) are respectively bounded F t+ε -measurable and F t+ε ⊗ E-measurable functions. Then for δ ∈ (0, ε), by the independence between G and F one has where η is the probability law of G. Since the filtration F satisfies the usual conditions, any F-martingale admits a càdlàg version. Therefore, taking a suitable version of the conditional expectations, if X is a bounded G t+ := ∩ ε>0 G t+ε -measurable random variable, then one has Under the probability measure Q, the random variable G is independent of F. This observation leads to the following characterization of (G, Q)-(local) martingales. In the particular case where τ = 0, we recover the classic result on initial enlargement of filtrations.
Proposition 2.6 Let Y be an F-adapted process and Y (·) be an F ⊗ E-adapted process, and let Z be the G-adapted process Z t = 1l {τ >t} Y t + 1l {τ ≤t} Y t (G). Suppose that (1) Y (x) is an (F, P)-square-integrable martingale for any x ∈ E (resp. an (F, P)-locally square-integrable martingale with a common localizing stopping time sequence independent of x), (2) the process is well defined and is an (F, P)-martingale (resp. an (F, P)-local martingale).
Then the process Z is a (G, Q)-martingale (resp. a (G, Q)-local martingale).
Proof. We can reduce the local martingale case to the martingale case by taking a sequence of F-stopping times which localizes the processes appearing in conditions (1) and (2). Therefore, we only treat the martingale case. Note that N and Y (x) are square integrable. For t ≥ s ≥ 0, one has where the second equality comes from the fact that G is independent of F under the probability Q and that η coincides with the Q-probability law of G, and the third equality comes from the fact that the probability measures P and Q coincide on the filtration F. Since Y (x) is an (F, P)-martingale, one has where the last equality comes from the fact that Λ is an integrable increasing process which is F-predictable (see [11, VI.61]). Therefore, by (2.6) we obtain The proposition is thus proved.
if the following conditions are fulfilled: (1) the process is an (F, P)-square integrable martingale (resp. an (F, P)-locally square integrable martingale with a common localizing stopping time sequence); (2) the process is an (F, P)-martingale (resp. a local (F, P)-martingale).
Proof. By Proposition 2.4, Z is a (G, P)-(local)-martingale if and only if the process Z(1l Therefore the assertion results from Proposition 2.6.
and that the following conditions are fulfilled: (2) the process is well defined and is an (F, P)-martingale.
Proof. Since Z T is a G T -measurable random variable, we can write it in the form We then let, for t ∈ [0, T ], Then Y is an (F, P)-martingale. For any t ∈ [0, T ], we let Y t be an F t -measurable random variable such that This is always possible since where the second equality is obtained by an argument similar to (2.7) and (2.8). We finally show that Z t = 1l {τ >t} Y t + 1l {τ ≤t} Y t (G) P-a.s. for any t ∈ [0, T ]. Note that we already have the equality at the terminal time T . Therefore it remains to prove that the G-adapted process is a (G, P)-martingale. This follows from the construction of the processes Y , Y (·) and Corollary 2.7.
Remark 2.9 (1) We observe from the proof of the previous proposition that, if Z is a (G, P)-martingale on [0, T ] (without boundedness hypothesis) such that Z T can be written in the form (2.9) with Y T (x)p T (x) ∈ L 2 (Ω, F T , P) for any x ∈ E, then we can construct the F ⊗ E-adapted process Y (·) by using the relation (2.10). Note that for any x ∈ E, the process Y (x)p(x) is a square-integrable (F, P)-martingale. Therefore, the result of Proposition 2.8 remains true provided that the conditional expectation in (2.11) is well defined.
(2) Let Z be a (G, P)-martingale on [0, T ]. In general, the decomposition of Z into the form with Y being F-adapted and Y (·) being F ⊗ E-adapted is not unique: another F-adapted process Y ′ and another F ⊗ E-adapted process Y ′ (·) may give rise to the same process Z. Moreover, although the proof of Proposition 2.8 provides an explicit way to construct the decomposition of the (G, P)-martingale Z which satisfies the two conditions, in general such a decomposition is not unique either.
(3) Concerning the local martingale analogue of Proposition 2.8, the main difficulty is that a local (G, P)-martingale need not be localized by a sequence of F-stopping times. To solve this problem, it is crucial to understand the G-stopping times and their relation with F-stopping times.

(H')-hypothesis and semimartingale decomposition
In this subsection, we prove that under Assumption 2.2, the (H')-hypothesis (see [17]) is satisfied and we give the semimartingale decomposition of an F-martingale in G.
Theorem 2.10 We suppose that Assumption 2.2 holds. Let M be an (F, P)-locally square integrable martingale; then it is a (G, P)-semimartingale. Moreover, the process (2.14) is a (G, P)-local martingale. We present two proofs of Theorem 2.10. The first one relies on the following lemma, which computes the (G, Q)-predictable bracket of an (F, P)-local martingale with a general (F, P)-local martingale. This approach is more computational, but Lemma 2.11 has its own interest, in particular for the study of G-adapted processes. The second proof is more conceptual and relies on a classic result of Jacod [17] on initial enlargement of filtrations, under an additional (positivity or integrability) assumption on the process M .
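The display of Theorem 2.10 was lost in extraction; the following is a reconstruction consistent with Jacod's formula as used in the second proof, where the bracket with p(x) vanishes on [[0, τ ]]:

```latex
\widetilde{M}_t \;=\; M_t \;-\; \int_{\tau \wedge t}^{t}
\frac{\mathrm{d}\langle M,\, p(x)\rangle^{\mathbb{F},\mathbb{P}}_s}{p_{s-}(x)}
\bigg|_{x = G}
\qquad \text{is a } (\mathbb{G},\mathbb{P})\text{-local martingale.}
```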

Lemma 2.11 Let Y be an F-adapted process and Y (·) be an F ⊗ E-adapted process such that (as in Proposition 2.6) (1) Y (x) is an (F, P)-locally square integrable martingale for any x ∈ E,
(2) the process is well defined and of finite variation, and Y = 1l Proof. It follows from Proposition 2.6 that Z is a (G, Q)-martingale. In the following, we establish the equality (2.12).
We first establish the formula with the bracket processes evaluated at x = G. In the second step, we assume that M is stopped at τ . In this case one has It is a local (F, P)-martingale. Moreover, since M is stopped at τ , so is ⟨M, Y (x)⟩ F,P .

In addition, by definition
where M − · H and H − · M denote respectively the integral processes Since H is a predictable process of finite variation and M is an F-martingale, the process [M, H] is a local F-martingale (see [18] Chapter I, Proposition 4.49). In particular, This is an F-predictable process, and hence is G-predictable. Moreover, this process is stopped at τ . Let V be the (F, P)-martingale defined as where D = (1l {τ ≤t} , t ≥ 0) = N + Λ. In particular, is a local (F, P)-martingale. Therefore, one has which is a local (F, P)-martingale.
We write the process M Z − A in the form where the last equality comes from (2.13). We have seen that U (x)−V is a local (F, P)-martingale for any x ∈ E. Hence by Proposition 2.6 we obtain that M Z − A is a local (G, Q)-martingale.
In the final step, we consider the general case. We decompose the (F, P)-martingale into the sum of two parts M τ and M − M τ , where M τ is an (F, P)-martingale stopped at τ , and M − M τ is an (F, P)-martingale which vanishes on [[0, τ ]]. Combining the results obtained in the two previous steps, we obtain the formula (2.12).
Proof of Theorem 2.10. Since P and Q coincide on F, we obtain that M is an (F, Q)-martingale. Moreover, since G is independent of F under the probability Q, M is also a (G, Q)-martingale.
We keep the notation of Lemma 2.11 and specify the terms in the situation of the theorem. Note that the Radon-Nikodym derivative of P with respect to Q on G t equals Z t := 1l {τ >t} + 1l {τ ≤t} p t (G).
In particular, with the notation of the lemma, one has Moreover, one has Therefore, by Lemma 2.11 one has Finally, since M is a (G, Q)-local martingale, by Girsanov's theorem (cf. [18] Chapter III, Theorem 3.11), the process is a local (G, P)-martingale. The theorem is thus proved.
Second proof of Theorem 2.10. Let H = F ∨ σ(G) be the initial enlargement of the filtration F by σ(G). Clearly the filtration H is larger than G. More precisely, the filtration G coincides with F before the stopping time τ , and coincides with H after the stopping time τ . We first observe that the stopped process at τ of an (F, P)-martingale L is a (G, P)-martingale. In fact, for t ≥ s ≥ 0 one has We remark that, as shown by Jeulin's formula, this result holds more generally for any enlargement G which coincides with F before a random time τ .
We now consider the decomposition of M as M = M τ + (M − M τ ), where M τ is the stopped process of M at τ . Since G coincides with F before τ , we obtain by the above argument that M τ is a (G, P)-local martingale. Consider now the process Y := M − M τ , which begins at τ . It is also an (F, P)-local martingale. By Jacod's decomposition formula (see [17, Theorem 2.1]), the process is an (H, P)-local martingale. Note that the predictable quadratic variation process ⟨Y, p(x)⟩ F,P vanishes on [[0, τ ]] since the process Y begins at τ . Hence This observation also shows that the process (2.14) is G-adapted. Hence it is a (G, P)-local martingale under the supplementary assumption that Y is positive or ∥Y ∥ 1 < +∞, by Stricker [25, Theorem 1.2], where ∥Y ∥ 1 is defined as the supremum of ∥Y σ ∥ L 1 with σ running over all finite G-stopping times. Note that the condition ∥Y ∥ 1 < +∞ is satisfied if and only if the process Y is a (G, P)-quasimartingale (see [22]).

Remark 2.12
Even if the second proof of Theorem 2.10 needs an additional assumption on the positivity or integrability of the process M − M τ , it remains interesting since it allows us to weaken Assumption 2.2. Indeed, to apply Jacod's decomposition formula we only need to assume that the conditional law P(G ∈ ·|F t ) is absolutely continuous w.r.t. P(G ∈ ·).

Logarithmic utility maximization
In this section, we study the optimization problem for two types of investors: an insider and an ordinary agent. We consider a financial market composed of d stocks with discounted prices given by the d-dimensional process X = (X 1 , . . . , X d ) ⊤ . This process is observed by both agents and is F-adapted. We suppose that each X i , i = 1, . . . , d, evolves according to the following stochastic differential equation with X i 0 a positive constant, M i an F-locally square integrable martingale and α a P(F)-measurable process valued in R d such that The ordinary agent has access to the information flow given by the filtration F, while the information flow of the insider is represented by the filtration G. The optimization problem for the ordinary agent is standard. For the insider, we follow [2,3] to solve the problem. We first describe the insider's portfolio in the enlarged filtration G. Recall that under Assumption 2.2, the process M is a G-semimartingale with canonical decomposition given by Theorem 2.10: where M̂ is a G-local martingale and M τ is the stopped process (M t∧τ ) t≥0 .
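The display of the price dynamics was lost in extraction; a plausible reconstruction, consistent with the optimal strategies α and α + 1l {τ ≤·} µ(G) obtained later, is the following, in which α plays the role of a market price of risk:

```latex
\mathrm{d}X^i_t \;=\; X^i_{t-}\Big( \mathrm{d}M^i_t
+ \sum_{j=1}^{d} \alpha^j_t \,\mathrm{d}\langle M^i, M^j\rangle_t \Big),
\qquad X^i_0 > 0, \quad i = 1, \dots, d .
```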
Applying Theorem 2.5 of [17] to the F-locally square integrable martingale M − M τ , we have the following result.
for all x ∈ E and all t ≥ 0.
We now rewrite the integral of m w.r.t. M − M τ .

Lemma 3.2 Under Assumption 2.1, there exists a P(F) ⊗ E-measurable process µ valued in
Proof. The proof is the same as that of Lemma 2.8 in [3]. We therefore omit it.
We can then rewrite the process M in (3.2) in the following way (3.3) and the dynamics of the process X can be expressed with the G-local martingale M̂ as follows where Diag(X t− ) stands for the d × d diagonal matrix whose i-th diagonal term is X i t− for i = 1, . . . , d. We then introduce the following integrability assumption.
Denote by H ∈ {F, G} the underlying filtration. We define an H-portfolio as a couple (x, π), where x is a constant representing the initial wealth and π is an R d -valued P(H)-measurable process such that ∫ 0 T π t ⊤ d⟨M ⟩ t π t < ∞, P-a.s.
Here π i t represents the proportion of discounted wealth invested at time t in the asset X i . For such an H-portfolio, we define the associated discounted wealth process V (x, π) by By the condition (3.4), the wealth process is positive. We suppose that the agents' preferences are described by the logarithmic utility function. For a given initial capital x, we define the set of admissible H-portfolio processes by For an initial capital x, we then consider the two optimization problems:
• the ordinary agent's problem consists in computing
• the insider's problem consists in computing
To solve these problems, we introduce the minimal martingale density processes Ẑ F and Ẑ G defined by where E (·) denotes the Doléans-Dade exponential. We first have the following result.
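The omitted wealth dynamics presumably take the standard self-financing form for proportions π (a hedged reconstruction, consistent with the positivity claim after (3.4)):

```latex
V_t(x, \pi) \;=\; x\, \mathcal{E}\Big( \int_0^{\cdot} \pi_s^{\top}\,
\mathrm{Diag}(X_{s-})^{-1}\, \mathrm{d}X_s \Big)_t ,
\qquad t \in [0, T],
% with \mathcal{E}(\cdot) the Doleans-Dade exponential, so that V(x,\pi) > 0.
```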
Proposition 3.4 (i) The processes Ẑ F X and Ẑ F V (x, π) are F-local martingales for any portfolio (x, π) such that π ∈ A F (x).
(ii) The processes Ẑ G X and Ẑ G V (x, π) are G-local martingales for any portfolio (x, π) such that π ∈ A G (x).
Proof. We only prove assertion (ii); the same arguments apply to (i) by taking µ(G) ≡ 0. From Itô's formula we have From the dynamics of Ẑ G and X we have Therefore we get which shows that Ẑ G X is a G-local martingale.
We are now able to compute V F and V G and provide optimal strategies. Theorem 3.5 (i) An optimal strategy for the ordinary agent is given by and the maximal expected logarithmic utility is given by (ii) An optimal strategy for the insider is given by and the maximal expected logarithmic utility is given by

(iii) The insider's additional expected utility is given by
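The displays of Theorem 3.5 were lost in extraction; from the explicit computation for the hybrid model in Section 4, the optimal proportions take the following form (a reconstruction, in which the insider corrects the market strategy by the term µ(G) after τ ):

```latex
\hat{\pi}_t = \alpha_t \quad (\text{ordinary agent}), \qquad
\pi^{\mathrm{ins}}_t = \alpha_t + \mathbf{1}_{\{\tau \le t\}}\, \mu_t(G)
\quad (\text{insider}), \qquad t \in [0, T] .
```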
Proof. We do not prove (i) since it relies on the same arguments as for (ii), with µ(G) ≡ 0 and Ẑ F in place of Ẑ G .
(ii) We recall that for a C 1 concave function u such that its derivative u ′ admits an inverse function I, we have for all a, b ∈ R. Applying this inequality with u = log, a = V T (x, π) for π ∈ A G (x) and b = yẐ G T for some constant y > 0, we get Since V (x, π) is a non-negative process and Ẑ G V (x, π) is a G-local martingale, it is a G-supermartingale. Therefore, we get Since this inequality holds for any π ∈ A G (x), we obtain by taking y = 1 Moreover, we have From (3.1) and Assumption 3.3, we get π ins ∈ A log G (x). Therefore π ins is an optimal strategy for the insider's problem.
Using (3.1), we get that ∫ 0 · α ⊤ dM and ∫ 0 · α ⊤ dM̂ are respectively F- and G-martingales. Therefore we have which gives the desired formula.
(iii) The result is a consequence of (i) and (ii).
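As a numerical sanity check of the logarithmic-utility argument above, consider the simplest degenerate special case: no default information and a single stock with Black-Scholes dynamics of drift µ and volatility σ (hypothetical toy parameters, not part of the paper's general semimartingale model). There the log-optimal constant proportion is the Merton ratio µ/σ², which a short Monte-Carlo sketch can confirm:

```python
import numpy as np

# Toy Black-Scholes market (hypothetical parameters, not from the paper):
# dX_t / X_t = mu dt + sigma dW_t.
# For logarithmic utility, the optimal constant proportion invested in the
# stock is the Merton ratio mu / sigma^2 (here 0.08 / 0.04 = 2.0).
mu, sigma, T, n_steps, n_paths = 0.08, 0.2, 1.0, 200, 20_000
rng = np.random.default_rng(0)
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

def expected_log_utility(pi: float, x0: float = 1.0) -> float:
    """Monte-Carlo estimate of E[log V_T] for the constant strategy pi.

    For a constant proportion pi, Ito's formula gives
    log V_T = log x0 + (pi*mu - 0.5*pi**2*sigma**2)*T + pi*sigma*W_T.
    """
    drift = (pi * mu - 0.5 * pi**2 * sigma**2) * T
    noise = pi * sigma * dW.sum(axis=1)  # pi * sigma * W_T, path by path
    return float(np.mean(np.log(x0) + drift + noise))

# Scan a grid of constant proportions and locate the empirical maximizer.
grid = np.linspace(0.0, 4.0, 81)
pi_star = grid[int(np.argmax([expected_log_utility(p) for p in grid]))]
print(pi_star, mu / sigma**2)  # empirical argmax vs. the Merton ratio
```

The check mirrors the structure of Theorem 3.5 in this trivial case: the optimal proportion is the market-price-of-risk term alone, and the insider's correction µ(G) is absent because there is no mark G here.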

Example of a hybrid model
with for all t ≥ 0. We can then compute the terms m 1 and m 2 appearing in Lemma 3.1, and we get and for t ≥ 0. Since the matrix ⟨M ⟩ is diagonal, the process µ given by Lemma 3.2 can be taken such that µ = (m 1 , m 2 ) ⊤ . We easily check that Assumption 3.3 is satisfied. We can then apply Theorem 3.5 to the optimization problem with maturity T and we get:
• an optimal strategy for the ordinary agent given by
• an optimal strategy for the insider given by π ins t = α t + 1l {τ ≤t} µ t (G), t ∈ [0, T ], and the maximal expected logarithmic utility
• the insider's additional expected utility

In the following, we proceed with the proof of the "only if" part. Let Z be a G-predictable process. We first show that the process Z1l ]]0,τ ]] is an F-predictable process. Again by a monotone class argument, we may assume that Z is left continuous. In this case the process Z1l ]]0,τ ]] is also left continuous. Moreover, by the left continuity of Z one has Z t 1l {τ ≥t} = lim ε→0+ Z t−ε 1l {τ >t−ε} , t > 0.
For the study of the process Z on ]]τ, +∞[[ , we use the following characterization of the predictable σ-algebra P(G). The σ-algebra P(G) is generated by the sets of the form B × [0, +∞) with B ∈ G 0 and the sets of the form B ′ × [s, s ′ ) with 0 < s < s ′ < +∞ and B ′ ∈ G s− := ∨ 0≤u<s G u . It suffices to show that, if Z is the indicator function of such a set, then 1l ]]τ,+∞[[ Z can be written as 1l ]]τ,+∞[[ Y (G) with Y (·) being a P(F) ⊗ E-measurable function.