A brief history of quantitative finance

A ship in harbour is safe, but that is not what ships are built for

John A. Shedd

Abstract

In this introductory paper to the issue, I will travel through the history of how quantitative finance has developed and reached its current status, what problems it is called to address, and how they differ from those of the pre-crisis world.

Background

I take the privileged vantage point of being the quantitative finance editor of Risk magazine and risk.net, responsible for the publication of peer-reviewed papers in their Cutting Edge section. Having been a member of the team since 2007, I have witnessed the impact the credit crisis had on the industry and the practice of derivatives pricing. What started as a localised crisis in the US mortgage market, first signalled in 2007, became a full-blown credit crisis and liquidity crisis for the industry, even spilling into a sovereign crisis in some countries.

The following charts cover all papers submitted to Risk from 2007 to 2016 (including those not published), divided by category, although it is often difficult to assign a single category to a research paper. On average, Risk receives a hundred papers per year. These represent only a small subset of the literature on quantitative finance, but they help give a sense of current research activity in a given asset class or subject. It will not come as a surprise, for example, that research in credit derivatives has declined sharply over time and that research on valuation adjustments has taken centre stage. Developments in other fields have followed less obvious patterns.

I will not touch on the quantitative research on buy-side topics here; papers on portfolio management, algorithmic trading and trade execution, though a significant part of Risk’s output, are not relevant to the special edition of this journal.

I thank the editors and publishers of Probability, Uncertainty and Quantitative Risk for this opportunity and I apologise in advance for the numerous omissions of important contributions that lack of space makes inevitable.

Finding sigma

The history of quantitative finance has been a long journey in which mathematicians, physicists, statisticians, economists, and those who are known as quants have pursued a common objective: managing volatility, or, in broader terms, the riskiness of financial markets.

The 20th century saw the birth and development of modern finance through numerous phases of progress in the quantitative infrastructure it was built on. Option trading was already alive and well in the 17th century, when merchants sought to protect themselves from the risks connected to their trades in what we would today consider a rudimentary form of hedging. It was in 1900, however, that the first milestone of quantitative finance was notched, when Louis Bachelier published his Ph.D. thesis The theory of speculation (Bachelier, 1900). Bachelier introduced the concept of Brownian motion to finance after it had been, apparently independently, formalised in mathematical terms by the Danish astronomer, mathematician, and statistician Thorvald N. Thiele (Thiele, 1880) two decades earlier. Brownian motion, as a continuous and normally distributed random process, could be applied to approximate the volatile path of asset prices.

Though Bachelier's work was largely overlooked for decades and only fully re-evaluated in the sixties, his studies have been enormously influential. Brownian motion provided the essential tool for the study of stochastic processes, a fundamental ingredient of quant finance. It underpinned the work of the Japanese mathematician Kiyoshi Itô, who in the 1951 paper On stochastic differential equations presented his lemma on how to differentiate a time-dependent function of a stochastic process. In essence, with Itô's lemma one can derive the differential equations for calculating the value of financial derivatives. Itô is considered the founder of stochastic calculus, and the most commonly used of its variations is named after him.
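
For readers who want the statement in symbols, here is the standard one-dimensional textbook form of the result: for an Itô process $dX_t = \mu_t\,dt + \sigma_t\,dW_t$ and a sufficiently smooth function $f(t, x)$,

$$ df(t, X_t) = \left( \frac{\partial f}{\partial t} + \mu_t \frac{\partial f}{\partial x} + \tfrac{1}{2}\sigma_t^2 \frac{\partial^2 f}{\partial x^2} \right) dt + \sigma_t \frac{\partial f}{\partial x}\, dW_t. $$

The extra second-derivative term, absent from ordinary calculus, is what turns a hedging argument into a pricing partial differential equation.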

The game-changing breakthrough in modern finance came in 1973 with the publication of Fischer Black and Myron Scholes' The pricing of options and corporate liabilities (Black, Scholes, 1973), and Robert Merton's On the pricing of corporate debt: the risk structure of interest rates (Merton, 1974). Using delta-hedging and arbitrage-free arguments, these papers presented a model for pricing call and put options. It quickly entered wide use and, in effect, enabled the explosion of the options market as we know it today.
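
For reference, the familiar textbook form of the resulting price of a European call on a non-dividend-paying asset (a standard restatement, not a quotation from the 1973 paper) is

$$ C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}, $$

where $S$ is the spot price, $K$ the strike, $T$ the time to maturity, $r$ the risk-free rate, $\sigma$ the volatility and $N(\cdot)$ the standard normal cumulative distribution function; the put price follows from put-call parity.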

The original Black-Scholes-Merton (BSM) model notoriously had a number of limitations. Its assumptions (Gaussian-distributed returns for the underlying, no dividends, no transaction costs, complete and liquid markets, and a known and constant risk-free rate, to cite just a few) oversimplified the reality of options markets and restricted the reliability and applicability of the model. Subsequent research in the field focussed on relaxing those assumptions and building models that would match market prices more accurately. It introduced variable risk-free rates, incorporated transaction costs and dividends, and used distributions with fatter tails to capture non-normality.

The biggest challenge, however, was moving away from the assumption of constant and known volatility, which soon proved highly unrealistic. Historical volatility is not fit for this purpose, as it gives no indication of expected future volatility. Implied volatility was then introduced as an estimate of sigma in the BSM formula, allowing option prices to be calibrated to market prices and positions to be hedged.
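
To make the idea concrete, the following minimal Python sketch backs out the implied volatility by inverting the Black-Scholes formula with a standard root finder; the function names and the example numbers are illustrative only.

import math
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call on a non-dividend-paying asset
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # Find the sigma that reproduces the observed market price
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

# Example: a one-year at-the-money call quoted at 10.45, with S = K = 100 and r = 5%
print(implied_vol(10.45, 100.0, 100.0, 1.0, 0.05))  # roughly 0.20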

Over the years, many models have been proposed to improve volatility modelling. In the 1980s, Robert Engle and Tim Bollerslev introduced the so-called ARCH (autoregressive conditional heteroskedasticity) model (Engle, 1982) and its generalised variant, GARCH (Bollerslev, 1986). They estimate volatility as a function of past realised volatility and innovations in the time series. While often referred to as stochastic volatility models, the GARCH family is not stochastic in a strict sense, because the volatility term it employs is a deterministic function of past information.
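
In symbols, the workhorse GARCH(1,1) specification (a standard textbook form) reads

$$ r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2, $$

with $z_t$ an i.i.d. unit-variance innovation. Given yesterday's return shock and variance, today's variance $\sigma_t^2$ is fully determined, which is why the process is conditionally deterministic rather than stochastic in the strict sense.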

Stochastic volatility models have been extensively used in the industry and remain a fundamental method of modelling equity and interest rate derivatives. In the Heston model (Heston, 1993), the volatility term follows a mean-reverting stochastic process with a square-root diffusion. A distinctive feature is the correlation between the volatility and the returns of the underlying asset. The Heston model is probably the most popular stochastic volatility model, not so much for its accuracy as for its relatively inexpensive computational requirements, a consequence of its closed-form Fourier solution.
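
In its usual textbook form, the Heston dynamics are

$$ dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^S, \qquad dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^v, \qquad d\langle W^S, W^v\rangle_t = \rho\,dt, $$

where $v_t$ is the instantaneous variance, $\kappa$ the speed of mean reversion, $\theta$ the long-run variance, $\xi$ the volatility of volatility and $\rho$ the spot-volatility correlation.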

Heston has been enormously influential, with a large number of variants appearing in the quant finance literature, which are widely applied to derivatives pricing across different asset classes.

Almost contemporaneously with Heston, Bruno Dupire (Dupire, 1994) developed the local volatility model. It describes the volatility of a European option while capturing its smile (the tendency of increasingly in-the-money or out-of-the-money options to exhibit higher implied volatility than at-the-money options, observed in particular in FX markets), fitting market prices with high accuracy. In its setting, the volatility that plugs into the Black-Scholes formula is derived as a function of strike price and time to maturity. In truth, local volatility represents a class of models not entirely separate from stochastic volatility, as its volatility can be seen as an average of the instantaneous volatilities obtained from stochastic models.
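
The model's central result is the Dupire formula, which (in the zero-dividend case, as it is usually stated) recovers the local volatility surface directly from quoted call prices $C(K, T)$:

$$ \sigma_{\mathrm{loc}}^2(K, T) = \frac{\dfrac{\partial C}{\partial T} + rK\,\dfrac{\partial C}{\partial K}}{\tfrac{1}{2}K^2\,\dfrac{\partial^2 C}{\partial K^2}}. $$

The need for smooth derivatives of $C$ in strike and maturity is precisely why the interpolation of option prices, discussed below, matters so much.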

The stochastic alpha beta rho (SABR) model (Hagan et al., 2002) is a more recent stochastic volatility model, mostly used for interest rate derivatives. By their own account, the authors developed it to overcome a mismatch they had observed between the smile dynamics predicted by Dupire's model and those realised in the market, which they believed could lead to incorrect hedging strategies.
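
In its standard formulation, SABR models the forward rate $F_t$ and its volatility $\alpha_t$ as

$$ dF_t = \alpha_t F_t^{\beta}\,dW_t^1, \qquad d\alpha_t = \nu\,\alpha_t\,dW_t^2, \qquad d\langle W^1, W^2\rangle_t = \rho\,dt, $$

with $\beta$ controlling the backbone of the smile, $\nu$ the volatility of volatility and $\rho$ the correlation. Its popularity owes much to the asymptotic implied-volatility formula of Hagan et al. (2002), which makes calibration essentially instantaneous.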

(Dupire, 1994) and (Hagan et al., 2002) have been the two most cited papers in Risk over the past 10 years, confirming the impact these two models have had on the derivatives market and the effort other researchers have dedicated to their improvement (Day, 2016). Because, of course, no model is perfect: in Dupire's model, for example, one difficulty is interpolating option prices between strikes, which are assumed to be traded in a continuum of values. More generally, it is also less suitable than stochastic volatility models for pricing complex hybrid products.

The original SABR model was imperfect, in the sense that it lacked a few desirable features: the mean reversion of volatility, the interaction between different forward rates (which are assumed to be independent), and the possibility of calibrating the parameters to multiple maturities, for instance. Several adjustments to address those issues have been proposed in the literature, however, and the model is still considered the industry standard for rates derivatives.

In 2012, unprecedented conditions in some major markets, where interest rates became negative, challenged the SABR model, which, like many other models, was not designed to work in a negative interest rate environment. A solution had to be found in very short order, and quants opted for a manual adjustment of the rates' distribution. By shifting the distribution to the right, one could ensure the modelled rate remained positive, so the framework in place would continue working. The trick of making rates artificially positive did not fix the underlying problem, however. Furthermore, the shift had to be recalibrated each time rates moved significantly further into negative territory. It was only in 2015 that an elegant new solution was introduced (Antonov, Konikov, and Spector, 2015). In The free boundary SABR, the authors proposed an extension of the SABR model in which rates can go negative, with no need to decide in advance how negative. Interestingly, the model has proved very useful not only for its ability to capture negative rates, but near-zero rates as well. That is important, because rates tend to display stickiness to zero before swinging into negative territory. Thanks to this work and others published the same year, Alexandre Antonov was awarded Risk's Quant of the Year in 2016.
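
Schematically, the shift amounts to replacing the forward $F_t$ with $F_t + s$ for some displacement $s > 0$ chosen in advance, i.e.

$$ dF_t = \alpha_t (F_t + s)^{\beta}\,dW_t^1, $$

so that rates can fall only as low as $-s$; the free-boundary SABR of Antonov, Konikov, and Spector (2015) removes the need to fix such a floor beforehand.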

Among the numerous contributions to stochastic volatility modelling, a series of four papers by Lorenzo Bergomi, published between 2004 and 2009, stands out: Smile Dynamics I, II, III, and IV (Bergomi, 2004, 2005, 2008, 2009). The importance of his work, which earned him Risk's Quant of the Year award in 2009, lies in the way volatility and spot price dynamics are combined in a framework that is easy to implement, computationally inexpensive, and applicable to vanilla products as well as exotic equity derivatives and volatility derivatives. In particular, it overcomes the limited freedom the Heston model imposes on the joint dynamics of spot prices, volatility, and the correlation between the two.

The following two charts show the trend in research on interest rate derivatives (Fig. 1) and on equity and volatility derivatives (Fig. 2), as seen in Risk.

Fig. 1

Fig. 2

Aside from SABR-related works, several of those papers investigated ways to address the dislocation between Libor and OIS rates that emerged with the advent of the crisis. The spread between the two had mostly been in the single digits of basis points up until 2007 and was reckoned negligible. Upon Lehman Brothers' collapse in September 2008, it ballooned to 365 basis points. Suddenly, Libor was not a risk-free rate anymore. Of course, by definition it never was, being the rate at which banks make (or declare they could make) unsecured loans to each other. It moves in tandem with the perceived credit quality of the banking system. But it had never reached levels at which its non-risk-free nature became obvious. Banks adapted to the new environment by switching to OIS-based curves to discount cash flows.

It was Vladimir Piterbarg, in his seminal paper on funding and discounting (Piterbarg, 2010), who first set out how derivatives pricing and risk management were affected by the new rate environment and by each bank's own cost of funds. He started from first principles and derived the formulae for derivatives valuation in the presence of costs associated with borrowing and lending, highlighting that different rates should be applied depending on whether a trade is secured through collateral posting or unsecured. This work, also praised for its clarity, won him his second Quant of the Year award in 2011 (he had already won in 2006 with a paper on volatility smile modelling).

Further research by Bianchetti (2010) confirms that market practice is now to use two curves to obtain no-arbitrage solutions. He does so by taking into account the term structure of the basis (the spread between Libor and OIS), which can be obtained from the market, and derives pricing formulae in an arbitrage-free double-curve setting. This solution draws an analogy with cross-currency derivatives pricing, in which an adjustment is needed to express the discounted cash flows under different rates.

In the cross-currency derivatives markets this basis was already known to be significant, with different funding costs applying depending on the currency of the collateral posted for a given trade. Masaaki Fujii and Akihiko Takahashi (Fujii, Takahashi, 2011) showed the impact of this on derivatives pricing by valuing the embedded cheapest-to-deliver option dealers hold in a cross-currency swap, which gives them a choice of currencies to post as collateral. Their model allows one to avoid the potentially large mispricing of future cash flows, and the subsequent losses, incurred when an incorrect discount rate is applied.

A natural step in the study of the Libor-OIS spread was moving from a deterministic to a stochastic basis. Among others, two papers by Fabio Mercurio provided solutions: one based on the Brace-Gatarek-Musiela interest rate model, which captures the joint behaviour of Libor and OIS across the tenors of the term structure (Mercurio, 2010), and one that models the basis explicitly in the multi-curve environment (Mercurio, 2012).

In later research, Crépey and Douady (2013) proposed an equilibrium approach to explain the mechanism by which banks lend at an optimised rate between Libor and OIS. That rate is built on Lois (the Libor-OIS spread), which the authors show to be a function of the credit skew of a typical borrower and the liquidity of a typical lender.

One of the visible effects of the crisis was the decline in the market for complex derivative instruments. Hybrid products, fancy exotic options, and cross-asset structured securities lost most of their appeal, both because clients often saw them as unnecessary and excessively risky, and because they became uneconomical for banks, as capital, funding, and regulatory compliance eroded profits. However, Figs. 1 and 2, which also include papers on these topics, suggest there has been only a moderate decline in research on derivatives pricing. This is because volatility, smile dynamics, and correlation modelling have continued to inspire new ideas, such as modelling swap rates' volatility (Rheinlaender, 2015) and local or stochastic correlation models (see, for example, Langnau, 2010 and Zetocha, 2015), and the trend is unlikely to end soon.

Risk management

Risk management research has recently been more active than it was during the crisis years. Several studies on building stress-testing methodologies, liquidity risk, and operational risk have appeared in Risk. Others have proposed alternatives to value-at-risk (VaR) and expected shortfall (ES), such as measures based on the omega risk measure or on expectiles.

Needless to say, the vast majority of the papers comprising Fig. 3 deal with VaR or, to a lesser extent, ES. That pattern is expected to change in the years to come. Under the Basel Committee's Fundamental review of the trading book (FRTB), ES is due to replace VaR for the calculation of market risk capital requirements. One of the debates sparked by the move centres on the possibility of back-testing ES, a desirable feature given the wide applications ES will have. ES has been proven not to be elicitable, a property possessed by some risk measures (like VaR) and not others, which was long thought necessary for a back-test to be valid. Acerbi and Székely (2014) and Fissler et al. (2016) proved this not to be the case: elicitability is needed for comparative back-testing between two risk measures, but does not affect an individual measure's backtestability.
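
For concreteness, here is a minimal Python sketch of historical VaR and ES estimated from a sample of portfolio P&L; the function and the simulated data are purely illustrative and do not correspond to any regulatory prescription.

import numpy as np

def var_es(pnl, alpha=0.975):
    # Historical VaR and expected shortfall at confidence level alpha.
    # pnl holds portfolio profit-and-loss figures; losses are -pnl.
    losses = -np.asarray(pnl)
    var = np.quantile(losses, alpha)       # loss exceeded with probability 1 - alpha
    es = losses[losses >= var].mean()      # average loss in the tail beyond VaR
    return var, es

# Illustrative example with simulated daily P&L
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1e6, size=5000)
print(var_es(pnl))  # ES exceeds VaR, since it averages the losses beyond it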

Fig. 3

One notable recent adjustment to VaR was proposed in the paper Risk management for whales (Cont and Wagalath, 2016); it incorporates liquidation costs into the risk management model for financial portfolios. The paper explicitly refers to the famous 2012 'London whale' incident, in which JP Morgan lost USD 6.2 billion in liquidation costs while trying to reduce its risk-weighted assets. The liquidation component of the model is designed to capture the price impact of large sales. The authors show that a standard model neglecting liquidity issues would have estimated a VaR five times lower than their liquidation-adjusted VaR (LVaR). Such a warning about the riskiness of liquidating a large portfolio might prompt a bank to revise its exit strategy.

Credit derivatives

As mentioned above, the chart in Fig. 4, showing post-crisis credit derivatives research in Risk, will surprise no one. In fact, if there is anything surprising in it, it is the relatively intense research activity on collateralised debt obligation (CDO) pricing still persisting in 2010 and, more moderately, in 2011. Following the spectacular failure of these products, the details of which are not of interest to us here, their issuance volumes went from USD 180 billion per quarter at the beginning of 2007 to about USD 10 billion a year later. Between 2007 and 2009, the banking system suffered losses connected to CDOs in the region of half a trillion US dollars.

Fig. 4

Having got the pricing so wrong, banks not only decimated their credit derivatives desks; even doing research on CDOs carried a reputational risk that no model could manage. In early 2010, it was not rare to see complex securities such as CDOs discussed in the mainstream media, with the institutions or people involved named and shamed.

It is true that in the past three to four years the issuance of CDOs has been on the rise again, but that has not stimulated a new wave of frenetic searching for the perfect copula model.

Valuation adjustments

Valuation adjustments (XVAs) affect derivatives pricing by incorporating counterparty risk, funding, margin requirements, and regulatory capital. By construction, the calculation of their values, sensitivities, and joint behaviour is an enormous computational challenge, which has forced banks to equip themselves by adding computing power (such as GPUs) and by adapting and deploying mathematical solutions (such as adjoint algorithmic differentiation, or AAD) to the cause. XVAs are now a key ingredient in the pricing of derivatives across all asset classes, and are intrinsically connected with all areas discussed in this paper.

Their rise to prominence in finance can be seen as a by-product of the credit crisis and the subsequent introduction of more stringent regulatory frameworks in banking. The evolving ideas regarding price modelling, risk management, accounting, and the controversies of valuation adjustments have generated a rich stream of research in which the last word has yet to be said: Fig. 5 tells the story eloquently.

Fig. 5

A credit valuation adjustment (CVA) is commonly defined as the difference between the price of an instrument including credit risk and the price of the same instrument where both parties are credit risk free (Green, 2016). In essence, it represents the price of counterparty credit risk.
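
A common textbook discretisation of unilateral CVA (not necessarily the exact formulation used in Green, 2016) makes the ingredients explicit:

$$ \mathrm{CVA} \approx (1 - R)\sum_{i=1}^{n} D(t_i)\,\mathrm{EE}(t_i)\,\big[\mathrm{PD}(t_i) - \mathrm{PD}(t_{i-1})\big], $$

where $R$ is the counterparty's recovery rate, $D(t_i)$ the discount factor, $\mathrm{EE}(t_i)$ the expected positive exposure at time $t_i$ and $\mathrm{PD}(t)$ the cumulative default probability. The expensive ingredient is $\mathrm{EE}(t)$, which typically requires simulating the whole portfolio forward in time.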

CVA had been calculated by some top-tier banks long before the crisis, since approximately 2003. At the time, however, it was merely a back-office exercise whose value was not included in derivative prices. Rather, it was used as a measure to monitor counterparty risk at trade level.

The first CVA models focussed only on the unilateral case, meaning they took into account only the default risk of the counterparty. It was soon observed that this could create a price asymmetry, as each counterparty would propose a price that considered only the other party's default risk and not its own, potentially scuppering any deal. Bilateral models were then introduced to address this issue. In a bilateral setting, symmetry is restored by calculating the CVA term for each counterparty as a cost and considering its algebraic opposite as a benefit for the other party. The total adjustment for a given counterparty is therefore the difference between the cost and the benefit. The benefit is known as the debit valuation adjustment (DVA), a source of great controversy upon its introduction. The mechanism allows a bank to report a profit as a consequence of its deteriorating financial health, given that a higher default risk means a higher CVA for its counterparty and a symmetrically higher DVA for itself. This not only raises eyebrows for its economic implications and propensity for perverse incentives; it also creates a challenge, since this quantity cannot be replicated, and therefore hedging and monetising it is not possible. One theoretical solution to hedging DVA is for a bank to take a long position in its own credit default swaps (CDS), but that is not technically doable. A way around the problem that some banks are said to employ (see Carver, 2012) is to take a long position in a basket of CDSs on correlated entities: a US bank could, for example, use CDSs on other US banks to hedge its own DVA.

Among the first studies on the subject are Brigo and Capponi's paper (Brigo and Capponi, 2010), first made public in 2008, and Jon Gregory's 2009 paper. Gregory provides a set of pricing equations for the bilateral counterparty risk case with non-simultaneous and simultaneous defaults, but dismisses the idea of reporting the DVA benefit as nonsensical. Brigo and Capponi do not touch upon the issue of the benefit, but provide an arbitrage-free and symmetric CVA model applicable to trades in CDSs.

In 2011, Burgard and Kjaer (see Burgard and Kjaer 2011a, b) proposed an alternative hedging strategy for own-credit risk that involves the repurchase of the bank's issued bonds. On top of this semi-replication strategy, they build a Black-Scholes PDE with bilateral counterparty risk that also takes the funding costs of hedging into account. The bonds to be bought back are of different seniorities (from risky to risk-free) to allow consideration of either of the two cases at default.

While subject to criticism because of the practical restrictions of the approach, the strategy has set the market standard and greatly influenced subsequent research in the field. In 2013, Burgard and Kjaer (Burgard, Kjaer 2013) expanded on the funding considerations by developing a strategy to decide whether to hold bonds or issue new ones. They were named Risk's Quants of the Year in 2014 thanks to these papers. The authors have continued building on their framework, and a recent paper (Burgard and Kjaer, 2017) extends the model to multiple netting sets while contributing to an important ongoing debate on funding, as explained later in this paper.

Funding valuation adjustment (FVA) is the funding cost of hedging an uncollateralised client trade. It caused controversy very early in its young history. Its economic value, accounting treatment, and even philosophical meaning were debated for several months by academics and practitioners alike, creating two rather clearly distinct camps. Through the pages of Risk, John Hull and Alan White (Hull and White, 2012a) voiced their opinion, stating: “FVA should not be considered when determining the value of a derivatives portfolio, and it should not be considered when determining the prices the dealer should charge when buying or selling derivatives.” To support this conclusion, they assume complete and perfectly efficient markets and rely on the Modigliani-Miller theorem, according to which the price of an asset does not depend on the way it is funded.

Immediately after Hull and White's piece was published, quants from various institutions contacted Risk's editorial teams in London and New York to express their views. Hull and White said they were themselves “inundated” with responses from all over the world. It became apparent that the debate was not merely philosophical: it had important practical implications. This was understandable, because adopting one principle over the other could mean a difference in the region of hundreds of millions of dollars for the largest dealers.

Stephen Laughton and Aura Vaisbrot, then at the Royal Bank of Scotland, were the first to publish a counterargument (Laughton and Vaisbrot, 2012). Their main point was that market incompleteness does not allow one to hedge all the risk factors in a derivatives portfolio, as Hull and White assume. In essence, they were saying that a funding adjustment to the risk-neutral value is necessary. They also imply that, as a consequence, pricing differs depending on the bank's funding rate; that is, it violates the law of one price, according to which a given asset traded in two different markets must have a single price in order to avoid arbitrage opportunities. Since the 2008 crisis, prices have become dealer-dependent.

Others intervened, pointing out that post-crisis discounting cannot be performed using risk-free rates, as that would ignore the cost of funding the replicating instruments.

To the delight of those following it, the debate continued for some time; Hull and White, unconvinced by their critics’ arguments, replied, maintaining their position that FVA should be ignored (Hull and White, 2012b).

While both sides might be able to substantiate their viewpoints on theoretical grounds, the reality is that banks have almost universally accepted accounting for FVA and have been including it in derivatives pricing for several years. The losses banks faced when first incorporating FVA caused many a storm on earnings days: JP Morgan reported a $1.5 billion loss attributable to FVA in the fourth quarter of 2013 (Cameron, 2014). In 2014, other banks followed suit: UBS reported a $282 million loss on FVA in the third quarter, Citi $474 million, BAML $497 million, to name just some of the casualties (Rennison 2014, Becker 2015a and 2015b).

Meanwhile, the debate moved from the justification of FVA’s existence to its pricing and accounting, with quants and academics trying to find consistent answers.

In their most recently published paper, Burgard and Kjaer shared their findings on FVA and the effect of funding on the economic value of a firm for its shareholders. Taking a step back, funding can be symmetric or asymmetric depending on whether a derivatives desk can borrow and lend at the same rate. Symmetry is the common market assumption, while asymmetric funding has been presented in Albanese, Andersen, and Iabichino (2015). Views on this are far from unanimous.

Capital valuation adjustment (KVA) accounts for the cost of the equity capital a bank incurs when entering a derivative trade. To explain the concept, some would say KVA is akin to FVA, in the sense that in both cases the cost is associated with a source of funding. The difference is that KVA, rather than measuring the funding cost coming from debt, refers to the funding cost from equity. That is, as a new deal is agreed, it generates a cost of borrowing from the shareholder. That cost is normally of a scale comparable to that of other valuation adjustments, which means its importance should not be underestimated.

KVA was first introduced by Andrew Green, Chris Kenyon, and Chris Dennis in 2014. They model it by adapting the semi-replication hedging strategy of (Burgard and Kjaer, 2011a) to replicate the cost of capital of a trade. The issue here is not so much pricing KVA as knowing the regulatory capital requirements and computing the expected value of capital through time, which is done either through standardised approaches or through internal model methods.

Albanese et al. (2016) offer a different reading of KVA, which they define as a cost proportional to the capital-at-risk multiplied by the hurdle rate. This aligns with the previous definition of KVA as the compensation for shareholders' capital. In their paper they focus on the accounting framework for FVA and KVA, while explaining the tight connection between the two, since both are forms of funding. Criticising market practice and accounting standards, they argue that by not reporting capital and funding costs transparently, banks are implicitly allowed to hide those costs and report inflated profits.

The most recent addition to the XVA family is the adjustment for the cost of funding initial margin, known as MVA. Posting initial margin on derivatives transactions that are not centrally cleared became mandatory in September 2016 in the US, Canada, and Japan, and in February 2017 in Europe.

Initial margin requirements are calculated using the International Swaps and Derivatives Association's Standard initial margin model, which enables participants in non-cleared trades to collect margin from each other (Osborn, 2016). To then calculate the MVA, Kenyon and Green (2015) again propose adapting Burgard and Kjaer's semi-replication method. As with KVA, the evolution of a quantity, here the initial margin, has to be estimated over the expected lifetime of the portfolio. This is done through the calculation of VaR or ES, which may require a computationally expensive Monte Carlo simulation. According to their model, applying a regression technique makes the whole process much faster.

Credit risk

Credit risk is a vast subject. The two categories above (credit derivatives and valuation adjustments) have been isolated in order to observe their specific trends. The “residual” series, as seen in Fig. 6, is therefore partial and not representative of the entire category.

Fig. 6

Topics that fall under this umbrella relate to credit portfolios and their correlation structure, collateral posting, central counterparties (CCPs), and systemic risk, with some interactions between them.

In the post-crisis world, CCPs have assumed a fundamental role in the financial system. They stand between the two counterparties on opposite sides of a trade, with the purpose of eliminating counterparty risk. The questions their role poses are whether they are themselves a safe counterparty, and whether their size constitutes a significant systemic risk. CCPs' business is complex to model. Each CCP may have thousands of clients, also called general clearing members (GCMs), and each GCM may deal with multiple CCPs. There are initial margins, variation margins and default fund contributions to be considered. The correlation structure is such that an analytical description of this system is impossible. Studies on CCPs' efficiency have returned mixed results.

A paper by Duffie and Zhu (2011) focusses on the counterparty risk a CCP is supposed to mitigate. The authors argue that, under given circumstances, the use of multiple CCPs may have the counterintuitive effect of increasing counterparty exposure as a consequence of the fragmentation of netting. Also, they conclude that if a GCM deals with derivatives in multiple asset classes, clearing them through a single CCP is more efficient than doing so through multiple CCPs.

Borovkova and El Mouttalibi (2013) conducted an extensive analysis of the effect of CCPs' activity on systemic risk. They apply a network simulation approach through a Monte Carlo method, with scenarios including market shocks, default contagion, and CCP defaults. Their findings suggest that in the case of a homogeneous financial system (which is only a theoretical construct) central clearing guarantees higher stability. However, in the more realistic case of an inhomogeneous financial system, market instability is greater and CCPs' presence is harmful.

More recently, Barker et al. (2017) analysed the credit and liquidity risks for a GCM that clears with a given CCP. Their rather complex model aims at quantifying the impact of another GCM's default on the CCP and, consequently, on the GCM itself. In contrast to Borovkova and El Mouttalibi (2013), their research concludes that a greater use of central clearing does not increase systemic risk.

Computational finance

The manipulation of large datasets and computationally expensive operations, such as the simulation of millions of scenarios for the calculation of sensitivities and valuation adjustments, are daily struggles for a derivatives desk. As Fig. 7 shows, attention to these issues is gaining pace.

Fig. 7

Several of the papers appearing in Risk on the subject of computational finance during this period focussed on applications of Monte Carlo methods. This versatile simulation technique has proven useful for a myriad of applications. Originally, Monte Carlo was applied to the pricing of exotic options and the calculation of sensitivities of a derivatives portfolio. But it is also commonly used, for example, to estimate the distribution of credit portfolio losses, with a growing body of research on this subject aimed at speeding up computing time. The method also supports stress testing, as an ideal tool for generating financial stress scenarios that can be plugged into a risk measure (for example, value-at-risk).
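
As a minimal illustration of the kind of calculation involved, the Python sketch below prices a European call under geometric Brownian motion by Monte Carlo and estimates its delta with a pathwise estimator; it is a toy example in the spirit of sensitivity calculation, not an implementation of the AAD frameworks discussed next.

import numpy as np

def mc_call_price_and_delta(S0, K, T, r, sigma, n_paths=200_000, seed=42):
    # Monte Carlo price and pathwise delta of a European call under GBM
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    disc = np.exp(-r * T)
    price = disc * np.maximum(ST - K, 0.0).mean()
    # Pathwise estimator: d(payoff)/dS0 = 1{ST > K} * dST/dS0 = 1{ST > K} * ST / S0
    delta = disc * ((ST > K) * ST / S0).mean()
    return price, delta

print(mc_call_price_and_delta(100.0, 100.0, 1.0, 0.05, 0.2))
# roughly (10.45, 0.64), close to the Black-Scholes values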

Capriotti and Giles (2010) and Capriotti, Lee, and Peacock (2011) applied Monte Carlo in conjunction with adjoint algorithmic differentiation (AAD) to build a framework with considerably reduced computational costs compared to other methods. The first paper delivered fast estimates of correlation risk and Greeks, while the second provided a framework for real-time counterparty credit risk measurement, allowing banks to react with their hedging strategies.

Adjoint methods were introduced to finance by Giles and Glasserman (2006) as a fast technique for calculating portfolio Greeks. Capriotti and his co-authors contributed extensively to the development and popularisation of AAD in the following years. Most major banks have now adopted it, as its speed and the accuracy of its outputs are extremely valuable, though the cost of implementation (in terms of project time, expertise required, and adaptation of existing databases and software libraries) remains an obstacle for some (see Sherif, 2015).

Conclusion

As Andrew Green explains in his book on XVAs (Green, 2016), before the crisis the components of the price of a derivative instrument were its risk-neutral price (discounted at the Libor rate), hedging costs, CVA (with the limitations mentioned above, and only in the latest pre-crisis years), and the bank's profit. Since the crisis, the price components are the risk-neutral price (discounted using OIS), hedging costs, CVA, the bank's profit, FVA (including the cost of liquidation buffers), KVA, MVA and even, for some financial entities, a tax valuation adjustment. If a derivative is cleared, not all of these components will apply, but other factors, such as clearing costs, will.

The number of components has doubled and the computational requirement has grown by orders of magnitude. The search for a comprehensive quantitative solution that can cope with all the challenges continues.

References

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Cesa, M. A brief history of quantitative finance. Probab Uncertain Quant Risk 2, 6 (2017). https://doi.org/10.1186/s41546-017-0018-3
