What is risk?

Dealing with risk management issues properly requires a clear definition of risk. Defining and managing risk depend to a large extent on the end-user's perspective. An options market maker may focus on aspects of risk that an investor or treasurer (hedger) may not. Indeed, there are many types of risk separate from market risk, although here I focus primarily on market risk.

A market maker or an arbitrageur may need to implement a static/dynamic strategy that ensures they can create or profit from the management or (synthetic) creation of some underlying process - a position-keeping form of risk management. Embedded in this strategy is a direct link between the value of the deliverable/outcome and the rebalancing/risk management process. This can be quite a complex topic. Notably, these methods tend not to rely on (and indeed try to remove any dependence on) a particular view of future market levels.

Another approach to risk management is to ask the question “how much money could I lose?” In this case, one may not need to make a connection between a dynamic strategy and the “risk measure”. However, and importantly, a “directional forecast” of the future behaviour of the profit and loss (P&L) needs to be made.

Since humans are generally incapable of making consistent and accurate “outright” forecasts of the markets, a statistical approach is often used. In this case, we could restate the question and ask “how much money could my portfolio lose, say 5% of the time, over my forecast period?” This is the essence of Value at Risk (VaR).

Describing risk

In practice, the implementation of VaR methods can be either directly related to (statistically) forecasting P&L, or indirectly forecasting P&L by revaluing the portfolio for the “likely” range of market effects.

Modelling the shape of uncertainty can be approached in a manner similar to “statistical games”, such as the rolling of dice. Any attempt to predict the future implies a model of expected behaviour and, for statistical processes, this modelling requires estimation of the chance or probability of events. If we can assign a single probability to each event (such as market level, or portfolio value), then we have created a “probability distribution” (or simply a “distribution”). The resulting distribution is our model of the future (of uncertainty), and the distribution's properties are used to determine the VaR.

A distribution is usually considered in the context of the frequency of an event. So the proverbial coin toss has a 50:50 probability distribution (over very many trials). Similarly, throwing dice has an associated distribution of possible outcomes. The distribution for these two “processes” can be constructed by considering the number of times a particular outcome is observed over a large number of experiments (tosses or throws).
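
As a small aside (not from the original article), the dice distribution can also be constructed by enumeration rather than repeated experiments. A minimal Python sketch, assuming a pair of standard six-sided dice:

```python
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of a pair of dice and count
# how often each total occurs; the counts form the distribution.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in sorted(totals):
    print(f"total {total:2d}: probability {totals[total] / 36:.3f}  "
          f"{'#' * totals[total]}")
```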

What can be said about the “risk management” of these two “games”? Suppose the objective was to estimate (limiting) losses when the bet pays out on heads (for the coin) or on a dice total above 7 (for the dice), and loses in all other cases. By looking at the distributions, it is possible to deduce that the chance of losing money on the next coin toss (on average) is 50%, since there are only two possible outcomes of equal probability. More formally, the expected outcome is the sum of probability-weighted events.

Total chance = 0.5 (heads) + 0.5 (tails) = 1

Therefore, chance of win = 0.5 (heads) = 50%

In the case of the dice, the expected outcome is calculated the same way, except that there are a larger number of possible outcomes.
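
For instance, assuming the dice bet means “the total of a pair of dice exceeds 7”, the probability-weighted sum can be evaluated by enumeration (a sketch, not the article's own calculation):

```python
from itertools import product

# Win if the total of two dice exceeds 7, lose otherwise; each of the
# 36 outcomes carries probability 1/36.
outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]
p_win = sum(1 for t in outcomes if t > 7) / len(outcomes)

print(f"chance of win : {p_win:.3f}")      # 15/36, about 41.7%
print(f"chance of loss: {1 - p_win:.3f}")  # 21/36, about 58.3%
```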

We can always determine the expected loss level by determining the sum of the (possibly weighted) probabilities from the distribution. For the financial markets, the “experiments” or “trials” are generally assumed scenarios, based on hypothetical future outcomes and/or on historic outcomes.

This is a very useful way to test any level of loss expectations if the distribution is known. A key point to notice is that we will generally be interested in determining the “sum of probabilities”. Therefore, the distribution will attempt to assign a probability to each possible market price or position value.

In VaR terms, we do not actually specify the threshold P&L explicitly, but rather ask what the threshold P&L is when we specify a threshold frequency/probability in terms of a quantile. So we ask something like “what is the P&L loss level in the worst 5% of cases?” and work backwards to determine the P&L that this 5% implies. Additionally, VaR is often expressed in terms of the “loss from current P&L”.
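
In code, this “work backwards from a quantile” step is just reading off a percentile of the P&L distribution. A minimal sketch with simulated (invented) P&L data:

```python
import numpy as np

# Hypothetical daily P&L history in currency units; in practice this
# would come from your own portfolio.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=10_000.0, size=1_000)

# The "worst 5% of cases" threshold is the 5th percentile of the P&L.
threshold = np.percentile(pnl, 5)

# VaR is usually quoted as a loss from current P&L, ie. a positive number.
var_95 = -threshold
print(f"95% VaR: {var_95:,.0f}")
```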

The choice for the distribution is everything. The distribution, for better or for worse, incorporates all of the desired or known or mandated or assumed information about the behaviour of the uncertain process at a given point in time.

Summing up the probabilities from empirical data can be automated to some extent by standard mathematical methods. But in some cases more convenient approaches (if they are consistent) may be available. And, in risk management terms, one might be interested only in looking at the change in P&L.

The difficulty for market risk is that we don't actually know the exact description of the distribution and, worse, that distribution may change over time. Thus, one type of danger is to assume an inaccurate distribution and then obtain an inaccurate loss expectation (VaR). In the example above, if the players were throwing dice and did not know the correct distribution, they might assume one. Suppose they assume the coin-toss distribution (perhaps because it is easy to conceive) and base their chances of winning at dice on this wrong distribution. A more subtle variation is to use, say, the normal distribution when playing dice, since the two shapes (the dice histogram and the normal curve) are similar.

Another type of danger lies in that VaR statistics themselves assume a particular underlying distribution. That is, different measures of variability and confidence intervals are defined for specific distributions. For example, it is quite common to use a well-known statistic on market data, which may have a distribution that is different from that assumed by the VaR measure.

This brings our attention to an important distinction. The choice for the distribution is a modelling or policy decision. By choosing a distribution, we are imposing our own (or management's or shareholders') view of the variability of the future. The choice of statistical measures, however, is a technical issue. Once a model distribution has been set, we should attempt to use a consistent statistical measure. Frequently, though, many people apply the measures that they know, whether or not they are consistent, because it is expedient to do so.

In addition to the modelling of the “shape of uncertainty” (ie. the distribution), our uncertainty about future events generally varies with length of forecast. Usually we are less certain about long dated predictions than we are about short dated predictions. Modelling uncertainty requires not only defining the shape of the distribution on a given day, but also how this shape evolves over the length of forecast. This issue is particularly important for portfolios that include term products such as bonds and options.

To put things on an intuitive footing, and to avoid the complexities, consider that virtually all market models achieve this by choosing the same basic shape for each future date, “stretched” by a suitable amount, usually scaling by the square root of the forecast period.
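
For example, a one-day volatility estimate might be stretched to a ten-day horizon as follows (a sketch; the 1.2% daily figure is assumed):

```python
import math

# Scale a one-day volatility to an h-day horizon by the square root
# of time -- the "stretching" described above.
sigma_1day = 0.012        # assumed 1.2% daily standard deviation
horizon_days = 10
sigma_h = sigma_1day * math.sqrt(horizon_days)

print(f"{horizon_days}-day sigma: {sigma_h:.4f}")  # about 0.0379
```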

In practice, it is often the case that the problem of P&L VaR is expressed in terms of returns. For example, return on investment (ROI), a simple measure of returns, is expressed in terms of portfolio values (or prices). So we can always rewrite the VaR problem in terms of returns, since returns relate prices on one set of dates to another.

We are focusing on risk in the sense of “what is my likely loss level a given percentage of the time” based on a probability distribution of the P&L. In its most direct sense, this requires obtaining the distribution of the P&L and summing the probabilities of P&L outcomes up to our chosen threshold. VaR is the calculation of the threshold P&L given a percentage tolerance and a distribution. However, if we can apply a model distribution, then a more convenient form may be possible, such as expressing the threshold as a multiple of the standard deviation.
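
Under a normal model distribution, for instance, the threshold can be read off as a multiple of the standard deviation of returns (a sketch with invented prices; scipy's norm.ppf supplies the multiple):

```python
import numpy as np
from scipy.stats import norm

# Invented price series; returns relate prices on one date to the next.
prices = np.array([100.0, 101.2, 100.5, 102.0, 101.1, 103.4])
returns = prices[1:] / prices[:-1] - 1.0

sigma = returns.std(ddof=1)   # sample standard deviation of returns
z = norm.ppf(0.05)            # about -1.645 for a 5% tail under normality

portfolio_value = 1_000_000.0
var_95 = -z * sigma * portfolio_value
print(f"95% one-period VaR: {var_95:,.0f}")
```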

Standard approaches to VaR

There are three categories of VaR measurement: covariance-based approaches (CVaR, eg. RiskMetrics®), historical approaches (HVaR), and simulation approaches (Monte Carlo). Each attempts to accomplish similar things, but they have important differences, costs and benefits.

It is generally the “change” or P&L that is of interest, and so it is the change that is modelled in some sense, as opposed to outright position values. This presents the question of how one defines, obtains and generates such changes. It is the manner in which these changes (sometimes called displacements) are defined and treated that gives rise to the “flavours” of VaR.

Historical methods: HVaR

HVaR methods are perhaps the most intuitive. One very simple form is to obtain the last “n-days” of changes in position values (P&Ls) and create a distribution from this data set. These “changes” in the prices of the components of the portfolio can then be applied to today's portfolio, resulting in a distribution of position values for today's portfolio as if it had undergone fluctuations based on this history. Of course, this assumes that the future behaves exactly as the past.
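
A minimal single-asset sketch of this procedure (the function and data here are invented for illustration, not taken from the article):

```python
import numpy as np

def historical_var(price_history, position_value, tolerance=0.05):
    """Simple one-asset HVaR: apply the last n daily returns to today's
    position and read off the tolerance quantile of the implied P&L."""
    prices = np.asarray(price_history, dtype=float)
    returns = prices[1:] / prices[:-1] - 1.0     # historical daily changes
    pnl = position_value * returns               # as if applied today
    return -np.percentile(pnl, 100 * tolerance)  # loss as a positive number

# Hypothetical usage with made-up prices:
rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 250))
print(f"95% HVaR: {historical_var(prices, 1_000_000.0):,.0f}")
```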

Covariance methods: CVaR

Variance/Covariance VaR (CVaR) methods take a slightly different route to the manner in which the position value changes (or returns) are incorporated into the VaR calculation. The basic idea is that, instead of revaluing the position for “shifted prices”, the variance/standard deviation of the price movements is related to position value changes. The key benefit of this approach is that it can be simpler to implement a wider range of desired features, albeit sometimes at the cost of “accuracy”.
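
A two-asset sketch (positions and covariances invented for the example): the portfolio P&L standard deviation comes straight from the covariance matrix, and the VaR is a multiple of it:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical position values and an assumed covariance matrix of
# daily returns (1% and 1.5% daily vols, 50% correlation).
positions = np.array([600_000.0, 400_000.0])
cov = np.array([[0.010 ** 2,          0.5 * 0.010 * 0.015],
                [0.5 * 0.010 * 0.015, 0.015 ** 2]])

# P&L standard deviation: sqrt(w' * cov * w) with w in currency units.
sigma_pnl = np.sqrt(positions @ cov @ positions)

var_95 = -norm.ppf(0.05) * sigma_pnl  # about 1.645 standard deviations
print(f"95% covariance VaR: {var_95:,.0f}")
```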

Monte Carlo methods: MCVaR

In the application of Monte Carlo (MC) methods to VaR, the key issue is that MC may be used to provide a kind of scenario analysis that incorporates the market dynamics, and indeed most other dynamics of interest. This makes MCVaR the most general of the three methods referred to. However, it is also the most involved of these methods as well.
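
A sketch of the idea in its simplest form (correlated normal scenarios via a Cholesky factor; all inputs are invented, and a real MCVaR would use richer dynamics):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed inputs: position values and a covariance matrix of daily returns.
positions = np.array([600_000.0, 400_000.0])
cov = np.array([[0.010 ** 2, 0.000075],
                [0.000075,   0.015 ** 2]])

# Draw correlated return scenarios using the Cholesky factor of cov.
n_scenarios = 100_000
L = np.linalg.cholesky(cov)
returns = rng.standard_normal((n_scenarios, 2)) @ L.T

# Revalue the portfolio under each scenario and read off the 5% quantile.
pnl = returns @ positions
var_95 = -np.percentile(pnl, 5)
print(f"95% MCVaR: {var_95:,.0f}")
```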

Reality impact and future considerations

There are so many important real-world issues in connection with risk management and VaR that it is not possible to cover them all here. However, a few key “issues” provide some indication: theory versus reality; “know your stuff” (no black boxes); and making sure that the results/processes are validated in some sense (the “I want proof” issue), and that they are specific not only to your markets but also to your mandate.

  • Non-normal distributions

    It is generally the case that the distribution of prices is not normal although, in many cases, the actual distribution may be sufficiently normal for your mandate. Of course, one needs to check these things. The central limit theorem suggests that a large portfolio composed of many different instruments will result in a normally distributed portfolio (in the mathematical limit). If you have a large (linear) portfolio, then this may be helpful. If your portfolio has only a few constituents, or highly non-linear constituents, then it is time for some additional analyses. Even more important, perhaps, is that the prime purpose of employing traders is exactly to alter the distribution of the P&L. For example, a treasurer or investor may hedge their portfolio by buying a put option, leading to an extremely non-normal shape (as the sketch below illustrates). In this instance, many VaR measures will be very poor indeed.
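
    A rough simulation (all numbers invented for the illustration) of a protective put held to the horizon shows how the put truncates the left tail of the P&L, making it strongly non-normal:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical protective put: hold a share at 100 and buy a put struck
    # at 95 for a premium of 2, so the horizon P&L is floored near -7.
    s0, strike, premium = 100.0, 95.0, 2.0
    s_T = s0 * np.exp(rng.normal(0.0, 0.10, 100_000))  # assumed 10% vol move

    pnl = np.maximum(s_T, strike) - s0 - premium       # put floors the value

    # The left tail is truncated, so the distribution is strongly skewed:
    print(f"5% quantile: {np.percentile(pnl, 5):.2f}")  # pinned near the floor
    skew = ((pnl - pnl.mean()) ** 3).mean() / pnl.std() ** 3
    print(f"skewness   : {skew:.2f}")                   # far from 0 (normal)
    ```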

  • Stationarity

    Statistical parameters such as variance are not constant in real-world trading. Thus, measuring VaR today will almost surely give a different number from tomorrow's (a natural consequence of the non-stationary behaviour of markets). Standard VaR methods do not account for this (the sketch below shows a rolling volatility estimate drifting between regimes).
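
    A quick illustrative check with made-up data: a rolling standard deviation of returns, and hence any VaR built on it, moves around as the market shifts between regimes:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Made-up returns with a calm regime followed by a volatile one.
    returns = np.concatenate([rng.normal(0, 0.01, 250),
                              rng.normal(0, 0.03, 250)])

    window = 50
    rolling_sigma = np.array([returns[i - window:i].std(ddof=1)
                              for i in range(window, len(returns))])
    print(f"rolling sigma ranges from {rolling_sigma.min():.4f} "
          f"to {rolling_sigma.max():.4f}")
    ```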

  • VaR validity and accuracy

    Even if the most sophisticated approaches are used to implement the ideas above, one must still ask if VaR is reliable. For example, if a 5% VaR is meant to be exceeded on only 5% of occasions, does that hold in practice? And, if not, what is the implication? In reality, VaRs do not hold. This is primarily due to the assumptions about the distribution, stationarity, linearity, etc. The question then is how often do they fail, and how badly?

    For example, analysis of a single share in Barclays plc over a particular 1,000-day period indicates that it fails VaR 25% of the time. That is, the 5% rule was broken 25% of the time and so, in this case, the VaR process would have set trading limits that were far too generous. Other cases exist where the VaR fails on the other side, and thus unnecessarily restricts trading/investing, reducing the firm's ROI.

    Clearly, a good understanding of your portfolio's behaviour is required for sensible application. For example, back-testing and forward-testing are recommended (a minimal back-test sketch follows).
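
    A minimal back-test sketch (invented data; the names are mine, not the article's): count how often realised losses exceed the reported VaR and compare with the nominal tolerance:

    ```python
    import numpy as np

    def backtest_var(pnl_series, var_series):
        """Fraction of days on which the loss exceeded the (positive) VaR;
        for a well-behaved 5% VaR this should be close to 5%."""
        pnl = np.asarray(pnl_series)
        var = np.asarray(var_series)
        return (pnl < -var).mean()

    # Hypothetical usage: fat-tailed P&L tested against a naive normal VaR.
    rng = np.random.default_rng(5)
    pnl = rng.standard_t(df=3, size=1_000) * 10_000
    var = np.full(1_000, 1.645 * pnl.std(ddof=1))
    # The observed rate can land well away from 5%, on either side.
    print(f"observed failure rate: {backtest_var(pnl, var):.1%}")
    ```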

  • Non-linearity

    Many instruments, such as options, exhibit highly non-linear behaviour. Some VaR methods attempt to account for this by adding additional terms to the “projected value sensitivity”, such as using delta+gamma. Unfortunately, this is of little help with exotic options, and can indeed be misleading with vanilla options. For example, a poorly implemented delta+gamma approach will fail badly on even simple options positions such as a ratio call spread (the sketch below shows the approximation drifting from full revaluation even for a single vanilla call).
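
    A sketch of the point, using a standard Black-Scholes call with assumed parameters (not the article's example): the delta+gamma estimate of the P&L drifts away from a full revaluation once the move is large:

    ```python
    import numpy as np
    from scipy.stats import norm

    def bs_call(s, k, t, r, vol):
        """Black-Scholes price of a European call."""
        d1 = (np.log(s / k) + (r + 0.5 * vol ** 2) * t) / (vol * np.sqrt(t))
        d2 = d1 - vol * np.sqrt(t)
        return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

    # Assumed vanilla call: spot 100, strike 100, 3 months, 20% vol, 0 rates.
    s, k, t, r, vol = 100.0, 100.0, 0.25, 0.0, 0.20

    # Finite-difference delta and gamma around the current spot.
    h = 0.01
    delta = (bs_call(s + h, k, t, r, vol)
             - bs_call(s - h, k, t, r, vol)) / (2 * h)
    gamma = (bs_call(s + h, k, t, r, vol) - 2 * bs_call(s, k, t, r, vol)
             + bs_call(s - h, k, t, r, vol)) / h ** 2

    # Compare the quadratic estimate with full revaluation for a big move.
    ds = -15.0
    approx = delta * ds + 0.5 * gamma * ds ** 2
    exact = bs_call(s + ds, k, t, r, vol) - bs_call(s, k, t, r, vol)
    print(f"delta+gamma: {approx:.2f}, full revaluation: {exact:.2f}")
    ```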

  • Indexing (eg. CAPM, yield curves)

    Virtually all instruments are correlated to one another. This permits reducing the number of “effective” instruments that need to be included in the covariance problem by linking the remaining instruments to a smaller set of factors via some additional correlation function. For example, if a number of shares all trade on the FTSE 100, then we may wish to analyse only the FTSE 100 index and have the capital asset pricing model (CAPM) relate the VaR of the specific instruments to that of the index (see the sketch below). This can considerably reduce the size of the VaR calculation. Unfortunately, some care is required here, as correlations are themselves just statistical measures with the same sort of statistical difficulties as VaR. In other words, you may be exchanging one type of difficulty for another. As usual, the correct answer will depend on your circumstances and on analysis of the use of such indexing methods.
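
    A small sketch of beta-style index mapping (simulated data; the residual/specific risk is deliberately ignored here, which is exactly the kind of shortcut that needs checking):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    # Made-up daily returns: a share driven by an index plus specific noise.
    idx = rng.normal(0, 0.010, 500)
    stock = 1.2 * idx + rng.normal(0, 0.005, 500)

    # CAPM-style beta of the share against the index.
    beta = np.cov(stock, idx)[0, 1] / np.var(idx, ddof=1)

    # Map the index VaR (5%, normal approximation, return terms) to the share.
    index_var = 1.645 * idx.std(ddof=1)
    stock_var = abs(beta) * index_var
    print(f"beta: {beta:.2f}, mapped stock VaR: {stock_var:.4%}")
    ```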

  • Accounting for future activities (eg. rebalancing)

    Standard VaR methods completely ignore the activities that, say, treasurers may carry out during the forecast period. So the VaR number assumes nothing is done, should the market move against you. In reality, portfolios are dynamically managed, and so there will be interaction that is not accounted for in the VaR. This can be quite significant.

  • Reality of real world systems, data and human factors

    It cannot be stated too strongly that the implementation of systems (both the calculators and the interfaces to databases, etc.) can give rise to very great problems indeed. For example: vendors make claims that may or may not be true; vendors, IT, traders, etc, may use the same words to mean something completely different; each of the groups involved has a different culture and objectives, and it is very rare indeed that the person at the helm understands all the major components. The list goes on ad absurdum ...

Dr Oliver Bajor is head of proprietary trading, Arbitrage Research and Trading Ltd.

*This summary is part of a more detailed article - ARTicles: A VaR Primer - which can be viewed, with other information on VaR, trading and risk management, at www.arbitrage-trading.com