Financial institutions, especially banks, are essentially in the business of risk taking. Whether they are lending to a client, issuing bonds in the capital market or taking positions in foreign currency, they are always exposed to different types of risk. Over the years, however, as banks ventured into new business lines and more complex instruments were developed, it became imperative for risk managers to obtain a useful measure of market risk: one that not only aggregates risk across the asset classes in a portfolio but is also easily comprehensible to senior management. To obtain such a measure, a new concept was developed at JP Morgan in 1989. At the end of each business day, its then president, Dennis Weatherstone, required a one-page report summarizing the company's exposure to moves in the market and a reliable estimate of potential losses over the next 24 hours. This aggregate measure of risk has since gained immense popularity and has come to be known as Value at Risk (VaR).
VaR has since become an industry standard, and the Bank for International Settlements now permits banks to use their internal VaR models for calculating market risk under the recently ratified Basel II Accord.
In simple terms, VaR is the maximum estimated loss, measured in currency units, that an asset or a portfolio of assets is likely to suffer over a given time horizon, at a level of confidence specified by the decision-maker.
Computing VaR requires two input decisions: the confidence level and the time horizon. The choice of these parameters usually depends on the purpose and usage of VaR. A confidence level of 95 per cent or 99 per cent is typical, and the time horizon is usually taken as the period over which the portfolio remains unchanged.
There are three popular methods available for estimating VaR. As we shall see later, all three have their own merits and demerits, and no one method can be declared the best. As a matter of fact, a few of the more sophisticated banks use more than one of these methods at a time to compare and contrast their results.
Historical simulation method: As the name suggests, this method is based on historical data. It is a simple, empirical approach requiring few statistical assumptions. It involves using historical changes in market rates and prices to construct a distribution of potential future profits and losses. In its most basic form, this method calculates VaR in the following steps:
a) For each instrument, determine the market factors that influence it and collect historical data on each factor for the last N days (say, 100 days or one year). For example, for an exposure in euros, the market factor affecting the position would be the EUR/PKR exchange rate.
b) Calculate the percentage change in these factors over the pre-determined time horizon. For a one-day VaR, we would compute daily percentage changes in the market factors.
c) Construct N hypothetical values of each market factor by applying each historical change to the current value. For instance, if the change in the EUR/PKR exchange rate between two past observations is 0.089 per cent and the current rate is 78.45, one hypothetical rate would be 78.52 (78.45 + 78.45 x 0.089%).
d) Using these N hypothetical values, the asset or portfolio is marked to market and a distribution of N profits and losses is obtained. The distribution is then sorted in ascending order and, at a 95 per cent confidence level, we look for the return below which five per cent of the observations lie. If N is 100, five per cent of the observations is five, so we take VaR as the fifth worst loss. This gives the maximum loss we would expect to suffer in 95 out of 100 days.
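The four steps above can be sketched in a few lines of Python. The EUR 1 million position and the simulated rate history are illustrative assumptions standing in for real market data:

```python
import numpy as np

# Hypothetical example: 101 past daily EUR/PKR rates. These are simulated
# here purely for illustration; in practice they would come from a market
# data feed (step a).
rng = np.random.default_rng(0)
rates = 78.45 * np.cumprod(1 + rng.normal(0, 0.005, 101))

position_eur = 1_000_000          # assumed exposure of EUR 1 million
current_rate = rates[-1]

# Step b: daily percentage changes in the market factor
pct_changes = np.diff(rates) / rates[:-1]

# Step c: N = 100 hypothetical future rates built from historical changes
hypothetical_rates = current_rate * (1 + pct_changes)

# Step d: mark the position to market under each scenario, sort the P&L
pnl = position_eur * (hypothetical_rates - current_rate)
pnl_sorted = np.sort(pnl)         # ascending: worst losses first

# 95% VaR with N = 100 scenarios is the fifth worst loss (index 4)
var_95 = -pnl_sorted[int(0.05 * len(pnl_sorted)) - 1]
print(f"1-day 95% historical-simulation VaR: PKR {var_95:,.0f}")
```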
Variance Covariance Method: This is perhaps the most widely used method. Its underlying assumption is that returns on assets are normally distributed and that the relationships between assets (correlations) are constant.
The key inputs for this approach are the portfolio standard deviation and the correlations between assets. Since the approach assumes a normal distribution for asset returns, the statistical properties of that distribution are applied in calculating VaR. One such property is that outcomes at or below 1.65 standard deviations under the mean occur only five per cent of the time. Hence, at a 95 per cent confidence level, VaR is calculated as 1.65 times the portfolio standard deviation. Multipliers for other confidence levels (90 per cent, 99 per cent, etc.) can be read off the normal distribution table.
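A minimal sketch of the calculation, assuming a hypothetical two-asset portfolio with made-up volatilities and a 0.3 correlation:

```python
import numpy as np

# Hypothetical two-asset portfolio: position values, assumed daily return
# volatilities, and an assumed correlation (illustrative, not market data).
values = np.array([600_000.0, 400_000.0])   # PKR value of each position
vols   = np.array([0.012, 0.018])           # daily return std deviations
corr   = np.array([[1.0, 0.3],
                   [0.3, 1.0]])

# Covariance matrix of returns, then portfolio std deviation in PKR
cov = np.outer(vols, vols) * corr
portfolio_sigma = np.sqrt(values @ cov @ values)

# 95% one-day VaR = 1.65 x portfolio standard deviation
var_95 = 1.65 * portfolio_sigma
print(f"1-day 95% variance-covariance VaR: PKR {var_95:,.0f}")
```

For a 99 per cent confidence level, the 1.65 multiplier would simply be replaced by 2.33.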
Monte Carlo Simulation Method: This methodology has a number of similarities to the historical simulation method. The main difference is that, rather than using actual historical changes in the market factors to generate the N hypothetical portfolio profits or losses, one chooses a statistical distribution that is believed to adequately approximate the possible changes in those factors.
Theoretically, any appropriate distribution can be chosen, although most practitioners use the normal distribution, as its parameters are easy to estimate and interpret. The lognormal distribution is also used in some cases.
After estimating the parameters of the chosen distribution, a random number generator is used to generate thousands of hypothetical changes in the values of each market factor. From here onwards, VaR is determined in the same way as was done in the historical simulation method.
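A minimal sketch, assuming a single market factor (the EUR/PKR rate) and normally distributed daily returns with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-factor example. The mean and std deviation below are
# assumed for illustration; in practice they are estimated from history.
current_rate = 78.45
position_eur = 1_000_000
mu, sigma = 0.0, 0.005            # assumed daily return parameters

# Generate thousands of hypothetical one-day changes in the market factor
n_scenarios = 10_000
simulated_returns = rng.normal(mu, sigma, n_scenarios)
simulated_rates = current_rate * (1 + simulated_returns)

# From here on, identical to historical simulation: mark to market,
# sort the P&L distribution, and read off the fifth percentile
pnl = np.sort(position_eur * (simulated_rates - current_rate))
var_95 = -pnl[int(0.05 * n_scenarios)]
print(f"1-day 95% Monte Carlo VaR: PKR {var_95:,.0f}")
```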
With three methods from which to choose, the obvious question is: which is best? There is no easy answer; the right choice depends on the specific considerations and constraints of the institution calculating VaR. A brief comparison of the methods is given below:
Historical simulation approach: Merits: This method is simple to understand and explain, as it makes few statistical assumptions about the distribution of returns. Since it is based on actual historical records, it gives a realistic picture by capturing any extreme or major market event that took place during the period for which the data is collected.
Demerits: The principal difficulty in implementing historical simulation is that it requires a time series of the relevant market factors covering the last N days. This can pose a problem in markets where reliable data are hard to obtain. Another disadvantage is that the N days considered for the calculation may be atypical due to certain market events.
Variance covariance approach: Merits: The easy availability of required data makes VaR computation relatively easy, and hence it is the most widely used method of the three.
Demerits: Despite its ease of calculation, it may be difficult to explain to senior management because of its reliance on the statistical properties of the normal distribution. Another major drawback is the assumption of normality of asset returns, which does not always hold. Also, correlations between assets may not be stable, particularly during a major event such as a stock market crash.
Monte Carlo simulation approach: Merits: In terms of precision, it is perhaps the most effective of all methods, especially when more complex instruments are involved. It is also very flexible, as it does not tie the risk manager to any single distributional assumption about asset returns: any distribution believed to fit the market factors can be used.
Demerits: The procedure for Monte Carlo simulation can be quite complex and time consuming, requiring costly expertise and computing infrastructure. This makes it even more difficult to explain to senior management. Also, the distribution chosen by risk managers may not turn out to reflect the portfolio returns accurately.
Despite its worldwide popularity and usage as an effective risk management tool, VaR is not a panacea. Hence, it should not be relied upon on a stand-alone basis. Banks that actively implement VaR methodologies supplement their calculations with stress testing and back testing.
Stress testing is a procedure that addresses the question of how large losses can become under non-normal market conditions, when VaR is exceeded. That is, it investigates the effect of extreme market conditions on the portfolio. If the effects are found to be unacceptable, the risk manager needs to revise the portfolio's strategy or composition.
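As an illustration, the revaluation step of a simple stress test might look like the following. The position size and shock magnitudes are purely hypothetical:

```python
# Hypothetical stress test: shock the EUR/PKR rate by extreme moves and
# revalue the position. Shock sizes are illustrative assumptions, not
# calibrated scenarios.
current_rate = 78.45
position_eur = 1_000_000

stress_shocks = {"10% depreciation": -0.10,
                 "20% depreciation": -0.20,
                 "severe crisis scenario": -0.35}

losses = []
for name, shock in stress_shocks.items():
    stressed_rate = current_rate * (1 + shock)
    loss = position_eur * (current_rate - stressed_rate)
    losses.append(loss)
    print(f"{name}: loss of PKR {loss:,.0f}")
```

If any of these losses is judged unacceptable, the position would be reduced or hedged.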
On the other hand, back testing is the process of comparing the losses predicted by a VaR model with those actually experienced over the testing period; it tests the effectiveness and accuracy of the VaR calculations. If a model were completely accurate, actual losses would exceed the VaR estimate with exactly the frequency implied by the confidence level used in the calculation.
For instance, if a VaR of $10 million was calculated at a 95 per cent confidence level, we would expect the actual losses over the testing period to exceed this amount only five per cent of the time.
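This comparison can be sketched as follows. The P&L series is simulated here for illustration, whereas in practice it would be the desk's actual daily results:

```python
import numpy as np

# Hypothetical back test: compare a year of daily P&L against a fixed
# $10 million VaR estimate at 95% confidence.
rng = np.random.default_rng(7)
var_estimate = 10_000_000
daily_pnl = rng.normal(0, 6_000_000, 250)   # simulated one-year P&L series

# Count the "exceptions": days on which the loss exceeded the VaR estimate
exceptions = int(np.sum(daily_pnl < -var_estimate))
exception_rate = exceptions / len(daily_pnl)

# An accurate 95% model should be breached on roughly 5% of days
print(f"{exceptions} exceptions in {len(daily_pnl)} days "
      f"({exception_rate:.1%} vs. 5% expected)")
```

A much higher exception rate would suggest the model understates risk; a much lower one, that it is overly conservative.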