With such a strong consensus among regulators, the market appears to have two options: encourage industry initiatives to tackle risk management, or face regulation that will dampen the creative nature of the structured product market.

Hedge funds have already addressed this issue to a degree by adopting sophisticated risk management practices and hiring dedicated staff. A recent study by Mercer Oliver Wyman found that 85% of the surveyed hedge funds have added a risk officer. This is borne out by our own experience: we are increasingly meeting dedicated hedge fund staff who hold advanced degrees in mathematics and finance.

As well as hiring experienced staff to handle the more complex risk control measures, hedge funds are also availing themselves of the many software packages available to compute risk. The problem, however, is determining the methodologies that match the individual portfolio characteristics. For funds trading across different instrument types, the risk measures must be consistent across those instruments; using a different model for each instrument type will not permit accurate risk budgeting. At the moment, risk control is chiefly viewed as a back-office cost centre. To avoid regulatory intervention, this view should change so that risk control is instead seen as a cost-avoidance activity: spending that averts both trading losses and governmental regulation. For that to happen, expenditure on risk control needs to be on a par with the front office.

We believe that regulators are fully aware of the dynamic nature of, and benefits accruing from, financial engineering and hedge funds, and that providing governmental oversight would be a daunting task. It would be more efficient to ensure market discipline, as market participants have the information to pinpoint areas of risk within current market activities far in advance of regulators. With an adequate framework for analysis, financial firms will see risks entering the system well before regulators do.

Here we illustrate some of the techniques hedge funds can employ to satisfy regulatory concern. Some of the tools firms currently use rest on assumptions that cause risk to be computed inaccurately. We will show how large a difference small changes in those assumptions can make, and demonstrate how, with market discipline, regulation can be avoided.

Value at Risk (VaR) is a powerful measure when correctly computed and used appropriately. It was initially developed as a measure of catastrophic risk for highly levered financial firms such as banks and insurance companies. For a portfolio, VaR is the loss that will be exceeded only on a given fraction of occasions (for example, 1% of trading days); this is the loss at the tail of the distribution.
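Concretely, VaR at a given confidence level is simply a quantile of the loss distribution. A minimal sketch follows, with a purely hypothetical P&L history (the portfolio and its $1m daily volatility are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily P&L history for a portfolio, in dollars
# (2,500 days, roughly ten years of trading) -- invented data.
pnl = rng.normal(loc=0.0, scale=1_000_000, size=2_500)

def value_at_risk(pnl, confidence=0.99):
    """The loss exceeded only on (1 - confidence) of occasions:
    the tail quantile of the loss distribution."""
    losses = -pnl                        # a loss is negated P&L
    return np.quantile(losses, confidence)

var_99 = value_at_risk(pnl)              # roughly $2.3m for this history
```

Note that nothing in the function assumes a particular distribution; the Gaussian shortcut discussed next replaces the empirical quantile with a formula.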

It is a common practice to assume that the loss distribution is Gaussian. This assumption is made to save computation time when calculating Marginal VaR (MVaR): the VaR added to the portfolio by one additional dollar of exposure. Since adding one dollar of exposure at a time is impractical, an adequate method is to determine the change in VaR as each exposure is added to the portfolio. The sum of the marginal VaRs adds up to the total VaR calculated on the entire portfolio.
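Under the Gaussian assumption this decomposition has a closed form: each exposure's marginal VaR, scaled by its size, gives a component VaR, and the components sum exactly to the portfolio total (the Euler allocation). A sketch with an invented three-asset portfolio and covariance matrix:

```python
import numpy as np
from statistics import NormalDist

# Illustrative 3-asset portfolio: dollar exposures and a covariance
# matrix of daily returns. All numbers are hypothetical.
w = np.array([4e6, 3e6, 3e6])
cov = np.array([[0.0004, 0.0001, 0.0000],
                [0.0001, 0.0009, 0.0002],
                [0.0000, 0.0002, 0.0016]])

z = NormalDist().inv_cdf(0.99)           # 99% Gaussian quantile (~2.33)
sigma_p = np.sqrt(w @ cov @ w)           # portfolio dollar volatility
total_var = z * sigma_p

# Euler allocation: marginal VaR per dollar of each exposure,
# times the exposure size, gives its component VaR.
marginal = cov @ w / sigma_p
component_var = z * w * marginal

# The components sum exactly to the total portfolio VaR.
assert np.isclose(component_var.sum(), total_var)
```

This additivity is precisely what makes the Gaussian shortcut so fast, and why its errors flow directly into the risk budget when the assumption fails.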

Many VaR calculations in use today simply assume that the losses are Gaussian. With this assumption, each exposure has its MVaR calculated and rescaled from the total portfolio VaR. This dramatically increases the speed of computation at the cost of introducing several estimation errors. When doing detailed analysis and stress testing, the estimation errors are compounded and do not provide an accurate risk budget.

We cannot assume that returns and losses are normally distributed, because exposures exhibit strong non-Gaussian behaviour. The return/loss distribution often exhibits tails that are clearly fatter than those of a normal distribution. The actual distribution is often skewed (to the left or right depending on the type of portfolio – to the left in the case of equity portfolios), unlike normal distributions, which are symmetric about the mean. Returns and losses often exhibit conditional heteroskedasticity – that is, some degree of predictability – which violates the independence assumption behind the normal model.
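The consequence of fat tails is easy to demonstrate: fit a normal distribution to fat-tailed losses and the Gaussian 99% quantile understates the true tail. A sketch using simulated Student-t losses (purely illustrative, not market data):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
# Simulated daily losses with fat tails (Student-t, 5 degrees of
# freedom) -- a stand-in for real, non-Gaussian market losses.
losses = rng.standard_t(5, size=100_000)

# Gaussian shortcut: fit mean/std, read off the 99% quantile.
z = NormalDist().inv_cdf(0.99)
gaussian_var = losses.mean() + z * losses.std()

# Empirical VaR: the actual 99th-percentile loss.
empirical_var = np.quantile(losses, 0.99)

# With fat tails the Gaussian shortcut understates the tail loss.
assert empirical_var > gaussian_var
```

At higher confidence levels (99.9% and beyond) the understatement grows still larger, which is exactly the region catastrophic-risk measures are meant to capture.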

The shortcut of assuming that returns are normally distributed is no longer necessary. There are many techniques to compute VaR accurately, and the cost of computer equipment has dropped to the point where a low-cost grid of computers can do the detailed computation in a reasonable amount of time. The industry needs to make its VaR computations as accurate as possible, largely because the inputs to these calculations are themselves estimates. VaR computations use the volatility and correlation of the instruments, and these are computed from historical prices. If not enough historical data is available, or the time frame is not judiciously selected, important events will not be represented in the estimates. As we are often working with new financial instruments, it is not always possible to obtain enough historical data.

The VaR is an easily explainable concept and has become the risk measure of choice for many hedge funds and market participants. VaR is extremely good for calculating catastrophic losses but is not suitable for strategic, liquidity or model losses, which unfortunately are the additional risks it is commonly being used to measure.

One example of a strategic risk not captured by VaR arises in pairs trading: if one instrument of the pair defaults, the risk is far greater than its MVaR would suggest. For liquidity risk, VaR has no way to reflect a situation where a thinly traded instrument suddenly has a wall of sell orders in front of it. The historical volatility and correlation give no hint that the risk is higher when an event significantly increases the volume.

At its base, VaR relies on historical prices. For some instruments these are market prices while for others the prices are created by models. The model prices are estimations and do not necessarily capture all factors for an exposure. Small differences in a calm market are not usually a problem. The model errors may become measurable, and problematic, during a period of market perturbation. Once again, this would be an event that would stress the model. If the model has errors, the risk measures also have errors.

Many limitations of the VaR measure can be overcome by performing stress tests on the input variables of the pricing and risk models. This is in line with the suggestion from regulators that firms include stress testing in their risk procedures. Stress testing simulates out-of-the-ordinary events to gauge their effect on the risk computations and portfolio performance.

The risk model should be run through numerous scenarios that examine the results when there are changes in, among others, interest rates, FX rates, macroeconomic variables and other factors suitable for the portfolio. In the first instance, one factor should be changed while holding all others constant. This will help determine the sensitivities of a portfolio to a single factor. When these sensitivities are determined, the stress tests should change several factors at the same time to determine the effect of more complex market behaviour that could have a non-linear result on the portfolio.
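The procedure above can be sketched with a toy revaluation function. All sensitivities and the cross term below are invented for illustration; a real implementation would reprice the portfolio under each scenario rather than rely on linear sensitivities:

```python
from math import prod

# Hypothetical portfolio sensitivities (dollar P&L per unit move) --
# every number here is invented for illustration only.
sensitivities = {
    "rates_bp":   -25_000,   # P&L per +1bp parallel rate move
    "fx_pct":      80_000,   # P&L per +1% move in the FX rate
    "equity_pct": 150_000,   # P&L per +1% equity index move
}

def revalue(shocks, cross_gamma=-2_000):
    """Linear P&L for each shocked factor, plus a crude cross term so
    that joint shocks are not just the sum of single-factor results."""
    pnl = sum(sensitivities[f] * s for f, s in shocks.items())
    if len(shocks) > 1:                  # simple non-linear interaction
        pnl += cross_gamma * prod(shocks.values())
    return pnl

# Step 1: one factor at a time, holding all others constant.
single = {f: revalue({f: 10 if "bp" in f else -5}) for f in sensitivities}

# Step 2: a joint scenario -- rates up 50bp while equities fall 10%.
joint = revalue({"rates_bp": 50, "equity_pct": -10})
```

The point of the cross term is that `joint` differs from the sum of the two single-factor results, which is exactly the non-linear behaviour that multi-factor scenarios are meant to expose.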

Stress tests should be run on portfolios organized by counterparty and this will highlight the potential problems with each counterparty. Once the sensitivities are determined, market conditions can be monitored and proactive action can be taken.

Trading, especially for prime brokers, is done with many parties following a similar strategy. Stress tests should also be completed on all portfolios aggregated together, to determine the results of many counterparties making the same trading decisions at the same time. Several counterparties simultaneously closing out of a position will have a greater market impact, which may affect their ability to cover leverage or collateral.

An aggregate stress test requires that you have a portfolio risk model which correctly computes the marginal VaRs and is consistent across different exposure types. Ideally, the model will have liquidity, market and credit risk components that are consistent across instruments.

Stress testing an aggregation of all portfolios is a formidable task that will require efficient algorithms and serious computing resources. It is essential that shortcuts are not taken when computing these measures, or the risk budgeting for counterparties will not be correct. The reason for these detailed computations is to determine which counterparties expose you to the most risk. If gross approximations are made to speed up the computations, the results can fail to highlight the true risks, and action will then be taken on the wrong portfolios.

Selecting the correct factors and bounds for stress testing requires some knowledge of the strategy of the portfolio. It is not always possible to be aware of a counterparty's strategy as they guard this as a proprietary advantage. Without some idea of the strategy and a full list of positions, it will be difficult to create a robust stress test for each counterparty. In the case of hedge funds, their prime broker will have a more complete list of total positions and could run a more thorough stress test.

Prime brokers have the information to provide a high degree of market discipline. However, as the Federal Reserve points out, the competitive pressures in the prime brokerage market have prevented the brokers from taking that lead. It may be better for the prime brokers to pull back and ensure that high standards of transparency are provided to create a detailed analysis of the risks in a hedge fund portfolio. This is much more palatable than some other suggestions of creating central clearing houses of hedge fund holdings.

The Federal Reserve has stated that risk computations for structured products are not accurately performed. This can stem from the fact that many risk practitioners treat structured products as if they were bonds with unusual coupon payments. Such approximations create an inaccurate view of the risk profile of a structured product, and they occur across a number of instruments. As an example, drawdown loans, a seemingly simple instrument, have a number of special conditions that must be captured to price them accurately. Typically, just before default, a company will draw down a portion of the loan, and it is necessary to capture this behaviour when running the simulations; the percentage and timing of the drawdown must also be modelled.

Accurate and complete risk modelling for structured products is performed through running two sets of Monte Carlo simulations. The first one does the pricing simulations for the complex instrument and cash flow calculations. The subsequent Monte Carlo run uses these prices and paths as inputs for computing risk. There might also be additional computations to determine repayment rates, recovery rates and other factors used in the pricing of the instrument. There is a computational cost for accurate results but, as before, with low cost grid computing systems the accuracy can be provided at a reasonable cost and time. Better modelling of the structured products in risk calculations, coupled with stress testing, will provide additional information about their risk characteristics.
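A minimal sketch of the two-stage approach follows. The payoff is deliberately simplified (principal plus 50% participation in an equity call, standing in for a real cash-flow model), and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def price_note(spot, vol=0.2, rate=0.03, t=1.0, strike=100.0, n_paths=5_000):
    """Stage 1: Monte Carlo price of a toy structured note --
    principal plus 50% participation in an at-the-money call."""
    z = rng.standard_normal(n_paths)
    st = spot * np.exp((rate - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)
    payoff = 100.0 + 0.5 * np.maximum(st - strike, 0.0)
    return np.exp(-rate * t) * payoff.mean()

# Stage 2: a second Monte Carlo over hypothetical 10-day market moves,
# repricing the note in each scenario to build a loss distribution.
base = price_note(spot=100.0)
shocked_spots = 100.0 * np.exp(0.2 * np.sqrt(10 / 252)
                               * rng.standard_normal(200))
losses = np.array([base - price_note(s) for s in shocked_spots])
var_99 = np.quantile(losses, 0.99)       # 10-day 99% VaR of the note
```

The nesting (a pricing simulation inside each risk scenario) is what makes these computations expensive, and why grid computing is the natural way to keep run times reasonable.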

Throughout this article risk computations have been referenced in terms of a single counterparty. This is the case for the classic sell-side and buy-side trading partners, but with hedge funds and their prime brokers there are the additional parties of the executing broker or novation party that provide an avenue for risk. It would be prudent to treat the executing broker or novation party as a counterparty and perform stress tests as if they were a direct trading partner. In the case of a novation, the third party may be adding a large amount of risk to their own book, to which you are then exposed.

It will be difficult to have complete information about these third parties, although there are discussions in the industry around creating a central clearing house of positions to compute counterparty risk statistics, or having each party's prime broker provide some indication of its risk profile. Both of these ideas would be difficult to implement and would require a regulator to act as the clearing house.

Currently, operational risk is treated as a set of procedures that are modified as new negative events occur and are resolved. These tend to be workflow-centred sets of rules. Researchers at several consulting companies and universities are looking into methods of bucketing and profiling operational risk.

The regulators are also increasingly driving the industry to create robust risk budgeting systems. When all aspects of their suggestions are in place, there will be accurate risk measures for each counterparty, trading group and the aggregation of all groups. The aggregation will capture the non-linear nature of some trading activity to ensure that simple measures are not viewed in isolation. Risk budgeting, based on accurate VaR calculation, will provide a risk adjusted analysis of return. This will provide financial institutions with a tool that will show where they are allocating their risk and allow them to determine if it is worth the return and the expended regulatory capital. The stress tests are more robust than a single point risk measure and can provide a greater level of understanding about the risks that banks have on their balance sheet. With this knowledge and prudent actions, market discipline will negate the necessity for regulation.


## Commentary

## Issue 149
