Quantitative Equity Management

Will model-driven funds outperform people-driven funds?

FRANK J FABOZZI, YALE SCHOOL OF MANAGEMENT, SERGIO M FOCARDI, CAROLINE JONAS, THE INTERTEK GROUP
Originally published in the July/August 2008 issue

The classical view of financial markets holds that the relentless activity of market speculators makes markets efficient, hence the absence of profit opportunities. This view formed the basis of academic thinking for several decades starting in the 1960s. Practitioners have long held the more pragmatic view, however, that a market formed by fallible human agents offers profit opportunities arising from the many small, residual imperfections that ultimately result in delayed or distorted responses to news.

As models gain broad diffusion and are made responsible for the management of a growing fraction of equity assets, one might ask what the impact of model-driven investment strategies will be on market efficiency, price processes and performance. Intuition tells us that changes will occur. Because of the variety of modelling strategies, however, how these strategies will affect price processes is difficult to understand. Some strategies are based on reversion to the mean and realign prices; others are based on momentum and cause prices to diverge. Two broad classes of models are in use in investment management: models that make explicit return forecasts and models that estimate risk, exposure to risk factors and other basic quantities. Models that make return forecasts are key to defining an investment strategy and to portfolio construction; models that capture exposures to risk factors are key to managing portfolio risk. Note that, implicitly or explicitly, all models make forecasts. For example, a model that determines exposure to risk factors is useful insofar as it measures future exposure to risk factors. Changes in market processes come from both return forecasting and risk models. Return forecasting models have an immediate impact on markets through trading; risk models have a less immediate impact through asset allocation, risk budgeting and other constraints.

Self-referential, adaptive markets

Return forecasting models are affected by the self-referential nature of markets, which is the conceptual basis of the classical notion of market efficiency. Price and return processes are ultimately determined by how investors evaluate and forecast markets. Forecasts influence investor behaviour (hence, markets) because any forecast that allows one to earn a profit will be exploited. As agents exploit profit opportunities, those opportunities disappear, invalidating the forecasts themselves.1 As a consequence, according to finance theory, one can make profitable forecasts only if the forecasts entail a corresponding amount of risk or if other market participants make mistakes (because either they do not recognise the profit opportunities or they think there is a profit opportunity when none exists). Models that make risk estimations are not necessarily subject to the same self-referentiality.

If someone forecasts an increase in risk, this forecast does not necessarily affect future risk. There is no simple link between the risk forecasts and the actions that these forecasts will induce. Actually, the forecasts might have the opposite effect. Some participants hold the view that the market turmoil of July-August 2007, sparked by the sub-prime mortgage crisis in the United States, was made worse by risk forecasts that prompted a number of firms to rush to reduce risk by liquidating positions.

The concept of market efficiency was introduced some 40 years ago when assets were managed by individuals with little or no computer assistance. At that time, the issue was to understand whether markets were forecastable or not. The initial answer was: no, markets behave as random walks and are thus not forecastable. A more subtle analysis showed that markets could be both efficient and forecastable if subject to risk-return constraints.2 Here is the reasoning. Investors have different capabilities in gathering and processing information, different risk appetites, and different biases in evaluating stocks and sectors.3 The interaction of the broad variety of investors shapes the risk-return trade-off in markets. Thus, specific classes of investors may be able to take advantage of clientele effects even in efficient markets.4 The academic thinking on market efficiency has continued to evolve. Investment strategies are not static but change over time. Investors learn which strategies work well and progressively adopt them. In so doing, however, they progressively reduce the competitive advantage of the strategies.

The diffusion of forecasting models raises two important questions. First, do these models make markets more efficient or less efficient? Second, do markets adapt to forecasting models so that model performance decays and models need to be continuously adapted and changed? Both questions are related to the self-referentiality of markets, but the time scales are different. The adaptation of new strategies is a relatively long process that requires innovation and trial and error. The empirical question regarding the changing nature of markets has received academic attention. For example, using empirical data for 1927-2005, Hwang and Rubesam (2007) argued that momentum phenomena disappeared during the 2000-05 period. Figelman (2007), however, analysing the S&P 500 Index over the 1970-2004 period, found new evidence of momentum and reversal phenomena previously not described.

Khandani and Lo (2007) showed how the mean-reversion strategy they used to test market behaviour lost profitability over the 12-year period 1995-2007: its daily return fell from a high of 1.35% in 1995 to 0.45% in 2002 and 0.13% in 2007.
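As a purely illustrative sketch (not Khandani and Lo's actual implementation), a short-term contrarian strategy of this kind can be written in a few lines: each day, go long the previous day's relative losers and short the relative winners, with weights proportional to each stock's distance from the cross-sectional mean return. All data and parameter choices below are hypothetical.

```python
import numpy as np

def contrarian_weights(prev_returns):
    """Dollar-neutral contrarian weights in the spirit of Khandani and Lo (2007):
    long yesterday's losers, short yesterday's winners, in proportion to how far
    each stock's return fell below or rose above the cross-sectional mean."""
    n = len(prev_returns)
    return -(prev_returns - prev_returns.mean()) / n

def backtest(daily_returns):
    """daily_returns: T x N array of simple daily returns (hypothetical data).
    Returns the daily P&L of the strategy, rebalanced every day."""
    pnl = []
    for t in range(1, len(daily_returns)):
        w = contrarian_weights(daily_returns[t - 1])
        pnl.append(float(w @ daily_returns[t]))
    return np.array(pnl)

# Toy usage with simulated data (for illustration only)
rng = np.random.default_rng(0)
rets = 0.01 * rng.standard_normal((250, 50))   # 250 days, 50 stocks
print(backtest(rets).mean())                   # average daily return of the strategy
```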

Good models, bad models

Perhaps the question should be asked for every class of forecasting model: will any good model make markets more efficient? A source at a large financial firm that has both fundamental and quantitative processes said, "The impact of models on markets and price processes is asymmetrical.

[Technical] model-driven strategies have a worse impact than fundamental-driven strategies because the former are often based on trend following. Consider price-momentum models, which use trend following. Clearly, they result in a sort of self-fulfilling prophecy: momentum investors create additional momentum by bidding up or down the prices of momentum stocks.

Momentum models exploit delayed market responses. It takes 12-24 months for a reversal to play out, while momentum plays out in 1, 3, 6 and 9 months. That is, reversals work on a longer horizon than momentum and, therefore, models based on reversals will not force efficiency."
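For concreteness, the kind of cross-sectional price-momentum signal the source refers to can be sketched as follows: rank stocks on their trailing return over a formation window, skip the most recent month (a common convention to avoid short-term reversal), and hold the winners against the losers. The window lengths and the decile cut-off are assumptions, not a prescription.

```python
import numpy as np

def momentum_scores(prices, lookback=126, skip=21):
    """Cross-sectional price momentum: each stock's return over the past
    `lookback` trading days, measured up to `skip` days ago.
    prices: T x N array of prices (hypothetical data)."""
    past = prices[-(lookback + skip)]
    recent = prices[-(skip + 1)]
    return recent / past - 1.0

def momentum_portfolio(prices, top_frac=0.1):
    """Equal-weight long the top decile of momentum scores, short the bottom decile."""
    scores = momentum_scores(prices)
    n_side = max(1, int(len(scores) * top_frac))
    order = np.argsort(scores)
    w = np.zeros_like(scores)
    w[order[-n_side:]] = 1.0 / n_side    # winners
    w[order[:n_side]] = -1.0 / n_side    # losers
    return w
```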

Nevertheless, the variety of models and modelling strategies gives rise to a risk-return trade-off that investors can profitably exploit. These profitable strategies will progressively lose profitability and be replaced by new strategies, starting a new cycle.

Speaking at the end of August 2007, one source commented, "Any good investment process would make prices more accurate, but over the last three weeks, what we have learned from the newspapers is that the quant investors have strongly interfered with the price process. Because model-driven strategies allow broad diversification, taking many small bets, the temptation is to boost the returns of low-risk, low-return strategies using leverage." The source added, "Any leverage process will put pressure on prices. What we saw was an unwinding at quant funds with similar positions."

Quantitative processes and price discovery: discovering mispricings

The fundamental idea on which the active asset management industry is based is that of mispricing. The assumption is that each stock has a 'fair price' and that this fair price can be discovered. A further assumption is that, for whatever reason, stock prices may be momentarily mispriced (ie. prices may deviate from the fair prices) but that the market will re-establish the fair price. Asset managers try to outperform the market by identifying mispricings. Fundamental managers do so by analysing financial statements and talking to corporate officers; quantitative managers do so by using computer models to capture the relationships between fundamental data and prices or the relationships between prices.

The basic problem underlying attempts to discover deviations from the 'fair price' of securities is the difficulty in establishing just what a stock's fair price is. In a market economy, goods and services have no intrinsic value. The value of any product or service is the price that the market is willing to pay for it. The only constraint on pricing is the 'principle of one price' or absence of arbitrage, which states that the same 'thing' cannot be sold at different prices. A 'fair price' is thus only a 'relative fair price' that dictates the relative pricing of stocks. In absolute terms, stocks are priced by the law of supply and demand; there is nothing fair or unfair about a price.5

Stocks are mispriced not in absolute terms but relative to each other and hence to a central market tendency. The difference is important. Academic studies have explored whether stocks are mean reverting toward a central exponential deterministic trend. This type of mean reversion has not been empirically found: mean reversion is relative to the prevailing market conditions in each moment. How then can stocks be mispriced? In most cases, stocks will be mispriced through a 'random path'; that is, there is no systematic deviation from the mean and only the path back to fair prices can be exploited. In a number of cases, however, the departure from fair prices might also be exploited. Such is the case with price momentum, in which empirical studies have shown that stocks with the highest relative returns will continue to deliver relatively high returns.

One of the most powerful and systematic forces that produce mispricings is leverage. The use of leverage creates demand for assets as investors use the borrowed money to buy assets. Without entering into the complexities of the macroeconomics of the lending process underlying leveraging and shorting securities (where does the money come from? where does it go?), we can reasonably say that leveraging through borrowing boosts security prices (and deleveraging does the opposite), whereas leveraging through shorting increases the gap between the best and the worst performers (deleveraging does the opposite).

Model performance today: do we need to redefine performance?

The diffusion of computerised models in manufacturing has been driven by performance. The superior quality (and often the lower cost) of CAD products allowed companies using the technology to capture market share. In the automotive sector, Toyota is a prime example. But whereas the performance advantage can be measured quantitatively in most industrial applications, it is not so easy in asset management. Leaving aside the question of fees (which is not directly related to the investment decision-making process), good performance in asset management is defined as delivering high returns. Returns are probabilistic, however, and subject to uncertainty. So, performance must be viewed on a risk-adjusted basis.

People actually have different views on what defines 'good' or 'poor' performance. One view holds that good performance is an illusion, a random variable. Thus, the only reasonable investment strategy is to index. Another view is that good performance is the ability to optimise properly the active risk-active return tradeoff so as to beat one's benchmark. A third view regards performance as good if positive absolute returns are produced regardless of market conditions. The first view is that of classical finance theory, which states that one cannot beat the markets through active management but that long-term, equilibrium forecasts of asset class risk and return are possible. Thus, one can optimise the risk-return trade-off of a portfolio and implement an efficient asset allocation. An investor who subscribes to this theory will hold an index fund for each asset class and will rebalance to the efficient asset-class weights.

The second is the view that prevails among most traditional active managers today and that is best described by Grinold and Kahn (2000). According to this view, the market is not efficient and profitable forecasts are possible, but not for everyone (because active management is still a zero-sum game). Moreover, the active bets reflecting the forecasts expose the portfolio to 'active risk' over and above the risk of simply being exposed to the market. Note that this view does not imply that forecasts cannot be made. On the contrary, it requires that forecasts be correctly made but views them as subject to risk-return constraints. According to this view, the goal of active management is to beat the benchmark on a risk-adjusted (specifically, beta-adjusted) basis. The tricky part is: given the limited amount of information we have, how can we know which active managers will make better forecasts in the future?
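In the Grinold-Kahn framework, the quantity being optimised is the ratio of active return to active risk. A minimal calculation of the annualised information ratio against a benchmark might look like the sketch below; the return series and the monthly frequency are hypothetical.

```python
import numpy as np

def information_ratio(portfolio_returns, benchmark_returns, periods_per_year=12):
    """Active return over a benchmark divided by tracking error (both annualised),
    the headline statistic of the Grinold-Kahn framework."""
    active = np.asarray(portfolio_returns) - np.asarray(benchmark_returns)
    annual_active_return = active.mean() * periods_per_year
    tracking_error = active.std(ddof=1) * np.sqrt(periods_per_year)
    return annual_active_return / tracking_error
```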

The third view, which asserts that investors should try to earn positive returns regardless of market conditions, involves a misunderstanding. The misunderstanding is that one can effectively implement market-neutral strategies and realise a profit regardless of market conditions. A strategy that produces only positive returns regardless of market conditions is called an 'arbitrage'. Absence of arbitrage in financial markets is the basic tenet or starting point of finance theory. For example, following Black and Scholes (1973), the pricing of derivatives is based on constructing replicating portfolios under the strict assumption of the absence of arbitrage.

Therefore, the belief that market-neutral strategies are possible undermines the pricing theory on which hundreds of trillions of dollars of derivatives trading is based! Clearly, no strategy can produce only positive returns regardless of market conditions. So-called market-neutral strategies are risky strategies whose returns are said to be uncorrelated with market returns. Note, however, that market-neutral strategies are exposed to risk factors other than those to which long-only strategies are exposed.

In particular, market-neutral strategies are sensitive to various types of market 'spreads', such as value versus growth or corporate bonds versus government bonds. Although long-only strategies are sensitive to sudden market downturns, long/short strategies are sensitive to sudden inversions of market spreads. The markets experienced an example of a sharp inversion of spreads in July-August 2007 when many long/short funds experienced a sudden failure of their relative forecasts.
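Whether a long/short spread is really uncorrelated with the market is, as noted below, an empirical question. A crude first check is simply to measure the correlation between the spread's returns and the market's returns; the series names here are placeholders for whatever data one has.

```python
import numpy as np

def spread_market_correlation(long_returns, short_returns, market_returns):
    """Correlation between a long/short spread (eg. value minus growth) and the
    market: a first, crude check of whether 'market neutral' really is neutral.
    All inputs are return series of equal length; the data are hypothetical."""
    spread = np.asarray(long_returns) - np.asarray(short_returns)
    return float(np.corrcoef(spread, market_returns)[0, 1])
```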

Clearly, market neutrality implies that these new risk factors are uncorrelated with the risk factors of long-only strategies. Only an empirical investigation can ascertain whether or not this is the case. Whatever view we hold on how efficient markets are and thus what risk-return trade-offs they offer, the measurement of performance is ultimately model based. We select a positive measurable characteristic, be it returns, positive returns, or alphas, and we correct the measurement with a risk estimate. The entire process is ultimately model dependent insofar as it captures performance against the background of a global market model.

For example, the discrimination between alpha and beta is based on the capital asset pricing model. If markets are driven by multiple factors, however, and the residual alpha is highly volatile, alpha and beta may be poor measures of performance. (See Hübner 2007 for a survey of performance measures and their applicability). This consideration brings us to the question of model breakdown.
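As a concrete illustration of the model dependence just described, the alpha/beta split is typically estimated with an ordinary least-squares regression of the fund's excess returns on the market's excess returns, in the spirit of the CAPM. A minimal sketch, with all data hypothetical:

```python
import numpy as np

def capm_alpha_beta(fund_returns, market_returns, risk_free=0.0):
    """Estimate CAPM alpha and beta by OLS: regress the fund's excess returns on
    the market's excess returns. Returns (alpha per period, beta)."""
    rf = np.asarray(risk_free)
    y = np.asarray(fund_returns) - rf
    x = np.asarray(market_returns) - rf
    X = np.column_stack([np.ones_like(x), x])   # intercept (alpha) and slope (beta)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    alpha, beta = coef
    return float(alpha), float(beta)
```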

Performance and model breakdown

Do models break down? If they do, why? Is the eventuality of model breakdown part of performance evaluation? Fund-rating agencies evaluate performance irrespective of the underlying investment process; investment consultants look at the investment process to form an opinion on the sustainability of performance.

Empirically, every once in a while, assets managed with computer-driven models suffer major losses. Consider, for example, the high-profile failure of Long-Term Capital Management (LTCM) in 1998 and the similar failure of long/short funds in July-August 2007. As one source, referring to a few days in the first week of August 2007, said, "Models seemed not to be working." These failures received headline attention. Leaving aside for the moment the question of what exactly was the problem (the models or the leverage) at that time, blaming the models was clearly popular.

Perhaps the question of model breakdown should be reformulated:
 

  • Are sudden and large losses such as those incurred by LTCM or by some quant funds in 2007 the result of modelling mistakes? Could the losses have been avoided with better forecasting and/or risk models?
  • Alternatively, is every quantitative strategy that delivers high returns subject to high risks that can take the form of fat tails? In other words, are high-return strategies subject to small fluctuations in business-as-usual situations and devastating losses in the case of rare adverse events?
  • Did asset managers know the risks they were running (and thus the possible large losses in the case of a rare event) or did they simply misunderstand (and/or misrepresent) the risks they were taking?

A basic tenet of finance theory is that risk (uncertainty of returns) can be eliminated only if one is content with earning the risk-free rate that is available. In every other case, investors face a risk-return trade-off: high expected returns entail high risks. High risk means that there is a high probability of sizeable losses or a small but not negligible probability of (very) large losses. These principles form the fundamental building blocks of finance theory; derivatives pricing is based on these principles.

Did the models break down in July-August 2007?

Consider the following. Financial models are stochastic (ie. probabilistic) models subject to error. Modellers make their best efforts to ensure that errors are small, independent and normally distributed. Errors of this type are referred to as 'white noise' or 'Gaussian' errors. If a modeller is successful in rendering errors truly Gaussian, with small variance and also serially independent, the model should be safe.

However, this kind of success is generally not the case. Robert Engle received the 2003 Nobel Memorial Prize in Economic Sciences (shared with Clive Granger) partly for modelling the fact that errors are heteroscedastic; that is, for extended periods, modelling errors are large and for other extended periods, modelling errors are small.

Engle's autoregressive conditional heteroscedasticity (ARCH) models and their generalised (GARCH) extensions capture this behaviour; they do not make model errors smaller, but they predict whether errors will be large or small. The ARCH/GARCH modelling tools have been extended to cover the case of errors that have finite variance but are not normal.
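To illustrate what such a model predicts, the sketch below writes out the GARCH(1,1) conditional-variance recursion directly rather than estimating it with an econometrics package; the parameter values are illustrative assumptions, not fitted values.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1) conditional variance recursion:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    A large shock raises the next period's predicted error variance, which then
    decays only slowly: exactly the clusters of large and small errors described
    above. Parameter values are illustrative, not estimated."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                       # initialise at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```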

A general belief is that not only do errors (ie. variances) exhibit this pattern but so does the entire matrix of covariances. Consequently, we also expect correlations to exhibit the same pattern; that is, we expect periods of high correlation to be followed by periods of low correlation. Applying ARCH/GARCH models to covariances and correlations has proven to be difficult, however, because of the exceedingly large number of parameters that must be estimated. Drastic simplifications have been proposed, but these simplifications allow a modeller to capture only some of the heteroscedastic behaviour of errors and covariances.

ARCH/GARCH models represent the heteroscedastic behaviour of errors that we might call 'reasonably benign'; that is, although errors and correlations vary, we can predict their increase with some accuracy. Extensive research has shown, however, that many more variables of interest in finance show fat tails (ie. non-negligible extreme events). The tails of a distribution represent the probability of 'large' events, that is, events very different from the expectation. If the tails are thin, as in Gaussian bell-shaped distributions, large events are negligible; if the tails are heavy or fat, large events have a non-negligible probability. Fat-tailed variables include returns, the size of bankruptcies, liquidity parameters that might assume infinite values and the time one has to wait for mean reversion in complex strategies. In general, whenever there are non-linearities, fat tails are also likely to be found.

Many models produce fat-tailed variables from normal noise, whereas other models that represent fat-tailed phenomena are subject to fat-tailed errors. A vast body of knowledge is now available about fat-tailed behaviour of model variables and model errors (see Rachev, Menn, and Fabozzi 2005). If we assume that noise is small and Gaussian, predicting fat-tailed variables may be exceedingly difficult or even impossible.
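A simple way to see what 'non-negligible tails' means in numbers is to compare the probability of a 5-standard-deviation loss under a Gaussian with that under a fat-tailed Student's t distribution rescaled to the same variance; the figures below are purely illustrative.

```python
import numpy as np
from scipy import stats

# Probability of a drop worse than 5 standard deviations under a thin-tailed
# Gaussian and under a fat-tailed Student's t with 3 degrees of freedom,
# both rescaled to unit variance. Purely illustrative numbers.
df = 3
scale = np.sqrt((df - 2) / df)            # rescales t(3) to unit variance
p_gauss = stats.norm.cdf(-5.0)
p_fat = stats.t.cdf(-5.0, df=df, scale=scale)

print(f"Gaussian     : {p_gauss:.1e}")    # roughly 3e-07
print(f"Student-t(3) : {p_fat:.1e}")      # several thousand times larger
```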

The conclusion of this discussion is that what appears to be model breakdown may, in reality, be nothing more than the inevitable fat-tailed behaviour of model errors. For example, predictive factor models of returns are based on the assumption that factors predict returns. This assumption is true in general but is subject to fat-tailed inversions. When correlations increase and a credit crunch propagates to financial markets populated by highly leveraged investors, factor behaviour may reverse (as it did in July-August 2007).

Does this behaviour of model errors represent a breakdown of factor models? Hardly so if one admits that factor models are subject to noise that might be fat tailed. Eliminating the tails from noise would be an exceedingly difficult exercise. One would need a model that can predict the shift from a normal regime to a more risky regime in which noise can be fat tailed. Whether the necessary data are available is problematic.

For example, participants in this study admitted that they were surprised by the level of leverage present in the market in July-August 2007.

If the large losses at that time were not caused by outright mistakes in modelling returns or estimating risk, the question is: was the risk underestimated or miscommunicated? Here, we wish to make some comments about risk and its measurement.

Two decades of practice have allowed modellers to refine risk management. The statistical estimation of risk has become a highly articulated discipline. We now know how to model the risk of instruments and portfolios from many different angles, including modelling the non-normal behaviour of many distributions, as long as we can estimate our models. The estimation of the probability of large events, however, is by nature highly uncertain.

Actually, by extrapolating from known events, we try to estimate the probability of events that never happened in the past. How? The key statistical tool is extreme value theory (EVT). It is based on the surprising result that the distribution of extreme events belongs to a restricted family of theoretical extreme value distributions.

Essentially, if we see that distributions do not decay as fast as they should under the assumption of a normal bell-shaped curve, we assume a more perverse distribution and we estimate it. Despite the power of EVT, much uncertainty remains in estimating the parameters of extreme value distributions and, in turn, the probability of extreme events. This condition may explain why so few asset managers use EVT. A 2006 study by the authors involving 39 asset managers in North America and Europe found that, whereas 97% of the participating firms used value at risk as a risk measure, only 6% (or 2 out of 38 firms) used EVT (see Fabozzi, Focardi, and Jonas 2007).
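A standard way to apply EVT is the peaks-over-threshold method: fit a generalised Pareto distribution to the losses that exceed a high threshold and use the fitted tail to extrapolate beyond the observed data. The sketch below uses scipy for the fit; the threshold choice and the simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def tail_prob_evt(losses, threshold, level):
    """Peaks-over-threshold EVT estimate of P(loss > level), with level > threshold.
    Fit a generalised Pareto distribution to the exceedances over `threshold` and
    combine it with the empirical probability of exceeding the threshold."""
    losses = np.asarray(losses)
    exceedances = losses[losses > threshold] - threshold
    shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)  # location pinned at 0
    p_exceed = len(exceedances) / len(losses)
    return p_exceed * stats.genpareto.sf(level - threshold, shape, loc=loc, scale=scale)

# Toy usage with simulated fat-tailed losses (illustration only)
rng = np.random.default_rng(1)
sample_losses = stats.t.rvs(df=3, size=5000, random_state=rng)
print(tail_prob_evt(sample_losses, threshold=2.0, level=6.0))
```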

Still, some events are both too rare and too extreme either to be estimated through standard statistical methods or to be extrapolated from less extreme events, as EVT allows one to do. Nor do we have a meta-theory that allows one to predict these events.6 In general, models break down because processes behave differently today from how they behaved in the past. We know that rare extreme events exist; we do not know how to predict them and the assessment of the risk involved in these extreme events is highly subjective.

We can identify, however, areas in which the risk of catastrophic events is high. Khandani and Lo (2007) suggested that it was perhaps a bit "disingenuous" for highly sophisticated investors using state-of-the-art strategies to fail to understand that using six to eight times leverage, just to outperform the competition, might signal some form of market stress. The chief investment officer at one firm commented, "Everyone is greedy, and they have leveraged their strategies up to the eyeballs."

The diffusion of model-based equity investment processes

The introduction of new technologies typically creates resistance because these technologies pose a threat to existing skills. This reaction occurs despite the fact that the introduction of computerised processes has often created more jobs (albeit jobs requiring a different skill set) than it destroyed. Financial engineering itself opened whole new lines of business in finance. In asset management, the human factor in adoption (or resistance to it) is important because the stakes in terms of personal reward are high.

A major factor affecting the acceptance of model-based equity investment processes should be performance. Traditionally, asset managers have been rewarded because, whatever their methods of information gathering, they were credited with the ability to combine information and judgment in such a way as to make above-average investment decisions.

We would like to know, however, whether above-average returns, from people or models, are the result of luck or skill. Clearly, if exceptional performance is the result of skill rather than luck, performance should be repeatable.

Evidence here is scant; few managers can be backtested for a period of time sufficiently long to demonstrate consistent superior performance. Model-based active equity investing is a relatively new discipline. We have performance data on perhaps 10 years of active quantitative investing, whereas we have comparable data on traditional investing for 50 years or more.

Of course, we could backtest models for long periods, but these tests would be inconclusive because of the look-ahead bias involved. Many people believe that model-driven funds deliver better returns than people-driven funds, and do so more consistently. Sheer performance, however, is not the only factor affecting the diffusion of models.

This article is an extract taken from "Challenges in Quantitative Equity Management" published by the Research Foundation of CFA Institute.

The full report can be downloaded at http://www.cfapubs.org/toc/rf/200820082

Biography

FRANK J. FABOZZI

Frank J Fabozzi, CFA, is Professor in the Practice of Finance and Becton Fellow in the School of Management at Yale University and editor of the Journal of Portfolio Management.

SERGIO M. FOCARDI

Sergio M Focardi is a founding partner of The Intertek Group, where he is a consultant and trainer on financial modelling.

CAROLINE JONAS

Caroline Jonas is a founding partner of The Intertek Group, where she is responsible for research projects. She is a co-author of various reports and articles on finance and technology.

Notes

  1. Self-referentiality is not limited to financial phenomena. Similar problems emerge whenever a forecast influences a course of action that affects the forecast itself. For example, if a person is told that he or she is likely to develop cancer if he or she continues to smoke and, as a consequence, stops smoking, the forecast also changes.
  2. Under the constraint of absence of arbitrage, prices are martingales after a change in probability measure. (A martingale is a stochastic process, that is, a sequence of random variables, such that the conditional expected value of an observation at some time t, given all the observations up to some earlier time s, is equal to the observation at that earlier time s.) See the original paper by LeRoy (1989) and the books by Magill and Quinzii (1996) and Duffie (2001).
  3. To cite the simplest of examples, a long-term bond is risky to a short-term investor and relatively safe for a long-term investor. Thus, even if the bond market is perfectly efficient, a long-term investor should overweight long-term bonds (relative to the capitalisation of bonds available in the market).
  4. "Clientele effects" is a reference to the theory that a company's stock price will move as investors react to a tax, dividend or other policy change affecting the company.
  5. Discounted cash flow analysis yields a fair price, but it requires a discount factor as input. Ultimately, the discount factor is determined by supply and demand.
  6. A meta-theory is a theory of the theory. A familiar example is model averaging. Often, we have different competing models (ie. theories) to explain some fact. To obtain a more robust result, we assign a probability to each model and average the results. The assignment of probabilities to models is a meta-theory.