Smart Alpha: Smart Beta's Smarter Cousin

Market Efficiency: Says Who!

The case against smart alpha is tied to the case against any sort of alpha (a return greater than what’s expected based on market performance and the level of risk that is assumed). It’s the idea that stock prices reflect all available information and that absent occasional instances of luck, nobody can consistently earn excess returns. Adherents to this idea go on to argue that money management is useless, that asset management fees are rip-offs, and that everybody should passively invest in the SPDR S&P 500 ETF ($SPY) and be done with it.

Interestingly, they never explain what makes this a passive choice. It seems to me like an active bet on U.S. large-cap value-momentum stocks, in contrast to what you might get by picking up the smaller-cap iShares Russell 2000 ETF ($IWM), more complete size-oriented market exposure through the iShares Russell 3000 ETF ($IWV) or the iShares S&P 1500 ETF ($ITOT), or a WisdomTree or RAFI ETF that strips the momentum bias out of the market-cap-weighted indexes. (I can go on and on, but I think you’re likely getting the point. “Passive investing” doesn’t exist, unless you want to apply the label to stuffing money under the mattress). The idea of “passive” investing is useful only to fund companies pushing ETFs based on the most well-known indexes or to researchers trying to explain why THEY haven’t been able to come up with a way to beat the market.

Let’s try this from a different angle. It’s impossible for everybody to be above average. So, the passive advocates argue, focus on average.

Yes, that is true. The market is a zero-sum game and ultimately, it is all about being average. But that applies only to the total of the gazillion investors who make up the whole market, or as a former boss once put it, God’s investment portfolio. That doesn’t prevent you from trying to be one of those who is above average. Anybody who thinks this is unreasonable because there’s no room for everybody to do it really needs to get out more. Trust me, most of the world is not going to strive to be above average, at anything.

I’ve stalled long enough. It’s now time to get to the most indelicate argument, one hinted at above, which sounds nasty but is really the one that’s most valid. I’ll address it to the many who can and do publish articles and studies demonstrating money management’s historic lack of success. The logic of this “research” commits a well-known and well-established logical fallacy, argumentum ad ignorantiam, the appeal to ignorance: a proposition cannot be deemed false merely because it hasn’t been proven true. In other words, these studies prove only that the authors (and those with whom they worked and whom they may have surveyed) have been unable to figure out how to generate alpha. They cannot presume that others are unable to do so.

We’re Already Seeing Some Market Inefficiency and Alpha

Screening the Morningstar.com database, I found that out of 8,428 U.S. open-end equity mutual funds, 3,363 have succeeded in generating three-year alpha above zero. Of those, 2,038 generated alphas at an annual rate above 1%. Going to the Portfolio123 ETF Screener and searching among U.S. equity ETFs that openly strive to improve on plain-vanilla equity indexes (those whose Method is listed as “Quant Model”), I saw that 14 out of 109 have successfully generated positive five-year alpha, and that 12 out of 68 so-called Smart Beta (“Special Weights”) ETFs also generated positive alpha.

OK. I get it. Those are three- and five-year track records. What about 20-year histories? What about 50-year histories? What about 346-year histories? I don’t know. Ask me in 20 years, in 50 years (or rather, ask my grandkids), or in 346 years. We can only work with what we have. And what we have is a new set of analytic tools that enables investment strategists to work in ways that were impossible or largely inaccessible to the generations of researchers and money managers whose historic performance woes have been well documented. The situation is analogous to human flight. It was well known and conclusively proven to have been absolutely impossible – until barely a century ago, when somebody figured out that wing surfaces should be curved. The investment community’s curved-wing innovation consists of modern databases and the analytical platforms that empower us to model using the data.

Consider an idea as simple as this: I want stocks for which the price/earnings (PE) ratio is less than the average of other companies in the same industry. Imagine what it would have taken for Graham & Dodd to crunch numbers and arrive at an answer. Start by imagining the nightmarish task of even collecting the numbers, since they’d probably have had to wait for each company to mail its financials to them, probably by third-class mail (as was typical among companies as recently as the early 1980s, when I started working at Value Line). And once they finished (ugh, grunt), they’d have to almost immediately start over as prices move and new earnings reports are issued. Note, too, that they’d need to sort, identify, collect, and crunch PEs for every company in the same industry. And all that is for just one little, simple, measly relative PE ratio. I can’t imagine what it would take to add consideration of sales growth, margins, turnover, balance sheets, earnings quality, return on investment, and so forth. Note, too, that a lot of academic research is based on annual data, which is stale for most of the year, and on whatever manual crunching PhD students can be browbeaten into doing.
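Today, that same comparison is a few lines of code run against a fundamentals database. Here’s a minimal sketch in Python of the relative-PE idea; the file and column names are hypothetical stand-ins for whatever data source you actually use (on Portfolio123 the same idea would be expressed in screening rules rather than code):

```python
import pandas as pd

# Hypothetical snapshot: one row per stock with columns
# 'ticker', 'industry', 'price', and 'eps_ttm' (trailing-12-month EPS).
fundamentals = pd.read_csv("fundamentals.csv")

# Each stock's PE, skipping companies with no positive earnings.
valid = fundamentals[fundamentals["eps_ttm"] > 0].copy()
valid["pe"] = valid["price"] / valid["eps_ttm"]

# Average PE of the OTHER companies in the same industry
# (industry total minus the stock itself, divided by the peer count).
grp = valid.groupby("industry")["pe"]
peer_count = grp.transform("count") - 1
valid["peer_avg_pe"] = (grp.transform("sum") - valid["pe"]) / peer_count
valid = valid[peer_count > 0]  # drop single-company industries

# Graham & Dodd's months of clerical drudgery, in one line:
cheap_vs_peers = valid[valid["pe"] < valid["peer_avg_pe"]]
print(cheap_vs_peers[["ticker", "industry", "pe", "peer_avg_pe"]])
```

Rerunning it as prices move or new reports arrive takes seconds, which is the whole point.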

And yes, the percentage of funds that have produced positive alpha lately is low. But this whole area of quantitative fundamental research based on accounting data (where we might model based on accruals to assets rather than physics-like concepts such as Brownian motion) is very, very new. I can’t show 20-plus years of success because we don’t yet have enough investors who have been doing it that long. But personally, I’d be embarrassed to count myself among those who say it can’t be done because . . . well, because.

Based on Financial Theory, Not Alchemy

It would be reasonable to wonder whether the promise of a new set of tools will actually be realized. Just because you give somebody the ability to do something better doesn’t mean they’ll successfully take advantage of it. Articles have recently been published that gleefully point out the missteps of less-capable practitioners (pieces that can be fun to write and can draw lots of eyeballs if the headline is sufficiently enticing).

I’m not going to play that game. I’d rather help you form your own opinions by showing you, up close, what the quest for alpha is really about, what it looks like from the vantage point of those who do it.

There’s no mystery at all to how stocks are priced. We know the answer with complete certainty. A stock is worth the present value of future expected dividends. And theoreticians should agree here since this is, after all, an academic concept. The reason we don’t all succeed with every investment is the difficulty of articulating the required inputs. In fact, none of the inputs can be articulated with any reasonable degree of certainty. But we can and do look to the available data for clues that make it more probable than not that a stock is priced roughly in line with this theoretical target. That is what fundamental analysis is all about. Technical analysis involves piggybacking on the price and volume movements caused by the shifting opinions and actions of those who’ve done this sort of fundamental analysis. And there’s a branch of sentiment, or behavioral, analysis that assesses the impact of those who trade (and move stock prices) based on factors having nothing to do with objectively assessed valuation.
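To put that in symbols, in the same plain style as the formulas in the appendix below, the textbook statement is that today’s price (P) should equal the stream of expected future dividends (D1, D2, D3, . . .) discounted at a required rate of return (r):

  • P = D1/(1+r) + D2/(1+r)^2 + D3/(1+r)^3 + . . .

Nobody knows the future Ds or the right r, which is exactly why everything that follows is about probabilistic clues rather than precise computation.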

Legitimate practitioners who work to generate alpha are not flipping and flapping this way and that until they achieve the financial equivalent of turning straw into gold. We are working, albeit in newer, more efficient ways, with classic investment finance theory. This can best be seen with a simple demonstration.

Smart Alpha in Action

As a quick example, I created a Portfolio123 screen that identified Russell 3000 constituent stocks for which the forward-looking PEG ratio (PE using the current-year estimate as E and the consensus long-term growth-rate forecast as G) was in the cheapest 20% relative to the PEG ratios of other stocks in the same industry. From among the 235 stocks that satisfied this requirement, I selected the 25 that ranked highest in a sort based on trailing-12-month return on equity. I backtested against an ETF that gives passive exposure to the Russell 3000 ($IWV) and assumed the equally weighted portfolio would be refreshed based on a new run of the model every four weeks. I tested over the past 10 years and assumed price slippage of 0.25% on each trade, buy or sell. The hypothetical portfolio achieved a simulated annual alpha of 1.36%. (I’m aware that the portfolio is equally weighted while the benchmark is market-cap weighted. So I checked by creating another portfolio that included all Russell 3000 stocks on an equal-weighted basis. The latter showed a minus 0.54% annual alpha.)
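For those who want to see the mechanics, here is a rough Python sketch of just the stock-selection step. It is not the actual Portfolio123 rule set; the snapshot file and column names are hypothetical, and the four-week rebalancing loop, slippage accounting, and alpha calculation are omitted here (a loop sketch appears further below):

```python
import pandas as pd

# Hypothetical Russell 3000 snapshot with the fields the screen needs:
# 'ticker', 'industry', 'price', 'eps_cur_yr_est' (current-year consensus EPS),
# 'ltg_est' (consensus long-term growth forecast, in %), 'roe_ttm'.
universe = pd.read_csv("russell3000_snapshot.csv")

# Forward-looking PEG: PE on the current-year estimate, divided by expected growth.
est_ok = (universe["eps_cur_yr_est"] > 0) & (universe["ltg_est"] > 0)
universe = universe[est_ok].copy()
universe["peg_fwd"] = (universe["price"] / universe["eps_cur_yr_est"]) / universe["ltg_est"]

# Keep stocks in the cheapest 20% of forward PEG within their own industry.
universe["peg_rank"] = universe.groupby("industry")["peg_fwd"].rank(pct=True)
candidates = universe[universe["peg_rank"] <= 0.20]

# Of the survivors, take the 25 highest trailing-12-month ROE names,
# to be held equally weighted until the next four-week refresh.
portfolio = candidates.nlargest(25, "roe_ttm")
print(portfolio[["ticker", "industry", "peg_fwd", "roe_ttm"]])
```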

I’m not rushing out to invest real money based on this model. Performance relative to the market is somewhat less stable than I’d want to see. But I’ve little doubt the model could be tweaked to the point where it would be more readily usable, given my confidence in the logic behind the selection of factors as well as my experience investing based on other models built on similar ideas.

And speaking of the ideas and how they relate to theory, here they are:

  • I start knowing I want to be in reasonably liquid (i.e. not penny) stocks that are properly priced relative to the present value of future dividends. Knowing that I can’t specifically execute that calculation, I try to identify stocks for which there’s reason to believe the price may be reasonably, if not precisely, aligned with that ideal (and in the case of non-dividend payers, we look for firms with current characteristics that support projecting into a theoretical future when dividends would, presumably, be paid).
  • A stock with a low PE stands a better chance of being aligned with future dividends because dividends (future if not present) come from earnings.
  • A stock that’s reasonably priced relative to the company’s earnings growth rate stands a better chance of making the grade since dividend growth is an important part of the core academic present-value model.
  • Since stocks are valued with respect to future dividends rather than past achievements, I choose to use forward-looking numbers for the E and G in PEG, which is not something that is universally done.
  • There really isn’t a serious reason for putting the PEG threshold at 1, as many do, or any other number. That’s why I choose to sort relative to industry peers. I’m looking for situations that are potentially attractive on their own, rather than because of a rising-tide-lifts-all-boats phenomenon.
  • The final sort, based on recent return on equity, is motivated by this metric’s stature as the single best measure of company quality and more particularly, the company’s ability to generate good profit growth in the future (which, of course, is logically tied to good future dividend growth).
  • I select 25 stocks because that’s a number that diversifies me in a way that mitigates data risk (the risk that an oddity in a company’s numbers will cause PEG or ROE to have a real-world meaning that differs from the spirit of the law; a topic known to quants as the “mis-specified model,” further discussion of which will be deferred to another day) without presenting me, as an individual, with an undue trading burden.
  • Finally, I rebalance every four weeks, an interval that strikes a good balance between my desire to use reasonably fresh data and the need to give ideas time to work (we can transmit information in nanoseconds, but it still takes time for investment cases to get reflected in the market).
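Since those last two points are as much about mechanics as theory, here is a bare-bones sketch of how a four-week rebalancing loop with a 0.25% per-trade slippage haircut might be wired up. It is purely illustrative: select_portfolio and period_return are hypothetical stand-ins for whatever screening and pricing layer you actually use, and the real Portfolio123 simulator handles far more detail.

```python
from datetime import date, timedelta

SLIPPAGE = 0.0025              # 0.25% of traded value, charged on every buy and sell
REBALANCE = timedelta(weeks=4)

def run_backtest(start: date, end: date, select_portfolio, period_return) -> float:
    """select_portfolio(asof) -> list of tickers the screen picks on that date;
    period_return(tickers, start, end) -> equal-weighted return over the period."""
    equity = 1.0
    holdings: set = set()
    asof = start
    while asof < end:
        nxt = min(asof + REBALANCE, end)
        new_holdings = set(select_portfolio(asof))
        traded = len(holdings ^ new_holdings)   # names sold plus names bought
        if new_holdings:
            # Approximate cost: each name is roughly 1/N of the portfolio and
            # every trade forfeits SLIPPAGE of the value traded.
            equity *= 1 - SLIPPAGE * traded / len(new_holdings)
        equity *= 1 + period_return(sorted(new_holdings), asof, nxt)
        holdings, asof = new_holdings, nxt
    return equity
```

The benchmark comparison and the risk adjustment that turn such a growth path into an alpha figure sit on top of a loop like this one.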

There you have it. That’s an example of the way a smart alpha protocol can come into being. No magic. No tealeaf reading. No chanting or spells. No physics. No rocket science. No fancy math. It’s plain old-fashioned logic consistent with common sense and bedrock investment theory. I would feel absolutely zero jitters about sitting down, if possible, with Graham, Dodd, Buffett or anybody and discussing this. And I already know the kinds of modifications they’d suggest (after all, this is just something I created in less than a minute for purposes of demonstration).

You’ll notice I made reference to a backtest. So, too, do some other smart-alpha strategies. Understandably, that can raise some concerns. Testing is a vital process but one that can be misused. But that’s so with every endeavor and in every profession. The hallmarks of a professionally proper test are:

  1. Use of a point-in-time database that eliminates survivorship bias and look-ahead bias (in other words, companies that vanish due to bankruptcy, acquisition, etc. aren’t retroactively pulled from the database but are included up until the day their shares stopped trading, and data is made available to the test only when it became available to investors, so fourth-quarter numbers aren’t available to the test on January 1st; a tiny illustration follows this list). We use a point-in-time database on Portfolio123. If you’re looking at Smart Alpha test results elsewhere, ask about it, and feel free to draw negative conclusions from a non-response.
  2. A well-articulated strategy that rests on the logic of why some stocks should perform better than others. The purpose of testing is not to discover “what works” (or, rather, what just so happened to have worked in a particular study period, maybe through substance and maybe through luck, a disreputable practice known as curve-fitting or data-mining) but to test the efficacy of the strategy developer’s effort to translate the ideas into language that can be read by a computer and processed using a database. In other words, smart-alpha strategy development is not so much a statistical process as it is an exercise in language translation. When you encounter ETFs and so forth that use smart alpha, note that the providers have a right to protect intellectual property (the way they translated ideas into computer-speak), so don’t expect as much detail as I supplied in this demonstration. What you’re looking for is a clear-cut indication that the strategy springs from reputable ideas.
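To make the point-in-time idea concrete, here is a tiny illustration in Python. The table and column names are hypothetical; commercial point-in-time databases (including the one behind Portfolio123) are far more elaborate, but the governing rule is the same: a figure is usable only from the date it actually became public.

```python
import pandas as pd

# Hypothetical point-in-time store: each row records when the figure
# became publicly available, not just the fiscal period it covers.
pit = pd.DataFrame({
    "ticker":        ["XYZ", "XYZ"],
    "fiscal_period": ["2014-Q3", "2014-Q4"],
    "eps":           [0.42, 0.55],
    "available_on":  pd.to_datetime(["2014-11-05", "2015-02-10"]),
})

def as_of(snapshot_date: str) -> pd.DataFrame:
    """Latest figure per ticker that an investor could actually have seen
    on snapshot_date; fourth-quarter numbers stay invisible on January 1."""
    cutoff = pd.to_datetime(snapshot_date)
    visible = pit[pit["available_on"] <= cutoff]
    return visible.sort_values("available_on").groupby("ticker").tail(1)

print(as_of("2015-01-01"))   # only the Q3 figure shows up
print(as_of("2015-03-01"))   # now the Q4 figure is usable
```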

I’ll present and maintain for you some genuine smart-alpha models starting next week.

What Makes Alpha Smart

For a definition of Smart Alpha, I’m going to start with a characterization by Bruce J. Jacobs and Kenneth N. Levy in their Invited Editorial Comment “Smart Alpha versus Smart Beta” in the Summer 2014 issue of the Journal of Portfolio Management (p. 1), where they describe the approach as one that “rests on the proposition that the equity market is not entirely efficient, that security prices are subject to a large number of interrelated inefficiencies, and that it is possible, although not easy, to detect and exploit these inefficiencies with proprietary factors.”

Sometimes these factors can be complex. Often, though, they can be quite simple, as in the above demonstration, with the proprietary element being the decision to choose and combine these particular factors from among an infinite number of possibilities.

Ultimately, I’ll say the difference between regular alpha and smart alpha turns on whether the alpha we see is simply what we computed after subtracting expected return from realized return, or whether the alpha is linked to a thoughtful, valid strategy as discussed above. Smart Alpha is the latter.

Where Smart Beta Fits

Smart Beta’s place is in the world of marketing.

The weighting protocols used by those who market under that label, however, fall squarely within the world of smart alpha.

In the above demo, I chose stocks on the basis of meeting certain thresholds relating to PEG and ROE. A small number of stocks (25) made the grade. Most didn’t.

Suppose, on the other hand, I want to launch an ETF based on my ideas. Liquidity and asset-gathering considerations suggest 25 stocks may not be enough. I may, therefore, carve out a bigger chunk of the Russell 3000, or even the entire constituent list, and apply a RAFI-like fundamental-weighting protocol. In my case, the fundamental score would be based on PEG relative to the industry norm, and trailing-12-month ROE.

That’s all there is to the difference. In one case, I use my ideas to select or reject stocks. In the other case, I use my ideas to determine how much money within a portfolio should be allocated to particular stocks, with larger weightings going to those that rank higher in terms of compliance with the strategy’s criteria. Either way, the success or failure of the portfolio is going to be governed by the efficacy of my decision to drive performance on the basis of stocks with particular exposure to low relative PEG and high relative ROE. The differences are in the details of implementation, differences that are quite logical considering the difference between a portfolio for an individual (such as those offered in Portfolio123 Ready-to-Go) and an ETF marketed broadly to the public at large.
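As a sketch of what that weighting step might look like, here is an illustrative Python function. The column names are hypothetical, the equal 50/50 blend of the two factor scores is an arbitrary choice made for the demo, and none of this represents the actual methodology of any RAFI or WisdomTree product:

```python
import pandas as pd

def fundamental_weights(df: pd.DataFrame) -> pd.Series:
    """df: one row per stock with 'peg_vs_industry' (forward PEG divided by the
    industry norm; lower is better) and 'roe_ttm' (higher is better).
    Returns portfolio weights that sum to 1, tilted toward high-scoring names."""
    peg_score = 1 - df["peg_vs_industry"].rank(pct=True)   # cheap PEG = high score
    roe_score = df["roe_ttm"].rank(pct=True)               # high ROE = high score
    score = 0.5 * peg_score + 0.5 * roe_score
    return score / score.sum()

# Hypothetical three-stock example: the cheapest, most profitable name
# gets the largest slice of the portfolio.
demo = pd.DataFrame(
    {"peg_vs_industry": [0.6, 1.0, 1.5], "roe_ttm": [25.0, 12.0, 8.0]},
    index=["AAA", "BBB", "CCC"],
)
print(fundamental_weights(demo).round(3))   # roughly 0.556, 0.333, 0.111
```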

Appendix – For Quants

Shortly before completing this post, I fielded an interesting question from an investment advisor who showed one of my alpha-producing models to a quant with whom we are both acquainted. The latter asked what the “residual error” was.

To translate this into plain English, let’s restate the Capital Asset Pricing Model, which equates expected return (ER) to the risk-free rate (RF), Beta (B), and the equity risk premium (RP). In an earlier post, I expressed the model as:

  • ER = RF + (B * RP)

Actually, a serious quant would have phrased it thusly:

  • ER = RF + (B * RP) + e

The last add-on, e, is the residual error term and is appended to any model in this form. Ideally, if the model truly explains the market as a whole, e will be zero or an insignificant element of randomness. But e in this model is not random and not zero. That’s why Fama and French were able to expand it to also include RP-like factors representing the small-cap effect and stock valuation (each with its own B-like coefficient).
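In the same notation, the Fama-French three-factor version looks roughly like this, with SMB (small minus big) standing in for the small-cap effect and HML (high minus low book-to-market) standing in for valuation, each with its own coefficient:

  • ER = RF + (Bmkt * RP) + (Bsmb * SMB) + (Bhml * HML) + e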

The work I do does not purport to explain the market as a whole. So I don’t care about minimizing e. If the model I create is generating alpha for me, I’m satisfied, and remain so even if other different models, also generate alpha. If I think the other factors subsumed by e can also produce alpha (which is usually the case), then I build other models based on them. And rather than investing all my money based on one model, I “diversify” (call it intellectual diversification) by having other portfolios based on other models. And it’s why I’ll present and maintain multiple models (stock lists) to and for you, rather than one grand Gerstein model.
