Chapter 11 – Three Hundred Years of Stock Market Manipulations


300 Years of Stock Market Manipulations – From the Coffeehouse to the World Wide Web

In previous chapters, we saw that many of the changes in securities markets brought about by information technology in general and the Internet in particular are positive, democratizing access to markets and information. We also saw that technology is not always an unadulterated boon, and there is ample opportunity to fool yourself by blind data mining, and to find people trying to fool you using an ever-expanding bag of tricks, cons, and manipulations.

As information technology has expanded the scope of resources available to legitimate investors and traders, the Web has also become the prime new venue for the old game of market manipulation. Institutional traders and other long-term market participants often comment that they see far more inexplicable price moves than they did in the pre-Web era. In many cases, these moves are tied to subtle, and not so subtle, attempts at market manipulation using the newfound power of the Internet to transmit and spread rumors, manipulate beliefs, and post incorrect information at little cost, while maintaining the cloak of anonymity.

Price distortions arising from manipulations may be short-lived, but they are real prices, and can dramatically affect the cost of trading and investment performance. The influence of the rumor machine is overlaid on the influences of more fundamental (and benign) factors that move stock prices. Therefore, interest in manipulations is not confined to the most obvious potential victims — specialists, dealers, and market makers — but extends to all buy-side traders. This chapter examines the nature and characteristics of Internet market manipulations and their non-electronic precedents, going back 300 years.

The Real Economic Impact of Pump and Dump Stock Market Manipulations

In 1999, NEI Webworld, Inc. (NEIP) was an obscure, nearly bankrupt printing company. Its stock barely had a pulse. It had been kept alive as a shell company, used by firms that wanted to access the public markets, but without the scrutiny that comes with an initial public offering (IPO). The last trade had been over a year earlier, for a penny and a half.

Suddenly it rocketed up 106,600 percent in one morning. What happened? A miracle cure? A hit movie? Pokémon lunch boxes? No, none of these. NEIP’s move was propelled purely by the power of Internet message boards. Two (subsequently indicted) UCLA students demonstrated just how dramatically the new technology of the Internet had transformed the old game of market manipulation.

The Internet raises market manipulation to a level only dreamed of by past shysters. It used to take a real effort, a PR firm, or a major newspaper column to reach millions of potential traders. Now anyone can do it from their desktop. The Internet era is defined by the unparalleled ability of the new style of manipulator to use the Internet to affect the perceptions of vast numbers of investors at lightning speed, all the while remaining completely anonymous. This article looks at market manipulations — from early scams of the 1600s to the high-tech frauds of today — and asks how the game has changed and what you can do to protect yourself.(1)

Who cares about this, anyway? Aren’t these just isolated instances of little concern to ordinary investors? Absolutely not! There are hundreds of well-documented cases involving message manipulation, with financial impacts running into the billions of dollars. And it’s not just micro-cap stocks; recently multibillion-dollar Lucent Technologies was the subject of a successful manipulation attempt. When the people who get burned complain to the authorities, the Securities and Exchange Commission (SEC), the New York Stock Exchange (NYSE), and the Financial Industry Regulatory Authority (FINRA, successor to the National Association of Securities Dealers, or NASD) all examine messages in their investigations.

The SEC set up an office back in 2000 just to deal with Internet scammers. These agencies are reactive — the investigators head for the message boards after someone complains that something suspicious has occurred in a stock.

Brokers, market makers, specialists, and traders, however, care in a proactive sense. They want early warnings of potential trouble ahead so they don’t get left holding the proverbial bag. The Web has become the new prime venue for the old game of market manipulation. But it is not just the most obvious potential victims — specialists, dealers, and market makers — who care about manipulations. All good buy-side traders care about manipulations. If you have an order to trade in size — say, one day’s average daily volume — you may be planning to execute the trade in parcels over three days. Price distortions arising from manipulations may be short-lived, but they are real prices, and can dramatically affect the cost of trading(2) and investment performance. The influence of the rumor machine is overlaid on the influences of more fundamental (and benign) factors that move stock prices. In an extreme case like NEIP, there are no other factors, but there are few if any stocks that are immune from these effects.
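To make the point concrete, here is a back-of-the-envelope sketch, with entirely hypothetical numbers, of how a short-lived distortion during a parceled three-day execution shows up in the cost of trading (the implementation shortfall described in note 2 and Chapter 5):

```python
# Hypothetical: buy 300,000 shares over three days, 100,000 per day.
# Decision price is $20.00; day 2 executes into a rumor-driven spike.
decision_price = 20.00
fills = [
    (100_000, 20.05),  # day 1: normal market
    (100_000, 20.60),  # day 2: price temporarily pumped by a rumor
    (100_000, 20.10),  # day 3: distortion has faded
]

shares = sum(q for q, _ in fills)
avg_price = sum(q * p for q, p in fills) / shares
shortfall_bps = (avg_price - decision_price) / decision_price * 1e4

print(f"Average fill: ${avg_price:.3f}")
print(f"Implementation shortfall: {shortfall_bps:.0f} bps on {shares:,} shares")
```

In this toy case the one manipulated day adds roughly 125 basis points to the cost of the whole order, dwarfing any commission.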

A Classic Market Manipulation – A Long History of Investment Manipulation

The ignoble history of stock market manipulations doubtless goes back to the most ancient markets. In one of the earliest accounts of manipulations, Joseph de la Vega, in Confusión de Confusiones, wrote of the Amsterdam Stock Exchange over 300 years ago (1688): “The greatest comedy is played at the Exchange. There, . . . the speculators excel in tricks, they do business and find excuses wherein hiding places, concealment of facts, quarrels, provocations, mockery, idle talk, violent desires, collusion, artful deceptions, betrayals, cheatings, and even tragic end are to be found.”(3)

In the Amsterdam market at the time, market manipulations were common. De la Vega provides a comprehensive model of the various manipulations used to trick unsuspecting investors, including early versions of such perennial favorites as “painting the tape,”* making small trades to move the price. De la Vega’s book, Confusión de Confusiones, was picked by the Financial Times as one of the 10 best investment books ever written.(4) [*Painting the tape is the illegal practice in which traders buy and sell a specific security among themselves, in order to create an illusion of high trading volume. Traders profit when unsuspecting investors, lured in by the unusual market volume, buy the stock.]

In the Amsterdam market of the late 1600s, there were two active stocks — the Dutch East India Company and the Dutch West India Company — and most of the activity revolved around speculation about the cargoes of the ships of these companies entering the port. One of the most successful stratagems was the spreading of false rumors in Amsterdam coffeehouses (coffy huysen in Dutch) frequented by traders and brokers. As de la Vega describes it: “The bulls spread a thousand rumors about the stocks, of which one would be enough to force up the prices.”(5) Manipulators would falsely bid up the prices of stocks through a variety of artifices, including painting the tape and the spreading of overly optimistic news. Brokers would hint that ships soon to enter port carried rich cargoes (“No tea and spices — they’ve got furs and diamonds”), and soon the rumors would get ever more extravagant (“Lots of furs and really big diamonds”), leading to large price run-ups. Some things in life are fairly constant.

The Very Model of a Modern Market Manipulator on Stock Market Message Boards

There is no de la Vega for the twenty-first century, but there is Tel212, an anonymous poster to Yahoo!’s message boards.(6) The remarkable message that follows contains much of the same material, updated for the Internet, 320 years later. …


All notes for this chapter about Stock Market Message Boards, Stock Recommendations, and Pump and Dump Stock Manipulations on Internet Social Media:

This article originally appeared in the Summer 2001 issue of the Journal of Investing. It is reprinted in Nerds on Wall Street with permission. To view the original article, please go to iijoi.com. My coauthor, Ananth Madhavan, was at the University of Southern California when we started this, was at Investment Technology Group (ITG) when we finished, and is now director of trading research at Barclays Global Investors.

1. Corners and short squeezes, including various railroad manipulations by Cornelius Vanderbilt and others at the turn of the twentieth century, represent another form of manipulation through scarcity as opposed to redirecting people’s beliefs. This chapter focuses on manipulations based on false information of one type or another.

2. The real cost of trading is the difference between the price at the time you decide to trade and the total price at the time you actually trade. Commissions are usually the smallest part of this. Market impact and the opportunity cost of delay far outweigh the commissions. See the discussion in Chapter 5.

3. Joseph de la Vega, Confusión de Confusiones (New York: John Wiley & Sons, 1995), p. 169.

4. Confusión de Confusiones and Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds (another of the FT editors’ top 10) are published together in the Wiley Investment Classics Series.

5. De la Vega, Confusión de Confusiones , p. 203.

6. The URL at the time the article was written was http://messages.yahoo.com/bbs?action=m&board=18185330&tid=wgat&sid=18185330&mid=16909. However, this guide seems to have wisely been removed.

7. De la Vega, Confusión de Confusiones, p. 199.

8. Peter Wysocki, “Cheap Talk on the Web: The Determinants of Postings on Stock Message Boards” (Working Paper No. 98025, University of Michigan, 1999).

9. Tokyo Joe eventually settled with the SEC, paying $748,000 in fines but not admitting guilt. Gretchen Morgenson, “‘Tokyo Joe’ Settles Suit with S.E.C. over Web Site,” New York Times, March 9, 2001.

10. Gretchen Morgenson, “Internet’s Role Is Implicated in Stock Fraud,” New York Times, December 16, 1999.

11. SEC, Release No. 42483, March 2, 2000, Administrative Proceeding, File No. 3-10154.

12. This was written in 2001, and the prediction has proven correct. The entire genre of so-called pictograms — pictures of text touting a stock in spam e-mail — rose to a minor industry, and vanished as better anti-spam measures took hold.


.

Chapter 10 – Collective Intelligence, Social Media, and Web Market Monitors


Web Market Monitors and the Impact of Social Media on Financial Markets

“The words of the prophets are written on the subway walls.” — Simon & Garfunkel, The Sound of Silence

Opinions vary widely on the value of collective wisdom, with ample supporting evidence both for and against. The Internet has many positive examples: The collective ratings at consumer sites like Amazon for books (and almost anything that can be shipped in a box), Newegg for electronics, and Yelp for restaurants are almost always reliable when there is a strong consensus among a large crowd. When 95 out of 100 people say a software program doesn’t work, it is probably no prize (and the other five likely work for the publisher). When 250 out of 275 people rave about the latest Asian Cajun(1) spot and the waiting line winds around the block, dinner is not likely to be too bad. When every other customer complains about meeting a man with a stomach pump, you’re better off packing your own lunch.

Markets themselves are a form of collective intelligence (CI), and since transactions occur, they clearly arrive at prices seen as fair by buyers and sellers alike for everything from stocks to Pez dispensers (the first eBay merchandise). A recent book by James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (Random House, 2004) has nearly 300 pages of examples of group wisdom. One such example is the television quiz show Who Wants to Be a Millionaire. Contestants are asked a series of increasingly difficult questions, worth increasingly large payoffs if answered correctly. At any point, they can take the money and run or proceed to the next level.

If they are stumped, contestants are allowed to use a “lifeline”(2): calling a friend or polling the studio audience. Friends have provided the correct answer 65 percent of the time, but the audience has been right on 91 percent of the questions they’ve been asked. This is clearly an unscientific comparison, since the friends and the audience have been given different questions, but it does suggest the value of collective intelligence, particularly for perky blond quiz show contestants.

That is not the case for the oft-repeated “guess the number of beans in the jar” experiment, popularized in finance circles by Jack Treynor(3) and repeated endlessly at financial conferences. A typical result was that when the jar contained 850 beans, the average estimate of the 56 students in Treynor’s class was 871, and only a single student had a guess better than the collective. This game is not anywhere near as popular on campuses as beer pong, but it is still played with the same kind of positive “wisdom of the collective” result.
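A minimal simulation, using made-up noise rather than Treynor’s actual data, illustrates why the average of many independent, roughly unbiased guesses tends to beat nearly every individual guesser:

```python
import random

random.seed(42)
TRUE_COUNT = 850          # beans actually in the jar
N_GUESSERS = 56           # class size, as in the Treynor anecdote

# Each guess = truth plus independent, roughly unbiased noise.
guesses = [TRUE_COUNT + random.gauss(0, 120) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)

# How many individuals beat the collective?
better_than_crowd = sum(1 for g in guesses if abs(g - TRUE_COUNT) < crowd_error)

print(f"Crowd estimate: {crowd_estimate:.0f} (error {crowd_error:.0f})")
print(f"Individuals closer than the crowd: {better_than_crowd} of {N_GUESSERS}")
```

The individual errors largely cancel in the average, so typically only a handful of guessers come closer than the crowd does.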

In contrast, H.L. Mencken, the author, reporter, and editor known as the Sage of Baltimore, wrote, “No one in this world, so far as I know, has ever lost money by underestimating the intelligence of the great masses of the plain people.” Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds, originally published in 1841,(4) supplies ample evidence to support Mencken’s thesis.

Modern Extraordinary Popular Delusions and the Madness of Crowds

The best known events are the Dutch tulip bulb mania, when sufficiently exotic bulbs (stripes were big) sold for more than a house, and the South Sea Bubble, when utterly worthless securities came to dominate the financial markets. Imagine something like that happening today. One anonymous Amazon reviewer concisely summarizes Mackay: “Why do otherwise intelligent individuals form seething masses of idiocy when they engage in collective action? Why do financially sensible people jump lemming-like into harebrained speculative frenzies — only to jump broker-like out of windows when their fantasies dissolve?”

Both schools of thought are correct, depending on the situation. Mencken and Mackay have nothing good to say about collective intelligence, but Surowiecki writes: “Groups work well under certain circumstances, and less well under others. Groups generally need rules to maintain order and coherence, and when they are missing or malfunctioning, the result is trouble. . . . While big groups are often good at solving certain kinds of problems, big groups can also be unmanageable and inefficient. Conversely, small groups have the virtue of being easy to run, but they risk having too little diversity of thought and too much consensus.”(5)

It is worthwhile to take an economic and game-theoretic view of this. When people have an incentive to be truthful, most of them will be truthful. When there is a reward for deceiving others, people will be deceptive. The studio audiences at Who Wants to Be a Millionaire and the bean population guessers have no reason to lie. In the case of the beans, the winner often gets to keep the jar or some other swag. For product rating scores, people feel good taking a whack at companies that sell some of the junk that passes for software and the like, and they earn a psychic payoff by sharing their positive experience with others, without risking or incurring any penalty for doing so. Scarcity issues are rare. For a restaurant reviewer, there is a slight disincentive in that raving about your favorites may result in long lines, somewhat offset by the feeling that since the half-life of new restaurants is about six months, you are helping your favorites to stick around.

If there is scarcity and competition involved, the incentives for the collective can be quite different. It becomes a game where at least some players will see a positive reward for providing false information.

Investing with Crowds: Stock Message Boards, Stock Tips, and Market Manipulation

Unlike restaurant or shopping advice, collective stock recommendations on message boards and “share the love” investment sites are examples of situations where deceit can be profitable. Holders of long positions in a stock have a powerful incentive to drive up its price, and those with short positions have a powerful incentive to drive it down, irrespective of the actual merit of either position. The anonymity of the Internet and the ability to create multiple identities make this easy to do.

As long as this looks like opinions being shared and does not cross over into outright fraud, the Securities and Exchange Commission (SEC) will not come knocking on an opinion sharer’s door. Many Web denizens do cross the line into criminal manipulations (some of the most egregious examples are described in the next chapter), but the distinction between the illegal and the merely malicious can be fuzzy.

During the tech bubble, a company called iExchange opened for business with a huge burst of PR, including segments on the major network news programs. The company T-shirts, which seemed to be everywhere in its hometown of Pasadena, read “BUY SELL” on the front and “TELL” on the back. It raised over $30 million from some of the biggest names in venture capital, and had one of the slickest social web sites seen up to that time. The home page from that site, www.iexchange.com, on June 20, 2000, is shown in Figure 10.1.

Figure 10.1 The comment in the lower right column claims a profit of 1,200 percent in four months! Pretty soon these anonymous investment wizards will have all the money. Source: The Wayback Machine (iexchange.com, on the Internet Archive site at www.archive.org).

The business model was that the first few tastes were free, and then investor users (like H.V., J.P., and I.G. over on the right) could pay analyst users (like “The Visionary,” “Big Jim,” and “Biotech Believer” in the middle pane) a modest fee, usually just a few dollars, for new research. iExchange got a piece of every transaction. There were $25,000 monthly prizes for the best stock picks, which were supposed to keep everyone honest.

The Epinions.com rating site for social web sites gave iExchange four stars, “a good place to make money.” The anonymous successful investors on the right are minting money. Surely they will be willing to pay the insightful analysts who let them reap these rewards? What could go wrong? Plenty. Perhaps you noticed that the screen grab is from the Internet Archive’s Wayback Machine,(6) the elephant graveyard of the Internet. Either those 1,200 percent returns weren’t enough to keep people happy or something went awry. The party ended, fittingly enough, just before April Fools’ Day in 2001, with the following sign-off, comprising the entirety of the iExchange site: …


All notes for this chapter about Collective Intelligence, Social Media, and Web Market Monitors:

1. Asian Cajun is an actual cuisine that is suddenly all the rage in the huge Los Angeles area Asian neighborhoods of Westminster, Alhambra, and Monterey Park. Those keen on elegant presentation may want to pass. This is a meal after which you wash off with a garden hose. Tasty variants on Cajun crawfish, shrimp, lobster, and crab are all served up, with your choice of sauce (spicy, greasy, or spicy and greasy), in big clear plastic bags dropped on a table covered in brown paper, with a roll of paper towels on the side. Paper plates and plastic forks are available on request. It’s much better than it sounds.

2. In Tina Fey’s dead-on parody of Sarah Palin’s interview with Katie Couric, the faux Palin asked “to use one of my lifelines, Katie.” She would have been better off asking to poll the audience. The funniest line in that skit, a long disconnected utterance that failed to parse as English, was taken verbatim from the actual interview.

3. Treynor is widely regarded as having been a shoo-in to have shared the 1990 Nobel Prize in economics had he only published a paper that was sitting in his desk drawer.

4. Still in print after 150 years (New York: Three Rivers Press, 1995).

5. James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (New York: Random House, 2004), xix.

6. Used in other chapters, this is a fabulous site founded to allow the historians of the Internet era to have something to use. The Internet Archive is at www.archive.org , and also has an extensive collection of nearly 400,000 free music downloads. Of these, 60,000 are live concert recordings, and 6,000 of those are of the Grateful Dead.

7. Entire contents of iExchange web site, March 29, 2001. Bye-bye, Big Jim. Source: www.archive.org

8. FTP is file transfer protocol, an Internet service that predates the World Wide Web and is still used for moving large chunks of information.

9. Landon Thomas Jr., “John A. Mulheren Jr., 54, Leading Trader in 80’s, Dies,” New York Times, December 17, 2003.

10. Ibid.

11. Michael Lewis, “Jonathan Lebed: Stock Manipulator, S.E.C. Nemesis — and 15,” New York Times Magazine, February 25, 2001.

12. Peter Wysocki, “Cheap Talk on the Web: The Determinants of Postings on Stock Message Boards” (Working Paper No. 98025, University of Michigan, 1999).

13. Werner Antweiler and Murray Frank, “Is All That Talk Just Noise? The Information Content of Stock Message Boards,” Journal of Finance 59, no. 3 (June 2004): 1259 – 1294.

14. Georgette Jasen, who did the Dartboard articles for many years, says that while actual darts were thrown, they were never (as was alleged) thrown by monkeys. I went five rounds against the dartboard, using stocks selected by our quant methods, described in the chapters on seeking alpha and quantitative investing. I needed to explain why I’d picked a particular stock, and Georgette gently explained that “highest aggregate short-term forecast factor alpha” was too nerdy for WSJ readers. So I used to pick stocks near the top of our list that were amenable to having a better story back-filled as an explanation. For Home Depot (one of my winners) I explained that, having recently moved, I had personally dropped enough cash into the register to move the stock. Not keen on prison food, I never succumbed to the temptation to load up on a stock I’d picked before the story appeared in the paper. Rumor had it that not everyone in the game had the same dietary concerns.

15. Mark Hirschey, Vernon J. Richardson, and Susan Scholz, “How ‘Foolish’ Are Internet Investors?” Financial Analysts Journal 56, no. 1 (January/February 2000).

16. M. Bagnoli, M. D. Beneish, and S. G. Watts, “Whisper Forecasts of Quarterly Earnings per Share,” Journal of Accounting and Economics 28, no. 1 (November 1999): 27 – 50.

17. University of Maryland Human-Computer Interaction Lab, www.cs.umd.edu/hcil/

.

Chapter 09 – The Text Frontier – AI, IA, and the New Research


Hunting Investment Alpha and Trading Alpha from Online News, Social Media, and Rumors

Alpha hunters are always looking for new territory. When a strategy becomes known and used by too many players, the collective market impact of getting in and getting out will squeeze out all the profit juice, and only the lowest-cost transactors (large sell-side firms and hedge funds) will be able to use it. The pack needs to move on.

The Web is promising new territory, but while it is full of information it is also something of a pain to deal with, so a case can be made that there’s an economic rent (alpha) to be earned by doing a good job at using the Web effectively. This was suggested at the end of Chapter 4, “Where Does Alpha Come From?” This chapter gets into the specifics, showing real examples relating textual patterns and events (not just individual documents) to excess returns.

Bill Gross, of the PIMCO investment management company, described equity valuation as “that mysterious fragile flower where price is part perception, part valuation, and part hope or lack thereof.”(1) An old Wall Street proverb says, more tersely, “Stocks are stories, bonds are mathematics.”

Sources of Investment News and Securities Trading Rumors

This has enough truth in it that looking for the right stories is a worthwhile activity. With hundreds of billions of pages available on the surface Web (the portion covered by search engines), and even more information stashed in proprietary databases and other deep Web locations, there are plenty of places to look. It makes sense in looking at these to break them into vaguely more manageable categories. A useful way to do this is to consider four broad classifications:

1. News. This is the old standby, and we all know this when we see it. It is often called the mainstream media (MSM). It is written by reporters, edited by editors, and published by more or less reputable sources. News was once exclusively disseminated on paper, radio, and television, and later via expensive dedicated electronic feeds. It is now ubiquitous on the Web, and news vendors are trying to move upscale, with tagged news that is more amenable to machine understanding using intelligence amplification (IA) and artificial intelligence (AI) tools. These deluxe feeds go for deluxe prices, tens of thousands of dollars per month.

2. Pre-news. Pre-news is the raw material reporters read before they write news. It comes from primary sources, the originators themselves: the Securities and Exchange Commission (SEC), court documents, and other government agencies. Not every reporter knows Deep Throat, but they all talk with people who might have something newsworthy to say. In pre-Web days, primary source information was much harder to come by, so we were far more dependent on reporters and established news organizations to find it for us. Today, in yet another instance of disintermediation by the Internet, many information middlemen have been eliminated.

3. Rumors. Here is content with a slightly to dramatically lower pedigree than reputable, signed news reporting, or the primary source material that goes into MSM news. Internet advertising has created a means to monetize spreading rumors, and spawned a new segment of the information industry. Some blogs and web sites are driven entirely by rumors, with little or no regard for truth. Others have much higher standards, closer to the highest-minded bastions of reputation-driven journalism, and may be spawned by those news organizations as they evolve. Others have an “all Britney, all the time, except for shark attacks” attitude, but keep people coming back by breaking an occasional legitimate true story overlooked by the mainstream media.

4. Social media. The barriers to entry at the low end of the “news” business on the Web are vanishingly small. Anyone can send spam, create a blog, or post on message boards for stocks or other topics. A great deal of this is genuinely useful — think of the product reviews on Amazon — and some is just noise. On stock message boards, there have been CEOs who reveal valuable information; but for the most part, the typical posting still reads like it came from some guy sitting around in his underwear in Albania at 3 a.m., on vodka number nine. A great deal of research has gone into trying to sort out the legitimate sources from the louts. Some seems to have promise, at least in identifying future volatility. But you may be better off looking for the words of the prophets on the subway walls. The first two items on this list are the subject of this chapter; the second two are the subject of the next one.

Extracting Investment Information from Wall Street News and Rumors

This chapter reviews research and ideas relating to extracting investable information from news and pre-news sources. A recurring theme is molecular search: the idea of looking for patterns and changes in groups of documents, rather than just characterizing atoms of information, the individual documents or stories we find as the result of conventional search engine queries. The choice of molecules and atoms instead of the usual “forest and trees” metaphor is not just some fancy science talk; it’s because there is only one basic relationship between trees and forests — spread out a bunch of the first to make the second.

Molecules made from atoms have infinite variety and complexity, as do the relationships we can infer across groups of documents.
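As a toy illustration of the molecular idea, the sketch below (with invented headlines and thresholds) scores a group of documents for one ticker over time, looking for a burst in story counts rather than reading any single story:

```python
from collections import Counter
from datetime import date

# Toy corpus of (date, ticker, headline) tuples standing in for crawled news.
docs = [
    (date(2008, 9, 5), "UAL", "Carrier adds routes from Denver"),
    (date(2008, 9, 8), "UAL", "United Airlines files for bankruptcy"),
    (date(2008, 9, 8), "UAL", "Bankruptcy report questioned by analysts"),
    (date(2008, 9, 8), "UAL", "Old bankruptcy story resurfaces"),
]

def story_counts(docs, ticker):
    """Stories per day for one ticker: the 'molecule' is the whole group."""
    counts = Counter(d for d, t, _ in docs if t == ticker)
    return dict(sorted(counts.items()))

counts = story_counts(docs, "UAL")
baseline = 1  # hypothetical normal daily story count for this name
bursts = {d: n for d, n in counts.items() if n >= 3 * baseline}
print("Story-count burst days:", bursts)
```

The signal here is the change in the group — a sudden clump of stories on one name — not the content of any individual atom.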

Ten Pounds of Financial News in a Five-Pound Bag

The tagline in the corner of the paper version of the New York Times is “All the News That’s Fit to Print.” In fact, this was never true. The size of the paper is limited by many factors: press speeds, cost, and the limits of physical delivery. Editors have to pick and choose. This is true for all paper and ink publications. The Wall Street Journal index of companies mentioned rarely includes more than 300 names. But on the same day, Web sources will have news on thousands of firms. International and specialized news sources used to be costly and difficult to come by. Now they are as accessible as the local paper.

News is a time-honored source for investment information, and there is more of it than ever before, more than a person can handle. With the relentless march of technology to the beat of Moore’s law, previously impractical computationally intense approaches to natural language can be used to parse, categorize, and understand the onslaught of news. Reporters help the process along by tagging story elements at their point of origin. They inject some valuable wetware into the mix of hardware and software involved in the modern production, dissemination, and consumption of news. There is a great deal of commercial effort in this area, applying language and Web technologies to gather, filter, and rank individual news by type, sentiment, or intensity. Some are available to try on the Web.(2)
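For the atom level, a crude word-list scorer of the sort these commercial systems elaborate on can be sketched in a few lines; the word lists and headlines here are invented for illustration, not drawn from any vendor’s dictionary:

```python
POSITIVE = {"beat", "record", "upgrade", "strong", "surge"}
NEGATIVE = {"miss", "downgrade", "bankruptcy", "weak", "lawsuit"}

def headline_sentiment(text):
    """Crude sentiment score: positive minus negative word hits, per word."""
    words = text.lower().split()
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return hits / max(len(words), 1)

for h in ("Analysts upgrade shares after record quarter",
          "Company warns of weak demand, faces lawsuit"):
    print(f"{headline_sentiment(h):+.2f}  {h}")
```

Real systems layer richer dictionaries, tagged feeds, and statistical language models on top of this basic idea.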

Stock Market Manipulations — Accidental and Intentional Stock Manipulations

The purest of efficient market purists once claimed that all news was already incorporated in prices. Someone always knew the news before you did, so there was no point in paying attention to it. This is another case of someone having to pick up that $100 bill on the sidewalk first, but those hundreds do get picked up pretty fast.

An example in the fall of 2008 shows how truly unexpected news can impact prices dramatically. At 1:37 a.m. EDT on Sunday, September 7, 2008, Google’s newsbots picked up a 2002 story about United Airlines possibly filing for bankruptcy. Apparently, activity at 1:36 a.m. on the web site of the Orlando Sentinel caused an old story to resurface on the list of “most viewed stories.” In Orlando, in the middle of the night, with Mickey sound asleep and Gatorland closed, a single viewing of the story was enough to do this, and attract the attention of the newsbot, one of many search agent programs that populates Google’s news database. In a cascade of errors, the story was picked up by a person, who, failing to notice that the date on the story was six years gone, put it on Bloomberg, which then set off a chain reaction on services that monitor Bloomberg news. This remarkable ability of the Internet to disseminate “news” resulted in the stock of United’s parent, UAL Corporation, dropping 76 percent in six minutes, with a huge spike in volume, as seen in Figure 9.1.

Figure 9.1 UAL Corporation on September 8, 2008. Old news rises from the news crypt. Source: Google Finance.

This looks like (at the very least) an accidental manipulation. The SEC announced an investigation into the incident by the end of the week. Trades made during the period when the price dropped were not reversed. This example, though based on what turned out to be false news, underscores the point about time acceleration of the effects of news on markets. This is an update on the “time isn’t what it used to be” lesson seen in comparing pre-Web and modern Web-era market reaction to earnings surprise news, shown in Chapter 4.

In fact, it didn’t take long for another major manipulation based on false news to occur. Less than a month after the presumed UAL accident, a falsely planted story hammered Apple stock down by 5.4 percent; a legitimate news story described what happened: CNN’s plunge into online citizen journalism backfired yesterday when the cable-news outlet posted what turned out to be a bogus report claiming that Apple Inc. Chief Executive Officer Steve Jobs had suffered a heart attack. Apple shares fell as much as 5.4 percent after the post on CNN’s iReport.com …


All notes for this chapter about Investment Alpha and Trading Alpha from Wall Street Rumors, News, and Social Media:

1. PIMCO December 2008 Market Commentary, www.pimco.com/LeftNav/Featured+Market+Commentary/IO/2008/IO+Dow+5000+Gross+Dec+08.htm

2. Relegence was a first-wave company that did this. It was acquired by AOL, but remains in the news machine business (www.relegence.com/). Newcomers in 2007 and 2008 include Skygrid (www.skygrid.com) and StockMood (www.stockmood.com). Firstrain.com aggregates a wide range of services.

3. James Callan, “CNN’s Citizen Journalism Goes ‘Awry’ with False Report on Jobs,” Bloomberg News, October 4, 2008.

4. Paul Tetlock, Maytal Saar-Tsechansky, and Sofus Macskassy, “More Than Words: Quantifying Language (in News) to Measure Firms’ Fundamentals,” Journal of Finance 63 (June 2008): 1437–1467. (An earlier working version is available at the Social Science Research Network, http://ssrn.com )

5. General Inquirer is found at http://www.wjh.harvard.edu/~inquirer/

6. Event study charts group similar events together that are actually spread out in time. In this case, the vertical line in the middle of the chart is the day the story appeared; the region to the left shows the basis point (hundredths of percent) price changes prior to publication; and the region to the right shows price changes afterward.

7. “Abnormal returns” often refer to returns in excess of the market over the period in question. This study used a slightly fancier definition of abnormal based on the widely used Fama-French three-factor model, a more modest version of the multifactor “Barr’s better betas” approach described in Chapter 4. In addition to broad market moves, it adjusts for large-capitalization and small-capitalization companies, and for the value/growth style of the stocks, measured by book-to-price ratio.

8. “Mining of Concurrent Text and Time Series,” by Victor Lavrenko, Matt Schmill, Dawn Lawrie, Paul Ogilvie, David Jensen, and James Allant, Department of Computer Science, University of Massachusetts – Amherst, 2001.

9. “Technical Interface and Operational Specification for Public Dissemination Subscribers,” TRW/SEC specification by Craig Odell (TRW), May 3, 2001.

10. “Do Stock Market Investors Understand the Risk Sentiment of Corporate Annual Reports?” by Feng Li, University of Michigan, April 2006. Available at the Social Science Research Network, http://papers.ssrn.com/ (paper number 898181).

11. Zhen Deng, Baruch Lev, and Francis Narin, “Science and Technology as Predictors of Stock Performance,” Financial Analysts Journal 55, no. 3 (May/June 1999): 20–32.

12. Majestic Research specializes in this sort of thing (www.majesticresearch.com/).


.

Chapter 08 – Perils and Promise of Evolutionary Computation on Wall Street


Using Genetic Algorithms, Optimization Models, and Evolutionary Computation on Wall Street

“Be careful what you ask for — you might get it.”

My enthusiasm for machine learning, described at the end of the previous chapter, led me to kiss many artificial intelligence (AI) frogs. This included many flavors of inductive and explanation-based learning, as well as connectionist ideas, such as neural nets, that were based on simulating simple nervous systems. There were some interesting notions, but nothing came close to reproducing that “Wow!” Macsyma moment, until I found artificial evolution and genetic algorithms (GAs).

These techniques used populations of solutions, and applied digital versions of the principles of evolution to select the fittest, and to combine the best of the bunch for successor generations of hybridized and mutated solutions. There were some remarkable examples — robots that started out wandering aimlessly and bumping into things evolved before your eyes into what looked like precision drill teams. Symbolic regressions “discovered” complex algebraic relationships instead of just calculating coefficients on an assumed model structure. There were very capable network controllers and logic circuits, all of which emerged from a clearly useless population of random initial solutions.(1)

Promotion of Evolutionary Artificial Intelligence and Genetic Algorithm Applications for Investment Management

I became a major cheerleader for learning in finance using artificial evolution. I attended academic conferences where I met the leading lights in the field and hired their grad students, and where I got to stay in college dorms. It was a refreshing change from all of the four-star hotels favored for investment management conferences. Actually, sharing a bathroom with 25 other people is not so refreshing, but it was a change.

One of the founding fathers was Dave Goldberg, the big dog of evolutionary computation (EC), at the University of Illinois in Champaign-Urbana (also home to 2001: A Space Odyssey’s HAL 9000). In 1989 Dave wrote the classic and still best-selling text, Genetic Algorithms in Search, Optimization and Machine Learning (Addison-Wesley, 1989). I borrowed code from Dave for the experiments described in this paper, and got to speak at the GA and EC conferences. Eventually, Dave felt sorry for me getting in the bathroom line with all the grad students, and reserved one of the prime speaker slots at another conference, arranging to have me quartered in accommodations with my own plumbing at the spectacular Jumer’s Bavarian Grotto in Champaign-Urbana.

Fans of the Madonna Inn in San Luis Obispo, California, which features the Cave Man Room, the Spanish Inquisition Room, and the Li’l Abner Suite, among too many others to mention, would feel right at home in Jumer’s. I gather that Jumer’s has gone more upscale and mainstream, but back in the mid-1990s it was one of the kitsch capitals of the United States. You should trust me on this — I have been to Gatorland, home of the Gator Jump-a-roo, nine times.

Jumer’s had an all-medieval, all-Teutonic all the time theme going on: lots of stuffed actual bears; lots of weaponry on the walls, including in the guest rooms; battle axes, chain mail, and heavy purple draperies everywhere you looked; many, many suits of armor; and an actual stuffed horse, wearing more armor. All were well secured to the walls and floor to discourage University of Illinois students trolling for dorm decor items. I didn’t want to leave for the GA and EC events down the road at the much duller university, but I did, and hired more grad students and borrowed more code.

My willingness to show up at Jumer’s earned me an invitation to do a keynote talk on evolutionary computation and finance at the 2002 Genetic and Evolutionary Computation Conference (GECCO) in New York. GECCO was the major confab for this branch of the AI world, and there were more than a few fellow travelers on the EC-finance trail. I was willing to talk about it, since I’d begun to have some doubts and felt that maybe I’d learn more than I gave away. This chapter is based on that talk.

The AI Spring? — Precursors to Using Genetic Algorithms on Wall Street

The AI winter was not just a write-off for the venture capitalists who had drunk too deeply of the Kool-Aid; it spawned many genuine innovations, which came from questioning in a scientific way what had gone wrong. The symbolic predicate calculus logic programming view of AI had its limitations. Learning how to solve problems in the really messy, noisy, dynamic world was different from theorem proving and chess.

Scientists looking for successful models of learning and adaptive behavior do not have to look far. Birds do it, bees do it, even monkeys in the trees do it. But they all do it using wetware that we understand well enough to appreciate the crucial lessons for the next generation of AI paradigms. There is massive parallelism. Computation is going on all over the place, not in one instruction stream. Brains do not have accumulators.

AI went parallel. Thinking Machines, founded by computational superstar Danny Hillis (son-in-law of Marvin Minsky, the pope of symbolic AI), gathered some of the leading lights to build and program massive machines with up to 64K (65,536) processors. That is a lot more than one, but still a lot less than the 100 billion neurons in the brain.

You don’t need a machine with a billion processors to try out solutions that would use them. A simulator will do fine, if not as fast. For theory buffs, this is an example of the idea of a universal computation; a Turing machine or its equivalent can emulate anything you want. The Nintendo 64 emulators you can run on your PC to play Pac-Man are another.

The neural net movement exploited this idea, seeking to realize learning by mimicking structure and function. Another branch of the turn to biologically inspired approaches to learning used the intriguing idea of mimicking evolution. The mechanics of evolution at the chromosome level — the processes of mutation and crossover, dominant and recessive traits — are understood well. John Holland of the Santa Fe Institute proposed the idea of genetic algorithms, using computers to emulate evolution of solutions to problems, in order to use computer programs to evolve better programs.

Genetic Algorithms Test Whether Financial Models and Trading Rules Have Predictive Ability or Provide Alpha Returns Over the Portfolio Benchmark

Genetic algorithms are a tool for machine learning and discovery modeled on the time-tested process of Darwinian evolution. Potential forecasting models and trading rules are modeled as “chromosomes” containing all of their salient characteristics. A population of these solutions is allowed to “evolve,” with the fittest solutions rewarded by inclusion in subsequent generations. Each individual’s fitness is calculated explicitly as a payoff.

For example, fitness can be measured by predictive ability or alpha (excess return over the benchmark). Solutions with the lowest fitness become extinct in a few generations.

Variety is introduced into the population of solutions by mimicking the natural processes of crossover and mutation. Crossover effectively combines features of fit models to produce fitter models in subsequent generations. In crossover, we blend chromosomes (bit strings) defining two successful models in the hope of developing a still better model.

Mutation stirs the pot and introduces variations that would not be produced by crossover. Here we randomly alter any bit in any chromosome to create a mutation. Most fail badly, but a few survive. As long as we are playing God, we can give the fittest members of the population a free pass into the next generation without participating in the breeding and selection cycle.
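A compact sketch of the loop just described, with a toy fitness function and arbitrary parameters standing in for a real forecasting model: bit-string chromosomes, tournament selection, single-point crossover, mutation, and a free pass for the elites.

```python
import random

random.seed(0)
CHROM_LEN, POP_SIZE, GENERATIONS = 16, 30, 40
CROSSOVER_RATE, MUTATION_RATE, ELITES = 0.9, 0.02, 2

def fitness(chrom):
    """Toy payoff standing in for predictive ability or alpha: count of 1 bits."""
    return sum(chrom)

def select(pop):
    """Tournament selection: the fitter of two random individuals survives."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    if random.random() < CROSSOVER_RATE:
        cut = random.randrange(1, CHROM_LEN)          # single-point crossover
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(chrom):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(pop, key=fitness, reverse=True)
    next_gen = ranked[:ELITES]                        # elitism: free pass for the fittest
    while len(next_gen) < POP_SIZE:
        next_gen.append(mutate(crossover(select(pop), select(pop))))
    pop = next_gen

print("Best fitness after evolution:", fitness(max(pop, key=fitness)))
```

In a financial application the chromosome would encode model or trading-rule parameters and the fitness function would be a backtested payoff, but the selection-crossover-mutation loop is the same.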

Using Genetic Algorithms for Financial and Investment Management Applications

Genetic algorithms have been used successfully in many contexts, including meteorology, structural engineering, robotics, econometrics, and computer science. The genetic algorithm is particularly appealing for financial applications because of its robust nature and the importance of the payoff in guiding the process.

The genetic algorithm is robust in the sense that very few restrictions are placed on the form of the financial model to be optimized. …


All notes for this chapter about Computerized Investing, Genetic Algorithms, Artificial Intelligence Applications, and Evolutionary Computation on Wall Street:

This article originally appeared in the Winter 2003 issue of the Journal of Investing. It is reprinted in Nerds on Wall Street with permission. To view the original article, please go to iijoi.com

1. A good starting point for genetic algorithms is ILLIGAL, the GA lab at the University of Illinois (www.illigal.uiuc.edu/web/). For genetic programming, John Koza’s work is found here: www.genetic-programming.org/. The Santa Fe Institute, where many of these ideas first got started, is still in the game: www.santafe.edu/research/topics-innovation-evolutionary-systems.php

2. There are videos of some of these early examples accompanying John Koza’s books, Genetic Programming I and II (Cambridge, MA: MIT Press). A search for “genetic algorithm demonstrations” turns up hundreds.

3. First Quadrant (www.firstquadrant.com), in Pasadena, California. Assets under management were in excess of $20 billion; clients were primarily pension funds.

4. Thanks to Andy Lo of MIT for this clear view of the central issues, discussed in many other contexts in his fine text Econometrics of Financial Markets (written with John Y. Campbell and A. Craig MacKinlay; Princeton University Press, 1996) and popular work A Non-Random Walk Down Wall Street (written with A. Craig MacKinlay; Princeton University Press, 1999).

5. Actually, just the variable specification portion; as will be discussed later, the first two segments were fixed.

6. Using the formula n!/[(n – r)!r!] and recognizing that there are 192 variations on each variable.

7. For information on these models, see “A Disciplined Approach to Global Asset Allocation,” by Robert D. Arnott and Robert M. Lovell, Jr., Financial Analysts Journal (January/February 1989).

.

Chapter 07 – A Little Artificial Intelligence Goes a Long Way on Wall Street


A Little AI Goes a Long Way on Wall Street: Artificial Intelligence and Securities Trading

“If you give someone a program, you will frustrate them for a day; if you teach them how to program, you will frustrate them for a lifetime.”

This is a history and technical overview of one of the earliest artificial intelligence (AI) successes in securities trading. In the Introduction, I described the early experiences in the late 1980s at the MIT Artificial Intelligence Laboratory spin-offs LISP Machines and Inference to apply their tools and techniques on Wall Street. Once we stopped blowing air at the subject and tried doing something useful with real market data, it became obvious that the LISP world and Wall Street were far from compatible.

LISP was (and is) an elegant, mathematically pure approach to computation that made for some remarkable feats of programming. My very first exposure to anything from the AI world came in 1971, when I was a newbie at MIT. Up in the truly strange Technology Square AI Lab machine room, filled with humming PDP-10s programmed to push the boundaries of computer science (and to operate the vending machines in the hall), someone showed me Macsyma, the first symbolic math program, developed by Joel Moses. Computers had been doing math in the sense of calculations from the beginning. ENIAC did ballistics. Big science machines did big numerical science problems from nuclear physics to meteorology.(1) In all cases, what was going on was that the formulas were in the program; then the machine read in all the numerical inputs and ran with it to produce numerical answers.

The difference in Macsyma was that the formula or equation itself was the input, and the machine produced transformations of formulas or solutions to equations in the same symbolic language used in abstract, non-numerical math. It could take derivatives, do integrals, and do fancy matrix manipulations, all in terms of the x’s, y’s, integral signs, d/dx’s, and all the rest. When you asked for the derivative of “x^3 + x^2 + x” you got “3x^2 + 2x + 1” and, unlike all the programs that preceded it, Macsyma didn’t have to know the value of x to do this.
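Readers who want to see the flavor of symbolic computation can reproduce that derivative today with the open source sympy library, a distant descendant of the Macsyma idea (this is modern Python, of course, not Macsyma itself):

```python
import sympy as sp

x = sp.symbols("x")
expr = x**3 + x**2 + x

# The input and output are formulas, not numbers; x never needs a value.
print(sp.diff(expr, x))        # -> 3*x**2 + 2*x + 1
print(sp.integrate(expr, x))   # -> x**4/4 + x**3/3 + x**2/2
```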

It was absolutely amazing to see. Macsyma utterly blew us away. The median math SAT score for MIT guys hanging in the AI Lab was 800. Everyone thought that, while they had a tough time getting a date and maybe were a little confused on personal hygiene, they were BSDs when it came to doing integrals and derivatives. And here is this machine solving problems in a second that would take any of us a week (likely with a mistake), and solving problems in 10 seconds that we couldn’t touch. It was a humbling experience, and the first time I experienced awe at what clever people could do with computers.

When the Macsyma symbolic math system was first run, it found hundreds of errors in the CRC Handbook tables of integrals and derivatives. The Handbook, at the time, was in its 42nd edition, and on the bookshelf of every working engineer and scientist. Other programs proved theorems, solved logic problems, and played more than passable chess.

But all of this logical magic came with a great deal of baggage. The showstopper was the long pause LISP had to take periodically for “garbage collection” to recover the memory left behind as programs ran. The ability to change large, complex data structures on the fly allowed LISP to deal with the complexity of problems like symbolic integration, but the need to clean up after those changes created the need for garbage collection.(2)

Wall Street Equity Hedging and Early LISP Trading Systems

When we ran our first, very simple LISP trading systems demonstrations (crossover rules, for the most part) using recorded data for our visitors from Wall Street, we saw their eyes glaze over when, in the middle of the simulated run, the machine would take a break for a few minutes and we would offer more coffee.
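The crossover rules mentioned were presumably of the moving-average variety; a generic sketch on made-up prices (not the recorded market data used in those demos) looks something like this:

```python
def moving_average(prices, window):
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

# Made-up price series for illustration.
prices = [100, 101, 102, 101, 100, 99, 98, 99, 101, 103, 104, 103]
fast, slow = moving_average(prices, 3), moving_average(prices, 5)

# Align the two series on the same end bar and look for crossovers.
offset = len(fast) - len(slow)
for i in range(1, len(slow)):
    prev_diff = fast[i - 1 + offset] - slow[i - 1]
    curr_diff = fast[i + offset] - slow[i]
    if prev_diff <= 0 < curr_diff:
        print(f"BUY signal at bar {i + 4}: fast MA crossed above slow MA")
    elif prev_diff >= 0 > curr_diff:
        print(f"SELL signal at bar {i + 4}: fast MA crossed below slow MA")
```

Simple as it is, even this loop pauses badly if the runtime stops for garbage collection in the middle of a live feed, which was exactly the problem.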

My colleague Dale Prouty, a brilliant Caltech Ph.D. physicist whose metabolism seemed to make his own caffeine, and I quickly realized there was no way LISP systems would fit in trading. Similar realizations, in other contexts, contributed to the AI winter, described in Chapter 2 .

Dale had heard that PaineWebber’s equity block desk was looking for proposals for an “intelligent hedging advisory system” for the desk. Ideally, the block traders would “go home flat,” with no net long or short exposure to the market, to sectors, or to other common equity factors. This was not always possible, so the firm had more overnight risk exposure than it wanted. There were many ways to reduce that risk; portfolios of long or short positions in options, futures, and stocks could be constructed to offset the risk on the desk’s book, and unwound as that risk changed. These differed in their effectiveness as a hedge (all those Greek letters dear to the quants) and in the implementation cost of putting them on and taking them off.
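The simplest slice of that hedging problem, ignoring the options Greeks and implementation costs that IHAS actually weighed, is neutralizing the book’s net dollar beta with index futures; the positions and contract notional below are hypothetical:

```python
# Hypothetical end-of-day block desk positions: (shares, price, beta).
book = [
    ( 50_000, 42.00, 1.2),   # long position
    (-30_000, 85.00, 0.9),   # short position
    ( 20_000, 18.50, 1.5),
]

# Dollar beta of the book: signed market value times beta, summed.
dollar_beta = sum(shares * price * beta for shares, price, beta in book)

# Offset it with index futures (hypothetical contract notional of $250,000).
futures_notional = 250_000
contracts = -dollar_beta / futures_notional

print(f"Net dollar beta of book: ${dollar_beta:,.0f}")
print(f"Futures contracts to go home roughly flat: {contracts:.1f}")
```

The real system had to weigh many such candidate hedges against their effectiveness and their cost of being put on and taken off.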

Prouty read everything he could find on hedging, shuffled in what he knew about expert systems, and after a competition with some of the bigger names in whiz-bang computer consulting (IBM, Coopers & Lybrand, and Arthur D. Little, as I recall), he walked off with a million-dollar contract to build an Intelligent Hedging Advisory System (IHAS) for PaineWebber. Dale and I had commiserated over our woeful situation of banging the square peg of LISP systems into the round hole of trading. His new contract let us do something about it. IHAS was pretty specialized, and there were only so many block trading desks on Wall Street, but pieces of the solution had much wider applicability. It would fund the development of software components with much broader appeal.

Early Quantitative Finance Systems for Institutional Equity Investors and Institutional Money Managers

We decided to start a new company, Integrated Analytics Corporation (IAC), to use what made sense from the LISP world, but without LISP and LISP machines. Sun Microsystems was emerging as the platform of choice for serious computation. The DOS-based 640K PCs of the time were great for WordStar and Lotus 1-2-3, but not what you would choose to analyze the torrent of data on a market feed in real time.

Our product, which we called MarketMind (later incorporated into QuantEx), was written in the mainstream language C, and ran on Sun hardware. It included only as much AI as the financial user community could deal with, and integrated tightly with their electronic environment. Computational resources were used to make a simple, highly application-specific user interface. This combination of advanced technologies, appropriately applied, resulted in a system used by many of the largest institutional equity investors and money managers in the United States. The system was directly linked to the New York Stock Exchange (NYSE) and other electronic equity execution channels. This was not a prototype or a proposal. This was real and was in wide use for many years, often generating transaction volumes exceeding five million shares per day.

Traders used a special purpose rule-based language to describe a wide variety of market conditions. The system kept up with high-speed incoming market data in real time. Its displays told traders when, where, and how strongly their specified conditions matched the current state of the market. Trading recommendations were formulated based on those specified conditions. Finally, and most importantly, direct electronic execution channels allowed quick action on these recommendations.
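MarketMind’s actual rule syntax isn’t reproduced here; the following is a hypothetical Python rendering, with invented field names and thresholds, of the kind of condition a trader might have expressed:

```python
# Hypothetical market snapshot for one stock; fields are illustrative only.
tick = {
    "symbol": "XYZ",
    "last": 31.40,
    "vwap": 31.10,
    "bid_size": 120,
    "ask_size": 40,
    "volume": 850_000,
    "avg_volume_20d": 600_000,
}

def momentum_alert(t):
    """Fire when price runs above VWAP on heavy volume with a thin offer."""
    return (t["last"] > t["vwap"] * 1.005
            and t["volume"] > 1.25 * t["avg_volume_20d"]
            and t["ask_size"] < t["bid_size"] / 2)

if momentum_alert(tick):
    print(f"{tick['symbol']}: condition matched, consider working a buy order")
```

The production system evaluated rules like this against the full real-time feed and routed the resulting recommendations straight to electronic execution channels.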

Tight integration with both market data and electronic execution channels, combined with an appropriate, accessible user interface and a high level of support, contributed to a major AI success story with MarketMind/QuantEx. The transactions flowing through these systems produced more after-tax revenue on a busy day than many other AI applications generated over their entire operational lifetimes.

Prehistory of Artificial Intelligence Applications on Wall Street

Summer 1987. AI godfather Marvin Minsky warns American Association for Artificial Intelligence (AAAI) Conference attendees in Seattle that “the AI winter will soon be upon us.” This isn’t news to most of them. Many of the pioneer firms have been pared down to near invisibility. AI stocks have dropped so low that Ferraris are being traded in for Yugos in Palo Alto and Cambridge.

On Wall Street, the expert systems that were last year’s breakthrough of the century are this year’s R&D write-off. LISP(3) machines can be had in lower Manhattan for 10 cents on the dollar. What went wrong? Minsky and many of the others in the AI community had it exactly right: Overblown expectations, awkward user interface …


All notes for this chapter about artificial intelligence applications, computerized investing, and Wall Street Analytics:

This article originally appeared (coauthored by Yossi Beinart) in the Winter 1996 issue of the Journal of Portfolio Management. It is reprinted in Nerds on Wall Street with permission.

1. One of those numerical meteorology problems led to the discovery of deterministic chaos, the strong dependence of a result on what was presumed to be meaninglessly small differences in the inputs. This was popularized as the so-called butterfly effect, since the seemingly insignificant pressure changes caused by a fluttering butterfly, well within the limits of error of barometers used to measure them, could result in wildly different simulated future weather and climate outcomes. James Gleick’s book, Chaos (New York: Viking Penguin, 1987), is the place to start for the story of chaos.

2. Garbage collection is just gathering up blocks of memory no longer needed by the program. It is part of most implementations of the LISP language. It is very useful to programmers, who then don’t have to keep track of memory themselves. It is obviously not a good idea to use in a real-time application like trading unless it can be accomplished without stopping.

3. LISP is a computer language particularly suited to manipulation of symbols (as opposed to numbers). It is widely used in the academic AI community. LISP machines are workstations with specialized hardware to run LISP programs efficiently.

4. First described in “Knowledge-Based Systems for Financial Applications,” D. Leinweber, IEEE Expert (Fall 1988).

5. The names “Gensym” and “G2” are subtle LISP jokes. LISP and other AI programming languages sometimes need to create variables, which need names. Gensym is the name of the “generate symbol” function that returns these names. The first time it’s called, you get “G1,” the second time, “G2,” and so on. When Gensym’s founders, including Ed Fredkin of MIT’s AI Lab, needed names for both company and product, they did what LISP programmers do and called them “Gensym” and “G2.” They are found here: www.gensym.com


Chapter 05 – A Gentle Introduction to Computerized Investing


Computerized Investing, Index Funds, Quantitative Investing, and Active Management

“Life would be so much easier if we only had the source code.” — Hacker proverb

The beginning of index investing in the 1970s was the result of a convergence of events, one of those ripe apple moments. Institutional investors began to use firms like A.G. Becker to actually compare the total performance of their hired managers with index benchmarks, and found that many of them fell short, especially after the substantial fees the investors were paying.

Princeton professor Burton Malkiel popularized the academic efficient market arguments in A Random Walk Down Wall Street, writing in 1973, “[We need] a new investment instrument: a no-load, minimum-management-fee mutual fund that simply buys the hundreds of stocks making up the market averages and does no trading [of securities]. . . . Fund spokesmen are quick to point out, ‘you can’t buy the averages.’ It’s about time the public could.”

Computers had gotten to the point where one could be put in an office setting without having to tear out walls and bring in industrial-strength air-conditioning, raised floors for the cables, and special power systems. It was slightly easier to install a computer in an office building than a particle accelerator, but not by much. I recall visiting an insurance company in Hartford one winter where they were using their IBM System 360 to heat several floors of a large building. Minicomputers, like the Digital Equipment Corporation (DEC) systems described in the Introduction, the Data General Nova, and the Prime, all from companies in the first Silicon Valley, Boston’s Route 128 (the same crowd that came to the TX-2 going-away party), were manageable enough to fit in a normal office setting. You needed to crank the AC and have a high tolerance for noise, but they didn’t break the bank, or the floor.

The idea, the desire, and the means to achieve it all came together in the early 1970s for index funds. But this is a chapter about alpha strategies, the anti-index funds — so why are we talking about them at all? Because they are a starting point for all active quantitative computerized equity strategies.

Indexing 101 and Tracking Error

Calculating an index is fairly simple. Multiply the prices of the stocks in the index by their weights (usually their share of the total capitalization of the index constituents), add them up, and there’s your index. Charles Dow, a journalist, started doing it with a pencil and paper in 1896. You need to make adjustments for mergers, splits, and the like, and can get fancy, including dividends for total return.
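As a rough illustration of that arithmetic (the prices, share counts, and divisor below are all invented), a capitalization-weighted index level can be computed like this:

    # Minimal sketch of a capitalization-weighted index level (invented data).
    # Each constituent contributes price * shares outstanding; the divisor scales the
    # total to a convenient base level and is adjusted for splits, mergers, and the like.
    constituents = {
        # symbol: (price, shares_outstanding) -- hypothetical numbers
        "AAA": (50.00, 1_000_000),
        "BBB": (20.00, 5_000_000),
        "CCC": (10.00, 2_000_000),
    }

    divisor = 1_700_000.0  # chosen here so the index starts at 100

    def index_level(constituents, divisor):
        total_cap = sum(price * shares for price, shares in constituents.values())
        return total_cap / divisor

    print(round(index_level(constituents, divisor), 2))  # 100.0

Adjusting the divisor, rather than the published level, is the usual way index providers absorb splits and membership changes without jolting the reported number.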

Running an index fund is less simple. You have to figure out how many of hundreds or thousands of different stocks to buy (or sell) each time cash moves in or out of the portfolio in the form of investments, withdrawals, and dividends. For the most common choice, the S&P 500, there are 500 stocks to deal with. For a total market index like the Russell 3000, there are 3,000. For the Wilshire 5000, there are about 6,700.

The measure of how well you are doing in an index fund is clearly not alpha; that should be zero. It is tracking error, a measure of the difference between the calculated index and the actual portfolio. An ideal index fund has a tracking error of zero. Real-world index funds have tracking errors around 0.1 percent. If it gets much larger than that, someone is confused.
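Tracking error is conventionally measured as the standard deviation of the difference between portfolio and benchmark returns (the same definition used in the notes to this chapter). Here is a minimal sketch with invented monthly returns, annualized by the square root of 12; the period and annualization convention are assumptions of this example:

    # Tracking error: standard deviation of (portfolio return - index return).
    # The return series below are invented for illustration.
    import math
    import statistics

    portfolio_returns = [0.012, -0.004, 0.021, 0.008, -0.011, 0.015]
    index_returns     = [0.011, -0.005, 0.020, 0.009, -0.010, 0.014]

    active = [p - b for p, b in zip(portfolio_returns, index_returns)]
    tracking_error = statistics.stdev(active) * math.sqrt(12)  # annualize monthly data

    print(f"annualized tracking error: {tracking_error:.2%}")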

Index Funds: The Godfather of Quantitative Investing

Index funds have an interesting history. Prior to the 1960s, most institutional equity portfolios were managed by bank trust departments, and performance reporting was not the refined art that it has become today. Bill Fouse, one of the founders of the world’s first indexing group at Wells Fargo in the 1970s, tells stories of when performance reporting by a bank trust department consisted of a table listing all stocks held, the acquisition price of each, the current price, and the size of the position. This introduced some unusual biases into the perception of these reports. Looking at stocks just by price and ignoring dividends tends to favor stocks that don’t pay dividends. Simply listing acquisition price and current price ignores the dimension of time, and the lack of comparison to any well-defined benchmark (like the S&P 500 index) leaves the meaning of even a well-studied report unclear.

A.G. Becker was the first firm to compare the total return of a stock portfolio to an index. Since an S&P 500 index fund is just a passive investment consisting of a capitalization-weighted portfolio of the 500 stocks in the index, it will do no better than the index — and if managed effectively, no worse.

An index fund can’t just be started up and left alone to run itself forever. The stocks in the index change; dividends need to be reinvested, and most significantly, there are cash flows in and out of the portfolio from new funding or payment requirements. All of these events result in a need to trade, and it costs money to trade, not just in the explicit commissions, but in the market impact incurred when large volumes of stock are bought or sold. Managing an index fund effectively means keeping control of trading costs. These costs can drag the index portfolio’s performance down from the theoretically calculated index we see reported all the time. The reported index levels don’t include any real or simulated trading costs. They incur no commission costs and no market impact. For smaller index portfolios, under $20 million or so, the trading costs can become a significant problem. The lower-weighted stocks in the index will be held in very small quantities, and the cost associated with trading 100 shares is much more than one-tenth of the cost of trading 1,000, or one-hundredth the cost of trading 10,000.
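A toy cost model makes the point about small trades; the flat ticket charge and per-share figure below are invented for illustration, not estimates of anyone's actual costs:

    # Toy order-cost model (invented numbers): a fixed per-ticket cost plus a per-share
    # cost means a 100-share order costs far more per share than a 10,000-share order.
    FIXED_TICKET_COST = 15.00  # assumed flat cost per order
    PER_SHARE_COST = 0.01      # assumed commission/impact per share

    def order_cost(shares):
        return FIXED_TICKET_COST + PER_SHARE_COST * shares

    for shares in (100, 1_000, 10_000):
        total = order_cost(shares)
        print(f"{shares:>6} shares: ${total:8.2f} total, {100 * total / shares:5.2f} cents/share")

For very large orders the picture reverses: market impact grows with size, which is the next paragraph's point, so a realistic cost model would not be linear.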

For large index portfolios, the sheer size of trades can impose another trading cost in the form of market impact. Even so, there are economies of scale to be had in managing large index funds. This is reflected in the current business situation, in which there are a small number of large index fund providers around the world, such as State Street and Barclays Global Investors. Estimates of total assets managed using this sort of passive approach, in a variety of markets, now exceed $4 trillion.

Setting aside considerations of trading costs for now, the idea of an index fund is a very simple one. Nevertheless, it is a quantitative concept, and running an index fund requires the use of a computer. The most straightforward way to manage an index fund is simply to hold all of the stocks in the index: every single one of them, each in its index weight. This is illustrated in Figure 5.1, which represents all of the stocks in the S&P 500 put into a portfolio. This is simple and will, in fact, ignoring trading costs, replicate the index exactly. This type of index fund is called a full replication fund.

Even with full replication funds, the trading costs and fixed 100-share increments for holdings will cause the portfolio to have a performance somewhat different from that of the index.
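A small sketch of the full-replication arithmetic, with an invented three-stock “index,” shows both ideas: target share counts come straight from the index weights, and rounding to 100-share lots produces exactly the kind of drift just described.

    # Full replication sketch (invented weights and prices): hold each stock at its
    # index weight, rounded to the nearest 100-share lot.
    portfolio_value = 1_000_000.00

    index = {
        # symbol: (index_weight, price) -- hypothetical numbers
        "AAA": (0.50, 83.00),
        "BBB": (0.30, 27.00),
        "CCC": (0.20, 13.00),
    }

    for symbol, (weight, price) in index.items():
        exact_shares = portfolio_value * weight / price
        round_lot = int(round(exact_shares / 100.0)) * 100  # nearest 100-share lot
        actual_weight = round_lot * price / portfolio_value
        print(f"{symbol}: target {exact_shares:9.1f} shares, hold {round_lot:6d}, "
              f"weight {actual_weight:6.2%} vs. target {weight:6.2%}")

With 500 or 3,000 names instead of three, the low-weight stocks end up in tiny lots, which is where the trading-cost problem for small index portfolios comes from.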


All notes for this chapter about computerized investing, index funds, quantitative investing, and active management:

1. Tim Loughran and Jay R. Ritter, “The New Issues Puzzle,” Journal of Finance 50, no. 1 (March 1995): 23–51.

2. Vanessa O’Connell, “Some Stock Funds Beat Rivals by Following Insiders’ Trades,” Wall Street Journal, January 29, 1997.

3. This is true when the long and short portfolios have equal betas, or sensitivity to broad market moves. For long-short portfolios where this is not the case, a portion of the overall return may be due to exposure to the overall market.

4. Jia Ye, “Excess Returns, Stock Splits, and Analyst Earnings Forecasts,” Journal of Portfolio Management 25, no. 2 (1999): 70–76.

5. See www.starmine.com for a world of information on this subject.

6. David Leinweber, “Uses and Views of Equity Style,” in Handbook of Equity Style Management, ed. T. Daniel Coggin and Frank J. Fabozzi (New Hope, PA: Fabozzi Associates, 1997).

7. The all-time classic paper on trading costs is “Implementation Shortfall” by Andre Perold, published in the Journal of Portfolio Management (Spring 1988). It is a hot topic in algo trading, so a search may be overwhelming. Perold was the first to demonstrate the significance of trading costs in such a persuasive manner. The transaction cost measurement industry, which followed, was really originated by one firm, Plexus Group, founded by Wayne Wagner and now part of Investment Technology Group, Inc. (ITG). Wayne’s personal perspective is found in “The Incredible Story of Transaction Cost Management: A Personal Recollection,” Journal of Trading 3, no. 3 (Summer 2008).

8. See Founders of Modern Finance (Research Foundation of the Institute of Chartered Financial Analysts, 1991; www.aimr.org) for the goods from the founders themselves, or Capital Ideas by Peter Bernstein for the salient points, intellectual history, and best stories.

9. Visit www.stanford.edu/~wfsharpe/ for the word from the Crackpot himself. He has an extensive web site on quantitative finance.

10. The difference between index enhancement and active management is a matter of degree. Enhanced index and fully active portfolios each have two components: one piece to provide the benchmark index return and another to provide additional return on top of the benchmark. The difference between enhanced index strategies and active management is really just a matter of the relative sizes of the two pieces. Use just a little bit of active management, and you are enhancing the index. Use some more and you have an active strategy. Make really large active, often leveraged, bets and you have a hedge fund.

It is like the progression from 3.2 percent beer to 151-proof rum. The active ingredient is the same; the difference is a matter of degree. The distinction can be quantified by specifying a target tracking error for the portfolio, analogous to the proof content of your favorite bar beverages. The tracking error is just the standard deviation of the pre-tax difference between the portfolio return and the benchmark index return. A perfect index fund would have a tracking error of zero before any taxes and transaction costs are taken into consideration. Enhanced index funds, for example, often have a tracking error under a few percent, but you want to make sure that it stays that way over time.

11. Value and growth are called “equity styles.” When all stocks are ranked by book-to-price ratio, value stocks are in the top half and growth stocks are in the bottom half. Net performance depends upon appreciation, costs, and taxes; taxes for individuals may vary depending upon the long-term capital gains generated by the various equity styles.

Regarding the underlying assets, value stocks tend to have lots of real assets, like land and plants. Extreme examples are utilities. Growth stocks are generally the more exciting, newer firms, with ideas and products that get on the covers of magazines. Historically, value stocks have outperformed growth stocks, though this changed for a while in the late 1990s when the tech stocks, all growth stocks, did so well. Now we have returned to the old pattern.

