The enemy of knowledge is not ignorance, it’s the illusion of knowledge (Stephen Hawking)

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so (Mark Twain)

Invest with smart knowledge and objective odds

GOOD READ: THE IT THREAT

On January 24, 2014, I posted Google chief warns of IT threat. Danny, my geek son, had been warning me about that for most of last year. It is now a reality. The Economist ran a great article (Tks Gary) on that last week (The future of jobs, The onrushing wave). Some excerpts:

(…) A 2013 paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, argued that jobs are at high risk of being automated in 47% of the occupational categories into which work is customarily sorted. That includes accountancy, legal work, technical writing and a lot of other white-collar occupations.

Answering the question of whether such automation could lead to prolonged pain for workers means taking a close look at past experience, theory and technological trends. The picture suggested by this evidence is a complex one. It is also more worrying than many economists and politicians have been prepared to admit. (…)

The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age”, a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. (…)

A startling progression of inventions seems to bear their thesis out. Ten years ago technologically minded economists pointed to driving cars in traffic as the sort of human accomplishment that computers were highly unlikely to master. Now Google cars are rolling round California driver-free, and no one doubts such mastery is possible, though the speed at which fully self-driving cars will come to market remains hard to guess. (…)

The machines are not just cleverer, they also have access to far more data. The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers. Text-mining programs will displace professional jobs in legal services. Biopsies will be analysed more efficiently by image-processing software than by lab technicians. Accountants may follow travel agents and tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into good-enough news stories.

Jobs that are not easily automated may still be transformed. New data-processing technology could break “cognitive” jobs down into smaller and smaller tasks. As well as opening the way to eventual automation this could reduce the satisfaction from such work, just as the satisfaction of making things was reduced by deskilling and interchangeable parts in the 19th century. (…)

There will still be jobs. Even Mr Frey and Mr Osborne, whose research speaks of 47% of job categories being open to automation within two decades, accept that some jobs—especially those currently associated with high levels of education and high wages—will survive. Tyler Cowen, an economist at George Mason University and a much-read blogger, writes in his most recent book, “Average is Over”, that rich economies seem to be bifurcating into a small group of workers with skills highly complementary with machine intelligence, for whom he has high hopes, and the rest, for whom not so much.

And although Mr Brynjolfsson and Mr McAfee rightly point out that developing the business models which make the best use of new technologies will involve trial and error and human flexibility, it is also the case that the second machine age will make such trial and error easier. It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers. Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.

In a forthcoming book Thomas Piketty, an economist at the Paris School of Economics, argues along similar lines that America may be pioneering a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers” grab a growing share of national income and accumulate an increasing concentration of national wealth. The rise of the middle-class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics. (…)

TWO GOOD READS: HUNT & HUNT

I came across two interesting articles which are related, although their authors, both named Hunt, are not. Lacy Hunt argues that all the QEs are experimental failures with unknown (uncertain) consequences. Ben Hunt explains the difference between decision-making under risk and decision-making under uncertainty.

Federal Reserve Policy Failures Are Mounting

Lacy H. Hunt, Ph.D., Economist

(…) Four considerations suggest the Fed will continue to be unsuccessful in engineering increasing growth and higher inflation with their continuation of the current program of Large Scale Asset Purchases (LSAP):

  • First, the Fed’s forecasts have consistently been too optimistic, which indicates that their knowledge of how LSAP operates is flawed. LSAP obviously is not working in the way they had hoped, and they are unable to make needed course corrections.
  • Second, debt levels in the U.S. are so excessive that monetary policy’s traditional transmission mechanism is broken.
  • Third, recent scholarly studies, all employing different rigorous analytical methods, indicate LSAP is ineffective.
  • Fourth, the velocity of money has slumped, and that trend will continue—which deprives the Fed of the ability to have a measurable influence on aggregate economic activity and is an alternative way of confirming the validity of the aforementioned academic studies.

1. The Fed does not understand how LSAP operates

If the Fed were consistently getting the economy right, then we could conclude that their understanding of current economic conditions is sound. However, if they regularly err, then it is valid to argue that they are misunderstanding the way their actions affect the economy.

During the current expansion, the Fed’s forecasts for real GDP and inflation have been consistently above the actual numbers. (…)

One possible reason why the Fed have consistently erred on the high side in their growth forecasts is that they assume higher stock prices will lead to higher spending via the so-called wealth effect. The Fed’s ad hoc analysis on this subject has been wrong and is in conflict with econometric studies. The studies suggest that when wealth rises or falls, consumer spending does not generally respond, or if it does respond, it does so feebly. During the run-up of stock and home prices over the past three years, the year-over-year growth in consumer spending has actually slowed sharply from over 5% in early 2011 to just 2.9% in the four quarters ending Q2.

Reliance on the wealth effect played a major role in the Fed’s poor economic forecasts. That LSAP has not been able to spur growth and meet the Fed’s forecasts to date certainly undermines the Fed’s continued assurances that this time will truly be different.

2. US debt is so high that Fed policies cannot gain traction

Another impediment to LSAP’s success is the Fed’s failure to consider that excessive debt levels block the main channel of monetary influence on economic activity. Scholarly studies published in the past three years document that economic growth slows when public and private debt exceeds 260% to 275% of GDP. In the U.S., from 1870 until the late 1990s, real GDP grew by 3.7% per year. It was during 2000 that total debt breached the 260% level. Since 2000, growth has averaged a much slower 1.8% per year.

Once total debt moved into this counterproductive zone, other far-reaching and unintended consequences became evident. The standard of living, as measured by real median household income, began to stagnate and now stands at the lowest point since 1995. Additionally, since the start of the current economic expansion, real median household income has fallen 4.3%, which is totally unprecedented. Moreover, both the wealth and income divides in the U.S. have seriously worsened.

Over-indebtedness is the primary reason for slower growth, and unfortunately, so far the Fed’s activities have had nothing but negative, unintended consequences.

3. Academic studies indicate the Fed’s efforts are ineffectual

(…) It is undeniable that the Fed has conducted an all-out effort to restore normal economic conditions. However, while monetary policy works with a lag, the LSAP has been in place since 2008 with no measurable benefit. This lapse of time is now far greater than even the longest of the lags measured in the extensive body of scholarly work regarding monetary policy.

Three different studies by respected academicians have independently concluded that indeed these efforts have failed. These studies, employing various approaches, have demonstrated that LSAP cannot shift the Aggregate Demand (AD) Curve. (…)

The papers I am talking about were presented at the Jackson Hole Monetary Conference in August 2013. The first is by Robert E. Hall, one of the world’s leading econometricians and a member of the prestigious NBER Business Cycle Dating Committee. He wrote, “The combination of low investment and low consumption resulted in an extraordinary decline in output demand, which called for a markedly negative real interest rate, one unattainable because the zero lower bound on the nominal interest rate coupled with low inflation put a lower bound on the real rate at only a slightly negative level.”
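
To make the arithmetic behind that constraint explicit (the inflation figure below is assumed for illustration, not taken from Hall's paper): with the nominal rate i floored at zero, the Fisher relation r = i - π puts a floor under the real rate r.

```latex
% Floor on the real rate implied by the zero lower bound (illustrative numbers).
% Fisher relation: real rate = nominal rate - inflation.
% With the nominal rate i bounded below by 0 and inflation assumed at about 1.5%:
\[
  r \;=\; i - \pi \;\ge\; 0 - \pi \;\approx\; -1.5\%
\]
```

So however weak the economy, the real rate cannot fall much below zero, which is exactly the "markedly negative real interest rate ... unattainable" point in the quote.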

Dr. Hall also wrote the following about the large increase in reserves to finance quantitative easing: “An expansion of reserves contracts the economy.” In other words, not only have the Fed not improved matters, they have actually made economic conditions worse with their experiments. (…)

The next paper is by Hyun Song Shin, another outstanding monetary theorist and econometrician and holder of an endowed chair at Princeton University. He looked at the weighted-average effective one-year rate for loans with moderate risk at all commercial banks, the effective Fed Funds rate, and the spread between the two in order to evaluate Dr. Hall’s study. He also evaluated comparable figures in Europe. In both the U.S. and Europe these spreads increased, supporting Hall’s analysis.

Dr. Shin also examined quantities such as total credit to U.S. non-financial businesses. He found that lending to non-corporate businesses, which rely on the banks, has been essentially stagnant. Dr. Shin states, “The trouble is that job creation is done most by new businesses, which tend to be small.” Thus, he found “disturbing implications for the effectiveness of central bank asset purchases” and supported Hall’s conclusions.

Dr. Shin argued that we should not forget how we got into this mess in the first place when he wrote, “Things were not right in the financial system before the crisis, leverage was too high, and the banking sector had become too large.” For us, this insight is highly relevant since aggregate debt levels relative to GDP are greater now than in 2007. Dr. Shin, like Dr. Hall, expressed extreme doubts that forward guidance was effective in bringing down longer-term interest rates.

The last paper is by Arvind Krishnamurthy of Northwestern University and Annette Vissing-Jorgensen of the University of California, Berkeley. They uncovered evidence that the Fed’s LSAP program had little “portfolio balance” impact on other interest rates and was not macro-stimulus. (…)

4. The velocity of money—outside the Fed’s control

The last problem the Fed faces in their LSAP program is their inability to control the velocity of money. The AD curve represents planned expenditures for nominal GDP. Nominal GDP is equal to the velocity of money (V) multiplied by the stock of money (M); thus GDP = M x V. This is Irving Fisher’s equation of exchange, one of the important pillars of macroeconomics.
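
A back-of-the-envelope illustration of that identity (the nominal GDP figure below is a rough 2013 value I am assuming for the example; the M2 figure is the one quoted later in this piece):

```python
# Equation of exchange: nominal GDP = M x V, so V = nominal GDP / M.
# Both figures are ballpark values, used for illustration only.
nominal_gdp = 16.6e12   # roughly 2013 nominal GDP, ~$16.6 trillion (assumed)
m2 = 10.8e12            # M2 money stock, ~$10.8 trillion (figure quoted later in the piece)

velocity = nominal_gdp / m2
print(f"M2 velocity = {velocity:.2f}")   # about 1.5
```

A velocity in that neighborhood is consistent with the observation below that the series sits at a multi-decade low.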

V peaked in 1997, as private and public debt were quickly approaching the nonproductive zone. Since then it has plunged. The level of velocity in the second quarter is at its lowest level in six decades. By allowing high debt levels to accumulate from the 1990s until 2007, the Fed laid the foundation for rendering monetary policy ineffectual. Thus, Fisher was correct when he argued in 1933 that declining velocity would be a symptom of extreme indebtedness just as much as weak aggregate demand.

Fisher was able to make this connection because he understood Eugen von Böhm-Bawerk’s brilliant insight that debt is future consumption denied. Also, we have the benefit of Hyman Minsky’s observation that debt must be able to generate an income stream to repay principal and interest, thereby explaining that there is such a thing as good (productive) debt as opposed to bad (non-productive) debt. Therefore, the decline in money velocity when there are very high levels of debt to GDP should not be surprising. Moreover, as debt increases, so does the risk that it will be unable to generate the income stream required to pay principal and interest.

(chart from Ed Yardeni)

Perhaps well intended, but ill advised

The Fed’s relentless buying of massive amounts of securities has produced no positive economic developments, but has had significant negative, unintended consequences.

For example, banks have a limited amount of capital with which to take risks with their portfolio. With this capital, they have two broad options: First, they can confine their portfolio to their historical lower-risk role of commercial banking operations—the making of loans and standard investments. With interest rates at extremely low levels, however, the profit potential from such endeavors is minimal.

Second, they can allocate resources to their proprietary trading desks to engage in leveraged financial or commodity market speculation. By their very nature, these activities are potentially far more profitable but also much riskier. Therefore, when money is allocated to the riskier alternative in the face of limited bank capital, less money is available for traditional lending. This deprives the economy of the funds needed for economic growth, even though the banks may be able to temporarily improve their earnings by aggressive risk taking.

Perversely, confirming the point made by Dr. Hall, a rise in stock prices generated by excess reserves may sap, rather than supply, funds needed for economic growth.

Incriminating evidence: the money multiplier

It is difficult to determine for sure whether funds are being sapped, but one visible piece of evidence confirms that this is the case: the unprecedented downward trend in the money multiplier.

The money multiplier is the link between the monetary base (high-powered money) and the money supply (M2); it is calculated by dividing M2 by the monetary base. Today the monetary base is $3.5 trillion, and M2 stands at $10.8 trillion, so the money multiplier is 3.1. In 2008, prior to the Fed’s massive expansion of the monetary base, the money multiplier stood at 9.3, meaning that $1 of base supported $9.30 of M2.
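
As a quick check of that arithmetic, using only the figures quoted above:

```python
# Money multiplier = M2 / monetary base (figures as quoted in the text).
monetary_base = 3.5e12   # monetary base, $3.5 trillion
m2 = 10.8e12             # M2, $10.8 trillion

multiplier_now = m2 / monetary_base
print(f"Money multiplier today = {multiplier_now:.1f}")   # about 3.1

# At the 2008 relationship ($1 of base supporting $9.30 of M2), far less base
# would be needed to support the same stock of M2:
multiplier_2008 = 9.3
print(f"Base implied by the 2008 multiplier = ${m2 / multiplier_2008 / 1e12:.1f} trillion")
```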

If reserves created by LSAP were spreading throughout the economy in the traditional manner, the money multiplier should be more stable. However, if those reserves were essentially funding speculative activity, the money would remain with the large banks and the money multiplier would fall. This is the current condition.

The September 2013 level of 3.1 is the lowest in the entire 100-year history of the Federal Reserve. Until the last five years, the money multiplier never dropped below the old historical low of 4.5 reached in late 1940. Thus, LSAP may have produced the unintended consequence of actually reducing economic growth.

Stock market investors benefited, but this did not carry through to the broader economy. The net result is that LSAP worsened the gap between high- and low-income households. When policy makers try untested theories, risks are almost impossible to anticipate.

The near-term outlook

Economic growth should be very poor in the final months of 2013. Growth is unlikely to exceed 1%—that is even less than the already anemic 1.6% rate of growth in the past four quarters.

Marked improvement in 2014 is also questionable. Nominal interest rates have increased this year, and real yields have risen even more sharply because the inflation rate has dropped significantly. Due to the recognition and implementation lags, only half of the 2013 tax increase of $275 billion will have been registered by the end of the year, with the remaining impact to come in 2014 and 2015.

Additionally, parts of this year’s tax increase could carry a negative multiplier of two to three. Currently, many of the taxes and other cost burdens of the Affordable Care Act are in the process of being shifted from corporations and profitable small businesses to households, thus serving as a de facto tax increase. In such conditions, the broadest measures of inflation, which are barely exceeding 1%, should weaken further. Since LSAP does not constitute macro-stimulus, its continuation is equally meaningless. Therefore, the decision of the Fed not to taper makes no difference for the outlook for economic growth.

Ben Hunt (Epsilon Theory) sent me this note, which I reproduce in its entirety because of its importance to the investment decision-making process.

Epsilon Theory: The Koan of Donald Rumsfeld

There are known knowns; there are things we know we know.

We also know there are known unknowns; that is to say, we know there are some things we do not know.

But there are also unknown unknowns – the ones we don’t know we don’t know.

Donald Rumsfeld

There is an unmistakable Zen-like quality to this, my favorite of Donald Rumsfeld’s often cryptic statements. I like it so much because what Rumsfeld is describing perfectly, in his inimitable fashion, are the three forms of game-theoretic decision-making:

Decision-making under certainty – the known knowns. This is the sure thing, like betting on the sun coming up tomorrow, and it is a trivial sub-set of decision-making under risk where probabilities collapse to 0 or 1.

Decision-making under risk – the known unknowns, where we are reasonably confident that we know the potential future states of the world and the rough probability distributions associated with those outcomes. This is the logical foundation of Expected Utility, the formal language of microeconomic behavior, and mainstream economic theory is predicated on the prevalence of decision-making under risk in our everyday lives.

Decision-making under uncertainty – the unknown unknowns, where we have little sense of either the potential future states of the world or, obviously, the probability distributions associated with those unknown outcomes. This is the decision-making environment faced by a Stranger in a Strange Land, where traditional cause-and-effect is topsy-turvy and personal or institutional experience counts for little, where good news is really bad news and vice versa. Sound familiar?
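
To make the contrast between the last two cases concrete, here is a minimal sketch of decision-making under risk, using invented decisions, payoffs and probabilities (none of them come from Ben's note): list the states, attach probabilities, and take the action with the highest expected payoff.

```python
# Decision-making under risk: states, probabilities and payoffs are all "known".
# Every number below is invented for illustration; nothing comes from the note.
states = ["bull", "flat", "bear"]
probs = {"bull": 0.35, "flat": 0.40, "bear": 0.25}

# Payoff of each decision in each state (arbitrary illustrative units)
payoffs = {
    "stay fully invested": {"bull": 12.0, "flat": 3.0, "bear": -15.0},
    "hold mostly cash":    {"bull": 2.0,  "flat": 1.0, "bear": 0.5},
}

def expected_value(decision):
    """Probability-weighted payoff of a decision across all states."""
    return sum(probs[s] * payoffs[decision][s] for s in states)

for d in payoffs:
    print(f"{d}: expected payoff {expected_value(d):+.2f}")
print("Choice under risk:", max(payoffs, key=expected_value))
```

Decision-making under uncertainty is what remains when that probs table cannot be written down with any confidence, which is the environment the rest of the note describes.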

The sources of today’s market uncertainty are the same as they have always been throughout history – pervasive credit deleveraging and associated political strife. In the Middle Ages, these periods of deleveraging and strife were typically the result of political pursuit of wars of conquest … Edward III and his 14th century exploits in The Hundred Years War, say, or Edward IV and his 15th century exploits in The War of the Roses. Today, our period of deleveraging and strife is the result of political pursuit of la dolce vita … a less bloody set of exploits, to be sure, but no less expensive or impactful on markets. PIMCO co-CIO, Mohamed El-Erian, has a great quote to summarize this state of affairs – “Investors are in the back seat, politicians in the front seat, and it is very foggy through the windscreen.” – and the events of the past two weeks in Washington serve to confirm this observation … yet again. Of course, central banks are political institutions and central bankers are political animals, and the largest monetary policy experiment ever devised by humans should be understood in this political context. The simple truth is that no one knows how the QE story ends or what twists and turns await us. The crystal ball is broken and it’s likely to stay broken for years and years.

We are enduring a world of massive uncertainty, which is not at all the same thing as a world of massive risk. We tend to use the terms “risk” and “uncertainty” interchangeably, and that may be okay for colloquial conversation. But it’s not okay for smart decision-making, whether the field is foreign policy or investment, because the process of rational decision-making under conditions of risk is very different from the process of rational decision-making under conditions of uncertainty. The concept of optimization is meaningful and precise in a world of risk; much less so in a world of uncertainty. That’s because optimization is, by definition, an effort to maximize utility given a set of potential outcomes with known (or at least estimable) probability distributions. Optimization works whether you have a narrow range of probabilities or a wide range. But if you have no idea of the shape of underlying probabilities, it doesn’t work at all. As a result, applying portfolio management, risk management, or asset allocation techniques developed as exercises in optimization – and that includes virtually every piece of analytical software on the market today – may be sub-optimal or downright dangerous in an uncertain market. That danger also includes virtually every quantitatively trained human analyst!

All of these tools and techniques and people will still generate a risk-based “answer” even in the most uncertain of times because they are constructed and trained on the assumption that probability estimations and long-standing historical correlations have a lot of meaning regardless of circumstances. It’s not their fault, and their math isn’t wrong. They just haven’t been programmed to step back and evaluate whether their finely honed abilities are the right tool for the environment we’re in today.

My point is not to crawl under a rock and abandon any attempt to optimize a portfolio or an allocation … for most professional investors or allocators this is professional suicide. My point is that investment decisions designed to optimize – regardless of whether the target of that optimization is an exposure, a portfolio, or an allocation  – should incorporate a more agnostic and adaptive perspective in an uncertain market. We should be far less confident in our subjective assignment of probabilities to future states of the world, with far broader margins of error in those subjective evaluations than we would use in more “normal” times. Fortunately, there are decision-making strategies designed explicitly to incorporate this sort of perspective, to treat probabilities in an entirely different manner than that embedded in mainstream economic theory. One in particular – Minimax Regret – eliminates the need to assign any probability distribution whatsoever to potential outcomes.

Minimax Regret, developed in 1951 by Leonard “Jimmie” Savage, is a cornerstone of what we now refer to as behavioral economics. Savage played a critical role, albeit behind the scenes, in the work of three immortals of modern social science. He was John von Neumann’s right-hand man during World War II, a close colleague of Milton Friedman’s (the second half of the Friedman-Savage utility function), and the person who introduced Paul Samuelson to the concept of random walks and stochastic processes in finance (via Louis Bachelier) … not too shabby! Savage died in 1971 at the age of 53, so he’s not nearly as well-known as he should be, but his Foundations of Statistics remains a seminal work for anyone interested in decision-making in general and Bayesian inference in particular.

As the name suggests, the Minimax Regret strategy seeks to minimize your maximum regret in any decision process. This is not at all the same thing as minimizing your maximum loss. The concept of regret is a much more powerful and flexible concept than mere loss, because it injects an element of subjectivity into a decision calculus. Is regret harder to program into a computer algorithm than simple loss? Sure. But that’s exactly what makes it much more human, and that’s why I think you may find the methodology more useful.

Minimax Regret downplays (or eliminates) the role that probability distributions play in the decision-making process. While any sort of Expected Utility or optimization approach seeks to evaluate outcomes in the context of the odds associated with those outcomes coming to pass, Minimax Regret says forget the odds … how would you feel if you pay the cost of Decision A and Outcome X occurs? What about Decision A and Outcome Y? Outcome Z? What about Decision B and Outcome X, Y, or Z?  Make that subjective calculation for every potential combination of decision + outcome you can imagine, and identify the worst possible outcome “branch” associated with each decision “tree”. Whichever decision tree holds the best of these worst possible outcome branches is the rational decision choice from a Minimax Regret perspective.
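
Here is a minimal sketch of one standard way to formalise that rule, reusing the invented payoffs from the earlier snippet and deliberately throwing the probabilities away: in each state, a decision's regret is the gap between the best payoff attainable in that state and the payoff the decision actually delivers; each decision is then scored by its worst-case regret, and the decision with the smallest worst case wins.

```python
# Minimax Regret: no probabilities anywhere. Same invented toy payoffs as before.
states = ["bull", "flat", "bear"]
payoffs = {
    "stay fully invested": {"bull": 12.0, "flat": 3.0, "bear": -15.0},
    "hold mostly cash":    {"bull": 2.0,  "flat": 1.0, "bear": 0.5},
}

# Regret in a state = best payoff achievable in that state - payoff actually received
best_in_state = {s: max(payoffs[d][s] for d in payoffs) for s in states}
regret = {d: {s: best_in_state[s] - payoffs[d][s] for s in states} for d in payoffs}

# Score each decision by its worst-case regret, then pick the smallest worst case
max_regret = {d: max(regret[d].values()) for d in payoffs}
for d, r in max_regret.items():
    print(f"{d}: maximum regret {r:.1f}")
print("Minimax Regret choice:", min(max_regret, key=max_regret.get))
```

Note that the same toy payoffs that favoured staying fully invested under Expected Utility now favour holding cash: the regret of riding out a bear market fully invested (15.5 points) outweighs the regret of sitting in cash through a bull run (10 points), which is exactly the missed-opportunity versus suffered-loss trade-off described next.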

This is different from maximum loss calculation in many respects. For example, if the maximum loss outcome is rather apocalyptic, where it is extremely costly to prepare and you’re still pretty miserable even if you did prepare, most people will not experience this as a maximum regret outcome even if they make no preparations whatsoever to mitigate its impact. On the other hand, many people will experience substantial regret, perhaps even maximum regret, if the outcome is a large gain in which they do not share because they failed to prepare for it. Minimax Regret is a subjective decision-making strategy that captures the disutility of both missed opportunities as well as suffered losses, which makes it particularly appropriate for investment decisions that must inevitably incorporate the subjective feelings of greed and fear.

Minimax Regret requires a decision-maker to know nothing about the likelihood of this future state of the world or that future state. Because of its subjective foundations, however, it requires the practitioner to know a great deal about his or her utility for this future state of the world or that future state. The motto of Minimax Regret is not Know the World … it’s Know Thyself.

It’s also an appropriate decision-making strategy where you DO know the odds associated with the potential decision-outcomes, but where you have so few opportunities to make decisions that the stochastic processes of the underlying probability distributions don’t come into play. To use a poker analogy, my decision-making process should probably be different if I’m only going to be dealt one hand or if I’m playing all night. The former is effectively an environment of uncertainty and the latter an environment of risk, even though the risk attributes are clearly defined in both. This is an overwhelming issue in decision-making around, say, climate change policy, where we are only dealt a single hand (unless that Mars terraforming project picks up speed) and where both decisions and outcomes take decades to reveal themselves. It’s less of an issue in most investment contexts, but can certainly rear its ugly head in illiquid securities or strategies.
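
The poker point can be sketched numerically with an invented bet (win one unit with probability 0.6, lose one unit otherwise, so the odds are known and favourable): over many hands the realised average hugs the positive edge, but a single hand is close to a coin flip no matter how precisely the odds are specified.

```python
# Why known odds still behave like uncertainty when you only get one draw.
# The bet is invented for illustration: win +1 with probability 0.6, lose -1 otherwise.
import random

random.seed(0)
p_win, n_trials = 0.6, 2_000

def average_payoff(hands):
    """Average payoff per hand over `hands` plays of the same favourable bet."""
    return sum(1 if random.random() < p_win else -1 for _ in range(hands)) / hands

one_hand = [average_payoff(1) for _ in range(n_trials)]
all_night = [average_payoff(1_000) for _ in range(n_trials)]

print(f"One hand:    behind in {sum(r < 0 for r in one_hand) / n_trials:.0%} of trials")
print(f"1,000 hands: behind in {sum(r < 0 for r in all_night) / n_trials:.0%} of trials")
# The edge (+0.2 per hand) is identical in both cases, but only repeated play
# reliably delivers it; the one-shot version is effectively an environment of uncertainty.
```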

Is this a risk-averse strategy? In theory, no, but in practice, yes, because the most regret-filled outcomes tend to be those that are more likely to be low probability outcomes. If the “real” probability distributions of future outcomes were magically revealed to us … if we could just get our crystal ball working again … then an Expected Utility analysis of pretty much any Minimax Regret decision-making process would judge it as risk-averse. But that’s just the point … our crystal ball isn’t working, and it won’t so long as we have profound political fragmentation within and between the major economic powers of the world.

I’m not saying that Minimax Regret is the end-all and be-all. The truth is that the world is never entirely uncertain or without historical correlations that provide useful indications of what may be coming down the pike, and there are plenty of other ways to be more agnostic and adaptive in our investment decision-making without abandoning probability estimation entirely. But there’s no doubt that our world is more uncertain than it was five years ago, and there’s no doubt that there’s an embedded assumption of probabilistic specification in both the tools and the people that dominate modern risk management and asset allocation theory. Minimax Regret is a good example of an alternative decision-making approach that takes this uncertainty and lack of probabilistic specification seriously without sacrificing methodological rigor. As a stand-alone decision algorithm it’s a healthy corrective or decision-making supplement, and I believe it’s possible to incorporate its subjective Bayesian tenets directly into more mainstream techniques. Stay tuned …

If you want to sign up for the free direct distribution of Ben’s weekly notes and occasional emails, either contact him directly at ben.hunt@epsilontheory or click to Follow Epsilon Theory. All prior notes and emails are archived on the website.