Economic Decline

An excerpt from Heinberg’s forthcoming book

The first economists were ancient Greek and Indian philosophers, among them Aristotle (384-322 BC), who discussed the “art” of wealth acquisition and questioned whether property is best owned privately or by government acting on behalf of the people. Little of real substance was added to the discussion during the next two thousand years.

The 18th century brought a virtual explosion of economic thinking. “Classical” economic philosophers such as Adam Smith (1723–1790), Thomas Robert Malthus (1766–1834), and David Ricardo (1772–1823) introduced basic concepts such as supply and demand, division of labor, and the balance of international trade. As happens in so many disciplines, early practitioners were presented with plenty of uncharted territory and proceeded to formulate general maps of their subject that future experts would labor to refine in ever more trivial ways.

These pioneers set out to discover natural laws in the day-to-day workings of economies. They were striving, that is, to make of economics a science on a par with the emerging disciplines of physics and astronomy.

Like all thinkers, the classical economic theorists—to be properly understood—must be viewed in the context of their age. In the 17th and 18th centuries, Europe’s power structure was beginning to strain: as wealth flowed from colonies, merchants and traders were getting rich, but they increasingly felt hemmed in by the established privileges of the aristocracy and the church. While economic philosophers were mostly interested in questioning the aristocracy’s entrenched advantages, they admired the ability of physicists, biologists, and astronomers to demonstrate the fallacy of old church doctrines, and to establish new universal “laws” by means of inquiry and experiment.

Physical scientists set aside biblical and Aristotelian doctrines about how the world works and undertook active investigations of natural phenomena such as gravity, electricity, and magnetism—fundamental forces of nature. Economic philosophers, for their part, could point to price as the arbiter of supply and demand, acting everywhere to allocate resources far more effectively than any human manager or bureaucrat could ever possibly do—surely this was a principle as universal and impersonal as the force of gravitation! Isaac Newton had shown there was more to the motions of the stars and planets than could be found in the book of Genesis; similarly, Adam Smith was revealing more potential in the principles and practice of trade than had ever been realized through the ancient, formal relations between princes and peasants, or among members of the medieval craft guilds.

The classical theorists gradually adopted the math and some of the terminology of science. Unfortunately, however, they were unable to incorporate into economics the basic self-correcting methodology that is science’s defining characteristic. Economic theory required no falsifiable hypotheses and demanded no repeatable controlled experiments. Economists began to think of themselves as scientists, while in fact their discipline remained a branch of moral philosophy—as it largely does to this day.

The notions of these 18th and early 19th century economic philosophers constituted classical economic liberalism—the term liberal in this case indicating a belief that managers of the economy should let markets act freely and openly, without outside intervention, to set prices and thereby allocate goods, services, and wealth. Hence the term laissez-faire (from the French “let do” or “let it be”).

In theory, the Market was a beneficent quasi-deity tirelessly working for everyone’s good by distributing the bounty of nature and the products of human labor as efficiently and fairly as possible. But in fact everybody wasn’t benefiting equally or (in many people’s minds) fairly from colonialism and industrialization. The Market worked especially to the advantage of those for whom making money was a primary interest in life (bankers, traders, industrialists, and investors), and who happened to be clever and lucky. It also worked nicely for those who were born rich and who managed not to squander their birthright. Others, who were more interested in growing crops, teaching children, or taking care of the elderly, or who were forced by circumstance to give up farming or cottage industries in favor of factory work, seemed to be getting less and less—either proportionally (as a share of the entire economy), or often even in absolute terms. Was this fair? Well, that was a moral and philosophical question. In defense of the Market, many economists said that it was fair: merchants and factory owners were making more because they were increasing the general level of economic activity; as a result, everyone else would also benefit . . . eventually. See? The Market can do no wrong. To some this sounded a bit like the circularly reasoned response of a medieval priest to doubts about the infallibility of scripture. Nevertheless, despite its blind spots, classical economics proved useful in making sense of the messy details of money and markets.

Importantly, these early philosophers had some inkling of natural limits and anticipated an eventual end to economic growth. The essential ingredients of the economy were understood to consist of labor, land, and capital. There was on Earth only so much land (which in these theorists’ minds stood for all natural resources), so of course at some point the expansion of the economy would cease. Both Malthus and Smith explicitly held this view. A somewhat later economic philosopher, John Stuart Mill (1806-1873), put the matter as follows: “It must always have been seen, more or less distinctly, by political economists, that the increase in wealth is not boundless: that at the end of what they term the progressive state lies the stationary state. . . .”

But starting with Adam Smith, the idea that continuous “improvement” in the human condition was possible came to be generally accepted. At first, the meaning of “improvement” (or progress) was kept vague, perhaps purposefully. Gradually, however, “improvement” and “progress” came to mean “growth” in the current economic sense of the term—abstractly, an increase in Gross Domestic Product (GDP), but in practical terms, an increase in consumption.

A key to this transformation was the gradual deletion by economists of land from the theoretical primary ingredients of the economy (increasingly, only labor and capital really mattered, land having been demoted to a sub-category of capital). This was one of the refinements that turned classical economic theory into neoclassical economics; others included the theories of utility maximization and rational choice. While this shift began in the 19th century, it reached its fruition in the 20th through the work of economists who explored models of imperfect competition and theories of market forms and industrial organization, while emphasizing tools such as the marginal revenue curve. (Thomas Carlyle had already dubbed economics “the dismal science” back in 1849; the label stuck, partly because the discipline’s terminology was, perhaps intentionally, increasingly mind-numbing.)

Meanwhile, however, the most influential economist of the 19th century, a philosopher named Karl Marx, had thrown a metaphorical bomb through the window of the house that Adam Smith had built. In his most important book, Das Kapital, Marx proposed a name for the economic system that had evolved since the Middle Ages: capitalism. It was a system founded on capital. Many people assume that capital is simply another word for money, but that entirely misses the essential point: capital is wealth—money, land, buildings, or machinery—that has been set aside for production of more wealth. If you use your entire weekly paycheck for rent, groceries, and other necessities, you may have money but no capital. But even if you are deeply in debt, if you own stocks or bonds, or a computer that you use for a home-based business, you have capital.

Capitalism, as Marx defined it, is a system in which productive wealth is privately owned. Communism (which Marx proposed as an alternative) is one in which productive wealth is owned by the community, or by the nation on behalf of the people.

In any case, Marx said, capital tends to grow. If capital is privately held, it must grow: as capitalists compete with one another, those who increase their capital fastest are inclined to absorb the capital of others who lag behind, so the system as a whole has a built-in expansionist imperative. Marx also wrote that capitalism is inherently unsustainable, in that when the workers become sufficiently impoverished by the capitalists, they will rise up and overthrow their bosses and establish a communist state (or, eventually, a stateless workers’ paradise).

The ruthless capitalism of the 19th century resulted in booms and busts, and a great increase in inequality of wealth—and therefore an increase in social unrest. With the depression of 1893 and the crash of 1907, and finally the Great Depression of the 1930s, it appeared to many social commentators of the time that capitalism was indeed failing, and that Marx-inspired uprisings were inevitable; the Bolshevik revolt in 1917 served as a stark confirmation of those hopes or fears (depending on one’s point of view).

Beginning in the late 19th century, social liberalism emerged as a moderate response to both naked capitalism and Marxism. Pioneered by sociologist Lester F. Ward (1841-1913), psychologist William James (1842-1910), philosopher John Dewey (1859-1952), and physician-essayist Oliver Wendell Holmes (1809-1894), social liberalism argued that government has a legitimate economic role in addressing social issues such as unemployment, health care, and education. Social liberals decried the unbridled concentration of wealth within society and the conditions suffered by factory workers, while expressing sympathy for labor unions. Their general goal was to retain the dynamism of private capital while curbing its excesses.

Non-Marxian economists channeled social liberalism into economic reforms such as the progressive income tax and restraints on monopolies. The most influential of the early 20th century economists of this school was John Maynard Keynes (1883-1946), who advised that when the economy falls into a recession, government should spend lavishly in order to restart growth. Franklin Roosevelt’s New Deal programs of the 1930s were a laboratory for Keynes’s ideas, and the enormous government borrowing and spending that occurred during World War II were generally credited with ending the Depression and setting the nation on a path of economic expansion.

The next few decades saw a three-way contest between the Keynesian social liberals, the followers of Marx, and temporarily marginalized neoclassical or neoliberal economists who insisted that social reforms and Keynesian meddling by government with interest rates, spending, and borrowing merely impeded the ultimate efficiency of the free Market.

With the fall of the Soviet Union at the end of the 1980s, Marxism ceased to have much of a credible voice in economics. Its virtual disappearance from the discussion created space for the rapid rise of the neoliberals, who for some time had been drawing energy from widespread reactions against the repression and inefficiencies of state-run economies. Margaret Thatcher and Ronald Reagan both relied heavily on advice from neoliberal economists of the Chicago School (so called because of the widespread influence of the University of Chicago’s economics department, which graduated several generations of economists steeped in the ideas of monetarists like Milton Friedman, 1912-2006, as well as those of Austrian School economist Friedrich von Hayek, 1899-1992). One of the most influential libertarian, free-market economists of recent decades was Alan Greenspan (b. 1926), who, as U.S. Federal Reserve Chairman from 1987 to 2006, argued for privatization of state-owned enterprises and de-regulation of businesses—yet Greenspan nevertheless ran an activist Fed that expanded the nation’s money supply in ways and to degrees that neither Friedman nor Hayek would have approved of. As a side note, it’s worth mentioning that the Austrian School of Ludwig von Mises (1881-1973) and Hayek should be distinguished from the Chicago School: the former has followed a more purely individualist, libertarian line of thinking and usefully critiques central banks and fiat currencies, while the latter is more results-oriented and pragmatic, accepting central banks and fractional-reserve banking as givens. Both reject Keynesian government intervention in favor of unfettered markets.

There is a saying now in Russia: Marx was wrong in everything he said about communism, but he was right in everything he wrote about capitalism. Since the 1980s, the nearly worldwide re-embrace of classical economic philosophy has predictably led to increasing inequalities of wealth within the U.S. and other nations, and to more frequent and severe economic bubbles and crashes.

Which brings us to the global crisis that began in 2008. By this time all mainstream economists (Keynesians and neoliberals alike) had come to assume that perpetual growth is the rational and achievable goal of national economies. The discussion was only about how to maintain it—through government intervention or a laissez-faire approach that assumes the Market always knows best.

But in 2008 economic growth ceased in most nations, and there has as yet been limited success in restarting it. Indeed, by some measures the U.S. economy is slipping further into a recession that might more correctly be termed a depression. This dire reality constitutes a challenge to both mainstream economic camps. It is clearly a challenge to the neoliberals, whose deregulatory policies were largely responsible for creating the housing bubble whose implosion is generally credited with stoking the crisis. But it is a conundrum also for the Keynesians, whose stimulus packages have failed in their aim of increasing employment and general economic activity. What we have, then, is a crisis not just of the economy, but also of economic theory and philosophy.

The ideological clash between Keynesians and neoliberals (represented to a certain degree in the escalating all-out warfare between the U.S. Democratic and Republican political parties) will no doubt continue and even intensify. But the ensuing heat of battle will yield little light if both philosophies conceal the same fundamental errors. One such error is of course the belief that economies can and should perpetually grow.

But that error rests on another that is deeper and subtler. The subsuming of land within the category of capital by nearly all post-classical economists amounted to a declaration that Nature is merely a subset of the human economy—an endless pile of resources to be transformed into wealth. It also implied that natural resources could always be substituted with some other form of capital—money or technology. The reality, of course, is that the human economy exists within, and entirely depends upon, Nature, and many natural resources have no realistic substitutes. This fundamental logical and philosophical mistake, embedded at the very heart of modern mainstream economic philosophies, set society directly upon a course toward the current era of climate change and resource depletion, and its persistence makes conventional economic theories—of both Keynesian and neoliberal varieties—utterly incapable of dealing with the economic and environmental survival threats to civilization in the 21st century.

For help, we can look to the ecological and biophysical economists, whose ideas have been thoroughly marginalized by the high priests and gatekeepers of mainstream economics—and, to a certain extent, to the likewise marginalized Austrian School, whose standard bearers have been particularly good at forecasting and diagnosing the purely financial aspects of the current global crisis. But that help will not come in the form that many would wish: as advice that can return our economy to a “normal” state of “healthy” growth. One way or the other—through planning and method, or through collapse and failure—our economy is destined to shrink, not grow.

Business Cycles, Interest Rates, and Central Banks

We have just reviewed a minimalist history of human economies and the economic theories that have come into vogue to explain and manage them. But there is a lot of detail to be filled in if we are to understand what’s happening in the world economy today. And much of that detail has to do with the spectacular growth of debt—in obvious and subtle forms—that has occurred during the past few decades. That phenomenon in turn must be seen in light of the business cycles that characterize economic activity in modern industrial societies, and the central banks that have been set up to manage them.

We’ve already noted how nations learned to support the fossil fuel-stoked growth of their physical economies by increasing their money supply via fractional reserve banking. As money was gradually (and finally completely) de-linked from physical substance (i.e., precious metals), the creation of money became tied to the making of loans by commercial banks. This meant that the supply of money was entirely elastic—as much could be created as was needed, and the amount in circulation could contract as well as expand. And the growth of money was tied to the growth of debt.
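
For readers who like to see mechanisms spelled out, here is a minimal Python sketch of how lending can multiply deposits under fractional-reserve rules. The 10 percent reserve ratio and the assumption that every loan is redeposited in full are illustrative simplifications, not a description of any actual banking system.

```python
# Toy model of fractional-reserve money creation (illustrative only).
# Assumes a uniform 10% reserve requirement and that every loan is
# redeposited in full -- real banking systems are messier.

def money_created(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits after `rounds` of lend-and-redeposit cycles."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lendable fraction becomes a new deposit
    return total

print(money_created(1000, 0.10, 50))   # ~9948 -- approaching the 1/0.10 = 10x limit
```

The limiting multiple is simply one divided by the reserve ratio; the point to notice is that the new money exists only because new loans do.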

The system is dynamic and unstable, and this instability manifests in the business cycle. In the expansionary phase of the cycle, businesses see the future as rosy, and therefore take out loans to build more productive capacity and hire new workers. Because many businesses are doing this at the same time, the pool of available workers shrinks; so, to attract and keep the best workers, businesses have to raise wages. With wages rising, worker-consumers have more money in their pockets. Worker-consumers spend much of that money on products from the businesses that hire them, helping spread even more optimism about the future. Amid all this euphoria, worker-consumers go into debt based on the expectation that their wages will continue to grow, making it easy to repay loans. Businesses go into debt expanding their productive capacity. Real estate prices go up because of rising demand (former renters deciding they can now afford to buy), which means that houses are worth more as collateral if existing homeowners want to take out big loans to do some remodeling or to buy a new car. All of this borrowing and spending increases the money supply and the velocity of money.

At some point, however, the overall mood of the country changes. Businesses have invested in as much productive capacity as they are likely to need for a while. They feel they have taken on as much debt as they can handle, and don’t feel the need to hire more employees. Upward pressure on wages ceases, and that helps dampen the general sense of optimism about the economy. Workers likewise become shy about taking on more debt, as they are unsure whether they will be able to make payments. Instead, they concentrate on paying off existing debts. With fewer loans being written, less new money is being created; meanwhile, as earlier loans are paid off, money effectively disappears from the system. The nation’s money supply contracts in a self-reinforcing spiral.
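
The self-reinforcing character of these two phases can be caricatured in a few lines of code. This is only a toy model with invented parameters: it assumes the money supply grows or shrinks by the simple difference between the rate of new lending and the rate of repayment.

```python
# Minimal sketch of the point above: the money supply expands when new
# lending outpaces repayment and contracts when repayment outpaces lending.
# The rates below are made-up parameters, not empirical estimates.

def simulate(money: float, new_loan_rate: float, repayment_rate: float, years: int):
    for year in range(1, years + 1):
        money += money * (new_loan_rate - repayment_rate)
        print(f"year {year}: money supply = {money:,.0f}")

simulate(1_000_000, new_loan_rate=0.08, repayment_rate=0.05, years=3)  # expansion
simulate(1_000_000, new_loan_rate=0.02, repayment_rate=0.05, years=3)  # contraction
```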

But if people increase their savings during this downward segment of the cycle, they eventually will feel more secure and therefore more willing to begin spending again. Also, businesses will eventually have liquidated much of their surplus productive capacity and thereby reduced their debt burden. This sets the stage for the next expansion phase.

Business cycles can be gentle or rough, and their timing is somewhat random and largely unpredictable. They are also controversial: Austrian School economists believe they are self-correcting as long as the government and central banks (which we’ll discuss below) don’t interfere; Keynesians believe they are only partially self-correcting and must be managed.

In the worst case, the upside of the cycle can constitute a bubble, and the downside a recession or even a depression. A recession is a widespread decline in GDP, employment, and trade lasting from six months to a year; a depression is a sustained, multi-year contraction in economic activity. In the narrow sense of the term, a bubble consists of trade in high volumes at prices that are considerably at odds with intrinsic values, but the word can also be used more broadly to refer to any instance of rapid expansion of currency or credit that’s not sustainable over the long run. Bubbles always end with a crash—a rapid, sharp decline in asset values.

Interest rates can play an important role in the business cycle: when rates are low, both businesses and individuals are more likely to want to take on more debt; when rates are higher, new debt is more expensive to service. When money is flooding the system, the price of money (in terms of interest rates) naturally tends to fall, and when money is tight its price tends to rise—effects that magnify the existing trend.
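
To make “more expensive to service” concrete, the standard loan amortization formula shows how the monthly payment on the same debt changes with the interest rate. The loan size and terms below are hypothetical.

```python
# Standard loan amortization formula, illustrating why higher rates make
# identical debt loads costlier to service. Figures are hypothetical.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.04, 0.07, 0.10):
    print(f"{rate:.0%}: ${monthly_payment(300_000, rate, 30):,.2f}/month")
```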

During the 19th century, as banks acted with little supervision in creating money to fuel business growth cycles and bubbles, a series of financial crises ensued. In response, bankers in many countries organized to pressure governments to authorize central banks to manage the national money supply. In the U.S., the Federal Reserve (“the Fed”) was authorized by Congress in 1913 to act as the nation’s central bank.

The essential role of the central banks, such as the Fed, is to conduct the nation’s monetary policy, supervise and regulate banks, maintain the stability of the financial system, and provide financial services to both banks and the government. In doing this, central banks also often aim to moderate business cycles by influencing interest rates. The idea is simple enough: lowering interest rates makes borrowing easier, leading to an increasing money supply and the moderation of recessionary trends; high interest rates discourage borrowing and deflate dangerous bubbles.

The Federal Reserve charters member banks, which must obey its rules if they are to maintain the privilege of creating money through generating loans. It effectively controls interest rates for the banking system as a whole by influencing the rate that banks charge each other for overnight loans of federal funds (the federal funds rate), and the rate for overnight loans that member banks borrow directly from the Fed (the discount rate). In addition, the Fed can purchase government debt obligations, creating the money out of thin air (by fiat) with which to do so, thus directly expanding the nation’s money supply.

The Fed has often been a magnet for controversy: while it operates without fanfare and issues statements filled with terms opaque even to many trained economists, its secrecy and power have led many critics to call for reforms or for its replacement with other kinds of banking regulatory institutions. Critics point out that the Fed is not really democratic (the Fed chairman is appointed by the President, but other board members are chosen by private banks, which also own shares in the institution, making it an odd government-corporate hybrid).

Other central banks serve similar functions within their domestic economies, but with some differences: the Bank of England, for example, was nationalized in 1946 and is now wholly owned by the government; the Bank of Russia was set up in 1990 and by law must channel half of its profits into the national budget. Nevertheless, many see both the Fed and central banks elsewhere (the European Central Bank, the Bank of Canada, the People’s Bank of China, the Reserve Bank of India) as clubs of bankers that run national economies largely for their own benefit. Suspicions are most often voiced with regard to the Fed, which is arguably the most secretive and certainly the most powerful of the central banks. Consider the Fed’s theoretical ability to engineer either a euphoric financial bubble or a Wall Street crash immediately before an election, and its ability therefore to substantially impact that election. It is not hard to see why President James Garfield would write, “Whoever controls the volume of money in any country is absolute master of industry and commerce,” or why Thomas Jefferson would opine, “Banking establishments are more dangerous than standing armies.”

Still, the U.S. government itself—apart from the Fed—maintains an enormous role in managing the economy. National governments set and collect taxes, which encourage or discourage various kinds of economic activity (taxes on cigarettes encourage smokers to quit; tax breaks for oil companies discourage alternative energy producers). General tax cuts can spur more activity throughout the economy, while generally higher taxes may dampen borrowing and spending. Governments also regulate the financial system by setting rules for banks, insurance companies, and investment institutions.

Meanwhile, as Keynes advised, governments also borrow and spend to create infrastructure and jobs, becoming the borrowers and spenders of last resort during recessions. A non-trivial example: in the U.S. since World War II, military spending has supported a substantial segment of the national economy—the weapons industries and various private military contractors—while directly providing hundreds of thousands of jobs, at any given moment, for soldiers. Critics describe the system as a military-industrial “welfare state for corporations” and speculate that some recent wars have been fought in part merely to stimulate the economy.

The upsides and downsides of the business cycle are reflected in higher or lower levels of inflation. Inflation is often defined in terms of higher wages and prices, but (as the Austrian economists have persuasively argued) wage and price inflation is actually just the symptom of an increase in the money supply relative to the amounts of goods and services being traded, which in turn is typically the result of exuberant borrowing and spending. The downside of the business cycle, in the worst instance, can produce the opposite of inflation, or deflation. Deflation manifests as declining wages and prices, consequent upon a declining money supply relative to goods and services traded, due to a contraction of borrowing and spending.
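
The view sketched above is often formalized as the equation of exchange, MV = PQ (money supply times velocity equals price level times real output). A few lines of Python make the proportionality visible; the numbers are arbitrary, and holding V and Q constant is of course a strong simplifying assumption.

```python
# The equation of exchange, M*V = P*Q: with velocity V and real output Q
# held fixed, the price level P moves in proportion to the money supply M.
# All numbers here are illustrative.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    return money_supply * velocity / real_output

base = price_level(money_supply=1_000, velocity=2.0, real_output=2_000)
inflated = price_level(money_supply=1_200, velocity=2.0, real_output=2_000)
print(f"price level rises {(inflated / base - 1):.0%}")  # 20% more money -> 20% inflation
```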

Business cycles and regulated monetary-banking systems constitute the framework within which companies and individual investors, workers, and consumers act. But over the past few decades something remarkable has happened within that framework. In the U.S., the financial services industry has ballooned to unprecedented proportions, accounting for over 40 percent of all corporate profits, and has plunged society as a whole into a crisis of still unknown proportions. How and why did this happen? As we are about to see, these recent developments have deep roots.

Mad Money

Investing is a practice nearly as old as money itself, and from the earliest times motives for investment were two-fold: to share in profits from productive enterprise, and to speculate on anticipated growth in the value of assets. The former kind of investment is generally regarded as helpful to society, while the latter is seen, by some at least, as a form of gambling that eventually results in wasteful destruction of wealth. It is important to remember that the difference between the two is not always clear-cut, as investment always carries risk as well as an expectation of reward.

Here are obvious examples of the two kinds of investment motive. If you own shares of stock in General Motors, you own part of the company; if it does well, you are paid dividends—in “normal” times, a modest but steady return on your investment. If dividends are your main objective, you are likely to hold your GM stock for a long time, and if most others who own GM stock have bought it with similar goals, then—barring serious mismanagement or a general economic downturn—the value of the stock is likely to remain fairly stable. But suppose instead you bought shares of a small start-up company that is working to perfect a new oil-drilling technology. If the technology works, the value of the shares could skyrocket, long before the company actually shows a profit. You could then dump your shares and make a killing. If you’re this kind of investor, you are more likely to hold shares relatively briefly, and you are likely to gravitate toward stocks that see rapid swings in value. You are also likely to be constantly on the lookout for information—even rumors—that could tip you off to impending price swings in particular stocks.

When lots of people engage in speculative investment, the likely result is a series of occasional manias or bubbles. Here’s a classic example, discussed at length by Scottish journalist Charles Mackay in his book Extraordinary Popular Delusions and the Madness of Crowds: in early seventeenth-century Holland, tulips became a coveted status symbol, and the trade in tulip bulbs (complete with futures contracts and short selling, about which we will learn more below) assumed bubble proportions; at the peak of the mania in early February 1637, some single tulip bulbs sold for more than ten times the annual income of a skilled craftsman. Just days after the peak, tulip bulb contract prices collapsed and speculative tulip trading virtually ceased. More recently, in the 1920s, radio stocks were the bubble du jour, while the dot-com or Internet bubble ran its course a little over a decade ago (1995-2000).

Given the evident fact that bubbles inevitably burst, resulting in a destruction of wealth sometimes on an enormous and catastrophic scale, one might expect that governments would seek to restrain the riskier versions of speculative investing through regulation. This has indeed tended to be the case in periods immediately following spectacular crashes. For example, after the 1929 stock market crash, regular commercial banks (which accept deposits and make loans) were prohibited, by the Glass-Steagall Act of 1933, from acting as investment banks (which deal in stocks, bonds, and other financial instruments). But as the memory of a crash fades, such restraints tend to fall away.

Modern investment practices are accompanied by a complex and sometimes confusing jargon. It’s worthwhile sorting out the elements of that jargon that are essential to understanding the financial debacle of the past couple of years.

Let’s start with leverage—a general term for any way to multiply investment gains or losses. As is so often the case, a bit of history helps in understanding the concept. During the 1920s, partly because the Fed was keeping interest rates low, investors found they could borrow money to buy stocks, then make enough of a profit in the buoyant stock market to repay their debt (with interest) and still come out ahead. This was called buying on margin, and it is a classic form of leverage. Unfortunately, when worries about higher interest rates and falling real estate prices helped cause the stock market crash of October 1929, margin investors found themselves owing enormous sums they couldn’t repay. The lesson: leverage can multiply profits, but it likewise multiplies losses.
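
The arithmetic of buying on margin is easy to verify. In the hedged sketch below, an investor puts up $1,000 and borrows $9,000; the prices, interest rate, and ratio are invented for illustration.

```python
# Illustration of buying on margin: the numbers are invented, but the
# arithmetic shows how leverage multiplies both gains and losses.

def margin_return(own_cash, borrowed, price_change, interest):
    position = own_cash + borrowed
    proceeds = position * (1 + price_change)
    profit = proceeds - borrowed * (1 + interest) - own_cash
    return profit / own_cash   # return on the investor's own money

# $1,000 of the investor's cash plus $9,000 borrowed at 5% interest:
print(f"{margin_return(1000, 9000, +0.10, 0.05):+.0%}")  # stock up 10%  -> +55%
print(f"{margin_return(1000, 9000, -0.10, 0.05):+.0%}")  # stock down 10% -> -145%
```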

Two important ways to attain leverage are by borrowing money and trading derivatives. An example of the former: A public corporation (i.e., one that sells stock) may leverage its equity by borrowing money. The more it borrows, the fewer dividend-paying stock shares it needs to sell to raise capital, so any profits or losses are divided among a smaller base and are proportionately larger as a result. The company’s stock looks like a better buy and the value of shares may increase. But if a corporation borrows too much money, a business downturn might drive it into bankruptcy, while a less-levered corporation might prove more resilient.
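
A worked example may help here. The sketch below compares two hypothetical firms that each deploy $1 million of capital, one financed entirely by equity and one mostly by debt; all figures, including the 6 percent interest rate, are assumptions for illustration.

```python
# Sketch of the point above: the same operating profit spread over a
# smaller equity base yields a larger (or more negative) return.
# All figures are hypothetical.

def return_on_equity(operating_profit, debt, interest_rate, equity):
    net = operating_profit - debt * interest_rate   # profit after interest
    return net / equity

# Two firms deploying $1M of capital, earning $120k (boom) or $20k (bust):
for profit in (120_000, 20_000):
    unlevered = return_on_equity(profit, debt=0, interest_rate=0.06, equity=1_000_000)
    levered   = return_on_equity(profit, debt=800_000, interest_rate=0.06, equity=200_000)
    print(f"profit ${profit:,}: unlevered {unlevered:+.1%}, levered {levered:+.1%}")
```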

In the financial world, leverage is mostly achieved with securities. A security is any fungible, negotiable financial instrument representing value. Securities are generally categorized as debt securities (such as bonds and debentures), equity securities (such as common stocks), and derivative contracts.

Debt and equity securities are relatively easy to explain and understand; derivatives are often another story. A derivative is an agreement between two parties whose value is determined by the price movement of something else (called the underlying). The underlying can consist of stock shares, a currency, or an interest rate—to cite only three examples. Since a derivative can be placed on any sort of security, the scope of possible derivatives is nearly endless. Derivatives can be used either to deliberately acquire risk or to hedge against risk. The most common kinds of derivatives are swaps (in which counterparties exchange certain benefits of one party’s financial instrument for those of the other party’s financial instrument), futures (a contract to buy or sell an asset at a future date at a price agreed today), and options (financial instruments that give owners the right, but not the obligation, to engage in a specific transaction on an asset). Derivatives have a history: rice futures have been traded on the Dojima Rice Exchange in Osaka, Japan, since 1710. However, they have more recently attracted considerable controversy, as the total nominal value of outstanding derivatives contracts has grown to colossal proportions—in the hundreds of trillions of dollars globally, according to some estimates. Prior to the crash of 2008, investor Warren Buffett famously called derivatives “financial weapons of mass destruction,” and asserted that they constitute an enormous bubble. Indeed, during the 2008 crash, a subsidiary of the giant insurance company AIG lost more than $18 billion on a type of swap known as a credit default swap, or CDS (essentially an insurance arrangement in which the buyer pays a premium at periodic intervals in exchange for a contingent payment in the event that a third party defaults), and Société Générale lost $7.2 billion in January of the same year on futures contracts.
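
To see what “value determined by the price movement of something else” means in practice, consider the simplest case, a call option. The payoff function below is a textbook sketch; the strike price and premium are hypothetical.

```python
# Toy payoff function for a call option (one of the derivative types named
# above): the contract's value at expiry is *derived* from the price of the
# underlying. Strike and premium are hypothetical.

def call_payoff(underlying_price: float, strike: float, premium: float) -> float:
    """Profit to the option holder at expiry, net of the premium paid."""
    return max(underlying_price - strike, 0.0) - premium

for price in (80, 100, 130):
    print(f"underlying at {price}: payoff {call_payoff(price, strike=100, premium=5):+.0f}")
```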

Often, mundane financial jargon conceals truly remarkable practices. Take the common terms long and short, for example. If a trader is “long” on oil futures, that means he or she is holding contracts to buy a specified amount of oil at a specified future date at a price agreed today, in expectation of a rise in price. One would therefore naturally assume that taking a “short” position on oil futures or anything else would simply involve expectation of a falling price. True enough. But just how does one successfully go about investing to profit on assets whose value is declining? The answer: short selling (also known as shorting or going short), which involves borrowing the assets (usually securities borrowed from a broker, for a fee) and immediately selling them, waiting for the price of those assets to fall, buying them back at the depressed price, then returning them to the lender and pocketing the price difference. Of course, if the price of the assets rises, the short seller loses money. (“Shorting” can also refer to entering into any derivative contract in which the investor profits from a fall in the value of an asset.) If this sounds dodgy, then consider naked short selling, in which the investor sells a financial instrument without bothering first to buy or borrow it, or even to ensure that it can be borrowed. Naked short selling is illegal in the U.S., but many knowledgeable commentators assert that the practice is widespread nonetheless.
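
The mechanics of a short sale reduce to simple arithmetic, sketched below with invented prices and an assumed borrow fee.

```python
# The short-sale mechanics described above, as arithmetic. Prices and the
# borrow fee are invented for illustration.

def short_sale_profit(sell_price: float, buy_back_price: float,
                      shares: int, borrow_fee: float) -> float:
    """Sell borrowed shares now, repurchase later, return them to the lender."""
    return (sell_price - buy_back_price) * shares - borrow_fee

print(short_sale_profit(50, 35, 100, borrow_fee=75))   # price falls: +1425
print(short_sale_profit(50, 65, 100, borrow_fee=75))   # price rises: -1575
```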

In the boom years leading up to the crash, it was often the wealthiest individuals who engaged in the riskiest financial behavior. And the wealthy seemed to flock, like finches around a bird feeder, toward hedge funds: investment funds open to a limited range of investors that undertake a wider range of activities than traditional “long-only” investment funds that merely invest in stocks and bonds—activities including short selling and entering into derivative contracts. To neutralize the effect of overall market movement, hedge fund managers balance portfolios by buying assets whose price is expected to outpace the market, and by selling short assets expected to do worse than the market as a whole. Thus in theory price movements of particular securities that simply reflect activity in the overall market are cancelled out or “hedged.” Hedge funds promise (and often produce) high returns through extreme leverage. But because of the enormous sums at stake, critics say this poses a systemic risk to the entire economy. This risk was highlighted by the near-collapse of two Bear Stearns hedge funds, which had invested heavily in mortgage-backed securities, in June 2007.
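
The hedging logic can be illustrated with a stylized long/short pair. In the sketch below, the whole market falls 20 percent, but the fund profits because its long pick beats the market and its short pick lags it; every number is invented.

```python
# Rough sketch of the "hedged" long/short idea: a long position and a short
# position of equal size cancel out a market-wide move, leaving only the
# relative performance of the two stocks. All returns are hypothetical.

def long_short_return(long_return: float, short_return: float) -> float:
    # Profit on the long leg plus profit on the short leg (which gains
    # when its stock falls), per dollar of exposure on each side.
    return long_return + (-short_return)

market_move = -0.20                       # the whole market drops 20%...
long_leg  = market_move + 0.05            # ...but the long pick beats it by 5%
short_leg = market_move - 0.03            # ...and the short pick lags it by 3%
print(f"{long_short_return(long_leg, short_leg):+.0%}")  # +8% despite the crash
```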

I Owe You

If this essay were to serve as an economics primer, then plenty more financial terms should be defined and discussed; however, the aim instead is merely to provide the essential background (by way of history and terminology) necessary to understand the recent financial events and trends that have led industrial society to the point where we are today—the end of growth.

As we have seen, bubbles are a phenomenon generally tied to speculative investing. But in a larger sense our entire economy has assumed the characteristics of a bubble—even a Ponzi scheme. That is because it has come to depend upon staggering and continually expanding amounts of debt: government and private debt; debt in the trillions, and tens of trillions, and hundreds of trillions of dollars; debt that, in aggregate, has grown by 500 percent since 1980; debt that has grown faster than economic output (measured in GDP) in all but one of the past 50 years; debt that can never be repaid; debt that represents claims on quantities of labor and resources that simply do not exist.
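
The growth figures in the paragraph above can be sanity-checked with back-of-envelope arithmetic. The sketch below treats “grown by 500 percent since 1980” as a six-fold increase over roughly thirty years; the GDP growth rate used for comparison is an assumed placeholder, not a statistic from the text.

```python
# Back-of-envelope check on the claim's arithmetic: a 500% increase over
# roughly 30 years (1980-2010) implies this compound annual growth rate.
# The GDP comparison rate is a hypothetical placeholder, not a statistic.

years = 30
debt_multiple = 6.0     # "grown by 500 percent" => 6x the 1980 level
debt_cagr = debt_multiple ** (1 / years) - 1
print(f"implied debt growth: {debt_cagr:.1%} per year")   # ~6.2%

gdp_cagr = 0.028        # assumed GDP growth rate for comparison
print(f"debt/GDP multiple after {years} years: "
      f"{(1 + debt_cagr) ** years / (1 + gdp_cagr) ** years:.1f}x")
```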

When we inquire how and why this happened, we discover a web of interrelated trends.

Looking at the problem close up, the globalization of the economy looms as a prominent factor. In the 1970s and ’80s, with stiffer environmental and labor standards to contend with domestically, corporations began eyeing the regulatory vacuum, cheap labor, and relatively untouched natural resource base of less-industrialized nations as a potential goldmine. International investment banks started loaning poor nations enormous sums to pay for ill-advised infrastructure projects (and, incidentally, to pay kickbacks to corrupt local politicians), later requiring these countries to liquidate their natural resources at fire-sale prices so as to come up with the cash required to make loan payments. Then, prodded by corporate interests, industrialized nations pressed for the liberalization of trade rules via the World Trade Organization (the new rules almost always subtly favored the wealthier trading partner). All of this led predictably to a reduction of manufacturing and resource extraction in core industrial nations, especially the U.S. (many important resources were becoming depleted in the wealthy industrial nations anyway), and a steep increase in resource extraction and manufacturing in several “developing” nations, principally China. Reductions in domestic manufacturing and resource extraction in turn motivated investors within industrial nations to seek profits through purely financial means. As a result of these trends, there are now as many Americans employed in manufacturing as there were in 1940, when the nation’s population was roughly half what it is today—while the proportion of total U.S. economic activity deriving from financial services has tripled during the same period. And speculative investing has become an accepted practice that is taught in top universities and institutionalized in the world’s largest corporations.

But as we back up to take in a wider view, we notice larger and longer-term trends that have played even more important roles. One key factor was the severance of money from its moorings in precious metals, a process that started over a century ago: once money came to be based on debt (so that it was created primarily when banks made loans), then growth in total outstanding debt became a precondition for growth of the money supply and therefore for economic expansion. With virtually everyone—workers, investors, politicians—clamoring for more economic growth, it was inevitable that innovative ways to stimulate the process of debt creation would be found. Hence the fairly recent appearance of a bewildering array of devices for borrowing, betting, and insuring—from credit cards to credit default swaps—all essentially tools for the “ephemeralization” of money and the expansion of debt.

A Marxist would say that all of this flows from the inherent imperatives of capitalism. A historian might contend it reflects the inevitable trajectory of all empires (though past empires didn’t have fossil fuels and therefore lacked the means to become global in extent; this time around the empire-building process has scaled unprecedented heights). And a cultural anthropologist might point out that the causes of our debt spiral are endemic to civilization itself: as the gift economy has shrunk and trade has grown, the infinitely various strands of mutual obligation that bind together every human community have become translated into financial debt; and, as hunter-gatherers intuitively understood, debts within the community can never fully be repaid—nor should they be. And certainly not with interest.

In the end perhaps the modern world’s dilemma is as simple as “What goes up must come down.” But as we experience the events comprising ascent and decline close up and first-hand, matters don’t appear simple at all. We suffer from media bombardment; we are soaked in unfiltered and unorganized data; we are blindingly, numbingly overwhelmed by the rapidity of change. But if we are to respond and adapt successfully to all this change, we must have a way of understanding why it is happening, where it might be headed, and what we can do to achieve an optimal outcome under the circumstances. If we are to get it right, we must see both the forest (the big, long-term trends) and the trees (the immediate challenges ahead).

Which brings us to a key question: If the financial economy cannot continue to grow by piling up more debt, then what will happen next?
