Tag Archives: economics

Social Capital and Inequality

Inequality is different this time, because the rich are usurping a different kind of capital.


For a long time, most thinkers in the West accepted poverty as natural. As Jesus said: “The poor you will always have with you.” But by 1754, Jean-Jacques Rousseau was writing an entire discourse on the origin of inequality and blaming it largely on the practice of recognizing land as private property.

The first man who, having enclosed a piece of ground, bethought himself of saying This is mine, and found people simple enough to believe him, was the real founder of civil society. From how many crimes, wars and murders, from how many horrors and misfortunes might not any one have saved mankind, by pulling up the stakes, or filling up the ditch, and crying to his fellows, “Beware of listening to this impostor; you are undone if you once forget that the fruits of the earth belong to us all, and the earth itself to nobody.”

Thomas Paine, who in many ways was the most radical of the American revolutionaries, observed the contrasting example of the Native American tribes — where he found no parallel to European wealth or poverty — and came away with a more nuanced model of the connection between inequality and landed property, which he published in 1797 as Agrarian Justice. He started in much the same place as Rousseau:

The earth in its natural, uncultivated state, was, and ever would have continued to be THE COMMON PROPERTY OF THE HUMAN RACE. In that state every man would have been born to property. He would have been a joint life-proprietor with the rest in the property of the soil, and in all its natural productions, vegetable and animal.

But Paine also recognized that the development of modern agriculture — which he saw as necessary to feed people in the numbers and diversity of activities essential to advanced civilization — required investing a lot of up-front effort: clearing land of trees and rocks, draining marshlands, and then annually plowing and planting. Who would do all that, if in the end the harvest would belong equally to everybody? He saw private ownership of land as a solution to this problem, but believed it had been implemented badly. What a homesteader deserved to own was his or her improvement on the productivity of the land: If the land a family cleared became more valuable than the forest or marshland they started with, then the homesteaders should own that difference in value, but not the land itself. [1]

Society as a whole, he concluded, deserved a rent on the land in its original state, and he proposed using that income — or an inheritance tax on land, which would not be as clean a solution theoretically, but would be easier to assess and collect — to capitalize the poor.

When a young couple begin the world, the difference is exceedingly great whether they begin with nothing or with fifteen pounds apiece. With this aid they could buy a cow, and implements to cultivate a few acres of land; and instead of becoming burdens upon society … would be put in the way of becoming useful and profitable citizens.

Paine argued this not as charity or even social engineering, but as justice: The practice of privatizing land had usurped the collective inheritance of those born without land, so something had to be done to restore the usurped value.

In one of my favorite talks (I published versions of it here and here), I extended Paine’s idea in multiple directions, including to intellectual property. Just as Paine would buy a young couple a cow and some tools, I proposed helping people launch themselves into a 21st century information economy. Like Paine, I see this as justice, because otherwise the whole benefit of technological advancement accrues only to companies like Apple or Google, reaching the rest of us only through such companies. A fortune like Bill Gates’ arises partly through innovation, effort, and good business judgment, but also by usurping a big chunk of the common inheritance.

Avent. And that brings us to Ryan Avent’s new book, The Wealth of Humans: Work, Power, and Status in the Twenty-First Century. There are at least two ways to read this book. It fits into the robot-apocalypse, where-are-the-jobs-of-the-future theme that I have recently discussed here (and less recently here and here). Avent’s title has a double meaning: On the one hand it’s about the wealth humans will produce through the continued advance of technology. But that advance will also result in society having a “wealth” of humans — more than are needed to do the jobs available.

Most books in this genre are by technologists or futurists, and consequently assemble evidence to support a single vision or central prediction. Avent is an economic journalist. (He writes for The Economist.) So he has produced a more balanced analysis, cataloging the forces, trends, and possibilities. It’s well worth reading from that point of view.

But I found Avent’s book more interesting in what it says about inequality and social justice in the current era. What’s different about the 21st century is that technology and globalism have converged to make prosperity depend on a type of capital we’re not used to thinking about: social capital. [2] And from a moral point of view, it’s not at all obvious who should own social capital. Maybe we all should.

What is social capital? Before the Industrial Revolution, capital consisted mainly of land (and slaves, where that was allowed). By the late 19th century, though, the big fortunes revolved around industrial capital: the expensive machines that sat in big factories. The difference between a rich country and a poor one was mainly that people in rich countries could afford to invest in such machinery, which then made them richer. On a national level, industrial capital showed up as government-subsidized railroads and canals and port facilities. (The Erie Canal alone created one of the great 19th-century boom towns: Buffalo.) A country that could afford to make such improvements became more productive and more prosperous.

In the 20th century, the countries that rose to wealth — first Japan and then later Singapore, Taiwan, and South Korea — did so partly through investment in machinery, but also through education. An educated populace could provide the advanced services that made an industrial economy thrive. And so we started talking about human capital, the investments that people and their governments make in acquiring skills, and intellectual capital, the patents, copyrights, and trade secrets that powered a 20th-century giant like IBM.

That may seem like a pretty complete list of the kinds of capital. But now look at today’s most valuable companies: Apple and Google, either of which might become the world’s first trillion-dollar corporation in a year or two. Each owns a small amount of land, no slaves, and virtually no industrial capital; Apple contracts out nearly all of its manufacturing, and a lot of Google’s products are entirely intangible. Both employ brilliant, well-educated people, but not hundreds of billions of dollars worth of them. They have valuable patents, copyrights, trademarks, etc., but again, intellectual property alone doesn’t account for either company’s market value. There’s something in how all those factors fit together that makes Apple and Google what they are.

That’s social capital. Avent describes it like this:

Social capital is individual knowledge that only has value in particular social contexts. An appreciation for property rights, for example, is valueless unless it is held within a community of like-minded people. Likewise, an understanding of the culture of a productive firm is only useful within that firm, where that culture governs behavior. That dependence on a critical mass of minds to function is what distinguishes social capital from human capital.

Social capital has always existed and been a factor of production, but something about the current era, some combination of globalism and technology, has brought it to the fore. Today, a firm strong in social capital — a shared way of approaching problems and taking action that is uniquely suited to a particular market at this moment in history — can acquire all the other factors of production cheaply, making social capital the primary source of its wealth. [3]

Who should own social capital? Right now it’s clear who does own a company’s social capital: the stockholders. But should they? Avent talks about Bill Gates’ $70 billion net worth — created mostly not by his own efforts but by the social organism called Microsoft — and then generalizes:

People, essentially, do not create their own fortunes. They inherit them, come to them through the occupation of some state-protected niche, or, if they are very brilliant and very lucky, through infusing a particular group of men and women with the germ of an idea, which, in time and with just the right environment, allows that group to evolve into an organism suited to the creation of economic value, a very large chunk of which the founder can then capture for himself.

Stockholders — the people who put up the money to acquire the other factors of production — currently get the vast majority of the benefit from a company’s social capital, but it’s not clear why they should. We usually imagine other forms of capital as belonging to whoever would have them if the enterprise broke up: The stockholders would sell off the land and industrial and intellectual capital, while the employees would walk away with the human capital of their experience and education. But the company’s social capital would just vanish, the way that a living organism vanishes if it gets rendered into its constituent chemicals. So, rightfully, who owns it?

Another chunk of social capital resides in nations, which are also social organisms. The very real economic value of the rule of law, voluntary compliance with beneficial but unenforceable norms, shared notions of fairness, trust that others will fulfill their commitments, and general public-spiritedness — in other words, all the cultural stuff that makes a worker or firm or idea more valuable in America or Germany than in Burundi or Yemen — who does it belong to? Who should share in its benefits?

Bargaining power. Avent does not try to sell the conservative fairy tale that the market will allocate benefits appropriately. Under the market, what each party gets out of any collective endeavor depends on its relative bargaining power, not on what it may deserve in some more abstract sense.

Avent proposes this thought experiment: What if automation got to the point where only one human worker was required to produce everything? Naively, you might expect this individual to be tremendously important and very well paid, but that’s probably not what would happen. Everyone in the world who wanted a job would want his job, and even if it required considerable skill, millions of people around the world would probably have that skill. So his bargaining power would be essentially zero, and even though in some sense he produced everything, he might end up working for nothing.

Globalization and automation, plus political developments like the decline of unions, have lowered the bargaining power of unskilled workers in rich countries, so they get less money, even though in most cases their productivity has increased. As communication gets cheaper and systems get more intelligent, more and more jobs can be automated or outsourced to countries with lower wages, so the bargaining power of the people in those jobs shrinks. That explains this graph, which I keep coming back to because I think it’s the single most important thing to understand about the American economy today: Hourly wages tracked productivity fairly closely until the 1970s, but have fallen farther and farther behind ever since.

Companies could have afforded to pay more — by now, the productivity is there to support a wage nearly 2 1/2 times higher — but workers haven’t had the bargaining power to demand that money, so they haven’t gotten it. [4]
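To see how a gap that size arises, here’s a toy compounding sketch (the annual growth rates are my own illustrative assumptions, not the data behind the chart): a modest yearly difference between productivity growth and wage growth, sustained for four decades, produces roughly that 2 1/2-times figure.

```python
# Toy illustration of how a productivity/wage gap compounds over decades.
# Both growth rates are assumed for illustration, not taken from the chart.
productivity_growth = 0.025   # assume 2.5% annual productivity growth
wage_growth = 0.0035          # assume ~0.35% annual real wage growth
years = 43                    # roughly the mid-1970s to the mid-2010s

ratio = ((1 + productivity_growth) / (1 + wage_growth)) ** years
print(f"After {years} years, productivity outruns wages by {ratio:.1f}x")
# -> about 2.5x
```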

A similar thing happened early in the Industrial Revolution: Virtually none of the benefits that came from industrial capital were shared with the workers, until they gained bargaining power through political action and unionization. The result is the safety net we have today.

Just as workers’ ability to reap significant benefits from the deployment of industrial capital was in doubt for decades, so we should worry that social capital will not, without significant alterations to the current economic system, generate better economic circumstances for most people.

Who’s in? Who’s out? When you do start sharing social capital, whether within a firm or within a country, you run into the question of who belongs. This is a big part of the contracting-out revolution. The janitors and cafeteria workers at Henry Ford’s plants worked for Henry Ford. But a modern technology corporation is likely to contract for those services. By shrinking down to a core competency, it can reward its workers while keeping a tight rein on who “its workers” are. No need to give stock options or healthcare benefits to receptionists and parking lot attendants if they don’t seem essential to maintaining the company’s social capital.

Things shake out similarly at the national level: The more ordinary Americans succeed in getting a share of the social capital of the United States, the greater the temptation to restrict who can get into the US and qualify for benefits — or to throw out people that many of the rest of us think shouldn’t be here.

Avent would like to see us take the broadest possible view of who’s in:

The question we ask ourselves, knowingly or not, is: With whom do we wish to share society? The easy answer, the habitual answer, is: with those who are like us.

But this answer is bound to lead to trouble, because it is arbitrary, and because it is lazy, and because it is imprecise, in ways that invite social division. There is always some trait or characteristic available which can be used to define someone seemingly like us as not like us.

There is a better answer available: that to be “like us” is to be human. That to be human is to earn the right to share in the wealth generated by the productive social institutions that have evolved and the knowledge that has been generated, to which someone born in a slum in Dhaka is every bit the rightful heir as someone born to great wealth in Palo Alto or Belgravia.

Can it happen? Much of Avent’s book is depressing, but by the time the Epilogue rolls around he seems almost irrationally optimistic. For 200 pages, he has painted as realistic a picture as he could of the challenges we face, whether economic, technological, social, or political. But as to whether things will ultimately work out, he appears to come around to the idea that they have to, so they will. So he ends with this:

We are entering into a great historical unknown. In all probability, humanity will emerge on the other side, some decades hence, in a world in which people are vastly richer and happier than they are now. With some probability, small but positive, we will not make it at all, or we will arrive on the other side poorer and more miserable. That assessment is not optimism or pessimism. It is just the way things are.

Face to face with the unknown, it is hard to know what to feel or what to do. It is tempting to be afraid. But, faced with this great, powerful, transformative force, we shouldn’t be frightened. We should be generous. We should be as generous as we can be.


[1] The arbitrariness of this becomes clear when you consider mineral rights. If my grandfather homesteaded a plot of land, which in my generation turned out to be in the middle of an oil field, what would that wealth have to do with me, that I would deserve to own it?

[2] If the term social capital rings a bell for you, you’re probably remembering Robert Putnam’s Bowling Alone, which appeared as a magazine article in 1995 and was expanded to a book in 2000. But Putnam used the term more metaphorically, expressing a sociological idea in economic terms, rather than as a literal factor of production.

[3] Henry Ford’s company probably also had a lot of social capital, but it was hard to notice behind all those buildings and machines.

[4] Individual employers will tell you that they’d go bankrupt if they had to raise wages 2 1/2 times, and in some sense that’s true: They compete with companies that also pay low wages, and would lose that competition if they paid high wages. But that is simply evidence that workers’ bargaining power is low across entire industries, rather than just in this company or that one.

Jobs, Income, and the Future

What “the jobs problem” is depends on how far into the future you’re looking. Near-term, macroeconomic policy should suffice to create enough jobs. But long-term, employing everyone may be unrealistic, and a basic income program might be necessary. That will be such a change in our social psychology that we need to start preparing for it now.


Historical context. The first thing to recognize about unemployment is that it’s not a natural problem. Tribal hunter-gatherer cultures have no notion of it. No matter how tough survival might be during droughts or other hard times, nothing stops hunter-gatherers from continuing to hunt and gather. The tribe has a territory of field or forest or lake, and anyone can go to this commonly held territory to look for food.

Unemployment begins when the common territory becomes private property. Then hunting turns into poaching, gathering becomes stealing, and people who are perfectly willing to hunt or fish or gather edible plants may be forbidden to do so. At that point, those who don’t own enough land to support themselves need jobs; in other words, they need arrangements that trade their labor to an owner in exchange for access to the owned resources. The quality of such a job might vary from outright slavery to Clayton Kershaw’s nine-figure contract to pitch for the Dodgers, but the structure is the same: Somebody else owns the productive enterprise, and non-owners need to acquire the owner’s permission to participate in it.

So even if unemployment is not an inevitable part of the human condition, it is as old as private property. Beggars — people who have neither land nor jobs — appear in the Bible and other ancient texts.

But the nature of unemployment changed with the Industrial Revolution. With the development and continuous improvement of machines powered by rivers or steam or electricity, jobs in various human trades began to vanish; you might learn a promising trade (like spinning or weaving) in your youth, only to see that trade become obsolete in your lifetime.

So if the problem of technological unemployment is not exactly ancient, it’s still been around for centuries. As far back as 1819, the economist Jean Charles Léonard de Sismondi was wondering how far this process might go. With tongue in cheek he postulated one “ideal” future:

In truth then, there is nothing more to wish for than that the king, remaining alone on the island, by constantly turning a crank, might produce, through automata, all the output of England.

This possibility raises an obvious question: What, then, could the English people offer the king (or whichever oligarchy ended up owning the automata) in exchange for their livelihoods?

Maslow. What has kept that dystopian scenario from becoming reality is, basically, Maslow’s hierarchy of needs. As basic food, clothing, and shelter become easier and easier to provide, people develop other desires that are less easy to satisfy. Wikipedia estimates that currently only 2% of American workers are employed in agriculture, compared to 50% in 1870 and probably over 90% in colonial times. But those displaced 48% or 88% are not idle. They install air conditioners, design computer games, perform plastic surgery, and provide many other products and services our ancestors never knew they could want.

So although technology has continued to put people out of work — the railroads pushed out the stagecoach and steamboat operators, cars drastically lessened opportunities for stableboys and horse-breeders, and machines of all sorts displaced one set of skilled craftsmen after another — new professions have constantly emerged to take up the slack. The trade-off has never been one-for-one, and the new jobs have usually gone to different people than the ones whose trades became obsolete.  But in the economy as a whole, the unemployment problem has mostly remained manageable.

Three myths. We commonly tell three falsehoods about this march of technology: First, that the new technologies themselves directly create the new jobs. But to the extent they do, they don’t create nearly enough of them. For example, factories that manufacture combines and other agricultural machinery do employ some assembly-line workers, but not nearly as many people as worked in the fields in the pre-mechanized era.

When the new jobs do arise, it is indirectly, through the general working of the economy satisfying new desires, which may have only a tangential relationship to the new technologies. The telephone puts messenger-boys out of business, and also enables the creation of jobs in pizza delivery. But messenger-boys don’t automatically get pizza-delivery jobs; they go into the general pool of the unemployed, and entrepreneurs who create new industries draw their workers from that pool. At times there may be a considerable lag between the old jobs going away and the new jobs appearing.

Second, the new jobs haven’t always required more education and skill than the old ones. One of the key points of Harry Braverman’s 1974 classic Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century was that automation typically bifurcates the workforce into people who need to know a lot and people who need to know very little. Maybe building the first mechanized shoe factory required more knowledge and skill than a medieval cobbler had, but the operators of those machines needed considerably less knowledge and skill. The point of machinery was never just that it replaced human muscle-power with horsepower or waterpower or fossil fuels, but also that once the craftsman’s knowledge had been built into a machine, low-skill workers could replace high-skill workers.

And finally, technological progress by itself doesn’t always lead to general prosperity. It increases productivity, but that’s not the same thing. A technologically advanced economy can produce goods with less labor, so one possible outcome is that it could produce more goods for everybody. But it could also produce the same goods with less labor, or even fewer goods with much less labor. In Sismondi’s Dystopia, for example, why won’t the king stop turning his crank as soon as he has all the goods he wants, and leave everyone else to starve?

So whether a technological society is rich or not depends on social and political factors as much as economic ones. If a small number of people wind up owning the machines, patents, copyrights, and market platforms, the main thing technology will produce is massive inequality. What keeps that from happening is political change: progressive taxation, the social safety net, unions, shorter work-weeks, public education, minimum wages, and so on.

The easiest way to grasp this reality is to read Dickens: In his day, London was the most technologically advanced city in the world, but because political change hadn’t caught up, it was a hellhole for a large chunk of its population.

The fate of horses. Given the long history of technological unemployment, it’s tempting to see the current wave as just more of the same. Too bad for the stock brokers put out of work by automated internet stock-trading, but they’ll land somewhere. And if they don’t, they won’t wreck the economy any more than the obsolete clipper-ship captains did.

But what’s different about rising technologies like robotics and artificial intelligence is that they don’t bifurcate the workforce any more: To a large extent, the unskilled labor just goes away. The shoe factory replaced cobblers with machine designers and assembly-line workers. But now picture an economy where you get new shoes by sending a scan of your feet to a web site which 3D-prints the shoes, packages them automatically, and then ships them to you via airborne drone or driverless delivery truck. There might be shoe designers or computer programmers back there someplace, but once the system is built, the amount of extra labor your order requires is zero.

In A Farewell to Alms, Gregory Clark draws this ominous parallel: In 1901, the British economy required more than 3 million working horses. Those jobs are done by machines now, and the UK maintains a far smaller number of horses (about 800K) for almost entirely recreational purposes.

There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

By now, there is literally nothing that three million British horses can do more economically than machines. Could the same thing happen to humans? Maybe it will be a very long time before an AI can write a more riveting novel than Stephen King, but how many of us still have a genuinely irreplaceable talent?

Currently, the U.S. economy has something like 150 million jobs for humans. What if, at some point in the not-so-distant future, there is literally nothing of economic value that 150 million people can do better than some automated system?

Speed of adjustment. The counter-argument is subtle, but not without merit: You shouldn’t let your attention get transfixed by the new systems, because new systems never directly create as many jobs as they destroy. Most new jobs won’t come from maintaining 3D printers or manufacturing drones or programming driverless cars; they’ll come indirectly via Maslow’s hierarchy: People who get their old wants satisfied more easily will start to want new things, some of which will still require people. Properly managed, the economy can keep growing until all the people who need jobs have them.

The problem with that argument is speed. If technology were just a one-time burst, then no matter how big the revolution was, eventually our desires would grow to absorb the new productivity. But technology is continually improving, and could even be accelerating. And even though we humans are a greedy lot, we’re also creatures of habit. If the iPhone 117 hits the market a week after I got my new iPhone 116, maybe I won’t learn to appreciate its new features until the iPhone 118, 119, and 120 are already obsolete.

Or, to put the same idea in a historical context, what if technology had given us clipper ships on Monday, steamships on Tuesday, and 747s by Friday? Who would we have employed to do what?

You could imagine, then, a future where we constantly do want new things that employ people in new ways, but still the economy’s ability to create jobs keeps falling farther behind. Since we’re only human, we won’t have time either to appreciate the new possibilities technology offers us, or to learn the new skills we need to find jobs in those new industries — at least not before they also become obsolete.

Macroeconomics. Right now, though, we are still far from the situation where there’s nothing the unemployed could possibly do. Lots of things that need doing aren’t getting done, even as people who might do them are unemployed: Our roads and bridges are decaying. We need to prepare for climate change by insulating our buildings better and installing more solar panels. The electrical grid is vulnerable and doesn’t let us take advantage of the most efficient power-managing technologies. Addicts who want treatment aren’t getting it. Working parents need better daycare options. Students could benefit from more one-on-one or small-group attention from teachers. Hospital patients would like to see their nurses come around more often and respond to the call buttons more quickly. Many of our elderly are warehoused in inadequately staffed institutions.

Some inadequate staffing we’ve just gotten used to: We expect long lines at the DMV, and that it might take a while to catch a waitress’ eye. In stores, it’s hard to get anybody to answer your questions. But that’s just life, we think.

That combination of unmet needs and unemployed people isn’t a technological problem, it’s an economic problem. In other words, the problem is about money, not about what is or isn’t physically possible. Either the people with needs don’t have enough money to create effective demand in the market, or the workers who might satisfy the needs can’t afford the training they need, or the businessmen who might connect workers with consumers can’t raise the capital to get started.

One solution is for the Federal Reserve to create more money. At Vox, Timothy Lee writes:

When society invents a new technology that makes workers more efficient, it has two options: It can employ the same number of workers and produce more goods and services, or it can employ fewer workers to produce the same number of goods and services.

Jargon-filled media coverage makes this hard to see, but the Federal Reserve plays a central role in this decision. When the Fed pumps more money into the economy, people spend more and create more jobs. If the Fed fails to supply enough cash, then faster technological progress can lead to faster job losses — something we might be experiencing right now.

So if you’re worried that technological progress will lead to mass unemployment — and especially if you think this process is already underway — you should be very interested in what the Federal Reserve does.

Another option is for the government to directly subsidize the people whose needs would otherwise go unmet. That’s what the Affordable Care Act and Medicaid do: They subsidize healthcare for people who need it but otherwise couldn’t afford it, and so create jobs for doctors, nurses, and the people who manufacture drugs, devices, and the other stuff used in healthcare.

Finally, the government can directly invest in industries that otherwise can’t raise capital. The best model here is the New Deal’s investment in the rural electric co-ops that brought electricity to sparsely populated areas. It’s also what happens when governments build roads or mass-transit systems.

When you look at things this way, you realize that our recent job problems have as much to do with conservative macroeconomic policy as with technology. Since Reagan, we’ve been weakening all the political tools that distribute the benefits of productivity: progressive taxation, the social safety net, unions, shorter work-weeks, public education, the minimum wage. And the result has been exactly what we should have expected: For decades, increases in national wealth have gone almost entirely to owners rather than workers.

In short, we’ve been moving back towards Dickensian London.

The long-term jobs problem. But just because the Robot Apocalypse isn’t the sole source of our immediate unemployment problem, that doesn’t mean it’s not waiting in the middle-to-far future. Our children or grandchildren might well live in a world where the average person is economically superfluous, and only the rare genius has any marketable skills.

The main thing to realize about this future is that its problems are more social and psychological than economic. If we can solve the economic problem of distributing all this machine-created wealth, we could be talking about the Garden of Eden, or various visions of hunter-gatherer Heaven. People could spend their lives pursuing pleasure and other forms of satisfaction, without needing to work. But if we don’t solve the distribution problem, we could wind up in Sismondi’s Dystopia, where it’s up to the owners of the automata whether the rest of us live or die.

The solution to the economic problem is obvious: People need to receive some kind of basic income, whether their activities have any market value or not. The obvious question, “Where will the money for this come from?”, has an obvious answer: “From the surplus productivity that makes their economic contribution unnecessary.” In the same way that we can feed everybody now (and export food) with only 2% of our population working in agriculture, across-the-board productivity could create enough wealth to support everyone at a decent level with only some small number of people working.

But the social/psychological problem is harder. Kurt Vonnegut was already exploring this in his 1952 novel Player Piano. People don’t just get money from their work; they also get their identities and sense of self-worth. For example, coal miners of that era may not have wanted to spend their days underground breathing coal dust and getting black lung disease, but many probably felt a sense of heroism in making these sacrifices to support their families and to give their children better opportunities. If they had suddenly all been replaced by machines and pensioned off, they could have achieved those same results with their pension money. But why, an ex-miner might wonder, should anyone love or appreciate him, rather than just his unearned money?

Like unemployment itself, the idea that the unemployed are worthless goes way back. St. Paul wrote:

This we commanded you, that if any would not work, neither should he eat.

It’s worth noticing, though, that many people are already successfully dealing with this psycho-social problem. Scions of rich families only work if they want to, and many of them seem quite happy. Millions of Americans are pleasantly retired, living off a combination of savings and Social Security. Millions of others are students, who may be working quite hard, but at things that have no current economic value. Housespouses work, but not at jobs that pay wages.

Countless people who have wage-paying jobs derive their identities from some other part of their lives: Whatever they might be doing for money, they see themselves as novelists, musicians, chess players, political activists, evangelists, long-distance runners, or bloggers. Giving them a work-free income would just enable them to do more of what they see as their calling.

Conservative and liberal views of basic income. If you talk to liberals about basic income, the conversation quickly shifts to all the marvelous things they would do themselves if they didn’t have to work. Conservatives may well have similar ambitions, but their attention quickly shifts to other people, who they are sure would lead soulless lives of drunken society-destroying hedonism. (This is similar to the split a century ago over Prohibition: Virtually no one thought that they themselves needed the government to protect them from the temptation of gin, but many believed that other people did.)

So far this argument is almost entirely speculative, with both sides arguing about what they imagine would happen based on their general ideas about human nature. However, we may get some experimental results before long.

GiveDirectly is an upstart charity funded by Silicon Valley money, and it has tossed aside the old teach-a-man-to-fish model of third-world aid in favor of the direct approach: Poor people lack money, so give them money. It has a plan to provide a poverty-avoiding basic income — about $22 a month — for 12 years to everybody in 40 poor villages in Kenya. Another 80 villages will get a 2-year basic income. Will this liberate the recipients’ creativity? Or trap them in soul-destroying dependence and rob them of self-esteem?
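For scale, the per-recipient arithmetic is easy to check; this sketch uses only the figures just quoted (a back-of-envelope, not GiveDirectly’s own accounting):

```python
# Per-recipient stipend totals for the experiment described above,
# computed directly from the article's figures ($22/month stipend).
monthly_stipend = 22                             # dollars per month

twelve_year_total = monthly_stipend * 12 * 12    # 12-year villages
two_year_total = monthly_stipend * 12 * 2        # 2-year villages
print(f"12-year villages: ${twelve_year_total:,} per person")   # $3,168
print(f" 2-year villages: ${two_year_total:,} per person")      # $528
```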

My guess: a little bit of both, depending on who you look at. And both sides will feel vindicated by that outcome. We see that already in American programs like food stamps. For some conservatives, the fact that cheating exists at all invalidates the whole effort; that one guy laughing at us as he eats his subsidized lobster outweighs all the kids who now go to school with breakfast in their stomachs. Liberals may look at the same facts and come to the opposite conclusion: If I get to help some people who really need it, what does it matter if a few lazy lowlifes get a free ride?

So I’ll bet some of the Kenyans will gamble away their money or use it to stay permanently stoned, while others will finally get a little breathing room, escape self-reinforcing poverty traps, and make something of their lives. Which outcome matters to you?

Summing up. In the short run, there will be no Robot Apocalypse as long as we regain our understanding of macroeconomics. But we need to recognize that technological change combines badly with free-market dogma, leading to Dickensian London: Comparatively few people own the new technologies, so they capture the benefits while the rest of us lose our bargaining power as we become less and less necessary.

However, we’re still at the point in history where most people’s efforts have genuine economic value, and many things that people could do still need doing. So by using macroeconomic tools like progressive taxation, public investment, and money creation, the economy can expand so that technological productivity leads to more goods and services for all, rather than a drastic loss of jobs and livelihoods for most while a few become wealthy on a previously unheard-of scale.

At some point, though, we’re going to lose our competition with artificial intelligence and go the way of horses — at least economically. Maybe you believe that AIs will never be able to compete with your work as a psychotherapist, a minister, or a poet, but chess masters and truck drivers used to think that too. Sooner or later, it will happen.

Adjusting to that new reality will require not just economic and political change, but social and psychological change as well. Somehow, we will need to make meaningful lives for ourselves in a work-free technological Garden of Eden. When I put it that way, it sounds easy, but when you picture it in detail, it’s not. We will all need to attach our self-respect and self-esteem to something other than pulling our weight economically.

In the middle-term, there are things we can do to adjust: We should be on the lookout for other roles, like student and retiree, that give people a socially acceptable story to tell about themselves even if they’re not earning a paycheck. Maybe the academic idea of a sabbatical needs to expand to the larger economy: Whatever you do, you should take a year or so off every decade. “I’m on sabbatical” might become a story more widely acceptable than “I’m unemployed.” College professors and ministers are expected to take sabbaticals; it’s the ones who don’t who have something to explain.

Already-existing trends that shrink the workforce, like retraining mid-career or retiring early, need to be celebrated rather than worried about. In the long run the workforce is going to get smaller; that can be either a source of suffering or a cause for rejoicing, depending on how we construct it.

Most of all, we need to re-examine the stereotypes we attach to the unemployed: that they are lazy, undeserving, and useless. These stereotypes become self-fulfilling prophecies: If no one is willing to pay me, why shouldn’t I be useless?

Social roles are what we make them. The Bible does not report Adam and Eve feeling useless and purposeless in the Garden of Eden, and I suspect hunter-gatherer tribes that happened onto lands of plentiful game and endless forest handled that bounty relatively well. We could do the same. Or not.

What’s a 21st Century Equivalent of the Homestead Act?

A typical featured article on this blog is supposed to tell my readers something they might not already know, or at least to get them to think about it in a different way. But this time I’m just trying to raise a question, hoping that the combined wisdom and creativity of the readership will come up with stuff I haven’t thought of.

Before I ask the question, some background: One of the most radical things the United States government ever did was pass the Homestead Act (actually the Homestead Acts; there were a series of them). Beginning in 1850, and picking up steam after the Civil War, the government gave away relatively small plots of land — usually 160 acres — to settlers who over a period of five years would build a home on the land, live there, “improve” the land to make it farmable, and then farm it. Wikipedia claims that 10% of the total area of the United States was given away in this manner, to the benefit of 1.6 million families. [1]

I doubt Karl Marx had much influence on the U.S. Congress (though he was writing during this era) and there’s nothing particularly communist about establishing 1.6 million plots of private property. But I like to look at the Homestead Act in the light of the Marxist concept of the means of production. In a nutshell, the means of production is whatever resources are necessary to turn labor into goods and services. So, in a given society at a given state of technology,

Labor + X = Goods and Services

Solve for X, and that’s the means of production. Today, X is complicated: factories and patents and communication systems and whatever. But for most of human history, the means of production was mostly land. And it still could be, even in the 19th century with its growing industrial economy; if you had fertile land, you could work it and produce sustenance for yourself, plus some extra to trade.

To Marx, the problem of capitalism is that the means of production — land, factories, mines, and so on — wind up privately owned by a fairly small group of people, and everybody else can only get access to the means of production by negotiating with those people. In other words, your productivity is not up to you; you can’t just go work and collect the fruit of your labor, you need an employer to hire you, so that you can have a job and get paid. Your labor only counts if you can get an employer’s permission to use his access to the means of production. Otherwise, you’re like a landless farmer or an auto worker who has been laid off from the factory.

Marx foresaw a vicious cycle: The narrower the ownership of the means of production became, the less bargaining power a worker would have, and the larger the premium an employer could demand in order to grant access. [2] This imbalance in bargaining power would increase the concentration of wealth, making the ownership of the means of production even narrower.

Usually, communists end up talking about state ownership of the means of production, but I want to point out that that’s a method, not a goal. What is really important is universal access to the means of production. State ownership is one way to try to do that, and I’m not sure how many other ways there might be — that’s part of the question here — but the real goal should be access: If all the people who want to work can find a way to turn their effort into goods and services, without needing to make an extortionate deal with some gatekeeper, then we’re on to something.

Now let’s return to the Homestead Act. What it did was vastly increase the number of Americans with access to the means of production. Mind you, it didn’t establish universal access — if you were a freedman sharecropping in Georgia, or were making pennies an hour in some dangerous factory in Connecticut, you had little prospect of assembling a big enough stake to go out West and homestead for five years — but it was vastly expanded access.

So now you’re in a position to understand what I’m asking: What would do that now? What change could we make (where we includes but is not necessarily limited to the federal government) that would vastly increase access to whatever the means of production is today?


[1] Probably most of you have already realized that this was an example of robbing Peter to pay Paul. The only reason the U.S. government had all this land to give was that they were in the process of stealing it from the Native Americans.

I would argue that at this point the decision to rob Peter had already been made; I doubt any major figure in the government saw much future for the Native Americans other than being pushed back onto reservations or annihilated. However we do the moral calculations today, at the time Congress saw itself with the power (and even the right, though don’t ask me to defend it) to dispose of that land however it wanted.

Given that robbery-in-progress, I think the decision to pay Paul is still remarkable. It certainly wasn’t the only thing Congress could have done. The government could have applied the Spanish model, and created a bunch of large haciendas to be controlled by a wealthy elite. Or it could have applied the English model, and granted the land in huge swathes to public/private companies like the East India Company or the Virginia Company, who could develop it for profit. What it did instead created a middle class of small landowners rather than an aristocracy or a managerial elite.

[2] Workers don’t usually pay an explicit “premium for access to the means of production”, but it’s implicit when a profitable business pays low wages: Money comes in and the owner keeps the lion’s share. If you don’t like it, go get another job.

One way to read the productivity vs. wages graphs I post every few months is that access premiums have been growing since the mid-1970s, and really started to accelerate in the mid-1980s.

The Election Is About the Country, Not the Candidates

Citizens shouldn’t let the media make us forget about ourselves.


Judging by the amount of media attention they got, these were the most important political stories of the week: Donald Trump and Bernie Sanders agreed to debate, but then Trump backed out, leading Sanders supporters to launch the #ChickenTrump hashtag. A report on Hillary Clinton’s emails came out. A poll indicated that the California primary is closer than previously thought. Trump’s delegate total went over 50%. Elizabeth Warren criticized Trump, so he began calling her “Pocahontas”. Sanders demanded that Barney Frank be removed as the chair of the DNC’s platform committee. Trump told a California audience that the state isn’t in a drought and has “plenty of water“. Trump accused Bill Clinton of being a rapist, and brought up the 1990s conspiracy theory that Vince Foster was murdered. President Obama said that the prospect of a Trump presidency had foreign leaders “rattled“, and Trump replied that “When you rattle someone, that’s good.” Clinton charged that Trump had been rooting for the 2008 housing collapse. Pundits told us that the tone of the campaign was only going to get worse from here; Trump and Clinton have record disapproval ratings for presidential nominees, and so the debate will have to focus on making the other one even more unpopular.

If you are an American who follows political news, you probably heard or read most of these stories, and you may have gotten emotionally involved — excited or worried or angry — about one or more of them. But if at any time you took a step back from the urgent tone of the coverage, you might have wondered what any of it had to do with you, or with the country you live in. The United States has serious issues to think about and serious decisions to make about what kind of country it is or wants to be. This presidential election, and the congressional elections that are also happening this fall, will play an important role in those decisions.

That’s why I think it’s important, both in our own minds and in our interactions with each other, to keep pulling the discussion back to us and our country. The flaws and foibles and gaffes and strategies of the candidates are shiny objects that can be hard to ignore, and Trump in particular is unusually gifted at drawing attention. But the government of the United States is supposed to be “of the People, by the People, and for the People”. It’s supposed to be about us, not about them.

As I’ve often discussed before, the important issues of our country and how it will be governed, of the decisions we have to make and the implications those decisions will have, are not news in the sense that our journalistic culture understands it. Our sense of those concerns evolves slowly, and almost never changes significantly from one day to the next. It seldom crystallizes into events that are breaking and require minute-to-minute updates. At best, a breaking news event like the Ferguson demonstrations or the Baltimore riot will occasionally give journalists a hook on which to hang a discussion of an important issue that isn’t news, like our centuries-long racial divide. (Picture trying to cover it without the hook: “This just in: America’s racial problem has changed since 1865 and 1965, but it’s still there.”)

So let’s back away from the addictive soap opera of the candidates and try to refocus on the questions this election really ought to be about.

Who can be a real American?

In the middle of the 20th century (about the time I was born), if you had asked people anywhere in the world to describe “an American”, you’d have gotten a pretty clear picture: Americans were white and spoke English. They were Christians (with a few Jews mixed in, but they were assimilating and you probably couldn’t tell), and mostly Protestants. They lived in households where two parents — a man and a woman, obviously — were trying (or hoping) to raise at least two children. They either owned a house (that they probably still owed money on) or were saving to buy one. They owned at least one car, and hoped to buy a bigger and better one soon.

If you needed someone to lead or speak for a group of Americans, you picked a man. American women might get an education and work temporarily as teachers or nurses or secretaries, but only until they could find a husband and start raising children.

Of course, everyone knew that other kinds of people lived in America: blacks, obviously; Hispanics and various recent immigrants whose English might be spotty; Native Americans, who were still Indians then; Jews who weren’t assimilating and might make a nuisance about working on Saturday, or even wear a yarmulke in public; single people who weren’t looking to marry or raise children (but might be sexually active anyway); women with real careers; gays and lesbians (but not transgender people or even bisexuals, whose existence wasn’t recognized yet); atheists, Muslims, and followers of non-Biblical religions; the homeless and others who lived in long-term poverty; folks whose physical or mental abilities were outside the “normal” range; and so on.

But they were Americans-with-an-asterisk. Such people weren’t really “us”, but we were magnanimous enough to tolerate them living in our country — for which we expected them to be grateful.

Providing services for the “real” Americans was comparatively easy: You could do everything in English. You didn’t have to concern yourself with handicapped access or learning disabilities. You promoted people who fit your image of a leader, and didn’t worry about whether that was fair. You told whatever jokes real Americans found funny, because anybody those jokes might offend needed to get a sense of humor. The schools taught white male history and celebrated Christian holidays. Every child had two married parents, and you could assume that the mother was at home during the day. Everybody had a definite gender and was straight, so if you kept the boys and girls apart you had dealt with the sex issue.

If those arrangements didn’t work for somebody, that was their problem. If they wanted the system to work better for them, they should learn to be more normal.

It’s easy to imagine that this mid-20th-century Pleasantville America is ancient history now, but it existed in living memory and still figures as an ideal in many people’s minds. Explicitly advocating a return to those days is rare. But that desire isn’t gone; it’s just underground.

For years, that underground nostalgia has figured in a wide variety of political issues. But it has been the particular genius of Donald Trump to pull them together and bring them as close to the surface as possible without making an explicit appeal to turn back the clock and re-impose the norms of that era. “Make America great again!” doesn’t exactly promise a return to Pleasantville, but for many people that’s what it evokes.

What, after all, does the complaint about political correctness amount to once you get past “Why can’t I get away with behaving like my grandfather did?”

We can picture rounding up and deporting undocumented Mexicans by the millions, because they’re Mexicans. They were never going to be real Americans anyway. Ditto for Muslims. It would have been absurd to stop letting Italians into the country because of Mafia violence, or to shut off Irish immigration because of IRA terrorism. But Muslims were never going to be real Americans anyway, so why not keep them out? (BTW: As I explained a few weeks ago, the excuse that the Muslim ban is “temporary” is bogus. If nobody can tell you when or how something is going to end, it’s not temporary.)

All the recent complaints about “religious liberty” fall apart once you dispense with the notion that Christian sensibilities deserve more respect than non-Christian ones, or that same-sex couples deserve less respect than opposite-sex couples.

On the other side, Black Lives Matter is asking us to address that underground, often subconscious, feeling that black lives really aren’t on the same level as white lives. If a young black man is dead, it just doesn’t have the same claim on the public imagination — or on the diligence of the justice system — that a white death would. How many black or Latina girls vanish during a news cycle that obsesses over some missing white girl? (For that matter, how many white presidents have seen a large chunk of the country doubt their birth certificates, or have been interrupted during State of the Union addresses by congressmen shouting “You lie!”?)

But bringing myself back to the theme: The issue here isn’t Trump, it’s us. Do we want to think of some Americans as more “real” than others, or do we want to continue the decades-long process of bringing more Americans into the mainstream?

That question won’t be stated explicitly on your ballot this November, like a referendum issue. But it’s one of the most important things we’ll be deciding.

What role should American power play in the world?

I had a pretty clear opinion on that last question, but I find this one much harder to call.

The traditional answer, which goes back to the Truman administration and has existed as a bipartisan consensus in the foreign-policy establishment ever since, is that American power is the bedrock on which to build a system of alliances that maintains order in the world. The archetype here is NATO, which has kept the peace in Europe for 70 years.

That policy involves continuing to spend a lot on our military, and risks getting us involved in wars from time to time. (Within that establishment consensus, though, there is still variation in how willing we should be to go to war. The Iraq War, for example, was a choice of the Bush administration, not a necessary result of the bipartisan consensus.) The post-Truman consensus views America as “the indispensable nation”; without us, the world community lacks both the means and the will to stand up to rogue actors on the world stage.

A big part of our role is in nuclear non-proliferation. We intimidate countries like Iran out of building a bomb, and we extend our nuclear umbrella over Japan so that it doesn’t need one. The fact that no nuclear weapon has been fired in anger since 1945 is a major success of the establishment consensus.

Of our current candidates, Hillary Clinton (who as Secretary of State negotiated the international sanctions that forced Iran into the recent nuclear deal) is the one most in line with the foreign policy status quo. Bernie Sanders is more identified with strengthened international institutions which — if they could be constructed and work — would make American leadership more dispensable. To the extent that he has a clear position at all, Donald Trump is more inclined to pull back and let other countries fend for themselves. He has, for example, said that NATO is “obsolete” and suggested that we might be better off if Japan had its own nuclear weapons and could defend itself against North Korea’s nukes. On the other hand, he has also recently suggested that we bomb Libya, so it’s hard to get a clear handle on whether he’s more or less hawkish than Clinton.

Should we be doing anything about climate change?

Among scientists, there really are two sides to the climate-change debate: One side believes that the greenhouse gases we are pumping into the atmosphere threaten to change the Earth’s climate in ways that will cause serious distress to millions or even billions of people, and the other side is funded by the fossil fuel industry.

It’s really that simple. There are honest scientific disagreements about the pace of climate change and its exact mechanisms, but the basic picture is clear to any scientist who comes to the question without a vested interest: Burning fossil fuels is raising the concentration of greenhouse gases in the atmosphere. An increase in greenhouse gases causes the Earth to radiate less heat into space. So you would expect to see a long-term warming trend since the Industrial Revolution got rolling, and in fact that’s what the data shows — despite the continued existence of snowballs, which has been demonstrated by a senator funded by the fossil fuel industry.

Unfortunately, burning fossil fuels is both convenient and fun, at least in the short term. And if you don’t put any price on the long-term damage you’re doing, it’s also economical. In reality, doing nothing about climate change is like going without health insurance or refusing to do any maintenance on your house or car. Those decisions can improve your short-term budget picture, which now might have room for that Hawaiian vacation your original calculation said you couldn’t afford. Your mom might insist that you should account for your risk of getting sick or needing some major repair, but she’s always been a spoilsport.

That’s the debate that’s going on now. If you figure in the real economic costs of letting the Earth get hotter and hotter (dealing with tens of millions of refugees from regions that will soon be underwater, building a seawall around Florida, moving our breadbasket from Iowa to wherever the temperate zone is going to be in 50 years, rebuilding after the stronger and more frequent hurricanes that are coming, and so on), then burning fossil fuels is really, really expensive. But if you decide to let future generations worry about those costs and just get on with enjoying life now, then coal and oil are still cheap compared to most renewable energy sources.

So what should we do?

Unfortunately, nobody has come up with a good way to re-insert the costs of climate change into the market without involving government, or to do any effective mitigation without international agreements among governments; the recent Paris Agreement is just a baby step in that direction. And to one of our political parties, government is a four-letter word and world government is an apocalyptic horror. So the split inside the Republican Party is between those who pretend climate change isn’t happening, and those who think nothing can or should be done about it. (Trump is on the pretend-it-isn’t-happening side.)

President Obama has been taking some action to limit greenhouse gas emissions, but without cooperation from Congress his powers are pretty limited. (It’s worth noting how close we came to passing a cap-and-trade bill to put a price on carbon before the Republicans took over Congress in 2010. What little Obama’s managed to do since may still get undone by the Supreme Court, particularly if its conservative majority is restored.)

Both Clinton and Sanders take climate change seriously. As is true across the board, Sanders’ proposals are simpler and more sweeping (like “ban fracking”) while Clinton’s are wonkier and more complicated. (In a debate, she listed the problems with fracking — methane leaks, groundwater pollution, earthquakes — and proposed controlling them through regulation. She concluded: “By the time we get through all of my conditions, I do not think there will be many places in America where fracking will continue to take place.”) But like Obama, neither of them will accomplish much if we can’t flip Congress.

Trump, meanwhile, is doing his best impersonation of an environmentalist’s worst nightmare. He thinks climate change is a hoax, wants to reverse President Obama’s executive orders to limit carbon pollution, has pledged to undo the Paris Agreement, and wants to get the country back to burning more coal.

How should we defend ourselves from terrorism?

There are two points of view on ISIS and Al Qaeda-style terrorism, and they roughly correspond to the split between the two parties.

From President Obama’s point of view, the most important thing about the battle with terrorism is to keep it contained. Right now, a relatively small percentage of the world’s Muslims support ISIS or Al Qaeda, while the vast majority are hoping to find a place for themselves inside the world order as it exists. (That includes 3.3 million American Muslims. If any more than a handful of them supported terrorism, we’d be in serious trouble.) We want to keep tightening the noose on ISIS in Iraq and Syria, and keep closing in on terrorist groups elsewhere in the world, while remaining on good terms with the rest of the Muslim community.

From this point of view — which I’ve described in more detail here and illustrated with an analogy here — the worst thing that could happen would be for these terrorist incidents to touch off a world war between Islam and Christendom.

The opposite view, represented not just by Trump but by several of the Republican rivals he defeated, is that we are already in such a war, so we should go all out and win it: Carpet bomb any territory ISIS holds, without regard to civilian casualties. Discriminate openly against Muslims at home and ban any new Muslims from coming here.

Like Obama, I believe that the main result of these policies would be to convince Muslims that there is no place for them in a world order dominated by the United States. Rather than a few dozen pro-ISIS American terrorists, we might have tens of thousands. If we plan to go that way, we might as well start rounding up 3.3 million Americans right now.

Clinton and Sanders are both roughly on the same page with Obama. Despite being Jewish and having lived on a kibbutz, Sanders is less identified with the current Israeli government than either Obama or Clinton, to the extent that makes a difference.

Can we give all Americans a decent shot at success? How?

Pre-Trump, Republicans almost without exception argued that all we need to do to produce explosive growth and create near-limitless economic opportunity for everybody is to get government out of the way: Lower taxes, cut regulations, cut government programs, negotiate free trade with other countries, and let the free market work its magic. (Jeb Bush, for example, argued that his small-government policies as governor of Florida — and not the housing bubble that popped shortly after he left office — had led to 4% annual economic growth, so similar policies would do the same thing for the whole country.)

Trump has called this prescription into question.

If you think about it, the economy is rigged, the banking system is rigged, there’s a lot of things that are rigged in this world of ours, and that’s why a lot of you haven’t had an effective wage increase in 20 years.

However, he has not yet replaced it with any coherent economic view or set of policies. His tax plan, for example, is the same sort of let-the-rich-keep-their-money proposal any other Republican might make. He promises to renegotiate our international trade agreements in ways that will bring back all the manufacturing jobs that left the country over the last few decades, but nobody’s been able to explain exactly how that would work.

At least, though, Trump is recognizing the long-term stagnation of America’s middle class. Other Republicans liked to pretend that was all Obama’s fault, as if the 2008 collapse hadn’t happened under Bush, and — more importantly — as if the overall wage stagnation didn’t date back to Reagan.

One branch of liberal economics, the one that is best exemplified by Bernie Sanders, argues that the problem is the over-concentration of wealth at the very top. This can devolve into a the-rich-have-your-money argument, but the essence of it is more subtle than that: Over-concentration of wealth has created a global demand problem. When middle-class and poor people have more money, they spend it on things whose production can be increased, like cars or iPhones or Big Macs. That increased production creates jobs and puts more money in the pockets of poor and middle-class people, resulting in a virtuous demand/production/demand cycle that is more-or-less the definition of economic growth.

By contrast, when very rich people have more money, they are more likely to spend it on unique items, like van Gogh paintings or Mediterranean islands. The production of such things can’t be increased, so what we see instead are asset bubbles, where production flattens and the prices of rare goods get bid higher and higher.

For the last few decades, we’ve been living in an asset-bubble world rather than an economic-growth world. The liberal solution is to tax that excess money away from the rich, and spend it on things that benefit poor and middle-class people, like health care and infrastructure.

However, there is a long-term problem that neither liberal nor conservative economics has a clear answer for: As artificial intelligence creeps into our technology, we get closer to a different kind of technological unemployment than we have seen before, in which people of limited skills may have nothing they can offer the economy. (In A Farewell to Alms, Gregory Clark makes a scary analogy: In 1901, the British economy provided employment for 3 million horses, but almost all those jobs have gone away. Why couldn’t that happen to people?)

As we approach that AI-driven world, the connection between production and consumption — which has driven the world economy for as long as there has been a world economy — will have to be rethought. I don’t see anybody in either party doing that.


So what major themes have I left out? Put them in the comments.

Can We Overthrow the Creditocracy?

In the long history of oppression, where are we today? And what can we do about it?


The simplest, most direct form of oppression is forced labor: Work for me, do what I say, or I’ll beat you. And if no beating short of death will induce you to do what I want, then the example of your demise will at least make my next victim more pliable.

Unfortunately for the oppressor, though, forced labor is also morally simple. The press-ganged victim knows I have wronged him or her. Given the chance to run away, or (better yet) kill me, he or she will feel completely justified.

That’s why history is full of attempts to dress oppression up and make its morality more confusing. If you want to be cynical, you might tell the whole economic history of the world that way: as a series of systems to dress up oppression and shift the guilt of it from the order-giver to the order-taker. In every era, the many work and the few benefit, but those who run away or revolt are the immoral ones. They are ungrateful wretches who bite the hands that feed them and repay their kindly benefactors with violence.

For example, from today’s perspective the slave society of the old South seems pretty stark: Do what I say because I own you and your children and your children’s children down to the last generation. And yet, the literature of the time — written by whites, naturally — often waxes lyrical about the great good the white man has done for his undeserving servants: given them the gift of civilization, saved their souls for Christ, accepted them in his home and fed and clothed them since birth, or perhaps purchased them from an animal-like existence under a slave-trader and bestowed upon them new names and new roles (however lowly) in human society.

How dare the slave forget his obligation and steal himself away!

Freedom without access. Most systems are more subtle than that. The people at the bottom aren’t owned, and in fact their freedom may be a central point of public celebration. But a small group controls access to something everyone needs to survive. To guarantee your own access, you must strike a deal with them — on their terms, usually — and do what they say. And because society frames its story in a way that justifies the access-control, the people who tell you what to do are not your oppressors, they’re your benefactors. You owe them for giving you the opportunity to serve.

What that necessary something is, and how access to it is controlled, tell you what kind of oppressive system you’re in. In feudalism, a small group of lordly families control the land you need to grow food. To get access, your family must swear fealty to one of them, and God have mercy on the traitor who breaks his vows. In the sharecropper system that replaced slavery in the South, whites (often the same whites who had owned the antebellum plantations) controlled access to money and markets. Freedom and even a small chunk of land might be yours, but the wherewithal to survive until harvest had to be borrowed, and then you were obliged to sell your crop to your creditor, for a price he named — usually not quite enough to clear your debt. If you tried to escape this system, you weren’t a runaway slave (as your mother or father would have been), but you were a runaway debtor and the law would hunt you down just the same.

In the North, oppression took its purest form in the company towns immortalized in the song “16 Tons”, where the singer imagines that not even death will get him out. The company controlled every side of the transaction — not just access to productive work, but the scrip you were paid in, and the company store where you could spend it. The system wasn’t quite so obvious in the bigger cities, where many employers drew from the same labor pool, but the basic outline was the same: To get access to what Marx called “the means of production” — land, factories, mines, or any other resource that human labor could turn into the stuff of survival — the masses at the bottom of the pyramid had to deal with a fairly small group of employers, who could dictate wages and working conditions.

As on the plantations or the feudal manors, the language of morality had been turned inside-out: The oppressor was the benefactor. Give me a job, the worker begged.

The American exception. Underneath all that oppressiveness, though, something new had been blooming in America from the beginning. Dispossessing the Native Americans of an entire continent had created opportunities for wealth so vast that the old upper classes couldn’t exploit them all without help, so common people were cut in on the booty.

Already in 1776’s The Wealth of Nations, Adam Smith had documented that wages were considerably higher in the colonies (where there was so much work to be done and a comparative dearth of hands) than in England itself. The post-revolutionary Homestead Acts codified a system that had been operating informally for some while: For whites, American wages were enough above subsistence that you could build a stake of capital, buy tools and transport, and then set out for the hinterland and establish an independent relationship with the means of production. For one of the few times since the hunter-gatherer era, working-class Europeans could apply their labor directly to the land and live without paying for access.

Post-Civil-War American history can be told as a struggle by the capitalist class to claw back those hastily bestowed opportunities by manipulating markets, monopolizing the new railroads, and generally “crucify[ing] mankind upon a cross of gold” as William Jennings Bryan famously put it. But they never completely succeeded. Hellish as turn-of-the-century mines and factories could be, the vision remained: Capitalism didn’t have to be so bad, if workers had a way to opt out and employers had to compete to hire them.

The early 20th century brought a series of shocks to the capitalist system: the world wars, the Russian Revolution, the Great Depression, and finally the very real threat of Communist revolutions. The devastated Europe of 1945 in some ways duplicated the opportunities of the New World: There was so much work to be done that for three decades (les Trente Glorieuses, as the French put it) full employment and rising wages could be the norm.

In the Cold War competition with Communism, Capitalism had to loosen up to maintain the workers’ loyalty. And so a mixed public/private social contract developed: The means of production would continue to be privately owned, but government would keep the worker in the game. Government would provide education at little or no cost to the student; guarantee a liveable minimum wage; protect consumers from unsafe products and workers from dangerous workplaces; prevent monopolies from forming; create jobs by building public infrastructure; defend the workers’ right to form unions powerful enough to negotiate with corporations on equal terms; maintain a safety net against unemployment, disability, and old age; and (except in the United States) take care of the sick. The political expectation was that a rising tide would lift all boats: If profits rose, wages would rise, and everyone would benefit.

Counterrevolution. But by the late 1970s, the failure of the Soviet system to make good on its economic promises made Khrushchev’s we-will-bury-you threat ring hollow, and Western capitalists started to wonder if they’d given away too much. The theme of their Reagan/Thatcher counterrevolution would be privatization. Wherever possible, get government out of the picture so that the natural power imbalance between worker and employer can re-assert itself.

And that has been the story of the last not-so-glorious forty years: Powerful unions and nearly-free state universities are mere memories. Inflation has pushed the minimum wage down towards subsistence. We are told that the wealthiest nation in the world cannot afford a safety net; if bankruptcy looms (or can be manufactured), the solution is not to commit new resources, but to slash benefits. Consumer and worker protection is “job-killing regulation”, and making up for a job shortfall with public works is unthinkable. Increasingly, even public K-12 education is under fire; if you really want a high-quality education for your child, perhaps a government voucher will defray the cost a little, until inflation eats up that subsidy as it has the minimum wage.

As a result, even as productivity-per-hour and GDP-per-capita have continued to rise, wages have not. Ever-increasing shares of the national income and the national wealth are controlled by the top 10%, the top 1%, the top .01%. Even in the uppermost levels of the economic pyramid, there is always an even smaller class of people just above you whose skyrocketing wealth is leaving you far behind.

Creditocracy. Andrew Ross’ book Creditocracy and the Case for Debt Refusal points out that the goal of the counterrevolution is not just a restoration of late 19th-century capitalism, in which large employers dominate by controlling access to jobs. It’s a subtly different system of oppression entirely: a creditocracy.*

Everything the Cold War social contract promised is still available; you just have to pay up for it. How will you do that? You’ll get loans, and spend the rest of your life working to make payments. Rather than beg “Give me a job”, you’ll beg “Give me a loan, so that I can get what I need to get and keep a job.” The bankers will be your benefactors, and then they will tell you what to do.

Education is where this project is most advanced. Probably there will always be some way to warehouse children at public expense while their parents work, either in public schools or in minimal private schools fully covered by a public voucher. But if you want the kind of education that gives a child options beyond minimum wage or welfare, you’ll have to pay up. Some people will be able to cover that expense, but most will have to borrow. If we’re talking about college, we’re already there. Working your way through college was once a realistic goal; it no longer is. The Federal Reserve recently estimated total student debt at $1.13 trillion, with about 1 in 8 borrowers owing more than $50,000 each, and a small but increasing number beginning their careers more than $200,000 in the hole.

If you just want to live somewhere, that won’t be a problem. But if you want to live in a neighborhood where potholes are fixed and police protect you rather than prey on you, you’ll have to pay up. Need a loan?

Public transportation? Forget about it. You can stay home for free, but if you want to work you’ll need a car, and cars cost. Calories are easy to come by, but safe and healthy food? Still available in certain upscale groceries, if you can afford it. Medical care? We’d never just let you die, and we have repayment plans with attractive rates. Clothes? I see you’ve got your body covered, but you’ll never get a job looking like that. Libraries? Parks? There are some you can join for a membership fee, though probably not in your neck of the woods. News? Comes from cable TV or the internet, via the local monopoly. Retirement? You can never be sure you’ll have enough to stay out of poverty, but maybe your kids will co-sign for you if you live too long.

During the post-war Trente Glorieuses, debt was a way to anticipate your rising income and get a few luxuries earlier than you otherwise might. But in the Creditocracy, debt is a necessity; all but the wealthy need to borrow to stay in the game. And once you owe, the onus is on you to toe the line: You’ll never cover your payments working in a field you love, or letting moral considerations control what you will and won’t do for a living. (Are you sure you don’t want to fight in our war? We’re hiring.) You don’t dare stick your neck out politically or socially, if you want to stay employed and keep making your payments. Maybe someday, if you get it all paid off, you’ll live by your heart and your conscience. But until then …

And where does this needed credit ultimately come from? It’s conjured out of the aether by the Federal Reserve, and distributed to the big banks by loans at rock-bottom rates. That’s the controlled access that makes the whole system possible. They have access and you need it, so they can tell you what to do and leave you thanking them for it. And if they ever push things too far and make loans that can never be repaid, then they’ll have the government behind them, bailing them out and sticking ordinary taxpayers with the bill. You may have lost your home, your savings, and God knows what else in the whole mess, but at least the banker will be made whole.

The Morality of Default. On the rare occasions when systems of oppression are beaten, they are first beaten morally. Slavery can’t be defeated until the runaway slave becomes a hero rather than a scoundrel, and the rebellious slave a soldier rather than a murderer. The company town can’t be overthrown until the worker who refuses to work becomes a striker rather than a bum, and values solidarity with his comrades over the debt he owes his employer for “giving” him a job.

Today, it seems like an impossible dream that debtors could ever take the moral high ground away from creditors. Somebody who borrows and then won’t pay is a deadbeat, a moocher, a loser. It seems hard to imagine a debtors’ rights movement that could win popular support for a repayment strike or the outright renunciation of unreasonable debts.

But that’s what Ross envisions. To get there, we need to develop and popularize moral standards that separate good debts from bad debts. For example, watch John Oliver’s piece on the payday lending industry, and then consider the idea that many of these loans — particularly ones where the original principal amount was paid back long ago, but the compounding interest has taken on a life of its own — should just not be repaid. Similarly, the Consumer Financial Protection Bureau is suing ITT Educational Services for tactics that seem widespread in the for-profit college industry: using high-pressure sales tactics to push students into taking out loans, when they have little prospect of either getting a degree or paying off the loan. Some of the sub-prime loans of the housing boom were likewise made with no reasonable prospect of repayment, then sold off to investors anyway. The primary fraud came from the banker, not the borrower.

Other debt is perhaps no fault of the lender, but should not be charged against the debtor either. Medical debt — often as clear a case of pay-or-die as any highway robbery — is the best example, but much student debt fits as well. The debt exists because of society’s failure to provide what ought to be public goods. If any debt is going to vanish in the fancy bookkeeping of the Fed, this kind of debt should.

Some debts are legitimate, but there are equally legitimate claims in the other direction, ones that the Creditocracy does not take as seriously. Much of the developing world’s debt to the wealthy countries might be cancelled by fair reparations for colonialism, or by the responsibility that industrialized nations have for using up the carbon-carrying capacity of the atmosphere. Today, the obligations in one direction are considered iron-clad, while the ones in the other are optional. Why should that be?

Probably most debts should eventually be paid. But even those might become part of a larger debt strike, to force action on the ones that should be renegotiated or just renounced.

In the long run, the infrastructure of the Creditocracy might be torn down and rebuilt into an economic system whose primary purpose is to create useful goods and services rather than profits, a world with more co-ops and credit unions and crowd funding, and less money swirling around in financial derivatives.

But long before that can happen, the moral structure that supports the Creditocracy needs to be challenged and shaken at many levels. Imagine, if you can, a world in which the debtor who does not pay — like the slave who runs away or the worker who sits down on the job — is a hero.

Not a deadbeat, a moocher, or a loser. A hero.


* One reason this “review” is so long is that although I think the ideas in the book are important, I don’t actually like the way Ross makes his case. His style is repetitive, needlessly polemical, and sloppy with numbers. So I’m recasting the ideas in my own way.

One example: While making some point about Google and Facebook, Ross mentioned what each “earned” in a particular quarter. The numbers seemed high to me, so I checked them. He had actually quoted the companies’ revenues, not their earnings.

He was making a qualitative point, in which revenues worked just as well as earnings (i.e., some other number was small potatoes to companies that big). So it seemed to just be sloppiness rather than deception. But I don’t have to hit many such examples before I start to doubt everything.

Prosperity Without Growth?

When you take a very-long-term view of the future of civilization, the one option that seems most unlikely is that we can continue the patterns of the last few centuries: an ever-increasing population consuming ever-more stuff, using ever-more natural resources to produce it, and leaving ever-more waste products for the planet to absorb.

Futurists embarrass themselves when they predict precisely when and how that pattern will break, but still, it defies my imagination to picture how this could all continue indefinitely down the millennia. Eventually — whether by wise planning, cataclysm, alien conquest, or the return of Jesus — the exponential growth is going to stop.*

What will that look like? If you stipulate those steady-state conditions — stable population, stable resource use, and each generation leaving the planet’s natural environment more-or-less the way they found it — what kind of society can you construct? Can you come up with one that has a place for people more-or-less like us? Or does the whole concept involve making over the human character completely? Could the people in such a no-growth society feel prosperous? Or is prosperity-without-growth a contradiction?

A number of fairly smart, reasonable people have been asking those questions for a while now, and they’re starting to come up with some visions — sketchy ones, to be sure, but sketched-out well enough that the rest of us should start paying attention. One such vision is in Enough is Enough by Rob Dietz and Dan O’Neill.

Disclaimers. Growth has gotten to be such a religion that no-growth smacks of heresy. Like most heresies, it has been caricatured by the faithful to such a degree that any discussion has to start with a few denials.

Two examples of non-growing economies leap to mind: growth-oriented economies that are failing to grow (as the American economy has failed since the housing bubble burst), and aboriginal hunter-gatherer economies. The first example is characterized by despair, lack of opportunity, and increasing poverty; the second, by discomfort, lack of technology, and vulnerability to disease and famine. Aboriginal societies may live in harmony with Nature, but they also live at the mercy of Nature. One thing you can say for the global economy is that Iowa can have a drought without Iowans starving to death.

Neither example is what the no-growth visionaries are proposing. A society without growth could continue to have antibiotics and the internet — and could even continue innovating, as long as the innovations-as-a-whole didn’t increase the consumption of resources or the production of waste.

A growth-oriented economy that doesn’t grow is the worst of both worlds. It consumes resources unsustainably, and yet fails to provide opportunity and hope. If that were the goal, it could easily be achieved: Just instruct the Fed to keep interest rates high enough to choke off new investment.

The challenge, though, is quite different: To envision a steady-state relationship between Nature and a stable population of humans, while providing those humans the opportunity to lead satisfying lives.

Outline. The book is in three parts. The first discusses the overall idea of “enough”. The second breaks this down into specific areas: How could we achieve a stable population? How could a non-growing economy deal with poverty? What would banking and investment look like? And the third discusses strategies for changing the culture and the political system.

Problem-solving attitude. Because it covers so many topics and is intended to further an open-ended discussion, the book really can’t be condensed. Its strength is in its details, not in a sound bite that gets elaborated over 200 pages.

But the other important aspect of the book is the attitude it projects: It takes the problem of planetary depletion seriously and approaches it with a problem-solving attitude. So it is not a jeremiad, or a prophecy of doom, or a denial that anything really needs to change — three categories that take in most of the debate on these topics. It’s easy to find reasons why a stable economy can’t happen, but comparatively rare to find people who accept that it must happen eventually, and then bring a problem-solving attitude to the question of how.

A number of economic institutions evolved along with the idea of economic growth, and they will have to change or be replaced to achieve stability: a money-creating banking system, measuring the economy by GDP, and corporations devoted to constant growth are just a few of the ones discussed in more detail. An example of the kind of change a stable economy would need: Much of what is done today by profit-seeking corporations could be done by consumer-owned co-ops focused on providing service rather than producing an ever-increasing profit for investors.**

The poor held hostage. To me, the most significant argument against a stable economy says, “Morally, how can we rein in economic growth when so many people still don’t have enough?” My problem with that question: I have lost faith that the capitalist economy will ever provide enough for everybody, no matter how high global GDP gets. Over the last few decades, the top 1% has gotten better and better at capturing economic growth for themselves. From the point of view of a CEO seeking higher profits for his corporation, a better life for the poor is an inefficiency to be avoided. Across-the-board wage increases are a capitalist nightmare, not a fulfillment of the capitalist system.

In the Dietz/O’Neill view, we need to turn this kind of thinking around: Rather than continuing to grow the economy in hopes that some of the new consumables will filter down to the poor, we need to solve the problem of inequality so that we can achieve a stable economy. Poverty is a political problem, not an economic problem. Growing the economy without changing the politics won’t solve it.

Rather than putting the entire burden of proof on the no-growth vision, I think we also have to stop accepting a “someday” vision of ending poverty through growth. Anyone who makes the anti-poverty argument for growth needs to explain exactly how growth is going to help the poor, and offer a projection of how much more growth it will take to eradicate poverty before we can stabilize the economy’s toll on the planet.

Trustworthy governance. Again and again, I was struck by how the Dietz/O’Neill vision requires that we work together as a species. The easiest way to envision that unity is via some Hunger-Games-style tyranny, which no one (least of all Dietz and O’Neill) wants. But even the most free and democratic vision of a stable economy depends on establishing some trustworthy global institutions.

For example, a global cap-and-trade system to stabilize the CO2 in the atmosphere would work only if people can’t cheat anywhere in the world, if the tradable CO2 certificates can’t be counterfeited, and if you can’t “earn” them by creating bogus carbon-offset projects — trees that are never actually planted, etc.

Similarly, population could be stabilized through incentives and voluntary cooperation rather than one-child mandates and forced sterilizations. But someone would have to monitor all that and adjust the incentives accordingly, and the rest of us would have to trust the fairness of that monitoring agency.

This is the part I worry about most: If you have money and power and you want to derail the vision of a stable future, all you really have to do is create distrust. What could be easier?

Not a lone voice. Another striking thing about Enough is Enough is the extent to which it builds on the work of many others. For example, the view of money, debt, and banking will be familiar to Sift readers from David Graeber’s Debt: The First 5,000 Years and Warren Mosler’s Seven Deadly Innocent Frauds of Economic Policy.

I’m sure many people will look on this as cranks quoting other cranks, but I don’t. I’m starting to see a unifying view develop.

Virtual consumption. Futurists have to be wary of a technology-will-save-us argument, which is always too easy and is often a mirage. But I think Dietz and O’Neill miss one important way that technology can contribute to a sustainable future: virtualization. We’re already seeing some of it: My book collection is gradually turning into patterns of electrical charges rather than shelves of paper.

Dietz and O’Neill point out (appropriately) that such changes are meaningless if they just make paper cheaper and allow somebody else to consume more of it. But recent sci-fi (starting with Snow Crash and continuing into more recent works like The Quantum Thief or Ready Player One) points to the greater possibilities.

You can think of consumption as serving four purposes: survival, comfort, entertainment, and competition for status. It is easy to imagine “enough” when we talk about survival and comfort, and maybe even entertainment. But the really open-ended consumption happens when we compete for status. I can imagine wanting a boat for entertainment, but the only reason to want a 400-foot yacht is to out-do the guys who can only afford 300-foot yachts. (As far back as the Roman sumptuary laws, the essence of the moral argument to limit consumption is that some people are starving so that others can raise their status.)

Survival and comfort require real-world resources. (You can’t eat pixels.) But if the culture evolved so that we got most of our entertainment inside virtual worlds and competed for status there, then a sustainable economy would be much easier to achieve.


* Space travel is sometimes presented as a far-future solution. While I can imagine a Noah’s-Ark-style spaceship seeding another planet with humans, I can’t imagine interstellar travel ever being so cheap that emigration has a significant impact on Earth’s population. (At least that’s not a future I’m willing to count on.) So Earth’s remaining citizens would still have to come to terms with the planet’s limitations.

Think about the colonization of the New World. Except for a few temporary situations (like the Irish Potato Famine), Europe’s population continued going up, even as it sent more and more people to America. Europe today is more crowded than ever.

** This got me thinking. Back when cable TV was being established, we all took for granted the model of a privately financed network made economically feasible by granting a monopoly. But the New-Deal-era model of the rural electric co-ops also would have worked: government-guaranteed loans to establish consumer-owned co-ops. If we’d done that, every year you’d get to vote on the leadership and policy of your cable system.

Why the Austerity Fraud Matters

When disputes break out among academics, most people don’t care. For good reason: Academic controversies are usually hard to follow, and concern topics that wouldn’t matter to most of us even if we understood them. (I was in an academic dispute once, and my side won. Trust me, you don’t want to hear about it.)

But this week a controversy broke out in economics, and it actually deserves your attention. A paper that has had a major influence on public policy around the world turns out to be wrong. And not just wrong in a subtle way that only geniuses can see, or even wrong in an everybody’s-human way that you look at and say, “Oh yeah, I’ve done that.” This one was wrong in three different ways that make you (or at least me) say, “That can’t be an accident.”

The bogus paper came out in 2010: “Growth in a Time of Debt” by Carmen Reinhart and Ken Rogoff (both from Harvard). The paper that refutes it appeared last Monday: “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff” by Thomas Herndon, Michael Ash, and Robert Pollin (all from the University of Massachusetts).

Before I get into the back-and-forth of it, let’s return to why you should care. It has to do with whether the government should be trying to create jobs or cut spending.

Stimulus vs. austerity. Many countries came out of the Great Recession with a much larger national debt, but persistent unemployment and slow growth. And that led to a debate: The usual thing a government does when it has high unemployment and slow growth is spend money. (People need jobs and the private sector is skittish about expanding, so the government hires people to do things that need doing: building highways, fixing sewers, insulating homes, and so on. Or maybe the government boosts the economy by subsidizing certain kinds of consumption, like the popular cash-for-clunkers program that got a bunch of old gas-guzzling cars off the road.)

But maybe this time the thing to do was to cut spending, because of all that debt. Maybe spending more, and so increasing the national debt, would just make things worse.

The same debate was happening in all countries, and none of them went completely one way or the other. But the poster child for austerity has been the United Kingdom, where it hasn’t worked. Here’s how British economic growth has compared to the projections made by the UK’s Office for Budget Responsibility. Austerity has brought the UK essentially no economic growth for three years.

The US has had its own stimulus/austerity debate, which has kept the Obama administration from spending as much as it wanted (or as much as Paul Krugman wanted, which was even more). But compared to the other major economies, the US has been on the stimulus side of the debate, which is probably why (disappointing as our economy has been these last few years) we’re doing better than most other countries. (This graph is scaled so that all countries are equal when austerity-loving David Cameron became the UK’s prime minister.)

Basically, the US and Germany are the only countries in that group that have seen any net growth since 2008.

The gist of what we’ve seen since 2008 is: Keynes was right. In the long run you probably want to keep your national debt under some kind of control, but not when you have high unemployment and slow growth.

How Reinhart/Rogoff leads to Ryan. Now, obviously, the budget debate we keep having in Washington doesn’t acknowledge this reality at all. Conservatives like Paul Ryan and Rand Paul, who want drastic cuts in government spending (to them, the sequester is just a down payment), somehow get away with claiming to have a “pro-growth” agenda.

How is that possible? Well, partly it’s just dogma. The Gospel According to Ayn Rand states that government is always and eternally bad for the economy — she called for “a complete separation of state and economics” — and no accumulation of facts can outweigh holy writ.

But also, a handful of economists provide academic cover for the “pro-growth” austerity nonsense. And the biggest fig leaf in the bunch is the Reinhart/Rogoff paper. In his 2013 budget proposal, Ryan wrote:

Even if high debt did not cause a crisis, the nation would be in for a long and grinding period of economic decline. A well-known study completed by economists Ken Rogoff and Carmen Reinhart confirms this common sense conclusion. The study found conclusive empirical evidence that gross debt (meaning all debt that a government owes, including debt held in government trust funds) exceeding 90 percent of the economy has a significant negative effect on economic growth.

More precisely, R/R found a “threshold” that gets crossed when a nation’s public debt exceeds 90% of the annual GDP. (The United States currently has a debt-to-GDP ratio around 100%. It was comfortably below the 90% “threshold” until almost exactly the moment the R/R paper appeared.) In other words: All your economic intuition and experience might tell you not to cut spending in a slow-growth environment, but something magic happens when debt crosses 90%. Beyond that point, debt suddenly becomes toxic.

Jared Bernstein comments on the significance:

Those whose goal is severely shrinking the size of government in general and social insurance in particular need hair-on-fire results like this from established experts to keep the fire going, even in the face of statistics that lean strongly the other way

What they did and why it’s wrong. Reinhart and Rogoff looked at 20 industrialized countries year-by-year and divided the country-years into four bins: years when the national debt was 0-30% of GDP, 30-60%, 60-90%, and over 90%. They found significantly lower average economic growth in the over-90% bin. The average annual growth rates for the four bins in the 1946-2009 (post-WW2) period were 4.1%, 2.8%, 2.8%, and negative 0.1%.
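R/R did this arithmetic in a spreadsheet. Just to make the procedure concrete, here’s a minimal sketch of the binning step in Python with invented numbers (the language and the data are mine, not theirs):

```python
# A sketch of the R/R binning step with made-up numbers --
# not the actual dataset (R/R worked in a spreadsheet, not Python).
import pandas as pd

df = pd.DataFrame({
    "country":     ["US", "US", "UK", "NZ"],
    "debt_to_gdp": [45.0, 95.0, 70.0, 120.0],  # percent of GDP (invented)
    "growth":      [3.1, 1.2, 2.5, -7.9],      # annual growth in % (invented)
})

# Label each country-year by its debt/GDP bin ...
df["bin"] = pd.cut(df["debt_to_gdp"],
                   bins=[0, 30, 60, 90, float("inf")],
                   labels=["0-30%", "30-60%", "60-90%", "over 90%"])

# ... then average growth within each bin.
print(df.groupby("bin", observed=True)["growth"].mean())
```

Everything in the controversy is about how that last averaging step was done, and over which rows.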

Now, if you look at those countries and years one-by-one, the case isn’t always impressive. For example, 1946 in the US. We had a lot of debt because we’d just fought World War II, and we had a recession because all the discharged soldiers and laid-off tank-factory workers hadn’t found new jobs yet. So high debt and negative growth were happening at the same time, but not because government debt was killing the economy.

Those are the kinds of one-off situations that you hope cancel out in the averages. And they kinda-sorta do, if you assemble your data honestly and do the math right. Unfortunately, R/R did neither. When Herndon/Ash/Pollin go back and do the analysis right, growth in the over-90% bin jumps from negative 0.1% to positive 2.2%.

So what mistakes did R/R make? Well, one was really stupid: They plugged the wrong row number into a formula on their spreadsheet, so their average skipped a bunch of rows, representing 6 of the 20 countries. (They’ve confessed to that mistake.)

Second, their dataset didn’t really include all the country-years it should have. So, for example, New Zealand only has one year in their average, when it ought to have five. Unfortunately, that makes a huge difference in the country average, because in that one year NZ had -7.9% growth, when the five-year average was +2.6%.

And third, they made the bizarre choice to average by country rather than by country-year. So that one anomalous year in New Zealand ended up constituting 1/14th of the entire average rather than the 1/110th it should have.
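To see how much that weighting choice matters, here’s a toy calculation (the numbers are invented to mimic the New Zealand anomaly; they are not the real data):

```python
# Country-weighting vs. country-year-weighting, with invented numbers.
over_90_bin = {
    "NZ":        [-7.9],                # one anomalous year
    "Country_A": [2.5, 2.7, 2.6, 2.4],  # hypothetical
    "Country_B": [3.0, 2.9],            # hypothetical
}

# R/R's choice: average each country first, then average the countries.
# NZ's single bad year gets a full 1/3 share of the final number.
country_means = [sum(years) / len(years) for years in over_90_bin.values()]
by_country = sum(country_means) / len(country_means)

# The standard choice: pool all the country-years and average once.
# NZ's bad year is just 1 observation out of 7.
all_years = [g for years in over_90_bin.values() for g in years]
by_country_year = sum(all_years) / len(all_years)

print(f"weighted by country:      {by_country:.2f}%")       # -0.80%
print(f"weighted by country-year: {by_country_year:.2f}%")  #  1.17%
```

Same data, opposite sign. That’s essentially the flip Herndon/Ash/Pollin found when they redid the averages.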

Why it’s so bad. The significance of the R/R paper comes entirely from those mistakes.

Yes, an honest and accurate accounting still shows a negative correlation between growth and debt-to-GDP ratios, but everybody would have expected that anyway, because there’s well-known causality in the other direction: recessions cause debt/GDP ratios to rise*. (GDP goes down because that’s the definition of a recession. Debt goes up for two reasons: Revenue drops because there’s less income to tax, and spending rises to pay for more unemployment insurance and food stamps.)

The only significant part of R/R was the threshold, and that was wrong: The something-magic-happens-at-90% was just a spreadsheet typo plus statistical sleight-of-hand.

So the data R/R assembled provides absolutely no reason to have some special fear about the current level of debt in the US. We haven’t just passed through some economic equivalent of the sound barrier. To the extent that debt was bad before, it’s still bad, and to the extent that it didn’t matter before, it still doesn’t matter.

Fraud. I anticipate taking heat for using the word fraud in the title. The Herndon/Ash/Pollin paper doesn’t use it, and to fully justify fraud you’d have to see into the hearts of Reinhart and Rogoff. Responsible academics are slow to use words like fraud, because academics are cautious in general. You’re not supposed to publish something you can’t fully prove, even if your rivals do.

But I’m not an academic any more, so I’m using a preponderance-of-evidence standard, not a beyond-reasonable-doubt standard. Let’s look at the three mistakes.

The spreadsheet error shows an unbelievable level of negligence, but if that were the only mistake I’d be inclined to give R/R some benefit of the doubt. The original mistake was almost certainly honest, but not finding the mistake is the real culpability. They didn’t look the gift horse in the mouth; the mistake gave them the result they wanted, so they didn’t check too hard.

They claim to have filled in the missing data in later research, but they’ve done nothing to point out what a difference it makes. And they defend their weighting scheme — an argument I could buy if they had defended that scheme in the original paper while pointing out the major difference it made in the result. But they didn’t. They were hoping the readers wouldn’t notice.

In their response to H/A/P, Reinhart and Rogoff defend their non-spreadsheet errors “in the strongest possible terms”.

But surely the authors do not mean to insinuate that we manipulated the data to exaggerate our results.

I can’t speak for H/A/P, but I won’t insinuate anything, I’ll say it outright: Yeah, R&R, you manipulated the data to exaggerate your results.

R/R’s response. One proof of the fraud is that they’re still doing it. Their response claims:

We do not, however, believe this regrettable slip [the spreadsheet error] affects in any significant way the central message of the paper or that in our subsequent work.

And that’s just flatly false.

Do Herndon et al. get dramatically different results on the relatively short post war sample they focus on? Not really. They, too, find lower growth associated with periods when debt is over 90 per cent.

And that’s sophistry. The “relatively short post war sample” covers the economies that happen to resemble the United States today. And “lower growth” is not the result the paper is noted for; no one would care if that were the whole message, because that is completely explained by the well-known recession-causes-debt relationship. The 90% threshold is the paper’s claim to fame, and that result has blown up completely.

And finally, while they don’t explicitly claim that they’ve found a debt-causes-slow-growth relationship, they keep using their result as if they had. They do so even in their response:

There is also the question of whether these growth effects can be economically large. Here it is very misleading to think of 1 per cent growth differences without recognizing that the typical high debt episode lasts well over a decade (23 years on average in the full sample.)

It is utterly misleading to speak of a 1 per cent growth differential that lasts 10-25 years as small. If a country grows at 1 per cent below trend for 23 years, output will be roughly 25 per cent below trend at the end of the period, with massive cumulative effects.
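For what it’s worth, the compounding arithmetic inside that quote is fine. Treating “1 per cent below trend” as a one-point gap in the annual growth rate:

\[
1.01^{23} \approx 1.26
\]

so after 23 years trend output is roughly a quarter higher than actual output, which is where their “roughly 25 per cent” comes from. The arithmetic isn’t the issue.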

That point is utterly meaningless if the causality works in the other direction, if the slow growth is causing the debt rather than the other way around. And another re-analysis of the R/R data shows that’s what’s happening. That analysis also was simple to do. As Matt Yglesias comments:

it’s striking that R&R didn’t even check this. I don’t begrudge any academic’s right to rush into publication with an interesting empirical finding based on the assembly of a novel and useful dataset. I don’t even begrudge them the right to keep their dataset private for a little while so they can internalize more of the benefits. But Reinhart and especially Rogoff have spent years now engaged in a high-profile political advocacy campaign grounded in a causal interpretation of their empirical work that both of them knew perfectly well was not in fact supported by their analysis.

Buying apples, selling oranges. And that’s the important point. The biggest reason R/R’s paper has been so badly misused in our political debate is that they have been out there misrepresenting their results. Senator Coburn described their testimony to 40 senators a few months before the debt-ceiling debacle in 2012. After listening to their initial testimony,

Senator Kent Conrad, D-N.D., the chairman of the Senate Budget Committee, then offered his own stern warning to the assembled senators. Turning around in his chair in the middle of the room, he explained to his colleagues that when our high debt burden causes our economy to slow by 1 point of GDP, as Reinhart and Rogoff estimate, that doesn’t slow our [economic growth] by 1 percent, but by 25 to 33 percent, because we are growing at only 3 or 4 percent per year.

Did either professor interrupt to say, “Wait, Senator, we’re not saying the debt causes a slowdown. Our data just shows a correlation that could be explained by slowdowns causing high debt.”? No.

Reinhart echoed Conrad’s point and explained that countries rarely pass the 90 percent debt-to-GDP tipping point precisely because it is dangerous to let that much debt accumulate.

Fraud. Fraud, fraud, fraud.


* A point I often make when numbers appear in the Sift: Correlation is not causation. Correlation just means that two things tend to go together; causation means that one causes the other. A very common fallacy is to display a graph showing that A and B go up (or down) together, and then say that A causes B.

My favorite way to demonstrate the fallacy: Birthdays are good for you; people who have a lot of birthdays tend to live long lives.

I Read the Ryan Budget

Last week, when I talked about ideological bubbles and how to tell if you’re in one, I should have mentioned the best way to stay out of bubbles in the first place: Expose yourself to as many original sources as you can, especially the ones you know you’re going to hate.

With that in mind, I read Paul Ryan’s budget. (More accurately: I read the 91-page document he wrote to advertise his budget. An actual budget would have way more numbers in it.) In telling you about it, I’m going to try to keep my commentary as close to the text as possible, with quotes and page references as appropriate. (I wish I had the time to do an end-to-end annotation, but I’ve got some big deadlines looming.)

General impressions. Before I get into specifics, I want to say a few things about the overall impression the document makes.

As many people have already observed, Ryan’s proposal is not an attempt to reach a workable compromise with the White House or the Democratic majority in the Senate, both of which would have to agree before his plan could become law. Instead, it’s an aspirational document for conservatives: This is what they fantasize doing if and when they get complete control of the government.

There’s nothing wrong with that, but the Ryan Budget needs to be classed with aspirational budgets from the Left, like the People’s Budget put out by the Congressional Progressive Caucus (which also balances the budget in ten years). Both are shots across the bow, not plausible projections of what their backers think they can pass.

So Ryan has written a rallying cry for the troops of the conservative movement, not an attempt to convince or convert non-believers like me. The summary (page 7) says

This is a plan to balance the budget in ten years. It invites President Obama and Senate Democrats to commit to the same common-sense goal.

But there is no spirit-of-invitation in Ryan’s style. Any liberal who reads it will get pissed off, and I believe that’s intentional. Conservatives couldn’t fully enjoy their reading experience without visualizing pissed-off liberals.

Let me detail that: You’ve probably already heard that Ryan wants (once again) to try to repeal the Affordable Care Act (a.k.a. ObamaCare). But after the first mention, he can’t just call it by name. It’s “the President’s onerous health care law” (page 33) or “the President’s misguided health care law” (page 40) and so on, as if the ACA had been imposed on the country by imperial decree and Congress had nothing to say about it — also as if the ACA hadn’t been an issue in the 2012 election that Romney/Ryan lost by nearly five million votes.

Other partisan stuff is just silly. On page 24, President Reagan is given credit for the economic expansion of his own era, and for that of President Clinton’s era as well. Clinton is mentioned exactly once (on page 33, when Ryan re-raises the universally debunked lie from campaign 2012 that Obama wants to rescind the work requirement of Clinton’s welfare reform). The reader would never know that Ryan’s stated goal — a balanced budget — was achieved by Clinton (who raised taxes) while Reagan (who cut taxes) ran up record deficits.

You will also hear echoes of 2009’s Lie of the Year: death panels. The ACA sets up an Independent Payment Advisory Board (IPAB) to make annual recommendations (which Congress can rewrite before they take effect) on keeping Medicare spending within specified limits. The law specifically bans the IPAB from recommending care-rationing, but the heading of Ryan’s section on it (page 40) is “Repeal the health-care rationing board”.

Background assumptions. In the real world, if a program is important enough, the government could conceivably raise taxes or borrow to pay for it. OK, Ryan’s balanced-budget goal won’t let him advocate borrowing. But a fundamental assumption that runs through his whole budget — usually without being stated explicitly — is that taxes cannot be raised for any purpose. Nothing is important enough to raise taxes to pay for.

Also, defense spending is untouchable. “There is no foreseeable ‘peace dividend’ on our horizon.” (page 61)

So if the domestic demands on government are growing — the population is getting older, the infrastructure more decrepit, healthcare more expensive, weather-related disasters more extreme and more frequent, future economic growth more dependent on basic research and an educated workforce, etc. — any money you want to spend to deal with one of those challenges has to be taken from the others.

The idea that over the long term our country could decide that it wants to do more of its consumption publicly — that it wants to take its economic growth in the form of Medicare and public education, say, rather than BMWs — is completely off the table.

Big Picture. The numbers don’t appear until the Appendix (page 80). Atlantic’s Derek Thompson put them into a bar graph.

Medicare and Social Security are usually considered “mandatory spending” (because benefits are defined by law rather than by appropriation), but I believe the additional $962 billion of 10-year savings is mostly Food Stamps, Pell grants, and so on.

So the cuts are almost entirely in healthcare, education, or anti-poverty spending. And while Ryan waves his hand at replacing Obamacare with “patient-centered health-care reforms” (page 33), apparently those reforms require no money from the government.

Meanwhile, rich people get a big bonanza: The top tax rate drops from the current 39.6% to 25%. If you make $10 million a year (some CEOs do), you could save nearly $15 million over the ten years Ryan’s budget covers.
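
The arithmetic behind that figure is easy to check. Here is a minimal back-of-envelope sketch (my simplification, nothing from the budget document itself: it treats the whole $10 million as ordinary income taxed at the top marginal rate, with income flat for ten years):

    # Savings from dropping the top rate from 39.6% to 25%.
    # Hypothetical simplification: the whole $10M is taxed at the top rate.
    income = 10_000_000
    current_top_rate = 0.396
    ryan_top_rate = 0.25

    annual_savings = income * (current_top_rate - ryan_top_rate)
    print(f"Per year:  ${annual_savings:,.0f}")       # $1,460,000
    print(f"Ten years: ${annual_savings * 10:,.0f}")  # $14,600,000, i.e. "nearly $15 million"

In reality the top rate applies only to income above the bracket threshold, so the true savings would run a bit lower, but at $10 million a year the difference is small.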

So what isn’t in the budget document?

  • Any specifics about discretionary spending cuts. The cuts are just numbers on a spreadsheet. All the “tough choices” necessary to achieve those numbers are left to your imagination, so Ryan can deny his intention to cut anything in particular, as Mitt Romney did in his first debate with President Obama.
  • Any specifics about closing tax loopholes. Ryan claims his rich-guys-bonanza 25% tax rate wouldn’t cut federal revenue, because it would be balanced by eliminating tax loopholes. As in the 2012 campaign, Ryan says nothing about what those loopholes might be. Again, he can deny wanting to cut any specific item, like the mortgage interest deduction. But he’s got to raise that revenue somehow, and I seriously doubt it’s all going to come from the super-rich who benefit most from the lower rate.
  • Any plan for Social Security. Page 37 charges: “In Social Security, government’s refusal to deal with demographic realities has endangered the solvency of this critical program.” But rather than “deal with demographic realities” here and now, Ryan only “requires the President and Congress to work together to forge a solution.”

We have always been at war with Eastasia. The background rob-Peter-to-pay-Paul assumption allows Ryan to construct some truly Orwellian statements. This is particularly true in the “Opportunity Extended” section, which is all about shrinking opportunity for poor and working-class young people.

For example, on page 20 Ryan identifies “tuition inflation” as a problem that “plung[es] students and their families into unaffordable levels of debt”. And then he says:

Many economists, including Ohio University’s Richard Vedder*, argue that the structure of the federal government’s aid programs don’t simply chase higher tuition costs, but are in fact a key driver of those costs.

What could that possibly mean? Well, that federal aid is allowing too many people to go to college, creating a high-demand environment in which colleges can raise tuition. So the “solution” is to lower the maximum Pell grant (thereby “saving” the Pell grant program from spending at an “unsustainable” level, since we couldn’t possibly raise taxes to pay for it). Also to “target aid to the truly needy” by making families report more of their income on financial aid forms. Also “reforming” student loans and “re-examining the data made available to students to make certain they are armed with information that will assist them in making their postsecondary decisions”.

Presumably, when the facts of this harsher you’re-on-your-own world are “made available to students”, fewer of them will decide to go to college, thereby saving both their money and the government’s. So don’t worry about student debt — just don’t go to college at all if you’re not rich, and if you do go we’ll “help” you avoid massive debts by refusing to loan you money.

Oh, and we’ll also “encourage innovation” in education through “nontraditional models like online coursework”. Never mind that that’s where the big scams are. Corporations profit from those scams, so that’s not “waste”.

Ditto for job training: Ryan promises to “extend opportunity” by spending less on it.

Ditto for the safety net. Since taxes can’t possibly be raised, every person who is helped by the safety net is taking those dollars away from somebody else who might be helped. So Ryan’s “A Safety Net Strengthened” section is all about spending less on the safety net. Mostly this is accomplished by block-granting programs like Medicaid to give “states more flexibility to tailor programs to their people’s needs.”

So if, say, low-income Texans need to toughen up and stop seeing a doctor at all, Texas can tailor its program that way. That’s what it’s doing already with the “flexibility” the Supreme Court gave it last summer.

Energy. Climate change just isn’t happening. Ryan doesn’t make that claim in so many words, but there’s a big empty spot where climate change would otherwise have to figure in.

He clumps energy together with a grab-bag of other issues in the “Fairness Restored” section. The “unfairness” in this case is the way that the Obama administration favors clean energy over dirty energy. Ryan will “end kickbacks to favored industries” like wind and solar in favor of “reliable, low-cost energy” like coal, oil, and gas. With climate change out of the picture, only corruption can explain Obama’s favoritism. In the Introduction, Ryan says his budget “restores fair play to the marketplace by ending cronyism.”

In current energy policy, fossil fuels and green energy are subsidized in different ways: Green energy gets grants and loans while established-and-profitable fossil energy gets tax breaks. Tax breaks are invisible to Ryan, so he can say on page 50:

on a dollar-per-unit-of-production basis, the level of subsidies received by the wind and solar industries were almost 100 times greater than those for conventional energy

Do it for the kids. So what’s the purpose of all this? A better world for our children. “By living beyond our means, we’re stealing from the next generation.” (page 5)

Of course my baby-boom generation knows how that works, because all that debt America ran up during World War II was “stolen” from us, right? I don’t know how I failed to notice.

In the real America, the big deficits of World War II kicked off 40 years of prosperity, during which the country achieved a level of equality that it hasn’t equalled before or since. So no, deficits are not “stolen” from the future. My generation did not build tanks and landing craft and put them in time machines to send back to D-Day.

But in order to save our children from the horrible maybe-sorta-problem of the national debt, we need to under-educate them; not do basic research that might create the next computer industry or Internet; leave them crumbling roads, bridges, and electrical grids; not care for them when they get sick; move in with them when we get old; and leave them with a torched planet, where Iowa is a desert and Miami is underwater.

I’m sure they’ll thank us for our foresight.


* As best I can tell, although Ryan identifies only their university affiliations, every economist Ryan mentions by name is inside the conservative bubble. Richard Vedder is with the American Enterprise Institute and John Taylor with the Hoover Institution.

Nobody Likes the New Capitalist Man

A number of insightful recent books and articles point out various pieces of the following picture:

  • People are fascinating bundles of benevolence and selfishness.
  • A well-designed market can channel people’s selfish tendencies into actions which, in the aggregate, achieve beneficial social ends.
  • Our economic theory models markets, not people, so only human selfishness is relevant. Homo economicus is entirely selfish.
  • Because the conditions that nurture benevolence are invisible to market theory, an “optimized” market system may inadvertently poison benevolence. In other words, market theory may create the perfectly selfish people it postulates.
  • For-profit corporations are artificial entities designed for the market. Consequently, they are defined to be the perfectly selfish, totally profit-driven players market theory postulates.
  • “Good management” means training each employee to internalize the values of the corporation.
  • Top managers are valued for their ability to “make the tough decisions”. In other words, they eliminate all human values other than profit from their decision process.
  • Increasingly, all the rewards of the corporate system flow to those at the top.

Put all that together, and you see that we have created a system that trains us to be bastards, and rewards us according to how well we have managed to stamp out our benevolence.

When you put it that way, it sounds kind of crazy, doesn’t it?

Let’s start with the upside of this vision: If our economic system is making us into worse people than we would otherwise be, then we could be better people and live in a nicer world if we just stopped making ourselves worse. This is not the utopian vision of the “new Soviet man”, a society-centered being who will spontaneously appear (for the first time in human history) after the revolution. It’s the far more modest observation that human beings have benevolent as well as selfish tendencies, and that creative system-builders could figure out ways to make use of human benevolence and nurture it.

That’s the uplifting message of The Penguin and the Leviathan by Yochai Benkler. Benkler says that through most of history, big cooperative projects only happened through “the Leviathan” — the state, exercising top-down power to make people play their parts. (Picture slaves dragging blocks to build the pyramids.) With capitalism comes the alternative of “the Invisible Hand” — the market, in which many individual decisions can add up to something big. (Think about how we wound up with lots of personal computers rather than the “big iron” IBM originally offered.)

Most of our political debate is about the Leviathan vs. the Invisible Hand: Will we get things done through government or by manipulating the incentives of the market?

(One hybrid observation doesn’t get enough attention: A corporation or cartel can dominate a market to the point that it essentially becomes a government, usually an unelected and unaccountable one.)

Anarchists have long claimed that another choice is possible: voluntary cooperation. But until recently, it was hard to find examples on scales larger than a barn-raising.

Then came the open-source movement, which Benkler identifies with the Penguin, the logo of the Linux operating system. The Internet grew up together with a host of open-source projects created and maintained by volunteers: Linux, Apache, Mozilla, and eventually Wikipedia. Each in its own way defeated corporate-sponsored for-profit competitors. (Some, like Linux, eventually drew in corporate support, but on their own terms. IBM pays employees to contribute to Linux, but IBM still can’t own Linux.)

Benkler doesn’t claim that we could live in a complete open-source utopia; only that the principles that make open-source projects work have unexplored potential. Many people in our society are starved for opportunities to express their inventiveness, skill, and creativity in ways that do not pay them money, but win them the admiration of a peer group that shares their values. Similar motivations could complement monetary incentives more broadly.

He reviews much of the recent research into cooperation, reaching this conclusion:

In hundreds of studies, conducted in numerous disciplines across dozens of societies, a basic pattern emerges. In any given experiment, a large minority of people (about 30 percent) behave as though they really are selfish, as the mainstream commonly assumes. But here is the rub: Fully half of all people systematically, significantly and predictably behave cooperatively. … In practically no human society examined under controlled conditions have the majority of people consistently behaved selfishly.

The bulk of the book explores non-internet examples of how these principles play out in Japanese management, in community policing, in politics, and elsewhere. He concludes by offering principles for “growing a penguin” — designing a system that nurtures cooperation rather than incentivizing selfishness.

One of Benkler’s political examples — the get-out-the-vote strategy of the Obama campaign — is examined in more detail in The Victory Lab by Sasha Issenberg. It turns out that who people vote for may be determined by self-interest, but whether they vote isn’t. Nobody really believes their single vote will decide the election, so purely selfish people will stay home and pursue their other interests. The most effective method of motivating marginal voters, it turns out, is to appeal positively to their civic pride, while subtly reminding them that their non-voting will be a matter of public record. In laboratory experiments, this pride/guilt combination is more effective than paying people to vote.

Staying positive for a bit longer, Jane McGonigal’s Reality is Broken, which I have reviewed before, finds that online gamers hunger for the chance to be a respected member of a questing community. She reports that many gamers feel their online persona is a better person than they are in their offline jobs and relationships. Like Benkler, she examines ways that the design principles of games could be used to encourage cooperative and altruistic behavior in real life.

Now let’s look at the negative side, starting with a book that walks the line between seriousness and tongue-in-cheek humor: Assholes: A Theory by Aaron James. A sociopath is someone who lacks any moral core, but uses other people’s moral scruples to gain an advantage over them. An asshole, according to James, is different: He has a moral sense, but his moral vision comes with an unassailable sense of entitlement. So, for example, he understands perfectly why other people should wait their turn in a line, and is honestly incensed when they don’t. But he also feels — not occasionally, but constantly — that his special situation or status entitles him to cut to the front.

Like Benkler, James recognizes that most people aren’t assholes. (If they were, there would be no lines. We’d all just shove our way to the front.) But late in the book he considers whether a society can reach a tipping point, where there are so many assholes that the rest of us are driven to behave like assholes just to avoid constant exploitation.

From there he considers how capitalism can devolve into asshole capitalism. Suppose some social change causes the system to send

a powerful entitlement message, for instance, that having ever more is one’s moral right, even when it comes at a cost to others. As asshole thinking and culture spread and take hold, the asshole-dampening systems that used to keep assholery in check become overwhelmed. Parents start preparing their kids for an asshole economy, the law is increasingly compromised, the political system is increasingly captured, and so on. As some switch sides while others withdraw, cooperative people find it more difficult to uphold the practices and institutions needed for capitalism to do right by its own values. … Society becomes awash with people who are defensively unwilling to accept the burdens of cooperative life, out of a righteous sense that they deserve ever more.

James applies this model to various countries and concludes: “Japan is fine, Italy already qualifies as an asshole capitalist system, and the United States is in trouble.” (One symptom of Italy’s trouble: Even Silvio Berlusconi’s supporters understood that he was an asshole. Nobody cared.)

And that brings us to Gus DiZerega’s blog post Capitalism vs. the Market. In some ways this belongs to the same genre as my own Why I Am Not a Libertarian — insights that begin with a critique of a simplistically appealing libertarian worldview. DiZerega views the fundamental libertarian error as upholding corporate capitalism because markets are good. DiZerega agrees that markets are good, but corporate capitalism is something else entirely.

Markets, he says, are ways that producers and consumers send each other signals about supply and demand. The market doesn’t tell you what you should do, just what it will cost you. For example, the slave market won’t tell you whether or not you should free your slave, just how much money you’ll be passing up if you do.

But in corporate capitalism the market usurps the decisions once made by humans.

To succeed in managing a capitalist institution a person must always try and buy for the lowest price and sell for the highest before any other value enters in.  Any corporate CEO allowing other values to trump this principle will see his or her decisions reflected in lower share prices.  If these prices are much affected the corporation risks the likelihood of being taken over in an unfriendly acquisition, its management ousted, and financial values once again elevated above all others. In other words, as a system of economic organization capitalism defends itself against richer human values by penalizing and expelling people who to some degree put them ahead of profit when making economic decisions.

In theory corporations are owned by people. But in practice you cannot remove your capital from a corporation. All you can do is sell your shares to someone else. By selling, you disassociate yourself from practices you may consider immoral, but you do nothing to end them. Think of slavery again: You can free your slave, even if it lowers your net worth. But if instead you own shares in Rent-a-Slave, Inc., all you can do is give or sell those shares to someone else. No slaves are freed when you do.

So if I don’t want to profit by addicting people to drugs that kill them, I can sell my shares in tobacco companies. But the tobacco companies themselves roll on. To the extent that they are profitable, the new owner of my shares will make money and gain power in society. Even individually, power accrues to people who have no values beyond profit.

The libertarian ideal is of people who are free to live by their own values, trading with each other without coercion.

Capitalism is different. It is the gradual overwhelming and destruction of all values that are not instrumental. … Once capitalism exists non-instrumental values are actively selected against, and receive little opportunity for expression.  Human beings become profit centers for corporations, and nothing more. … Capitalism cannot distinguish love from prostitution.

I wish DiZerega had said “corporate capitalism” rather than just capitalism, but otherwise I agree. As I put forward two years ago in Corporations Are Sociopaths, we have created entities that embody all of our worst traits. James and DiZerega are pointing out what then happens to us and our society when those created entities are allowed to dominate.

The Trillion-Dollar Coin Hits the Big Time

The notion that President Obama could avoid the debt ceiling by minting a trillion-dollar platinum coin and depositing it in the government’s account at the Federal Reserve has been around for a while now. (I first noticed it in July, 2011.) It sounds ridiculous because it is. (Even people who favor the idea understand that.) It’s a wacky solution that underlines just how wacky the whole debt-ceiling problem is in the first place.

Think about the situation President Obama will find himself in (by about mid-February) if the debt ceiling isn’t raised: Laws passed by Congress tell the President what taxes he can collect, what money he must spend, and that (even though these numbers don’t balance) he can’t borrow. Meanwhile, the Constitution tells him that his first duty is to “faithfully execute the laws”.
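
Plug in some illustrative round numbers (mine, roughly 2013-scale, not official projections) and the bind is plain:

    # Hypothetical round numbers, not CBO figures.
    required_revenue  = 2.7e12  # taxes the law tells him to collect
    required_spending = 3.5e12  # outlays the law tells him to make
    new_borrowing     = 0.0     # debt ceiling: no net new borrowing allowed

    shortfall = required_spending - required_revenue
    if shortfall > new_borrowing:
        print(f"${shortfall / 1e12:.1f} trillion of legally required spending"
              " with no legal way to pay for it")

Whichever constraint he violates, he is breaking a law.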

What’s he supposed to do? Several people, including Matt Yglesias, claim that the Congressional Budget and Impoundment Control Act of 1974* leaves the administration with no legal choices other than something off-the-wall like a trillion-dollar coin.

During the 2011 debt-ceiling crisis, the Very Serious Persons of the punditocracy did not stoop to comment on the trillion-dollar coin. Instead, they just refused to believe that our politics had gotten that dysfunctional. Congress might appear to be steaming headlong towards welching on all our nation’s commitments, but at the last minute wisdom would prevail. And lo: Congress temporized, giving a Super Committee of the Wise time to design an austerity plan.

Well, that worked out just dandy, didn’t it? The Super Committee deadlocked in the same place Obama and Boehner had: Republicans would not raise rich people’s taxes by a single dime, and Democrats refused to thrust all the sacrifice onto the old, the sick, and the poor. That deadlock set up the fiscal-cliff conflict that Congress again avoided at the last minute, but didn’t resolve. Now we’re looking at a second debt-ceiling showdown.

I think that sequence of events has been an eye-opener for the VSPs: Seriously? You want to do that again? [Yes, they do.]

Suddenly, the trillion-dollar coin doesn’t look so crazy. Well, it is still crazy. But picking a path into the fiscal future is starting to feel like picking a Bull Goose Loony at the asylum. Tom the Dancing Bug provides the proper level of seriousness.

So this week the trillion-dollar coin suddenly went from a fringy absurdity to a policy option that every VSP needs to have an opinion on. The WaPo asked financial types how the markets would react. Wednesday, NBC’s Chuck Todd asked about it at a White House press briefing, and Jay Carney dodged. “I would refer you to the Treasury.” Saturday, the Treasury issued an official denial.

Neither the Treasury Department nor the Federal Reserve believes that the law can or should be used to facilitate the production of platinum coins for the purpose of avoiding an increase in the debt limit.

But a lot of other VSPs regard it as a viable option. Paul Krugman was one of the few to comment during the 2011 debt-ceiling crisis: “Outrageous behavior demands extraordinary responses.” He came back to it this week, characterizing Obama’s options as:

one [the coin] that’s silly but benign, the other [default] that’s equally silly but both vile and disastrous. The decision should be obvious.

Thursday he added: “we need a strategy to deal with the crazies if they really do prove irredeemably crazy, which seems all too possible.”

Former CBO director Donald Marron more-or-less agrees: The coin option “lacks dignity”, but “might be better than the alternatives if we reach the brink of default”. Former Director of the Mint Philip Diehl says minting the coin would work and have no obvious bad effects on the economy. As a co-author of the law it takes advantage of, he writes:

Yes, this is an unintended consequence of the platinum coin bill, but how many other pieces of legislation have had unintended consequences? Most, I’d guess.

And Atlantic’s Matthew O’Brien adds:

If it’s a choice between defaulting on our obligations, and minting a trillion-dollar coin, I say mint the coin. In an ideal world, Obama would end the platinum coin loophole in return for the House GOP forever ending the debt ceiling, as Josh Barro proposed, but I’ll settle for anything that involves us paying our bills as we promised.

So far, most conservatives still refuse to take this idea seriously. But they want the rest of us to take their don’t-raise-the-debt-ceiling threat seriously, and threaten impeachment if Obama somehow circumvents it.

Continuing to stake their claim as the Party of Stupid, Republicans at the NRCC tweeted an image** of a coin made out of a trillion dollars’ worth of platinum — as if that’s how coinage works. And the Network of Stupid made the same mistake even after the NRCC had been widely lampooned.

But liberals have an objection also, which Ezra Klein expressed like this:

The platinum coin is an attempt to delay a reckoning that we unfortunately need to have. It takes a debate that will properly focus on the GOP’s reckless threat to force the United States into default and refocuses it on a seemingly absurd power grab by the executive branch.

The right way for this crisis to end, Klein believes, is for the remaining grown-ups in the Republican Party (i.e., the business community) to take back control in order to save the day. That will start a civil war inside the party, so they will only do it if they have no choice; if they think Obama can still pull a day-saving gimmick out of his hat — especially one that could make him vulnerable politically — they won’t.

That’s why wannabe Republican grown-up Philip Klein (no relation) says minting the coin “would be tossing a life preserver to Republicans”.

Obama apparently agrees. That’s why he’s steadfastly refusing to take the burden off Congress by embracing any executive-branch gimmicks. He thinks Congress should pass a clean debt-ceiling bill. If House Republicans want to tie the ceiling increase to unpopular spending cuts, they can spell out what those cuts are. He isn’t going to give them any political cover.

[I’ve explained the politics of this many times: The American people have only very hazy notions of how the government spends money. So “spending” in general is unpopular, but the particular things the government actually spends on — Medicare, Social Security, defense — are very popular. Republicans want to take advantage of this by opposing “spending” but getting Obama to specify which programs to cut.]

Here’s how I put all that together: The coin would be a last resort, and while Obama should hold it in mind to buck up his resolve, the administration is right to deny that they are open to it — until the public understands that we are in last-resort territory and clamors for any kind of solution.

“Last resort” means: The Republicans have blocked a clean bill raising the debt ceiling. The Treasury has run out of books it can juggle to keep paying the bills. The government has shut down all but the most essential services, furloughed its workers, and the public has felt the first pinches: Retirees find that there is no one to process their Social Security applications. Income tax refunds are delayed indefinitely. Defense contractors are filing lawsuits to get paid. And there’s a big interest payment due on the national debt that there may not be money to cover***. The stock market is crashing. Wall Street is begging its bought-and-paid-for congressmen to do something. But still the House majority refuses to raise the debt limit.

Then — and only then — does Obama go on TV, explain the coin loophole to the public, say he has reconsidered his decision not to use it, and promise to trade away that ridiculous power forever if Congress also eliminates the ridiculous debt ceiling.

If that scenario plays out, America will be a laughing stock to the rest of the world. But we will have taken a pratfall, not tumbled into an abyss.


*After President Nixon “impounded” money Congress appropriated to buy stuff he didn’t like, Congress passed a law demanding that future presidents spend whatever Congress appropriates.

**Their image contains a false frame I can’t let pass: It’s not “Obama’s spending”, it’s the spending of the United States of America, duly authorized and appropriated according to the Constitution.

***As Josh Barro points out: It isn’t just that incoming revenue covers only 60% of expenditures over the course of a year. Both revenue and expenses are “lumpy”.

It would be impossible to give certainty to people and entities owed money by the federal government about when and whether they would be paid; they would have to wait and see how much money the government could come up with on any given day.
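
A minimal sketch of what “lumpy” means, with made-up numbers: over the whole window the receipts below cover exactly 60% of the bills, yet on most days the till comes up short, and which creditors get paid depends entirely on timing.

    # Made-up daily cash flows: receipts arrive in lumps (think quarterly
    # tax payments) while bills come due every day. Totals: 90 in, 150 out,
    # so revenue covers exactly 60% of spending over the window.
    receipts = [0, 0, 45, 0, 0, 45, 0, 0, 0, 0]
    bills    = [15] * 10

    balance = 0
    for day, (r, b) in enumerate(zip(receipts, bills), start=1):
        balance += r - b
        status = "bills paid" if balance >= 0 else "CAN'T PAY"
        print(f"Day {day:2d}: balance {balance:4d} -> {status}")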