Tag Archives: economics

Three Misunderstood Things 7-24-2017

This week: census, environmental regulations, coal jobs


I. The census

What’s misunderstood about it: How can counting people be a partisan issue?

What more people should know: A lot rides on the census. The Census Bureau knows it gets the answers wrong, but Republicans have a partisan interest in not letting it do better. In 2020, it’s being set up to fail.

*

When the Founders wrote the Constitution, they knew the country was changing fast. New people were pouring into America — some coming by choice and others by force. If Congress was going to represent these people into the distant future, it would have to change as the country changed. So somebody would have to keep track of how the country was changing. That’s why Article I, Section 2 says:

The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct.

Congress has implemented that clause by setting up the Census Bureau, which tries to count everyone in America in each year that ends in a zero. You can look at this as a rolling peaceful revolution: Via the census, states like Virginia and Massachusetts have gradually surrendered their founding-era power to new states like California and Texas.

No doubt you learned in grade school that counting is an objective process that produces a correct answer — the same one for everybody who knows how to count. But in practice, when a bunch of people count to 325 million, agreement starts to break down. Now imagine that you’re counting a field full of 325 million cats, most running around and jumping over each other, and a few actively hiding from you. How do you come up with an answer you have faith in?

That’s the Census Bureau’s fundamental problem: Americans won’t stand still long enough to be counted, and some are actively suspicious of anybody from the government who comes around asking questions. Inevitably, then, not everybody gets counted, and some people get counted more than once. This is not a secret; the Census Bureau admits that it gets the wrong answer.

That might not be so bad if the errors were random, but they’re not. Basically, the more stable your life is, the more likely you are to be counted correctly. If, for example, you’re still living in the same house with the same people that a census worker counted ten years ago, they’re going to count you again. But if you’re sleeping on your friend’s couch for a few weeks while you’re waiting for a job to turn up, and thinking about moving back in with Mom if you can’t find one, then you might get missed.

Stability isn’t a randomly distributed quality. The LA Times spells it out:

The last census was considered successful — that is, the 2010 results were considered to be within an acceptable margin of error. But by the Census Bureau’s own estimates, it omitted 2.1% of African Americans, 1.5% of Latinos and nearly 5% of reservation-dwelling American Indians, while non-Latino whites were overcounted by almost 1%. The census missed about 7% of African American and Latino children 4 or younger, a rate twice as high as the overall average for young children.

But that raises an epistemological question: How do you know your count is wrong if you don’t have a correct count to compare it to? And if you have that correct count, why not just use it?

The answer to the first question is statistics. Imagine, for example, that you’re trying to count all the species that live in your back yard. You go out one day and count 50. Then you go out again, searching longer with a bigger magnifying glass, and find 10 more. Then the next couple of times you don’t find anything new. But then you find two. Are you confident that’s all of them now? What’s your best guess about how many are really out there?

Now extend that to every yard in the neighborhood. Imagine that after each household does its own count, you all converge on one yard for a more intensive search than you’d be willing to do on every yard. That search finds even more new species. Now how many do you think you missed in the other yards?

Statisticians have thought long and hard about questions like that, and have a variety of well-tested ways to estimate the number of things that haven’t been found yet. If you apply those techniques to the census, you get more accurate estimates of the total.
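One standard tool of this kind, borrowed from ecology, is the Chao1 richness estimator. (This is my illustration; I'm not claiming the Census Bureau uses this particular formula.) It uses the number of species seen exactly once and exactly twice to estimate how many were never seen at all:

```python
def chao1(counts):
    """Chao1 estimate of total richness from per-species sighting counts."""
    s_obs = len(counts)                    # species actually observed
    f1 = sum(1 for c in counts if c == 1)  # species seen exactly once
    f2 = sum(1 for c in counts if c == 2)  # species seen exactly twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2   # bias-corrected variant
    return s_obs + f1 * f1 / (2 * f2)

# A backyard survey: 60 species found, 10 of them seen only once, 5 twice.
sightings = [1] * 10 + [2] * 5 + [5] * 45
print(chao1(sightings))  # 70.0 -- estimates about 10 species still unseen
```

The intuition matches the backyard story: lots of species seen only once suggests many more are hiding, while a survey with no singletons has probably found nearly everything.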

So why not just use those estimates? Two reasons:

  • It sounds bad: Ivory-tower eggheads are using a bunch of mumbo-jumbo Real Americans can’t understand to invent a bunch of blacks and Hispanics that nobody has ever seen.
  • Republicans have a partisan interest in keeping the count the way it is.

The Census determines two very important things: how many representatives (and electoral votes) each state gets, and how hundreds of billions of dollars in federal money for programs like Medicaid and highway-building get distributed among the states. The miscount gives more power and money to mostly white (and Republican) states like Wyoming and Kansas, and less to a majority non-white (and Democratic) state like California. Within a state, Republican gerrymandering works by crowding Democratic-leaning urban minorities into a few districts, leaving a bunch of safely Republican rural and suburban districts. That minority-packing is even easier to do if a chunk of those people were never counted to begin with.

The 2020 census is already headed for trouble. The Census Bureau is being underfunded, taking no account of the fact that it has more people to count than last time. Plans to modernize its technology went badly. And it is currently leaderless: The bureau chief resigned at the end of June, and Trump has nominated no one to replace him.

So we’re set up for an even bigger undercount of minorities in 2020. And that’s got to make Paul Ryan happy.

II. Environmental regulations

What’s misunderstood about it: Many people believe that a clean environment is a costly luxury.

What more people should understand: Externalities. That’s how well-designed environmental regulations can save more money than they cost.

*

Nobody should come out of Econ 101 without understanding externalities — real economic costs that the market doesn’t see because they aren’t borne by either the buyer or the seller.

Pollution is the classic example: Suppose I run a paper mill, and I use large quantities of chlorine to make my paper nice and white. At the end of the process I dump the chlorine into my local river, because that’s the cheapest way for me to get rid of it. Because I use such an inexpensive (for me) disposal process, I can keep my prices low. That makes me happy and my customers happy, so the market is happy too. Any of my competitors who doesn’t dump his chlorine in the river is going to be at a disadvantage.

The problems in this process only accrue to people who live downstream, especially fishermen and anybody who wants to swim or eat fish. They suffer real economic losses — losses that are probably much bigger than what I save. But since their loss is invisible to the paper market, nothing will change without some outside-the-market action — like a government regulation, a court order, or a mob of fishermen coming to burn down my mill.

Now suppose the government tells me I have to stop dumping chlorine. I have to find either some environmentally friendly paper-whitening technique or a way to treat my chlorine-tainted wastewater until it’s safe to put back into the river. Either solution will cost me money, and I will have no trouble calculating exactly how much. So you can bet there will be an article in my local newspaper (which now has to pay more for the newsprint it buys from me) about how many millions of dollars these new regulations cost. The corresponding gains by fishermen, riverfront resort owners whose properties no longer stink, and downstream towns that don’t have to get the chlorine out of their drinking water — that’s all much more diffuse and hard to quantify. So the newspaper won’t have any precise number to weigh my cost against. Chances are its readers will see the issue as money vs. quality of life. They won’t realize that the regulations also make sense in purely economic terms.
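The arithmetic behind "the regulations also make sense in purely economic terms" can be made concrete with a toy calculation. All of these numbers are invented for illustration; nothing here is real data:

```python
# Hypothetical annual figures for the paper-mill example (invented numbers).
mill_savings      = 1_000_000  # what dumping chlorine saves the mill
downstream_losses = 3_000_000  # fishing, tourism, and water-treatment losses
cleanup_cost      = 1_200_000  # what complying with the regulation costs the mill

# Without regulation, society as a whole loses more than the mill saves:
market_blind_spot = downstream_losses - mill_savings

# With regulation, society nets the avoided losses minus the compliance cost:
net_gain = downstream_losses - cleanup_cost

print(market_blind_spot, net_gain)  # 2000000 1800000
```

The newspaper story will quote the visible $1.2 million compliance cost; the diffuse $3 million in avoided losses never appears as a single number anywhere, even though it is what makes the regulation a net economic win.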

That’s an abstract and somewhat dated example, but similar issues — and similar news stories — appear all the time. The costs of new regulations are borne by specific industries who can calculate them exactly, while the benefits — though very real — are more diffuse, and may accrue to people who don’t even realize they’re benefiting. (Companies are very aware of what they’ll have to spend to take carcinogens out of their products, but nobody ever knows about the cancers they don’t get.) But that doesn’t mean that the benefits aren’t bigger than the costs, even in dollar terms.

The best example from my lifetime is getting the lead out of gasoline. If you were alive at the time, you probably remember that the new unleaded gasoline cost a few cents more per gallon. Spread over the whole economy, that amounted to billions and billions. What we got out of that, though, was far more than just the vague satisfaction of breathing cleaner air. Without so much lead in their bloodstreams, our children are smarter, less violent, and less impulsive. The gains — even in purely material terms — have been overwhelmingly positive.

III. Coal jobs.

What’s misunderstood about them: What happened to these jobs? Environmentalists are often blamed for destroying them.

What more people should know: No doubt environmentalists would kill the coal industry if they could. But the real destroyers of coal jobs are automation and competition from other fuels.

*

Coal miners are the heroes of one of the classic success stories of the 20th century. Mining was originally a job for the desperate and expendable, but miners were among the first American workers to see the benefits of unionization. Year after year, coal mining became safer [1], less debilitating, and better paying, until by the 1960s a miner no longer “owed his soul to the company store”, but could be the breadwinner of a middle-class family, owning a home, driving a nice car or truck, and even sending his children to college. Sons and daughters of miners could become doctors, lawyers, or business executives. Or if they wanted to follow their fathers into the mines, that promised to be a good life too.

However, the total number of coal-mining jobs in the United States peaked in 1923.

Was that because Americans stopped using coal? Not at all. Coal production kept going up for the next 85 years.

The difference was automation. Mines employed three-quarters of a million men in the pick-and-shovel days, but better tools allow 21st-century mines to produce more coal with far fewer workers.

If you take a closer look at that employment graph, you’ll notice a hump in the 1970s, when coal employment staged a brief comeback. That corresponded to the Arab Oil Embargo of 1973 and the increased oil prices of the OPEC era. For decades after that, coal was the cheaper, more reliable energy source. Americans who dreamed of energy independence dreamed of coal. In a 1980 presidential debate, candidate Ronald Reagan said:

This nation has been portrayed for too long a time to the people as being energy-poor, when it is energy-rich. The coal that the President [Carter] mentioned — yes, we have it, and yet 1/8th of our total coal resources is not being utilized at all right now. The mines are closed down. There are 22,000 miners out of work. Most of this is due to regulation.

However, all that changed with the fracking boom. Depending on market fluctuations, natural gas can be the cheaper fuel. Meanwhile, the price-per-watt of renewable energy is falling fast, and is now competitive with coal for some applications. So if a utility started building a new coal-fueled plant now, by the time it came on line a renewable source might be more economical — even without considering possible carbon taxes or environmental regulations.

The dirtiness of coal is a huge externality (see misunderstanding II, above), so regulations disadvantaging it make good economic sense. Looking at the full cost to society, coal is the most expensive fuel we have, and should be phased out as soon as possible.

Statements like that make good fodder for politicians (like Trump or Reagan) who want to scapegoat environmental regulations for killing the coal industry. However, dirty coal is like the obnoxious murder victim in an Agatha Christie novel: Environmentalists are only one of the many who wanted it dead, and other suspects actually killed it.


[1] The number of coal-mining deaths peaked at 3,242 in 1907. In 2016 that number was down to 8. As a comment below notes, though, that doesn’t count deaths from black lung disease, which are on the rise again.

Three Misunderstood Things

This week: the anti-gay baker, why the Senate can’t move on, and whether raising the minimum wage kills jobs.


I. The Masterpiece Cakeshop case (which the Supreme Court will hear in the fall).

What’s misunderstood about it: People think it has free-speech implications.

What more people should know: The baker objected to the whole idea of making a wedding cake for two men, and cut off the conversation before the design of the cake was ever discussed. That makes it a discrimination case, not a freedom-of-speech case.

*

Defenders of Masterpiece Cakeshop owner Jack Phillips frequently portray him as a martyr not just to so-called “traditional marriage”, but to the freedom of tradespeople not to say things they object to. For example, one conservative Christian tried to demonstrate a double standard like this:

Marjorie Silva, owner of Azucar Bakery in Denver, said she told the man, Bill Jack of the Denver suburb of Castle Rock, that she wouldn’t fill his order last March for two cakes in the shape of the Bible, to be decorated with phrases like “God hates gays” and an image of two men holding hands with an “X” on top.

Is this cake gay or straight?

But the Colorado Civil Rights Commission ruled against Jack, because the two cases are very different: Silva objected to the message Jack wanted on the cake, not to anything about Jack himself or the situation in which the cake would be served. If the government had demanded that Silva make that cake, it would have been an example of forced speech, which there is already a long legal history against.

Do conservatives also have a right to refuse forced speech? Yes. A Kentucky court recently ruled in favor of a print-shop that refused to make t-shirts for a gay-pride festival.

So liberals must have howled in rage, right? Not me, and not philosopher John Corvino, who defended the Kentucky decision on the liberal news site Slate:

the print shop owners are not merely being asked to provide something that they normally sell (T-shirts; cakes), but also to write a message that they reject. We should defend their right to refuse on free-speech grounds, even while we support anti-discrimination laws as applied to cases like Masterpiece Cakeshop. … Free speech includes the freedom to express wrong and even morally repugnant beliefs; it also includes the freedom for the rest of us not to assist with such expression.

The reason the baker has lost at every stage so far — the administrative court and state appeals court ruled against him, and the Colorado Supreme Court refused to hear his appeal, letting the lower court ruling stand — is that he wasn’t objecting to putting some particular message or symbol on the cake, like a marriage-equality slogan or a rainbow flag. For all he knew when he refused, the men might have wanted a cake identical to one he had already made for some opposite-sex couple. In short, he objected to them, not to the cake they wanted.

Corvino explains:

One might object that Masterpiece Cakeshop is similar: “Same-sex wedding cakes” are simply not something they sell. But wedding cakes are not differentiated that way; a “gay wedding cake” is not a thing. Same-sex wedding cakes are generally chosen from the same catalogs as “straight” wedding cakes, with the same options for designs, frosting, fillings and so forth. It might be different if Masterpiece had said “We won’t provide a cake with two brides or two grooms on top; we don’t sell those to anyone.” But what they said, in fact, was that they wouldn’t sell any cakes for same-sex weddings. That’s sexual orientation discrimination.

II. Mitch McConnell’s agenda.

What’s misunderstood about it: If the Senate is stuck on its ObamaCare replacement, why can’t it move on to the next items on the Republican agenda: tax reform and the budget?

What more people should know: McConnell is trying to exploit a loophole in Senate rules. As soon as a new budget resolution passes, his ability to pass both TrumpCare and tax reform goes away — unless he changes the proposals to get Democratic votes.

*

During the Obama years, we often heard that “it takes 60 votes to get anything done in the Senate”, as if filibusters that can only be broken with 60-vote cloture motions were in the Constitution somewhere, and the minority party had always filibustered everything. (That’s why even the weakest gun-control bills failed, despite 54-46 votes in their favor.) But the Senate recognized a long time ago that budgets have to get passed somehow, and so the Congressional Budget Act of 1974 established an arcane process called “reconciliation” that circumvents the filibuster in very limited circumstances.

That’s how the Senate’s 52 Republicans can hope to pass bills without talking to the Democrats at all. But there’s a problem: Reconciliation is a once-a-year silver bullet. Fox Business explains:

Reconciliation allows Congress to consider just three items per fiscal year, whether they pertain to one bill or multiple. Those items are spending, revenue and debt limit. Since the GOP also wants to pass its tax reform agenda using reconciliation, it cannot statutorily do that under this budget blueprint because the two policy measures overlap.

And NPR elaborates:

The budget resolution for the current fiscal year dictates that any reconciliation measure must reduce the deficit, which the GOP’s Obamacare repeal was designed to do. Republicans then could draft a new budget resolution for the upcoming fiscal year with easier deficit targets, allowing for more aggressive tax cuts.

Under the most commonly accepted interpretation of the reconciliation rules, as soon as Congress passes a budget resolution for Fiscal Year 2018 (which begins this October), the window for passing TrumpCare under the FY 2017 resolution closes. So the only way to get them both done before facing another election campaign is to do them in the right order: first TrumpCare, then a new budget resolution, then tax reform.

Otherwise, McConnell’s options become less appealing: He can get rid of the filibuster completely, which several Republican senators don’t support. He can scrap either TrumpCare or tax reform for the foreseeable future. Or he can start envisioning the kinds of proposals that might get eight Democratic votes, plus a few to make up for Republican defections.

III. The minimum wage.

What’s misunderstood about it: Both supporters and critics of a much-higher minimum wage think they know what effect it will have on jobs.

What more people should understand: The effect of a minimum-wage increase on jobs is an empirical issue, not something you can deduce from first principles. And the data we have only covers small increases.

*

There is a certain kind of conservative who thinks he learned everything he needs to know about this issue in Econ 101: Every commodity, including unskilled labor, has a demand curve; if you raise its price, demand for it falls.

The right response to that analysis is maybe. Imagine that you own a shop with one machine, run by your sole employee. The machine produces some high-profit item. To make things simple, let’s ignore counterfeiting laws and imagine that the machine prints money. Cheap paper and ink go in, $100 bills come out.

Obviously, you could afford to pay your employee a lot more than the $7.25-per-hour federal minimum wage. But you don’t, because the machine is simple to operate and you could easily replace him, so he doesn’t have any bargaining leverage.

Now what happens if the minimum wage goes up to $15? Do you fire your guy and shut the machine down? Do you abandon your plan to buy another machine and hire a second worker? No, of course not.

Admittedly, that’s an extreme example, but it points out the right issues: Whether an increase in the minimum wage causes you to employ fewer people depends on how much you’re making off those people’s work. If you have a razor-thin profit margin, maybe a higher wage makes the whole operation unprofitable and you lay workers off. But if you could actually afford the higher wage, and the only reason you don’t pay it already is that your workers lack bargaining leverage, then you don’t.
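The money-machine logic boils down to a one-line condition: a position survives a wage hike only if the worker still generates more revenue than the wage. The employers below are invented for illustration:

```python
def keeps_the_job(revenue_per_hour, wage):
    """An employer keeps a position only while the worker still turns a profit."""
    return revenue_per_hour > wage

# Two hypothetical employers (numbers invented for illustration):
money_machine = 500.00  # revenue per worker-hour dwarfs any plausible wage
thin_margin   = 9.00    # revenue per worker-hour barely above the old minimum

for revenue in (money_machine, thin_margin):
    print(revenue, keeps_the_job(revenue, 7.25), keeps_the_job(revenue, 15.00))
# The money-machine job survives both wages; the thin-margin job
# survives $7.25 but not $15.
```

So the aggregate effect of a minimum-wage hike depends on how many real jobs look like the money machine and how many look like the razor-thin margin — which is exactly why it's an empirical question.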

In fact, if a minimum-wage increase gives your customers more money to spend on whatever you make, then you might have to hire more people to meet the demand.

Which situation is more typical? One reason to think the second one is: Sometime in the 1970s, wages stopped tracking productivity. Workers have been producing more, but not getting comparable pay raises, presumably because they lack the bargaining power to demand them.

During the same era, the minimum wage has not kept pace with inflation. An increase to around $11 would just get it back to where it was in 1968. If it wasn’t causing massive unemployment then, why would it now?
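The "around $11" figure is just an inflation adjustment of the 1968 minimum wage. A rough version, using approximate CPI-U annual averages (about 34.8 for 1968 and 245 for 2017; treat these as ballpark figures, not official data):

```python
# Approximate CPI-U annual averages (ballpark, for illustration):
cpi_1968 = 34.8
cpi_2017 = 245.0

min_wage_1968 = 1.60  # the federal minimum wage in 1968, in dollars
in_2017_dollars = min_wage_1968 * cpi_2017 / cpi_1968

print(round(in_2017_dollars, 2))  # about 11.26
```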

Supporters of a higher minimum wage also point to studies of past increases, which don’t show big job losses.

But there’s a problem on that side, too: Past hikes haven’t been nearly as big as the proposal to go from $7.25 to $15. I was a minimum-wage worker myself in the 1970s when it increased from $1.60 to $1.80. I suspect my employer was not greatly inconvenienced. But larger increases might have a shock value that makes an employer say, “We can’t afford all these workers.”

That’s why the new data coming in from Seattle is so important: Seattle was one of the first cities to adopt a much-higher minimum wage, so we’re just beginning to see the results of that. The headlines on that initial study were that the higher wage is costing jobs, but that early conclusion is still debatable.

So in spite of my own preference for a higher minimum wage, I find myself in agreement with minimum-wage skeptic economist Adam Ozimek: This is an empirical question, and both sides should maintain more humility until we see more definitive data.

Social Capital and Inequality

Inequality is different this time, because the rich are usurping a different kind of capital.


For a long time, most thinkers in the West accepted poverty as natural. As Jesus said: “The poor you will always have with you.” But by 1754, Jean-Jacques Rousseau was writing an entire discourse on the origin of inequality and blaming it largely on the practice of recognizing land as private property.

The first man who, having enclosed a piece of ground, bethought himself of saying This is mine, and found people simple enough to believe him, was the real founder of civil society. From how many crimes, wars and murders, from how many horrors and misfortunes might not any one have saved mankind, by pulling up the stakes, or filling up the ditch, and crying to his fellows, “Beware of listening to this impostor; you are undone if you once forget that the fruits of the earth belong to us all, and the earth itself to nobody.”

Thomas Paine, who in many ways was the most radical of the American revolutionaries, observed the contrasting example of the Native American tribes — where he found no parallel to European wealth or poverty — and came away with a more nuanced model of the connection between inequality and landed property, which he published in 1797 as Agrarian Justice. He started in much the same place as Rousseau:

The earth in its natural, uncultivated state, was, and ever would have continued to be, THE COMMON PROPERTY OF THE HUMAN RACE. In that state every man would have been born to property. He would have been a joint life-proprietor with the rest in the property of the soil, and in all its natural productions, vegetable and animal.

But Paine also recognized that the development of modern agriculture — which he saw as necessary to feed people in the numbers and diversity of activities essential to advanced civilization — required investing a lot of up-front effort: clearing forests of trees and rocks, draining marshlands, and then annually plowing and planting. Who would do all that, if in the end the harvest would belong equally to everybody? He saw private ownership of land as a solution to this problem, but believed it had been implemented badly. What a homesteader deserved to own was his or her improvement on the productivity of the land, not the land itself. If the land a family cleared became more valuable than the forest or marshland they started with, then the homesteaders should own that difference in value, but not the land itself. [1]

Society as a whole, he concluded, deserved a rent on the land in its original state, and he proposed using that income — or an inheritance tax on land, which would not be as clean a solution theoretically, but would be easier to assess and collect — to capitalize the poor.

When a young couple begin the world, the difference is exceedingly great whether they begin with nothing or with fifteen pounds apiece. With this aid they could buy a cow, and implements to cultivate a few acres of land; and instead of becoming burdens upon society … would be put in the way of becoming useful and profitable citizens.

Paine argued this not as charity or even social engineering, but as justice: The practice of privatizing land had usurped the collective inheritance of those born without land, so something had to be done to restore the usurped value.

In one of my favorite talks (I published versions of it here and here), I extended Paine’s idea in multiple directions, including to intellectual property. Just as Paine would buy a young couple a cow and some tools, I proposed helping people launch themselves into a 21st century information economy. Like Paine, I see this as justice, because otherwise the whole benefit of technological advancement accrues only to companies like Apple or Google, reaching the rest of us only through such companies. A fortune like Bill Gates’ arises partly through innovation, effort, and good business judgment, but also by usurping a big chunk of the common inheritance.

Avent. And that brings us to Ryan Avent’s new book, The Wealth of Humans: work, power, and status in the twenty-first century. There are at least two ways to read this book. It fits into the robot-apocalypse, where-are-the-jobs-of-the-future theme that I have recently discussed here (and less recently here and here). Avent’s title has a double meaning: On the one hand it’s about the wealth humans will produce through the continued advance of technology. But that advance will also result in society having a “wealth” of humans — more than are needed to do the jobs available.

Most books in this genre are by technologists or futurists, and consequently assemble evidence to support a single vision or central prediction. Avent is an economic journalist. (He writes for The Economist.) So he has produced a more balanced analysis, cataloging the forces, trends, and possibilities. It’s well worth reading from that point of view.

But I found Avent’s book more interesting in what it says about inequality and social justice in the current era. What’s different about the 21st century is that technology and globalism have converged to make prosperity depend on a type of capital we’re not used to thinking about: social capital. [2] And from a moral point of view, it’s not at all obvious who should own social capital. Maybe we all should.

What is social capital? Before the Industrial Revolution, capital consisted mainly of land (and slaves, where that was allowed). By the late 19th century, though, the big fortunes revolved around industrial capital: the expensive machines that sat in big factories. The difference between a rich country and a poor one was mainly that people in rich countries could afford to invest in such machinery, which then made them richer. On a national level, industrial capital showed up as government-subsidized railroads and canals and port facilities. (The Erie Canal alone created one of the great 19th-century boom towns: Buffalo.) A country that could afford to make such improvements became more productive and more prosperous.

In the 20th century, the countries that rose to wealth — first Japan and then later Singapore, Taiwan, and South Korea — did so partly through investment in machinery, but also through education. An educated populace could provide the advanced services that made an industrial economy thrive. And so we started talking about human capital, the investments that people and their governments make in acquiring skills, and intellectual capital, the patents, copyrights, and trade secrets that powered a 20th-century giant like IBM.

That may seem like a pretty complete list of the kinds of capital. But now look at today’s most valuable companies: Apple and Google, either of which might become the world’s first trillion-dollar corporation in a year or two. Each owns a small amount of land, no slaves, and virtually no industrial capital; Apple contracts out nearly all of its manufacturing, and a lot of Google’s products are entirely intangible. Both employ brilliant, well-educated people, but not hundreds of billions of dollars worth of them. They have valuable patents, copyrights, trademarks, etc., but again, intellectual property alone doesn’t account for either company’s market value. There’s something in how all those factors fit together that makes Apple and Google what they are.

That’s social capital. Avent describes it like this:

Social capital is individual knowledge that only has value in particular social contexts. An appreciation for property rights, for example, is valueless unless it is held within a community of like-minded people. Likewise, an understanding of the culture of a productive firm is only useful within that firm, where that culture governs behavior. That dependence on a critical mass of minds to function is what distinguishes social capital from human capital.

Social capital has always existed and been a factor of production, but something about the current era, some combination of globalism and technology, has brought it to the fore. Today, a firm strong in social capital — a shared way of approaching problems and taking action that is uniquely suited to a particular market at this moment in history — can acquire all the other factors of production cheaply, making social capital the primary source of its wealth. [3]

Who should own social capital? Right now it’s clear who does own a company’s social capital: the stockholders. But should they? Avent talks about Bill Gates’ $70 billion net worth — created mostly not by his own efforts but by the social organism called Microsoft — and then generalizes:

People, essentially, do not create their own fortunes. They inherit them, come to them through the occupation of some state-protected niche, or, if they are very brilliant and very lucky, through infusing a particular group of men and women with the germ of an idea, which, in time and with just the right environment, allows that group to evolve into an organism suited to the creation of economic value, a very large chunk of which the founder can then capture for himself.

Stockholders — the people who put up the money to acquire the other factors of production — currently get the vast majority of the benefit from a company’s social capital, but it’s not clear why they should. We usually imagine other forms of capital as belonging to whoever would have them if the enterprise broke up: The stockholders would sell off the land and industrial and intellectual capital, while the employees would walk away with the human capital of their experience and education. But the company’s social capital would just vanish, the way that a living organism vanishes if it gets rendered into its constituent chemicals. So, rightfully, who owns it?

Another chunk of social capital resides in nations, which are also social organisms. The very real economic value of the rule of law, voluntary compliance with beneficial but unenforceable norms, shared notions of fairness, trust that others will fulfill their commitments, and general public-spiritedness — in other words, all the cultural stuff that makes a worker or firm or idea more valuable in America or Germany than in Burundi or Yemen — who does it belong to? Who should share in its benefits?

Bargaining power. Avent does not try to sell the conservative fairy tale that the market will allocate benefits appropriately. Under the market, what each party gets out of any collective endeavor depends on its relative bargaining power, not on what it may deserve in some more abstract sense.

Avent proposes this thought experiment: What if automation got to the point where only one human worker was required to produce everything? Naively, you might expect this individual to be tremendously important and very well paid, but that’s probably not what would happen. Everyone in the world who wanted a job would want his job, and even if he had considerable skills, probably in the whole world millions of people would share those skills. So his bargaining power would be essentially zero, and even though in some sense he produced everything, he might end up working for nothing.

Globalization and automation, plus political developments like the decline of unions, have lowered the bargaining power of unskilled workers in rich countries, so they get less money, even though in most cases their productivity has increased. As communication gets cheaper and systems get more intelligent, more and more jobs can be automated or outsourced to countries with lower wages, so the bargaining power of the people in those jobs shrinks. That explains this graph, which I keep coming back to because I think it’s the single most important thing to understand about the American economy today: Hourly wages tracked productivity fairly closely until the 1970s, but have fallen farther and farther behind ever since.

Companies could have afforded to pay more — by now, the productivity is there to support a wage nearly 2 1/2 times higher — but workers haven’t had the bargaining power to demand that money, so they haven’t gotten it. [4]
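To see how a gap like that compounds into a 2 1/2-times figure, here’s a quick back-of-envelope sketch. The growth rates below are illustrative assumptions, not official statistics; the point is only that a roughly two-point annual wedge between productivity growth and wage growth, sustained since the early 1970s, is enough to produce the divergence in the graph.

```python
# Toy illustration (assumed rates, not official data): a ~2% annual
# productivity-wage wedge, compounded over the decades since the
# divergence began, yields roughly the 2 1/2x gap described above.

def compounded(rate: float, years: int) -> float:
    """Total growth factor after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

years = 2017 - 1973                     # roughly the period since the split
productivity = compounded(0.022, years) # assumed 2.2%/yr productivity growth
wages = compounded(0.001, years)        # assumed 0.1%/yr real wage growth

print(f"Productivity grew {productivity:.2f}x")
print(f"Wages grew {wages:.2f}x")
print(f"Gap: {productivity / wages:.2f}x")  # roughly 2.5x
```

Nothing hangs on the exact rates; any sustained wedge of about two points per year over four-plus decades compounds to a gap of this size.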

A similar thing happened early in the Industrial Revolution: Virtually none of the benefits that came from industrial capital were shared with the workers, until they gained bargaining power through political action and unionization. The result is the safety net we have today.

Just as workers’ ability to reap significant benefits from the deployment of industrial capital was in doubt for decades, so we should worry that social capital will not, without significant alterations to the current economic system, generate better economic circumstances for most people.

Who’s in? Who’s out? When you do start sharing social capital, whether within a firm or within a country, you run into the question of who belongs. This is a big part of the contracting-out revolution. The janitors and cafeteria workers at Henry Ford’s plants worked for Henry Ford. But a modern technology corporation is likely to contract for those services. By shrinking down to a core competency, it can reward its workers while keeping a tight rein on who “its workers” are. No need to give stock options or healthcare benefits to receptionists and parking lot attendants if they don’t seem essential to maintaining the company’s social capital.

Things shake out similarly at the national level: The more ordinary Americans succeed in getting a share of the social capital of the United States, the greater the temptation to restrict who can get into the US and qualify for benefits — or to throw out people that many of the rest of us think shouldn’t be here.

Avent would like to see us take the broadest possible view of who’s in:

The question we ask ourselves, knowingly or not, is: With whom do we wish to share society? The easy answer, the habitual answer, is: with those who are like us.

But this answer is bound to lead to trouble, because it is arbitrary, and because it is lazy, and because it is imprecise, in ways that invite social division. There is always some trait or characteristic available which can be used to define someone seemingly like us as not like us.

There is a better answer available: that to be “like us” is to be human. That to be human is to earn the right to share in the wealth generated by the productive social institutions that have evolved and the knowledge that has been generated, to which someone born in a slum in Dhaka is every bit the rightful heir as someone born to great wealth in Palo Alto or Belgravia.

Can it happen? Much of Avent’s book is depressing, but by the time the Epilogue rolls around he seems almost irrationally optimistic. For 200 pages, he has painted as realistic a picture as he could of the challenges we face, whether economic, technological, social, or political. But as to whether things will ultimately work out, he appears to come around to the idea that they have to, so they will. So he ends with this:

We are entering into a great historical unknown. In all probability, humanity will emerge on the other side, some decades hence, in a world in which people are vastly richer and happier than they are now. With some probability, small but positive, we will not make it at all, or we will arrive on the other side poorer and more miserable. That assessment is not optimism or pessimism. It is just the way things are.

Face to face with the unknown, it is hard to know what to feel or what to do. It is tempting to be afraid. But, faced with this great, powerful, transformative force, we shouldn’t be frightened. We should be generous. We should be as generous as we can be.


[1] The arbitrariness of this becomes clear when you consider mineral rights. If my grandfather homesteaded a plot of land, which in my generation turned out to be in the middle of an oil field, what would that wealth have to do with me that I would deserve to own it?

[2] If the term social capital rings a bell for you, you’re probably remembering Robert Putnam’s Bowling Alone, which appeared as a magazine article in 1995 and was expanded to a book in 2000. But Putnam used the term more metaphorically, expressing a sociological idea in economic terms, rather than as a literal factor of production.

[3] Henry Ford’s company probably also had a lot of social capital, but it was hard to notice behind all those buildings and machines.

[4] Individual employers will tell you that they’d go bankrupt if they had to raise wages 2 1/2 times, and in some sense that’s true: They compete with companies that also pay low wages, and would lose that competition if they paid high wages. But that is simply evidence that workers’ bargaining power is low across entire industries, rather than just in this company or that one.

Jobs, Income, and the Future

What “the jobs problem” is depends on how far into the future you’re looking. Near-term, macroeconomic policy should suffice to create enough jobs. But long-term, employing everyone may be unrealistic, and a basic income program might be necessary. That will be such a change in our social psychology that we need to start preparing for it now.


Historical context. The first thing to recognize about unemployment is that it’s not a natural problem. Tribal hunter-gatherer cultures have no notion of it. No matter how tough survival might be during droughts or other hard times, nothing stops hunter-gatherers from continuing to hunt and gather. The tribe has a territory of field or forest or lake, and anyone can go to this commonly held territory to look for food.

Unemployment begins when the common territory becomes private property. Then hunting turns into poaching, gathering becomes stealing, and people who are perfectly willing to hunt or fish or gather edible plants may be forbidden to do so. At that point, those who don’t own enough land to support themselves need jobs; in other words, they need arrangements that trade their labor to an owner in exchange for access to the owned resources. The quality of such a job might vary from outright slavery to Clayton Kershaw’s nine-figure contract to pitch for the Dodgers, but the structure is the same: Somebody else owns the productive enterprise, and non-owners need to acquire the owner’s permission to participate in it.

So even if unemployment is not an inevitable part of the human condition, it is as old as private property. Beggars — people who have neither land nor jobs — appear in the Bible and other ancient texts.

But the nature of unemployment changed with the Industrial Revolution. With the development and continuous improvement of machines powered by rivers or steam or electricity, jobs in various human trades began to vanish; you might learn a promising trade (like spinning or weaving) in your youth, only to see that trade become obsolete in your lifetime.

So if the problem of technological unemployment is not exactly ancient, it’s still been around for centuries. As far back as 1819, the economist Jean Charles Léonard de Sismondi was wondering how far this process might go. With tongue in cheek he postulated one “ideal” future:

In truth then, there is nothing more to wish for than that the king, remaining alone on the island, by constantly turning a crank, might produce, through automata, all the output of England.

This possibility raises an obvious question: What, then, could the English people offer the king (or whichever oligarchy ended up owning the automata) in exchange for their livelihoods?

Maslow. What has kept that dystopian scenario from becoming reality is, basically, Maslow’s hierarchy of needs. As basic food, clothing, and shelter become easier and easier to provide, people develop other desires that are less easy to satisfy. Wikipedia estimates that currently only 2% of American workers are employed in agriculture, compared to 50% in 1870 and probably over 90% in colonial times. But those displaced 48% or 88% are not idle. They install air conditioners, design computer games, perform plastic surgery, and provide many other products and services our ancestors never knew they could want.

So although technology has continued to put people out of work — the railroads pushed out the stagecoach and steamboat operators, cars drastically lessened opportunities for stableboys and horse-breeders, and machines of all sorts displaced one set of skilled craftsmen after another — new professions have constantly emerged to take up the slack. The trade-off has never been one-for-one, and the new jobs have usually gone to different people than the ones whose trades became obsolete. But in the economy as a whole, the unemployment problem has mostly remained manageable.

Three myths. We commonly tell three falsehoods about this march of technology: First, that the new technologies themselves directly create the new jobs. But to the extent they do, they don’t create nearly enough of them. For example, factories that manufacture combines and other agricultural machinery do employ some assembly-line workers, but not nearly as many people as worked in the fields in the pre-mechanized era.

When the new jobs do arise, it is indirectly, through the general working of the economy satisfying new desires, which may have only a tangential relationship to the new technologies. The telephone puts messenger-boys out of business, and also enables the creation of jobs in pizza delivery. But messenger-boys don’t automatically get pizza-delivery jobs; they go into the general pool of the unemployed, and entrepreneurs who create new industries draw their workers from that pool. At times there may be a considerable lag between the old jobs going away and the new jobs appearing.

Second, the new jobs haven’t always required more education and skill than the old ones. One of the key points of Harry Braverman’s 1974 classic Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century was that automation typically bifurcates the workforce into people who need to know a lot and people who need to know very little. Maybe building the first mechanized shoe factory required more knowledge and skill than a medieval cobbler had, but the operators of those machines needed considerably less knowledge and skill. The point of machinery was never just that it replaced human muscle-power with horsepower or waterpower or fossil fuels, but also that once the craftsman’s knowledge had been built into a machine, low-skill workers could replace high-skill workers.

And finally, technological progress by itself doesn’t always lead to general prosperity. It increases productivity, but that’s not the same thing. A technologically advanced economy can produce goods with less labor, so one possible outcome is that it could produce more goods for everybody. But it could also produce the same goods with less labor, or even fewer goods with much less labor. In Sismondi’s Dystopia, for example, why won’t the king stop turning his crank as soon as he has all the goods he wants, and leave everyone else to starve?

So whether a technological society is rich or not depends on social and political factors as much as economic ones. If a small number of people wind up owning the machines, patents, copyrights, and market platforms, the main thing technology will produce is massive inequality. What keeps that from happening is political change: progressive taxation, the social safety net, unions, shorter work-weeks, public education, minimum wages, and so on.

The easiest way to grasp this reality is to read Dickens: In his day, London was the most technologically advanced city in the world, but because political change hadn’t caught up, it was a hellhole for a large chunk of its population.

The fate of horses. Given the long history of technological unemployment, it’s tempting to see the current wave as just more of the same. Too bad for the stock brokers put out of work by automated internet stock-trading, but they’ll land somewhere. And if they don’t, they won’t wreck the economy any more than the obsolete clipper-ship captains did.

But what’s different about rising technologies like robotics and artificial intelligence is that they don’t bifurcate the workforce any more: To a large extent, the unskilled labor just goes away. The shoe factory replaced cobblers with machine designers and assembly-line workers. But now picture an economy where you get new shoes by sending a scan of your feet to a web site which 3D-prints the shoes, packages them automatically, and then ships them to you via airborne drone or driverless delivery truck. There might be shoe designers or computer programmers back there someplace, but once the system is built, the amount of extra labor your order requires is zero.

In A Farewell to Alms, Gregory Clark draws this ominous parallel: In 1901, the British economy required more than 3 million working horses. Those jobs are done by machines now, and the UK maintains a far smaller number of horses (about 800K) for almost entirely recreational purposes.

There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

By now, there is literally nothing that three million British horses can do more economically than machines. Could the same thing happen to humans? Maybe it will be a very long time before an AI can write a more riveting novel than Stephen King, but how many of us still have a genuinely irreplaceable talent?

Currently, the U.S. economy has something like 150 million jobs for humans. What if, at some point in the not-so-distant future, there is literally nothing of economic value that 150 million people can do better than some automated system?

Speed of adjustment. The counter-argument is subtle, but not without merit: You shouldn’t let your attention get transfixed by the new systems, because new systems never directly create as many jobs as they destroy. Most new jobs won’t come from maintaining 3D printers or manufacturing drones or programming driverless cars, they’ll come indirectly via Maslow’s hierarchy: People who get their old wants satisfied more easily will start to want new things, some of which will still require people. Properly managed, the economy can keep growing until all the people who need jobs have them.

The problem with that argument is speed. If technology were just a one-time burst, then no matter how big the revolution was, eventually our desires would grow to absorb the new productivity. But technology is continually improving, and could even be accelerating. And even though we humans are a greedy lot, we’re also creatures of habit. If the iPhone 117 hits the market a week after I got my new iPhone 116, maybe I won’t learn to appreciate its new features until the iPhone 118, 119, and 120 are already obsolete.

Or, to put the same idea in a historical context, what if technology had given us clipper ships on Monday, steamships on Tuesday, and 747s by Friday? Who would we have employed to do what?

You could imagine, then, a future where we constantly do want new things that employ people in new ways, but still the economy’s ability to create jobs keeps falling farther behind. Since we’re only human, we won’t have time either to appreciate the new possibilities technology offers us, or to learn the new skills we need to find jobs in those new industries — at least not before they also become obsolete.

Macroeconomics. Right now, though, we are still far from the situation where there’s nothing the unemployed could possibly do. Lots of things that need doing aren’t getting done, even as people who might do them are unemployed: Our roads and bridges are decaying. We need to prepare for climate change by insulating our buildings better and installing more solar panels. The electrical grid is vulnerable and doesn’t let us take advantage of the most efficient power-managing technologies. Addicts who want treatment aren’t getting it. Working parents need better daycare options. Students could benefit from more one-on-one or small-group attention from teachers. Hospital patients would like to see their nurses come around more often and respond to the call buttons more quickly. Many of our elderly are warehoused in inadequately staffed institutions.

Some inadequate staffing we’ve just gotten used to: We expect long lines at the DMV, and that it might take a while to catch a waitress’ eye. In stores, it’s hard to get anybody to answer your questions. But that’s just life, we think.

That combination of unmet needs and unemployed people isn’t a technological problem, it’s an economic problem. In other words, the problem is about money, not about what is or isn’t physically possible. Either the people with needs don’t have enough money to create effective demand in the market, or the workers who might satisfy the needs can’t afford the training they need, or the businessmen who might connect workers with consumers can’t raise the capital to get started.

One solution is for the Federal Reserve to create more money. At Vox, Timothy Lee writes:

When society invents a new technology that makes workers more efficient, it has two options: It can employ the same number of workers and produce more goods and services, or it can employ fewer workers to produce the same number of goods and services.

Jargon-filled media coverage makes this hard to see, but the Federal Reserve plays a central role in this decision. When the Fed pumps more money into the economy, people spend more and create more jobs. If the Fed fails to supply enough cash, then faster technological progress can lead to faster job losses — something we might be experiencing right now.

So if you’re worried that technological progress will lead to mass unemployment — and especially if you think this process is already underway — you should be very interested in what the Federal Reserve does.

Another option is for the government to directly subsidize the people whose needs would otherwise go unmet. That’s what the Affordable Care Act and Medicaid do: They subsidize healthcare for people who need it but otherwise couldn’t afford it, and so create jobs for doctors, nurses, and the people who manufacture drugs, devices, and the other stuff used in healthcare.

Finally, the government can directly invest in industries that otherwise can’t raise capital. The best model here is the New Deal’s investment in the rural electric co-ops that brought electricity to sparsely populated areas. It’s also what happens when governments build roads or mass-transit systems.

When you look at things this way, you realize that our recent job problems have as much to do with conservative macroeconomic policy as with technology. Since Reagan, we’ve been weakening all the political tools that distribute the benefits of productivity: progressive taxation, the social safety net, unions, shorter work-weeks, public education, the minimum wage. And the result has been exactly what we should have expected: For decades, increases in national wealth have gone almost entirely to owners rather than workers.

In short, we’ve been moving back towards Dickensian London.

The long-term jobs problem. But just because the Robot Apocalypse isn’t the sole source of our immediate unemployment problem, that doesn’t mean it’s not waiting in the middle-to-far future. Our children or grandchildren might well live in a world where the average person is economically superfluous, and only the rare genius has any marketable skills.

The main thing to realize about this future is that its problems are more social and psychological than economic. If we can solve the economic problem of distributing all this machine-created wealth, we could be talking about the Garden of Eden, or various visions of hunter-gatherer Heaven. People could spend their lives pursuing pleasure and other forms of satisfaction, without needing to work. But if we don’t solve the distribution problem, we could wind up in Sismondi’s Dystopia, where it’s up to the owners of the automata whether the rest of us live or die.

The solution to the economic problem is obvious: People need to receive some kind of basic income, whether their activities have any market value or not. The obvious question “Where will the money for this come from?” has an obvious answer “From the surplus productivity that makes their economic contribution unnecessary.” In the same way that we can feed everybody now (and export food) with only 2% of our population working in agriculture, across-the-board productivity could create enough wealth to support everyone at a decent level with only some small number of people working.

But the social/psychological problem is harder. Kurt Vonnegut was already exploring this in his 1952 novel Player Piano. People don’t just get money from their work, they get their identities and senses of self-worth. For example, coal miners of that era may not have wanted to spend their days underground breathing coal dust and getting black lung disease, but many probably felt a sense of heroism in making these sacrifices to support their families and to give their children better opportunities. If they had suddenly all been replaced by machines and pensioned off, they could have achieved those same results with their pension money. But why, an ex-miner might wonder, should anyone love or appreciate him, rather than just his unearned money?

Like unemployment itself, the idea that the unemployed are worthless goes way back. St. Paul wrote:

This we commanded you, that if any would not work, neither should he eat.

It’s worth noticing, though, that many people are already successfully dealing with this psycho-social problem. Scions of rich families only work if they want to, and many of them seem quite happy. Millions of Americans are pleasantly retired, living off a combination of savings and Social Security. Millions of others are students, who may be working quite hard, but at things that have no current economic value. Housespouses work, but not at jobs that pay wages.

Countless people who have wage-paying jobs derive their identities from some other part of their lives: Whatever they might be doing for money, they see themselves as novelists, musicians, chess players, political activists, evangelists, long-distance runners, or bloggers. Giving them a work-free income would just enable them to do more of what they see as their calling.

Conservative and liberal views of basic income. If you talk to liberals about basic income, the conversation quickly shifts to all the marvelous things they would do themselves if they didn’t have to work. Conservatives may well have similar ambitions, but their attention quickly shifts to other people, who they are sure would lead soulless lives of drunken society-destroying hedonism. (This is similar to the split a century ago over Prohibition: Virtually no one thought that they themselves needed the government to protect them from the temptation of gin, but many believed that other people did.)

So far this argument is almost entirely speculative, with both sides arguing about what they imagine would happen based on their general ideas about human nature. However, we may get some experimental results before long.

GiveDirectly is an upstart charity funded by Silicon Valley money, and it has tossed aside the old teach-a-man-to-fish model of third-world aid in favor of the direct approach: Poor people lack money, so give them money. It has a plan to provide a poverty-avoiding basic income — about $22 a month — for 12 years to everybody in 40 poor villages in Kenya. Another 80 villages will get a 2-year basic income. Will this liberate the recipients’ creativity? Or trap them in soul-destroying dependence and rob them of self esteem?
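For a sense of scale, here’s a rough sketch of what the experiment transfers. The $22-a-month figure and the village counts come from the description above; the assumed average of 1,000 recipients per village is a made-up number purely for illustration.

```python
# Back-of-envelope arithmetic for the GiveDirectly experiment described
# above. The $22/month and the 40- and 80-village counts come from the
# text; 1,000 recipients per village is an assumption for illustration.

MONTHLY = 22  # dollars per person per month

def total_transfer(villages: int, people_per_village: int, years: int) -> int:
    """Total dollars transferred over the life of one arm of the program."""
    return villages * people_per_village * MONTHLY * 12 * years

long_arm = total_transfer(40, 1000, 12)  # 12-year basic income arm
short_arm = total_transfer(80, 1000, 2)  # 2-year basic income arm

print(f"Per person over 12 years: ${MONTHLY * 12 * 12:,}")  # $3,168
print(f"Long arm total:  ${long_arm:,}")
print(f"Short arm total: ${short_arm:,}")
```

Even under these assumptions, the whole 12-year arm costs on the order of a hundred million dollars — tiny by government standards, but a serious commitment for a single charity.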

My guess: a little bit of both, depending on who you look at. And both sides will feel vindicated by that outcome. We see that already in American programs like food stamps. For some conservatives, the fact that cheating exists at all invalidates the whole effort; that one guy laughing at us as he eats his subsidized lobster outweighs all the kids who now go to school with breakfast in their stomachs. Liberals may look at the same facts and come to the opposite conclusion: If I get to help some people who really need it, what does it matter if a few lazy lowlifes get a free ride?

So I’ll bet some of the Kenyans will gamble away their money or use it to stay permanently stoned, while others will finally get a little breathing room, escape self-reinforcing poverty traps, and make something of their lives. Which outcome matters to you?

Summing up. In the short run, there will be no Robot Apocalypse as long as we regain our understanding of macroeconomics. But we need to recognize that technological change combines badly with free-market dogma, leading to Dickensian London: Comparatively few people own the new technologies, so they capture the benefits while the rest of us lose our bargaining power as we become less and less necessary.

However, we’re still at the point in history where most people’s efforts have genuine economic value, and many things that people could do still need doing. So by using macroeconomic tools like progressive taxation, public investment, and money creation, the economy can expand so that technological productivity leads to more goods and services for all, rather than a drastic loss of jobs and livelihoods for most while a few become wealthy on a previously unheard-of scale.

At some point, though, we’re going to lose our competition with artificial intelligence and go the way of horses — at least economically. Maybe you believe that AIs will never be able to compete with your work as a psychotherapist, a minister, or a poet, but chess masters and truck drivers used to think that too. Sooner or later, it will happen.

Adjusting to that new reality will require not just economic and political change, but social and psychological change as well. Somehow, we will need to make meaningful lives for ourselves in a work-free technological Garden of Eden. When I put it that way, it sounds easy, but when you picture it in detail, it’s not. We will all need to attach our self-respect and self-esteem to something other than pulling our weight economically.

In the middle-term, there are things we can do to adjust: We should be on the lookout for other roles like student and retiree, that give people a socially acceptable story to tell about themselves even if they’re not earning a paycheck. Maybe the academic idea of a sabbatical needs to expand to the larger economy: Whatever you do, you should take a year or so off every decade. “I’m on sabbatical” might become a story more widely acceptable than “I’m unemployed.” College professors and ministers are expected to take sabbaticals; it’s the ones who don’t who have something to explain.

Already-existing trends that shrink the workforce, like retraining mid-career or retiring early, need to be celebrated rather than worried about. In the long run the workforce is going to go down; that can be either a source of suffering or a cause for rejoicing, depending on how we construct it.

Most of all, we need to re-examine the stereotypes we attach to the unemployed: They are lazy, undeserving, and useless. These stereotypes become self-fulfilling prophecies: If no one is willing to pay me, why shouldn’t I be useless?

Social roles are what we make them. The Bible does not report Adam and Eve feeling useless and purposeless in the Garden of Eden, and I suspect hunter-gatherer tribes that happened onto lands of plentiful game and endless forest handled that bounty relatively well. We could do the same. Or not.

What’s a 21st Century Equivalent of the Homestead Act?

A typical featured article on this blog is supposed to tell my readers something they might not already know, or at least to get them to think about it in a different way. But this time I’m just trying to raise a question, hoping that the combined wisdom and creativity of the readership will come up with stuff I haven’t thought of.

Before I ask the question, some background: One of the most radical things the United States government ever did was pass the Homestead Act (actually the Homestead Acts; there were a series of them). Beginning in 1850, and picking up steam after the Civil War, the government gave away relatively small plots of land — usually 160 acres — to settlers who over a period of five years would build a home on the land, live there, “improve” the land to make it farmable, and then farm it. Wikipedia claims that 10% of the total area of the United States was given away in this manner, to the benefit of 1.6 million families. [1]

I doubt Karl Marx had much influence on the U.S. Congress (though he was writing during this era) and there’s nothing particularly communist about establishing 1.6 million plots of private property. But I like to look at the Homestead Act in the light of the Marxist concept of the means of production. In a nutshell, the means of production is whatever resources are necessary to turn labor into goods and services. So, in a given society at a given state of technology,

Labor + X = Goods and Services

Solve for X, and that’s the means of production. Today, X is complicated: factories and patents and communication systems and whatever. But for most of human history, the means of production had mostly been land. And it still could be, even in the 19th century with its growing industrial economy; if you had fertile land, you could work it and produce sustenance for yourself, plus some extra to trade.

To Marx, the problem of capitalism is that the means of production — land, factories, mines, and so on — wind up privately owned by a fairly small group of people, and everybody else can only get access to the means of production by negotiating with those people. In other words, your productivity is not up to you; you can’t just go work and collect the fruit of your labor, you need an employer to hire you, so that you can have a job and get paid. Your labor only counts if you can get an employer’s permission to use his access to the means of production. Otherwise, you’re like a landless farmer or an auto worker who has been laid off from the factory.

Marx foresaw a vicious cycle: The narrower the ownership of the means of production became, the less bargaining power a worker would have, and the larger the premium an employer could demand in order to grant access. [2] This imbalance in bargaining power would increase the concentration of wealth, making the ownership of the means of production even narrower.

Usually, communists end up talking about state ownership of the means of production, but I want to point out that that’s a method, not a goal. What is really important is universal access to the means of production. State ownership is one way to try to do that, and I’m not sure how many other ways there might be — that’s part of the question here — but the real goal should be access: If all the people who want to work can find a way to turn their effort into goods and services, without needing to make an extortionate deal with some gatekeeper, then we’re on to something.

Now let’s return to the Homestead Act. What it did was vastly increase the number of Americans with access to the means of production. Mind you, it didn’t establish universal access — if you were a freedman sharecropping in Georgia, or were making pennies an hour in some dangerous factory in Connecticut, you had little prospect of assembling a big enough stake to go out West and homestead for five years — but it was a vast expansion of access.

So now you’re in a position to understand what I’m asking: What would do that now? What change could we make (where we includes but is not necessarily limited to the federal government) that would vastly increase access to whatever the means of production is today?


[1] Probably most of you have already realized that this was an example of robbing Peter to pay Paul. The only reason the U.S. government had all this land to give was that they were in the process of stealing it from the Native Americans.

I would argue that at this point the decision to rob Peter had already been made; I doubt any major figure in the government saw much future for the Native Americans other than being pushed back onto reservations or annihilated. However we do the moral calculations today, at the time Congress saw itself with the power (and even the right, though don’t ask me to defend it) to dispose of that land however it wanted.

Given that robbery-in-progress, I think the decision to pay Paul is still remarkable. It certainly wasn’t the only thing Congress could have done. The government could have applied the Spanish model, and created a bunch of large haciendas to be controlled by a wealthy elite. Or it could have applied the English model, and granted the land in huge swathes to public/private companies like the East India Company or the Virginia Company, who could develop it for profit. What it did instead created a middle class of small landowners rather than an aristocracy or a managerial elite.

[2] Workers don’t usually pay an explicit “premium for access to the means of production”, but it’s implicit when a profitable business pays low wages: Money comes in and the owner keeps the lion’s share. If you don’t like it, go get another job.

One way to read the productivity vs. wages graphs I post every few months is that access premiums have been growing since the mid-1970s, and really started to accelerate in the mid-1980s.

The Election Is About the Country, Not the Candidates

Citizens shouldn’t let the media make us forget about ourselves.


Judging by the amount of media attention they got, these were the most important political stories of the week: Donald Trump and Bernie Sanders agreed to debate, but then Trump backed out, leading Sanders supporters to launch the #ChickenTrump hashtag. A report on Hillary Clinton’s emails came out. A poll indicated that the California primary is closer than previously thought. Trump’s delegate total went over 50%. Elizabeth Warren criticized Trump, so he began calling her “Pocahontas”. Sanders demanded that Barney Frank be removed as the chair of the DNC’s platform committee. Trump told a California audience that the state isn’t in a drought and has “plenty of water”. Trump accused Bill Clinton of being a rapist, and brought up the 1990s conspiracy theory that Vince Foster was murdered. President Obama said that the prospect of a Trump presidency had foreign leaders “rattled”, and Trump replied that “When you rattle someone, that’s good.” Clinton charged that Trump had been rooting for the 2008 housing collapse. Pundits told us that the tone of the campaign was only going to get worse from here; Trump and Clinton have record disapproval ratings for presidential nominees, and so the debate will have to focus on making the other one even more unpopular.

If you are an American who follows political news, you probably heard or read most of these stories, and you may have gotten emotionally involved — excited or worried or angry — about one or more of them. But if at any time you took a step back from the urgent tone of the coverage, you might have wondered what any of it had to do with you, or with the country you live in. The United States has serious issues to think about and serious decisions to make about what kind of country it is or wants to be. This presidential election, and the congressional elections that are also happening this fall, will play an important role in those decisions.

That’s why I think it’s important, both in our own minds and in our interactions with each other, to keep pulling the discussion back to us and our country. The flaws and foibles and gaffes and strategies of the candidates are shiny objects that can be hard to ignore, and Trump in particular is unusually gifted at drawing attention. But the government of the United States is supposed to be “of the People, by the People, and for the People”. It’s supposed to be about us, not about them.

As I’ve often discussed before, the important issues of our country and how it will be governed, of the decisions we have to make and the implications those decisions will have, are not news in the sense that our journalistic culture understands it. Our sense of those concerns evolves slowly, and almost never changes significantly from one day to the next. It seldom crystallizes into events that are breaking and require minute-to-minute updates. At best, a breaking news event like the Ferguson demonstrations or the Baltimore riot will occasionally give journalists a hook on which to hang a discussion of an important issue that isn’t news, like our centuries-long racial divide. (Picture trying to cover it without the hook: “This just in: America’s racial problem has changed since 1865 and 1965, but it’s still there.”)

So let’s back away from the addictive soap opera of the candidates and try to refocus on the questions this election really ought to be about.

Who can be a real American?

In the middle of the 20th century (about the time I was born), if you had asked people anywhere in the world to describe “an American”, you’d have gotten a pretty clear picture: Americans were white and spoke English. They were Christians (with a few Jews mixed in, but they were assimilating and you probably couldn’t tell), and mostly Protestants. They lived in households where two parents — a man and a woman, obviously — were trying (or hoping) to raise at least two children. They either owned a house (that they probably still owed money on) or were saving to buy one. They owned at least one car, and hoped to buy a bigger and better one soon.

If you needed someone to lead or speak for a group of Americans, you picked a man. American women might get an education and work temporarily as teachers or nurses or secretaries, but only until they could find a husband and start raising children.

Of course, everyone knew that other kinds of people lived in America: blacks, obviously; Hispanics and various recent immigrants whose English might be spotty; Native Americans, who were still Indians then; Jews who weren’t assimilating and might make a fuss about working on Saturday, or even wear a yarmulke in public; single people who weren’t looking to marry or raise children (but might be sexually active anyway); women with real careers; gays and lesbians (but not transgender people or even bisexuals, whose existence wasn’t recognized yet); atheists, Muslims, and followers of non-Biblical religions; the homeless and others who lived in long-term poverty; folks whose physical or mental abilities were outside the “normal” range; and so on.

But they were Americans-with-an-asterisk. Such people weren’t really “us”, but we were magnanimous enough to tolerate them living in our country — for which we expected them to be grateful.

Providing services for the “real” Americans was comparatively easy: You could do everything in English. You didn’t have to concern yourself with handicapped access or learning disabilities. You promoted people who fit your image of a leader, and didn’t worry about whether that was fair. You told whatever jokes real Americans found funny, because anybody those jokes might offend needed to get a sense of humor. The schools taught white male history and celebrated Christian holidays. Every child had two married parents, and you could assume that the mother was at home during the day. Everybody had a definite gender and was straight, so if you kept the boys and girls apart you had dealt with the sex issue.

If those arrangements didn’t work for somebody, that was their problem. If they wanted the system to work better for them, they should learn to be more normal.

It’s easy to imagine that this mid-20th-century Pleasantville America is ancient history now, but it existed in living memory and still figures as an ideal in many people’s minds. Explicitly advocating a return to those days is rare. But that desire isn’t gone; it’s just underground.

For years, that underground nostalgia has figured in a wide variety of political issues. But it has been the particular genius of Donald Trump to pull them together and bring them as close to the surface as possible without making an explicit appeal to turn back the clock and re-impose the norms of that era. “Make America great again!” doesn’t exactly promise a return to Pleasantville, but for many people that’s what it evokes.

What, after all, does the complaint about political correctness amount to once you get past “Why can’t I get away with behaving like my grandfather did?”

We can picture rounding up and deporting undocumented Mexicans by the millions, because they’re Mexicans. They were never going to be real Americans anyway. Ditto for Muslims. It would have been absurd to stop letting Italians into the country because of Mafia violence, or to shut off Irish immigration because of IRA terrorism. But Muslims were never going to be real Americans anyway, so why not keep them out? (BTW: As I explained a few weeks ago, the excuse that the Muslim ban is “temporary” is bogus. If nobody can tell you when or how something is going to end, it’s not temporary.)

All the recent complaints about “religious liberty” fall apart once you dispense with the notion that Christian sensibilities deserve more respect than non-Christian ones, or that same-sex couples deserve less respect than opposite-sex couples.

On the other side, Black Lives Matter is asking us to address that underground, often subconscious, feeling that black lives really aren’t on the same level as white lives. If a young black man is dead, it just doesn’t have the same claim on the public imagination — or on the diligence of the justice system — that a white death would. How many black or Latina girls vanish during a news cycle that obsesses over some missing white girl? (For that matter, how many white presidents have seen a large chunk of the country doubt their birth certificates, or have been interrupted during State of the Union addresses by congressmen shouting “You lie!”?)

But bringing myself back to the theme: The issue here isn’t Trump, it’s us. Do we want to think of some Americans as more “real” than others, or do we want to continue the decades-long process of bringing more Americans into the mainstream?

That question won’t be stated explicitly on your ballot this November, like a referendum issue. But it’s one of the most important things we’ll be deciding.

What role should American power play in the world?

I had a pretty clear opinion on that last question, but I find this one much harder to call.

The traditional answer, which goes back to the Truman administration and has existed as a bipartisan consensus in the foreign-policy establishment ever since, is that American power is the bedrock on which to build a system of alliances that maintains order in the world. The archetype here is NATO, which has kept the peace in Europe for 70 years.

That policy involves continuing to spend a lot on our military, and risks getting us involved in wars from time to time. (Within that establishment consensus, though, there is still variation in how willing we should be to go to war. The Iraq War, for example, was a choice of the Bush administration, not a necessary result of the bipartisan consensus.) The post-Truman consensus views America as “the indispensable nation”; without us, the world community lacks both the means and the will to stand up to rogue actors on the world stage.

A big part of our role is in nuclear non-proliferation. We intimidate countries like Iran out of building a bomb, and we extend our nuclear umbrella over Japan so that it doesn’t need one. The fact that no nuclear weapon has been fired in anger since 1945 is a major success of the establishment consensus.

Of our current candidates, Hillary Clinton (who as Secretary of State negotiated the international sanctions that forced Iran into the recent nuclear deal) is the one most in line with the foreign policy status quo. Bernie Sanders is more identified with strengthened international institutions which — if they could be constructed and made to work — would make American leadership more dispensable. To the extent that he has a clear position at all, Donald Trump is more inclined to pull back and let other countries fend for themselves. He has, for example, said that NATO is “obsolete” and suggested that we might be better off if Japan had its own nuclear weapons and could defend itself against North Korea’s nukes. On the other hand, he has also recently suggested that we bomb Libya, so it’s hard to get a clear handle on whether he’s more or less hawkish than Clinton.

Should we be doing anything about climate change?

Among scientists, there really are two sides to the climate-change debate: One side believes that the greenhouse gases we are pumping into the atmosphere threaten to change the Earth’s climate in ways that will cause serious distress to millions or even billions of people, and the other side is funded by the fossil fuel industry.

It’s really that simple. There are honest scientific disagreements about the pace of climate change and its exact mechanisms, but the basic picture is clear to any scientist who comes to the question without a vested interest: Burning fossil fuels is raising the concentration of greenhouse gases in the atmosphere. An increase in greenhouse gases causes the Earth to radiate less heat into space. So you would expect to see a long-term warming trend since the Industrial Revolution got rolling, and in fact that’s what the data shows — despite the continued existence of snowballs, which has been demonstrated by a senator funded by the fossil fuel industry.

Unfortunately, burning fossil fuels is both convenient and fun, at least in the short term. And if you don’t put any price on the long-term damage you’re doing, it’s also economical. In reality, doing nothing about climate change is like going without health insurance or refusing to do any maintenance on your house or car. Those decisions can improve your short-term budget picture, which now might have room for that Hawaiian vacation your original calculation said you couldn’t afford. Your mom might insist that you should account for your risk of getting sick or needing some major repair, but she’s always been a spoilsport.

That’s the debate that’s going on now. If you figure in the real economic costs of letting the Earth get hotter and hotter — dealing with tens of millions of refugees from regions that will soon be underwater, building a seawall around Florida, moving our breadbasket from Iowa to wherever the temperate zone is going to be in 50 years, rebuilding after the stronger and more frequent hurricanes that are coming, and so on — then burning fossil fuels is really, really expensive. But if you decide to let future generations worry about those costs and just get on with enjoying life now, then coal and oil are still cheap compared to most renewable energy sources.

So what should we do?

Unfortunately, nobody has come up with a good way to re-insert the costs of climate change into the market without involving government, or to do any effective mitigation without international agreements among governments; the recent Paris Agreement is just a baby step in that direction. And to one of our political parties, government is a four-letter word and world government is an apocalyptic horror. So the split inside the Republican Party is between those who pretend climate change isn’t happening, and those who think nothing can or should be done about it. (Trump is on the pretend-it-isn’t-happening side.)

President Obama has been taking some action to limit greenhouse gas emissions, but without cooperation from Congress his powers are pretty limited. (It’s worth noting how close we came to passing a cap-and-trade bill to put a price on carbon before the Republicans took over Congress in 2010. What little Obama’s managed to do since may still get undone by the Supreme Court, particularly if its conservative majority is restored.)

Both Clinton and Sanders take climate change seriously. As is true across the board, Sanders’ proposals are simpler and more sweeping (like “ban fracking”) while Clinton’s are wonkier and more complicated. (In a debate, she listed the problems with fracking — methane leaks, groundwater pollution, earthquakes — and proposed controlling them through regulation. She concluded: “By the time we get through all of my conditions, I do not think there will be many places in America where fracking will continue to take place.”) But like Obama, neither of them will accomplish much if we can’t flip Congress.

Trump, meanwhile, is doing his best impersonation of an environmentalist’s worst nightmare. He thinks climate change is a hoax, wants to reverse President Obama’s executive orders to limit carbon pollution, has pledged to undo the Paris Agreement, and promises to get back to burning more coal.

How should we defend ourselves from terrorism?

There are two points of view on ISIS and Al Qaeda-style terrorism, and they roughly correspond to the split between the two parties.

From President Obama’s point of view, the most important thing about the battle with terrorism is to keep it contained. Right now, a relatively small percentage of the world’s Muslims support ISIS or Al Qaeda, while the vast majority are hoping to find a place for themselves inside the world order as it exists. (That includes 3.3 million American Muslims. If any more than a handful of them supported terrorism, we’d be in serious trouble.) We want to keep tightening the noose on ISIS in Iraq and Syria, and keep closing in on terrorist groups elsewhere in the world, while remaining on good terms with the rest of the Muslim community.

From this point of view — which I’ve described in more detail here and illustrated with an analogy here — the worst thing that could happen would be for these terrorist incidents to touch off a world war between Islam and Christendom.

The opposite view, represented not just by Trump but by several of the Republican rivals he defeated, is that we are already in such a war, so we should go all out and win it: Carpet bomb any territory ISIS holds, without regard to civilian casualties. Discriminate openly against Muslims at home and ban any new Muslims from coming here.

Like Obama, I believe that the main result of these policies would be to convince Muslims that there is no place for them in a world order dominated by the United States. Rather than a few dozen pro-ISIS American terrorists, we might have tens of thousands. If we plan to go that way, we might as well start rounding up 3.3 million Americans right now.

Clinton and Sanders are both roughly on the same page with Obama. Despite being Jewish and having lived on a kibbutz, Sanders is less identified with the current Israeli government than either Obama or Clinton, to the extent that makes a difference.

Can we give all Americans a decent shot at success? How?

Pre-Trump, Republicans almost without exception argued that all we need to do to produce explosive growth and create near-limitless economic opportunity for everybody is to get government out of the way: Lower taxes, cut regulations, cut government programs, negotiate free trade with other countries, and let the free market work its magic. (Jeb Bush, for example, argued that his small-government policies as governor of Florida — and not the housing bubble that popped shortly after he left office — had led to 4% annual economic growth, so similar policies would do the same thing for the whole country.)

Trump has called this prescription into question.

If you think about it, the economy is rigged, the banking system is rigged, there’s a lot of things that are rigged in this world of ours, and that’s why a lot of you haven’t had an effective wage increase in 20 years.

However, he has not yet replaced it with any coherent economic view or set of policies. His tax plan, for example, is the same sort of let-the-rich-keep-their-money proposal any other Republican might make. He promises to renegotiate our international trade agreements in ways that will bring back all the manufacturing jobs that left the country over the last few decades, but nobody’s been able to explain exactly how that would work.

At least, though, Trump is recognizing the long-term stagnation of America’s middle class. Other Republicans liked to pretend that was all Obama’s fault, as if the 2008 collapse hadn’t happened under Bush, and — more importantly — as if the overall wage stagnation didn’t date back to Reagan.

One branch of liberal economics, the one that is best exemplified by Bernie Sanders, argues that the problem is the over-concentration of wealth at the very top. This can devolve into a the-rich-have-your-money argument, but the essence of it is more subtle than that: Over-concentration of wealth has created a global demand problem. When middle-class and poor people have more money, they spend it on things whose production can be increased, like cars or iPhones or Big Macs. That increased production creates jobs and puts more money in the pockets of poor and middle-class people, resulting in a virtuous demand/production/demand cycle that is more-or-less the definition of economic growth.

By contrast, when very rich people have more money, they are more likely to spend it on unique items, like van Gogh paintings or Mediterranean islands. The production of such things can’t be increased, so what we see instead are asset bubbles, where production flattens and the prices of rare goods get bid higher and higher.

For the last few decades, we’ve been living in an asset-bubble world rather than an economic-growth world. The liberal solution is to tax that excess money away from the rich, and spend it on things that benefit poor and middle-class people, like health care and infrastructure.

However, there is a long-term problem that neither liberal nor conservative economics has a clear answer for: As artificial intelligence creeps into our technology, we get closer to a different kind of technological unemployment than we have seen before, in which people of limited skills may have nothing they can offer the economy. (In A Farewell to Alms Gregory Clark makes a scary analogy: In 1901, the British economy provided employment for 3 million horses, but almost all those jobs have gone away. Why couldn’t that happen to people?)

As we approach that AI-driven world, the connection between production and consumption — which has driven the world economy for as long as there has been a world economy — will have to be rethought. I don’t see anybody in either party doing that.


So what major themes have I left out? Put them in the comments.

Can We Overthrow the Creditocracy?

In the long history of oppression, where are we today? And what can we do about it?


The simplest, most direct form of oppression is forced labor: Work for me, do what I say, or I’ll beat you. And if no beating short of death will induce you to do what I want, then the example of your demise will at least make my next victim more pliable.

Unfortunately for the oppressor, though, forced labor is also morally simple. The press-ganged victim knows I have wronged him or her. Given the chance to run away, or (better yet) kill me, he or she will feel completely justified.

That’s why history is full of attempts to dress oppression up and make its morality more confusing. If you want to be cynical, you might tell the whole economic history of the world that way: as a series of systems to dress up oppression and shift the guilt of it from the order-giver to the order-taker. In every era, the many work and the few benefit, but those who run away or revolt are the immoral ones. They are ungrateful wretches who bite the hands that feed them and repay their kindly benefactors with violence.

For example, from today’s perspective the slave society of the old South seems pretty stark: Do what I say because I own you and your children and your children’s children down to the last generation. And yet, the literature of the time — written by whites, naturally — often waxes lyrical about the great good the white man has done for his undeserving servants: given them the gift of civilization, saved their souls for Christ, accepted them in his home and fed and clothed them since birth, or perhaps purchased them from an animal-like existence under a slave-trader and bestowed upon them new names and new roles (however lowly) in human society.

How dare the slave forget his obligation and steal himself away!

Freedom without access. Most systems are more subtle than that. The people at the bottom aren’t owned, and in fact their freedom may be a central point of public celebration. But a small group controls access to something everyone needs to survive. To guarantee your own access, you must strike a deal with them — on their terms, usually — and do what they say. And because society frames its story in a way that justifies the access-control, the people who tell you what to do are not your oppressors, they’re your benefactors. You owe them for giving you the opportunity to serve.

What that necessary something is, and how access to it is controlled, tell you what kind of oppressive system you’re in. In feudalism, a small group of lordly families control the land you need to grow food. To get access, your family must swear fealty to one of them, and God have mercy on the traitor who breaks his vows. In the sharecropper system that replaced slavery in the South, whites (often the same whites who had owned the antebellum plantations) controlled access to money and markets. Freedom and even a small chunk of land might be yours, but the wherewithal to survive until harvest had to be borrowed, and then you were obliged to sell your crop to your creditor, for a price he named — usually not quite enough to clear your debt. If you tried to escape this system, you weren’t a runaway slave (as your mother or father would have been), but you were a runaway debtor and the law would hunt you down just the same.

In the North, oppression took its purest form in the company towns immortalized in the song “16 Tons”, where the singer imagines that not even death will get him out. The company controlled every side of the transaction — not just access to productive work, but the scrip you were paid in, and the company store where you could spend it. The system wasn’t quite so obvious in the bigger cities, where many employers drew from the same labor pool, but the basic outline was the same: To get access to what Marx called “the means of production” — land, factories, mines, or any other resource that human labor could turn into the stuff of survival — the masses at the bottom of the pyramid had to deal with a fairly small group of employers, who could dictate wages and working conditions.

As on the plantations or the feudal manors, the language of morality had been turned inside-out: The oppressor was the benefactor. Give me a job, the worker begged.

The American exception. Underneath all that oppressiveness, though, something new had been blooming in America from the beginning. Dispossessing the Native Americans of an entire continent had created opportunities for wealth so vast that the old upper classes couldn’t exploit them all without help, so common people were cut in on the booty.

Already in 1776’s The Wealth of Nations, Adam Smith had documented that wages were considerably higher in the colonies (where there was so much work to be done and a comparative dearth of hands) than in England itself. The post-revolutionary Homestead Acts codified a system that had been operating informally for some time: For whites, American wages were enough above subsistence that you could build a stake of capital, buy tools and transport, and then set out for the hinterland and establish an independent relationship with the means of production. For one of the few times since the hunter-gatherer era, working-class Europeans could apply their labor directly to the land and live without paying for access.

Post-Civil-War American history can be told as a struggle by the capitalist class to claw back those hastily bestowed opportunities by manipulating markets, monopolizing the new railroads, and generally “crucify[ing] mankind upon a cross of gold” as William Jennings Bryan famously put it. But they never completely succeeded. Hellish as turn-of-the-century mines and factories could be, the vision remained: Capitalism didn’t have to be so bad, if workers had a way to opt out and employers had to compete to hire them.

The early 20th century brought a series of shocks to the capitalist system: the world wars, the Russian Revolution, the Great Depression, and finally the very real threat of Communist revolutions. The devastated Europe of 1945 in some ways duplicated the opportunities of the New World: There was so much work to be done that for three decades (les Trente Glorieuses, as the French put it) full employment and rising wages could be the norm.

In the Cold War competition with Communism, Capitalism had to loosen up to maintain the workers’ loyalty. And so a mixed public/private social contract developed: The means of production would continue to be privately owned, but government would keep the worker in the game. Government would provide education at little or no cost to the student; guarantee a liveable minimum wage; protect consumers from unsafe products and workers from dangerous workplaces; prevent monopolies from forming; create jobs by building public infrastructure; defend the workers’ right to form unions powerful enough to negotiate with corporations on equal terms; maintain a safety net against unemployment, disability, and old age; and (except in the United States) take care of the sick. The political expectation was that a rising tide would lift all boats: If profits rose, wages would rise, and everyone would benefit.

Counterrevolution. But by the late 1970s, the failure of the Soviet system to make good on its economic promises made Khrushchev’s we-will-bury-you threat ring hollow, and Western capitalists started to wonder if they’d given away too much. The theme of their Reagan/Thatcher counterrevolution would be privatization. Wherever possible, get government out of the picture so that the natural power imbalance between worker and employer can re-assert itself.

And that has been the story of the last not-so-glorious forty years: Powerful unions and nearly-free state universities are mere memories. Inflation has pushed the minimum wage down towards subsistence. We are told that the wealthiest nation in the world cannot afford a safety net; if bankruptcy looms (or can be manufactured), the solution is not to commit new resources, but to slash benefits. Consumer and worker protection is “job-killing regulation”, and making up for a job shortfall with public works is unthinkable. Increasingly, even public K-12 education is under fire; if you really want a high-quality education for your child, perhaps a government voucher will defray the cost a little, until inflation eats up that subsidy as it has the minimum wage.

As a result, even as productivity-per-hour and GDP-per-capita have continued to rise, wages have not. Ever-increasing shares of the national income and the national wealth are controlled by the top 10%, the top 1%, the top .01%. Even in the uppermost levels of the economic pyramid, there is always an even smaller class of people just above you whose skyrocketing wealth is leaving you far behind.

Creditocracy. Andrew Ross’ book Creditocracy and the Case for Debt Refusal points out that the goal of the counter-revolution is not just a restoration of late 19th-century capitalism, in which large employers dominate by controlling access to jobs. It’s a subtly different system of oppression entirely: a creditocracy.*

Everything the Cold War social contract promised is still available; you just have to pay up for it. How will you do that? You’ll get loans, and spend the rest of your life working to make the payments. Rather than beg “Give me a job”, you’ll beg “Give me a loan, so that I can get what I need to get and keep a job.” The bankers will be your benefactors, and then they will tell you what to do.

Education is where this project is most advanced. Probably there will always be some way to warehouse children at public expense while their parents work, either in public schools or in minimal private schools fully covered by a public voucher. But if you want the kind of education that gives a child options beyond minimum wage or welfare, you’ll have to pay up. Some people will be able to cover that expense, but most will have to borrow. If we’re talking about college, we’re already there. Working your way through college was once a realistic goal; it no longer is. The Federal Reserve recently estimated total student debt at $1.13 trillion, with about 1 in 8 borrowers owing more than $50,000 each, and a small but increasing number beginning their careers more than $200,000 in the hole.

If you just want to live somewhere, that won’t be a problem. But if you want to live in a neighborhood where potholes are fixed and police protect you rather than prey on you, you’ll have to pay up. Need a loan?

Public transportation? Forget about it. You can stay home for free, but if you want to work you’ll need a car, and cars cost. Calories are easy to come by, but safe and healthy food? Still available in certain upscale groceries, if you can afford it. Medical care? We’d never just let you die, and we have repayment plans with attractive rates. Clothes? I see you’ve got your body covered, but you’ll never get a job looking like that. Libraries? Parks? There are some you can join for a membership fee, though probably not in your neck of the woods. News? Comes from cable TV or the internet, via the local monopoly. Retirement? You can never be sure you’ll have enough to stay out of poverty, but maybe your kids will co-sign for you if you live too long.

During the post-war Trente Glorieuses, debt was a way to anticipate your rising income and get a few luxuries earlier than you otherwise might. But in the Creditocracy, debt is a necessity; all but the wealthy need to borrow to stay in the game. And once you owe, the onus is on you to toe the line: You’ll never cover your payments working in a field you love, or letting moral considerations control what you will and won’t do for a living. (Are you sure you don’t want to fight in our war? We’re hiring.) You don’t dare stick your neck out politically or socially, if you want to stay employed and keep making your payments. Maybe someday, if you get it all paid off, you’ll live by your heart and your conscience. But until then …

And where does this needed credit ultimately come from? It’s conjured out of the aether by the Federal Reserve, and distributed to the big banks by loans at rock-bottom rates. That’s the controlled access that makes the whole system possible. They have access and you need it, so they can tell you what to do and leave you thanking them for it. And if they ever push things too far and make loans that can never be repaid, then they’ll have the government behind them, bailing them out and sticking ordinary taxpayers with the bill. You may have lost your home, your savings, and God knows what else in the whole mess, but at least the banker will be made whole.

The Morality of Default. On the rare occasions when systems of oppression are beaten, they are first beaten morally. Slavery can’t be defeated until the runaway slave becomes a hero rather than a scoundrel, and the rebellious slave a soldier rather than a murderer. The company town can’t be overthrown until the worker who refuses to work becomes a striker rather than a bum, and values solidarity with his comrades over the debt he owes his employer for “giving” him a job.

Today, it seems like an impossible dream that debtors could ever take the moral high ground away from creditors. Somebody who borrows and then won’t pay is a deadbeat, a moocher, a loser. It seems hard to imagine a debtors’ rights movement that could win popular support for a repayment strike or the outright renunciation of unreasonable debts.

But that’s what Ross envisions. To get there, we need to develop and popularize moral standards that separate good debts from bad debts. For example, view John Oliver’s piece on the payday lending industry, and then consider the idea that many of these loans — particularly ones where the original principal was paid back long ago, but the compounding interest has taken on a life of its own — should just not be repaid. Similarly, the Consumer Financial Protection Bureau is suing ITT Educational Services for tactics that seem widespread in the for-profit college industry: using high-pressure sales tactics to push students into taking out loans, when they have little prospect of either getting a degree or paying off the loan. Some of the sub-prime loans of the housing boom were likewise made with no reasonable prospect of repayment, then sold off to investors anyway. The primary fraud came from the banker, not the borrower.

Other debt is perhaps no fault of the lender, but should not be charged against the debtor either. Medical debt — often as clear a case of pay-or-die as any highway robbery — is the best example, but much student debt fits as well. The debt exists because of society’s failure to provide what ought to be public goods. If any debt is going to vanish in the fancy bookkeeping of the Fed, this kind of debt should.

Some debts are legitimate, but there are equally legitimate claims in the other direction, ones that the Creditocracy does not take as seriously. Much of the developing world’s debt to the wealthy countries might be cancelled by fair reparations for colonialism, or by the responsibility that industrialized nations have for using up the carbon-carrying capacity of the atmosphere. Today, the obligations in one direction are considered iron-clad, while the ones in the other are optional. Why should that be?

Probably most debts should eventually be paid. But even those might be part of a larger debt strike, one that forces action on the debts that should be renegotiated or simply renounced.

In the long run, the infrastructure of the Creditocracy might be torn down and rebuilt into an economic system whose primary purpose is to create useful goods and services rather than profits, a world with more co-ops and credit unions and crowd funding, and less money swirling around in financial derivatives.

But long before that can happen, the moral structure that supports the Creditocracy needs to be challenged and shaken at many levels. Imagine, if you can, a world in which the debtor who does not pay — like the slave who runs away or the worker who sits down on the job — is a hero.

Not a deadbeat, a moocher, or a loser. A hero.


* One reason this “review” is so long is that although I think the ideas in the book are important, I don’t actually like the way Ross makes his case. His style is repetitive, needlessly polemic, and sloppy with numbers. So I’m recasting the ideas in my own way.

One example: While making some point about Google and Facebook, Ross mentioned what each “earned” in a particular quarter. The numbers seemed high to me, so I checked them. He had actually quoted the companies’ revenues, not their earnings.

He was making a qualitative point, in which revenues worked just as well as earnings (i.e., some other number was small potatoes to companies that big). So it seemed to just be sloppiness rather than deception. But I don’t have to hit many such examples before I start to doubt everything.

Prosperity Without Growth?

When you take a very-long-term view of the future of civilization, the one option that seems most unlikely is that we can continue the patterns of the last few centuries: an ever-increasing population consuming ever-more stuff, using ever-more natural resources to produce it, and leaving ever-more waste products for the planet to absorb.

Futurists embarrass themselves when they predict precisely when and how that pattern will break, but still, it defies my imagination to picture how this could all continue indefinitely down the millennia. Eventually — whether by wise planning, cataclysm, alien conquest, or the return of Jesus — the exponential growth is going to stop.*

What will that look like? If you stipulate those steady-state conditions — stable population, stable resource use, and each generation leaving the planet’s natural environment more-or-less the way they found it — what kind of society can you construct? Can you come up with one that has a place for people more-or-less like us? Or does the whole concept involve making over the human character completely? Could the people in such a no-growth society feel prosperous? Or is prosperity-without-growth a contradiction?

A number of fairly smart, reasonable people have been asking those questions for a while now, and they’re starting to come up with some visions — sketchy ones, to be sure, but sketched-out well enough that the rest of us should start paying attention. One such vision is in Enough is Enough by Rob Dietz and Dan O’Neill.

Disclaimers. Growth has gotten to be such a religion that no-growth smacks of heresy. Like most heresies, it has been caricatured by the faithful to such a degree that any discussion has to start with a few denials.

Two examples of non-growing economies leap to mind: growth-oriented economies that are failing to grow (as the American economy has failed since the housing bubble burst), and aboriginal hunter-gatherer economies. The first example is characterized by despair, lack of opportunity, and increasing poverty; the second, by discomfort, lack of technology, and vulnerability to disease and famine. Aboriginal societies may live in harmony with Nature, but they also live at the mercy of Nature. One thing you can say for the global economy is that Iowa can have a drought without Iowans starving to death.

Neither example is what the no-growth visionaries are proposing. A society without growth could continue to have antibiotics and the internet — and could even continue innovating, as long as the innovations-as-a-whole didn’t increase the consumption of resources or the production of waste.

A growth-oriented economy that doesn’t grow is the worst of both worlds. It consumes resources unsustainably, and yet fails to provide opportunity and hope. If that were the goal, it could easily be achieved: Just instruct the Fed to keep interest rates high enough to choke off new investment.

The challenge, though, is quite different: To envision a steady-state relationship between Nature and a stable population of humans, while providing those humans the opportunity to lead satisfying lives.

Outline. The book is in three parts. The first discusses the overall idea of “enough”. The second breaks this down into specific areas: How could we achieve a stable population? How could a non-growing economy deal with poverty? What would banking and investment look like? And the third discusses strategies for changing the culture and the political system.

Problem-solving attitude. Because it covers so many topics and is intended to further an open-ended discussion, the book really can’t be condensed. Its strength is in its details, not in a sound bite that gets elaborated over 200 pages.

But the other important aspect of the book is the attitude it projects: It takes the problem of planetary depletion seriously and approaches it with a problem-solving attitude. So it is not a jeremiad, or a prophesy of doom, or a denial that anything really needs to change — three categories that take in most of the debate on these topics. It’s easy to find reasons why a stable economy can’t happen, but comparatively rare to find people who accept that it must happen eventually, and then bring a problem-solving attitude to the question of how.

A number of factors evolved with the idea of economic growth, and they will have to change or be replaced to achieve stability: a money-creating banking system, measuring the economy by GDP, and corporations devoted to constant growth are just a few of the ones discussed in more detail. An example of the kind of change a stable economy would need: Much of what is done today by profit-seeking corporations could be done by consumer-owned co-ops focused on providing service rather than producing an ever-increasing profit for investors.**

The poor held hostage. To me, the most significant argument against a stable economy says, “Morally, how can we rein in economic growth when so many people still don’t have enough?” My problem with that question: I have lost faith that the capitalist economy will ever provide enough for everybody, no matter how high global GDP gets. Over the last few decades, the top 1% has gotten better and better at capturing economic growth for themselves. From the point of view of a CEO seeking higher profits for his corporation, a better life for the poor is an inefficiency to be avoided. Across-the-board wage increases are a capitalist nightmare, not a fulfillment of the capitalist system.

In the Dietz/O’Neill view, we need to turn this kind of thinking around: Rather than continuing to grow the economy in hopes that some of the new consumables will filter down to the poor, we need to solve the problem of inequality so that we can achieve a stable economy. Poverty is a political problem, not an economic problem. Growing the economy without changing the politics won’t solve it.

Rather than putting the entire burden of proof on the no-growth vision, I think we also have to stop accepting a “someday” vision of ending poverty through growth. Anyone who makes the anti-poverty argument for growth needs to explain exactly how growth is going to help the poor, and offer a projection of how much more growth it will take to eradicate poverty before we can stabilize the economy’s toll on the planet.

Trustworthy governance. Again and again, I was struck by how the Dietz/O’Neill vision requires that we work together as a species. The easiest way to envision that unity is via some Hunger-Games-style tyranny, which no one (least of all Dietz and O’Neill) wants. But even the most free and democratic vision of a stable economy depends on establishing some trustworthy global institutions.

For example, a global cap-and-trade system to stabilize the CO2 in the atmosphere would work only if people can’t cheat anywhere in the world, if the tradable CO2 certificates can’t be counterfeited, and if you can’t “earn” them by creating bogus carbon-offset projects — trees that are never actually planted, etc.

Similarly, population could be stabilized through incentives and voluntary cooperation rather than one-child mandates and forced sterilizations. But someone would have to monitor all that and adjust the incentives accordingly, and the rest of us would have to trust the fairness of that monitoring agency.

This is the part I worry about most: If you have money and power and you want to derail the vision of a stable future, all you really have to do is create distrust. What could be easier?

Not a lone voice. Another striking thing about Enough is Enough is the extent to which it builds on the work of many others. For example, the view of money, debt, and banking will be familiar to Sift readers from David Graeber’s Debt: The First 5,000 Years and Warren Mosler’s Seven Deadly Innocent Frauds of Economic Policy.

I’m sure many people will look on this as cranks quoting other cranks, but I don’t. I’m starting to see a unifying view develop.

Virtual consumption. Futurists have to be wary of a technology-will-save-us argument, which is always too easy and is often a mirage. But I think Dietz and O’Neill miss one important way that technology can contribute to a sustainable future: virtualization. We’re already seeing some of it: My book collection is gradually turning into patterns of electrical charges rather than shelves of paper.

Dietz and O’Neill point out (appropriately) that such changes are meaningless if they just make paper cheaper and allow somebody else to consume more of it. But recent sci-fi (starting with Snow Crash and continuing into more recent works like The Quantum Thief or Ready Player One) points to the greater possibilities.

You can think of consumption as serving four purposes: survival, comfort, entertainment, and competition for status. It is easy to imagine “enough” when we talk about survival and comfort, and maybe even entertainment. But the really open-ended consumption happens when we compete for status. I can imagine wanting a boat for entertainment, but the only reason to want a 400-foot yacht is to out-do the guys who can only afford 300-foot yachts. (As far back as the Roman sumptuary laws, the essence of the moral argument to limit consumption is that some people are starving so that others can raise their status.)

Survival and comfort require real-world resources. (You can’t eat pixels.) But if the culture evolved so that we got most of our entertainment inside virtual worlds and competed for status there, then a sustainable economy would be much easier to achieve.


* Space travel is sometimes presented as a far-future solution. While I can imagine a Noah’s-Ark-style spaceship seeding another planet with humans, I can’t imagine inter-stellar travel ever being so cheap that emigration has a significant impact on Earth’s population. (At least that’s not a future I’m willing to count on.) So Earth’s remaining citizens would still have to come to terms with the planet’s limitations.

Think about the colonization of the New World. Except for a few temporary situations (like the Irish Potato Famine), Europe’s population continued going up, even as it sent more and more people to America. Europe today is more crowded than ever.

** This got me thinking. Back when cable TV was being established, we all took for granted the model of a privately financed network made economically feasible by granting a monopoly. But the New-Deal-era model of the rural electric co-ops also would have worked: government-guaranteed loans to establish consumer-owned co-ops. If we’d done that, every year you’d get to vote on the leadership and policy of your cable system.

Why the Austerity Fraud Matters

When disputes break out among academics, most people don’t care. For good reason: Academic controversies are usually hard to follow, and concern topics that wouldn’t matter to most of us even if we understood them. (I was in an academic dispute once, and my side won. Trust me, you don’t want to hear about it.)

But this week a controversy broke out in economics, and it actually deserves your attention. A paper that has had a major influence on public policy around the world turns out to be wrong. And not just wrong in a subtle way that only geniuses can see, or even wrong in an everybody’s-human way that you look at and say, “Oh yeah, I’ve done that.” This one was wrong in three different ways that make you (or at least me) say, “That can’t be an accident.”

The bogus paper came out in 2010: “Growth in a Time of Debt” by Carmen Reinhart and Ken Rogoff (both from Harvard). The paper that refutes it appeared last Monday: “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff” by Thomas Herndon, Michael Ash, and Robert Pollin (all from the University of Massachusetts).

Before I get into the back-and-forth of it, let’s return to why you should care. It has to do with whether the government should be trying to create jobs or cut spending.

Stimulus vs. austerity. Many countries came out of the Great Recession with a much larger national debt, but persistent unemployment and slow growth. And that led to a debate: The usual thing a government does when it has high unemployment and slow growth is spend money. (People need jobs and the private sector is skittish about expanding, so the government hires people to do things that need doing: building highways, fixing sewers, insulating homes, and so on. Or maybe the government boosts the economy by subsidizing certain kinds of consumption, like the popular cash-for-clunkers program that got a bunch of old gas-guzzling cars off the road.)

But maybe this time the thing to do was to cut spending, because of all that debt. Maybe spending more, and so increasing the national debt, would just make things worse.

The same debate was happening in all countries, and none of them went completely one way or the other. But the poster child for austerity has been the United Kingdom, where it hasn’t worked. Here’s how British economic growth has compared to the projections made by the UK’s Office for Budget Responsibility. Austerity has brought the UK essentially no economic growth for three years.

The US has had its own stimulus/austerity debate, which has kept the Obama administration from spending as much as it wanted (or as much as Paul Krugman wanted, which was even more). But compared to the other major economies, the US has been on the stimulus side of the debate, which is probably why (disappointing as our economy has been these last few years) we’re doing better than most other countries. (This graph is scaled so that all countries are equal when austerity-loving David Cameron became the UK’s prime minister.)

Basically, the US and Germany are the only countries in that group that have seen any net growth since 2008.

The gist of what we’ve seen since 2008 is: Keynes was right. In the long run you probably want to keep your national debt under some kind of control, but not when you have high unemployment and slow growth.

How Reinhart/Rogoff leads to Ryan. Now, obviously, the budget debate we keep having in Washington doesn’t acknowledge this reality at all. Conservatives like Paul Ryan and Rand Paul, who want drastic cuts in government spending (to them, the sequester is just a down payment), somehow get away with claiming to have a “pro-growth” agenda.

How is that possible? Well, partly it’s just dogma. The Gospel According to Ayn Rand states that government is always and eternally bad for the economy — she called for “a complete separation of state and economics” — and no accumulation of facts can outweigh holy writ.

But also, a handful of economists provide academic cover for the “pro-growth” austerity nonsense. And the biggest fig leaf in the bunch is the Reinhart/Rogoff paper. In his 2013 budget proposal, Ryan wrote:

Even if high debt did not cause a crisis, the nation would be in for a long and grinding period of economic decline. A well-known study completed by economists Ken Rogoff and Carmen Reinhart confirms this common sense conclusion. The study found conclusive empirical evidence that gross debt (meaning all debt that a government owes, including debt held in government trust funds) exceeding 90 percent of the economy has a significant negative effect on economic growth.

More precisely, R/R found a “threshold” that gets crossed when a nation’s public debt exceeds 90% of the annual GDP. (The United States currently has a debt-to-GDP ratio around 100%. It was comfortably below the 90% “threshold” until almost exactly the moment the R/R paper appeared.) In other words: All your economic intuition and experience might tell you not to cut spending in a slow-growth environment, but something magic happens when debt crosses 90%. Beyond that point, debt suddenly becomes toxic.

Jared Bernstein comments on the significance:

Those whose goal is severely shrinking the size of government in general and social insurance in particular need hair-on-fire results like this from established experts to keep the fire going, even in the face of statistics that lean strongly the other way

What they did and why it’s wrong. Reinhart and Rogoff looked at 20 industrialized countries year-by-year and divided the country-years into four bins: years when the national debt was 0-30% of GDP, 30-60%, 60-90%, and over 90%. They found significantly lower average economic growth in the over-90% bin. The average annual growth rates for the four bins in the 1946-2009 (post-WW2) period were 4.1%, 2.8%, 2.8%, and -0.1%.

Now, if you look at those countries and years one-by-one, the case isn’t always impressive. For example, 1946 in the US. We had a lot of debt because we’d just fought World War II, and we had a recession because all the discharged soldiers and laid-off tank-factory workers hadn’t found new jobs yet. So high debt and negative growth were happening at the same time, but not because government debt was killing the economy.

Those are the kinds of one-off situations that you hope cancel out in the averages. And they kinda-sorta do, if you assemble your data honestly and do the math right. Unfortunately, R/R did neither. When Herndon/Ash/Pollin go back and do the analysis right, growth in the over-90% bin jumps from negative 0.1% to positive 2.2%.

So what mistakes did R/R make? Well, one was really stupid: They plugged the wrong row number into a formula on their spreadsheet, so their average skipped a bunch of rows, representing 6 of the 20 countries. (They’ve confessed to that mistake.)

Second, their data-set didn’t really include all the country-years it should have. So, for example, New Zealand only has one year in their average, when it ought to have five. Unfortunately, that makes a huge difference in the country average, because that one year NZ had -7.9% growth, when the five-year average was +2.6%.

And third, they made the bizarre choice to average by country rather than by country-year. So that one anomalous year in New Zealand ended up constituting 1/14th of the entire average rather than the 1/110th it should have.
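The damage done by that weighting choice is easy to see with a toy dataset. The growth numbers below are invented for illustration; only the -7.9% New Zealand year matches a figure cited above.

```python
# Hypothetical growth rates (percent) for country-years in the over-90% bin.
# Invented numbers, not the actual Reinhart/Rogoff dataset.
bin_data = {
    "UK": [2.5, 2.4, 2.6, 2.3],  # four years over the threshold
    "US": [3.0, 2.9],            # two years
    "NZ": [-7.9],                # one anomalous year
}

# R/R's method: average within each country, then average the country averages.
# New Zealand's single bad year carries as much weight as the UK's four years.
by_country = sum(sum(v) / len(v) for v in bin_data.values()) / len(bin_data)

# The straightforward method: pool every country-year and average once.
all_years = [g for years in bin_data.values() for g in years]
by_country_year = sum(all_years) / len(all_years)

print(round(by_country, 2))       # -0.83: one outlier year flips the sign
print(round(by_country_year, 2))  # 1.11: the outlier is properly diluted
```

With seven country-years of mostly healthy growth, country-weighting turns the bin average negative on the strength of a single year. That is the mechanism by which one New Zealand year could become 1/14th of R/R's over-90% average instead of 1/110th.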

Why it’s so bad. The significance of the R/R paper comes entirely from those mistakes.

Yes, an honest and accurate accounting still shows a negative correlation between growth and debt-to-GDP ratios, but everybody would have expected that anyway, because there’s well-known causality in the other direction: recessions cause debt/GDP ratios to rise*. (GDP goes down because that’s the definition of a recession. Debt goes up for two reasons: Revenue drops because there’s less income to tax, and spending rises to pay for more unemployment insurance and food stamps.)
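The mechanics of that reverse causality are simple enough to check with made-up numbers (a hypothetical economy, not any actual country):

```python
# Hypothetical economy entering a recession with debt at 85% of GDP.
gdp, debt = 100.0, 85.0
ratio_before = debt / gdp

# Recession year: GDP shrinks 4%, while falling tax revenue and automatic
# spending (unemployment insurance, food stamps) add 8 points of debt.
gdp *= 0.96
debt += 8.0
ratio_after = debt / gdp

print(round(ratio_before, 3), round(ratio_after, 3))  # prints 0.85 0.969
```

The ratio climbs from 85% toward 97% purely because the denominator shrank and the automatic stabilizers kicked in; no debt-causes-slow-growth story is needed to produce the correlation.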

The only significant part of R/R was the threshold, and that was wrong: The something-magic-happens-at-90% was just a spreadsheet typo plus statistical sleight-of-hand.

So the data R/R assembled provides absolutely no reason to have some special fear about the current level of debt in the US. We haven’t just passed through some economic equivalent of the sound barrier. To the extent that debt was bad before, it’s still bad, and to the extent that it didn’t matter before, it still doesn’t matter.

Fraud. I anticipate taking heat for using the word fraud in the title. The Herndon/Ash/Pollin paper doesn’t use it, and to fully justify fraud you’d have to see into the hearts of Reinhart and Rogoff. Responsible academics are slow to use words like fraud, because academics are cautious in general. You’re not supposed to publish something you can’t fully prove, even if your rivals do.

But I’m not an academic any more, so I’m using a preponderance-of-evidence standard, not a beyond-reasonable-doubt standard. Let’s look at the three mistakes.

The spreadsheet error shows an unbelievable level of negligence, but if that were the only mistake I’d be inclined to give R/R some benefit of the doubt. The original mistake was almost certainly honest, but not finding the mistake is the real culpability. They didn’t look the gift horse in the mouth; the mistake gave them the result they wanted, so they didn’t check too hard.

They claim to have filled in the missing data in later research, but they’ve done nothing to point out what a difference it makes. And they defend their weighting scheme — an argument I could buy if they had defended that scheme in the original paper while pointing out the major difference it made in the result. But they didn’t. They were hoping the readers wouldn’t notice.

In their response to H/A/P, Reinhart and Rogoff defend their non-spreadsheet errors “in the strongest possible terms”.

But surely the authors do not mean to insinuate that we manipulated the data to exaggerate our results.

I can’t speak for H/A/P, but I won’t insinuate anything; I’ll say it outright: Yeah, R&R, you manipulated the data to exaggerate your results.

R/R’s response. One proof of the fraud is that they’re still doing it. Their response claims:

We do not, however, believe this regrettable slip [the spreadsheet error] affects in any significant way the central message of the paper or that in our subsequent work.

And that’s just flatly false.

Do Herndon et al. get dramatically different results on the relatively short post war sample they focus on? Not really. They, too, find lower growth associated with periods when debt is over 90 per cent.

And that’s sophistry. The “relatively short post war sample” consists of the economies that most resemble the United States today. And “lower growth” is not the result the paper is noted for; no one would care if that were the whole message, because it is completely explained by the well-known recession-causes-debt relationship. The 90% threshold is the paper’s claim to fame, and that result has blown up completely.

And finally, while they don’t explicitly claim that they’ve found a debt-causes-slow-growth relationship, they keep using their result as if they had. They do so even in their response:

There is also the question of whether these growth effects can be economically large. Here it is very misleading to think of 1 per cent growth differences without recognizing that the typical high debt episode lasts well over a decade (23 years on average in the full sample.)

It is utterly misleading to speak of a 1 per cent growth differential that lasts 10-25 years as small. If a country grows at 1 per cent below trend for 23 years, output will be roughly 25 per cent below trend at the end of the period, with massive cumulative effects.
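For what it's worth, R/R's compounding arithmetic is in the right ballpark, as a quick check shows (the 3% trend rate below is an illustrative assumption; only the 1-point gap matters much):

```python
# Sanity check of R/R's compounding claim: a growth rate 1 point below
# trend, sustained for 23 years (their average high-debt episode length).
trend_growth = 0.03                 # assumed trend rate (illustrative)
actual_growth = trend_growth - 0.01 # 1 percentage point below trend
years = 23

trend = (1 + trend_growth) ** years
actual = (1 + actual_growth) ** years

shortfall = 1 - actual / trend  # fraction of trend output lost
print(f"After {years} years, output is {shortfall:.1%} below trend")
```

Depending on whether you measure the gap relative to trend or to actual output, the cumulative shortfall comes out around 20–25 percent, which is what their response asserts. The arithmetic isn't the problem; the causality is.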

That point is utterly meaningless if the causality works in the other direction, if the slow growth is causing the debt rather than the other way around. And another re-analysis of the R/R data shows that’s what’s happening. That analysis also was simple to do. As Matt Yglesias comments:

it’s striking that R&R didn’t even check this. I don’t begrudge any academic’s right to rush into publication with an interesting empirical finding based on the assembly of a novel and useful dataset. I don’t even begrudge them the right to keep their dataset private for a little while so they can internalize more of the benefits. But Reinhart and especially Rogoff have spent years now engaged in a high-profile political advocacy campaign grounded in a causal interpretation of their empirical work that both of them knew perfectly well was not in fact supported by their analysis.

Buying apples, selling oranges. And that’s the important point. The biggest reason R/R’s paper has been so badly misused in our political debate is that they have been out there misrepresenting their results. Senator Coburn described their testimony to 40 senators a few months before the debt-ceiling debacle of 2011. After listening to their initial testimony,

Senator Kent Conrad, D-N.D., the chairman of the Senate Budget Committee, then offered his own stern warning to the assembled senators. Turning around in his chair in the middle of the room, he explained to his colleagues that when our high debt burden causes our economy to slow by 1 point of GDP, as Reinhart and Rogoff estimate, that doesn’t slow our [economic growth] by 1 percent, but by 25 to 33 percent, because we are growing at only 3 or 4 percent per year.

Did either professor interrupt to say, “Wait, Senator, we’re not saying the debt causes a slowdown. Our data just shows a correlation that could be explained by slowdowns causing high debt.”? No.

Reinhart echoed Conrad’s point and explained that countries rarely pass the 90 percent debt-to-GDP tipping point precisely because it is dangerous to let that much debt accumulate.

Fraud. Fraud, fraud, fraud.


* A point I often make when numbers appear in the Sift: Correlation is not causation. Correlation just means that two things tend to go together; causation means that one causes the other. A very common fallacy is to display a graph showing that A and B go up (or down) together, and then say that A causes B.

My favorite way to demonstrate the fallacy: Birthdays are good for you; people who have a lot of birthdays tend to live long lives.
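The fallacy is easy to demonstrate with made-up numbers: any two quantities that merely trend over time will correlate strongly, with no causation running in either direction (the series names below are hypothetical, purely for illustration).

```python
import random

random.seed(1)

# Two series that each just drift upward over time (hypothetical data):
# neither causes the other, yet they correlate almost perfectly.
years = range(50)
ice_cream_sales = [100 + 3.0 * t + random.gauss(0, 5) for t in years]
shark_attacks = [10 + 0.5 * t + random.gauss(0, 2) for t in years]

def correlation(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

r = correlation(ice_cream_sales, shark_attacks)
print(f"correlation: {r:.2f}")  # strongly positive, yet neither causes the other
```

The shared upward trend (time itself) does all the work, exactly as recessions do the work in the debt-and-growth correlation.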

I Read the Ryan Budget

Last week, when I talked about ideological bubbles and how to tell if you’re in one, I should have mentioned the best way to stay out of bubbles in the first place: Expose yourself to as many original sources as you can, especially the ones you know you’re going to hate.

With that in mind, I read Paul Ryan’s budget. (More accurately: I read the 91-page document he wrote to advertise his budget. An actual budget would have way more numbers in it.) In telling you about it, I’m going to try to keep my commentary as close to the text as possible, with quotes and page references as appropriate. (I wish I had the time to do an end-to-end annotation, but I’ve got some big deadlines looming.)

General impressions. Before I get into specifics, I want to say a few things about the overall impression the document makes.

As many people have already observed, Ryan’s proposal is not an attempt to reach a workable compromise with the White House or the Democratic majority in the Senate, both of which would have to agree before his plan could become law. Instead, it’s an aspirational document for conservatives: This is what they fantasize doing if and when they get complete control of the government.

There’s nothing wrong with that, but the Ryan Budget needs to be classed with aspirational budgets from the Left, like the People’s Budget put out by the Congressional Progressive Caucus (which also balances the budget in ten years). Both are shots across the bow, not plausible projections of what their backers think they can pass.

So Ryan has written a rallying cry for the troops of the conservative movement, not an attempt to convince or convert non-believers like me. The summary (page 7) says

This is a plan to balance the budget in ten years. It invites President Obama and Senate Democrats to commit to the same common-sense goal.

But there is no spirit of invitation in Ryan’s style. Any liberal who reads it will get pissed off, and I believe that’s intentional. Conservatives couldn’t fully enjoy their reading experience without visualizing pissed-off liberals.

Let me detail that: You’ve probably already heard that Ryan wants (once again) to try to repeal the Affordable Care Act (a.k.a. ObamaCare). But after the first mention, he can’t just call it by name. It’s “the President’s onerous health care law” (page 33) or “the President’s misguided health care law” (page 40) and so on, as if the ACA had been imposed on the country by imperial decree and Congress had nothing to say about it — also as if the ACA hadn’t been an issue in the 2012 election that Romney/Ryan lost by nearly five million votes.

Other partisan stuff is just silly. On page 24, President Reagan is given credit both for the economic expansion of his era, and of President Clinton’s era as well. Clinton is mentioned exactly once (on page 33, when Ryan re-raises the universally debunked lie from campaign 2012 that Obama wants to rescind the work requirement of Clinton’s welfare reform). The reader would never know that Ryan’s stated goal — a balanced budget — was achieved by Clinton (who raised taxes) while Reagan (who cut taxes) ran up record deficits.

You will also hear echoes of 2009’s Lie of the Year: death panels. The ACA sets up an Independent Payment Advisory Board (IPAB) to make annual recommendations (which Congress can rewrite before they take effect) on keeping Medicare spending within specified limits. The law specifically bans the IPAB from recommending care-rationing, but the heading of Ryan’s section on it (page 40) is “Repeal the health-care rationing board”.

Background assumptions. In the real world, if a program is important enough, the government could conceivably raise taxes or borrow to pay for it. OK, Ryan’s balanced-budget goal won’t let him advocate borrowing. But a fundamental assumption that runs through his whole budget — usually without being stated explicitly — is that taxes cannot be raised for any purpose. Nothing is important enough to raise taxes to pay for.

Also, defense spending is untouchable. “There is no foreseeable ‘peace dividend’ on our horizon.” (page 61)

So if the domestic demands on government are growing — the population is getting older, the infrastructure more decrepit, healthcare more expensive, weather-related disasters more extreme and more frequent, future economic growth more dependent on basic research and an educated workforce, etc. — any money you want to spend to deal with one of those challenges has to be taken from the others.

The idea that over the long term our country could decide that it wants to do more of its consumption publicly — that it wants to take its economic growth in the form of Medicare and public education, say, rather than BMWs — is completely off the table.

Big Picture. The numbers don’t appear until the Appendix (page 80). The Atlantic’s Derek Thompson put them into a bar graph:

Medicare and Social Security are usually considered “mandatory spending” (because benefits are defined by law rather than by appropriation), but I believe the additional $962 billion of 10-year savings is mostly Food Stamps, Pell grants, and so on.

So the cuts are almost entirely in healthcare, education, or anti-poverty spending. And while Ryan waves his hand at replacing Obamacare with “patient-centered health-care reforms” (page 33), apparently those reforms require no money from the government.

Meanwhile, rich people get a big bonanza: The top tax rate drops from the current 39.6% to 25%. If you make $10 million a year (some CEOs do), you could save nearly $15 million over the ten years Ryan’s budget covers.
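The arithmetic behind that figure is simple, as a back-of-the-envelope sketch shows (it deliberately ignores brackets, deductions, and how much of a CEO's income actually faces the top marginal rate):

```python
# Rough savings for a $10M/year earner if the top rate drops 39.6% -> 25%.
income = 10_000_000          # annual income, all assumed taxed at the top rate
old_rate, new_rate = 0.396, 0.25
years = 10                   # the window Ryan's budget covers

annual_savings = income * (old_rate - new_rate)
total_savings = annual_savings * years
print(f"${total_savings:,.0f} saved over {years} years")  # $14.6M, "nearly $15 million"
```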

So what isn’t in the budget document?

  • Any specifics about discretionary spending cuts. The cuts are just numbers on a spreadsheet. All the “tough choices” necessary to achieve those numbers are left to your imagination, so Ryan can deny his intention to cut anything in particular, as Mitt Romney did in his first debate with President Obama.
  • Any specifics about closing tax loopholes. Ryan claims his rich-guys-bonanza 25% tax rate wouldn’t cut federal revenue, because it would be balanced by eliminating tax loopholes. As in the 2012 campaign, Ryan says nothing about what those loopholes might be. Again, he can deny wanting to cut any specific item, like the mortgage interest deduction. But he’s got to raise that revenue somehow, and I seriously doubt it’s all going to come from the super-rich who benefit most from the lower rate.
  • Any plan for Social Security. Page 37 charges: “In Social Security, government’s refusal to deal with demographic realities has endangered the solvency of this critical program.” But rather than “deal with demographic realities” here and now, Ryan only “requires the President and Congress to work together to forge a solution.”

We have always been at war with Eastasia. The background rob-Peter-to-pay-Paul assumption allows Ryan to construct some truly Orwellian statements. This is particularly true in the “Opportunity Extended” section, which is all about shrinking opportunity for poor and working-class young people.

For example, on page 20 Ryan identifies “tuition inflation” as a problem that “plung[es] students and their families into unaffordable levels of debt”. And then he says:

Many economists, including Ohio University’s Richard Vedder*, argue that the structure of the federal government’s aid programs don’t simply chase higher tuition costs, but are in fact a key driver of those costs.

What could that possibly mean? Well, that federal aid is allowing too many people to go to college, creating a high-demand environment in which colleges can raise tuition. So the “solution” is to lower the maximum Pell grant (thereby “saving” the Pell grant program from spending at an “unsustainable” level, since we couldn’t possibly raise taxes to pay for it). Also to “target aid to the truly needy” by making families report more of their income on financial aid forms. Also “reforming” student loans and “re-examining the data made available to students to make certain they are armed with information that will assist them in making their postsecondary decisions”.

Presumably, when the facts of this harsher you’re-on-your-own world are “made available to students”, fewer of them will decide to go to college, thereby saving both their money and the government’s. So don’t worry about student debt — just don’t go to college at all if you’re not rich, and if you do go we’ll “help” you avoid massive debts by refusing to loan you money.

Oh, and we’ll also “encourage innovation” in education through “nontraditional models like online coursework”. Never mind that’s where the big scams are. Corporations profit from those scams, so that’s not “waste”.

Ditto for job training: Ryan promises to “extend opportunity” by spending less on it.

Ditto for the safety net. Since taxes can’t possibly be raised, every person who is helped by the safety net is taking those dollars away from somebody else who might be helped. So Ryan’s “A Safety Net Strengthened” section is all about spending less on the safety net. Mostly this is accomplished by block-granting programs like Medicaid to give “states more flexibility to tailor programs to their people’s needs.”

So if, say, low-income Texans need to toughen up and stop seeing a doctor at all, Texas can tailor its program that way. That’s what it’s doing already with the “flexibility” the Supreme Court gave it last summer.

Energy. Climate change just isn’t happening. Ryan doesn’t make that claim in so many words, but there’s a big empty spot where climate change would otherwise have to figure in.

He clumps energy together with a grab-bag of other issues in the “Fairness Restored” section. The “unfairness” in this case is the way that the Obama administration favors clean energy over dirty energy. Ryan will “end kickbacks to favored industries” like wind and solar in favor of “reliable, low-cost energy” like coal, oil, and gas. With climate change out of the picture, only corruption can explain Obama’s favoritism. In the Introduction, Ryan says his budget “restores fair play to the marketplace by ending cronyism.”

In current energy policy, fossil fuels and green energy are subsidized in different ways: Green energy gets grants and loans while established-and-profitable fossil energy gets tax breaks. Tax breaks are invisible to Ryan, so he can say on page 50:

on a dollar-per-unit-of-production basis, the level of subsidies received by the wind and solar industries were almost 100 times greater than those for conventional energy

Do it for the kids. So what’s the purpose of all this? A better world for our children. “By living beyond our means, we’re stealing from the next generation.” (page 5)

Of course my baby-boom generation knows how that works, because all that debt America ran up during World War II was “stolen” from us, right? I don’t know how I failed to notice.

In the real America, the big deficits of World War II kicked off 40 years of prosperity, during which the country achieved a level of equality that it hasn’t equalled before or since. So no, deficits are not “stolen” from the future. My generation did not build tanks and landing craft and put them in time machines to send back to D-Day.

But in order to save our children from the horrible maybe-sorta-problem of the national debt, we need to under-educate them; not do basic research that might create the next computer industry or Internet; leave them crumbling roads, bridges, and electrical grids; not care for them when they get sick; move in with them when we get old; and leave them with a torched planet, where Iowa is a desert and Miami is underwater.

I’m sure they’ll thank us for our foresight.


* As best I can tell, although Ryan identifies only their university affiliations, every economist Ryan mentions by name is inside the conservative bubble. Richard Vedder is with the American Enterprise Institute and John Taylor with the Hoover Institution.