
The corporate tax cut will never trickle down

The immediate benefits of the corporate tax cut have gone to stockholders and executives rather than workers. The long-term benefits will too.


Dropping the corporate tax rate from 35% to 21% was the centerpiece of the tax reform package Republicans passed (with no Democratic votes) and Trump signed late last year. They sold that cut with the argument that lower corporate taxes would stimulate investment: Rather than build that new factory in Indonesia or Vietnam, a corporation might site it in Iowa instead, creating new jobs and raising wages. So while it might look like the benefits would go entirely to wealthy shareholders, in the long run that money would flow to American workers. American households, Trump economic advisors claimed, would see their incomes go up by $4000 a year over the next 3-5 years.

For a few weeks, it looked like the trickle-down was happening: A number of companies responded to the tax cut by giving their workers a one-time $1000 bonus — small potatoes compared to what the companies themselves were set to rake in, but not bad if it represented a down payment on future wage increases.

But how long would it take those increases to show up? Well, not immediately, in spite of the well-publicized bonuses. And not in one quarter. CBS reported in April that the corporate windfall (financed by increasing the federal budget deficit) was mostly going into stock manipulations.

In the first quarter, corporate America committed $305 billion to cash takeovers and stock buybacks, more than double the $131 billion in pre-tax wage growth for both new and existing workers subject to income tax withholding, TrimTabs calculates.

Worse, the Bureau of Labor Statistics is reporting bad news for “production and nonsupervisory employees”.

From May 2017 to May 2018, real average hourly earnings decreased 0.1 percent

The Washington Post elaborates, saying that this category “accounts for about four-fifths of the privately employed workers in America”. It also provides this graph.

How long? But in terms of the tax cut, it’s still early days. Of course the process of building new factories and hiring new workers would take longer than just a few months. So when should we expect the corporate tax cut to trickle down? Two years? Five years? Ten?

What about never? In his Friday column, Paul Krugman explains why the tax-motivated new factories and jobs and higher wages aren’t coming, not immediately and probably not ever. He labels his argument as “wonkish”, meaning that ordinary people who aren’t economists may find it hard to follow. So let me interpret a little.

The vision of low corporate taxes creating new jobs with higher wages comes from the Industrial Era, the age of coal-powered textile mills and Henry Ford’s assembly lines. Business investment in those days was mostly big, heavy equipment that cost a lot of money and was meant to last for decades or even longer. (I live in an apartment in a converted textile mill. The mill was built in the 1820s.) Businesses were national (or more likely, local) in those days, so a company located in Akron or Dearborn paid taxes in Akron or Dearborn.

That’s not what the economy looks like any more.

Tax havens. The biggest corporations are multi-national, and they book their profits in whatever countries their accountants choose. One trick is to transfer a company’s intellectual property to a foreign subsidiary, and then pay massive royalties and licensing fees to that subsidiary.

The rights to Nike’s Swoosh trademark, Uber’s taxi-hailing app, Allergan’s Botox patents and Facebook’s social media technology have all resided in shell companies that listed as their headquarters Appleby offices in Bermuda and Grand Cayman, the records show.

When pieces of your product — an iPhone, say — are made all over the world, who’s to say what country the profit is made in? Your accountants say. And they all say the same things: You made your profits in a tax haven.

Indeed, a tiny handful of jurisdictions — mostly Bermuda, Ireland, Luxembourg and the Netherlands — now account for 63 percent of all profits that American multinational companies claim to earn overseas, according to an analysis by Gabriel Zucman, an assistant professor of economics at the University of California, Berkeley.

Think about it: When was the last time you bought something marked “Made in Luxembourg”? Multinationals don’t build factories and employ workers in low-tax countries, they just route their profits there.

Krugman looks at the profit-to-wage ratio of foreign firms and local firms in a variety of countries.

If places like Puerto Rico and Ireland were just massively more productive than the US or Germany — producing enormous profits with relatively low labor costs — that would apply to their local firms too. But it doesn’t. For local firms, the ratio of profits to wages stays pretty constant across the board. It’s only foreign firms that have managed to unlock the Irish productivity miracle — not with actual production that employs workers, but via accounting tricks that claim profits produced by workers in other countries.

In short, multinational corporations have benefited enormously from Ireland’s generous tax laws. Irish workers, not so much. And with time, the corporations get better and better at gaming the tax system.

So lower US corporate taxes may induce corporations to book more of their profits here, for what that’s worth. But that’s an accounting gimmick, not an actual change in economic activity.

But even with that illusion making the effect look bigger than it is, won’t lower taxes still motivate investment and create jobs? Why doesn’t that work? This is where Krugman gets wonkish.

What investment means now. In the Industrial Era, nothing was more solid than a factory. Henry Ford started building his massive River Rouge complex in Dearborn during World War I, and it’s still there. Once it made Model T’s; now it makes F-150 trucks. The US Steel complex in Gary is even older, going back to 1908. Firestone in Akron, Caterpillar in Peoria — the big Industrial Era companies were virtually synonymous with the towns where their factories were.

In the Industrial Era, corporate investment was long-haul investment. You bought land and erected massive buildings to house huge machines. You dug canals and built railroad spurs that came right up to the beginnings and ends of your production lines. The industrialists who made those investments were looking half a century into the future, or even longer.

But most corporate investment these days is far more ephemeral. Take Google, the second-most valuable company in the world. What does it make exactly? Where is its River Rouge or Gary Works? If it wants to create a new product, it may have to hire some extra designers and programmers. But what does it invest in? An office, some computers. The office could be rented, the computers will be obsolete in a few years. Ditto for Facebook. Amazon also needs some warehouses, and maybe some robots to move boxes around. In a few years the warehouses could be somewhere else and the robots will be replaced by better robots. It’s all short-term stuff.

Whenever a company makes an investment, it’s weighing its expected profits against two things: the cost of capital (for example, the interest rate it has to pay on the money it borrows) and the depreciation rate (how fast the investment becomes obsolete). In the Industrial Era, when a factory complex or a railroad might be around for half a century, depreciation was low. So the cost of capital really mattered. If interest rates dropped from 6% to 4%, all your calculations changed. Investments you’d been putting off suddenly made sense again.

But when the equipment you’re buying is going to be scrap in 3-5 years, the cost of capital doesn’t matter nearly so much. Cutting interest rates still motivates people to buy houses, because those are long-term investments. But it doesn’t motivate business investment much any more. Krugman looks at the huge interest rate spike of 1979-1982, when the Fed pushed rates up over 20%. Housing investment crashed. Business investment not so much.

If that divergence was already happening in the early 80s, it’s even more so now.
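To put rough numbers on that point, here’s a back-of-the-envelope sketch in Python (my own illustrative figures, not Krugman’s): the annual cost of holding a dollar of capital is roughly the interest rate plus the depreciation rate, so the same rate cut moves the needle far more for a factory built to last fifty years than for computers that are scrap in four.

    # Rough user cost of capital: interest rate plus depreciation rate.
    # Illustrative numbers only; not taken from Krugman's column.
    def annual_capital_cost(interest_rate, depreciation_rate):
        return interest_rate + depreciation_rate

    for label, depreciation in [("50-year factory", 0.02), ("4-year computers", 0.25)]:
        cost_at_6 = annual_capital_cost(0.06, depreciation)  # 6% interest
        cost_at_4 = annual_capital_cost(0.04, depreciation)  # 4% interest
        drop = (cost_at_6 - cost_at_4) / cost_at_6
        print(f"{label}: annual cost falls {drop:.0%} when rates go from 6% to 4%")

    # The factory's annual cost falls about 25%; the computers' falls about 6%.
    # The same rate cut matters a lot for the long-lived investment, much less for the short-lived one.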

What’s that have to do with tax rates? Now comes the wonky part:

What does this have to do with taxes? One way to think about corporate taxes in a global economy is that they raise the effective cost of capital. Suppose global investors demand an after-tax rate of return r*. Then the pre-tax rate of return they’ll demand in your country – your cost of capital — is r*/(1-t), where t is the marginal tax rate on profits. So cutting the corporate tax rate reduces the effective cost of capital, which should encourage more investment.

Let’s work an example of that. Suppose global investors are looking for a 5% return on their investment after taxes. (That’s Krugman’s r*.) If the corporate tax rate is 35%, they’ll need to make a pre-tax return of 7.7%. (That’s 5%/(1 – .35).) So for every $1,000 you invest, you make $77, you pay 35% of your profit in taxes ($27), and you wind up with $50, or a 5% profit.

Now cut the tax rate to 21%. Now you only need to make 6.3% before taxes to wind up with 5% after taxes. For every $1,000 invested, you make $63, pay 21% in taxes ($13) and wind up with $50.

So in this example, the tax cut effectively reduces the cost of capital from 7.7% to 6.3%.
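If you want to check that arithmetic, a few lines of Python reproduce it. The 5% after-tax target and the $1,000 stake are just the assumptions from the example above, not anything Krugman specifies beyond his r*/(1-t) formula.

    # Required pre-tax return is r* / (1 - t), per the formula quoted above.
    # The 5% target and the $1,000 stake are the illustrative assumptions from the example.
    def pre_tax_return(after_tax_target, tax_rate):
        return after_tax_target / (1 - tax_rate)

    for tax_rate in (0.35, 0.21):
        r = pre_tax_return(0.05, tax_rate)
        profit = 1000 * r            # pre-tax profit on a $1,000 investment
        taxes = profit * tax_rate
        print(f"t = {tax_rate:.0%}: need {r:.1%} pre-tax -> "
              f"${profit:.0f} profit, ${taxes:.0f} tax, ${profit - taxes:.0f} kept")

    # t = 35%: need 7.7% pre-tax -> $77 profit, $27 tax, $50 kept
    # t = 21%: need 6.3% pre-tax -> $63 profit, $13 tax, $50 kept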

That would have been a big deal to Henry Ford or Andrew Carnegie. Jeff Bezos and Mark Zuckerberg prefer the lower rate, of course, but it doesn’t drive their decisions in the same way.

Hence Krugman’s conclusion: It’s not that cutting corporate taxes will have no effect on jobs or wages, but it’s going to work out to a huge loss of government revenue in exchange for a small number of jobs.

But the vision of a global market in which real capital moves a lot in response to tax rates is all wrong; most of what we see in response to tax rate differences is profit-shifting, not real investment. And there is no reason to believe that the kind of tax cut America just enacted will achieve much besides starving the government of revenue.

The end result. Krugman’s argument needs one more step, because he leaves one question unanswered: Why should you care if the government collects less tax revenue? OK, maybe the lost revenue flows mainly to rich shareholders and billionaire CEOs and only a few jobs are created. Maybe the overall effect on wages doesn’t amount to much. But if it’s something, isn’t that good? The taxman may bag a little less — or even a lot less — but why should American workers cry about that?

Over the last few decades, conservatives have done a good job of convincing many Americans that taxes just go down a rat hole and aren’t connected to the valued services government provides. (In states like Kansas and Louisiana, though, people are starting to see the relationship.) And for the moment, Republicans have stopped worrying about the budget deficits that they were so focused on during the Obama administration. Less revenue means bigger deficits, but, again, why should you care?

Because deficit phobia will be back someday. We are already looking at trillion-dollar deficits beginning in 2020, and that’s under the assumption that we aren’t in recession by then. (This economic cycle is already getting a little old; that’s why unemployment numbers are so low.) In any serious recession — and one always comes eventually — the deficit will top $2 trillion, which is much higher than the record Bush/Obama deficit of FY 2009.

There is only one pile of money big enough to cover a shortfall like that: entitlements like Social Security and Medicare. (We could zero out the defense budget and still have a deficit.) When Republicans remember that they care about deficits, that’s where they’re going to look.

So American workers who cheer for the corporate tax cut are like Esau being grateful to Jacob for his porridge: In the long run, the tax cut they let the rich monopolize will cost them their birthright of Social Security and Medicare.

Does the Exploding Federal Deficit Matter?

Republicans claimed that Obama’s deficits were apocalyptic, but trillion-dollar deficits are fine now that Trump is president. What’s the right level of concern?


In his 2008 stump speech, John McCain used to say that accusing Congress of spending money like a drunken sailor was an insult to drunken sailors. McCain is an old Navy man, the son and grandson of admirals, so he was particularly well positioned to take offense. The line usually got a good laugh.

Out-of-control debt and spending was a standard Republican complaint all through the Obama years. The Tea Party’s original claim to being non-partisan was that they also accused the Bush administration of being wild spenders, abetted by K-Street establishment Republicans as well as Democrats. For almost a decade now, Republicans of all stripes have railed against the deficit. Some dark curse would steal away our economic growth, their economists’ spreadsheet errors told us, if the total national debt ever got close to the annual GDP. As a result, Obama’s budgets turned into an annual game of chicken, the second round of stimulus spending never happened, infrastructure continued to decay, and we were stuck with a sluggish economy that didn’t get unemployment back under 5% until 2016.

But then the Electoral College appointed Trump president, and now the Bush days are back again: Deficits don’t matter. We can cut taxes and raise spending and everything should be fine (until the next Democratic president takes office, at which time the party will be over and the national debt will once again be an existential threat to the Republic). So Obama cut his inherited deficit in half, while Trump is in the process of pushing it back up again. The latest estimate of the FY 2019 deficit is $1.2 trillion, possibly rising to over $2 trillion by 2027. [1] And that doesn’t count the infrastructure plan that Trump plans to release today.

That’s been the pattern since Ronald Reagan: Republicans blow up the deficit, and then pressure Democrats to deal with it — which they’ve done. Presidents are inaugurated in January, inheriting a budget that started in October. Together, Clinton and Obama shaved more than a trillion dollars off the deficits of their entering year. But that was no match for the $1.7 trillion that Reagan and the two Bushes added to their entering deficits.

President | Entering deficit | Exiting deficit | Change
Trump | -666 | ??? | ???
Obama | -1413 | -666 | +747
Bush II | +128 | -1413 | -1541
Clinton | -255 | +128 | +383
Bush I | -153 | -255 | -102
Reagan | -79 | -153 | -74

(Numbers in billions of dollars, from thebalance.com. Negative numbers are deficits; the lone positive number is a surplus.)

The GOP has never owned up to that pattern in its rhetoric, though. As Reagan was entering office, he scolded Congress about runaway debt.

Can we, who man the ship of state, deny it is somewhat out of control? Our national debt is approaching $1 trillion. A few weeks ago I called such a figure, a trillion dollars, incomprehensible, and I’ve been trying ever since to think of a way to illustrate how big a trillion really is. And the best I could come up with is that if you had a stack of thousand-dollar bills in your hand only 4 inches high, you’d be a millionaire. A trillion dollars would be a stack of thousand-dollar bills 67 miles high. The interest on the public debt this year we know will be over $90 billion, and unless we change the proposed spending for the fiscal year beginning October 1st, we’ll add another almost $80 billion to the debt.

So what did he do? He cut taxes, raised defense spending, and never ran an annual deficit less than $100 billion, peaking at $221 billion in FY 1986. In total, he added another $1.4 trillion to the national debt.

Trump is following the same script. In the short run, it’s good politics. Everybody likes a tax cut. If the increased spending means that the defense industry in your area starts hiring again, your local highways get resurfaced, or you don’t have to deal with cuts in Medicare, Social Security, CHIP, or whatever other government program your family relies on, then you’re happy. Compared to something immediate and personal, like whether you have a job or your kids can get the medical treatment they need, the federal deficit seems like an abstract, remote problem.

And yet, it’s hard to escape the nagging feeling that we can’t get something for nothing. If the government keeps spending and stops collecting taxes, it seems like something bad ought to happen eventually. But what?

Bad analogies. One problem we have in thinking about this question is that our national conversation about debt has been polluted by a really bad metaphor: The government’s budget is like your household budget.

The deficits-are-good-politics part of that analogy works. If you’re the budgeter in your household, and you suddenly decide that running up a big debt is no big deal, you can make everybody pretty happy for a while. The kids can get the Christmas presents they want. When nobody feels like cooking, the family can eat at a nice restaurant. That big vacation you’ve dreamed about can happen this summer rather than sometime in the indefinite future. If the job is getting to be too big a hassle, your spouse can just quit. It’s all good.

Until it’s not. Eventually, the household metaphor tells us, the bills will have to be paid, and then bankruptcy looms. And that’s where the analogy breaks down. Your household spending spree can’t go on forever, but it wouldn’t have to end if your bank simply cashed all your checks and never bothered you about the fact that your account is deep in the red. That’s the situation the U.S. government is in: The bank is the Federal Reserve, and it can (and will) simply honor all the checks the government writes.

Pushing the household analogy further, you might ask: But what happens when the bank runs out of money? In the case of the Fed, that can’t happen, because dollars are whatever the Fed says they are. For example, one of the ways the Fed dealt with the financial crisis that began in 2007 is called quantitative easing, which is defined like this:

Quantitative easing is a massive expansion of the open market operations of a central bank. It’s used to stimulate the economy by making it easier for businesses to borrow money. The bank buys securities from its member banks to add liquidity to capital markets. This has the same effect as increasing the money supply. In return, the central bank issues credit to the banks’ reserves to buy the securities. Where do central banks get the credit to purchase these assets? They simply create it out of thin air. Only central banks have this unique power.

Several countries’ central banks did this, but none more aggressively than the Fed, which created $2 trillion just by typing some numbers into its central computers. Since there’s no limit to the number of dollars the Fed can create this way, it can buy as many bonds as Congress wants to authorize. So there’s no limit to what the U.S. government can spend.

Consequently, anybody who talks about the U.S. government going bankrupt is just being hyperbolic. The government can refuse to cover its debts (that possibility is what the debt-ceiling crises of 2011 and 2013 were about), but it can’t be forced into bankruptcy. [2]

So what really goes wrong? You know something has to, because otherwise the government could just make us all rich.

The government’s debt gets financed in two different ways, and they correspond to the two things that can go wrong: high interest rates and inflation.

One way the debt gets financed is that investors buy government bonds. You may own some yourself, and if you have a 401k, probably some of that money is invested in mutual funds that own some government bonds. Banks or corporations with extra cash may hold it in the form of government bonds.

Investors like U.S. treasury bonds because they pay interest. But like every other market, the market for treasury bonds works by supply and demand. If the supply of bonds zooms up (because the government is borrowing more money), they won’t all get bought unless something attracts more investors. In this case, the “something” is higher interest rates. The more the government needs to borrow from investors, the higher the interest rate it will have to pay.

Since the U.S. government can’t go bankrupt, investors would rather loan to it than to just about anybody else. So the only way you or Bill Gates or General Motors can get a loan is to pay a higher interest rate than the going rate on treasury bonds. The government borrowing more money can therefore result in everybody paying higher interest rates: Your mortgage rate goes up, your credit-card rate goes up, businesses that want to borrow money to expand have to pay more, and so on. If interest rates get high enough, people and businesses will stop borrowing, the ones who can’t cover the higher interest payments will go bankrupt, and the economy will fall into a recession.

[Image: The 50-billion-mark note of 1923.]

The other way the government deficit gets financed is that the Fed can buy the bonds itself, creating dollars out of thin air to do so. This is the modern-day analog of governments paying their bills by printing money, and it can have the same result as when the German government printed money in the 1920s: inflation. It makes sense: dollars are part of a supply-and-demand system too, so increasing the number of dollars should decrease how much each of them can buy. [3]

Except … Notice that I keep using words like can and should. What makes economics such a hard subject is that simple reasoning like this doesn’t always pan out. Sometimes when the Fed creates more money, the economy just soaks it up. If the economy has unused capacity — if, say, there are idle mines and factories, and unemployed workers who want jobs — the extra money might just bring all that back to life. If more people start working and spending, producing and consuming more goods and services, then the normal function of the economy requires more money. So the money the Fed creates might not cause inflation. And if investors are having trouble finding attractive alternative investments — as they do when economic prospects are iffy for everybody — they might be happy to loan the government more money without a higher interest rate.

In other words, sometimes there really is a free lunch. The government can borrow more money, make a bunch of people happy, and nothing bad happens.

That’s how things played out during the Obama years (and also during Reagan’s administration). The national debt went up substantially, the Fed created trillions of dollars, and yet both interest rates and inflation stayed low. (In Reagan’s case, interest rates were at record highs when he came into office, and went down from there.) Conservative deficit hawks kept predicting that the sky was about to fall on us, but it didn’t. [4]

What about now? The reason we got away with running such big deficits during the Obama years was that the economy was in really bad shape when he took office in 2009. Left to its own devices, the economy looked likely to go into a deflationary cycle, where money stops circulating and suddenly no one can pay their debts: Businesses go bankrupt, so workers lose their jobs and creditors don’t get paid. That causes them to go bankrupt, and the whole vicious cycle builds on itself.

Classic Keynesian economic theory says that the government should run deficits during busts and surpluses during booms. [5] That way the overall debt stays under control and the economy grows without violent swings up and down. That’s what the record $1.4 trillion deficit in FY 2009 (the Bush/Obama transition year) was for: It provided some inflationary pressure to balance the deflationary pressure of the Great Recession. The government played its Keynesian role as the spender of last resort, and so money kept flowing. Without that stimulus, things could have been much worse.

But the situation right now is very different. For the last several months, the unemployment rate has been 4.1%, the lowest it has been since the Goldilocks years of the Clinton administration. We’ve never run a trillion-dollar deficit during a time of economic growth and low unemployment, but we’re about to.

In this situation, we’re unlikely to get the free lunch. The free lunch happens because productive capacity is just sitting there, waiting for new money to bring it to life. If you need more workers, you don’t have to hire them away from somebody else, you can hire them off the unemployment line. When a business increases its orders, its suppliers don’t have to build new plants or pay overtime, they just start running their factories on their regular schedules rather than at a reduced rate.

When the economy is already humming, though, increased inputs come at a higher cost. Somewhere there are going to be bottlenecks, places where supply can’t be increased easily, and the limited supply will go to the highest bidder at a premium. Those price increases ripple through the system, and you have inflation.

Inflation hasn’t shown up yet, though interest rates have already started to rise. Back in September, when passing a tax cut still seemed unlikely, rates on the 10-year treasury bond were barely over 2%. Now they’re a little under 3%. The stock market doesn’t submit to interviews, so no one can say exactly why the Dow Jones Index dropped 2800 points in 9 business days. But traders keep citing worries about inflation and interest rates.

Only hindsight will be able to tell us whether the markets are over-reacting. But there is a limit to how much debt the government can pile up without bringing on inflation and high interest rates. We just don’t know what it is.


[1] The last $400 billion on that estimate (the white box in the chart) comes from two temporary changes that Republicans assure us they intend to be permanent: the part of the recent tax bill that benefits individuals and some taxes that were part of the Affordable Care Act that have since been delayed. So Republicans can claim the deficit will only (!) be $1.7 trillion in 2027 if they admit that the long-term tax cut was really just intended for corporations.

[2] Somebody out there is asking: “What about Greece?” During the last decade, the Greek government has had a series of major financial crises that revolved around not being able to finance its national debt. Why won’t that happen to us?

The difference is that Greece doesn’t have a true central bank that controls its own currency. Greece is part of the euro-zone, so when it runs a deficit, it needs to borrow euros. Euros are controlled by the European Central Bank, a pan-European institution that feels no obligation to buy the Greek government’s bonds.

[3] That’s what goes wrong with the government making us all millionaires. The first thing you’d probably do if you became a millionaire is hire somebody to do some cleaning. But the people you’d be trying to hire are now millionaires too, so they’re not going to work for the same rate you’d have paid them before.

In addition to what I’ve described, inflation and interest rates can also interact: If investors expect the dollars they’ll be repaid in the future to be worth less than the dollars they’re loaning out now, they’ll want a higher interest rate to make up the difference. The value of the dollar in other currencies also comes into play: Inflation pushes the value of the dollar down, while higher interest rates prop it up. Things get complicated.
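A quick sketch of that first interaction, with made-up numbers: the nominal rate investors demand is roughly the real return they want plus the inflation they expect.

    # Rough version of the inflation/interest-rate interaction described above.
    # The 2% real return and 3% expected inflation are made-up illustrative numbers.
    real_return_wanted = 0.02
    expected_inflation = 0.03
    nominal_rate = (1 + real_return_wanted) * (1 + expected_inflation) - 1
    print(f"Nominal rate demanded: {nominal_rate:.1%}")  # about 5.1%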

[4] The showdown that led to the 2011 debt-ceiling crisis was foreshadowed by Paul Ryan’s report “The Path to Prosperity”, which called for drastic reductions in government spending.

Government at all levels is mired in debt. Mismanagement and overspending have left the nation on the brink of bankruptcy.

The cause for Ryan’s alarm was the $1.2 trillion deficit in Obama’s proposed FY 2012 budget. That’s virtually identical to the FY 2019 deficit Ryan has voted for.

[5] In 1937, John Maynard Keynes wrote: “The boom, not the slump, is the right time for austerity at the Treasury.” In actual practice, we’ve usually run big deficits during busts and smaller deficits during booms. But the overall principle is the same.

Visions of a Future Gift Economy

Cory Doctorow’s recent novel Walkaway imagines a world where scarcity is unnecessary and generosity is a feasible way of life.


When you take a mountaintop view that lets all the gritty details blur into insignificance, most of our political arguments come down to two visions of how an economy might function. We might have a capitalist market economy, where good things are scarce and people compete to obtain them (and possibly fail to obtain necessities like food or medical care). Or we might have a socialist command economy, where central planners figure out how the work all of us do is going to produce the goods and services all of us need.

Our current economy is a blend of the two — a mostly capitalist economy sitting over a socialist safety net that is maintained by a tax-supported central government — and our endless political debates are about where the capitalist/socialist boundary should be. Do we want higher taxes and a sturdier safety net, or lower taxes and a flimsier safety net?

There is, however, a completely different third vision, which for most of human history has sounded kind of crazy: an anarchist gift economy, in which people compete not to obtain scarce goods, but to give the most impressive gifts.

Christmas dinner. Gift economies already exist in little niches, on very small scales. For example: the pot-luck family Christmas dinners I remember from when I was growing up. If you approached the dinner like the homo economicus of capitalist theory, you’d bring the minimal dish to get yourself in the door, and then pig out on what everybody else brought. The obvious result, as any economist could predict, would be a tragedy of the commons: Everybody would bring less and less as the years went by, until Christmas became a celebration of scarcity rather than abundance.

If such an outcome didn’t kill Christmas entirely, it would probably lead to a socialist revolution: A central planning committee would make sure we all got enough to eat by telling everyone exactly what to bring, specifying quantity and quality very precisely, and checking that no one cheated. So food would be plentiful again, but even so, the joy of the season might get lost.

In fact, though, neither of those things ever happened. Instead, my aunts competed with each other to bring the most appealing dishes, probably secretly hoping that everybody would eat their food first and only eventually get around to sampling what the other aunts brought. The way you won Christmas wasn’t to get the best deal for your household, it was to give the best gift. As a result, the common table was anything but tragic; we all stuffed ourselves and there was plenty left over.

Sweat and scale. Critics will ask how that example scales up, and they’ll have a point. The general human condition was laid out thousands of years ago in Genesis: “In the sweat of thy face shalt thou eat bread.” Ever since we got kicked out of Eden, good things have required work, and work has been disagreeable. Christmas dinners are one thing, but in general nobody’s going to do the world’s work voluntarily, just so other people can have nice stuff.

Imagine, for example, being a New York gentleman shopping for a shirt around 1850 or so: The raw cotton has come from slaves working under the lash, and has been turned into thread and cloth and finally a shirt in factories where teen-age Irish immigrant girls get respiratory diseases from breathing the lint spewed out by the big machines. None of them would have put themselves through that just to give you a shirt.

And yet, as technology hands more and more of the economy’s grunt work off to machines, gift-economy niches are expanding, especially in any area that involves information or the internet. Wikipedia is a darn good encyclopedia. Linux is a top-notch operating system. They both required huge amounts of human effort to create, but they’re gifts; they exist (and are continuously updated) because people want to make themselves useful, even if they’re not paid for it. [1]

Facebook and other social-media platforms are a fascinating hybrid of economic models: Mark Zuckerberg got fabulously wealthy by putting a capitalist interface around a gift economy. Nobody (other than maybe a few of his personal friends) uses Facebook because they want to interact with Zuckerberg. We use it to see the interesting, clever, and entertaining things other people post for free. Like my aunts at Christmas, we compete with each other to provide more and better free content. The ads that have made Mark a zillionaire are the friction that we tolerate for the chance to give and receive each other’s gifts.

Goods and services. Still, Linux-programming nerds are a special case, and a real economy is more than just clever tweets or cute cat videos. What about services that require time and effort here and now? Will people provide that for free?

Yeah, they will. Look at retired people, especially professionals who did something more interesting than purely physical labor. Often they keep doing similar work on a smaller scale for nothing. Retired public school teachers teach art classes at the community center, or mentor at-risk students one-on-one. Retired business executives give free advice to small start-ups. Retired doctors and nurses help out at free neighborhood clinics, or go off to disaster areas like Haiti after the earthquake or Puerto Rico after the hurricane.

When you ask such people why they stopped working for pay, the answer usually isn’t that they wanted to do nothing; it’s that the jobs available were too exhausting and constraining. The workplace wanted too much out of them, or left too little room for the parts of the job they most enjoyed. Young people describe the same situation from the opposite side: It’s not hard for them to think of ways to use their talents to help people and make stuff, or even for them to get excited about doing so. What’s difficult is figuring out how to get paid for it.

Anyone involved with a volunteer organization knows that people will even step up to do physical labor as long as there’s not too much of it. If you require long hours of drudgery day after day, you’ll have to pay somebody. But if you want a bunch of people to paint the new school or clean out the church basement, you can usually get that done by volunteers. If not for the thought that some big corporation would be making money off of us, I could imagine people volunteering to help at UPS during the holidays, as long as we could do it on our own terms. “Hey, I’m not doing anything Tuesday. You want to go deliver some Christmas presents?”

Material goods. OK, but what about real stuff? Physical things are different from information or services.

But not as different as they used to be. 3D printing is still in its infancy, but it looks like a bridge between the information-wants-to-be-free world of the internet and the sweat-of-thy-face world of physical objects. Most of what you can make now falls under the broad heading of “cheap plastic crap”, but you only have to squint a little bit to see future printers that are more like general fabricators: They’ll use a greater variety of materials, and weave them together on smaller and smaller scales, until we have something approaching the replicators of Star Trek.

In the future, you might acquire a shirt by getting your torso scanned, choosing from a set of designs somebody posted free to the internet, and having your general-purpose home fabricator assemble the shirt molecule-by-molecule, using one or two of your worn-out shirts as raw material. No slaves. No wheezing red-haired girls. Just energy (which you might have gotten free from the wind or sun), computing power (so cheap that it’s barely worth accounting for), and gifts from other people.

It’s a stretch, but you can imagine even food working that way eventually: Get some organic molecules by throwing your grass clippings into the fabricator, and take out a beef stroganoff — or maybe at least some edible substance that is tasty and nutritious. In the meantime, people love to garden or raise chickens or tend bees. A lot of them happily give their surplus away. At the moment, that’s not nearly enough to feed the world. But such small-scale producers might come a lot closer if they didn’t need to have jobs or sell their produce for money. If you needed traditional food merely as a garnish, and got your basic nutrition elsewhere, the gift culture might provide it.

In short, some desirable things — beachfront homes, original copies of Action #1 — might always be scarce and remain part of a market economy. But it’s possible to imagine the market and gift economies switching places: Markets might become niches, as gift economies are now.

Capitalism and the surplus population. Compare the gift economy’s trends to what technology is doing to the capitalist economy. Picture a capitalist economy as a set of concentric circles: The innermost one consists of the relatively small number of people who increasingly own everything. They can afford to get whatever they want, so there has to be a next circle out, consisting of the people who produce goods and services for the rich: food and clothing, obviously, but also yacht designers, heart specialists, estate planners, physical trainers, teachers for their kids, bodyguards, and so on.

It takes a lot of people to provision a single oligarch, but if the central circle is small enough, the next one out will still be just a fraction of the general population. Those second-circle people may not be rich like the central circle, but they will need to be paid enough to buy a number of the things they want. So a third, even larger circle of people becomes necessary to produce goods and services for them.

And so on.

It would be pleasant to imagine that these circles expand forever, each new circle spreading the wealth to the next circle out, until everybody can be paid to do some useful work. But as the inner circle gets smaller and smaller, and as more and more work is done by machines, probably the process ends long before it includes everybody. So you wind up with a final outer circle of surplus population: people the economy has no real use for. It’s not that they have no skills or don’t want to work or have some moral failing that makes them unemployable. It’s just math. The people with money can get everything they want without employing everybody, so a lot of other people wind up as ballast. [2]

If you’re a bleeding-heart type, you might get sentimental about those surplus people. But put yourself in the shoes of an oligarch: The prevailing moral code won’t tolerate just letting the extra people starve, so somebody has to maintain them through either charity or taxes, even though they’re entirely useless. Imagine how you must resent all those parasites, who have no connection to your productive economy, but still want to be supported by it! [3]

Now we’re in the world of Cory Doctorow’s Walkaway.

Walkaway. The novel takes place in the late 21st and early 22nd centuries, by which time several of the trends we can see now have gone much further. Large numbers of people compete for a relatively small number of jobs, and the people who get those jobs are increasingly desperate to keep them. If you weren’t born rich, getting enough training to compete for the good jobs involves taking on debts that you may never get a good enough job to pay off. The economy has contracted around a few major economic centers, leaving large sections of the U.S. and Canada virtually empty.

Increasing numbers of people who get fed up with this situation “go walkaway”: They set out for the empty areas, hoping to find a way to make a life for themselves outside the “default culture”, which the Walkaways come to call Default. Fortuitously, the UN has responded to a variety of refugee crises over the decades by developing technologies that make it easy to establish settlements quickly: cheap wind and solar generators, small fabricators you can use to make bigger fabricators, shelter designs that don’t require skilled construction, and so on. Computing power and internet connectivity are easy to set up, and from there you can get whatever expert advice you need from professionals who find their Default jobs unfulfilling.

Walkaway settlements display that unique combination of order and anarchy you may recognize if you’ve spent any time at Burning Man or an Occupy encampment or working on an open-source project. There are elaborate social processes aiming at consensus, but if you can’t resolve a conflict you walk away from it: Take a copy of the source code and go create your own version of Linux if you want; maybe other programmers and users will come to like your vision better, or maybe not.

The Walkaway lifestyle is a mixture of hardship and abundance. The prevailing aesthetic is minimalistic, but everything you actually need is freely available. If somebody really wants your stuff, let them have it and go fabricate new stuff. If a group of assholes shows up and wants to take over the settlement, walk away and build a new settlement.

Doctorow’s most interesting insights involve the values implied by Default and Walkaway. Default is based on scarcity, and a person’s claim on scarce goods revolves around having special merit. [4] So everyone in Default is constantly striving to be special, to convince themselves that they’re special, and to prove their specialness to others. The hardest thing to adjust to in Walkaway is that you’re not special; you’re like everybody else. But that’s OK, because everybody deserves a chance to live and be happy.

Without spoiling anything, I can tell you that three things drive the plot:

  • An oligarch’s daughter goes walkaway, and he wants to reclaim and deprogram her.
  • Researchers at “Walkaway U” (a loose collection of scientists who mainly need computing power and don’t want their research controlled by oligarchs) solve the problem of simulating brains and uploading a person’s consciousness into software, thereby creating a version of immortality. Not only do the oligarchs want this technology — that would be easy, since nobody is keeping it from them — they want it to be expensive intellectual property that only they can afford to use.
  • Default culture is starting to fall apart, as more and more of the people it relies on stop believing in it. [5]

Default had tried to ignore Walkaway, and then to smear it as a dangerous place full of rejects and criminals. [6] But the plot-drivers cause Default to start seeing Walkaway as a threat.

Reflections on scarcity. One thing I take away from the novel is to be more skeptical of scarcity. Systems tend to justify themselves, so it’s not surprising that a system based on managing scarcity would concoct ways to create unnecessary scarcity. Much of our current culture, I think, revolves around making us want things that only a few people can have. [7] The vast majority that fails to acquire these things are defined as losers, and they/we deserve whatever bitter result they/we get.

Ditto for the idea that work is disagreeable. Maybe we’re making work disagreeable. Because good jobs are scarce, employers can demand a lot and treat workers badly. If, instead, we could fully engage everybody’s talents and energies, maybe the work we each needed to do wouldn’t be that demanding. We might even enjoy it.

So I’m left with a series of provocative questions: What if scarcity isn’t the fundamental principle of economics any more, or won’t be at some point in the near-to-middle future? What if God’s post-Eden curse — “In the sweat of thy face shalt thou eat bread” — came with a time limit? What if our sentence is up?


[1] This blog is a gift: no subscriptions, no ads, no click-here-to-donate buttons, not even a means to collect and sell your data. It’s really this simple: I want to write it and I hope you enjoy reading it. If you want to do me a favor in return, spread my gift to your friends.

[2] Once this process gets started, a vicious cycle makes it worse: The larger the surplus population, and the more capable people it contains, the more competition there is for the available jobs. This drives down wages, and shrinks all the circles further. For example: The less the second circle gets paid, the fewer goods and services it can command. Consequently, the third circle doesn’t have to be as big. And so on.

[3] In case you’re struggling to put words around the flaw in this way of thinking, I already did: The mistake is the assumption that the oligarchs own the world, and that a baby born into poverty has no claim on either the natural productivity of the planet or the human heritage that created technological society. The oligarchs assume they are the sole rightful heirs both of the Creator and of all previous generations of inventors.

[4] Characters in the novel dispose of the “meritocracy” view of capitalist society very quickly: The view is based on circular logic, because “merit” is defined by whatever the system rewards. So Donald Trump is on top because he has merit, but the only observable evidence of his merit is the fact that he’s on top.

[5] The collapse of Soviet Communism is probably a model here. The Soviet system maintained the appearance of vast power right up to the last minute. When people respect you mainly for your power, the first signs of weakness quickly snowball.

[6] Recall the mainstream reaction to Occupy Wall Street.

[7] The archetypal example of this is the Prize in Highlander: “In the end there can be only one.” Reality TV tells us this story over and over: The Bachelor will pick only one woman. Only one performer will become the next American Idol. And so on.

Just What We Needed: More Inequality, Bigger Deficits

Trump’s tax plan is designed to help the little people.

Congress still needs to fill in key details, but the general direction of the Republican tax-reform plan is so clear that no conceivable details can change it.


For decades now, Republicans have been dancing a two-step on taxes and spending:

  1. Cut taxes a little bit for most people and hugely for the very rich, promising that economic growth will make up the lost revenue.
  2. When the lost revenue stays lost, claim that the resulting deficits are an existential threat to the Republic, necessitating previously unthinkable spending cuts.

The result of the two-step is a set of policies that could never pass as a unit. Kansas, for example, would never have voted to cut schools and highways to make rich people richer, but that’s how Sam Brownback’s fiscal revolution worked out. When George W. Bush’s tax cuts turned Clinton’s record surpluses into record deficits, his proposed solution was not to admit the mistake and restore the Clinton rates, or even to say that we couldn’t afford the wars in Iraq and Afghanistan any more, but to propose “entitlement reform” — privatizing Social Security and reorganizing Medicare and Medicaid as defined-contribution programs.

Now, as Republicans try to shake off their ObamaCare-repeal failure and move on, the music is starting again. “A-one, a-two, cut rich people’s taxes …”

Trump promised it wouldn’t be that way this time. All his tax-reform rhetoric has been about jobs and middle-class families, and he often says or implies that people like him will have to sacrifice. Wednesday in Indianapolis, he said:

Our framework includes our explicit commitment that tax reform will protect low-income and middle-income households, not the wealthy and well-connected. They can call me all they want. It’s not going to help. I’m doing the right thing, and it’s not good for me. Believe me. [1]

A few weeks ago, when he began the tax-reform push by speaking at the Loren Cook Company in Springfield, Missouri, he said:

Tax reform must dramatically simplify the tax code, eliminate special interest loopholes — and I’m speaking against myself when I do this, I have to tell you. And I might be speaking against Mr. Cook, and we’re both okay with it, is that right? It’s crazy. We’re speaking — maybe we shouldn’t be doing this, you know? (Laughter.) But we’re doing the right thing. (Applause.) True.

Not true, as it turns out. There are still a lot of details missing — so far all we have is a nine-page “framework” document (with not that many words on each page), not a bill that could be analyzed precisely or voted into law — but everything that has been nailed down points in the direction of big cuts for Trump himself and people like him. It’s hard to imagine any set of details that could reverse that course.

Here are some things already specified:

  • The corporate tax rate drops from 35% to 20%, and corporations get to write off their capital investments faster. That’s a big win for the people who own corporations.
  • “The committees also may consider methods to reduce the double taxation of corporate earnings.” In other words: either another write-off for corporations or a big tax cut for people whose income is mostly corporate dividends.
  • Multi-national corporations would no longer be taxed on overseas profits, and profits currently held overseas to escape U.S. taxes could be repatriated at a low rate.
  • The seven current individual tax brackets, running from 10% to 39.6%, become three brackets: 12%, 25%, and 35%. The bottom rate goes up and the top rate comes down.
  • The alternative minimum tax (which applies mainly to the wealthy, and is the main tax Trump himself paid in the one year we know anything about) and the estate tax (which no estate smaller than $5.5 million currently pays) go away.
  • Income from businesses organized as something other than corporations — sole proprietorships, partnerships, and S-corporations (collectively known as “pass-through entities”) — is currently taxed at the individual rates, which could be as high as 39.6%. That gets cut to 25%. Given the way Trump’s hotels are structured or could be structured, this also would be a big win for him. (You could imagine rich people dodging the 35% tax rate by re-organizing their finances so that all their income comes via pass-through entities, but the framework promises Congress will write rules to prevent that from happening. It doesn’t provide any notion of how such rules might work.)

Specifics are supposed to be filled in by “the tax-writing committees” of the House and Senate “through a transparent and inclusive committee process” that is supposed to produce a complete bill sometime in November. They are the Krampuses assigned to deliver all the lumps of coal now that Santa (the nine-page framework) has distributed the sugar plums. The tax-writing committees are supposed to find and eliminate enough special-interest deductions to keep the revenue loss manageable and make the final product “at least as progressive as the existing tax code” so that it “does not shift the tax burden from high-income to lower- and middle-income taxpayers.” They will do that in the face of what promises to be the most expensive lobbying effort ever by special interests intent on keeping their loopholes. Because that’s what tax-writing committees have historically been so good at: imposing pain on special interests whose lobbyists have vast sums of money to throw around. [2]

That’s the general drift of the framework: If you’re rich, your benefits have been spelled out. Benefits to the rest of us are promised in some feel-good rhetoric, but it’s hard to imagine exactly what they’ll be. After all, somebody has to pay taxes, don’t they?

Analysis. The pattern we saw during ObamaCare repeal was that Republicans in Congress wrote the bills without Democratic input and kept their details secret for as long as possible. When the details appeared, they fulfilled none of the feel-good rhetoric Trump and others had been dishing out to the public: All that stuff about more people getting better coverage with lower premiums was ancient history by the time the actual bills were available for inspection, as was the promise that people with preexisting conditions would still be protected.

In particular, the number-crunchers at the Congressional Budget Office were kept in the dark as long as possible. Graham-Cassidy was voted on without CBO analysis, and the bill the House passed was only analyzed later. When analysis did come out in time and documented just how far the proposal in question was from the promises it was supposed to fulfill, McConnell and Ryan pushed for a vote before the public had a chance to process the implications.

So far, tax reform is on that track. The lack of detail in the framework prevents any definitive analysis. We don’t, for example, know exactly when the 12%, 25%, and 35% rates apply. You could imagine a bill where the 25% rate doesn’t kick in until your income reaches $1 million, so middle-class people would all pay 12%. Or it could start applying at $10, and everybody would pay 25% or more on virtually all their income beyond the standard deduction. Those are the kinds of “details” we’re still missing.

The Tax Policy Center tried to analyze anyway, making reasonable assumptions about how the details will shake out. (Neither of the possibilities I described in the previous paragraph is at all reasonable.) When the 9-page document didn’t specify something, they consulted statements by Trump officials, or documents like Paul Ryan’s “A Better Way”. Given that kind of speculation, the numbers they came up with shouldn’t be taken as gospel, but TPC’s analysis does throw the burden of proof back on Trump and the Republicans: Don’t just dismiss it, tell me where it’s wrong. [3]

TPC’s analysis says that taxpayers in the top 1% would see their after-tax incomes rise by 8.4%, and the top .1% by 10.2%, while the benefit to other taxpayers would be on the order of 1%. [4] Some upper-middle-class/lower-upper-class taxpayers would actually pay more tax, and (due to inflation) the number of people facing a tax increase would rise each year, until by 2027, it wouldn’t just be a few exceptional cases: The 80-95% income percentiles would see a net tax increase as a group.

Deficits. During the Obama administration, Republicans and their allies in the right-wing media often claimed that our rapidly-increasing national debt would bring on some economic catastrophe in the near-to-medium future. That fear is all gone now. It’s as if Democrats had announced in 2009 that under Obama we could go back to burning all the fossil fuels we want.

They haven’t changed their tune because the debt problem has cleared up. For a while it looked like it might. The annual deficit did hit alarming levels in FY 2009 (the year of the budget Obama inherited from Bush), and then headed down for several years afterward.

In raw numbers, the deficit bottomed out in FY 2015 at $483 billion, nearly a trillion less than 2009’s $1.413 trillion. But then it started rising again, hitting $585 billion in FY 2016, and an estimated $693 billion in FY 2017, which ended Saturday. The current CBO projections, with no tax cuts, say that the annual deficit will pass $1 trillion again in FY 2022, and keep rising thereafter.

So if you think the deficit is a real problem — not everybody does — you ought to be seriously worried.

But Trump and the congressional Republicans aren’t worried, at least not now that the red ink is gushing from their own budgets. So why not cut taxes?

The original story was that the tax cut would be deficit-neutral, i.e., whatever revenue it lost by cutting rates, it would regain by eliminating loopholes. But deficit-neutral tax cuts are no fun; to really get the party started you need cuts that nobody pays for.

So Senate Republicans are now preparing a budget resolution (the first step in a reconciliation process that would allow the final bill to pass the Senate with 50 votes) that allows a $1.5 trillion loss of revenue over ten years. And that’s just the current state of the bidding. Why not make it higher? Why not fill the budget with accounting gimmicks that allow the real cuts to be even bigger? (TPC estimates the lost revenue at $2.4 trillion in the first decade, $3.2 trillion in the second. Again: Republicans shouldn’t just scoff, they should explain why TPC is wrong.)

The same budget proposal gets the timing wrong on the two-step: It proposes a $450 billion cut to Medicare now. Silly: Medicare cuts are supposed to wait until after the tax cuts are in place and growth falls short of your projections.

Can they pass it? ObamaCare repeal is a cautionary tale of how Republican legislative efforts can fail, despite their apparent control of both houses of Congress and the presidency. In the Senate, reconciliation is a narrow path that eliminates many of the features conservatives want, and Republicans can only afford two dissenters (unless they manage to attract some Democrats). In the House, the Freedom Caucus has the power to hold a bill hostage until it is loaded up with provisions guaranteed to alienate moderates. (They’ve already started maneuvering.)

On the policy side, the similarities should be ominous to anybody who wants this to pass: The rhetoric selling the idea has been populist, but the actual bill will be elitist. The rich will profit, the middle class will get a pittance (probably only temporarily), and the deficit will skyrocket. That will set up new “emergency” proposals to slash benefits that the middle class would never have agreed to sacrifice to the rich if the tax cuts hadn’t created an artificial budget “emergency”.

Eventually, the details will have to come out, and there will be well-founded analyses that Republicans can’t just brush off. When that happens, the public will turn against the bill, as it turned against the various forms of ObamaCare repeal. Red-state Democrats who have seemed open to tax reform (Heitkamp, Donnelly) will have plenty of cover when they stand against the final bill: They supported the middle-class tax cut Trump talked about in the beginning, not the upper-class giveaway it turned into.

Then Republicans in Congress will face a familiar question: Are they willing to vote against their constituents in order to follow their ideology, keep a promise to their donors, please Trump, and avoid going into the 2018 election cycle with zero accomplishments? For most of them, the answer will be Yes. But maybe three senators will balk.


[1] I’m not the only person to notice that Trump has what poker players call a tell: When he says “Believe me”, he’s lying.

[2] You could tell I was being sarcastic, right?

[3] Trump is also making assumptions and claiming specific outcomes for specific people. Wednesday he named a working couple in the audience and said they would save $1,000 next year under his plan. At this point, his opinion is just as speculative as TPC’s.

[4] Of course, that’s 1% of a much smaller number. If your income of $50 thousand goes up by 1%, that’s $500. If your income of $50 million goes up by 10.2%, that’s $5.1 million.

Three Misunderstood Things 7-24-2017

This week: census, environmental regulations, coal jobs


I. The census

What’s misunderstood about it: How can counting people be a partisan issue?

What more people should know: A lot rides on the census. The Census Bureau knows it gets the answers wrong, but Republicans have a partisan interest in not letting it do better. In 2020, it’s being set up to fail.

*

When the Founders wrote the Constitution, they knew the country was changing fast. New people were pouring into America — some coming by choice and others by force. If Congress was going to represent these people into the distant future, it would have to change as the country changed. So somebody would have to keep track of how the country was changing. That’s why Article I, Section 2 says:

The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct.

Congress has implemented that clause by setting up the Census Bureau, which tries to count everyone in America in each year that ends in a zero. You can look at this as a rolling peaceful revolution: Via the census, states like Virginia and Massachusetts have gradually surrendered their founding-era power to new states like California and Texas.

No doubt you learned in grade school that counting is an objective process that produces a correct answer — the same one for everybody who knows how to count. But in practice, when a bunch of people count to 325 million, agreement starts to break down. Now imagine that you’re counting a field full of 325 million cats, most running around and jumping over each other, and a few actively hiding from you. How do you come up with an answer you have faith in?

That’s the Census Bureau’s fundamental problem: Americans won’t stand still long enough to be counted, and some are actively suspicious of anybody from the government who comes around asking questions. Inevitably, then, not everybody gets counted, and some people get counted more than once. This is not a secret; the Census Bureau admits that it gets the wrong answer.

That might not be so bad if the errors were random, but they’re not. Basically, the more stable your life is, the more likely you are to be counted correctly. If, for example, you’re still living in the same house with the same people that a census worker counted ten years ago, they’re going to count you again. But if you’re sleeping on your friend’s couch for a few weeks while you’re waiting for a job to turn up, and thinking about moving back in with Mom if you can’t find one, then you might get missed.

Stability isn’t a randomly distributed quality. The LA Times spells it out:

The last census was considered successful — that is, the 2010 results were considered to be within an acceptable margin of error. But by the Census Bureau’s own estimates, it omitted 2.1% of African Americans, 1.5% of Latinos and nearly 5% of reservation-dwelling American Indians, while non-Latino whites were overcounted by almost 1%. The census missed about 7% of African American and Latino children 4 or younger, a rate twice as high as the overall average for young children.

But that raises an epistemological question: How do you know your count is wrong if you don’t have a correct count to compare it to? And if you have that correct count, why not just use it?

The answer to the first question is statistics. Imagine, for example, that you’re trying to count all the species that live in your back yard. You go out one day and count 50. Then you go out longer with a bigger magnifying glass and find 10 more. Then the next couple of times you don’t find anything new. But then you find two. Are you confident that’s all of them now? What’s your best guess about how many are really out there?

Now extend that to every yard in the neighborhood. Imagine that after each household does its own count, you all converge on one yard for a more intensive search than you’d be willing to do on every yard. That search finds even more new species. Now how many do you think you missed in the other yards?

Statisticians have thought long and hard about questions like that, and have a variety of well-tested ways to estimate the number of things that haven’t been found yet. If you apply those techniques to the census, you get more accurate estimates of the total.
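The Census Bureau’s real method (a follow-up survey combined with what statisticians call dual-system estimation) is more elaborate than I can show here, but the core idea is the classic capture-recapture calculation. Here’s a minimal sketch in Python with invented numbers; the function name and the inputs are mine, not the Bureau’s.

```python
def chapman_estimate(first_count, second_count, seen_both_times):
    """Chapman's version of the capture-recapture estimator.

    first_count:     how many individuals the first survey found
    second_count:    how many an independent second survey found
    seen_both_times: how many showed up in both surveys

    The smaller the overlap between the two surveys, the more
    individuals they both must be missing, so the estimate goes up.
    """
    return ((first_count + 1) * (second_count + 1)) / (seen_both_times + 1) - 1

# Invented numbers for one neighborhood: the census finds 900 people,
# a follow-up survey finds 500, and 450 of those were also in the census.
total = chapman_estimate(900, 500, 450)
print(round(total))         # roughly 1000 people actually live there
print(round(total) - 900)   # which implies the census missed about 100 of them
```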

So why not just use those estimates? Two reasons:

  • It sounds bad: Ivory-tower eggheads are using a bunch of mumbo-jumbo Real Americans can’t understand to invent a bunch of blacks and Hispanics that nobody has ever seen.
  • Republicans have a partisan interest in keeping the count the way it is.

The Census determines two very important things: how many representatives (and electoral votes) each state gets, and how hundreds of billions of dollars in federal money for programs like Medicaid and highway-building get distributed among the states. The miscount gives more power and money to mostly white (and Republican) states like Wyoming and Kansas, and less to a majority non-white (and Democratic) state like California. Within a state, Republican gerrymandering works by crowding Democratic-leaning urban minorities into a few districts, leaving a bunch of safely Republican rural and suburban districts. That minority-packing is even easier to do if a chunk of those people were never counted to begin with.

The 2020 census is already headed for trouble. The Census Bureau is being underfunded, taking no account of the fact that it has more people to count than last time. Plans to modernize its technology went badly. And it is currently leaderless: The bureau chief resigned at the end of June, and Trump has nominated no one to replace him.

So we’re set up for an even bigger undercount of minorities in 2020. And that’s got to make Paul Ryan happy.

II. Environmental regulations

What’s misunderstood about it: Many people believe that a clean environment is a costly luxury.

What more people should understand: Externalities. That’s how well-designed environmental regulations can save more money than they cost.

*

Nobody should come out of Econ 101 without an understanding of externalities — real economic costs that the market doesn’t see because they aren’t borne by either the buyer or the seller.

Pollution is the classic example: Suppose I run a paper mill, and I use large quantities of chlorine to make my paper nice and white. At the end of the process I dump the chlorine into my local river, because that’s the cheapest way for me to get rid of it. Because I use such an inexpensive (for me) disposal process, I can keep my prices low. That makes me happy and my customers happy, so the market is happy too. Any of my competitors who doesn’t dump his chlorine in the river is going to be at a disadvantage.

The problems in this process only accrue to people who live downstream, especially fishermen and anybody who wants to swim or eat fish. They suffer real economic losses — losses that are probably much bigger than what I save. But since their loss is invisible to the paper market, nothing will change without some outside-the-market action — like a government regulation, a court order, or a mob of fishermen coming to burn down my mill.

Now suppose the government tells me I have to stop dumping chlorine. I have to find either some environmentally friendly paper-whitening technique or a way to treat my chlorine-tainted wastewater until it’s safe to put back into the river. Either solution will cost me money, and I will have no trouble calculating exactly how much. So you can bet there will be an article in my local newspaper (which now has to pay more for the newsprint it buys from me) about how many millions of dollars these new regulations cost. The corresponding gains by fishermen, riverfront resort owners whose properties no longer stink, and downstream towns that don’t have to get the chlorine out of their drinking water — that’s all much more diffuse and hard to quantify. So the newspaper won’t have any precise number to weigh my cost against. Chances are its readers will see the issue as money vs. quality of life. They won’t realize that the regulations also make sense in purely economic terms.

That’s an abstract and somewhat dated example, but similar issues — and similar news stories — appear all the time. The costs of new regulations are borne by specific industries who can calculate them exactly, while the benefits — though very real — are more diffuse, and may accrue to people who don’t even realize they’re benefiting. (Companies are very aware of what they’ll have to spend to take carcinogens out of their products, but nobody ever knows about the cancers they don’t get.) But that doesn’t mean that the benefits aren’t bigger than the costs, even in dollar terms.
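To make that asymmetry concrete, here’s a toy calculation in Python. All the dollar figures are invented for illustration; the point is just that a single, easy-to-measure cost can be smaller than the sum of many diffuse benefits that never show up in one convenient number.

```python
# Invented numbers for the paper-mill example.
mill_cleanup_cost = 2_000_000   # the visible, easily quoted cost of the regulation (per year)

# Diffuse downstream benefits, each modest and spread across many people.
downstream_benefits = {
    "fishing income recovered":             900_000,
    "riverfront property values and tourism": 800_000,
    "municipal water treatment avoided":      700_000,
    "health costs avoided":                   600_000,
}

total_benefit = sum(downstream_benefits.values())
print(f"Visible cost to the mill:   ${mill_cleanup_cost:,}")
print(f"Diffuse benefit downstream: ${total_benefit:,}")
print("Regulation pays for itself?", total_benefit > mill_cleanup_cost)
```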

The best example from my lifetime is getting the lead out of gasoline. If you were alive at the time, you probably remember that the new unleaded gasoline cost a few cents more per gallon. Spread over the whole economy, that amounted to billions and billions. What we got out of that, though, was far more than just the vague satisfaction of breathing cleaner air. Without so much lead in their bloodstreams, our children are smarter, less violent, and less impulsive. The gains — even in purely material terms — have been overwhelmingly positive.

III. Coal jobs.

What’s misunderstood about it: What happened to them? Environmentalists are often blamed for destroying these jobs.

What more people should know: No doubt environmentalists would kill the coal industry if they could. But the real destroyers of coal jobs are automation and competition from other fuels.

*

Coal miners are the heroes of one of the classic success stories of the 20th century. Mining was originally a job for the desperate and expendable, but miners were among the first American workers to see the benefits of unionization. Year after year, coal mining became safer [1], less debilitating, and better paying, until by the 1960s a miner no longer “owed his soul to the company store”, but could be the breadwinner of a middle-class family, owning a home, driving a nice car or truck, and even sending his children to college. Sons and daughters of miners could become doctors, lawyers, or business executives. Or if they wanted to follow their fathers into the mines, that promised to be a good life too.

However, the total number of coal-mining jobs in the United States peaked in 1923.

Was that because Americans stopped using coal? Not at all. Coal production kept going up for the next 85 years.

The difference was automation. Mines employed three-quarters of a million men in the pick-and-shovel days, but better tools allow 21st-century mines to produce more coal with far fewer workers.

If you take a closer look at that employment graph, you’ll notice a hump in the 1970s, when coal employment staged a brief comeback. That corresponded to the Arab Oil Embargo of 1973 and the increased oil prices of the OPEC era. For decades after that, coal was the cheaper, more reliable energy source. Americans who dreamed of energy independence dreamed of coal. In a 1980 presidential debate, candidate Ronald Reagan said:

This nation has been portrayed for too long a time to the people as being energy-poor, when it is energy-rich. The coal that the President [Carter] mentioned — yes, we have it, and yet 1/8th of our total coal resources is not being utilized at all right now. The mines are closed down. There are 22,000 miners out of work. Most of this is due to regulation.

However, all that changed with the fracking boom. Depending on market fluctuations, natural gas can be the cheaper fuel. Meanwhile, the price-per-watt of renewable energy is falling fast, and is now competitive with coal for some applications. So if a utility started building a new coal-fueled plant now, by the time it came on line a renewable source might be more economical — even without considering possible carbon taxes or environmental regulations.

The dirtiness of coal is a huge externality (see misunderstanding II, above), so regulations disadvantaging it make good economic sense. Looking at the full cost to society, coal is the most expensive fuel we have, and should be phased out as soon as possible.

Statements like that make good fodder for politicians (like Trump or Reagan) who want to scapegoat environmental regulations for killing the coal industry. However, dirty coal is like the obnoxious murder victim in an Agatha Christie novel: Environmentalists are only one of the many who wanted it dead, and other suspects actually killed it.


[1] The number of coal-mining deaths peaked at 3,242 in 1907. In 2016 that number was down to 8. As a comment below notes, though, that doesn’t count deaths from black lung disease, which are on the rise again.

Three Misunderstood Things

This week: the anti-gay baker, why the Senate can’t move on, and whether raising the minimum wage kills jobs.


I. The Masterpiece Cakeshop case (which the Supreme Court will hear in the fall).

What’s misunderstood about it: People think it has free-speech implications.

What more people should know: The baker objected to the whole idea of making a wedding cake for two men, and cut off the conversation before the design of the cake was ever discussed. That makes it a discrimination case, not a freedom-of-speech case.

*

Defenders of Masterpiece Cakeshop owner Jack Phillips frequently portray him as a martyr not just to so-called “traditional marriage”, but to the freedom of tradespeople not to say things they object to. For example, one conservative Christian tried to demonstrate a double standard like this:

Marjorie Silva, owner of Azucar Bakery in Denver, said she told the man, Bill Jack of the Denver suburb of Castle Rock, that she wouldn’t fill his order last March for two cakes in the shape of the Bible, to be decorated with phrases like “God hates gays” and an image of two men holding hands with an “X” on top.

Is this cake gay or straight?

But the Colorado Civil Rights Commission ruled against Jack, because the two cases are very different: Silva objected to the message Jack wanted on the cake, not to anything about Jack himself or the situation in which the cake would be served. If the government had demanded that Silva make that cake, it would have been an example of forced speech, which courts have a long history of rejecting.

Do conservatives also have a right to refuse forced speech? Yes. A Kentucky court recently ruled in favor of a print-shop that refused to make t-shirts for a gay-pride festival.

So liberals must have howled in rage, right? Not me, and not philosopher John Corvino, who defended the Kentucky decision on the liberal news site Slate:

the print shop owners are not merely being asked to provide something that they normally sell (T-shirts; cakes), but also to write a message that they reject. We should defend their right to refuse on free-speech grounds, even while we support anti-discrimination laws as applied to cases like Masterpiece Cakeshop. … Free speech includes the freedom to express wrong and even morally repugnant beliefs; it also includes the freedom for the rest of us not to assist with such expression.

The reason the baker has lost at every stage so far — the administrative court and state appeals court ruled against him, and the Colorado Supreme Court refused to hear his appeal, letting the lower court ruling stand — is that he wasn’t objecting to putting some particular message or symbol on the cake, like a marriage-equality slogan or a rainbow flag. For all he knew when he refused, the men might have wanted a cake identical to one he had already made for some opposite-sex couple. In short, he objected to them, not to the cake they wanted.

Corvino explains:

One might object that Masterpiece Cakeshop is similar: “Same-sex wedding cakes” are simply not something they sell. But wedding cakes are not differentiated that way; a “gay wedding cake” is not a thing. Same-sex wedding cakes are generally chosen from the same catalogs as “straight” wedding cakes, with the same options for designs, frosting, fillings and so forth. It might be different if Masterpiece had said “We won’t provide a cake with two brides or two grooms on top; we don’t sell those to anyone.” But what they said, in fact, was that they wouldn’t sell any cakes for same-sex weddings. That’s sexual orientation discrimination.

II. Mitch McConnell’s agenda.

What’s misunderstood about it: If the Senate is stuck on its ObamaCare replacement, why can’t it move on to the next items on the Republican agenda: tax reform and the budget?

What more people should know: McConnell is trying to exploit a loophole in Senate rules. As soon as a new budget resolution passes, his ability to pass both TrumpCare and tax reform goes away — unless he changes the proposals to get Democratic votes.

*

During the Obama years, we often heard that “it takes 60 votes to get anything done in the Senate”, as if filibusters that can only be broken with 60-vote cloture motions were in the Constitution somewhere, and the minority party had always filibustered everything. (That’s why even the weakest gun-control bills failed, despite 54-46 votes in their favor.) But the Senate recognized a long time ago that budgets have to get passed somehow, and so the Congressional Budget Act of 1974 established an arcane process called “reconciliation” that circumvents the filibuster in very limited circumstances.

That’s how the Senate’s 52 Republicans can hope to pass bills without talking to the Democrats at all. But there’s a problem: Reconciliation is a once-a-year silver bullet. Fox Business explains:

Reconciliation allows Congress to consider just three items per fiscal year, whether they pertain to one bill or multiple. Those items are spending, revenue and debt limit. Since the GOP also wants to pass its tax reform agenda using reconciliation, it cannot statutorily do that under this budget blueprint because the two policy measures overlap.

And NPR elaborates:

The budget resolution for the current fiscal year dictates that any reconciliation measure must reduce the deficit, which the GOP’s Obamacare repeal was designed to do. Republicans then could draft a new budget resolution for the upcoming fiscal year with easier deficit targets, allowing for more aggressive tax cuts.

Under the most commonly accepted interpretation of the reconciliation rules, as soon as Congress passes a budget resolution for Fiscal Year 2018 (which begins this October), the window for passing TrumpCare under the FY 2017 resolution closes. So the only way to get them both done before facing another election campaign is to do them in the right order: first TrumpCare, then a new budget resolution, then tax reform.

Otherwise, McConnell’s options become less appealing: He can get rid of the filibuster completely, which several Republican senators don’t support. He can scrap either TrumpCare or tax reform for the foreseeable future. Or he can start envisioning the kinds of proposals that might get eight Democratic votes, plus a few to make up for Republican defections.

III. The minimum wage.

What’s misunderstood about it: Both supporters and critics of a much-higher minimum wage think they know what effect it will have on jobs.

What more people should understand: The effect of a minimum-wage increase on jobs is an empirical issue, not something you can deduce from first principles. And the data we have only covers small increases.

*

There is a certain kind of conservative who thinks he learned everything he needs to know about this issue in Econ 101: Every commodity, including unskilled labor, has a demand curve; if you raise its price, demand for it falls.

The right response to that analysis is maybe. Imagine that you own a shop with one machine, run by your sole employee. The machine produces some high-profit item. To make things simple, let’s ignore counterfeiting laws and imagine that the machine prints money. Cheap paper and ink go in, $100 bills come out.

Obviously, you could afford to pay your employee a lot more than the $7.25-per-hour federal minimum wage. But you don’t, because the machine is simple to operate and you could easily replace him, so he doesn’t have any bargaining leverage.

Now what happens if the minimum wage goes up to $15? Do you fire your guy and shut the machine down? Do you abandon your plan to buy another machine and hire a second worker? No, of course not.

Admittedly, that’s an extreme example, but it points out the right issues: Whether an increase in the minimum wage causes you to employ fewer people depends on how much you’re making off those people’s work. If you have a razor-thin profit margin, maybe a higher wage makes the whole operation unprofitable and you lay workers off. But if you could actually afford the higher wage, and the only reason you don’t pay it already is that your workers lack bargaining leverage, then you don’t.

In fact, if a minimum-wage increase gives your customers more money to spend on whatever you make, then you might have to hire more people to meet the demand.
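Here’s the same thought experiment reduced to arithmetic, as a short Python sketch. The numbers are invented; the point is that the layoff decision turns on how much profit each worker-hour generates, not on the size of the raise by itself.

```python
def still_worth_employing(revenue_per_hour, non_labor_cost_per_hour, wage):
    """Does one worker-hour still generate a profit at this wage?"""
    return revenue_per_hour - non_labor_cost_per_hour - wage > 0

# The money-printing machine: enormous revenue per worker-hour.
print(still_worth_employing(revenue_per_hour=500, non_labor_cost_per_hour=50, wage=7.25))   # True
print(still_worth_employing(revenue_per_hour=500, non_labor_cost_per_hour=50, wage=15.00))  # still True

# A razor-thin-margin business: the same raise can flip the answer.
print(still_worth_employing(revenue_per_hour=20, non_labor_cost_per_hour=8, wage=7.25))     # True
print(still_worth_employing(revenue_per_hour=20, non_labor_cost_per_hour=8, wage=15.00))    # False
```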

Which situation is more typical? One reason to think it’s the second is that sometime in the 1970s wages stopped tracking productivity: Workers have been producing more, but not getting comparable pay raises, presumably because they lack the bargaining power to demand them.

During the same era, the minimum wage has not kept pace with inflation. An increase to around $11 would just get it back to where it was in 1968. If it wasn’t causing massive unemployment then, why would it now?
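The arithmetic behind “around $11” is just an inflation adjustment. Here’s a quick sketch; the CPI figures are approximate annual averages, so treat the answer as a ballpark, not an official statistic.

```python
# Approximate annual-average Consumer Price Index values (CPI-U, 1982-84 = 100).
cpi_1968 = 34.8     # roughly the 1968 average
cpi_2017 = 245.1    # roughly the 2017 average

min_wage_1968 = 1.60
equivalent_today = min_wage_1968 * (cpi_2017 / cpi_1968)
print(f"${equivalent_today:.2f}")   # about $11.27
```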

Supporters of a higher minimum wage also point to studies of past increases, which don’t show big job losses.

But there’s a problem on that side, too: Past hikes haven’t been nearly as big as the proposal to go from $7.25 to $15. I was a minimum-wage worker myself in the 1970s when it increased from $1.60 to $1.80. I suspect my employer was not greatly inconvenienced. But larger increases might have a shock value that makes an employer say, “We can’t afford all these workers.”

That’s why the new data coming in from Seattle is so important: Seattle was one of the first cities to adopt a much-higher minimum wage, so we’re just beginning to see the results of that. The headlines on that initial study were that the higher wage is costing jobs, but that early conclusion is still debatable.

So in spite of my own preference for a higher minimum wage, I find myself in agreement with minimum-wage skeptic economist Adam Ozimek: This is an empirical question, and both sides should maintain more humility until we see more definitive data.

Social Capital and Inequality

Inequality is different this time, because the rich are usurping a different kind of capital.


For a long time, most thinkers in the West accepted poverty as natural. As Jesus said: “The poor you will always have with you.” But by 1754, Jean-Jacques Rousseau was writing an entire discourse on the origin of inequality and blaming it largely on the practice of recognizing land as private property.

The first man who, having enclosed a piece of ground, bethought himself of saying This is mine, and found people simple enough to believe him, was the real founder of civil society. From how many crimes, wars and murders, from how many horrors and misfortunes might not any one have saved mankind, by pulling up the stakes, or filling up the ditch, and crying to his fellows, “Beware of listening to this impostor; you are undone if you once forget that the fruits of the earth belong to us all, and the earth itself to nobody.”

Thomas Paine, who in many ways was the most radical of the American revolutionaries, observed the contrasting example of the Native American tribes — where he found no parallel to European wealth or poverty — and came away with a more nuanced model of the connection between inequality and landed property, which he published in 1797 as Agrarian Justice. He started in much the same place as Rousseau:

The earth in its natural, uncultivated state, was, and ever would have continued to be THE COMMON PROPERTY OF THE HUMAN RACE. In that state every man would have been born to property. He would have been a joint life-proprietor with the rest in the property of the soil, and in all its natural productions, vegetable and animal.

But Paine also recognized that the development of modern agriculture — which he saw as necessary to feed people in the numbers and diversity of activities essential to advanced civilization — required investing a lot of up-front effort: clearing forests of trees and rocks, draining marshlands, and then annually plowing and planting. Who would do all that, if in the end the harvest would belong equally to everybody? He saw private ownership of land as a solution to this problem, but believed it had been implemented badly. What a homesteader deserved to own was his or her improvement on the productivity of the land, not the land itself: If the land a family cleared became more valuable than the forest or marshland they started with, then the homesteaders should own that difference in value. [1]

Society as a whole, he concluded, deserved a rent on the land in its original state, and he proposed using that income — or an inheritance tax on land, which would not be as clean a solution theoretically, but would be easier to assess and collect — to capitalize the poor.

When a young couple begin the world, the difference is exceedingly great whether they begin with nothing or with fifteen pounds apiece. With this aid they could buy a cow, and implements to cultivate a few acres of land; and instead of becoming burdens upon society … would be put in the way of becoming useful and profitable citizens.

Paine argued this not as charity or even social engineering, but as justice: The practice of privatizing land had usurped the collective inheritance of those born without land, so something had to be done to restore the usurped value.

In one of my favorite talks (I published versions of it here and here), I extended Paine’s idea in multiple directions, including to intellectual property. Just as Paine would buy a young couple a cow and some tools, I proposed helping people launch themselves into a 21st century information economy. Like Paine, I see this as justice, because otherwise the whole benefit of technological advancement accrues only to companies like Apple or Google, reaching the rest of us only through such companies. A fortune like Bill Gates’ arises partly through innovation, effort, and good business judgment, but also by usurping a big chunk of the common inheritance.

Avent. And that brings us to Ryan Avent’s new book, The Wealth of Humans: work, power, and status in the twenty-first century. There are at least two ways to read this book. It fits into the robot-apocalypse, where-are-the-jobs-of-the-future theme that I have recently discussed here (and less recently here and here). Avent’s title has a double meaning: On the one hand it’s about the wealth humans will produce through the continued advance of technology. But that advance will also result in society having a “wealth” of humans — more than are needed to do the jobs available.

Most books in this genre are by technologists or futurists, and consequently assemble evidence to support a single vision or central prediction. Avent is an economic journalist. (He writes for The Economist.) So he has produced a more balanced analysis, cataloging the forces, trends, and possibilities. It’s well worth reading from that point of view.

But I found Avent’s book more interesting in what it says about inequality and social justice in the current era. What’s different about the 21st century is that technology and globalism have converged to make prosperity depend on a type of capital we’re not used to thinking about: social capital. [2] And from a moral point of view, it’s not at all obvious who should own social capital. Maybe we all should.

What is social capital? Before the Industrial Revolution, capital consisted mainly of land (and slaves, where that was allowed). By the late 19th century, though, the big fortunes revolved around industrial capital: the expensive machines that sat in big factories. The difference between a rich country and a poor one was mainly that people in rich countries could afford to invest in such machinery, which then made them richer. On a national level, industrial capital showed up as government-subsidized railroads and canals and port facilities. (The Erie Canal alone created one of the great 19th-century boom towns: Buffalo.) A country that could afford to make such improvements became more productive and more prosperous.

In the 20th century, the countries that rose to wealth — first Japan and then later Singapore, Taiwan, and South Korea — did so partly through investment in machinery, but also through education. An educated populace could provide the advanced services that made an industrial economy thrive. And so we started talking about human capital, the investments that people and their governments make in acquiring skills, and intellectual capital, the patents, copyrights, and trade secrets that powered a 20th-century giant like IBM.

That may seem like a pretty complete list of the kinds of capital. But now look at today’s most valuable companies: Apple and Google, either of which might become the world’s first trillion-dollar corporation in a year or two. Each owns a small amount of land, no slaves, and virtually no industrial capital; Apple contracts out nearly all of its manufacturing, and a lot of Google’s products are entirely intangible. Both employ brilliant, well-educated people, but not hundreds of billions of dollars worth of them. They have valuable patents, copyrights, trademarks, etc., but again, intellectual property alone doesn’t account for either company’s market value. There’s something in how all those factors fit together that makes Apple and Google what they are.

That’s social capital. Avent describes it like this:

Social capital is individual knowledge that only has value in particular social contexts. An appreciation for property rights, for example, is valueless unless it is held within a community of like-minded people. Likewise, an understanding of the culture of a productive firm is only useful within that firm, where that culture governs behavior. That dependence on a critical mass of minds to function is what distinguishes social capital from human capital.

Social capital has always existed and been a factor of production, but something about the current era, some combination of globalism and technology, has brought it to the fore. Today, a firm strong in social capital — a shared way of approaching problems and taking action that is uniquely suited to a particular market at this moment in history — can acquire all the other factors of production cheaply, making social capital the primary source of its wealth. [3]

Who should own social capital? Right now it’s clear who does own a company’s social capital: the stockholders. But should they? Avent talks about Bill Gates’ $70 billion net worth — created mostly not by his own efforts but by the social organism called Microsoft — and then generalizes:

People, essentially, do not create their own fortunes. They inherit them, come to them through the occupation of some state-protected niche, or, if they are very brilliant and very lucky, through infusing a particular group of men and women with the germ of an idea, which, in time and with just the right environment, allows that group to evolve into an organism suited to the creation of economic value, a very large chunk of which the founder can then capture for himself.

Stockholders — the people who put up the money to acquire the other factors of production — currently get the vast majority of the benefit from a company’s social capital, but it’s not clear why they should. We usually imagine other forms of capital as belonging to whoever would have them if the enterprise broke up: The stockholders would sell off the land and industrial and intellectual capital, while the employees would walk away with the human capital of their experience and education. But the company’s social capital would just vanish, the way that a living organism vanishes if it gets rendered into its constituent chemicals. So, rightfully, who owns it?

Another chunk of social capital resides in nations, which are also social organisms. The very real economic value of the rule of law, voluntary compliance with beneficial but unenforceable norms, shared notions of fairness, trust that others will fulfill their commitments, and general public-spiritedness — in other words, all the cultural stuff that makes a worker or firm or idea more valuable in America or Germany than in Burundi or Yemen — who does it belong to? Who should share in its benefits?

Bargaining power. Avent does not try to sell the conservative fairy tale that the market will allocate benefits appropriately. Under the market, what each party gets out of any collective endeavor depends on its relative bargaining power, not on what it may deserve in some more abstract sense.

Avent proposes this thought experiment: What if automation got to the point where only one human worker was required to produce everything? Naively, you might expect this individual to be tremendously important and very well paid, but that’s probably not what would happen. Everyone in the world who wanted a job would want his job, and even if he had considerable skills, probably in the whole world millions of people would share those skills. So his bargaining power would be essentially zero, and even though in some sense he produced everything, he might end up working for nothing.

Globalization and automation, plus political developments like the decline of unions, have lowered the bargaining power of unskilled workers in rich countries, so they get less money, even though in most cases their productivity has increased. As communication gets cheaper and systems get more intelligent, more and more jobs can be automated or outsourced to countries with lower wages, so the bargaining power of the people in those jobs shrinks. That explains this graph, which I keep coming back to because I think it’s the single most important thing to understand about the American economy today: Hourly wages tracked productivity fairly closely until the 1970s, but have fallen farther and farther behind ever since.

Companies could have afforded to pay more — by now, the productivity is there to support a wage nearly 2 1/2 times higher — but workers haven’t had the bargaining power to demand that money, so they haven’t gotten it. [4]
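The “nearly 2 1/2 times” figure is just a ratio read off that graph: scale today’s wage by how much faster productivity has grown than pay since the two lines diverged. Here is a sketch with placeholder numbers chosen to match the graph’s claim, not taken from the actual BLS/EPI series.

```python
def counterfactual_wage(current_wage, productivity_growth, wage_growth):
    """What today's wage would be if it had grown as fast as productivity
    since the point where the two lines diverged."""
    return current_wage * (productivity_growth / wage_growth)

# Placeholder growth factors, for illustration only: productivity up roughly
# 2.4x since the early 1970s, real hourly wages roughly flat (1.0x).
print(counterfactual_wage(current_wage=22.00, productivity_growth=2.4, wage_growth=1.0))
# about $52.80, i.e. nearly 2 1/2 times the actual wage
```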

A similar thing happened early in the Industrial Revolution: Virtually none of the benefits that came from industrial capital were shared with the workers, until they gained bargaining power through political action and unionization. The result is the safety net we have today.

Just as workers’ ability to reap significant benefits from the deployment of industrial capital was in doubt for decades, so we should worry that social capital will not, without significant alterations to the current economic system, generate better economic circumstances for most people.

Who’s in? Who’s out? When you do start sharing social capital, whether within a firm or within a country, you run into the question of who belongs. This is a big part of the contracting-out revolution. The janitors and cafeteria workers at Henry Ford’s plants worked for Henry Ford. But a modern technology corporation is likely to contract for those services. By shrinking down to a core competency, it can reward its workers while keeping a tight rein on who “its workers” are. No need to give stock options or healthcare benefits to receptionists and parking lot attendants if they don’t seem essential to maintaining the company’s social capital.

Things shake out similarly at the national level: The more ordinary Americans succeed in getting a share of the social capital of the United States, the greater the temptation to restrict who can get into the US and qualify for benefits — or to throw out people that many of the rest of us think shouldn’t be here.

Avent would like to see us take the broadest possible view of who’s in:

The question we ask ourselves, knowingly or not, is: With whom do we wish to share society? The easy answer, the habitual answer, is: with those who are like us.

But this answer is bound to lead to trouble, because it is arbitrary, and because it is lazy, and because it is imprecise, in ways that invite social division. There is always some trait or characteristic available which can be used to define someone seemingly like us as not like us.

There is a better answer available: that to be “like us” is to be human. That to be human is to earn the right to share in the wealth generated by the productive social institutions that have evolved and the knowledge that has been generated, to which someone born in a slum in Dhaka is every bit the rightful heir as someone born to great wealth in Palo Alto or Belgravia.

Can it happen? Much of Avent’s book is depressing, but by the time the Epilogue rolls around he seems almost irrationally optimistic. For 200 pages, he has painted as realistic a picture as he could of the challenges we face, whether economic, technological, social, or political. But as to whether things will ultimately work out, he appears to come around to the idea that they have to, so they will. So he ends with this:

We are entering into a great historical unknown. In all probability, humanity will emerge on the other side, some decades hence, in a world in which people are vastly richer and happier than they are now. With some probability, small but positive, we will not make it at all, or we will arrive on the other side poorer and more miserable. That assessment is not optimism or pessimism. It is just the way things are.

Face to face with the unknown, it is hard to know what to feel or what to do. It is tempting to be afraid. But, faced with this great, powerful, transformative force, we shouldn’t be frightened. We should be generous. We should be as generous as we can be.


[1] The arbitrariness of this becomes clear when you consider mineral rights. If my grandfather homesteaded a plot of land, which in my generation turned out to be in the middle of an oil field, what would I have done to deserve that wealth?

[2] If the term social capital rings a bell for you, you’re probably remembering Robert Putnam’s Bowling Alone, which appeared as a magazine article in 1995 and was expanded to a book in 2000. But Putnam used the term more metaphorically, expressing a sociological idea in economic terms, rather than as a literal factor of production.

[3] Henry Ford’s company probably also had a lot of social capital, but it was hard to notice behind all those buildings and machines.

[4] Individual employers will tell you that they’d go bankrupt if they had to raise wages 2 1/2 times, and in some sense that’s true: They compete with companies that also pay low wages, and would lose that competition if they paid high wages. But that is simply evidence that workers’ bargaining power is low across entire industries, rather than just in this company or that one.

Jobs, Income, and the Future

What “the jobs problem” is depends on how far into the future you’re looking. Near-term, macroeconomic policy should suffice to create enough jobs. But long-term, employing everyone may be unrealistic, and a basic income program might be necessary. That will be such a change in our social psychology that we need to start preparing for it now.


Historical context. The first thing to recognize about unemployment is that it’s not a natural problem. Tribal hunter-gatherer cultures have no notion of it. No matter how tough survival might be during droughts or other hard times, nothing stops hunter-gatherers from continuing to hunt and gather. The tribe has a territory of field or forest or lake, and anyone can go to this commonly held territory to look for food.

Unemployment begins when the common territory becomes private property. Then hunting turns into poaching, gathering becomes stealing, and people who are perfectly willing to hunt or fish or gather edible plants may be forbidden to do so. At that point, those who don’t own enough land to support themselves need jobs; in other words, they need arrangements that trade their labor to an owner in exchange for access to the owned resources. The quality of such a job might vary from outright slavery to Clayton Kershaw’s nine-figure contract to pitch for the Dodgers, but the structure is the same: Somebody else owns the productive enterprise, and non-owners need to acquire the owner’s permission to participate in it.

So even if unemployment is not an inevitable part of the human condition, it is as old as private property. Beggars — people who have neither land nor jobs — appear in the Bible and other ancient texts.

But the nature of unemployment changed with the Industrial Revolution. With the development and continuous improvement of machines powered by rivers or steam or electricity, jobs in various human trades began to vanish; you might learn a promising trade (like spinning or weaving) in your youth, only to see that trade become obsolete in your lifetime.

So if the problem of technological unemployment is not exactly ancient, it’s still been around for centuries. As far back as 1819, the economist Jean Charles Léonard de Sismondi was wondering how far this process might go. With tongue in cheek he postulated one “ideal” future:

In truth then, there is nothing more to wish for than that the king, remaining alone on the island, by constantly turning a crank, might produce, through automata, all the output of England.

This possibility raises an obvious question: What, then, could the English people offer the king (or whichever oligarchy ended up owning the automata) in exchange for their livelihoods?

Maslow. What has kept that dystopian scenario from becoming reality is, basically, Maslow’s hierarchy of needs. As basic food, clothing, and shelter become easier and easier to provide, people develop other desires that are less easy to satisfy. Wikipedia estimates that currently only 2% of American workers are employed in agriculture, compared to 50% in 1870 and probably over 90% in colonial times. But those displaced 48% or 88% are not idle. They install air conditioners, design computer games, perform plastic surgery, and provide many other products and services our ancestors never knew they could want.

So although technology has continued to put people out of work — the railroads pushed out the stagecoach and steamboat operators, cars drastically lessened opportunities for stableboys and horse-breeders, and machines of all sorts displaced one set of skilled craftsmen after another — new professions have constantly emerged to take up the slack. The trade-off has never been one-for-one, and the new jobs have usually gone to different people than the ones whose trades became obsolete.  But in the economy as a whole, the unemployment problem has mostly remained manageable.

Three myths. We commonly tell three falsehoods about this march of technology: First, that the new technologies themselves directly create the new jobs. But to the extent they do, they don’t create nearly enough of them. For example, factories that manufacture combines and other agricultural machinery do employ some assembly-line workers, but not nearly as many people as worked in the fields in the pre-mechanized era.

When the new jobs do arise, it is indirectly, through the general working of the economy satisfying new desires, which may have only a tangential relationship to the new technologies. The telephone puts messenger-boys out of business, and also enables the creation of jobs in pizza delivery. But messenger-boys don’t automatically get pizza-delivery jobs; they go into the general pool of the unemployed, and entrepreneurs who create new industries draw their workers from that pool. At times there may be a considerable lag between the old jobs going away and the new jobs appearing.

Second, the new jobs haven’t always required more education and skill than the old ones. One of the key points of Harry Braverman’s 1974 classic Labor and Monopoly Capital: the degradation of work in the 20th century was that automation typically bifurcates the workforce into people who need to know a lot and people who need to know very little. Maybe building the first mechanized shoe factory required more knowledge and skill than a medieval cobbler had, but the operators of those machines needed considerably less knowledge and skill. The point of machinery was never just that it replaced human muscle-power with horsepower or waterpower or fossil fuels, but also that once the craftsman’s knowledge had been built into a machine, low-skill workers could replace high-skill workers.

And finally, technological progress by itself doesn’t always lead to general prosperity. It increases productivity, but that’s not the same thing. A technologically advanced economy can produce goods with less labor, so one possible outcome is that it could produce more goods for everybody. But it could also produce the same goods with less labor, or even fewer goods with much less labor. In Sismondi’s Dystopia, for example, why won’t the king stop turning his crank as soon as he has all the goods he wants, and leave everyone else to starve?

So whether a technological society is rich or not depends on social and political factors as much as economic ones. If a small number of people wind up owning the machines, patents, copyrights, and market platforms, the main thing technology will produce is massive inequality. What keeps that from happening is political change: progressive taxation, the social safety net, unions, shorter work-weeks, public education, minimum wages, and so on.

The easiest way to grasp this reality is to read Dickens: In his day, London was the most technologically advanced city in the world, but because political change hadn’t caught up, it was a hellhole for a large chunk of its population.

The fate of horses. Given the long history of technological unemployment, it’s tempting to see the current wave as just more of the same. Too bad for the stock brokers put out of work by automated internet stock-trading, but they’ll land somewhere. And if they don’t, they won’t wreck the economy any more than the obsolete clipper-ship captains did.

But what’s different about rising technologies like robotics and artificial intelligence is that they don’t bifurcate the workforce any more: To a large extent, the unskilled labor just goes away. The shoe factory replaced cobblers with machine designers and assembly-line workers. But now picture an economy where you get new shoes by sending a scan of your feet to a web site which 3D-prints the shoes, packages them automatically, and then ships them to you via airborne drone or driverless delivery truck. There might be shoe designers or computer programmers back there someplace, but once the system is built, the amount of extra labor your order requires is zero.

In A Farewell to Alms, Gregory Clark draws this ominous parallel: In 1901, the British economy required more than 3 million working horses. Those jobs are done by machines now, and the UK maintains a far smaller number of horses (about 800K) for almost entirely recreational purposes.

There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

By now, there is literally nothing that three million British horses can do more economically than machines. Could the same thing happen to humans? Maybe it will be a very long time before an AI can write a more riveting novel than Stephen King, but how many of us still have a genuinely irreplaceable talent?

Currently, the U.S. economy has something like 150 million jobs for humans. What if, at some point in the not-so-distant future, there is literally nothing of economic value that 150 million people can do better than some automated system?

Speed of adjustment. The counter-argument is subtle, but not without merit: You shouldn’t let your attention get transfixed by the new systems, because new systems never directly create as many jobs as they destroy. Most new jobs won’t come from maintaining 3D printers or manufacturing drones or programming driverless cars, they’ll come indirectly via Maslow’s hierarchy: People who get their old wants satisfied more easily will start to want new things, some of which will still require people. Properly managed, the economy can keep growing until all the people who need jobs have them.

The problem with that argument is speed. If technology were just a one-time burst, then no matter how big the revolution was, eventually our desires would grow to absorb the new productivity. But technology is continually improving, and could even be accelerating. And even though we humans are a greedy lot, we’re also creatures of habit. If the iPhone 117 hits the market a week after I got my new iPhone 116, maybe I won’t learn to appreciate its new features until the iPhone 118, 119, and 120 are already obsolete.

Or, to put the same idea in a historical context, what if technology had given us clipper ships on Monday, steamships on Tuesday, and 747s by Friday? Who would we have employed to do what?

You could imagine, then, a future where we constantly do want new things that employ people in new ways, but still the economy’s ability to create jobs keeps falling farther behind. Since we’re only human, we won’t have time either to appreciate the new possibilities technology offers us, or to learn the new skills we need to find jobs in those new industries — at least not before they also become obsolete.

Macroeconomics. Right now, though, we are still far from the situation where there’s nothing the unemployed could possibly do. Lots of things that need doing aren’t getting done, even as people who might do them are unemployed: Our roads and bridges are decaying. We need to prepare for climate change by insulating our buildings better and installing more solar panels. The electrical grid is vulnerable and doesn’t let us take advantage of the most efficient power-managing technologies. Addicts who want treatment aren’t getting it. Working parents need better daycare options. Students could benefit from more one-on-one or small-group attention from teachers. Hospital patients would like to see their nurses come around more often and respond to the call buttons more quickly. Many of our elderly are warehoused in inadequately staffed institutions.

Some inadequate staffing we’ve just gotten used to: We expect long lines at the DMV, and that it might take a while to catch a waitress’ eye. In stores, it’s hard to get anybody to answer your questions. But that’s just life, we think.

That combination of unmet needs and unemployed people isn’t a technological problem, it’s an economic problem. In other words, the problem is about money, not about what is or isn’t physically possible. Either the people with needs don’t have enough money to create effective demand in the market, or the workers who might satisfy the needs can’t afford the training they need, or the businessmen who might connect workers with consumers can’t raise the capital to get started.

One solution is for the Federal Reserve to create more money. At Vox, Timothy Lee writes:

When society invents a new technology that makes workers more efficient, it has two options: It can employ the same number of workers and produce more goods and services, or it can employ fewer workers to produce the same number of goods and services.

Jargon-filled media coverage makes this hard to see, but the Federal Reserve plays a central role in this decision. When the Fed pumps more money into the economy, people spend more and create more jobs. If the Fed fails to supply enough cash, then faster technological progress can lead to faster job losses — something we might be experiencing right now.

So if you’re worried that technological progress will lead to mass unemployment — and especially if you think this process is already underway — you should be very interested in what the Federal Reserve does.

Another option is for the government to directly subsidize the people whose needs would otherwise go unmet. That’s what the Affordable Care Act and Medicaid do: They subsidize healthcare for people who need it but otherwise couldn’t afford it, and so create jobs for doctors, nurses, and the people who manufacture drugs, devices, and the other stuff used in healthcare.

Finally, the government can directly invest in industries that otherwise can’t raise capital. The best model here is the New Deal’s investment in the rural electric co-ops that brought electricity to sparsely populated areas. It’s also what happens when governments build roads or mass-transit systems.

When you look at things this way, you realize that our recent job problems have as much to do with conservative macroeconomic policy as with technology. Since Reagan, we’ve been weakening all the political tools that distribute the benefits of productivity: progressive taxation, the social safety net, unions, shorter work-weeks, public education, the minimum wage. And the result has been exactly what we should have expected: For decades, increases in national wealth have gone almost entirely to owners rather than workers.

In short, we’ve been moving back towards Dickensian London.

The long-term jobs problem. But just because the Robot Apocalypse isn’t the sole source of our immediate unemployment problem, that doesn’t mean it’s not waiting in the middle-to-far future. Our children or grandchildren might well live in a world where the average person is economically superfluous, and only the rare genius has any marketable skills.

The main thing to realize about this future is that its problems are more social and psychological than economic. If we can solve the economic problem of distributing all this machine-created wealth, we could be talking about the Garden of Eden, or various visions of hunter-gatherer Heaven. People could spend their lives pursuing pleasure and other forms of satisfaction, without needing to work. But if we don’t solve the distribution problem, we could wind up in Sismondi’s Dystopia, where it’s up to the owners of the automata whether the rest of us live or die.

The solution to the economic problem is obvious: People need to receive some kind of basic income, whether their activities have any market value or not. The obvious question (“Where will the money for this come from?”) has an obvious answer: from the surplus productivity that makes their economic contribution unnecessary. In the same way that we can feed everybody now (and export food) with only 2% of our population working in agriculture, across-the-board productivity could create enough wealth to support everyone at a decent level with only a small number of people working.
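
To put rough numbers on that claim, here’s a back-of-the-envelope sketch in Python. Both figures are illustrative round numbers I’m supplying (GDP of very roughly $57,000 per person in the mid-2010s United States, and a hypothetical $12,000-a-year basic income); they aren’t drawn from any official proposal:

# Back-of-the-envelope: what share of output would a basic income absorb?
# Both figures below are illustrative assumptions, not official statistics.
gdp_per_capita = 57_000   # rough U.S. GDP per person, mid-2010s (assumed)
basic_income = 12_000     # hypothetical annual basic income per person
share = basic_income / gdp_per_capita
print(f"A ${basic_income:,} basic income would take about {share:.0%} of output per person.")

On those assumptions, a universal payment would claim roughly a fifth of output per person, which gives a feel for the scale of surplus the paragraph above is pointing to.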

But the social/psychological problem is harder. Kurt Vonnegut was already exploring it in his 1952 novel Player Piano. People don’t just get money from their work; they also get their identities and sense of self-worth from it. For example, coal miners of that era may not have wanted to spend their days underground breathing coal dust and getting black lung disease, but many probably felt a sense of heroism in making those sacrifices to support their families and give their children better opportunities. If they had all suddenly been replaced by machines and pensioned off, they could have achieved those same results with their pension money. But why, an ex-miner might wonder, should anyone love or appreciate him, rather than just his unearned money?

Like unemployment itself, the idea that the unemployed are worthless goes way back. St. Paul wrote:

This we commanded you, that if any would not work, neither should he eat.

It’s worth noticing, though, that many people are already successfully dealing with this psycho-social problem. Scions of rich families only work if they want to, and many of them seem quite happy. Millions of Americans are pleasantly retired, living off a combination of savings and Social Security. Millions of others are students, who may be working quite hard, but at things that have no current economic value. Housespouses work, but not at jobs that pay wages.

Countless people who have wage-paying jobs derive their identities from some other part of their lives: Whatever they might be doing for money, they see themselves as novelists, musicians, chess players, political activists, evangelists, long-distance runners, or bloggers. Giving them a work-free income would just enable them to do more of what they see as their calling.

Conservative and liberal views of basic income. If you talk to liberals about basic income, the conversation quickly shifts to all the marvelous things they would do themselves if they didn’t have to work. Conservatives may well have similar ambitions, but their attention quickly shifts to other people, who they are sure would lead soulless lives of drunken society-destroying hedonism. (This is similar to the split a century ago over Prohibition: Virtually no one thought that they themselves needed the government to protect them from the temptation of gin, but many believed that other people did.)

So far this argument is almost entirely speculative, with both sides arguing about what they imagine would happen based on their general ideas about human nature. However, we may get some experimental results before long.

GiveDirectly is an upstart charity funded by Silicon Valley money, and it has tossed aside the old teach-a-man-to-fish model of third-world aid in favor of the direct approach: Poor people lack money, so give them money. It plans to provide a poverty-avoiding basic income — about $22 a month — for 12 years to everybody in 40 poor villages in Kenya. Another 80 villages will get a 2-year basic income. Will this liberate the recipients’ creativity? Or trap them in soul-destroying dependence and rob them of self-esteem?

My guess: a little bit of both, depending on who you look at. And both sides will feel vindicated by that outcome. We see that already in American programs like food stamps. For some conservatives, the fact that cheating exists at all invalidates the whole effort; that one guy laughing at us as he eats his subsidized lobster outweighs all the kids who now go to school with breakfast in their stomachs. Liberals may look at the same facts and come to the opposite conclusion: If I get to help some people who really need it, what does it matter if a few lazy lowlifes get a free ride?

So I’ll bet some of the Kenyans will gamble away their money or use it to stay permanently stoned, while others will finally get a little breathing room, escape self-reinforcing poverty traps, and make something of their lives. Which outcome matters to you?
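
For a sense of the sums involved, here’s a rough Python sketch of what each recipient would receive over the life of the experiment, using only the figures quoted above (about $22 a month, for 12 years in one set of villages and 2 years in the other). The flat monthly payment schedule is my assumption; GiveDirectly’s actual disbursement details may differ:

# Rough totals per recipient, using the figures cited above; the flat
# monthly schedule is an assumption for illustration only.
monthly_payment = 22   # dollars per month, as quoted above
for label, years in [("12-year arm", 12), ("2-year arm", 2)]:
    total = monthly_payment * 12 * years
    print(f"{label}: about ${total:,} per person over {years} years")

On those numbers, the long-arm total comes to a few thousand dollars per person spread over 12 years.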

Summing up. In the short run, there will be no Robot Apocalypse, provided we regain our understanding of macroeconomics. But we need to recognize that technological change combines badly with free-market dogma, leading back toward Dickensian London: Comparatively few people own the new technologies, so they capture the benefits while the rest of us lose our bargaining power as we become less and less necessary.

However, we’re still at the point in history where most people’s efforts have genuine economic value, and many things that people could do still need doing. So by using macroeconomic tools like progressive taxation, public investment, and money creation, the economy can expand so that technological productivity leads to more goods and services for all, rather than a drastic loss of jobs and livelihoods for most while a few become wealthy on a previously unheard-of scale.

At some point, though, we’re going to lose our competition with artificial intelligence and go the way of horses — at least economically. Maybe you believe that AIs will never be able to compete with your work as a psychotherapist, a minister, or a poet, but chess masters and truck drivers used to think that too. Sooner or later, it will happen.

Adjusting to that new reality will require not just economic and political change, but social and psychological change as well. Somehow, we will need to make meaningful lives for ourselves in a work-free technological Garden of Eden. When I put it that way, it sounds easy, but when you picture it in detail, it’s not. We will all need to attach our self-respect and self-esteem to something other than pulling our weight economically.

In the middle term, there are things we can do to adjust: We should be on the lookout for other roles, like student and retiree, that give people a socially acceptable story to tell about themselves even if they’re not earning a paycheck. Maybe the academic idea of a sabbatical needs to expand to the larger economy: Whatever you do, you should take a year or so off every decade. “I’m on sabbatical” might become a story more widely acceptable than “I’m unemployed.” College professors and ministers are expected to take sabbaticals; it’s the ones who don’t who have something to explain.

Already-existing trends that shrink the workforce, like retraining mid-career or retiring early, need to be celebrated rather than worried about. In the long run the workforce is going to shrink; that can be either a source of suffering or a cause for rejoicing, depending on how we construct it.

Most of all, we need to re-examine the stereotypes we attach to the unemployed: They are lazy, undeserving, and useless. These stereotypes become self-fulfilling prophecies: If no one is willing to pay me, why shouldn’t I be useless?

Social roles are what we make them. The Bible does not report Adam and Eve feeling useless and purposeless in the Garden of Eden, and I suspect hunter-gatherer tribes that happened onto lands of plentiful game and endless forest handled that bounty relatively well. We could do the same. Or not.

What’s a 21st Century Equivalent of the Homestead Act?

A typical featured article on this blog is supposed to tell my readers something they might not already know, or at least to get them to think about it in a different way. But this time I’m just trying to raise a question, hoping that the combined wisdom and creativity of the readership will come up with stuff I haven’t thought of.

Before I ask the question, some background: One of the most radical things the United States government ever did was pass the Homestead Act (actually the Homestead Acts; there were a series of them). Beginning in 1850, and picking up steam after the Civil War, the government gave away relatively small plots of land — usually 160 acres — to settlers who over a period of five years would build a home on the land, live there, “improve” the land to make it farmable, and then farm it. Wikipedia claims that 10% of the total area of the United States was given away in this manner, to the benefit of 1.6 million families. [1]

I doubt Karl Marx had much influence on the U.S. Congress (though he was writing during this era) and there’s nothing particularly communist about establishing 1.6 million plots of private property. But I like to look at the Homestead Act in the light of the Marxist concept of the means of production. In a nutshell, the means of production is whatever resources are necessary to turn labor into goods and services. So, in a given society at a given state of technology,

Labor + X = Goods and Services

Solve for X, and that’s the means of production. Today, X is complicated: factories and patents and communication systems and whatever. But for most of human history, the means of production had mostly been land. And it still could be, even in the 19th century with its growing industrial economy; if you had fertile land, you could work it and produce sustenance for yourself, plus some extra to trade.

To Marx, the problem of capitalism is that the means of production — land, factories, mines, and so on — wind up privately owned by a fairly small group of people, and everybody else can only get access to the means of production by negotiating with those people. In other words, your productivity is not up to you; you can’t just go work and collect the fruit of your labor, you need an employer to hire you, so that you can have a job and get paid. Your labor only counts if you can get an employer’s permission to use his access to the means of production. Otherwise, you’re like a landless farmer or an auto worker who has been laid off from the factory.

Marx foresaw a vicious cycle: The narrower the ownership of the means of production became, the less bargaining power a worker would have, and the larger the premium an employer could demand in order to grant access. [2] This imbalance in bargaining power would increase the concentration of wealth, making the ownership of the means of production even narrower.

Usually, communists end up talking about state ownership of the means of production, but I want to point out that that’s a method, not a goal. What is really important is universal access to the means of production. State ownership is one way to try to do that, and I’m not sure how many other ways there might be — that’s part of the question here — but the real goal should be access: If all the people who want to work can find a way to turn their effort into goods and services, without needing to make an extortionate deal with some gatekeeper, then we’re on to something.

Now let’s return to the Homestead Act. What it did was vastly increase the number of Americans with access to the means of production. Mind you, it didn’t establish universal access — if you were a freedman sharecropping in Georgia, or were making pennies an hour in some dangerous factory in Connecticut, you had little prospect of assembling a big enough stake to go out West and homestead for five years — but it was vastly expanded access.

So now you’re in a position to understand what I’m asking: What would do that now? What change could we make (where “we” includes but is not necessarily limited to the federal government) that would vastly increase access to whatever the means of production is today?


[1] Probably most of you have already realized that this was an example of robbing Peter to pay Paul. The only reason the U.S. government had all this land to give was that they were in the process of stealing it from the Native Americans.

I would argue that at this point the decision to rob Peter had already been made; I doubt any major figure in the government saw much future for the Native Americans other than being pushed back onto reservations or annihilated. However we do the moral calculations today, at the time Congress saw itself with the power (and even the right, though don’t ask me to defend it) to dispose of that land however it wanted.

Given that robbery-in-progress, I think the decision to pay Paul is still remarkable. It certainly wasn’t the only thing Congress could have done. The government could have applied the Spanish model, and created a bunch of large haciendas to be controlled by a wealthy elite. Or it could have applied the English model, and granted the land in huge swathes to public/private companies like the East India Company or the Virginia Company, who could develop it for profit. What it did instead created a middle class of small landowners rather than an aristocracy or a managerial elite.

[2] Workers don’t usually pay an explicit “premium for access to the means of production”, but it’s implicit when a profitable business pays low wages: Money comes in and the owner keeps the lion’s share. If you don’t like it, go get another job.

One way to read the productivity vs. wages graphs I post every few months is that access premiums have been growing since the mid-1970s, and really started to accelerate in the mid-1980s.

The Election Is About the Country, Not the Candidates

Citizens shouldn’t let the media make us forget about ourselves.


Judging by the amount of media attention they got, these were the most important political stories of the week: Donald Trump and Bernie Sanders agreed to debate, but then Trump backed out, leading Sanders supporters to launch the #ChickenTrump hashtag. A report on Hillary Clinton’s emails came out. A poll indicated that the California primary is closer than previously thought. Trump’s delegate total went over 50%. Elizabeth Warren criticized Trump, so he began calling her “Pocahontas”. Sanders demanded that Barney Frank be removed as the chair of the DNC’s platform committee. Trump told a California audience that the state isn’t in a drought and has “plenty of water”. Trump accused Bill Clinton of being a rapist, and brought up the 1990s conspiracy theory that Vince Foster was murdered. President Obama said that the prospect of a Trump presidency had foreign leaders “rattled”, and Trump replied that “When you rattle someone, that’s good.” Clinton charged that Trump had been rooting for the 2008 housing collapse. Pundits told us that the tone of the campaign was only going to get worse from here; Trump and Clinton have record disapproval ratings for presidential nominees, and so the debate will have to focus on making the other one even more unpopular.

If you are an American who follows political news, you probably heard or read most of these stories, and you may have gotten emotionally involved — excited or worried or angry — about one or more of them. But if at any time you took a step back from the urgent tone of the coverage, you might have wondered what any of it had to do with you, or with the country you live in. The United States has serious issues to think about and serious decisions to make about what kind of country it is or wants to be. This presidential election, and the congressional elections that are also happening this fall, will play an important role in those decisions.

That’s why I think it’s important, both in our own minds and in our interactions with each other, to keep pulling the discussion back to us and our country. The flaws and foibles and gaffes and strategies of the candidates are shiny objects that can be hard to ignore, and Trump in particular is unusually gifted at drawing attention. But the government of the United States is supposed to be “of the People, by the People, and for the People”. It’s supposed to be about us, not about them.

As I’ve often discussed before, the important issues of our country and how it will be governed, of the decisions we have to make and the implications those decisions will have, are not news in the sense that our journalistic culture understands it. Our sense of those concerns evolves slowly, and almost never changes significantly from one day to the next. It seldom crystallizes into events that are breaking and require minute-to-minute updates. At best, a breaking news event like the Ferguson demonstrations or the Baltimore riot will occasionally give journalists a hook on which to hang a discussion of an important issue that isn’t news, like our centuries-long racial divide. (Picture trying to cover it without the hook: “This just in: America’s racial problem has changed since 1865 and 1965, but it’s still there.”)

So let’s back away from the addictive soap opera of the candidates and try to refocus on the questions this election really ought to be about.

Who can be a real American?

In the middle of the 20th century (about the time I was born), if you had asked people anywhere in the world to describe “an American”, you’d have gotten a pretty clear picture: Americans were white and spoke English. They were Christians (with a few Jews mixed in, but they were assimilating and you probably couldn’t tell), and mostly Protestants. They lived in households where two parents — a man and a woman, obviously — were trying (or hoping) to raise at least two children. They either owned a house (that they probably still owed money on) or were saving to buy one. They owned at least one car, and hoped to buy a bigger and better one soon.

If you needed someone to lead or speak for a group of Americans, you picked a man. American women might get an education and work temporarily as teachers or nurses or secretaries, but only until they could find a husband and start raising children.

Of course, everyone knew that other kinds of people lived in America: blacks, obviously; Hispanics and various recent immigrants whose English might be spotty; Native Americans, who were still Indians then; Jews who weren’t assimilating and might make a nuisance about working on Saturday, or even wear a yarmulke in public; single people who weren’t looking to marry or raise children (but might be sexually active anyway); women with real careers; gays and lesbians (but not transgender people or even bisexuals, whose existence wasn’t recognized yet); atheists, Muslims, and followers of non-Biblical religions; the homeless and others who lived in long-term poverty; folks whose physical or mental abilities were outside the “normal” range; and so on.

But they were Americans-with-an-asterisk. Such people weren’t really “us”, but we were magnanimous enough to tolerate them living in our country — for which we expected them to be grateful.

Providing services for the “real” Americans was comparatively easy: You could do everything in English. You didn’t have to concern yourself with handicapped access or learning disabilities. You promoted people who fit your image of a leader, and didn’t worry about whether that was fair. You told whatever jokes real Americans found funny, because anybody those jokes might offend needed to get a sense of humor. The schools taught white male history and celebrated Christian holidays. Every child had two married parents, and you could assume that the mother was at home during the day. Everybody had a definite gender and was straight, so if you kept the boys and girls apart you had dealt with the sex issue.

If those arrangements didn’t work for somebody, that was their problem. If they wanted the system to work better for them, they should learn to be more normal.

It’s easy to imagine that this mid-20th-century Pleasantville America is ancient history now, but it is within living memory and still figures as an ideal in many people’s minds. Explicitly advocating a return to those days is rare. But that desire isn’t gone; it’s just underground.

For years, that underground nostalgia has figured in a wide variety of political issues. But it has been the particular genius of Donald Trump to pull them together and bring them as close to the surface as possible without making an explicit appeal to turn back the clock and re-impose the norms of that era. “Make America great again!” doesn’t exactly promise a return to Pleasantville, but for many people that’s what it evokes.

What, after all, does the complaint about political correctness amount to once you get past “Why can’t I get away with behaving like my grandfather did?”

We can picture rounding up and deporting undocumented Mexicans by the millions, because they’re Mexicans. They were never going to be real Americans anyway. Ditto for Muslims. It would have been absurd to stop letting Italians into the country because of Mafia violence, or to shut off Irish immigration because of IRA terrorism. But Muslims were never going to be real Americans anyway, so why not keep them out? (BTW: As I explained a few weeks ago, the excuse that the Muslim ban is “temporary” is bogus. If nobody can tell you when or how something is going to end, it’s not temporary.)

All the recent complaints about “religious liberty” fall apart once you dispense with the notion that Christian sensibilities deserve more respect than non-Christian ones, or that same-sex couples deserve less respect than opposite-sex couples.

On the other side, Black Lives Matter is asking us to address that underground, often subconscious, feeling that black lives really aren’t on the same level as white lives. If a young black man is dead, it just doesn’t have the same claim on the public imagination — or on the diligence of the justice system — that a white death would. How many black or Latina girls vanish during a news cycle that obsesses over some missing white girl? (For that matter, how many white presidents have seen a large chunk of the country doubt their birth certificates, or have been interrupted during State of the Union addresses by congressmen shouting “You lie!”?)

But bringing myself back to the theme: The issue here isn’t Trump, it’s us. Do we want to think of some Americans as more “real” than others, or do we want to continue the decades-long process of bringing more Americans into the mainstream?

That question won’t be stated explicitly on your ballot this November, like a referendum issue. But it’s one of the most important things we’ll be deciding.

What role should American power play in the world?

I had a pretty clear opinion on that last question, but I find this one much harder to call.

The traditional answer, which goes back to the Truman administration and has existed as a bipartisan consensus in the foreign-policy establishment ever since, is that American power is the bedrock on which to build a system of alliances that maintains order in the world. The archetype here is NATO, which has kept the peace in Europe for 70 years.

That policy involves continuing to spend a lot on our military, and risks getting us involved in wars from time to time. (Within that establishment consensus, though, there is still variation in how willing we should be to go to war. The Iraq War, for example, was a choice of the Bush administration, not a necessary result of the bipartisan consensus.) The post-Truman consensus views America as “the indispensable nation”; without us, the world community lacks both the means and the will to stand up to rogue actors on the world stage.

A big part of our role is in nuclear non-proliferation. We intimidate countries like Iran out of building a bomb, and we extend our nuclear umbrella over Japan so that it doesn’t need one. The fact that no nuclear weapon has been fired in anger since 1945 is a major success of the establishment consensus.

Of our current candidates, Hillary Clinton (who as Secretary of State negotiated the international sanctions that forced Iran into the recent nuclear deal) is the one most in line with the foreign policy status quo. Bernie Sanders is more identified with strengthened international institutions which — if they could be constructed and work — would make American leadership more dispensable. To the extent that he has a clear position at all, Donald Trump is more inclined to pull back and let other countries fend for themselves. He has, for example, said that NATO is “obsolete” and suggested that we might be better off if Japan had its own nuclear weapons and could defend itself against North Korea’s nukes. On the other hand, he has also recently suggested that we bomb Libya, so it’s hard to get a clear handle on whether he’s more or less hawkish than Clinton.

Should we be doing anything about climate change?

Among scientists, there really are two sides to the climate-change debate: One side believes that the greenhouse gases we are pumping into the atmosphere threaten to change the Earth’s climate in ways that will cause serious distress to millions or even billions of people, and the other side is funded by the fossil fuel industry.

It’s really that simple. There are honest scientific disagreements about the pace of climate change and its exact mechanisms, but the basic picture is clear to any scientist who comes to the question without a vested interest: Burning fossil fuels is raising the concentration of greenhouse gases in the atmosphere. An increase in greenhouse gases causes the Earth to radiate less heat into space. So you would expect to see a long-term warming trend since the Industrial Revolution got rolling, and in fact that’s what the data shows — despite the continued existence of snowballs, which has been demonstrated by a senator funded by the fossil fuel industry.

Unfortunately, burning fossil fuels is both convenient and fun, at least in the short term. And if you don’t put any price on the long-term damage you’re doing, it’s also economical. In reality, doing nothing about climate change is like going without health insurance or refusing to do any maintenance on your house or car. Those decisions can improve your short-term budget picture, which now might have room for that Hawaiian vacation your original calculation said you couldn’t afford. Your mom might insist that you should account for your risk of getting sick or needing some major repair, but she’s always been a spoilsport.

That’s the debate that’s going on now. If you figure in the real economic costs of letting the Earth get hotter and hotter (dealing with tens of millions of refugees from regions that will soon be underwater, building a seawall around Florida, moving our breadbasket from Iowa to wherever the temperate zone is going to be in 50 years, rebuilding after the stronger and more frequent hurricanes that are coming, and so on), then burning fossil fuels is really, really expensive. But if you decide to let future generations worry about those costs and just get on with enjoying life now, then coal and oil are still cheap compared to most renewable energy sources.

So what should we do?

Unfortunately, nobody has come up with a good way to re-insert the costs of climate change into the market without involving government, or to do any effective mitigation without international agreements among governments, of which the recent Paris Agreement is just a baby step in the right direction. And to one of our political parties, government is a four-letter word and world government is an apocalyptic horror. So the split inside the Republican Party is between those who pretend climate change isn’t happening, and those who think nothing can or should be done about it. (Trump is on the pretend-it-isn’t-happening side.)

President Obama has been taking some action to limit greenhouse gas emissions, but without cooperation from Congress his powers are pretty limited. (It’s worth noting how close we came to passing a cap-and-trade bill to put a price on carbon before the Republicans took over Congress in 2010. What little Obama’s managed to do since may still get undone by the Supreme Court, particularly if its conservative majority is restored.)

Both Clinton and Sanders take climate change seriously. As is true across the board, Sanders’ proposals are simpler and more sweeping (like “ban fracking”) while Clinton’s are wonkier and more complicated. (In a debate, she listed the problems with fracking — methane leaks, groundwater pollution, earthquakes — and proposed controlling them through regulation. She concluded: “By the time we get through all of my conditions, I do not think there will be many places in America where fracking will continue to take place.”) But like Obama, neither of them will accomplish much if we can’t flip Congress.

Trump, meanwhile, is doing his best impersonation of an environmentalist’s worst nightmare. He thinks climate change is a hoax, wants to reverse President Obama’s executive orders to limit carbon pollution, has pledged to undo the Paris Agreement, and promises to get the country back to burning more coal.

How should we defend ourselves from terrorism?

There are two points of view on ISIS and Al Qaeda-style terrorism, and they roughly correspond to the split between the two parties.

From President Obama’s point of view, the most important thing about the fight against terrorism is to keep it contained. Right now, a relatively small percentage of the world’s Muslims support ISIS or Al Qaeda, while the vast majority are hoping to find a place for themselves inside the world order as it exists. (That includes 3.3 million American Muslims. If any more than a handful of them supported terrorism, we’d be in serious trouble.) We want to keep tightening the noose on ISIS in Iraq and Syria, and keep closing in on terrorist groups elsewhere in the world, while remaining on good terms with the rest of the Muslim community.

From this point of view — which I’ve described in more detail here and illustrated with an analogy here — the worst thing that could happen would be for these terrorist incidents to touch off a world war between Islam and Christendom.

The opposite view, represented not just by Trump but by several of the Republican rivals he defeated, is that we are already in such a war, so we should go all out and win it: Carpet bomb any territory ISIS holds, without regard to civilian casualties. Discriminate openly against Muslims at home and ban any new Muslims from coming here.

Like Obama, I believe that the main result of these policies would be to convince Muslims that there is no place for them in a world order dominated by the United States. Rather than a few dozen pro-ISIS American terrorists, we might have tens of thousands. If we plan to go that way, we might as well start rounding up 3.3 million Americans right now.

Clinton and Sanders are both roughly on the same page with Obama. Despite being Jewish and having lived on a kibbutz, Sanders is less identified with the current Israeli government than either Obama or Clinton, to the extent that makes a difference.

Can we give all Americans a decent shot at success? How?

Pre-Trump, Republicans almost without exception argued that all we need to do to produce explosive growth and create near-limitless economic opportunity for everybody is to get government out of the way: Lower taxes, cut regulations, cut government programs, negotiate free trade with other countries, and let the free market work its magic. (Jeb Bush, for example, argued that his small-government policies as governor of Florida — and not the housing bubble that popped shortly after he left office — had led to 4% annual economic growth, so similar policies would do the same thing for the whole country.)

Trump has called this prescription into question.

If you think about it, the economy is rigged, the banking system is rigged, there’s a lot of things that are rigged in this world of ours, and that’s why a lot of you haven’t had an effective wage increase in 20 years.

However, he has not yet replaced it with any coherent economic view or set of policies. His tax plan, for example, is the same sort of let-the-rich-keep-their-money proposal any other Republican might make. He promises to renegotiate our international trade agreements in ways that will bring back all the manufacturing jobs that left the country over the last few decades, but nobody’s been able to explain exactly how that would work.

At least, though, Trump is recognizing the long-term stagnation of America’s middle class. Other Republicans liked to pretend that was all Obama’s fault, as if the 2008 collapse hadn’t happened under Bush, and — more importantly — as if the overall wage stagnation didn’t date back to Reagan.

One branch of liberal economics, the one that is best exemplified by Bernie Sanders, argues that the problem is the over-concentration of wealth at the very top. This can devolve into a the-rich-have-your-money argument, but the essence of it is more subtle than that: Over-concentration of wealth has created a global demand problem. When middle-class and poor people have more money, they spend it on things whose production can be increased, like cars or iPhones or Big Macs. That increased production creates jobs and puts more money in the pockets of poor and middle-class people, resulting in a virtuous demand/production/demand cycle that is more-or-less the definition of economic growth.

By contrast, when very rich people have more money, they are more likely to spend it on unique items, like van Gogh paintings or Mediterranean islands. The production of such things can’t be increased, so what we see instead are asset bubbles, where production flattens and the prices of rare goods get bid higher and higher.

For the last few decades, we’ve been living in an asset-bubble world rather than an economic-growth world. The liberal solution is to tax that excess money away from the rich, and spend it on things that benefit poor and middle-class people, like health care and infrastructure.

However, there is a long-term problem that neither liberal nor conservative economics has a clear answer for: As artificial intelligence creeps into our technology, we get closer to a different kind of technological unemployment than we have seen before, in which people of limited skills may have nothing they can offer the economy. (In A Farewell to Alms Gregory Clark makes a scary analogy: In 1901, the British economy provided employment for 3 million horses, but almost all those jobs have gone away. Why couldn’t that happen to people?)

As we approach that AI-driven world, the connection between production and consumption — which has driven the world economy for as long as there has been a world economy — will have to be rethought. I don’t see anybody in either party doing that.


So what major themes have I left out? Put them in the comments.