Monday, May 25, 2015

Is Growth Understated?

Martin Feldstein has a nice op-ed in The Wall Street Journal arguing that the Bureau of Economic Analysis is understating GDP growth because of difficulties in adjusting for quality improvements and new products. It goes well with recent technical reports from Goldman Sachs and the Fed's Board of Governors. And read Paul Krugman for skepticism on whether technological progress is a big deal.

Here is a closely-related claim: Free access to Internet utilities like Google and Facebook means that market-based consumption growth understates growth in total consumption and therefore GDP growth understates gains in social welfare.

The right way to think about this claim is in terms of "household production." My ability to use Google and Facebook doesn't require additional spending, just additional time. Spending time on the Internet rather than buying the newspaper, therefore, is functionally similar to making a sandwich at home from cold cuts in the fridge rather than buying a ready-made one at the deli.

Consider, then, the idea that these free Internet utilities are becoming more important, more powerful, more valuable, or whatever. That's identical to an improvement in my sandwich-making skills. And we would think that, as I become a better sandwich-maker, I will substitute home production for market goods by reducing my consumption of deli sandwiches. In particular, I'll cut back until the next deli sandwich is worth as much to me as the next homemade one.

The implication is that, if the marginal value of time on the Internet is actually rising due to Google, Facebook, and similar utilities, we should be seeing substitution away from the relevant alternative uses of time.

Do we? Yes, from Business Insider:



Consumers are substituting digital media, much of it free, for media sources they pay for, like TV and print. Maybe, then, we should take the omission of free goods seriously, too, when we consider the divergence of GDP from a fuller, hypothetical measure of social welfare.

Thursday, May 14, 2015

Macro Mysteries and Non-Mysteries

There has been an interesting, if rather theoretical, debate between Roger Farmer, Brad DeLong, Paul Krugman, and John Cochrane. The gist of it is simple enough: Is the current standard toolkit of macroeconomic models enough to explain the 2008 recession and limp recovery?

So that all blog-readers are on the same page, Keynesian macroeconomics has rallied around a certain framework since the 1980s. You start with a very classical model of the economy -- an economy that is always at potential, always has the right prices, and always has efficient allocations of resources -- and add some frictions, usually sticky prices or some sort of borrowing constraint. The result is a model where business cycles happen (and can be very severe) but where, eventually, the economy returns to potential. Krugman largely defends this theoretical tradition or, more precisely, a more primitive version of it.

This is not what Roger Farmer wants. Instead, Farmer wants economists to be thinking about models in which "potential" is not well defined -- that is, where it is very much possible for the economy to find equilibrium at many different levels of production. In short, Farmer wants ideas like multiple equilibria, nonlinearity, and self-fulfilling expectations back on the theoretical agenda. And, on the empirical side, Farmer has been trying to show that we see these phenomena in key economic variables like unemployment and output.

In moderating the debate, DeLong faults Krugman's defense of the standard toolkit and argues that Farmer deserves some credit. The standard toolkit, DeLong contends, doesn't get the size of the recession right:
When I look at the size of the housing bubble that triggered the Lesser Depression from which we are still suffering, it looks at least an order of magnitude too small to be a key cause... To put it bluntly: Paul is wrong because the magnitude of the financial accelerator in this episode cries out for a model of multiple--or a continuous set of--equilibria. And so Roger seems to me to be more-or-less on the right track.
I do not think DeLong is correct when he says that the magnitudes come out wrong. My sense has been that the standard toolkit -- with the financial accelerator and sticky prices -- actually does get it right. It follows that, at the moment, we do not have compelling evidence that the stuff Farmer wants to put into macroeconomic models is needed.

Matteo Iacoviello, for instance, showed back in 2005 that textbook financial-accelerator models match what we see in the data. There's no mystery to be solved about why declines in home prices have such severe, protracted effects on economic growth. More recently, Atif Mian and Amir Sufi have put forward a lot of evidence that the hit to household balance sheets during the 2008 recession explains the decline in employment. For my part, I am doing some work to extend this line of inquiry to Spain's housing bubble, with some initial results showing that the boom and bust in mortgage lending, driven by wholesale finance, fully explains the boom and bust in housing prices.

Simon Gilchrist and Egon Zakrajšek have shown something similar is true in corporate bonds -- a financial market that, when hit with an adverse shock, propagates the shock into corporate investment and employment. Daniel Leigh and an army of economists at the International Monetary Fund have shown that, across the set of developed economies, the drop and sluggish recovery in business investment also lines up with the predictions of the textbook model.

Another approach is to put these financial frictions into a more developed model of the economy's structure, as in some recent work by Marco Del Negro, Marc Giannoni, and Frank Schorfheide. When you hit that model economy with the kind of shocks that preceded the 2008 recession, the downturn that pops out of the model looks quite a lot like the actual one.

I am not trying to say here that the 2008 recession raises no interesting questions. It does. But I think that a review of the empirical research would suggest that "why was the downturn so severe?" and "why has the recovery been so weak?" are not among them. When DeLong and Farmer say that our theoretical framework is insufficient to explain the evidence, I do not know what evidence they have in mind.

Farmer does some informal statistical work to try to show that real output drifts rather than returns to a trend. The problem with this argument is that, when you separate out permanent and transient shocks -- something Farmer doesn't do -- the transient ones look like shocks to demand, the permanent ones to supply. (Cochrane's post has a lot more to say on these statistical issues.)
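To make the statistical question concrete, here is a minimal sketch -- my own illustration, not the exercise Farmer or Cochrane actually ran -- of how a series that drifts differs from one that returns to trend, using simulated data and an augmented Dickey-Fuller test (the series, parameters, and packages are my assumptions):

```python
# Illustrative sketch (not Farmer's or Cochrane's actual exercise): simulate two
# log-output series -- a random walk with drift, where shocks are permanent, and
# a trend-stationary series, where output returns to "potential" -- and run an
# augmented Dickey-Fuller test on each. Assumes numpy and statsmodels.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 400
shocks = rng.normal(scale=0.01, size=T)

# Random walk with drift: every shock shifts the level of output permanently.
drifting = np.cumsum(0.005 + shocks)

# Trend-stationary: shocks decay geometrically, so output reverts to a linear trend.
transient = np.convolve(shocks, 0.9 ** np.arange(50))[:T]
trend_reverting = 0.005 * np.arange(T) + transient

for name, series in [("random walk with drift", drifting),
                     ("trend-reverting", trend_reverting)]:
    stat, pvalue, *_ = adfuller(series, regression="ct")  # allow constant + trend
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A small p-value rejects a unit root: the drifting series should typically fail
# to reject, while the trend-reverting one should reject.
```

With actual output data, of course, the two cases are much harder to tell apart in a finite sample.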

Farmer might find some stronger evidence for his view that "potential" is a nebulous concept in some fascinating new work by Larry Ball, which compares the revision of estimates of potential output to the actual downturn in output. Where the downturn was worse, Ball shows, the loss of potential has been worse. However, there's some (very different) evidence from the bombings of Japan and Vietnam showing that long-run economic potential is almost indestructible.

Trying to find solid footing on this issue will be a challenge. It's terribly difficult, from the standpoint of research, to show that short-run fluctuations transmit into long-run catastrophes. "Permanent" is hard to distinguish from "long-lasting."

My feeling, then, is that the heat in this debate is pretty misplaced. We have a mountain of evidence showing that financial shocks can generate long-lasting, deep recessions -- and yet we are only at the beginning when it comes to understanding whether recessions do permanent damage, let alone how much. Why don't we start there?

Sunday, May 10, 2015

Who's The Best Candidate?

Martin O'Malley: If drafted, I will run; but if nominated, I will probably be a disaster.

Without a doubt, active prediction markets are one of the best parts of political elections. Observers of prediction-market activity can learn a lot about how politics works.

I'm curious whether prediction markets can answer an important political question in the U.S.: Is someone a good candidate for president? It turns out they can.

Prediction markets give us probabilities that candidates will win the Republican and Democratic nominations for president and the probability that they will win the general election. If we assume that candidates compete for only one nomination and cannot stage a third-party run if they do not win -- that is, no John Andersons allowed -- then we can easily estimate the probability of them winning the general election, conditional on winning the party nomination.*

When political pundits discuss whether someone is a good candidate for president, I think this conditional probability is exactly what they mean.

Taking these two probabilities from three different prediction markets -- PredictWise, Betfair, and PredictIt -- I am able to estimate this "competitiveness" score for nine top contenders for the Republican and Democratic nominations: Jeb Bush, Marco Rubio, Scott Walker, Rand Paul, and Chris Christie for the Republicans, and Hillary Clinton, Elizabeth Warren, Joe Biden, and Martin O'Malley for the Democrats. (Some technical notes can be found below.)

Here's what I find: The best Republican candidate is Jeb Bush, who has a 67-percent chance of winning the general election if he wins the nomination. The worst Republican candidate is Scott Walker, who has a 44-percent chance.

Among Democrats, Joe Biden and Hillary Clinton are nearly tied for the top candidate, with 58-percent and 57-percent chances of general-election victory if either secures the nomination. With a 24-percent chance, Martin O'Malley is the worst Democratic candidate.

You can see the full table of results here:



It's worth noting here that, at the party level, prediction markets estimate a 58-percent chance of a Democrat winning the presidency and a 42-percent chance of a Republican win. So comparing a candidate's conditional probability with the party's overall probability gives you a sense of how good, say, Jeb Bush is as a candidate relative to the Republican field.

I found the results pretty surprising. They suggest that Rand Paul is a viable general-election candidate, that Elizabeth Warren and Scott Walker are pretty overrated, and that "Bush fatigue" is fake. I was also surprised, in general, by how closely clustered the top candidates were -- one take-away is that the candidate matters less than you might think.

On the other hand, the prediction markets think that the rest of the field is remarkably weak. Another take-away for the parties, then, might be: Nominate one of these candidates, or you will get crushed. The weakness of the rest of the field also helps explain why many of the top candidates can have better than 50-50 odds of winning the general election if they win their party's nomination, even when their party's overall odds are lower.

What might differentiate, say, Jeb Bush from Scott Walker in this conditional probability? I'll mostly leave that to the pundits. Yet Andy Hall, a young political scientist at Harvard, has recently found compelling evidence that political extremism hurts candidates' chances in general elections.

Another possibility is that these conditional probabilities aren't a perfect measure of competitiveness. If some of these candidates win the nomination, you've got to imagine that they got lucky -- Biden, for instance, trails Clinton in his chance of winning the Democratic nomination -- and so there's a sense in which this conditional probability is premised on "something good" happening to the candidate.

I would also remind readers about the "no-John-Andersons" assumption. If a candidate could stage a viable third-party race -- one might imagine this for Warren or Paul -- then my estimates might be a bit low.

Assessing the viability of presidential candidates is too important to be left to polling and pundits. Prediction markets can shed some light on whether a candidate has a shot in the general election if they win their party's nomination. 

----

* I will step through the math. By the law of total probability:

P(wins election) = P(wins election | wins nomination)*P(wins nomination)
                 + P(wins election | ! wins nomination)*P(! wins nomination),


and then, by the assumption that P(wins election | ! wins nomination) = 0:

P(wins election) = P(wins election | wins nomination)*P(wins nomination)

and therefore

P(wins election | wins nomination) = P(wins election) / P(wins nomination).

** Two technical notes:

(1) Since prediction markets for both the nomination and the general election do not exist for all candidates, I wasn't able to go further than the top names. Another issue, for some long shots, was that the probabilities are coarsely estimated -- that is, if a candidate has about a 2-percent chance of winning the nomination, whether that 2 percent is really 2.4 percent or 1.6 percent matters, and I do not have that level of precision. So I excluded candidates that prediction markets see as long shots.

(2) Prediction markets are Dutch-booked -- the implied probabilities sum to more than one -- to ensure a profit for the market maker. To correct for this, I re-based the relevant probabilities so that they summed to one.
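For concreteness, here is a minimal sketch of the calculation in the footnotes, with made-up numbers standing in for the market quotes (the candidate labels and probabilities below are hypothetical, not actual PredictWise, Betfair, or PredictIt prices):

```python
# Sketch of the footnoted calculation:
#   P(wins election | wins nomination) = P(wins election) / P(wins nomination),
# after re-basing the nomination market so its implied probabilities sum to one.
# All names and figures below are hypothetical, not actual market quotes.

def rebase(probs):
    """Scale implied probabilities so they sum to one, removing the market maker's margin."""
    total = sum(probs.values())
    return {name: p / total for name, p in probs.items()}

# Hypothetical implied chances of winning one party's nomination (sums to 1.10).
nomination = rebase({
    "Front-runner": 0.44,
    "Challenger": 0.33,
    "Long shot": 0.33,
})

# Hypothetical chances of winning the general election outright, assumed
# already re-based over the full field of candidates from both parties.
election = {
    "Front-runner": 0.26,
    "Challenger": 0.15,
    "Long shot": 0.07,
}

# Conditional "competitiveness" score, assuming no third-party runs.
for name in nomination:
    score = election[name] / nomination[name]
    print(f"{name}: {score:.0%} chance of winning the general election, if nominated")
```

In this toy example the party's overall chance is 26 + 15 + 7 = 48 percent, while the front-runner's conditional chance is 65 percent -- the same pattern as the gap between Bush's 67 percent and the Republicans' 42 percent.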

Tuesday, May 5, 2015

Today's Links

1. My good friends Daniel Yu and Jason Kang are up to amazing things. Yu runs Reliefwatch, a tech startup that helps clinics in the developing world avoid shortages of medical supplies. Kang helped to design Highlight, a powdered bleach additive that makes decontamination against infectious disease much easier.

2. The IMF has released a new dataset on capital controls. And here's an amazing resource that explains how to use basically any major survey dataset.

3. Graph: How the market value of tech firms has evolved from 1980 to present.

4. When the State Speaks, What Should It Say? There is a lot that this book (by Corey Brettschneider) can bring to bear on recent debates about whether and how the government should intervene against private discrimination.

Saturday, May 2, 2015

Student Loans and the Next Crisis

These are boom times for student debt. Now 10 percent of all household debt, it has nearly doubled in the past five years. That growth comes entirely from new lending by the federal government.



Recent federal efforts to shut down for-profit colleges, a major source of demand for student loans, may not be enough. The federal takeover of the student-loan industry in 2010 was a shock to the supply of student loans, just like the boom in private securitization was to residential mortgages during the 2000s. We have seen this movie before. If the government is to prevent either another debt-driven economic downturn or large write-offs at the taxpayer's expense, the supply of student loans must pull back.

This will be difficult, and not only because students are a sympathetic bunch. The more meaningful obstacle will be government itself. If the Fed -- which, since the financial crisis, has been responsible for such regulation -- is to regulate the supply of student loans, it will have to knock on the Department of Education's door. And the Department of Education doesn't think of itself as a source of macroeconomic risk, even though it should.

The worry is that government does this sort of self-regulation badly. Think of Fannie Mae and Freddie Mac, which enjoyed government backing in the boom and then imploded in the bust. A more apt analogy may be the many developing countries where the state and state-run institutions are a large and direct source of credit and not merely of loan guarantees, as were Fannie and Freddie.

Without careful thought to the design of institutions, state-run lending has a tendency toward unpleasant endings. Inattentive to risk, the government starts a credit boom. It then fails to rein it in, because prudence never attracts a political constituency. When the bust comes, government faces an unattractive choice: It can forgive debts at great cost to taxpayers, or it can leave the borrowers saddled and plunge the economy deeper into distress.

Why are student loans a danger? More than a third of young families now have student debt and carry a median balance of $17,000, according to the 2013 Survey of Consumer Finances, up from a fifth of households with a median balance of $10,000. Those with bachelor's degrees pay 6.5 percent of their annual income in student-debt service. Student loans surpassed auto loans as a source of household debt in 2010, though they remain a smaller source than mortgages. And delinquency rates are up. If the level of student debt does not make it a major risk today, its explosive growth guarantees it will be in a few years.

The good news, say the defenders of the student-loan boom, is that government does not face any credit risk. They are correct in that student loans are not dischargeable in bankruptcy but wrong in a more substantive sense. If student-loan defaults were to soar, they would be the first ones calling for government to forgive the loans so that the economy avoids a recession. So the option of collecting on federally-owned student debt via the tax system -- the final recourse that the government has -- may not be a viable one if the debt boom continues.

A stronger defense is that young people are swapping student debt for other forms of debt, such as auto loans and mortgages. In fact, more detailed data from the Survey of Consumer Finances show that debt burdens for young households are down since 2010. Those with student debt also tend to be pretty well off -- for the most part, they have college degrees -- and so maybe they can handle it. Yet much of the decline in home and auto loans is cyclical. It's not clear whether the growth in student loans will look so much like a "swap" when these other forms of borrowing return.

Better to get the institutions right, then, when we can. To its credit, the Obama administration has worked hard to establish rules that prevent the government from financing educations at institutions that don't improve their students' job prospects. Yet these rules are "microprudential," to borrow regulatory terminology that has so far been applied only to banks. Student lending is fast becoming a macroprudential concern.

Friday, April 17, 2015

Today's Links

1. Resource conflicts of the future.

2. This is what it looks like when some of the world's best development economists spar in a blog post comment section.

3. Congress will grant trade promotion authority, conditional on labor and environmental standards. My sense has always been that economists are pretty unsure as to whether such standards actually help the developing world. Here, for example, is the first real piece of research on the effects of child-labor bans -- and the result is that the ban actually caused child labor to increase and child wages to fall. On the other hand, Nancy Birdsall argues that developing countries end up "importing" the environmental standards of their richer trading partners -- which makes you think that maybe these standards don't undermine developing countries.

4. I did not realize that Alan Greenspan wrote papers on how to measure mortgage-equity withdrawals. Makes you wonder how he missed the housing bubble, given that those withdrawals rose to 8 percent of disposable income in the 2000s.