Friday, August 31, 2012

Effects of Health Insurance: Randomized Evidence from Oregon

How might providing health insurance affect people along various dimensions, like how much health care they consume, their financial well-being, and their actual health? As health care economists have long recognized, this question is a lot tougher to answer than one might at first think. The basic analytical problem is that you can't just compare averages for those who have health insurance and those who don't, because these groups are different in fundamental ways. For example, those with private sector health insurance in the U.S. tend to get it through their employers, so they tend to be people of prime working ages who hold jobs, or those who get government health insurance for the elderly (Medicare) or the poor (Medicaid). It's easy to imagine cases of people who have a hard time holding a job because they have poor health, and thus don't have employer-provided health insurance. For these people, their poor health leads to a lack of health insurance, but wasn't primarily caused by their lack of health insurance. When the variables are interrelated like this, it's hard to sort out cause and effect.

From a social science research perspective, the ideal experiment would be to take a large group of people, divide them randomly, give health insurance to one group but not the other, and then study the results. However, in the real world, such randomized experiments are quite rare. The one classic example is the Rand Health Insurance Experiment (HIE) conducted in the 1970s: for an overview written in 2010 with some applications to the health care debate, see here.

"The HIE was a large-scale, randomized experiment conducted between 1971 and 1982. For the study, RAND recruited 2,750 families encompassing more than 7,700 individuals, all of whom were under the age of 65. They were chosen from six sites across the United States to provide a regional and urban/rural balance. Participants were randomly assigned to one of five types of health insurance plans created specifically for the experiment. There were four basic types of fee-for-service plans: One type offered free care; the other three types involved varying levels of cost sharing — 25 percent, 50 percent, or 95 percent coinsurance (the percentage of medical charges that the consumer must pay). The fifth type of health insurance plan was a nonprofit, HMO-style group cooperative. Those assigned to the HMO received their care free of charge. For poorer families in plans that involved cost sharing, the amount of cost sharing was income-adjusted to one of three levels: 5, 10, or 15 percent of income. Out-of-pocket spending was capped at these percentages of income or at $1,000 annually (roughly $3,000 annually if adjusted from 1977 to 2005 levels), whichever was lower. ... Families participated in the experiment for 3–5 years."

The basic lesson of the Rand experiment, which has been the gold standard for research on this question over the last 30 years, is that cost-sharing substantially reduced health care spending--by 20-30%. Further, this reduction in spending had no effect on the quality of health care services received and no overall effect on health status.

Now, 30 years later, there's finally another study on the effects of health insurance built on a randomized design. The first round of results from the study are reported in "The Oregon Health Insurance Experiment: Evidence from the First Year," co-authored by an all-star lineup of health care economists: Amy Finkelstein, Sarah Taubman, Bill Wright, Mira Bernstein, Jonathan Gruber, Joseph P. Newhouse, Heidi Allen, Katherine Baicker and the Oregon Health Study Group. It appears in the August 2012 issue of the Quarterly Journal of Economics, which is not freely available on-line, although many in academia will have access through library subscriptions.

The story begins when Oregon, in 2008, decided to offer health insurance coverage for low-income adults who would not usually have been eligible for Medicaid. However, Oregon only had the funds to provide this insurance to 10,000 people, so the state decided to choose the 10,000 people by lottery.  The health care economists heard about this plan, and recognized a research opportunity. They began to gather financial and health data about all of those eligible for the lottery, the 90,000 people who entered the lottery, and the 10,000 who were awarded coverage.  Here are some findings:

"About one year after enrollment, we find that those selected by the lottery have substantial and statistically significantly higher health care utilization, lower out-of-pocket medical expenditures and medical debt, and better self-reported health than the control group that was not given the opportunity to apply for Medicaid. Being selected through the lottery is associated with a 25 percentage point increase in the probability of having insurance during our study period. ... [W]e find that insurance coverage is associated with a 2.1 percentage point (30%) increase in the probability of having a
hospital admission, an 8.8 percentage point (15%) increase in the probability of taking any prescription drugs, and a 21 percentage point (35%) increase in the probability of having an outpatient visit. We are unable to reject the null of no change in emergency room utilization, although the confidence intervals do not allow us to rule out substantial effects in either direction.
In addition, insurance is associated with 0.3 standard deviation increase in reported compliance with recommended preventive care such as mammograms and cholesterol monitoring. Insurance also results in decreased exposure to medical liabilities and out-of-pocket medical expenses, including a 6.4 percentage point (25%) decline in the probability of having an unpaid medical bill sent to a collections agency and a 20 percentage point (35%) decline in having any out-of-pocket medical expenditures. ... Finally, we find that insurance is associated with improvements across the board in measures of self-reported physical and mental health, averaging 0.2 standard deviation improvement."



Under the Patient Protection and Affordable Care Act signed into law by President Obama in March 2010, the U.S. is moving toward a health care system in which millions of people who lacked health insurance coverage will now receive it. Drawing implications from the Oregon study for national health care reform should be done with considerable caution. Still, some likely lessons are possible.

1) One sometimes hears optimistic claims that, if people have health insurance, they will get preventive and other care sooner, and so they will avoid more costly episodes of care and we will end up saving money. This outcome is highly unlikely. If many more people have health insurance, they will consume more health care, and overall health care spending will rise.


2) The cost of the Oregon health insurance coverage was about $3,000 per person--adequate for basic health insurance, although less than half of what is spent on health care for the average American in a given year. The health care reform legislation of 2010 is projected to provide health insurance to an additional 28 million people (leaving about 23 million still without health insurance). Even at the fairly modest cost of $3,000 per person, the expansion of coverage itself would cost $84 billion per year.
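The back-of-the-envelope arithmetic behind that $84 billion figure is easy to check; here is a minimal sketch in Python using the numbers quoted above:

```python
# Back-of-the-envelope cost of the coverage expansion, using the figures
# quoted above: $3,000 per person for 28 million newly insured people.
cost_per_person = 3_000
newly_insured = 28_000_000

total_annual_cost = cost_per_person * newly_insured
print(f"${total_annual_cost / 1e9:.0f} billion per year")  # $84 billion per year
```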

3) Although health insurance will improve people's well-being and financial security, the extent to which it improves actual health is not yet clear. As the Finkelstein team reports: "Whether there are also improvements in objective, physical health is more difficult to determine with the data we now have available. More data on physical health, including biometric measures such as blood pressure and blood sugar, will be available from the in-person interviews and health exams that we conducted about six months after the time frame in this article."

Evidence from the Oregon health insurance experiment will be accumulating over the next few years. Stay tuned!

Thursday, August 30, 2012

Are Groups More Rational than Individuals?

"A decision maker in an economics textbook is usually modeled as an individual whose decisions are not influenced by any other people, but of course, human decision-making in the real world is typically embedded in a social environment. Households and firms, common decision-making agents in economic theory, are typically not individuals either, but groups of people—in the case of firms, often interacting and overlapping groups. Similarly, important political or military decisions as well as resolutions on monetary and economic policy are often made by configurations of groups and committees rather than by individuals."

Thus starts an article called "Groups Make Better Self-Interested Decisions," by Gary Charness and Matthias Sutter, which appears in the Summer 2012 issue of my own Journal of Economic Perspectives. (Like all articles appearing in JEP back to 1994, it is freely available on-line courtesy of the American Economic Association.) They explore ways in which individual decision-making is different from group decision making, with almost all of their evidence coming from behavior in economic lab experiments. To me, there were two especially intriguing results: 1) Groups are often more rational and self-interested than individuals; and 2) This behavior doesn't always benefit the participants in groups, because the group can be less good than individuals at setting aside self-interest when cooperation is more appropriate. Let me explore these themes a bit--and for some readers, offer a quick introduction to some economic games that they might not be familiar with.

There has been an ongoing critique of the assumption that individuals act in a rational and self-interested manner, based on the observations that people are often limited in the information that they have, muddled in their ability to process information, myopic in their time horizons, affected by how questions are framed, and many other "behavioral economics" issues. It turns out that in many contexts, groups are often better at avoiding these issues and acting according to pure rationality than are individuals.

As one example, consider the "beauty contest" game. As Charness and Sutter point out: "The name of the beauty-contest game comes from the Keynes (1936) analogy between beauty contests and financial investing in the General Theory: “It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.” Similarly, in a beauty-contest game, the choice requires anticipating what average opinion will be."

The game works like this. A group of players is told that they should choose a number between 0 and 100, and the winner of the game will be the person who chooses a number that is (say) 1/2 of the average of the other choices. In this game, the rational player will reason as follows: "OK, let's say that the other players choose randomly, so the average will be 50, and I should choose 25 to win. But if other players have this first level of insight, they will all choose 25 to win, and I should instead choose 12. But if other players have this second level of insight, then they will choose 12, and I should choose 6. Hmmm. If the other players are rational and self-interested, the equilibrium choice will end up being zero."
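The chain of reasoning in that paragraph can be sketched in a few lines of Python; this assumes the "choose half the average" version of the game described above:

```python
# Iterated reasoning in the beauty-contest game where the winning number is
# 1/2 of the average choice. Each extra "level" of strategic thinking halves
# the previous guess; in the limit, the equilibrium choice is zero.
def reasoning_levels(start=50.0, factor=0.5, depth=20):
    guesses = [start]
    for _ in range(depth):
        guesses.append(guesses[-1] * factor)
    return guesses

levels = reasoning_levels()
print(levels[:4])        # [50.0, 25.0, 12.5, 6.25]
print(levels[-1] < 0.1)  # True -- converging toward the equilibrium of zero
```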

The players in a beauty contest game can be either individuals or groups. It turns out that groups choose lower numbers: that is, as a result of interacting in the group, they tend to be one step ahead.

Here's another example, called the "Linda paradox," in which players get the following question (quoting from Charness and Sutter):

"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable:
(a) Linda is a bank teller.
(b) Linda is a bank teller and is active in the feminist movement."

Notice that Linda is a bank teller in both choices, but only active in the feminist movement in the second choice: that is, the second choice is a subset of the first choice. For that reason, it is impossible for choice b to be more likely than choice a. However, early research on this question found that 85% of individuals answered b. But when the game is played with groups of 2, and with groups of 3, the error rate drops.
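The subset logic can be made concrete with a toy calculation; the counts below are purely hypothetical, chosen only to illustrate the rule:

```python
# The conjunction rule behind the Linda paradox: people who are "a bank teller
# AND active in the feminist movement" are a subset of people who are "a bank
# teller," so the conjunction can never be the more probable option.
# These counts are hypothetical, for illustration only.
population = 1_000
bank_tellers = 50
feminist_bank_tellers = 20   # necessarily a subset of the 50 bank tellers

p_teller = bank_tellers / population
p_teller_and_feminist = feminist_bank_tellers / population
print(p_teller_and_feminist <= p_teller)  # True, for any such counts
```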

Charness and Sutter offer a number of other examples, but the underlying themes are clear. In many settings, a group of people is likely to be better than an individual at framing a question, processing information, and arriving at a rational answer. However, there are a number of settings in which pure self-interest can be self-defeating, and a more cooperative approach is useful. It turns out that individuals are often better than groups at setting aside pure self-interest and perceiving such opportunities.

Here's an example called the "trust game." In this game, the first player starts with a certain sum of money. Player 1 divides the money and passes part of it to Player 2. The experimenter triples the amount passed to Player 2. Player 2 then divides what is received, and passes part of the money back to Player 1. In this kind of game, clearly what's best for both players is if Player 1 gives all of the money to Player 2, thus tripling the entire total, and trusts that Player 2 will return enough to make this worthwhile. However, a strictly self-interested Player 2 may see no reason to send anything at all back to Player 1, and Player 1, perceiving this, will then see no reason to send anything to Player 2. If both players act in a self-interested manner, both can end up worse off.
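A minimal sketch of the payoffs in this game, assuming a $10 endowment and the tripling rule described above (the specific dollar amounts are illustrative):

```python
# Payoffs in the trust game: Player 1 sends `sent` out of `endowment`,
# the experimenter triples it, and Player 2 returns `returned` to Player 1.
def trust_game_payoffs(endowment, sent, returned):
    tripled = 3 * sent
    player1 = endowment - sent + returned
    player2 = tripled - returned
    return player1, player2

# Full trust, with Player 2 splitting the tripled pot evenly: both gain.
print(trust_game_payoffs(10, 10, 15))  # (15, 15)
# Pure self-interest: nothing is sent, and the potential surplus is lost.
print(trust_game_payoffs(10, 0, 0))    # (10, 0)
```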

It turns out that when groups play this game, they send less of the pot from Player 1 to Player 2 than individuals do, and they return less of the pot from Player 2 to Player 1 than individuals do. Thus, groups pursue self-interest in a way that reduces the potential returns from cooperation and trust, as compared with individuals.

Much remains to be done in comparing actions of groups and individuals in a wider variety of contexts. But these results intrigue, because they seem to point toward an economic theory of when group decision-making might be preferable to that of individuals, and vice versa. For example, when looking at a potentially complex problem, where the appropriate decision isn't clear, getting a group of people with diverse backgrounds and information can be quite helpful in leading to a more rational decision. But groups can also become large enough that they don't work well in gathering input from individuals, and become unable to move ahead with decisions.


The results also suggest that economists and social scientists should be cautious in making quick-and-dirty statements about how economic actors either do or don't engage in rational self-interested behavior. For example, it's possible to have a bunch of individuals who can't manage to lose weight or save money when acting on their own, but who find a way to do so when acting as a group and reinforcing each other. A person may act irrationally in some aspects of their personal life, but still be a useful member of a highly rational group in their work environment. On the other side, in situations calling for gains from cooperation, pitting one group against another may be dysfunctional. For example, many negotiations in business and politics follow the model of designating a lead negotiator, and descriptions of such negotiations often suggest that good negotiators form a bond with those on the other side that helps a compromise to emerge.

Tuesday, August 28, 2012

The Drop in Personal Interest Income

Low interest rates are good for borrowers, but lousy for savers. Here's a graph showing personal interest income, which dropped by about $400 billion per year--a fall of more than one-fourth--as interest rates have plummeted.  

FRED Graph


One of my local newspapers, the (Minneapolis) Star Tribune, offered a nice illustrative set of anecdotes in a story last Sunday about the consequences of this change for those who were depending on interest-bearing assets--often people who are near retirement or in retirement, who want to hold safe assets, but who are receiving a much lower return than they might have expected.
 Moreover, as the article points out, it's not just individual savers who are affected. Pension funds, life insurance companies, long-term care insurance companies, and others who keep a substantial proportion of their investments in safe interest-bearing assets are receiving much less in interest than they would have expected, too.


I'm someone who supported pretty much everything the Federal Reserve did through the depths of the recession and financial crisis that started in late 2007: cutting the federal funds rate down to near-zero percent; setting up a number of programs to make short-term liquidity loans to firms in financial markets; and the "quantitative easing" policies that involved printing money to purchase federal debt and mortgage-backed securities. But the recession officially ended back in June 2009, more than three years ago. It's time to start recognizing that ultra-low interest rates pose some painful trade-offs, too. Higher-ups at the Fed were reportedly saying back in 2009 that when the financial crisis was over, they would unwind these steps--but with the sluggishness of the recovery, they haven't done so.

A year ago, for example, I posted on "Can Bernanke Unwind the Fed's Policies?" I posted last month on "BIS on Dangers of Continually Expansionary Monetary Policy," in which the Bank for International Settlements states: "Failing to appreciate the limits of monetary policy can lead to central banks being overburdened, with potentially serious adverse consequences. Prolonged and aggressive monetary accommodation has side effects that may delay the return to a self-sustaining recovery and may create risks for financial and price stability globally."

I lack the confidence to say just when or how the Fed should start backing away from its extremely accommodating monetary policies, but after jamming the monetary policy pedal quite hard for the last five years, it seems time to acknowledge that monetary policy in certain settings--like the aftermath of a financial crisis and an overleveraged economy--is a more limited tool than many of us would have believed back in 2006. Moreover, the U.S. economy has a very recent example from the early 2000s, when the Federal Reserve kept interest rates too low for too long, which helped to trigger the boom in lending and borrowing, much of it housing-related in one way or another, that led to the financial crisis and the Great Recession. The dangers of ultra-low interest rates and quantitative easing may not yet outweigh their benefits, but the potential tradeoffs and dangers shouldn't be minimized or ignored.


Monday, August 27, 2012

Presidential Predictions From Economic Data

The performance of the economy clearly has an influence on presidential elections, and a number of researchers have built models that predict election outcomes from economic data. What are such models saying about the likely outcome of an Obama-Romney election? Here are three examples: from Ray Fair, from Douglas Hibbs, and from the team of Michael Berry and Kenneth Bickers.

For some years now, Ray Fair has been experimenting with different relationships between economic variables and the outcome of national elections. For example, my own Journal of Economic Perspectives published an article he authored on this subject back in the Summer 1996 issue. Just after the results of the November 2010 elections were known, he published "Presidential and Congressional Vote-Share Equations: November 2010 Update." Based on the fit of past historical data, he offers this equation:


VP = 48.39 + .672*G - .654*P + 0.990*Z

VP is the Democratic share of the two-party presidential vote in 2012. G is the growth rate of real per capita GDP in the first 3 quarters of 2012. P is the growth rate of the GDP deflator in the first 15 quarters of the Obama administration. And Z is the number of quarters in the first 15 quarters of the Obama administration in which the growth rate of real per capita GDP is greater than 3.2 percent at an annual rate.

Obviously, the third-quarter 2012 GDP numbers will not be officially announced until after the election--although one can argue that they will be perceived by people in the economy as they happen. Just for the sake of argument, if one plugs in a 1.5% growth rate for real per capita GDP in the first three quarters of 2012, a 1.5% growth rate for the GDP deflator (a measure of inflation), and 2 quarters of real per capita GDP growth above 3.2% (that is, fourth quarter 2009 and fourth quarter 2011), then Fair's equation says that Obama will get 50.4% of the two-party vote--and thus (assuming not too much weirdness in the Electoral College) would narrowly win re-election.
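Plugging those numbers into Fair's equation is a one-line calculation; the values for G, P, and Z below are the illustrative figures from the paragraph above, not official data:

```python
# Fair's vote-share equation: VP = 48.39 + .672*G - .654*P + 0.990*Z
G = 1.5  # assumed growth rate of real per capita GDP, first 3 quarters of 2012
P = 1.5  # assumed growth rate of the GDP deflator, first 15 quarters
Z = 2    # quarters with annualized real per capita growth above 3.2%

VP = 48.39 + 0.672 * G - 0.654 * P + 0.990 * Z
print(f"{VP:.1f}% of the two-party vote")  # 50.4% of the two-party vote
```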


An obvious question about Fair's equation is why it doesn't include a separate variable for the unemployment rate. The answer is that researchers like Fair have experimented with a wide variety of different kinds of economic data. For Fair, the equation given above is the one that predicts best over the elections from 1916 up through 2008. It's worth remembering that presidents have often been re-elected with a fairly high unemployment rate, like Franklin Roosevelt during the Great Depression, or Ronald Reagan when the unemployment rate was 7.4% in October 1984.

However, my guess is that in the context of 2012, Fair's equation will tend to overstate Obama's chances. I think the sustained high rates of unemployment will be more salient in the 2012 election than they may have been in some past elections. Also, Fair's equation gives Obama credit for the low rate of inflation during his term of office--but it's not clear that in 2012, the low rate of overall inflation is getting Obama a lot of political credit. Fair's equation also gives Obama extra credit for the two good quarters of economic growth during his presidency--and I'm not sure that in the broader context of a slow recovery, Obama will get much political credit for those two quarters.

Douglas Hibbs takes a related but different approach to economic variables and presidential election outcomes in his "Bread and Peace" model. He starts off by using the average of per capita real economic growth over the incumbent president's term, and uses that to predict the incumbent's share of the two-party vote. The result, as shown in the best-fit line in the figure below, is that some of the years where the incumbent did much worse than expected were years with a large number of military fatalities, like 1952 and 1968.



Thus, Hibbs added a second explanatory variable, military fatalities, and with the addition of that extra factor most of the points showing actual voting shares fall closer to the best-fit line. As the figure shows, based on economic data through the end of 2012, Obama would receive 45.5% of the two-party vote. The Hibbs equation doesn't give Obama credit for a low inflation rate.

 

Like Fair, Hibbs doesn't include an unemployment variable. But of course, these sorts of disputes over what should be included in drawing connections from economic data to election results just illustrate a broader point: every election has unique characteristics, starting with the actual candidates. In the 1996 election, Bill Clinton did better than would have been expected based on Hibbs's formula. In 2000, Al Gore (who was from the "incumbent" party) did considerably worse than would have been expected. Public perceptions of the candidates, non-economic policies, and the actual events during the campaign clearly matter as well.

Michael Berry and Kenneth Bickers take a different approach in "Forecasting the 2012 Presidential Election With State-Level Economic Indicators." Instead of trying to predict the overall national popular vote, they estimate an equation for each of the 50 states and the District of Columbia. They also use a more complex set of variables. They emphasize economic variables like the change in each state's real per capita income, the national unemployment rate, and the state-level unemployment rate. But they also include variables for the vote received by that party in the previous election, for whether the incumbent is a Democrat or a Republican, and for how many terms in a row the presidency would have been held by a particular party. These variables then interact with the economic variables. A variable for the home state of each presidential candidate is also included.

The strength of the Berry/Bickers approach is that with this extra detail, their equation correctly predicts all of the presidential elections from 1980 to 2008. The weakness is that if you add enough variables and complexity, you can always create an equation that will match the past data, but your results can be quite sensitive to small tweaks in the variables that you use. Thus, it's hard to have confidence that a complex equation will be an accurate predictor. For what it's worth, their equation predicts that Romney will win the 2012 election rather comfortably, with more than 52% of the popular vote. As they point out: "What is striking about our state-level economic indicator forecast is the expectation that Obama will lose almost all of the states currently considered as swing states, including North Carolina, Virginia, New Hampshire, Colorado, Wisconsin, Minnesota, Pennsylvania, Ohio, and Florida."
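That overfitting worry can be illustrated with a toy example. The "election data" below are invented, and exact polynomial interpolation stands in for any model with as many free parameters as data points:

```python
# With enough free parameters, a model can fit past data perfectly and still
# predict badly. Here a degree-7 polynomial (via Lagrange interpolation)
# exactly fits 8 invented "vote share" observations, then extrapolates wildly.
def lagrange_fit(xs, ys):
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

past_elections = [0, 1, 2, 3, 4, 5, 6, 7]         # hypothetical election index
vote_shares = [52, 48, 51, 47, 53, 49, 50, 46]    # hypothetical vote shares (%)
model = lagrange_fit(past_elections, vote_shares)

# Perfect in-sample fit...
print(all(abs(model(x) - y) < 1e-6
          for x, y in zip(past_elections, vote_shares)))  # True
# ...but the out-of-sample "forecast" for the next election is nonsense.
print(model(8))  # far outside any plausible vote share
```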


In terms of narrowly economic influences on the presidential race, there are just two more monthly unemployment reports that will come out before the election in November. There won't be too much new economic data, either. Thus, my suspicion is that the ways in which economic outcomes affect presidential preferences are in large part already being taken into account in the presidential polls, which currently show a tight race.

Friday, August 24, 2012

Full Recovery by 2018, says the CBO

Twice each year, the Congressional Budget Office publishes a "Budget and Economic Outlook" for the next 10 years. The just-released August version offers a timeline for the eventual recovery of the U.S. economy to its pre-recession path of economic growth, and some worries about the budget fights that are scheduled to erupt later this year.

On the subject of eventual full recovery, the CBO regularly estimates the "potential GDP" of the U.S. economy: that is, what the U.S. economy would produce if unemployment were down to a steady-state level of about 5.5% and steady growth were occurring. During recessions, of course, the economy produces below its potential GDP. During booms (like the end of the dot-com period in the early 2000s and the top of the housing price bubble in about 2005-2006), it's possible for an economy to produce more than its potential GDP, but only for a short-term and unsustainable period. Here's the CBO graph comparing actual and potential GDP since 2000. Especially striking is how shallowly the 2001 recession pushed actual GDP below potential, compared with the depth and length of the shortfall in the aftermath of the Great Recession.

This scenario in which the U.S. economy returns to potential GDP in 2018 requires a period in which the U.S. economy shows some rapid growth. Here's a graph showing the growth rate of the U.S. economy vs. that of the average of major U.S. trading partners (weighted by how much trade the U.S. economy does with each of them). "[T]he trading partners are Australia, Brazil, Canada, China, Hong Kong, Japan, Mexico, Singapore, South Korea, Switzerland, Taiwan, the United Kingdom, and countries in the euro zone." Notice that the prediction is for a substantial burst of growth around 2014, and lasting for a couple of years.


However, the eagle-eyed reader will note that these growth predictions from the CBO also show a period of negative growth--that is, a recession--in the near future. Similarly, the earlier figure showing a gap between actual and potential GDP shows that gap widening in the near future, before it starts to close. Why? The CBO is required by law to make baseline projections that reflect the laws that are actually on the books. And currently, "sharp increases in federal taxes and reductions in federal spending that are scheduled under current law to begin in calendar year 2013 are likely to interrupt the recent economic progress. ... The increases in federal taxes and reductions in federal spending, totaling almost $500 billion, that are projected to occur in fiscal year 2013 represent an amount of deficit reduction over the course of a single year that has not occurred (as a share of GDP) since 1969." These changes mean that the baseline CBO projection is for a recession in 2013.

The current changes in taxes and spending can be modified, of course, and along with its baseline estimate, the CBO estimates an alternative scenario for comparison. "Of course, the tax and spending policies that are scheduled to take effect under current law may be delayed or modified. To illustrate how some changes to current law would affect the economy over the next decade, CBO analyzed an alternative fiscal scenario that would maintain many policies that have been in place for several years. Specifically, that scenario incorporates the assumptions that all expiring tax provisions (other than the payroll tax reduction) are extended indefinitely; the alternative minimum tax is indexed for inflation after 2011; Medicare’s payment rates for physicians’ services are held constant at their current level; and the automatic spending reductions required by the Budget Control Act of 2011 (Public Law 112-25) do not occur (although the original caps on discretionary appropriations remain in place) ..."

Other scenarios are possible, of course. But several points seem worth making.

The CBO alternative scenario does not predict a recession in the short-term, and it predicts higher growth in the next few years. But the alternative scenario also means that budget deficits are reduced by less, which slows growth later in the decade. The CBO prediction is that by 2022, the U.S. economy would be larger under the baseline scenario--with a recession in 2013--than it would be under the alternative scenario. This is a standard tradeoff in thinking about budget deficits: in the short run, spending cuts and tax increases are always unattractive, but unless a government figures out a way to reduce outsized deficits, long-run economic growth will be hindered.

The obvious policy conclusion here, it seems to me, is that having all the scheduled changes to taxes and spending hit in 2013 is too much of a burden for an already-struggling U.S. economy, and should be avoided. But it remains tremendously important to get the U.S. budget deficits headed in a downward direction in the medium-term, certainly before the 10-year horizon of these predictions. 

Moreover, it would be useful for the U.S. economy, in the sense of reducing uncertainty, if the political system would stop gaming the budget process. That is, stop the ongoing cycle of having taxes that are scheduled to rise in a few years, or spending that is scheduled to be cut in a few years. Sure, writing down projections that show higher taxes or lower spending in a few years makes the 10-year deficit projections look better, but it also leads to waves of uncertainty cascading through the economy as these policies perpetually hit their expiration dates.

Thursday, August 23, 2012

All JEP Archives Now Available On-line

It seems as if every few weeks, I see an article about a publication that is going digital. Last month, the owner of Newsweek magazine--a mainstay of newsstands and kiosks for all of my life--announced that it would transition to digital-only publication, with a timeline to be announced this fall. Among glossy think-tank publications, the Wilson Quarterly recently announced that the Summer issue will be its last one to be printed on paper. Among more specialized journals, the most recent issue of the Review of Economic Research on Copyright Issues announced that it was turning to pure digital publication, too. 

My own Journal of Economic Perspectives has a different digital landmark. As of a few days ago, all issues of JEP from the current one back to the first issue of Summer 1987 are freely available on-line, courtesy of the journal's publisher, the American Economic Association. (Before this, archives back to 1994 had been available.) For the Spring 2012 issue of the journal, which was the 100th issue since the journal started publication, I wrote an article called "From the Desk of the Managing Editor" which offered some thoughts and reminiscences about running the journal. From that article, here are a few reflections on the shift from paper to electronic media.

"Some days, working on an academic journal feels like being among the last of the telephone switchboard operators or the gas lamplighters. Printing on paper is a 500 year-old technology. When the first issue of JEP was printed in 1987, the print run was nearly 25,000 copies. Now, as readers shift to reading online or on CD-ROM, the print run has fallen to 13,000. The American Economic Association has shifted its membership rules toward a model where all dues-paying members have online access to the AEA journals at zero marginal cost, but need to pay extra for paper copies. Thus, in the next few years, I wouldn’t be surprised to see the JEP print run fall by half again. The smaller print run means substantial up-front cost savings for the AEA: paper and postage used to amount to half the journal’s budget. But for anyone sitting in a managing editor’s chair, the shorter print runs also raise existential questions about your work: in particular, questions about permanence and serendipity.
"Back in 1986, when we were choosing paper stocks for the journal, “permanence” meant acid-free paper that would last 100 years or more on a library shelf. I’m still acculturating myself to the concept that in the web-world, permanence has little to do with paper quality, but instead means a permanent IP address and a server with multiple back-ups. As a twentieth-century guy, pixels seem impermanent to me. I still get a little shock seeing a CD-ROM with back issues of JEP: almost two decades of my work product condensed down to a space about the size of a lettuce leaf.
"But in a world of evanescent interactive social media, there remains a place for publications that are meant to lay down a record—to last. It pleases me enormously that the American Economic Association in 2010 made all issues of JEP freely available online at <http://e-jep.org>. Archives are available back to 1994; the complete journal back to 1987 will eventually become available. The JEP now has a combination of permanency and omnipresence.
"The other concern about the gradual disappearance of paper journals is the issue of serendipity—the possibility of accidentally finding something of interest. In the old days, serendipity often happened when you were standing in the library stacks, looking up a book or paging through a back issue of a journal, and you ran across another intriguing article. The Journal of Economic Perspectives was founded on the brave and nonobvious assumption that busy-bee academic economists are actually interested in cross-pollination—in reaching out beyond their specialties.
"As the JEP makes a gradual transition from paper to pixels, I hope it doesn’t become a disconnected collection of permanent URLs. When you hold a paper copy of an issue in your hands, the barriers to flipping through a few articles are low. When you receive an e-mail with the table of contents for an issue, the barriers to sampling are a little higher. But perhaps my worries here betray a lack of imagination for where technology is headed. Soon enough, I expect many of us will have full issues of our periodicals delivered directly to our e-readers. When these are tied together with the connectivity of weblinks and blogs, the possibilities for serendipity could easily improve. Starting with the Winter 2012 issue, entire issues of JEP can be downloaded in pdf or e-reader formats."

But the big question I don't mention in this article is financial support. There's no big secret about the American Economic Association finances, which are published on its website. The audited financial statement for 2010, for example, shows that the AEA gets revenue from "license fees" paid by libraries and other institutions that use its EconLit index of economics articles, from subscriptions for the seven AEA journals, from member dues, and from fees paid for listing "Job Openings for Economists" in the AEA publication of that name. These sources of revenue don't change much or at all just because the AEA makes one of its journals, the Journal of Economic Perspectives, freely available. But whether and how Newsweek, the Wilson Quarterly, and Review of Economic Research on Copyright Issues can gather funding and attract readers for their digital publications of the future remains to be seen.

Wednesday, August 22, 2012

Return Mail From Nonexistent Addresses: An International Comparison

A large proportion of academic research isn't about trying to resolve a big question once and for all. Instead, it's about putting together one brick of evidence, and when enough bricks accumulate, then it becomes possible to offer evidence-backed judgements on bigger questions.

In that spirit, Alberto Chong, Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei Shleifer offer a study of "Letter Grading Government Efficiency" in an August 2012 working paper for the National Bureau of Economic Research. (NBER working papers are not freely available on-line, but many readers will have access through their academic institutions. Also, earlier copies of the paper are available on the web, like here.) Full disclosure: One of the authors of the paper, Andrei Shleifer, was the editor of my own Journal of Economic Perspectives, and thus my boss, from 2003 to 2008.

The notion is to measure one simple dimension of government efficiency: whether a letter sent to a real city with an actual zip code, but a nonexistent address, is returned to the sender. Thus, the authors sent 10 letters to nonexistent addresses in each of 159 countries: two letters to each of a country's five largest cities. The letters were all sent with a return address to the Tuck School of Business at Dartmouth College with a request to "return to the sender if undeliverable." A one-page letter requesting a response from the recipient was inside.

In theory, all countries belong to an international postal convention requiring them to return letters with an incorrect address, and to do so within about a month. Here's an overview of their results and a table:

"We got 100% of the letters back from 21 out of 159 countries, including from the usual suspects of efficient government such as Canada, Norway, Germany and Japan, but also from Uruguay, Barbados, and Algeria. At the same time, we got 0% of the letters back from 16 countries, most of which are in Africa but also including Tajikistan, Cambodia, and Russia. Overall, we had received 59% of the letters back within a year after they were sent out. Another measure we look at is the percentage of the letters we got back in 90 days. Only 4 countries sent all the letters back in 90 days (United States, El Salvador, Czech Republic, and Luxembourg), while 42 did not manage to deliver any back within 3 months. Overall, only 35% of the letters came back within 3 months. ... In statistical terms, the variation in our measures of postal efficiency is comparable to the variation of per capita incomes across countries."


Not unexpectedly, the data shows that countries with higher per capita GDP or with higher average levels of education typically did better at returning misaddressed mail. The U.S. Postal Service is at the top of the list--but because the letters were mailed in the U.S. and being returned to a U.S. address, it would be quite troublesome if this were not true!

The authors can account for about half of the variation across countries by looking at factors like the types of machines used for reading postal codes, and whether the country uses a Latin alphabet (although the international postal conventions actually require that addresses be written in Latin letters). But more intriguingly, much of the variation across countries in whether they return misaddressed letters seems to be correlated with other measures of the quality of government and management in that country. In that sense, the ability to return misaddressed letters may well be a sort of simple diagnostic tool that suggests something about broader patterns of efficiency in government and the economy.

Tuesday, August 21, 2012

A Systematic List of Hyperinflations

Most discussions of hyperinflation focus on a particular episode: for example, here's my post from last March 5 on "Hyperinflation and the Zimbabwe Example." But Steve H. Hanke and Nicholas Krus have taken the useful step of compiling a list of "World Hyperinflations" in an August 2012 Cato Working Paper. 

Hanke and Krus define an episode of hyperinflation as starting when the rate of inflation exceeds 50% in a month, and ending after a year in which inflation rates do not exceed this level. But the task of compiling a systematic and well-documented list of hyperinflations is tricky. Data on prices can be scarce. While data on consumer prices is preferable, looking at data on wholesale prices or even exchange rates is sometimes necessary. As one example, the Republika Srpska is currently one of the two main parts of Bosnia and Herzegovina, which in turn was formed from the break-up of Yugoslavia. But for a time in the early 1990s, the Republika Srpska had its own currency circulating. Finding monthly price data for this currency is not a simple task! As another example, the city of Danzig carried out its own currency reform in 1923: Danzig was at the time technically a free city, but it was caught up in the German hyperinflation surrounding it. 
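The Hanke-Krus definition lends itself to a mechanical check. Here is a rough sketch of how one might scan a monthly inflation series for episodes under their convention (my own illustration, not code from their paper): an episode begins when monthly inflation first exceeds 50%, and ends only after twelve consecutive months below that threshold.

```python
def hyperinflation_episodes(monthly_inflation):
    """Find hyperinflation episodes in a list of monthly inflation rates
    (e.g. 0.50 means 50% inflation in that month), using the Hanke-Krus
    convention: start when a month exceeds 50%, end after 12 consecutive
    months at or below 50%. Returns (start_index, end_index) pairs, where
    end_index is the last month above the threshold."""
    episodes = []
    start = None
    quiet_months = 0
    for i, rate in enumerate(monthly_inflation):
        if start is None:
            if rate > 0.50:
                start = i
                quiet_months = 0
        elif rate > 0.50:
            quiet_months = 0  # episode continues; reset the quiet-year clock
        else:
            quiet_months += 1
            if quiet_months == 12:
                episodes.append((start, i - 12))
                start = None
    if start is not None:  # episode still ongoing at end of data
        episodes.append((start, len(monthly_inflation) - 1))
    return episodes
```

For example, a series with two hot months followed by a quiet year yields a single episode spanning just those months.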

Their paper offers a much fuller discussion of details and approaches, but here is a taste of their findings: a much-abbreviated version of their main table showing the top 25 hyperinflations, each of which had a single highest monthly inflation rate exceeding 200%.

A few themes jump out at me:

1) The infamous German hyperinflation of 1922-23 is near the top of the list, but ranks only fifth for highest monthly rate of inflation. The dubious honor of record-holder for highest monthly hyperinflation rate apparently belongs to Hungary, which in July 1946 had a hyperinflation rate that was causing prices to double every 15 hours. The Zimbabwe hyperinflation mentioned above is a close second, with a hyperinflation rate in November 2008 causing prices to double every 25 hours.

2) The earliest hyperinflation on the list is France in 1795-1796, and there are no examples of hyperinflation in the 1800s.

3) Many of the hyperinflations on the list occur either in the aftermath of World War II, or in the aftermath of the break-up of the Soviet Union in the early 1990s.

4) Finally, Hanke and Krus state in a footnote that they would now make one addition to the table, which would be the most recent episode of all: the experience of North Korea from December 2009 to January 2011.

 "We are aware of one other case of hyperinflation: North Korea. We reached this conclusion after calculating inflation rates using data from the foreign exchange black market, as well as the price of rice. We estimate that this episode of hyperinflation occurred from December 2009 to mid-January 2011. Using black market exchange rate data, and calculations based on purchasing power parity, we determined that the North Korean hyperinflation peaked in early March 2010, with a monthly rate of 496% (implying a 6.13% daily inflation rate and a price-doubling time of 11.8 days). When we used rice price data, we calculated the peak month to be mid-January 2010, with a monthly rate of 348% (implying a 5.12% daily inflation rate and a price-doubling time of 14.1 days)."
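The arithmetic connecting monthly rates, daily rates, and price-doubling times is just compound growth. A small sketch of the conversions (my own, assuming a 30-day month):

```python
import math

def monthly_to_daily(monthly_rate, days_per_month=30):
    """Equivalent compound daily inflation rate for a given monthly rate."""
    return (1 + monthly_rate) ** (1 / days_per_month) - 1

def doubling_time_days(daily_rate):
    """Days for the price level to double at a constant daily rate."""
    return math.log(2) / math.log(1 + daily_rate)

daily = monthly_to_daily(4.96)              # 496% per month, North Korea's peak
print(round(daily, 4))                      # ~0.0613, i.e. about 6.13% per day
print(round(doubling_time_days(daily), 1))  # ~11.6 days
```

This reproduces the 6.13% daily figure above; the doubling time comes out near 11.6 days rather than the quoted 11.8, a small gap presumably due to rounding or day-count conventions.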


Monday, August 20, 2012

What is a Beveridge Curve and What is it Telling Us?

A Beveridge curve is a graphical relationship between job openings and the unemployment rate that macroeconomists and labor economists have been looking at since the 1950s. But in the last decade or so, it has taken on some new importance. It is used as part of the explanation for search models of unemployment, part of the work for which Peter Diamond, Dale Mortensen and Christopher Pissarides won the Nobel Prize back in 2010. It also has some lessons for how we think about the high and sustained levels of unemployment in the U.S. economy in the last few years. In the most recent issue of my own Journal of Economic Perspectives, Mary C. Daly, Bart Hobijn, Aysegül Sahin, and Robert G. Valletta provide an overview of the analysis and implications in "A Search and Matching Approach to Labor Markets: Did the Natural Rate of Unemployment Rise?" (Like all JEP articles back to 1994, it is freely available on the web courtesy of the American Economic Association.)

Let's start with an actual Beveridge curve. The monthly press release from the Job Openings and Labor Turnover Statistics (JOLTS) data from the U.S. Bureau of Labor Statistics offers a Beveridge curve plotted with real data. Here's the curve from this month's press release:
 
BLS explains:  "This graph plots the job openings rate against the unemployment rate. This graphical representation is known as the Beveridge Curve, named after the British economist William Henry Beveridge (1879-1963). The economy’s position on the downward sloping Beveridge Curve reflects the state of the business cycle. During an expansion, the unemployment rate is low and the job openings rate is high. Conversely, during a contraction, the unemployment rate is high and the job openings rate is low. The position of the curve is determined by the efficiency of the labor market. For example, a greater mismatch between available jobs and the unemployed in terms of skills or location would cause the curve to shift outward, up and toward the right."

Thus, on the graph the U.S. economy slides down the Beveridge curve during the recession from March 2001 to November 2001, shown by the dark blue line on the graph. During the Great Recession from December 2007 to June 2009, the economy slides down the Beveridge curve again. On a given Beveridge curve, recessions move toward the bottom right, and periods of growth move toward the upper left.

However, the right-hand side of the Beveridge curve seems to convey a dispiriting message. Instead of working its way directly back up the Beveridge curve since the end of the recession, the economy seems to be looping out to the right: that is, even though the number of job openings has been rising, the unemployment rate has not been falling as fast as might be expected. Why not?

One possible reason is that this kind of looping counterclockwise pattern in the Beveridge curve is not unusual in the aftermath of recessions. Daly, Hobijn, Sahin, and Valletta provide a graph of Beveridge curve data back to 1951. Notice that the Beveridge curve can shift from decade to decade. Also, if you look at the labels for the 1990s and 1980s, it's clear that the Beveridge curve can have an outward counterclockwise shift for a time. A partial explanation is that at the tail end of a recession, employers are still reluctant to hire: even as they start to post more job openings, actual hiring doesn't go full speed ahead until they are confident that the recovery will be sustained.
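The search-and-matching framework behind the Beveridge curve can be made concrete with a toy steady-state model (my own illustrative sketch, not the authors' calibration, and the parameter values are arbitrary). With a Cobb-Douglas matching function m = A·u^α·v^(1−α) and a job separation rate s, steady state requires s(1−u) = m(u, v), which pins down the vacancy rate v at each unemployment rate u. The curve slopes down, and a fall in matching efficiency A, such as a worse skills or location mismatch, shifts the whole curve outward:

```python
def beveridge_vacancy(u, s=0.03, A=0.8, alpha=0.5):
    """Steady-state vacancy rate v solving s*(1-u) = A * u**alpha * v**(1-alpha).

    u: unemployment rate; s: monthly separation rate; A: matching
    efficiency; alpha: elasticity of matches with respect to unemployment.
    All values here are illustrative, not calibrated."""
    return (s * (1 - u) / (A * u ** alpha)) ** (1 / (1 - alpha))

# Downward slope: higher unemployment pairs with fewer vacancies.
# Lower matching efficiency A shifts the curve out: more vacancies
# coexist with any given unemployment rate.
for u in (0.04, 0.06, 0.08, 0.10):
    print(u, round(beveridge_vacancy(u), 4), round(beveridge_vacancy(u, A=0.6), 4))
```

In this sketch, the outward loop discussed above corresponds to a temporary drop in A: job openings rise, but matches (and hence hires) lag behind.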



My own thoughts about the U.S. unemployment situation developed along these lines: The unemployment rate hovered at 4.4-4.5% from September 2006 through May 2007. In October 2009 it peaked at 10%. By December 2011 it had fallen to 8.5%, but since then, it has remained stuck above 8%--for example, 8.3% in July. How can these patterns be explained?

As a starting point, we should recognize that the 4.4% rate of unemployment back in late 2006 and early 2007 was part of a bubble economy at that time--an unsustainably low unemployment rate being juiced by the bubble in housing prices and the associated financial industry bubble. Estimating the sustainable rate of unemployment for an economy--the so-called "natural rate of unemployment"--is as much art as science. But in retrospect, a reasonable guess might be that the dynamics of the bubble were pushing the unemployment rate down from a natural rate of maybe 5.5%.

When the U.S. unemployment rate hit 10% in October 2009, it was countered with an enormous blast of expansionary fiscal and monetary policy. That is, the economy was stimulated both through huge budget deficits and through near-zero interest rates from the Federal Reserve. The ability of these policies to bring down unemployment quickly was overpromised, but they made a real contribution to stopping the unemployment rate from climbing above 10% and to getting it down to 8.3%.

But the unemployment rate still needs to come down by another 2.5-3 percentage points, and that is where the Beveridge curve arguments enter in. In seeking reasons for the outward shift of the Beveridge curve in the last few years, Daly, Hobijn, Sahin, and Valletta point to three factors: a mismatch between the skills of unemployed workers and the available jobs; extended unemployment insurance benefits that have dulled the incentive to take available jobs; and heightened uncertainty over the future course of the economy and economic policy. These factors together can explain why the unemployment rate has stayed high over the last several years. Fortunately, they are also factors which should ameliorate themselves over time. That's why the January 2012 projections from the Congressional Budget Office for the future of the unemployment rate look like this:



To put it another way, I strongly suspect that whoever is elected president in November 2012 will look like an economic policy genius by early in 2014. It won't be so much because of any policies enacted during that time, but just a matter of the slow economic adjustment of the Beveridge curve.

As an afterthought, I'll add that the Beveridge curve is apparently one more manifestation of an old pattern in academic work: Curves and laws and rules are often named after people who did not actually discover them. This is sometimes called Stigler's law: "No scientific discovery is named after its original discoverer." Of course, Steve Stigler was quick to point out in his 1980 article that he didn't discover his own law, either!

But William Beveridge is a worthy namesake, in the sense that he did write a lot about job openings and unemployment. For example, here's a representative comment from his 1944 report, Full Employment in a Free Society: 



"Full employment does not mean literally no unemployment; that is to say, it does not mean that every man and woman in the country who is fit and free for work is employed productively every day of his or her working life ... Full employment means that unemployment is reduced to short intervals of standing by, with the certainty that very soon one will be wanted in one's old job again or will be wanted in a new job that is within one's powers.”

The U.S. economy since the Great Recession is clearly failing Beveridge's test for full employment, and failing it badly.

Friday, August 17, 2012

The Rise of Residential Segregation by Income

 I've posted in the past about "The Big Decline in Housing Segregation" by race, but it seems likely that another kind of residential segregation is on the rise. In a report for the Pew Research Center's Social & Demographic Trends project, Paul Taylor (no relation) and Richard Fry discuss "The Rise of Residential Segregation by Income."

To measure the extent to which households are segregated by income, the authors take three steps. First, they look at the 30 U.S. cities with the largest number of households. Second, they categorize households by income as lower, middle, or upper income. "For the purpose of this analysis, low-income households are defined as having less than two-thirds of the national median annual income and upper-income households as having more than double the national median annual income. Using these thresholds, it took an annual household income of less than $34,000 in 2010 to be labeled low income and $104,000 or above to be labeled upper income. The Center conducted multiple analyses using different thresholds to define lower- and upper-income households. The basic finding reported here of increased residential segregation by income was consistent regardless of which thresholds were used." Third, they look at where households are living by Census tract: "The nation’s 73,000 census tracts are the best statistical proxy available from the Census Bureau to define neighborhoods. The typical census tract has about 4,200 residents. In a sparsely populated rural area, a tract might cover many square miles; in a densely populated urban area, it might cover just a city block or two. But these are outliers. As a general rule, a census tract conforms to what people typically think of as a neighborhood."

Next, the authors calculate what they call a Residential Income Segregation Index, which comes from "adding together the share of lower-income households living in a majority lower-income tract and the share of upper-income households living in a majority upper-income tract ... (The maximum possible RISI score is 200. In such a metropolitan area, 100% of lower-income and 100% of upper-income households would be situated in a census tract where a majority of households were in their same income bracket.)"  Here are the RISI scores for the 30 cities in 1980 and 2010:


Overall, the national index rose from 32 in 1980 to 46 in 2010. The report does not seek to analyze the differences across cities, which are presumably influenced by a range of local factors. At the regional level, "one finds that the metro areas in the Southwest have the highest average RISI score (57), followed by those in the Northeast (48), Midwest (44), West (38) and Southeast (35). The analysis also shows that the level of residential segregation by income in the big Southwestern metro areas have, on average, increased much more rapidly from 1980 to 2010 than have those in other parts of the country. But all regions have had some increase."
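The index itself is simple to compute. Here is a minimal sketch of the calculation described above, using my own toy data rather than Pew's code or Census tracts:

```python
def risi(tracts):
    """Residential Income Segregation Index.

    Each tract is a list of household labels: "low", "middle", or "high".
    RISI = (% of low-income households in majority-low tracts)
         + (% of high-income households in majority-high tracts),
    so the maximum possible score is 200."""
    total_low = sum(t.count("low") for t in tracts)
    total_high = sum(t.count("high") for t in tracts)
    low_seg = sum(t.count("low") for t in tracts if t.count("low") > len(t) / 2)
    high_seg = sum(t.count("high") for t in tracts if t.count("high") > len(t) / 2)
    return 100 * low_seg / total_low + 100 * high_seg / total_high

# Three toy tracts: one majority-low, one majority-high, one mixed.
tracts = [
    ["low", "low", "low", "middle"],
    ["high", "high", "high", "middle"],
    ["low", "middle", "high", "middle"],
]
print(risi(tracts))  # 150.0: 3 of 4 low and 3 of 4 high households are segregated
```

A fully mixed city, where no tract has a low- or high-income majority, would score 0; complete sorting by income would score 200.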

At a broad level, the main reason for the rise in segregation by income is the rising inequality of incomes in the U.S. The authors write: 
"[T]here has been shrinkage over time in the share of households in the U.S. that have an annual income that falls within 67% to 200% of the national median, which are the boundaries used in this report to define middle-income households. In 1980, 54% of the nation’s households fell within this statistically defined middle; by 2010, just 48% did. The decline in the share of middle-income households is largely accounted for by an increase in the share of upper-income households. ... With fewer households now in the middle income group, it’s not surprising that there are now also more census tracts in which at least half of the households are either upper income or lower income. In 2010, 24% of all census tracts fell into one category or the other—with 18% in the majority lower-income category and 6% in the majority upper-income category. Back in 1980, 15% of all census tracts fell into one category or the other—with 12% majority lower and 3% majority upper. To be sure, even with these increases over time in the shares of tracts that have a high concentration of households at one end of the income scale or the other, the vast majority of tracts in the country—76%—do not fit this profile. Most of America’s neighborhoods are still mostly middle income or mixed income—just not as many as before."
I have no strong prior belief about how much residential segregation by income is desirable, and I have no reason to believe that the extent of residential segregation by income in 1980 was some golden historical ideal to which we should aspire. But in a U.S. economy with rising inequality of incomes, and in which our economic and political future will depend on shared interactions, a rising degree of residential segregation by income does give me a queasy feeling.


Thursday, August 16, 2012

Voter Turnout Since 1964

To hear the candidates and the media tell it, every presidential election year has greater historical importance and is more frenzied and intense than any previous election. But the share of U.S. adults who actually vote has mostly been trending down over time. Here are some basic facts from a chartbook put together by the Stanford Institute for Economic Policy Research.

The youngest group of voters, ages 18-24, has seen a rise in turnout recently, especially from 2000 to 2004, and there is a more modest rise in turnout for some other age groups. But all elections since 1988 have had lower turnout than that year; in turn, 1988 had lower turnout than the presidential elections from 1964-1972.

I see the chart as a reminder of a basic truth: Elections aren't decided by what people say to pollsters. They are determined by who actually casts a vote.


Wednesday, August 15, 2012

What is the Tragedy of the Commons?

A couple of weeks ago, I posted on how "The Economics of Antibiotics Resistance" could be viewed as an example of the "tragedy of the commons." I got a few notes suggesting that I explain the term more fully. Here's the explanation from my own Principles of Economics textbook. (Of course, if you are teaching a college-level intro economics class, I would encourage you to take a look at it at the website of the publisher, Textbook Media. Along with many expository virtues, my book is priced far below the $200 price of many leading textbooks, at $40 for a combination of a soft-cover paper copy and on-line access. On-line access alone, or micro and macro splits, are priced even lower.) From Chapter 15:

"The historical meaning of a commons is a piece of pasture land that is open to anyone who wishes to graze their cattle upon it. More recently, the term has come to apply to any area that is open to all, like a city park. In a famous 1968 article, a professor of ecology named Garrett Hardin (1915-2003) described a scenario called the tragedy of the commons, in which the utility-maximizing behavior of individuals ruins the commons for all."
 "Hardin imagined a pasture that is open to many herdsmen, each with their own herd of cattle. A herdsman benefits from adding cows, but too many cows will lead to overgrazing and even to ruining the commons. The problem is that when a herdsman adds a cow, the herdsman personally receives all of the gain, but when that cow contributes to overgrazing and injures the commons, the loss is suffered by all of the herdsmen as a group—so any individual herdsman suffers only a small fraction of the loss. Hardin wrote: `Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.'"

"This tragedy of the commons can arise in any situation where benefits are primarily received by one party, while the costs are spread out over many parties. For example, clean air can be regarded as a commons, where firms that pollute air can gain higher profits, but firms that pay for anti-pollution equipment provide a benefit to others. A commons can be regarded as a public good, where it is difficult to exclude anyone from use (nonexcludability) and where many parties can use the resource simultaneously (nonrivalry)."

"The historical commons was often protected, at least for a time, by social rules that limited how many cattle a herdsman could graze. Avoiding a tragedy of the commons with the environment will require its own set of rules which limit how the common resource can be used."
Hardin's original 1968 article is widely available on the web--for example, here.
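The incentive logic in Hardin's parable can be put in numbers. In this toy example (my own illustration, not from Hardin or the textbook), the grazing value of each cow falls as the pasture gets more crowded, say 10 − n when n cows graze in total. Past some herd size, adding a cow still raises the adder's private payoff even though it lowers the total value produced by the commons:

```python
def value_per_cow(n):
    """Grazing value of each cow when n cows share the pasture (toy numbers)."""
    return max(0, 10 - n)

def total_value(n):
    """Total value produced by the commons with n cows grazing."""
    return n * value_per_cow(n)

def social_marginal(n):
    """Change in the pasture's total value from adding the (n+1)-th cow."""
    return total_value(n + 1) - total_value(n)

def private_marginal(n, k):
    """Payoff change for a herdsman who owns k of the n cows and adds one more:
    he gains the new cow's value but bears crowding losses only on his own herd."""
    return value_per_cow(n + 1) * (k + 1) - value_per_cow(n) * k

# With 6 cows already grazing, a herdsman owning just 1 of them still
# gains from adding another, even though the commons as a whole loses.
print(private_marginal(6, 1))  # 2  (his private gain)
print(social_marginal(6))      # -3 (the combined loss to everyone)
```

Because each herdsman owns only a small share of the herd, the private calculation keeps saying "add a cow" well past the herd size that maximizes the pasture's total value, which is exactly Hardin's point.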

A few years ago in 2008, Ian Angus wrote a provocative essay in Monthly Review called “The Myth of the Tragedy of the Commons.” The tragedy of the commons is often considered a politically liberal insight, because it offers a potential justification for government regulation of shared resources. But Angus attacks the thesis from further to the political left, arguing that Hardin's thesis is evidence-free, that it ignores the reality of community self-regulation, and that it amounts to blaming the poor for their poverty. Here are a few words from Angus's trenchant essay:


“Since its publication in Science in December 1968, ‘The Tragedy of the Commons’ [by Garrett Hardin] has been anthologized in at least 111 books, making it one of the most-reprinted articles ever to appear in any scientific journal. . . . For 40 years it has been, in the words of a World Bank Discussion Paper, ‘the dominant paradigm within which social scientists assess natural resource issues’. . . It’s shocking to realize that he provided no evidence at all to support his sweeping conclusions. He claimed that the ‘tragedy’ was inevitable—but he didn’t show that it had happened even once. Hardin simply ignored what actually happens in a real commons: self-regulation by the communities involved. . . . The success of Hardin’s argument reflects its usefulness as a pseudo-scientific explanation of global poverty and inequality, an explanation that doesn’t question the dominant social and political order. It confirms the prejudices of those in power: logical and factual errors are nothing compared to the very attractive (to the rich) claim that the poor are responsible for their own poverty. The fact that Hardin’s argument also blames the poor for ecological destruction is a bonus.”

I enjoyed Angus's counterattack, but in the end, it seemed to me overwrought. The logic behind the tragedy of the commons is solid enough that it is often a useful starting point for thinking about shared resource issues. I'm fairly confident that Hardin didn't see himself as blaming the poor for their own poverty and for ecological destruction! However, it's important to emphasize that just because a tragedy of the commons is possible doesn't make it inevitable. And further, as the penultimate sentence from my short textbook description mentions, social rules and community self-regulation have often been able to manage the commons for a considerable period of time.