Monday, March 31, 2014

Does Economics Make You a Bad Person?

Economic reasoning often begins with a presumption that people act in a self-interested manner. And so there has been a string of small-scale research over time about whether studying economics makes people more likely to act selfishly. Adam Grant offered a nice short overview of this research (with links to the underlying papers!) in Psychology Today last October, in "Does Studying Economics Breed Greed?"

The especially tough question in this research is how to distinguish between several possibilities: 1) Maybe studying economics changes the extent to which people act selfishly; 2) Maybe studying economics makes people more willing to admit that they might act selfishly, or makes them act more selfishly in small-scale classroom game interactions, but doesn't actually change their real-world behavior; or 3) Maybe people who are more likely to act in their own self-interest, or more likely to admit that they act in their own self-interest, are drawn to economics in the first place.

I won't try to summarize the literature here, especially because Grant already offers such a nice overview, but here are a few samples. A 2012 study by Andrew L. Molinsky, Adam M. Grant, and Joshua D. Margolis in the journal Organizational Behavior and Human Decision Processes (v. 119, pp. 27-37) considers "The bedside manner of homo economicus: How and why priming an economic schema reduces compassion." Here's how Grant describes the study:
"In one experiment, Andy Molinsky, Joshua Margolis, and I recruited presidents, CEOs, partners, VPs, directors, and managers who supervised an average of 140 employees. We randomly assigned them to unscramble 30 sentences, with either neutral phrases like [green tree was a] or economic words like [continues economy growing our]. Then, the executives wrote letters conveying bad news to an employee who was transferred to an undesirable city and disciplining a highly competent employee for being late to meetings because she lacked a car. Independent coders rated their letters for compassion.
Executives who unscrambled sentences with economic words expressed significantly less compassion. There were two factors at play: empathy and unprofessionalism. After thinking about economics, executives felt less empathy—and even when they did empathize, they worried that expressing concern and offering help would be inappropriate."
Of course, a skeptic might object that even if the "priming" with economics terms changes the amount of compassion expressed in a "bad news" letter, the executives would still deliver the bad news when they felt it was necessary.

In a 2011 study published in Academy of Management Learning & Education, Long Wang, Deepak Malhotra, and J. Keith Murnighan carry out various experiments on the topic of "Economics Education and Greed" (10 (4): 643–660). For example, several of their experiments are online surveys in which people revealed, in various ways, their attitudes about greed. Those who majored in economics, or who read a snippet of economics before answering, were less likely to express attitudes that strongly condemned greed. Of course, a skeptic might point out that such surveys show what people say, but not necessarily what they feel or how they act.

Some of the early studies about how the study of economics might affect its students were published in the Journal of Economic Perspectives (where I've worked as the Managing Editor since the start of the journal in 1987). For example, in the Spring 1993 issue, Robert H. Frank, Thomas Gilovich, and Dennis T. Regan ask "Does Studying Economics Inhibit Cooperation?" They present a variety of evidence that raises cause for concern. For example, they present data that economists in academia are more likely to give zero to charity than others. They report the results of surveys in which students are asked what would happen if they were working for a company that paid for nine computers, but accidentally received ten. Do they expect the error would be reported back to the seller, and would they personally report the error? Those who have taken an economics class are less likely to say that the error would be reported, or that they would report it, than students in an astronomy class.

Again, a skeptic might point out that even if economists are less likely to give to charity, this pattern may have been established well before they entered economics. And maybe the economics class is just causing students to be more realistic about whether errors would be reported and more honest in reporting what they would really do, compared with those in other classes. 

In a study in the Winter 1996 issue of JEP that Grant doesn't mention in his brief Psychology Today overview, Anthony M. Yezer, Robert S. Goldfarb, and Paul J. Poppen discussed "Does Studying Economics Discourage Cooperation? Watch What We Do, Not What We Say or How We Play." They carried out a "dropped letter" experiment, in which a letter was left behind in various classrooms around the George Washington University campus, some economics classrooms and some not. The letter was addressed and stamped, but unsealed and with no return address. Inside there was $10 in cash, and a brief note saying that the money was being sent to repay a loan. In their experiment, over half of the letters that were dropped in economics classrooms were mailed in with the cash, compared with less than a third of the letters dropped in noneconomics classrooms. They also argue that substantial parts of the economics curriculum are about mutual benefits from trade, both between individuals and across national borders, and in that sense economics may encourage students to see economic interactions as friendly to decentralized cooperation, rather than as just an arena for clashes of unfettered selfishness.

In a similar spirit, I sometimes argue that many students enter an economics class seeing the world and the economy as fundamentally a zero-sum game, where anyone who benefits must do so at the expense of someone else. Learning about how division of labor and voluntary exchange at least have the possibility of being a positive-sum, win-win game makes them more likely to consider the possibility of benefiting both themselves and others. Maybe there are some unreconstructed business schools which do teach that "greed is good." But all the economics curricula with which I'm familiar are painstaking about explaining the situations and conditions in which the interactions of self-interested parties are likely to lead to positive outcomes like free choice and efficiency, and also the situations and conditions in which they can lead to pollution, unemployment, inequality, and poverty.

My own sense is that everyone plays many roles and wears many hats. A surgeon who cuts into people all day would never dream of getting into a knife fight on the way home. An athlete who competes ferociously all week goes to church and volunteers to help handicapped children during off hours. A parent fights grimly over the appropriate annual marketing plan at the office, and then goes home and hugs their children and takes dinner over to the neighbors who just had a baby. Some economic behaviors certainly shouldn't be generalized to the rest of life, and it's possible to set up situations with questionnaires and little classroom experiments where the boundaries can become blurred. But there's not much reason to believe that Darwinian biologists practice survival of the fittest in their spare time, nor that sociologists give up on personal responsibility because everything is society's fault, nor that lawyers go home and argue over fine print with their families.

It's worth remembering that Adam Smith, the intellectual godfather of economics, reflected on selfishness and economics at the start of his first great work, The Theory of Moral Sentiments, published in 1759. The opening words of the book are: "How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it." And later in the same chapter: "And hence it is, that to feel much for others and little for ourselves, that to restrain our selfish, and to indulge our benevolent affections, constitutes the perfection of human nature; and can alone produce among mankind that harmony of sentiments and passions in which consists their whole grace and propriety." Smith saw no contradiction in thinking about people as containing both selfishness and "benevolent affections," and most people, even economists, seek a comfortable balance between the two depending on the situation and context.










Friday, March 28, 2014

Warren Buffett on Index Funds for the Non-Professional Investor

Each year the legendary investor Warren Buffett writes a letter to the shareholders of Berkshire Hathaway. If you want a quick overview of the thinking behind how Buffett has achieved returns averaging 19.7% annually from 1965 to 2013, while the S&P 500 (with reinvested dividends) has clocked in at 9.8% annually over this time, this is a good starting point. Oddly enough, this year's letter to shareholders includes a section on "Some Thoughts about Investing" in which Buffett recommends that ordinary stock market investors look into the merits of no-load mutual funds, like those run by Vanguard. Indeed, in his will, Buffett instructs that the trustee for the money he is leaving to his wife invest those funds in a no-load index fund. Buffett writes:

"When Charlie and I buy stocks – which we think of as small portions of businesses – our analysis is very similar to that which we use in buying entire businesses. We first have to decide whether we can sensibly estimate an earnings range for five years out, or more. If the answer is yes, we will buy the stock (or business) if it sells at a reasonable price in relation to the bottom boundary of our estimate. If, however, we lack the ability to estimate future earnings – which is usually the case – we simply move on to other prospects. In the 54 years we have worked together, we have never foregone an attractive purchase because of the macro or political environment, or the views of other people. In fact, these subjects never come up when we make decisions. It’s vital, however, that we recognize the perimeter of our “circle of competence” and stay well inside of it. ...
Most investors, of course, have not made the study of business prospects a priority in their lives. If wise, they will conclude that they do not know enough about specific businesses to predict their future earning power. 
I have good news for these non-professionals: The typical investor doesn’t need this skill. In aggregate, American business has done wonderfully over time and will continue to do so (though, most assuredly, in unpredictable fits and starts). In the 20th Century, the Dow Jones Industrials index advanced from 66 to 11,497, paying a rising stream of dividends to boot. The 21st Century will witness further gains, almost certain to be substantial. The goal of the non-professional should not be to pick winners – neither he nor his “helpers” can do that – but should rather be to own a cross-section of businesses that in aggregate are bound to do well. A low-cost S&P 500 index fund will achieve this goal.
That’s the “what” of investing for the non-professional. The “when” is also important. The main danger is that the timid or beginning investor will enter the market at a time of extreme exuberance and then become disillusioned when paper losses occur. (Remember the late Barton Biggs’ observation: “A bull market is like sex. It feels best just before it ends.”) The antidote to that kind of mistiming is for an investor to accumulate shares over a long period and never to sell when the news is bad and stocks are well off their highs. Following those rules, the “know-nothing” investor who both diversifies and keeps his costs minimal is virtually certain to get satisfactory results. Indeed, the unsophisticated investor who is realistic about his shortcomings is likely to obtain better long-term results than the knowledgeable professional who is blind to even a single weakness.
If “investors” frenetically bought and sold farmland to each other, neither the yields nor prices of their crops would be increased. The only consequence of such behavior would be decreases in the overall earnings realized by the farm-owning population because of the substantial costs it would incur as it sought advice and switched properties. Nevertheless, both individuals and institutions will constantly be urged to be active by those who profit from giving advice or effecting transactions. The resulting frictional costs can be huge and, for investors in aggregate, devoid of benefit. So ignore the chatter, keep your costs minimal, and invest in stocks as you would in a farm.
My money, I should add, is where my mouth is: What I advise here is essentially identical to certain instructions I’ve laid out in my will. One bequest provides that cash will be delivered to a trustee for my wife’s benefit. (I have to use cash for individual bequests, because all of my Berkshire shares will be fully distributed to certain philanthropic organizations over the ten years following the closing of my estate.) My advice to the trustee could not be more simple: Put 10% of the cash in short-term government bonds and 90% in a very low-cost S&P 500 index fund. (I suggest Vanguard’s.) I believe the trust’s long-term results from this policy will be superior to those attained by most investors – whether pension funds, institutions or individuals – who employ high-fee managers."
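Just to underline what that gap in average annual returns means, here is a minimal back-of-the-envelope sketch in Python (my own illustrative arithmetic based on the 19.7% and 9.8% averages quoted above, not Berkshire's official figures):

```python
# Rough comparison of how 19.7% vs. 9.8% annual returns compound, 1965-2013.
# Illustrative only: it treats each average as if it were earned every single year.
years = 49  # 1965 through 2013

berkshire_multiple = 1.197 ** years   # Buffett's average annual return
sp500_multiple = 1.098 ** years       # S&P 500 with dividends reinvested

print(f"$1 compounding at 19.7% for {years} years: about ${berkshire_multiple:,.0f}")
print(f"$1 compounding at 9.8% for {years} years: about ${sp500_multiple:,.0f}")
# Roughly $6,700 versus roughly $100 -- the power of compounding a higher return.
```

A gap of ten percentage points a year sounds large on its own; compounded over nearly five decades, it is the difference between roughly a hundred-fold gain and a several-thousand-fold gain.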

Here's a discussion on this blog about "Behavioral Investors and the Dumb Money Effect," concerning the folly of individual investors trying to time market swings. And here's a discussion drawing on the work of Burton Malkiel about how difficult it is for the average investor to beat the market, a mathematical fact which remains true even when the average investor is an active financial manager being paid fees by customers.


Thursday, March 27, 2014

An Inequality Chartbook: Long-Run Patterns in 25 Countries

Tony Atkinson and Salvatore Morelli have combined to produce an intriguing "Chartbook of Economic Inequality." The main feature of the chartbook is a set of figures showing long-run trends in inequality as measured by a variety of statistics for 25 different countries, with all the statistics appearing on a single chart for each country. The charts appear in two forms: there's a colorful online version, and then a black-and-white version that can be printed out from a PDF file.

The choice of the 25 countries is partly determined by the availability of long-run (meaning a good chunk of the 20th century) data. Along with the United States, the other countries are Argentina, Brazil, Australia, Canada, Finland, France, Germany, Iceland, India, Indonesia, Italy, Japan, Malaysia, Mauritius, Netherlands, New Zealand, Norway, Portugal, Singapore, South Africa, Spain, Sweden, Switzerland, and the United Kingdom. The figure for each country is also followed by a few bullet points that highlight some main trends. Detailed sources for the data are also provided.

Any single economic statistic is a one-dimensional slice of reality. When you put a lot of economic statistics together, and let your eye move from one to another, you develop a better overall perspective. As one example, here's the chart for the United States. But if you have an interest in these topics, I encourage you to surf the countries of the Chartbook.




Among the key takeaways from this figure: U.S. inequality follows a U-shaped pattern, with a number of measures of inequality falling in the 1930s and 1940s, and then rising since the 1970s. For example, "the top decile of earnings has risen from 150 per cent of median in 1950 to 244 per cent in 2012." The figure also suggests some puzzles. For example, the share of total wealth held by the top 1%, based on estate data, doesn't seem to have risen in the last few decades along with inequality of incomes. The dispersion of earnings as measured by the top decile starts rising in the 1950s, but the overall inequality of earnings doesn't seem to start rising until the 1970s--presumably because during the 1950s and 1960s, there was declining inequality at the bottom of the income distribution, as seen in the falling poverty rate, to offset rising dispersion of incomes at the top.



Hat tip: I was alerted to the "Chartbook" by a post from Larry Willmore at his "Thought du Jour" blog.



Wednesday, March 26, 2014

Who Sings the Homework Blues?

I have three children between grades 6 and 10, and I view homework as a plague upon the land. Family time is tight. The hours between dinner and bedtime are few. Weekends are often full of activities. It can feel as if homework overshadows and steals any available down-time and flexibility. That said, I have to admit that the evidence about actual amounts of homework tends to suggest that it's not a problem for most families or students. Instead, my suspicion is that too much homework is a problem for one group and too little homework may be the problem for another group. There's a nice summary of the evidence on "Homework in America" by Tom Loveless in Chapter 2 of the 2014 Brown Center Report on American Education: How Well Are American Students Learning?

As Loveless points out, controversies over "too much homework" have been going on for a long time.

"In 1900, Edward Bok, editor of the Ladies Home Journal, published an impassioned article, “A National Crime at the Feet of Parents,” accusing homework of destroying American youth. . . .Bok argued that study at home interfered with children’s natural inclination towards play and free movement, threatened children’s physical and mental health, and usurped the right of parents to decide activities in the home. The Journal was an influential magazine, especially with parents. An anti-homework campaign burst forth that grew into a national crusade. School districts across the land passed restrictions on homework, culminating in a 1901 statewide prohibition of homework in California for any student under the age of 15 . . ."

Among more recent examples in the last 10-15 years, Loveless points to cover stories on the "too much homework" theme in TIME, Newsweek, and People, the documentary movie Race to Nowhere, and a September 2013 story in the Atlantic, "My Daughter's Homework is Killing Me." But of course, it's an old saying among social scientists that the plural of anecdote is not data. The systematic data suggests that the homework burden is not a widespread or growing problem.

For example, one of the standard national surveys is the National Assessment of Educational Progress (NAEP). Here are their results from surveying students about how much homework they had the night before. As you can see, there is a small share of students who report more than 2 hours of homework per night, but that share doesn't seem to be growing over time. And among 17 year-olds, 27% had no homework assigned the night before the survey.

Of course, one might wonder about just how this survey was carried out, but Loveless runs through a number of other surveys of parents and students that reach similar results. As one more example, here's a survey of college freshmen--thus only a sample of those who went on to college--about how much homework they had done as high school seniors. About two-thirds report having done less than 6 hours per week, which of course averages less than an hour per day.

So how do the homework-haters, like me, chew and swallow this evidence?

The evidence is very clear that most students don't have too much homework. But it certainly leaves open the possibility that some high school and even middle-school students, perhaps 10-15%, average more than 2 hours of homework each night. My guess is that a lot of these students are also active in school activities: clubs, sports, plays. By the time you add their regular school day, they are in effect committed to one activity or another for about 12 hours out of every day, which doesn't leave a lot of time for eating, commuting, sleeping, and anything else.

For example, an article by Mollie Galloway, Jerusha Conner, and Denise Pope late last year in the Journal of Experimental Education looks at "Nonacademic Effects of Homework in Privileged, High-Performing High Schools" (81:4, pp. 490-510, 2013). (I don't think the article is freely available on-line, but readers may have access through a library subscription.) The survey looked at 4,317 students in 10 high-performing high schools in California. These students averaged more than three hours of homework per night. Here's a sampling of comments from these students:

  • "There’s never a time to rest. There’s always something more you should be doing. If I go to bed before 1:30 I feel like I’m slacking off, or just screwing myself over for an even later night later in the week . . . There’s never a break. Never."
  • “[I have] way too much homework! I cannot focus on sports and family if I have 4 hours of homework like I normally do.”
  • “[I have an] overload of activities in a day. School till 3, football till 6:30, Advanced Jazz Band till 8:30, Homework till 12 (or later) then all over again.”

It is hard to imagine that such intensive schedules could be sustainable for very long.

In some cases, it feels to me as if parents want the school to push their children to achieve, and schools are responding with a blizzard of homework, while students are caught in the middle. The obvious answer here is for parents to push back, hard, at the individual schools where this happens.

But at the other end of the academic achievement scale, America's difficulties with low graduation rates in many high schools, and the need to raise academic achievement across the board for social mobility, active citizenship, and economic growth, suggest that many students are not getting sufficient homework--which is to say that many are getting none at all, or none that they ever do.

Finally, I'll add that the way in which a school day interacts with the modern family is part of the issue here. Fewer households with children have a stay-at-home parent waiting with milk-and-cookies at 2:30 or 3. Children get signed up for afternoon activities in part to provide coverage through the work-day of the parents. By the time the parents and children get home, everyone has had a long day already. At that point in the day, starting at 7 or 8, even a moderate amount of homework feels like a heavy load, and 2 hours or more can feel crushing.

Personally, I'd love to see schools experiment with a schedule where most of the children would be there from 8-5. (Parents who want to pick their children up at 2:30 or 3 could do so.) But when the child came home at 5, they would typically be done with homework for the day. Optional activities like sports or music could still be in the evenings, as desired. Or I'd like to see schools experiment with a rule that all homework is due Wednesday, Thursday, and Friday, but under no circumstances ever due on Monday or Tuesday, thus trying to save the weekend. I know family life will always be busy, and I wouldn't have it any other way. But the ebb and flow of even moderate levels of homework across multiple students isn't just a student burden. It takes a toll on families, too.

Tuesday, March 25, 2014

Should Federal IT Spending be Flat?

In its latest proposed budget, the Obama administration praises itself for limiting the rise in information technology spending by the federal government. This seems peculiar. After all, a number of stories in the news suggest that some additional federal spending on information technology might be needed, like the botched and halting roll-out of the health care exchanges under the Patient Protection and Affordable Care Act of 2010, or the rising concerns over cybersecurity, or difficulties for the IRS in handling information and payments. The Washington Post just published "Sinkhole of Bureaucracy," by David Fahrenthold, which tells the eyebrow-raising story of how 600 employees of the federal Office of Personnel Management work in an office that is set in an old Pennsylvania limestone mine outside of Pittsburgh, where they file retirement papers of government workers entirely by hand, 230 feet underground. The federal government has been making plans to computerize the operation since the late 1980s, but has not succeeded after a quarter-century of trying.

More broadly, the federal government is at its heart an enormous information-generating and information-processing organization. Over the last few decades, information technology has progressed by leaps and bounds. It would seem plausible to me that the federal government should be a major consumer of this technology: after all, the private sector has used this technology to provide a wide range of services to consumers while also replacing large numbers of middle managers.

The discussion of federal IT spending appears in Chapter 19 of the Analytical Perspectives volume, released each year as a supplement to the budget proposals from the president. Here's Figure 19-1, in which the Obama budget folks choose to emphasize that IT spending rose by 7% annually during the Bush administration, but by less than 1% per year during the Obama administration.

Some of the discussion in this chapter makes perfectly sensible points. Yes, you don't want every agency in the federal government running around with its own proprietary software packages that won't link up to others. Yes, it's useful to make some decisions about shared tools and platforms. Yes, it's always good to be on the lookout for the "duplicative" and the "underperforming" projects.

But in the middle of an IT revolution, it's easy for me to believe that rising levels of IT spending make economic sense.  Think about the 600 federal workers filing by hand in a mine in Pennsylvania for the last few decades. Or consider some of the rhetorical flourishes from this chapter of the budget, and ask yourself: "Is this consistent with a federal IT budget that, after adjusting for inflation, is falling in real dollars?" As the budget chapter says:

"The interconnectedness of our digital world dictates that the Government buy, build and manage IT in a new way. Rapidly adopting innovative technologies, improving the efficiency and effectiveness of the Federal workforce through technology, and fostering a more participatory and citizen-centric Government are critical to providing the services that citizens expect from a 21st Century Government. ..."
"The President has identified the Cybersecurity threat as one of the most serious national security, public safety, and economic challenges we face as a nation. ..."
"On May 23, 2012, the President issued a directive entitled “Building a 21st Century Digital Government.” It launched a comprehensive Digital Government Strategy aimed at delivering better digital services to the American people. The strategy has three main objectives: (1) enabling the American people and an increasingly mobile workforce to access high-quality, digital Government information and services anywhere, anytime, on any device; (2) ensuring that as the Government adjusts to this new digital world, we seize the opportunity to procure and manage devices, applications, and data in smart, secure and affordable ways; and (3) unlocking the power of Government data to spur
innovation across our Nation and improve the quality of services for Federal employees and the American people."
I suppose one might argue that the capabilities of information technology are rising so quickly, and prices for IT are falling so rapidly, that the federal government will be able to achieve these kinds of IT goals even with no increase in spending. I'm writing myself a little mental note here: Whenever the federal government runs into IT-related issues and problems, I'm going to wonder if the proud determination of the Obama administration to hold down IT spending was such a smart decision.

Monday, March 24, 2014

Long-Run Unemployment Arrives in the U.S. Economy

One lasting consequence of the Great Recession has been that the problem of long-run unemployment has now arrived in the U.S. economy. Alan B. Krueger, Judd Cramer, and David Cho present the evidence and some striking analysis in "Are the Long-Term Unemployed on the Margins of the Labor Market?" written for the just-completed Spring 2014 conference of the Brookings Panel on Economic Activity. To get a sense of the issue, here are a couple of striking figures from Krueger, Cramer, and Cho.

Split the unemployment rate into three groups: those unemployed for 14 weeks or less, those unemployed for 15-26 weeks, and those unemployed for more than 26 weeks. What do the patterns look like, both over time and more recently?

A few patterns jump out here:

1) In the last 65 years, the short-term unemployment rate, 14 weeks or less, has been higher than the medium-term or long-term unemployment rate. But for a time just after the Great Recession, the long-term unemployment rate spiked so severely that it exceeded the short-term rate.

2) In the last 65 years, the medium term unemployment rate for those without jobs from 15-26 weeks moved in quite a similar way and at a similar level to the longer-term unemployment rate for those without jobs for more than 26 weeks. But after the Great Recession, the long-term unemployment rate spiked far out of line with the medium-term unemployment rate.

3) Moreover, notice that right after the Great Recession, the long-term unemployment rate was spiking at a time when the short-term and medium-term unemployment rates had already peaked and had started to decline.

4) The short-term unemployment rate is now below the pre-recession average for the years 2001-2007. The medium term unemployment rate is almost back to its pre-recession average. The long-term unemployment rate, although it has declined in recent months, is still near its highest level for the period from 1948-2007.

This outcome is troubling for many reasons. Krueger, Cramer and Cho present evidence that the most recent wave of the long-run unemployed have become detached from the labor market: they have a much-reduced chance of finding a job, they exert little pressure on wage growth, and if they do find a job they are likely to become unemployed again soon. A summary of the paper notes:

"The short-term unemployment rate is a much stronger predictor of inflation and real wage growth than the overall unemployment rate in the US. Even in good times, the long-term unemployed are on the margins of the labor market, with diminished job prospects and high labor force withdrawal rates, and as a result they exert little pressure on wage growth or inflation. Even after finding another job, reemployment does not fully reset the clock for the long-term unemployed, who are frequently jobless again soon after they gain reemployment: only 11 percent of those who were long-term unemployed in a given month returned to steady, full-time employment a year later. The long-term unemployed are spread throughout all corners of the economy, with a majority previously employed in sales and service jobs (36 percent) and blue collar jobs (28 percent), they find."
For me, one of the most troubling of the graphs looks at long-term unemployment rates across other high-income countries. The figure shows what share of the total unemployed in a country qualify as long-term unemployed--that is, what share of the unemployed have been out of work for more than six months.


Historically, the U.S. and Canadian economies have been places where the long-run unemployed were maybe 10-20% of total unemployment. Meanwhile, in countries like Italy, Germany and France, the long-run unemployed were often 60-70%, or more, of total unemployment. What it means to be "unemployed" is pretty different, depending on whether the experience is usually fairly short or usually fairly long. In the U.S., the share of the unemployed who are long-run unemployed hasn't yet reached some of the levels common in those other economies. But the experience of those other countries points out that when the share of the unemployed who are long-run unemployed is very high, that situation can persist for decades.

I'm not sure exactly what policies will work best for bringing the long-term unemployed back into the labor force, though Sweden and Canada, to pick two examples from the figure, have apparently had some success in doing so. But it's reasonable to worry that past U.S. approaches to addressing unemployment are not well-suited to the long-run unemployment that emerged after the Great Recession.

Friday, March 21, 2014

Will the U.S. Dollar Remain the World's Reserve Currency?

In the question-and-answer period after I give talks about the U.S. economy, someone always seems to ask about whether the U.S. dollar will remain the world's reserve currency. But at least so far, it's holding steady. Eswar Prasad reviews the arguments in "The Dollar Reigns Supreme, by Default,"
which appears in the March 2014 issue of Finance & Development.

One way to look at the importance of the U.S. dollar is in terms of what currency governments and investors around the world choose to hold as their reserves. In the aftermath of the 1998 financial and economic crash in east Asia, lots of countries started ramping up their U.S. dollar international reserves. In the aftermath of the Great Recession, a number of countries spent down their dollar reserves to some extent--and now want to rebuild. For example, here's a figure from Prasad on the form in which governments hold foreign exchange reserves. Notice that there's no drop-off in the years after the recession.

Prasad writes:

"The global financial crisis shattered conventional views about the amount of reserves an economy needs to protect itself from the spillover effects of global crises. Even countries with large stockpiles found that their reserves shrank rapidly over a short period during the crisis as they sought to protect their currencies from collapse. Thirteen economies that I studied lost between a quarter and a third of their reserve stocks over about eight months
during the worst of the crisis."
The U.S. dollar is also by far the dominant currency in world economic transactions. It is often how global prices are denominated. When bills are paid between countries, or investments are made between countries, and there is a need to carry out an exchange rate conversion, the U.S. dollar is often involved even when the transaction has nothing to do with the U.S. economy, because other currencies are converted into dollars in the inner workings of the exchange rate markets. The U.S. dollar is used in 87% of all foreign exchange transactions, in foreign exchange markets that are now trading over $5 trillion per day.

The U.S. economy clearly benefits from the dollar's status as the world's reserve currency in an important way: There is an enormous appetite around the world to hold U.S. dollar assets, which makes it a lot easier when a low-saving economy like the United States wants to borrow large amounts from foreign investors. Here's a figure from Prasad showing who holds U.S. government debt:


Many foreign investors, including governments, have expressed concerns about being so heavily invested in U.S. dollar assets. They worry that if inflation does rise, the real value of their debt holdings would decline. As Prasad writes:

"Still, emerging market countries are frustrated that they have no place other than dollar assets to park most of their reserves, especially since interest rates on Treasury securities have remained low for an extended period, barely  keeping up with inflation. This frustration is heightened by the disconcerting prospect that, despite its strength as  the dominant reserve currency, the dollar is likely to fall in value over the long term. China and other key emerging  markets are expected to continue registering higher
productivity growth than the United States, so once global financial markets settle down, the dollar is likely to return  to the gradual depreciation it has experienced since the early 2000s. In other words, foreign investors stand to get a  smaller payout in terms of their domestic currencies when they eventually sell their dollar investments.
Here's a figure from the ever-useful FRED website run by the St. Louis Fed showing the overall sag of the U.S. dollar over time, with some notable bumps in the road along the way.





Over time, one would expect the role of the U.S. dollar as the global reserve currency to decline. Other economies are growing faster. More closely integrated global financial markets are making it easier to carry out transactions that don't involve the U.S. dollar. But the typical prediction, from Prasad and others, is that the dollar will remain dominant not just for a few more years, but perhaps for a few more decades. Part of the reason is that no clear alternative is available. Some well-informed folks continue to doubt whether the euro can survive. China's economy is headed for being the largest in the world, but as Prasad judiciously writes, "the limited financial market development and structure of political and legal institutions in China make it unlikely that the renminbi will become a major reserve asset that foreign investors, including other central banks, turn to for safekeeping of their funds. At best, the renminbi will erode but not significantly challenge the dollar’s preeminent status. No other emerging market economies are in a position to have their currencies ascend to reserve status, let alone challenge the dollar."

For those with an appetite for more on this subject, Prasad has just published a book on the subject,
The Dollar Trap: How the U.S. Dollar Tightened Its Grip on Global Finance. I can also recommend Barry Eichengreen's book on the subject, Exorbitant Privilege: The Rise and Fall of the Dollar and the Future of the International Monetary System, which offers additional historical perspective.


Thursday, March 20, 2014

Eliminate U.S. Tourist Visas?

International tourism is an industry that now involves more than one billion travellers per year and more than $1 trillion per year in total spending. Welcoming more international visitors to spend their vacation dollars in the U.S. is both a plausible way of putting a dent in the U.S. trade deficit and a potential growth industry for new jobs. But international tourism is also an industry where America doesn't really try to compete; indeed, it actively hinders international tourists through its visa requirements.  Robert A. Lawson, Saurav Roychoudhury, and Ryan Murphy consider "The Economic Gains from Eliminating U.S. Travel Visas" in a Cato Institute Economic Development Bulletin (February 9, 2014).  Here's a realistic if hypothetical scenario to set the stage:

"Suppose a reasonably affluent Brazilian family was interested in visiting Disney World in Florida. First they must fill out the DS-160 online application. Then they must pay a $160 application fee per visa and perform two separate interviews, one of which requires invasive questions and fingerprinting. The interviews typically must take place at an American embassy or consulate, of which there are only four locations in all of Brazil–a country as big as the continental United States–meaning that many of the Brazilian applicants will have to travel for the interview. While visas may take only 10 days to process, delays are common, and the United States government recommends not making travel plans until receiving the visa. To make matters even more uncertain, consular officials can stop the process at any moment and deny the family a visa without reason. The Brazilian family could go through that entire lengthy, expensive, and uncertain process or they could go to Disneyland Paris, France, without getting a visa at all. Unsurprisingly, many Brazilian families choose to go to Disneyland in France over the United States."

By their estimates, removing visa requirements completely could triple the number of international tourists in the United States. "Eliminating all travel visas to the United States could increase tourism by 45–67 million visitors annually, corresponding to an additional $90–123 billion in tourist spending."

Of course, there are two main concerns about international tourists: they may be using the tourist visa as a way to immigrate to the U.S., or they may be a terrorist risk.  Lawson, Roychoudhury, and Murphy suggest the possibility of phasing in the removal of travel visas for countries where these issues seem less likely to be important. After all, the U.S. already has a Visa Waiver Program under which people from 37 countries can visit the U.S. for up to 90 days without a visa. They write:
"The United States could phase in tourist visa reciprocity with nations that do not have a history of sending many unauthorized immigrants or that do not present security threats. Tourists from Malaysia, Botswana, Mongolia, Uruguay, and Georgia–nations that do not require Americans to obtain visas before visiting–could be allowed to enter without a visa to begin with, phasing in other nations depending on the success of those liberalizations. If the American authorities grow confident in their ability to limit visa overstays or the possibility of unauthorized immigration is greatly reduced, reciprocity could eventually be phased in with Mexico, Central American countries, and Caribbean nations as well."
Americans need to stop thinking about international tourism as only something where Americans spend money abroad, and start thinking about it as an economic opportunity, too. In a globalizing world economy, the countries that make an effort to be at the crossroads of that economy will have particular advantages.

Wednesday, March 19, 2014

How Academics Learn to Write Badly

Most of my days are spent editing articles by academic economists. So when I saw a book called Learn to Write Badly: How to Succeed in the Social Sciences, the author Michael Billig had me at the title. The book is a careful dissection of the rhetorical habits of social scientists, and in particular their tendency to banish actual people from their writing, and instead to turn everything into a string of nouns (often ending in -icity or -ization) linked with passive verbs to other strings of nouns. (If that sentence sounded ugly to you, welcome to my work life!)

I found especially thought-provoking Billig's argument early in the book about how the necessity for continual publication is a relatively recent innovation in academic life, and how it has altered the incentives for quantity and quality of academic writing. Here are a few of Billig's thoughts (citations omitted):

"In the late 1960s, only a minority of those working in American four-year higher educational colleges tended to publish regularly; today over sixty per cent do ... In 1969 only half of American academics in universities had published during the previous two years; by the late 1990s, the figure had risen to two-thirds, with even higher proportions in the research universities. The number of prolific publishers is increasing. In American universities the proportion of faculty, who had published five or more publications in the previous two years, exploded from a quarter in 1987 to nearly two-thirds by 1998, with the rise in the natural and social sciences particularly noticeable ...
"Experienced academics know that teaming up with other academics can be a means to increasing their collective output and thereby the total number of papers of which they can be credited as an author. In a field such as economics,  jointly written papers were rare before the 1970s but now they are commonplace. Journal editors, as well as those who have studied academic publishing, recognize the phenomenon of `salami slicing'. Academic authors will cut their research findings thinly, so that they can maximize the number of publications they can obtain from a single piece of research. ...
So, we produce our papers, as if on a relentless production line. We cannot wait for inspiration; we must maintain our output. To do our jobs successfully, we need to acquire a fundamental academic skill that the scholars of old generally did not possess; modern academics must be able to keep writing and publishing even when they have nothing to say. ....
As professional academics, we must extract the small nuggets of material relevant to our interests from the mass of stuff that is being produced. Finding what we need to read necessarily means overlooking so much else. The more that is published in our discipline, the more there is to ignore. In consequence, the sheer volume of published material will be narrowing, not widening, horizons, containing us within ever smaller, less varied sub-worlds. It is important to remember that no one designed this system. There was not a moment in history when a group of powerful figures sat down in secret around a table and said: 'Let us create a situation where academics have to read narrowly and to write at speed; that will stop them making trouble.' No secret meeting planned all this. But this is where we are now."

Tuesday, March 18, 2014

The U.S. Productivity Challenge

Over time, productivity growth determines the rise in a society's standard of living. I often find myself talking to people who are skeptical of economic growth for a variety of reasons, so let me specify that by productivity growth, I mean health-increasing, education-improving, job-creating, wage-gaining, pollution-reducing, energy-saving growth. More broadly, the nice thing about economic growth is that it lets you afford to do something about all your other social desires, because a bigger pie creates room for both higher government spending and lower tax rates. Chapter 5 of the 2014 Economic Report of the President, released last week by President Obama's Council of Economic Advisers, tells the story of U.S. productivity growth in recent decades.


To put some intuitive meat on the bones of the productivity idea, the discussion starts with a basic example of productivity for an Iowa corn farmer.

"In 1870, a family farmer planting corn in Iowa would have expected to grow 35 bushels an acre. Today, that settler’s descendant can grow nearly 180 bushels an acre and uses sophisticated equipment to work many times the acreage of his or her forbearer. Because of higher yields and the use of time-saving machinery, the quantity of corn produced by an hour of farm labor has risen from an estimated 0.64 bushel in 1870 to more than 60 bushels in 2013. This 90-fold increase in labor productivity—that is, bushels of corn (real output) an hour—corresponds to an annual rate of increase of 3.2 percent compounded over 143 years. In 1870, a bushel of corn sold for approximately $0.80, about two days of earnings for a typical manufacturing worker; today, that bushel sells for approximately $4.30, or 12 minutes worth of average earnings.
This extraordinary increase in corn output, fall in the real price of corn, and the resulting improvement in physical well-being, did not come about because we are stronger, harder-working, or tougher today than the early settlers who first plowed the prairies. Rather, through a combination of invention, more advanced equipment, and better education, the Iowa farmer today uses more productive strains of corn and sophisticated farming methods to get more output an acre. ... Technological advances such as corn hybridization, fertilizer technology, disease resistance, and mechanical planting and harvesting have resulted from decades of research and development."
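For those who want to check the compounding arithmetic in that passage, here is a minimal sketch using the rounded bushels-per-hour figures quoted above (which is why the fold-increase comes out a bit above the report's 90-fold figure):

```python
# Bushels of corn per hour of farm labor: roughly 0.64 in 1870, more than 60 in 2013.
bushels_1870, bushels_2013 = 0.64, 60.0
years = 2013 - 1870  # 143 years

fold_increase = bushels_2013 / bushels_1870
annual_growth = fold_increase ** (1 / years) - 1

print(f"About a {fold_increase:.0f}-fold increase in labor productivity")
print(f"Compound annual growth of roughly {annual_growth:.1%}")  # about 3.2% per year
```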

In the picture, a typical American worker today has more than four times the output per hour of a worker in 1948. As the table shows, about 10% of the gain can be traced to higher education levels and about 38% of the gain to the fact that workers are working with capital investments of greater value. But the majority of the change is growth in multifactor productivity: that is, innovations big and small that make it possible for a given worker with a given amount of capital to produce more.
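To make that decomposition a bit more concrete, here is a minimal growth-accounting sketch. The numbers are round approximations taken from the passage above (output per hour roughly quadrupling since 1948, with about 10% of the gain from education, 38% from capital deepening, and the rest from multifactor productivity); the precise figures are in the report's table.

```python
import math

# Output per hour roughly quadrupled between 1948 and 2013.
years = 2013 - 1948
annual_growth = math.log(4) / years      # about 2.1% per year in log terms

# Approximate shares of the cumulative gain, as described in the text.
shares = {
    "labor composition (education)": 0.10,
    "capital deepening": 0.38,
    "multifactor productivity": 0.52,    # the residual "majority of the change"
}

for source, share in shares.items():
    print(f"{source}: roughly {share * annual_growth:.2%} per year")
```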


The U.S. productivity challenge can be seen in the statistics of the last few decades. U.S. productivity growth was healthy and high in the 1950s and 1960s, plummeted from the early 1970s up to the mid-1990s, and has rebounded somewhat since then.



The reasons for the productivity slowdown around 1970 are not fully understood. The report lists some of the likely candidates: energy price shocks that made a lot of energy-guzzling capital investment nearly obsolete; a relatively less-experienced labor force as a result of the baby boom generation entering the labor force and the widespread entry of women into the (paid) labor force; and a slowdown after the boost that productivity had received from World War II innovations like jet engines and synthetic rubber, as well as the completion of the interstate highway system in the 1950s. The bounceback of productivity since the mid-1990s is typically traced to information and communications technology, both making it and finding ways to use it. There is considerable controversy about whether future productivity growth is likely to be faster or slower. But given that economists failed to predict either the productivity slowdown of the 1970s (and still don't fully understand it) or the productivity surge of the 1990s, I am not filled with optimism about our ability to foretell future productivity trends.

Sometimes people look at the vertical axis on these productivity graphs and wonder what all the fuss is about. Does the fall from 1.8% to 0.4% matter all that much? Aren't they both really small? But remember that the growth rate of productivity is an annual rate that shapes how much the overall economy grows. Say that from 1974 to 1995 the productivity growth rate had been 1% per year faster. After 22 years, with the growth rate compounding, the U.S. economy would have been about 25% larger. If the U.S. GDP was 25% larger in 2014, it would be $21.5 trillion instead of $17.2 trillion.
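The arithmetic behind that claim is easy to verify in a couple of lines (a minimal sketch using the numbers in the paragraph above):

```python
# Suppose productivity growth had been 1 percentage point per year faster
# over the roughly 22 years from 1974 to 1995.
years = 22
size_multiple = 1.01 ** years            # about 1.24-1.25

print(f"The economy ends up about {size_multiple - 1:.0%} larger")

gdp_2014 = 17.2                          # trillions of dollars, as quoted above
print(f"${gdp_2014} trillion would instead be about ${gdp_2014 * size_multiple:.1f} trillion")
# Close to the roughly 25% / $21.5 trillion figures in the text.
```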

Policy-makers spend an inordinate amount of time trying to fine-tune the outcomes of the market system: for example, consider the recent arguments over raising the minimum wage, raising pay for those on federal contracts, changing how overtime compensation is calculated, or the top tax rate for those with high incomes. Given the rise in inequality in recent decades, I feel some sympathy with the impetus behind policies that seek to slice the pie differently--although I'm sometimes more skeptical about the actual policies proposed. But 20 or 30 years in the future, what will really matter in the U.S. economy is whether annual rates of productivity growth have on average been, say, 1% higher per year.

The agenda for productivity growth is a broad one, and it would include improving education and job training for American workers; tax and regulatory conditions to support business investment; innovation clusters that mix government, higher education, and the private sector; and sensible enforcement of intellectual property law. But here, I'll add a few words about research and development spending, which is often at the root of the growth in innovative ideas that are a primary reason for rises in productivity over time. The Council of Economic Advisers writes:

"Investments in R&D often have “spillover” effects; that is, a part of the returns to the investment accrue to parties other than the investor. As a result, investments that are worth making for society at large might not be profitable for any one firm, leaving aggregate R&D investment below the socially optimal level (for example, Nelson 1959). This tendency toward underinvestment creates a role for research that is performed or funded by the government as well as by nonprofit organizations such as universities. These positive spillovers can be particularly large for basic scientific research. Discoveries made through basic research are often of great social value because of their broad applicability, but are of little value to any individual private firm, which would likely have few, if any, profitable applications for them. The empirical analyses of Jones and Williams (1998) and Bloom et al. (2012) suggest that the optimal level of R&D investment is two to four times the actual level."

In other words, it's been clear to economists for a long time that society probably underinvests in R&D. Indeed, one of the biggest clichés of the last few decades is that we are moving to a "knowledge economy" or an "information economy." We should be thinking about doubling our levels of R&D spending, just for starters. But here's what U.S. R&D spending as a share of GDP looks like: a boost related to aerospace R&D in the late 1950s and into the 1960s, which then drops off, followed by an essentially flat level since around 1980.


How best to increase R&D spending is a worthy subject: Direct government spending on R&D? Matching grants from government to universities or corporations? Tax breaks for corporate R&D? Helping collaborative R&D efforts across industry and across public-private lines? But whether we should increase R&D is a settled question, and the answer is "yes."

Finally, here's an article from the New York Times last weekend on how the U.S. research establishment is depending more and more on private-sector and non-profit funding. The graph above includes all R&D spending--government, private-sector, nonprofit--not just government. Nonprofit private foundations can do some extremely productive work, and I'm all for them. But they are currently filling in the gaps for research programs that lack other support, not causing  total R&D spending to rise.








Monday, March 17, 2014

Financial Literacy

The great difficulty with lifetime financial choices is that you only get to do them once. With lots of choices in  a market, like buying dinner or buying clothes, the choices are made and the consequences are experienced in a fairly short time. Mistakes are fairly small-scale. If you don't like the clothes or the restaurant, go somewhere else next time. Indeed, one of the ways that competitive market forces work to encourage value and quality is through this process of repeated (or not!) purchases. But in a single lifetime, you get to try out precisely one set of lifetime financial choices. If at age 50 or 60 or 70 or 80 you wish that you had done something different earlier in life, you can't go  back to your 20s and 30s and 40s and live it over again.

When I talk about "financial literacy" to make these choices sensibly, I don't mean anything too sophisticated--just the basics to get by. For example, consider a person who can't answer the following three questions, which are examples given by Annamaria Lusardi and Olivia S. Mitchell in "The Economic Importance of Financial Literacy: Theory and Evidence," published in the March 2014 issues of the Journal of Economic Literature. (Full disclosure: The JEL is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.) The JEL isn't freely available on-line, but many in academia at least should have access through a library subscription. Here's your three question financial literacy quiz:

  • Suppose you had $100 in a savings account and the interest rate was 2 percent per year. After 5 years, how much do you think you would have in the account if you left the money to grow: [more than $102; exactly $102; less than $102; do not know; refuse to answer.]
  • Imagine that the interest rate on your savings account was 1 percent per year and inflation was 2 percent per year. After 1 year, would you be able to buy: [more than, exactly the same as, or less than today with the money in this account; do not know; refuse to answer.]
  • Do you think that the following statement is true or false? “Buying a single company stock usually provides a safer return than a stock mutual fund.” [true; false; do not know; refuse to answer.]

Remember, these questions are multiple choice, and you don't need to really understand the subject in any depth to make a plausible guess at the correct answers. These three questions were included as part of the Health and Retirement Study in 2004, which is a nationally representative sample of Americans age 50 and over. Apparently, about one-third of Americans are able to answer all three of these questions correctly.
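For what it's worth, the arithmetic behind the first two questions can be written down in a few lines; this is just a minimal sketch of the calculations, since the third question is conceptual (a diversified mutual fund is less risky than a single stock) and involves nothing to compute:

```python
# Question 1: $100 in a savings account at 2% per year, left to grow for 5 years.
balance = 100 * (1.02 ** 5)
print(f"Balance after 5 years: ${balance:.2f}")
# About $110.41 -- "more than $102" (even simple interest would give $110).

# Question 2: savings earn 1% while inflation runs 2% for a year.
real_value = 100 * 1.01 / 1.02
print(f"Real purchasing power of $100 after a year: about ${real_value:.2f}")
# About $99.02 -- the money buys less than it does today.
```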


As Lusardi and Mitchell point out, many other surveys report similar findings, and the young often do worse than their elders. However, it seems that most Americans are actually pretty optimistic about their own financial literacy. The authors write (citations and references to tables omitted): "Even though actual financial literacy levels are low, respondents are generally rather confident of their financial knowledge and, overall, they tend to overestimate how much they know. For instance, in the 2009 U.S. Financial Capability Study, 70 percent of respondents gave themselves a score of 4 or higher (out of 7), but only 30 percent of the sample could answer the factual questions correctly ..."

Of course, a lack of financial literacy has many costs. People don't save enough, so they become more vulnerable if the car breaks down, or someone in the family gets sick or loses a job. They are also prone to costly financial decisions. Instead of having a bank account, they end up paying high fees through pawn shops and payday loans. They often pay high credit card fees. They end up in costly arrangements when buying cars or appliances or houses, and when interest rates fall (as in the last few years), they fail to refinance their mortgage or other loans. They end up poor in retirement.

Regulators have in the past tended to respond to these issues by requiring more and more disclosure, but adding more fine print is not the superhighway to financial literacy. The harder challenge is to specify a smaller number of clear choices, with sensible default options that will work for most of those who choose not to make a choice. People have different needs and desires, so this isn't easy. On the other hand, people who function in 21st-century America need to deal with health care choices, consumer electronics, software, map-reading, insurance, and a lot of other issues. Most of them can manage the basics of financial literacy, too. But they need to learn it relatively young, because knowing when you're 60 what you should have done when you were 25 is not useful.

Many U.S. states have at least some elements of personal finance in their high school curriculum requirements, but it's not clear how well the lessons are being communicated, either to high school students or to adults interested in learning more. As Lusardi and Mitchell write: "Much work remains to be done. Very importantly, there has been no carefully-crafted cost–benefit analysis indicating which sorts of financial education programs are most appropriate, and least expensive, for which kinds of people."




Friday, March 14, 2014

Inequality and Redistribution: An International View

The issue of income inequality is international. The IMF Fiscal Affairs Department provides an overview in an IMF Policy Paper "Fiscal Policy and Income Inequality," published January 23, 2014.

As a starting point, consider inequality across the world by region, as shown in Figure 1. The measure of inequality here is the Gini coefficient, which ranges from zero, where everyone has the same income, to 1, where one person has all the income. The measure of income is "disposable income," meaning income after taxes and after transfers, though it does not include "in-kind" transfers like the provision of food stamps or Medicaid in the United States. The number in parentheses shows the number of countries in each group.
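For the curious, here is a rough illustration of what the Gini coefficient measures, using the standard mean-absolute-difference formula. This is just a generic sketch with made-up incomes, not the IMF's methodology, which works from weighted household survey data.

    def gini(incomes):
        # G = (sum of |x_i - x_j| over all pairs) / (2 * n^2 * mean income)
        n = len(incomes)
        mean = sum(incomes) / n
        total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
        return total_abs_diff / (2 * n * n * mean)

    print(gini([10, 10, 10, 10]))   # 0.0 -- complete equality
    print(gini([0, 0, 0, 40]))      # 0.75 -- approaches 1 as one person holds everything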




As the figure shows, inequality has been rising in advanced economies, even after the redistributive effects of government tax and spending programs are taken into account. But it's also interesting to note that, across these groupings, income inequality has been and remains much higher in other regions of the world. The IMF writes:

"Between 1990 and 2010, the Gini for disposable income has increased in nearly all advanced and emerging European economies. Over one-third of advanced economies and half of emerging Europe experienced increases in their Ginis exceeding 3 percentage points, with most of the increases in emerging Europe occurring between 1990 and 1995 during the early years of their transition to market-based systems. Inequality also rose in most economies in Asia and the Pacific and in Middle East and North Africa. While average inequality fell in sub-Saharan Africa over this period, it still rose by more than 3 percentage points in more than one-fourth of these economies. Inequality also increased in over one-third of the economies in Latin America, although on average there was a slight decline. However, since 2000 there has been a substantial decline in the Gini in nearly all countries in this region. ... "More striking than changes in inequality within regions are the persistent differences across regions. For instance, between 1990 and 2010, average inequality in each region changed by less than 3¼ percentage points. In contrast, average inequality in the two most unequal regions
(sub-Saharan Africa and Latin America) remained 12 percentage points higher than the two most equal regions (emerging Europe and advanced economies)." 
The long-term patterns of inequality at the very top of the income distribution vary as well. For one group of countries--including the US, Canada, the UK, China, and India--the share of gross income going to the top 1% fell through much of the 1950s and 1960s, but has now risen back to levels similar to the 1920s and 1930s. In another group of countries--France, Germany, Japan, Sweden--the share of gross income going to the top 1% declined and has since pretty much flattened out.




As the IMF report points out, "a large proportion of the differences in regional average disposable income inequalities can be explained by differences in fiscal policies, especially in the levels and composition of taxes and spending." Here's a figure, based on public opinion polls, showing public support for redistribution. The horizontal axis shows support for redistribution in the late 1990s and the vertical axis shows support in the late 2000s. If a country is right on the black line, polling data showed the same support for redistribution in the late 2000s as in the late 1990s. The USA, for example, is quite close to the black line. But most countries are above the line, showing that public support for redistribution increased.



Just to be clear, the points here are based on a combination of surveys. The IMF report explains:

"These surveys, which include the World Value Surveys (WVS), Regional Barometers, and International Social Surveys, ask citizens whether they favor more or less redistribution. In the WVS, respondents are asked to indicate, on a scale from 1 to 10, whether “incomes should be made more equal” (1) or whether the country “needs larger income differences as incentive” (10). For our purposes, we divide these responses in two categories: answers 1 to 5 indicate that the respondents prefer more redistribution, and answers 6 to 10 indicate preference for less redistribution. A similar approach is applied to other surveys to find the share of the population that supports more redistribution. The evidence indicates that public support for redistributive policies has grown in recent decades. Between the late-1990s and the late-2000s, public support for redistribution increased in almost 70 percent of the advanced and developing economies surveyed. For instance, support increased substantially in Finland, Germany, and Sweden, and also in China and India ... Support for redistribution grew more in countries where inequality increased and, more recently, in advanced economies where the crisis hit hardest. For instance, public support between the late-1990s and the late-2000s grew by more than 30 percentage points in China, Finland, Germany and several Eastern European countries, where the income Gini increased by over 20 percent. At the same time, support declined in countries where the Gini decreased, including in Bulgaria, Mexico, Peru, and Ukraine."
The IMF also calculates how much redistribution is actually happening through tax and spending policies: that is, how much the Gini coefficient is reduced by the taxes and transfers enacted by government. Here's the measure for the advanced economies, where the Gini based on market incomes averages .43, but the Gini based on disposable (after-tax, after-transfer) income is .29.
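Put differently, the amount of fiscal redistribution can be summarized as the gap between the market-income Gini and the disposable-income Gini. Using the advanced-economy averages just cited, a back-of-the-envelope version (my own calculation, not the IMF's country-by-country figures) looks like this:

    market_gini = 0.43        # average Gini for market income, advanced economies
    disposable_gini = 0.29    # average Gini after taxes and transfers
    absolute_reduction = market_gini - disposable_gini
    relative_reduction = absolute_reduction / market_gini
    print(round(absolute_reduction, 2), round(relative_reduction, 2))
    # 0.14 Gini points, or roughly a one-third reduction in market-income inequality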


As the figure shows, the U.S. does less redistribution than other high-income countries. But the figure also helps to illustrate a perhaps less-known point: the lower level of U.S. redistribution is more on the spending side than on the tax side. Many European countries collect a lot of their tax revenues through value-added taxes or energy taxes, which work like sales taxes in collecting a higher share of income from those who have lower incomes, and who consume a higher share of their income. Indeed, the amount of redistribution that the U.S. does through taxes hasn't changed much in the last couple of decades, but the amount of redistribution that it does through transfer spending programs has declined substantially, in part because the big entitlement spending programs like Social Security and Medicare are not especially aimed at the poor. The report has much more to say about the design of tax and transfer programs around the world.

Thursday, March 13, 2014

U.S. Teen Birthrate Plummets

Teen birthrates in the United States have been falling for two decades--and falling at a faster rate in the last few years. Melissa S. Kearney and Phillip B. Levine explore possible reasons why in "Teen Births Are Falling: What’s Going On?" written as a March 2014 Policy Brief for the Economic Studies program at the Brookings Institution. Here's the trend:

To be sure, the U.S. teen birthrate remains well above that of most other high-income countries, where, as Kearney and Levine point out, the teen birthrate is usually in the range of 5-10 births per 1,000 women aged 15-19. Still, the teen birth rate in the U.S. has fallen by half in the last couple of decades. What might be the underlying causes?

There are three possibilities: teens having sex less often, teens who are having sex using contraception more often, and a higher number of abortions. The abortion rate among teens has not risen in a way that would explain the fall in teen births, so Kearney and Levine focus on the first two causes. They sum up their overall perspective in this way (citations omitted for readability):

"To some extent, it is appropriate to consider teen childbearing to be the result of “non-decisions;” some teens are sufficiently ambivalent about becoming pregnant that they do not commit themselves to taking precautions against such an outcome. To illustrate this point, half of teens who report having an unintended pregnancy were not using contraception at the time of conception. This way of thinking about the issue allows for individual error and randomness in the process, but
ultimately considers that individuals – even teens – respond to the environment around them and make choices that either increase or decrease the likelihood of becoming a teen parent. Indeed, the data suggest as much, with teen childbearing rates rising and falling with environmental factors in systematic ways."

In ongoing research, they look at a number of these different environmental factors at the state level: different kinds of state sex education programs, from abstinence to contraceptive education; expansions of Medicaid that cover contraception or post-partum care for broader income groups; welfare payments; state abortion rules like waiting times or parental consent rules; child support enforcement; state unemployment rates, and more. These policies vary across states and are enacted at different times in different states, so you can run a statistical test of how much of the variation in teen birthrates across states can be explained by these factors. The answer is "very little." They write: "We found that declining welfare benefit levels and expanded access to family planning services for lower income women through the Medicaid program were the only two policies to have had a discernible effect. However, their effect is limited: we calculate that these two policies together account for only 12 percent of the reduction in teen childbearing between 1991 and 2008. Our analysis yields no evidence suggesting that other policies had a significant role in the decline."
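The statistical exercise being described is, in spirit, a state-year panel regression of teen birth rates on policy variables, with state and year fixed effects. The sketch below is only my own illustration of that general approach, run on synthetic data I generate on the spot; the variable names and specification are not Kearney and Levine's.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Build a hypothetical state-year panel (synthetic data, purely for illustration).
    rng = np.random.default_rng(0)
    rows = []
    for state in ["AL", "CA", "NY", "TX"]:
        for year in range(1991, 2009):
            welfare_benefit = rng.normal(500, 50)             # made-up monthly benefit
            family_planning_waiver = int(rng.integers(0, 2))  # 1 if a Medicaid waiver is in place
            birth_rate = (60 - 0.01 * welfare_benefit - 2 * family_planning_waiver
                          - 0.8 * (year - 1991) + rng.normal(0, 2))
            rows.append([state, year, welfare_benefit, family_planning_waiver, birth_rate])
    df = pd.DataFrame(rows, columns=["state", "year", "welfare_benefit",
                                     "family_planning_waiver", "birth_rate"])

    # Regress teen birth rates on the policy variables, absorbing state and year effects,
    # then ask how much of the cross-state variation the policy variables explain.
    model = smf.ols("birth_rate ~ welfare_benefit + family_planning_waiver"
                    " + C(state) + C(year)", data=df).fit()
    print(model.params[["welfare_benefit", "family_planning_waiver"]])

Kearney and Levine's point is that even with a much richer set of policy variables and actual data, a regression along these lines explains very little of the overall decline.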

Thus, if state-level differences are not the main factor, the next place to look is in broader economic or social patterns across states. In addition, these need to be factors that apply through both good and bad economic times over the last couple of decades. Indeed, although the U.S. has the highest rate of teen births across high-income countries, the rate of teen births has been falling in most countries, which suggests that at least some of the causes are broader than U.S. policies or socioeconomic conditions.

Kearney and Levine point to a few factors that have probably hastened the decline in the U.S. teen birth rate in recent years: in particular, the popularity of the TV shows 16 and Pregnant and Teen Mom, as well as the overall pattern that teen birth rates tend to be lower when unemployment is higher. But in explaining the long-term trend, and the broader trend across countries, they readily admit that the evidence is not yet conclusive. They end up focusing on two factors:

"We speculate that there are two likely candidate explanations: (1) access to improved contraceptive technologies, most notably long-acting reversible contraception (LARCs) such as implants and intrauterine devices (IUDs) and (2) increased educational attainment along with better labor market prospects for young women. ... The policy challenge that we believe offers the greatest potential is to address the needs of those young women who are not committed to avoiding a pregnancy. These are teens whose views are characterized by ambivalence. For them, the issue is more about finding ways to make them want to avoid a teen birth. ...  Simply put, increased aspirations and expanded opportunities for young women have the potential to extend the downward trend in teen childbearing."
For those interested in this topic, and especially in unravelling the difficult questions of whether teen pregnancy causes poor economic outcomes or poor economic outcomes lead to more teen pregnancy, and why teen pregnancy rates vary across countries and U.S. regions, I recommend an earlier article by Kearney and Levine, "Why is the Teen Birth Rate in the United States So High and Why Does It Matter?" which appears in the Spring 2012 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since 1987, and so am predisposed to believe that all articles in the journal are well-exposited and of considerable interest.)


Wednesday, March 12, 2014

Selling a Kidney: Would the Option Necessarily Be Beneficial?

The March 2014 issue of the Journal of Medical Ethics has a symposium on the issues of whether people should be allowed to sell a kidney. The lead article by Simon Rippon, "Imposing options on people in poverty: the harm of a live donor organ market," is freely available on-line, but you need a subscription to read the comments.

Rippon aims to tackle head-on the claim, popular among economists, that offering people an additional option--in this case to sell a kidney--must make the people better off, because they don't need to choose the option, but if they wish to do so, they can. He clears the ground by saying: "I know of no good reason for believing that there is anything intrinsically wrong with buying or selling organs. It is certainly difficult to imagine any plausible explanation of the wrongness of selling organs that would not equally count against giving them away. It is true that these two types of act differ in that giving organs away is presumably motivated altruistically, whereas selling need not be—but it is not usually considered intrinsically wrong to act from non-altruistic motives. Even if giving organs away is morally better than selling them, it is implausible to suggest that we therefore ought to encourage donation by banning selling entirely, when the cost of doing so might be measured in the loss of thousands of innocent lives due to an inadequate supply of organs. For this reason I will set aside this objection to organ markets, and turn to another. My objection will not depend on the claim that there is anything intrinsically morally wrong with selling or buying organs."

Instead, Rippon makes an argument that when an option is available, at least some people will find themselves under social pressure to select that option, or will be held responsible for failing to choose it. "For example, imagine a cashier at a rural filling station that is potentially vulnerable to an overnight robbery. It may be better for the cashier to have no key to the safe (and to have a prominent sign displaying that information) than for the cashier to have the key which gives him the option to open it. Possession of the key would make the cashier vulnerable to threats, and the filling station worth robbing."

If selling a kidney were a legal option, Rippon argues:

"This means that even if you have no possessions to sell and cannot find a job, nobody can reasonably criticise you for, say, failing to sell a kidney to pay your rent. If a free market in organs was permitted and became widespread, then it is reasonable to assume that your organs would soon enough become economic resources like any other, in the context of the market. Selling your organs would become something that is simply expected of you as and when financial need arises. ... 
We should ask questions such as the following: Would those in poverty be eligible for bankruptcy protection, or for public assistance, if they have an organ that they choose not to sell? Could they be legally forced to sell an organ to pay taxes, paternity bills or rent? How would society view someone who asks for charitable assistance to meet her basic needs, if she could easily sell a healthy ‘excess’ organ to meet them? ... Wherever there is great value in not being put under social or legal pressure to sell something as a result of economic forces, we should think carefully about whether it is right to permit a market and to thereby impose the option on everyone to sell it."

The idea that certain activities should be banned not because they are necessarily wrong, but because otherwise there would be social pressure to participate, has some intuitive plausibility, but in practice it leads to some of our trickiest social issues. For example, some European countries have at times banned headscarves, so that Islamic women are not under social pressure to wear them. There are arguments for banning circumcision, to protect families from social pressure to have the procedure done. That said, the concern that powerless people might be pressured into selling a kidney seems to me a legitimate counterargument. The commenters on Rippon's essay raise a number of possible responses.

In one of the comments, Gerald Dworkin offers several responses. He argues that whether to permit kidney sales shouldn't be settled by pointing to a few possible bad outcomes, but instead requires a balancing of costs and benefits, in the context of a regulated market: "To do so he must establish that there is a class of harms that (1) are likely to occur, (2) are significant enough to outweigh the enormous benefit of saving people’s lives, and (3) cannot be mitigated sufficiently by intelligent regulation ..."

Dworkin also points out that if Rippon's concerns were well-founded, we should already be seeing the consequences, because markets in bodily materials already exist. "But we already have formal markets in blood, tissues, sperm and eggs. And lest one think that the sums offered in these markets are trivial, it is not uncommon for infertile couples to make offers of US$50 000 for eggs that meet their specifications. Is there any evidence of the kinds of speculative harms adduced by Rippon—ineligibility for bankruptcy protection or for public assistance—in these markets?"

In addition, Dworkin draws a provocative parallel to the arguments over physician-assisted suicide, another area in which there is fear of social pressure if the option is open. He writes: "To use an analogy, many of the same arguments apply to legalising physician-assisted suicide. Those who are dying, and using up family resources, may be pressured into such suicide by family members. They may be looked upon as ‘selfish’ by society for using scarce resources. But, on the other side, if we keep assisted-suicide illegal, we prevent dying patients from ending their life sooner rather than later. I think the ability to determine the timing of your death is sufficiently important to expose those who do not wish to die sooner to pressures they will have to resist. Similarly, I believe that greater access to organs necessary to continued life for many people justifies imposing risks of social pressures which, at the moment, we have little evidence will occur (or not) and have even less evidence are not preventable by regulation."

My sketch here cannot do justice to all of the arguments involved, but I will add two points. First, at present, the main source of kidney donations is people who die unexpectedly, along with some voluntary living donors. Meanwhile, thousands of Americans die every year awaiting a kidney transplant. I can easily imagine that a substantial group of healthy people who would not be willing to donate a kidney for free would be willing to do so for substantial compensation, and encouraging transplants from healthy donors could save thousands of lives. Second, it troubles me that we often expect the donors of kidneys and blood to act out of sheer altruism, but we have no such expectation of any of the other participants in an organ transplant, like the health care providers or the hospital.

For those interested in how economists view this issue, the Journal of Economic Perspectives had a Symposium on Organ Transplants in the Summer 2007 issue, with two Nobel laureates among the authors. (Full disclosure: I've been the Managing Editor of JEP since 1987.) All articles in JEP are freely available, courtesy of the American Economic Association.