Saturday, November 30, 2013

Editor Hell

At the end of a long day at my job as the Managing Editor of the Journal of Economic Perspectives, it's always pleasant to consider those editors whose lives are harder than my own. 

Consider the editors who have worked on the Oxford English Dictionary. Lorien Kite tells some of the stories in "The evolving role of the Oxford English Dictionary," which appeared in the Financial Times on November 15. For those not familiar with the OED, it not only aspires to include every word in the English language, whether in current use or archaic, but it also seeks to give examples of usage of words over time. The full article is worth reading, but here are a few snippets.
"James Murray (1837-1915), the indefatigable editor who oversaw much of the first edition, was originally commissioned to produce a four-volume work within a decade; after five years, he had got as far as the word “ant”."
"When work began on OED3 in the mid-1990s, it was meant to be complete by 2010. Today, they are roughly a third of the way through and Michael Proffitt, the new chief editor, estimates that the job won’t be finished for another 20 years."
"The first edition, published in 10 instalments between 1884 and 1928, defined more than 400,000 words and phrases; by 1989, when two further supplements of 20th-century neologisms were combined with the original to create the second, this had risen to some 600,000, with a full word count of 59m. Once the monumental task of revising and updating that last (and possibly final) printed incarnation is complete, the third edition is expected to have doubled in overall length." 
"The OED records 750,000 individual “sessions” each month, most of which come via institutions such as libraries, universities, NGOs and government departments. ... The surprising thing, explains Judy Pearsall, editorial director for dictionaries in OUP’s global academic division, is that a quarter of these monthly visits are coming from outside what we think of as the English-speaking world.In September, the US accounted for the single biggest group of users, followed by the UK, Canada and Australia. At numbers five and six, however, are Germany and China. Readership from countries where English is not the first language is growing faster too ..."

Thanks to Larry Willmore at his "Thought du Jour" blog for pointing out the article. I'm going to put it in my "Editor Hell" file folder, next to the example of Werner Stark, who edited the collected economic works of Jeremy Bentham. Here's my description of his task from an essay I wrote in 2009 called "An Editor's Life at the Journal of Economic Perspectives." (If you are curious about my personal background and approach to editing, you might find it an interesting read.)

[C]onsider the problems posed in editing the papers of Jeremy Bentham, the utilitarian philosopher and occasional economist. Bentham wrote perhaps 15 pages in longhand almost every day of his adult life. His admirers gathered some of his work for publication, but much was simply stored in boxes, primarily at the library of University College, London. In 1941, an economist named Werner Stark was commissioned by the Royal Economic Society to prepare a comprehensive edition of Bentham’s economic writings, which in turn are just a portion of his overall writings. In the three-volume work that was published 11 years later (!), Stark (1952) wrote in the introduction: 
The work itself involved immense difficulties. Bentham’s handwriting is so bad that it is quite impossible to make anything of his scripts without first copying them out. I saw myself confronted with the necessity of copying no less than nine big boxes of papers comprising nearly 3,000 pages and a number of words that cannot be far from the seven-figure mark. But that was only the first step. The papers are in no kind of order: in fact it is hard to imagine how they ever became so utterly disordered. They resemble a pack of cards after it has been thoroughly shuffled. . . . The pages of some manuscripts, it is true, were numbered, but then they often carried a double and treble numeration so that confusion was worse confounded, and sometimes I wished there had been no pagination at all. In other manuscript collections the fact that sentences run uninterruptedly from one sheet onto another, is of material help in creating order out of chaos. I was denied even this assistance. It was one of Bentham’s idiosyncrasies never to begin a new page without beginning at the same time a new paragraph. But I cannot hope to give the reader an adequate idea of the problems that had to be overcome. 

Stark’s lamentations would chill the heart of any editor. “Bentham was most unprincipled with regard to the use of capitals.” “After careful consideration, it was found impossible to transfer the punctuation of Bentham’s manuscripts on to the printed page. When he has warmed to a subject and is writing quickly, he simply forgets to punctuate . . . ” And so on.
Of course, the task of editing can have some extraordinary payoffs. Making Bentham's thoughts available and accessible to readers is of great importance. One can imagine a future in which you will buy the OED as an app for your e-reader or your word-processor, and definitions and past uses will be only a click away. In a 2012 essay "From the Desk of the Managing Editor," written for the 100th issue of the Journal of Economic Perspectives, I tried to describe some of what an editor can hope to accomplish:

Communication is hard. The connection between writer and reader is always tenuous. No article worth the reading will ever be a stroll down the promenade on a summer’s day. But most readers of academic articles are walking through swampy woods on a dark night, squelching through puddles and tripping over sticks, banging their shins into rocks, and struggling to see in dim light as thorny branches rake at their clothing. An editor can make the journey easier, so the reader need not dissipate time and attention overcoming unnecessary obstacles, but instead can focus on the intended pathway. 
Obstacles to understanding arise both in the form of content and argument and also in the nuts and bolts of writing. An editor needs a certain level of obsessiveness in confronting these issues, manuscript after manuscript, for the 1,000 pages that JEP publishes each year. Plotnick (1982, p. 1) writes in The Elements of Editing: “What kind of person makes a good editor? When hiring new staff, I look for such useful attributes as genius, charisma, adaptability, and disdain for high wages. I also look for signs of a neurotic trait called compulsiveness, which in one form is indispensable to editors, and in another, disabling.”
The ultimate goal of editing is to strengthen the connection between authors and readers. Barney Kilgore, who was editor of the Wall Street Journal during the period when its circulation expanded dramatically in the 1950s and 1960s, used to post a motto in his office that would terrify any editor (as quoted in Crovitz 2009): “The easiest thing in the world for a reader to do is to stop reading.” An editor can help here, by serving as a proxy for future readers.

Friday, November 29, 2013

Shifting Components of the Dow Jones Industrial Index

The Dow Jones Industrial Index is based on stock prices of 30 large blue-chip companies that in some ill-defined way are supposed to represent the core of the U.S. economy. Over time, some companies are replaced by others. In September, for example, the formerly private investment bank Goldman Sachs replaced the public commercial bank Bank of America; the payments company Visa replaced the information technology company Hewlett-Packard; and the consumer clothing and gear company Nike Inc. replaced Alcoa Inc., which was traditionally an aluminum company but now has a finger in various elements of design and manufacturing of parts, along with recycling. The changes seemed to me symptomatic of broader changes in the US economy, which made me look back at the companies in the Dow over time.

The first official Dow Jones Index was started in 1896, although Charles Dow had been putting out an earlier index, mainly of railroad stocks, as far back as the 1870s. Here, I'll just offer a few comparisons from more recent times. The companies in the Dow stayed unchanged from 1959 to 1976. The first column shows the list of Dow Jones index companies from that time period--call it roughly 40-50 years ago. The second column shows the companies in the Dow from 1994 to 1997, which is a little less than two decades ago. The third column shows the current list. These lists push me to think about how the US economy has been evolving.


As a starting point, compare the 1959-1976 list to the present. There are only six companies on both lists: AT&T (which is of course a vastly different company now than when it was the monopoly provider of U.S. telephone services), DuPont, General Electric, and Procter and Gamble, along with Standard Oil (N.J.), which became Exxon and eventually ExxonMobil, and United Aircraft, which became United Technologies. A number of companies involving metal are out of the index--Aluminum Company of America, Anaconda Copper, International Nickel, and U.S. Steel--as are companies focused on commodity products like American Can, International Paper, and Owens-Illinois Glass.

The new entries are tech companies like 3M, Cisco, IBM, and Microsoft, as well as financial companies like American Express, Visa, JP Morgan, Goldman Sachs, and Travelers. Health-care-related companies like Merck, Pfizer, and UnitedHealth Group are new entries, too. The face of American retailing was Sears in the earlier list; now it is Wal-Mart and Home Depot. The face of food products was General Foods in the earlier list; now it's Coca-Cola and McDonald's. The face of the oil industry was two Standard Oil companies (!) and Texaco in the earlier list; now it's ExxonMobil. International Harvester is off the list; Caterpillar is on.

The middle list is a snapshot of the transition between past and present. By my count, from the 1959-1976 period up to the mid-1990s, about half of the 30 companies in the Dow (16 of 30) remained in some form, although several changed their names (Allied Chemical became AlliedSignal, Standard Oil (N.J.) became Exxon, United Aircraft became United Technologies). Also, about half of the companies in the Dow in the mid-1990s (15 of 30) are no longer in the index at present. I don't claim to know what the "right" amount of turnover should be among top companies in a free-market society. But over the time frame of a few decades, the turnover is substantial.
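For readers who want to replicate this sort of counting, the comparison is just set arithmetic. Here's a toy sketch in Python; the five-company lists are illustrative stand-ins I made up, not the actual 30-company rosters.

```python
# Toy sketch: compare two eras of Dow membership as sets.
# These abbreviated lists are illustrative, not the real rosters.
dow_mid1990s = {"AT&T", "DuPont", "General Electric", "Sears", "Texaco"}
dow_2013 = {"AT&T", "DuPont", "General Electric", "Visa", "Nike"}

survivors = dow_mid1990s & dow_2013   # companies on both lists
dropped = dow_mid1990s - dow_2013     # companies no longer in the index

print(f"{len(survivors)} of {len(dow_mid1990s)} remain: {sorted(survivors)}")
print(f"Dropped: {sorted(dropped)}")
```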

Thursday, November 28, 2013

Changes in America's Family Structure

When families get together for Thanksgiving and the holidays that follow, the structure of those families is different from what it was a few decades ago. Jonathan Vespa, Jamie M. Lewis, and Rose M. Kreider of the U.S. Census Bureau provide some background in "America’s Families and Living Arrangements: 2012" (August 2013). In some ways, none of the trends is deeply surprising, but in other ways, the patterns of households set the stage for our political and economic choices.

As a starting point, here's a graph showing changes in households by type. Married households with children were 40.3% of all US households in 1970; by 2012, that share had fallen by more than half, to 19.6%. Interestingly, the share of households that are married without children has stayed at about 30%. The category of "other family households," usually meaning single-parent families with children, has risen. Overall, the share of U.S. households that involve a family (either married or with children) was 81% back in 1970, but down to 66% in 2012. The share of households made up of men or women living alone has risen. The figures differ by gender in part because of differences in older age brackets: "Nearly three-quarters (72 percent) of men aged 65 and over lived with their spouse compared with less than half (45 percent) of women."


The average number of people in households is falling. The share of households with five or more people has dropped by more than half, from 20.9% in 1970 to 9.6% in 2012. Meanwhile, the share of households with one or two people rose from 46% in 1970 to 61% in 2012. 

One of the hot topics in the last few years has been the subject of 20-somethings moving home to live in their parents' basement. This pattern is visible in the Census data, but it's less striking over time than I might have thought, and it started before the onset of the recession. (The data in the early years of this figure come from different statistical surveys than the post-1983 data, so one shouldn't make too much of what appears to be a jump from 1980 to 1983.) My guess is that as the share of students enrolling in higher education has risen over time, some of the rise in 18-24 year-olds living at home reflects college students. 

Finally, it's worth noting that one-quarter of all US children are being raised in single-parent households. These households typically have below-average incomes, and with only one parent at home, they are less able to provide hours of adult time at home. 

The structure of households shapes politics and economics. For example, a greater share of adults living alone means a shift in the housing supply away from big houses and toward apartments, and makes it more likely that these single-person households will locate in or near cities rather than in suburban houses. A smaller share of households with children means that when governments set priorities, support for schools will be lower. Households are also a way of sharing risk: a household with two adults has more possibilities for sharing the risk of job loss, or sharing the risk that time needs to be spent dealing with sickness or injury. Therefore, the growth in single-person households tends to mean increased support for social methods of sharing risk, including government programs that support unemployment insurance or health insurance.  To some extent, we are how we live. 


Wednesday, November 27, 2013

An Economist Chews Over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there's anything wrong with that.

The U.S. Department of Agriculture last published a detailed "Overview of the U.S. Turkey Industry" back in 2007. Some themes about the turkey market waddle out from that report on both the demand and supply sides.

On the demand side, the quantity of turkey consumed per person rose dramatically from the mid-1970s up to about 1990, but since then has declined somewhat. The figure below is from the Eatturkey.com website run by the National Turkey Federation. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.

On the production side, the National Turkey Federation explains: "Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing - from breeding through delivery to retail." However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized--with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys.  Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:
"In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg  capacity per hatchery in 2007.

Turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent."

U.S. agriculture is full of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a "turkey" as a product that doesn't have a lot of opportunity for technological development, but clearly I'm wrong. Here's a graph showing the rise in size of turkeys over time.


The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here's a list of the top turkey producers in 2011 from the National Turkey Federation:

For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?  

Anyway, the starting point for measuring inflation is to define a relevant "basket" or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

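Here's a minimal sketch, with hypothetical items and prices, of the arithmetic behind any fixed-basket index: price the same basket in two years and compare the totals.

```python
# Minimal sketch of a fixed-basket price index. Items, quantities,
# and prices are hypothetical placeholders, not actual survey figures.
basket = {
    # item: (quantity, price last year, price this year)
    "16-lb turkey":      (1.0, 22.23, 21.76),
    "stuffing, 14 oz":   (1.0, 2.64, 2.67),
    "pumpkin pie mix":   (2.0, 3.10, 3.12),
    "whole milk, 1 gal": (1.0, 3.66, 3.60),
}

cost_then = sum(q * p_then for q, p_then, p_now in basket.values())
cost_now = sum(q * p_now for q, p_then, p_now in basket.values())
change = 100 * (cost_now / cost_then - 1)
print(f"Basket cost: ${cost_then:.2f} -> ${cost_now:.2f} ({change:+.1f}%)")
```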
The cost of buying the Classic Thanksgiving Dinner fell about 1% in 2013, compared with 2012. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, especially since 1990 or so, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate.

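The inflation adjustment behind the lower line is just division: deflate each year's nominal cost by an overall price index to express it in constant base-year dollars. A sketch with made-up numbers:

```python
# Sketch of converting a nominal price series to a real (inflation-
# adjusted) one. The dinner costs and index values are made up.
nominal_cost = {1990: 28.85, 2013: 49.04}  # hypothetical dinner costs
price_index = {1990: 130.7, 2013: 233.0}   # hypothetical overall index
base_year = 1990

for year, cost in nominal_cost.items():
    real = cost * price_index[base_year] / price_index[year]
    print(f"{year}: nominal ${cost:.2f} = real ${real:.2f} in {base_year} dollars")
```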
Thanksgiving is my favorite holiday. Good food, good company, no presents--and all these good topics for conversation. What's not to like? 

(Note: This is an updated version of a post that was first published on Thanksgiving Day 2011.)

For those whose appetite for turkey-related economics is not yet satiated, I recommend that you turn next to the article in the New York Times last Sunday by Catherine Rampell, which tackles the puzzle of why the price of frozen turkeys tends to fall right before Thanksgiving, when one might expect demand to be highest. The article is here; a blog post with background information is here.

Tuesday, November 26, 2013

The Conundrum of EITC Overpayments (and the Health Insurance Exchanges)

Just to be up-front, I'm a long-time fan of the Earned Income Tax Credit. Other government programs to assist those with low incomes can easily discourage work. Imagine that every time a low-income person earns $1, they lose roughly $1 worth of benefits in welfare payments, food stamps, housing vouchers, maybe even losing their Medicaid health insurance. As a result, the incentive to work is dramatically reduced: for earlier posts with details on how this works in practice, see here and here. But with the EITC, when low-income households (especially those with children) earn income, the federal government pays them an additional tax credit based on their earnings. Thus, the EITC is a reward for working, not a payment conditional on not working.
Here's a nice readable overview of the EITC from the Center on Budget and Policy Priorities.
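To see concretely how a credit that phases in with earnings rewards work, here is a stylized sketch. The phase-in rate, maximum credit, and phase-out parameters below are illustrative placeholders, not actual IRS figures for any year or family size.

```python
# Stylized EITC shape: phase-in, plateau, phase-out.
# All parameters are illustrative, not actual IRS values.
def eitc(earnings, phase_in_rate=0.34, max_credit=3250.0,
         phase_out_start=17500.0, phase_out_rate=0.16):
    if earnings <= 0:
        return 0.0  # no earnings, no credit: the EITC rewards work
    credit = min(phase_in_rate * earnings, max_credit)
    if earnings > phase_out_start:  # credit shrinks past the threshold
        credit -= phase_out_rate * (earnings - phase_out_start)
    return max(credit, 0.0)

for e in (0, 5_000, 12_000, 20_000, 30_000):
    print(f"earnings ${e:>6,}: credit ${eitc(e):,.2f}")
```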

But the EITC does have a long-standing problem: a report from the Treasury Inspector General for Tax Administration estimates that about one-fifth of all EITC payments are made to those who don't actually qualify for them. In 2012, the EITC had $11.6 to $13.6 billion in overpayments. Or if you want to look back over time, overpayments accumulated to between $110 billion and $132 billion over the decade from 2003-2012.

The report is "The Internal Revenue Service Is Not in Compliance With Executive Order 13520 to Reduce Improper Payments" [August 28, 2013, #2013-40-084]). President Obama signed Executive Order 13520 on November 20, 2009, which sought to increase the accountability of federal agencies to reduce improper payments. But as TIGTA reports, the IRS has not succeeded reducing these EITC overpayments. On the other side of the coin, TIGTA writes, "The IRS estimates that the participation rate for individuals who are eligible to receive the EITC is between 78 and 80 percent."

Why is this problem so difficult? And if the federal government runs the EITC with a 20% error rate, what are the chances it can run the health care exchanges more effectively?

Here's the EITC enforcement conundrum: The program is spending 20% or so of its funds on millions of households that aren't eligible, while not reaching millions of households that are. But the amounts for any given household are small. The CBPP summary mentioned above notes: "During the 2010 tax year, the average EITC was $2,805 for a family with children and $262 for a family without children." In addition, low-income people are moving on and off the program all the time. TIGTA writes: "Studies show that approximately one-third of EITC claimants each year are intermittent or first-time claimants." It notes that some of the cases of EITC overpayment are just plain fraud, sometimes aided and abetted by those who prepare taxes. About two-thirds of tax returns that claim the EITC are filled out by outside preparers. But there are also lots of gray-area situations where the complexity of tax law and the EITC provisions, along with general confusion, makes it uncertain whether someone is eligible.

It won't make economic sense for the IRS to hire a bunch of well-paid tax auditors to delve into the complex and incomplete financial records, home lives, and tax returns of millions of low-income families, hoping to recover a few hundred or even a couple of thousand dollars per family. Thus, the TIGTA report argues: "The IRS must develop alternatives to traditional compliance methods to significantly reduce EITC improper payments." The IRS has made some efforts to communicate the law more clearly to paid tax preparers. But as TIGTA reports, "the IRS has made little improvement in reducing improper EITC payments as a whole ..."

The federal government can do some large-scale programs pretty well. For example, it sends out Social Security checks in a cost-effective manner. With more of a paperwork struggle, it manages to cope with annual tax returns and Medicare and Medicaid payments to health care providers.

But the EITC is a program that involves complex rules for eligibility and size of payments, much more complex than Social Security. The EITC is aimed at low-income people, many of whom have economic and personal lives that are in considerable flux and many of whom have limited ability to deal with detailed paperwork, unlike the health care providers who receive Medicare and Medicaid payments. The envisioned health insurance exchanges are likely to end up serving many more people than the EITC. The complexity of decisions about buying health insurance is greater than the complexity of qualifying for cash payments from the EITC. When people's eligibility for subsidies is moving and changing each year, as it is for the EITC and will also be for the health insurance exchanges, it will be difficult for the federal government to sort out eligibility. And as the complexity of the rules rises, it will spawn a network of people to help in filling out the forms, most of whom will be honest and forthright, but some of whom will be focused on making people eligible for the largest possible subsidies, with little concern for legal qualifications.

I remain a fan of the EITC, but I confess that as a matter of practical administration, I'm not at all sure of how to substantially reduce its persistent problem of overpayments in a cost-effective manner. Part of the answer probably involves finding ways to simplify the rules and the interface with recipients, so that eligibility can be more clear-cut. In addition, I suspect that practical problems that cause a mix of over- and underpayments for the EITC will be dwarfed by the practical problems and error rates in deciding about eligibility for subsidies in the health insurance exchanges--even if or when the web interface itself becomes functional.

Monday, November 25, 2013

An Okun's Law Refresher

Okun's law--which is really more of a rule of thumb--holds that for each increase of one percentage point in real GDP, the unemployment rate falls by about 0.3 percentage points. Arthur Okun formulated this rule in a 1962 research paper, called “Potential GNP: Its Measurement and Significance,” which appeared in the Proceedings of the Business and Economics Statistics Section of the American Statistical Association (pp. 98-103). It's available as a Cowles Foundation working paper here. Michael T. Owyang, Tatevik Sekhposyan, and E. Katarina Vermann take a look at the current state of Okun's law as the U.S. economy struggles with sluggish growth and a frustratingly gradual decline in its unemployment rate in "Output and Unemployment: How Do They Relate Today?" which appears in the October 2013 issue of the Regional Economist, published by the Federal Reserve Bank of St. Louis. They argue that Okun's law has actually held up quite well over time.

Consider first the evidence roughly as Okun would have seen it back in the early 1960s. This figure shows the quarterly change in the unemployment rate and the quarterly growth rate of output from 1948-1960. Economic data series like GDP are updated over time, so this isn't quite the same data that Okun used. But it's close. Notice that the main pattern of the points is a downward slope: that is, a negative growth rate of GDP is correlated with a rise in unemployment, while a rise in GDP is correlated with a fall in unemployment.


Now here is the same graph, but including all the quarters from 1948 up to 2013. Time periods are distinguished by the shape of the points: 1948-1960 is blue squares, 1961-2007 is black dots, and 2008-2013 is red triangles. The authors estimate that the Okun's law relationship over this time is that, on average, a 1 percentage point rise in the growth rate of real GDP is associated with a 0.28 percentage point fall in the unemployment rate--almost exactly the same as what Okun found in 1962.

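For the statistically minded, the 0.28 estimate comes from a regression of the quarterly change in the unemployment rate on quarterly real GDP growth. Here's a sketch of that calculation in Python, using a handful of made-up data points rather than the actual 1948-2013 quarterly record:

```python
# Sketch of estimating an Okun's law coefficient by least squares.
# The eight observations below are invented for illustration.
import numpy as np

gdp_growth = np.array([-1.0, 0.5, 1.2, 2.0, 3.1, 4.0, 0.0, -2.5])     # quarterly real GDP growth, %
unemp_change = np.array([0.4, 0.1, -0.2, -0.4, -0.7, -1.0, 0.1, 0.8])  # change in unemployment rate, pct. points

slope, intercept = np.polyfit(gdp_growth, unemp_change, 1)
print(f"Estimated Okun coefficient: {slope:.2f}")  # about -0.27 for these made-up numbers
```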

Friday, November 22, 2013

The Triffin Dilemma and U.S. Trade Deficits

Martin Feldstein interviews Paul Volcker in the most recent issue of the Journal of Economic Perspectives. The interview ranges from the 1970s up to the present, and I commend it to you in full. But one of Volcker's comments in particular sent me scurrying to learn more. At one point in the interview, Volcker says: "I think we’re back, in a way, in the Triffin dilemma." And I thought to myself: "What the heck is that?"

Here's a discussion of the Triffin dilemma from an IMF website describing problems in the international monetary system from 1959 to 1971, before the Bretton Woods system of (mostly) fixed exchange rates cratered. The IMF states:
"If the United States stopped running balance of payments deficits, the international community would lose its largest source of additions to reserves. The resulting shortage of liquidity could pull the world economy into a contractionary spiral, leading to instability. ... If U.S. deficits continued, a steady stream of dollars would continue to fuel world economic growth. However, excessive U.S. deficits (dollar glut) would erode confidence in the value of the U.S. dollar. Without confidence in the dollar, it would no longer be accepted as the world's reserve currency. The fixed exchange rate system could break down, leading to instability."
In short, the U.S. needs to run trade deficits, because the rest of the world wants to hold U.S. dollars as a safe asset, and the way the rest of the world gets U.S. dollars is when the U.S. economy buys imported products. However, if U.S. trade deficits go on for too long or become too large, then the U.S. dollar might stop looking like a safe asset, which in turn could bring its own wave of global financial disruption. Here's how Volcker describes the Triffin dilemma in the interview:
"In the 1960s, we were in a position in the Bretton Woods system with the other countries wanting to run surpluses and build their reserve positions, so the reserve position of the United States inevitably weakened—weakened to the point where we no longer could support the convertibility of currencies to gold. Now, how long can we expect as a country or world to support how many trillions of dollars that the rest of the world has? So far, so good. The rest of the world isn’t in a very good shape, so we look pretty good at the moment. But suppose that situation changes and we’re running big [trade] deficits, and however many trillion it is now, it’s another few trillions. At some point there is vulnerability there, I think, for the system, not just for the United States. We ought to be conscious of that and do something about it."
Triffin laid out his views in a 1960 book called "Gold and the Dollar Crisis," which is not all that easily available. But his 1960 Congressional testimony, available here, offers an overview. However, the Triffin dilemma as it exists today, in a world of flexible exchange rates, is not the same as it was back in 1960. (Of course, this is also why Volcker said that "in a way," we had returned to the Triffin dilemma.) Lorenzo Bini Smaghi, a member of the Executive Board of the European Central Bank, laid out some of the similarities and differences in a 2011 talk, "The Triffin dilemma revisited."

Smaghi notes the arrival of flexible exchange rates, but argues that the biggest change affecting the Triffin dilemma is something else: the arrival of multi-directional and private-sector capital flows in the global financial economy means that there are now lots of ways for other countries to get the U.S. dollars (or euros) that they wish to hold as a safe asset, without the U.S. (or the euro area) necessarily running a trade deficit. Smaghi said:
Today, the United States and the euro area are not obliged to run rising current account deficits to meet the demand for dollars or euros. This is for two main, interlinked reasons. First, well-functioning, more liquid and deeply integrated global financial markets enable reserve-issuing countries to provide the rest of the world with safe and liquid financial liabilities while investing a corresponding amount in a wide range of financial assets abroad. The euro has indeed become an important international currency since its inception and the euro area has been running a balanced current account. In a world where there is no longer a one-to-one link between current accounts, i.e. net capital flows and global liquidity, a proper understanding of global liquidity also needs to include gross capital flows. Second, under BW [Bretton Woods] global liquidity and official liquidity were basically the same thing, but today the “ease of financing” at global level also crucially depends on private liquidity directly provided by financial institutions, for instance through interbank lending or market making in securities markets. Given the endogenous character of such private liquidity, global official and private liquidity have to be assessed together for a proper evaluation of global liquidity conditions at some point in time, and there is no endemic shortage of global liquidity, as the empirical evidence confirms. This is not to deny that temporary shortages can occur, as happened after the bankruptcy of Lehman Brothers in September 2008. But such shortages are a by-product of shocks and boom-bust cycles, not an intrinsic feature of the IMS [international monetary system], and can be tackled with an appropriate global financial safety net.
So has the Triffin dilemma been eliminated? Smaghi argues not. He points out that there is a growing and widespread demand for U.S. dollar holdings around the world, from emerging market economies that want to hold U.S. dollar reserves as a hedge against sudden capital outflows or to keep their own currency undervalued, as well as from oil and other commodity exporters who prefer to hold their accumulated trade surpluses in the form of U.S. dollars. And when economic shocks occur, demand for these safe U.S. dollar assets can rise and fall in ways that threaten economic stability.
Triffin's policy solution, back in the day, was the creation of a global central bank that would issue a new "reserve unit" which central banks could hold as a safe asset, but which would not be linked to gold or currencies. While Volcker doesn't endorse that approach, he also says that the way out lies in international monetary reform. Smaghi discusses the possibility that the Triffin dilemma might be much reduced in a multi-polar international monetary system, where the U.S. dollar remains quite prominent, but those seeking safe assets can turn to a variety of other currencies, too.

This discussion may seem abstruse, so let me pull it back to some headline economic statistics. There was once a time, not so very long ago, when many people used to worry about what would happen if the U.S. economy ran sustained and large trade deficits. Through most of the 1960s and 1970s, the U.S. was fairly close to balanced trade. The first big plunge into trade deficits happened in the mid-1980s, and the trade deficits since the late 1990s have been just as big, or bigger, than those of the 1980s. Here's a figure showing U.S. trade deficits as a share of GDP from the time Triffin enunciated his dilemma up to the present.


I suspect that the U.S. trade deficits in recent decades would have seemed almost impossibly large to Triffin and others back in 1960. But in recent decades, the rest of the world has wanted to hold trillions of U.S. dollars as a safe asset, and so the U.S. economy could import to its heart's content, sending U.S. dollars abroad and running enormous trade deficits. But at some point, the accumulation of U.S. trade deficits becomes so sustained and large that it will lead to economic disruption. As Paul Volcker says: "We ought to be conscious of that and do something about it."


(Thanks to Iva Djurovic for creating the trade deficit figure.)


Thursday, November 21, 2013

The State of US Health

It's fairly well-known that life expectancy for Americans is below that of other high-income countries. But did you know that the gap is getting worse? Or how the underlying causes of death in the U.S., together with proximate factors behind those causes, have been evolving? The Institute for Health Metrics and Evaluation at the University of Washington takes up these issues and more in "The State of US Health: Innovations, Insights, and Recommendations from the Global Burden of Disease Study."

Let's start with some international comparisons. Life expectancy in the US is rising--but it is rising more slowly than life expectancy in other OECD countries.

Or consider the average age at death. In the U.S., the average age at death rose by nine years from 1970 to 2010, but it has risen faster in most other places. In this figure, the vertical axis shows age at death in 1970, so if you look at countries from top to bottom, you see how they were ranked by life expectancy in 1970. The horizontal axis shows age at death in 2010, so if you look at countries from right to left, you see how they are ranked by life expectancy in 2010. Countries at about the same horizontal level as the US--like New Zealand, Canada, Australia, Spain, Italy, and Japan--had similar life expectancy to the U.S. in 1970, but are now out to the right of the US, with higher life expectancies in 2010.

What are the causes of early death? This study ranks different causes of death according to "years of life lost"--that is, a cause of death that affects people at younger ages is counted more heavily than one that affects people at older ages. Here's the comparison from 1990 to 2010. Notice that the top six causes of years of life lost change their order a bit, but are otherwise unchanged: heart disease, lung cancer, stroke, chronic obstructive pulmonary disease (COPD), road injury, and self-harm. But after that, there are some dramatic changes. For example, the years of life lost to HIV/AIDS, interpersonal violence, and pre-term birth complications are now ranked lower, while cirrhosis, diabetes, Alzheimer's disease, and drug use disorders now rank much higher.

The study also looks at what is called a "disability-adjusted life year" or a DALY. This adds together years of life lost and also makes an adjustment for years lived with a disability. Looking at all the causes of death, the study calculates what percentage of the DALYs are attributable to various "risk factors." Clearly, the big risk factors are dietary risks (which largely seems to mean not eating enough fruits, nuts and seeds, vegetables, and whole grains, while eating too much salt and processed meat), together with tobacco use, obesity, high blood pressure, and low physical activity. Clearly, many of these issues of diet, weight, and exercise interact.
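For readers who want the DALY bookkeeping spelled out: DALYs are years of life lost to early death (YLL) plus years lived with disability (YLD), with the disability years weighted by severity. A sketch with invented numbers:

```python
# Sketch of the DALY arithmetic: DALYs = YLL + YLD.
# All numbers below are invented for illustration.
def dalys(deaths, years_lost_per_death, cases, disability_weight, years_with_condition):
    yll = deaths * years_lost_per_death  # years of life lost to early death
    yld = cases * disability_weight * years_with_condition  # disability years, severity-weighted
    return yll + yld

# A hypothetical condition: 1,000 deaths losing 20 years each, plus
# 50,000 cases lived for 10 years at a disability weight of 0.2.
print(dalys(1_000, 20, 50_000, 0.2, 10))  # 120,000.0 DALYs
```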


The report also offers some county-level analysis across the US, with a reminder that "how health is experienced in the US varies greatly by locale. People who live in San Francisco or Fairfax County, Virginia, or Gunnison, Colorado, are enjoying some of the best life expectancies in the world. In some US counties, however, life expectancies are on par with countries in North Africa and Southeast Asia."

As the US political system convulses over the legal standards that will govern health insurance, the mechanics of paying for health care are obviously of importance. But at the broad social level, life expectancy and health are first and foremost about diet, exercise, not smoking, not drinking to excess, and other behaviors.

Wednesday, November 20, 2013

A Tripartite Mandate for Central Banks?

The U.S. Federal Reserve operates under what is commonly called a "dual mandate," which basically means that it should take both inflation and unemployment into account when making its decisions. The dual mandate means that the Fed does not take into account the risk that asset market bubbles are destabilizing the economy. Should the dual mandate be turned into a tripartite mandate, with the risk of a financial crisis as a third factor that the Fed (and other central banks) should also take into account? In the most recent issue of the Journal of Economic Perspectives, Ricardo Reis discusses "Central Bank Design," and in particular what the economics literature has to say about a range of issues related to how central banks should choose their goals, their decision-makers, their policy tools, their communication methods, and more. Reis explains why economists have typically not supported a tripartite mandate in the past, and also why he thinks that this may change in the not-too-distant future.

Why have central banks typically not focused on asset market bubbles and financial instability in the past? Here's how Reis explains it (as usual, citations and footnotes are omitted):
"A more contentious debate is whether to have a tripartite mandate that also includes financial stability. After all, the two largest US recessions in the last century—the Great Depression of the 1930s and the Great Recession that started in 2007—were associated with financial crises. ... [I]f financial stability is to be included as a separate goal for the central bank, it must pass certain tests: 1) there must be a measurable definition of financial stability, 2) there has to be a convincing case that monetary policy can achieve the target of bringing about a more stable financial system, and 3) financial stability must pose a trade-off with the other two goals, creating situations where
prices and activity are stable but financial instability justifies a change in policy ... Older approaches to this question did not fulfill these three criteria, and thus did not justify treating financial stability as a separate criterion for monetary policy." 
As a recent example, think back to the dot-com bubble of the late 1990s. Sure, the prices of tech stocks seemed to be soaring implausibly high. But was it really the role of the Federal Reserve to decide that stock prices were "too high," and to change monetary policy, perhaps bringing on a recession, in an effort to bring stock prices down? As Reis notes: "Yet, at most dates, there seems to be someone crying “bubble” at one financial market or another, and the central bank does not seem particularly well equipped to either spot the fires in specific asset markets, nor to steer equity prices."

When the dot-com boom was followed by a short and shallow recession in 2001, the Federal Reserve did what it could to cushion the economy with lower interest rates at that time. I'm probably not alone in being someone who watched the housing price boom around 2006 and thought: "Sure, there might be a recession eventually, like in 2001, but it works OK to have the Federal Reserve react after the fact when unemployment goes up. The Fed isn't well-equipped to judge when housing prices or stock prices are 'too high' or 'too low,' nor to adjust monetary policy to alter such prices."

But as Reis points out, more sophisticated ideas of how to define the rising risk of a financial crisis have come into prominence in the last few years. Instead of focusing on whether stock prices or house prices are rising, or seem "too high," these approaches look at factors like whether total borrowing is rising. Reis explains:
 "A more promising modern approach begins with thinking about how to define financial stability: for example, in terms of the build-up of leverage, or the spread between certain key borrowing and lending rates, or the fragility of the funding of financial intermediaries. This literature has also started gathering evidence that when the central bank changes interest rates, reserves, or the assets it buys, it can have a significant effect on the composition of the balance sheets of financial intermediaries as well as on the risks that they choose to take. ... While it is not quite there yet, this modern approach to financial stability promises to be able to deliver a concrete recommendation for a third mandate for monetary policy that can be quantified and implemented." 

A final point here is that implementing a tripartite mandate may also mean describing the policy tools of a central bank in a different way. Up to 2007, it was reasonable to describe the policy tools of a central bank mainly in terms of its ability to raise or lower interest rates. But when the central bank starts looking at the total amount of bank credit being extended, or at stress-testing whether financial institutions are well-positioned to be resilient in the face of a shock, these sorts of goals can also be accomplished by so-called "macroprudential regulation," which involves adjusting credit conditions by adjusting the rules and standards to be applied by financial regulators.

Tuesday, November 19, 2013

Is Altruism a Scarce Resource that Needs Conserving?

The Harvard political philosopher Michael Sandel offers a thought-provoking essay, "Market Reasoning as Moral Reasoning: Why Economists Should Re-engage with Political Philosophy," in the just-released Fall 2013 issue of the Journal of Economic Perspectives. Like all JEP articles, it is freely available on-line courtesy of the American Economic Association. (Full disclosure: I've been the Managing Editor of the JEP since its first issue in 1987.) The article makes a number of arguments about the extent to which "putting a price on every human activity erodes certain moral and civic goods worth caring about." Here, I'll focus on one argument presented late in the paper, which is the claim that it is a good thing to let markets based on self-interest function in many areas, because it conserves on scarce resources of altruism.

Sandel makes a persuasive case that a number of economists hold this view, although it is not always stated openly. Here are a few examples. The eminent British economist Sir Dennis Robertson gave a prominent 1954 lecture on the topic "What does the economist economize?" Here is Sandel's discussion:
Robertson (1954) claimed that by promoting policies that rely, whenever possible, on self-interest rather than altruism or moral considerations, the economist saves society from squandering its scarce supply of virtue. “If we economists do [our] business well,” Robertson (p. 154) concluded, “we can, I believe, contribute mightily to the economizing . . . of that scarce resource Love,” the “most precious thing in the world."
Kenneth Arrow made a similar argument in a 1972 essay, Sandel notes:
“Like many economists,” Arrow (1972, pp. 354–55) writes, “I do not want to rely too heavily on substituting ethics for self-interest. I think it best on the whole that the requirement of ethical behavior be confined to those circumstances where the price system breaks down . . . We do not wish to use up recklessly the scarce resources of altruistic motivation.”
Or for another example, here's Sandel describing a speech that Larry Summers gave at Harvard's Memorial Church in 2003:
Summers (2003) concluded with a reply to those who criticize markets for relying on selfishness and greed: “We all have only so much altruism in us. Economists like me think of altruism as a valuable and rare good that needs conserving. Far better to conserve it by designing a system in which people’s wants will be satisfied by individuals being selfish, and saving that altruism for our families, our friends, and the many social problems in this world that markets cannot solve."
As Sandel notes with some asperity, this notion of altruism as a scarce resource, "like the supply of fossil fuels," is highly contestable. Might it not be possible instead that when people act in a way that displays altruism, generosity, or civic virtue, the social supply of these virtues tends to expand? It seems plausible to think that altruism, generosity, and even love may be socially created, not just used up.

At one level, Sandel's critique seems to me fair and well-made. There is actually a reasonable-sized literature in economics and other social sciences looking at how the level of trust and generosity varies across societies. It seems incorrect to think of altruism as a fixed quantity, unaffected by other social institutions.

But at some other level, I feel moved to defend my economist brethren a bit. Focusing on whether altruism is a fixed quantity can be a debater's point that turns on the specific phrasing of an argument, rather than the underlying issue at stake. After all, none of these economists are arguing that altruism, generosity, and social virtue are bad ideas or that we should have less of them. Instead, they are arguing that in the real world there exists a division of labor, if you will, in which real-world people choose between altruism and self-interest in different settings. They are arguing that in practical terms, it seems unlikely that social norms of private altruism and generosity would by themselves be able to achieve important social goals like supplying food, housing, health care, education, and the necessities of life, as well as helping the poor or protecting the environment. In such cases, it will be important to consider the interactions of self-interest with the compulsion of law.

Sandel focuses on the arguments for how market forces might impinge on civic virtues worth preserving, which is plenty for one essay. But there is also potential for conflict between civic virtues and any institution of society, at least in certain settings. For example, governments around the world can also easily impinge on civic virtues worth preserving. I do think the issues concerning potential conflicts between market forces and moral virtues are real ones. In the style of eminent philosophers, Sandel poses his argument not as a set of strong claims, but rather as a set of questions for consideration, and I commend his article to your consideration in a similar spirit.


Monday, November 18, 2013

Journal of Economic Perspectives, Fall 2013, Now On-line


The Fall 2013 issue of the Journal of Economic Perspectives is now available on-line. All articles from this issue back to the first issue in Summer 1987 are freely available on-line, courtesy of the American Economic Association. I'll post on some of the articles from this issue in the next week or two. But for now, I just want to let people know what's in the issue.  (Full disclosure: I've been the Managing Editor of JEP since the first issue in 1987, so I am predisposed to believe that everything in the issue is well worth reading.) This issue starts with a six-paper symposium on "The First 100 Years of the Federal Reserve," followed by a two-paper exchange on "Economics and Moral Virtues," and then by several individual articles.

Symposium: The First 100 Years of the Federal Reserve

"A Century of US Central Banking: Goals, Frameworks, Accountability," by Ben S. Bernanke

Several key episodes in the 100-year history of the Federal Reserve have been referred to in various contexts with the adjective "Great" attached to them: the Great Experiment of the Federal Reserve's founding, the Great Depression, the Great Inflation and subsequent disinflation, the Great Moderation, and the recent Great Recession. Here, I'll use this sequence of "Great" episodes to discuss the evolution over the past 100 years of three key aspects of Federal Reserve policymaking: the goals of policy, the policy framework, and accountability and communication. The changes over time in these three areas provide a useful perspective, I believe, on how the role and functioning of the Federal Reserve have changed since its founding in 1913, as well as some lessons for the present and for the future.
Full-Text Access | Supplementary Materials

"Central Bank Design," by Ricardo Reis

Starting with a blank slate, how could one design the institutions of a central bank for the United States? This paper explores the question of how to design a central bank, drawing on the relevant economic literature and historical experiences while staying free from concerns about how the Fed got to be what it is today or the short-term political constraints it has faced at various times. The goal is to provide an opinionated overview that puts forward the trade-offs associated with different choices and identifies areas where there are clear messages about optimal central bank design.
Full-Text Access | Supplementary Materials

"The Federal Reserve and Panic Prevention: The Roles of Financial Regulation and Lender of Last Resort," by Gary Gorton and Andrew Metrick

This paper surveys the role of the Federal Reserve within the financial regulatory system, with particular attention to the interaction of the Fed's role as both a supervisor and a lender-of-last-resort. The institutional design of the Federal Reserve System was aimed at preventing banking panics, primarily due to the permanent presence of the discount window. This new system was successful at preventing a panic in the early 1920s, after which the Fed began to discourage the use of the discount window and intentionally create "stigma" for window borrowing -- policies that contributed to the panics of the Great Depression. The legislation of the New Deal era centralized Fed power in the Board of Governors, and over the next 75 years the Fed expanded its role as a supervisor of the largest banks. Nevertheless, prior to the recent crisis the Fed had large gaps in its authority as a supervisor and as lender of last resort, with the latter role weakened further by stigma. The Fed was unable to prevent the recent crisis, during which its lender of last resort function expanded significantly. As the Fed begins its second century, there are still great challenges to fulfilling its original intention of panic prevention.
Full-Text Access | Supplementary Materials

"Shifts in US Federal Reserve Goals and Tactics for Monetary Policy: A Role for Penitence?" by Julio J. Rotemberg

This paper considers some of the large changes in the Federal Reserve's approach to monetary policy. It shows that, in some important cases, critics who were successful in arguing that past Fed approaches were responsible for mistakes that caused harm succeeded in making the Fed averse to these approaches. This can explain why the Fed stopped basing monetary policy on the quality of new bank loans, why it stopped being willing to cause recessions to deal with inflation, and why it was temporarily unwilling to maintain stable interest rates in the period 1979-1982. It can also contribute to explaining why monetary policy was tight during the Great Depression. The paper shows that the evolution of policy was much more gradual and flexible after the Volcker disinflation, when the Fed was not generally deemed to have made an error.
Full-Text Access | Supplementary Materials

"Does the Federal Reserve Care about the Rest of the World?" by Barry Eichengreen

Many economists are accustomed to thinking about Federal Reserve policy in terms of the institution's "dual mandate," which refers to price stability and high employment, and in which the exchange rate and other international variables matter only insofar as they influence inflation and the output gap -- which is to say, not very much. In fact, this conventional view is heavily shaped by the distinctive and peculiar circumstances of the last three decades, when the influence of international considerations on Fed policy has been limited. In fact, the Federal Reserve paid significant attention to international considerations in its first two decades, followed by relative inattention to such factors in the two-plus decades that followed, then back to renewed attention to international aspects of monetary policy in the 1960s, before the recent period of benign neglect of the international dimension. I argue that in the next few decades, international aspects are likely to play a larger role in Federal Reserve policy making than at present.
Full-Text Access | Supplementary Materials

"An Interview with Paul Volcker," by Martin Feldstein

Martin Feldstein interviewed Paul Volcker in Cambridge, Massachusetts, on July 10, 2013, as part of a conference at the National Bureau of Economic Research on "The First 100 Years of the Federal Reserve: The Policy Record, Lessons Learned, and Prospects for the Future." Volcker was Chairman of the Board of Governors of the Federal Reserve System from 1979 through 1987. Before that, he served stints as President of the Federal Reserve Bank of New York from 1975 to 1979, as Deputy Undersecretary for International Affairs in the US Department of the Treasury from 1969 to 1974, as Deputy Undersecretary for Monetary Affairs in the Treasury from 1963 to 1965, and as an economist at the Federal Reserve Bank of New York from 1952 to 1957. He has led and served on a wide array of commissions, including chairing the President's Economic Recovery Advisory Board from its inception in 2009 through 2011.
Full-Text Access | Supplementary Materials

Symposium: Economics and Moral Virtues

"Market Reasoning as Moral Reasoning: Why Economists Should Re-engage with Political Philosophy," by Michael J. Sandel

In my book What Money Can't Buy: The Moral Limits of Markets (2012), I try to show that market values and market reasoning increasingly reach into spheres of life previously governed by nonmarket norms. I argue that this tendency is troubling; putting a price on every human activity erodes certain moral and civic goods worth caring about. We therefore need a public debate about where markets serve the public good and where they don't belong. In this article, I would like to develop a related theme: When it comes to deciding whether this or that good should be allocated by the market or by nonmarket principles, economics is a poor guide. Deciding which social practices should be governed by market mechanisms requires a form of economic reasoning that is bound up with moral reasoning. But mainstream economic thinking currently asserts its independence from the contested terrain of moral and political philosophy. If economics is to help us decide where markets serve the public good and where they don't belong, it should relinquish the claim to be a value-neutral science and reconnect with its origins in moral and political philosophy.
Full-Text Access | Supplementary Materials

"Reclaiming Virtue Ethics for Economics," by Luigino Bruni and Robert Sugden

Virtue ethics is an important strand of moral philosophy, and a significant body of philosophical work in virtue ethics is associated with a radical critique of the market economy and of economics. Expressed crudely, the charge sheet is this: The market depends on instrumental rationality and extrinsic motivation; market interactions therefore fail to respect the internal value of human practices and the intrinsic motivations of human actors; by using market exchange as its central model, economics normalizes extrinsic motivation, not only in markets but also in social life more generally; therefore economics is complicit in an assault on virtue and on human flourishing. We will argue that this critique is flawed, both as a description of how markets actually work and as a representation of how classical and neoclassical economists have understood the market. We show how the market and economics can be defended against the critique from virtue ethics, and crucially, this defense is constructed using the language and logic of virtue ethics. Using the methods of virtue ethics and with reference to the writings of some major economists, we propose an understanding of the purpose (telos) of markets as cooperation for mutual benefit, and identify traits that thereby count as virtues for market participants. We conclude that the market need not be seen as a virtue-free zone.
Full-Text Access | Supplementary Materials

Articles

"Gifts of Mars: Warfare and Europe's Early Rise to Riches," Nico Voigtländer and Hans-Joachim Voth

Western Europe surged ahead of the rest of the world long before technological growth became rapid. Europe in 1500 already had incomes twice as high on a per capita basis as Africa, and one-third greater than most of Asia. In this essay, we explain how Europe's tumultuous politics and deadly penchant for warfare translated into a sustained advantage in per capita incomes. We argue that Europe's rise to riches was driven by the nature of its politics after 1350 -- it was a highly fragmented continent characterized by constant warfare and major religious strife. No other continent in recorded history fought so frequently, for such long periods, killing such a high proportion of its population. When it comes to destroying human life, the atomic bomb and machine guns may be highly efficient, but nothing rivaled the impact of early modern Europe's armies spreading hunger and disease. War therefore helped Europe's precocious rise to riches because the survivors had more land per head available for cultivation. Our interpretation involves a feedback loop from higher incomes to more war and higher land-labor ratios, a loop set in motion by the Black Death in the middle of the 14th century.
Full-Text Access | Supplementary Materials

"The Economics of Slums in the Developing World," by Benjamin Marx, Thomas Stoker and Tavneet Suri

The global expansion of urban slums poses questions for economic research as well as problems for policymakers. We provide evidence that the type of poverty observed in contemporary slums of the developing world is characteristic of that described in the literature on poverty traps. We document how human capital threshold effects, investment inertia, and a "policy trap" may prevent slum dwellers from seizing economic opportunities offered by geographic proximity to the city. We test the assumptions of another theory -- that slums are just a transitory phenomenon characteristic of fast-growing economies -- by examining the relationship between economic growth, urban growth, and slum growth in the developing world, and whether standards of living of slum dwellers are improving over time, both within slums and across generations. Finally, we discuss why standard policy approaches have often failed to mitigate the expansion of slums in the developing world. Our aim is to inform public debate on the essential issues posed by slums in the developing world.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

The Safety of Bioengineered Crops

When I first had to write about genetically modified crops, there was no evidence upon which to draw. It was 1986, and I was working as an editorial writer for the San Jose Mercury News. The first genetically modified plant, a variety of tobacco, had been created in a lab in 1983. A local company called Advanced Genetic Sciences was proposing the first field test of a genetically modified organism. Their plan was to spray some strawberries and potatoes with a bacterium often known as "ice-minus," because it helped prevent frost damage to crops.

The step was quite controversial. For opponents, no buffer zone and no set of precautions could justify spraying these bacteria: indeed, the night before the trial was to take place, protesters broke into the fields and pulled up many of the plants. But I wrote some editorials supporting the trial, in large part because the ice-minus bacteria were already fairly widespread in nature. It was perfectly legal to culture the natural ice-minus bacteria and spray them; this trial just involved spraying genetically identical ice-minus bacteria that had been created in a laboratory. The trial worked fine: that is, the crops were more frost-resistant, and there were no observable negative effects on the environment.

In 1994, the first genetically modified plant to be sold commercially, the Flavr Savr™ tomato, hit the market. But the introduction of genetically engineered crops in the field is usually dated to 1996, when field crops that were genetically engineered to be pest-resistant and herbicide-tolerant were introduced. Now, after the technology has been in widespread use for 17 years, the studies are piling up. In a forthcoming article posted online at Critical Reviews in Biotechnology, Alessandro Nicolia, Alberto Manzo, Fabio Veronesi, and Daniele Rosellini provide "An overview of the last 10 years of genetically engineered crop safety research." They built a comprehensive database of the research literature from 2002 through 2012, consisting of 1,783 articles on different aspects of these first-generation genetically engineered crops. Here is the bottom line of their survey, from the abstract:
"The technology to produce genetically engineered (GE) plants is celebrating its 30th anniversary and one of the major achievements has been the development of GE crops. The safety of GE crops is crucial for their adoption and has been the object of intense research work often ignored in the public debate. We have reviewed the scientific literature on GE crop safety during the last 10 years, built a classified and manageable list of scientific papers, and analyzed the distribution and composition of the published literature. We selected original research papers, reviews, relevant opinions and reports addressing all the major issues that emergedin the debate on GE crops, trying to catch the scientific consensus that has matured since GE plants became widely cultivated worldwide. The scientific research conducted so far has not detected any significant hazards directly connected with the use of GE crops; however, the debate is still intense." 
More specifically, they look at how genetically engineered crops have interacted with biodiversity; at risks of "gene flow" to other crops, wild plants, or through the soil; at health effects for animals that eat feed from genetically engineered crops; and potential health effects for human consumers. As they discuss, some of the evidence raises warning signs that are worth further investigation, and in some cases certain genetically engineered crops are no longer grown, or not grown in certain areas, because of these concerns. But as they write in the conclusion: "We have reviewed the scientific literature on GE [genetically engineered] crop safety for the last 10 years that catches the scientific consensus matured since GE plants became widely cultivated worldwide, and we can conclude that the scientific research conducted so far has not detected any significant hazard directly connected with the use of GM [genetically modified] crops." In short, if you like to believe that you follow the scientific evidence, you should believe that the first generation of pest-resistant and herbicide-tolerant genetically engineered crops has been a great success, substantially increasing crop yields and reducing the need for heavy chemical use.

I support all sorts of rules and regulations and follow-up studies to make sure that genetically engineered crops continue to be safe for the environment and for consumers. After all, the first-generation genetically engineered field crops were all about pest resistance and herbicide tolerance, and as new types of genetic engineering are proposed, they should be scrutinized. But for me, the purpose of these regulations is to create a clear pathway so that the technology can be more widely used in a safe way, not to create a set of paperwork hurdles to block the future use of the technology.

Farmers have been breeding plants and animals for desired characteristics for centuries. Genetic engineering holds the possibility of speeding up that process of agricultural innovation, so that agriculture can better meet a variety of human needs. Most obviously, genetically engineered crops are likely to be important as world population expands and world incomes continue to rise (so that meat consumption rises as well). In addition, remember that plants serve functions other than calorie consumption. Plants that were more effective at fixing carbon in place might be a useful tool in limiting the rise of carbon in the atmosphere. Genetically modified plants are one of the possible paths to making plant-based ethanol economically viable. Plants that can thrive with less water or fewer chemicals can be hugely helpful to the environment, and to the health of farmworkers around the world. The opportunity cost of slowing the progress of agricultural biotechnology is potentially very high.


Friday, November 15, 2013

Unavoidable Realities of Insurance: Health Care Edition

Insurance markets are unavoidably unpopular, because of a basic fact and an unpalatable implication.

Here's the basic fact about all insurance markets: What gets paid out must equal what gets paid in. Or to put it another way, what is paid in by the average person in premiums must be pretty much equal (with some minor differences noted below) to what is received by the average person in benefits.

And here's the unpalatable implication: Some people will receive much more from insurance payments than they pay in. They might buy life insurance and die young, or buy car insurance and suffer a severe accident, or buy health insurance and face a costly period of hospitalization. But in turn, because of the basic fact that the average person can't receive more in benefits than the average person pays in premiums, there will inevitably be many people--probably a majority of insurance buyers--who will receive much less in insurance benefits than they pay in.
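To make the arithmetic concrete, here is a toy simulation with made-up numbers: 10,000 policyholders, each facing a 2% chance of a $50,000 loss in a given year, and each paying the actuarially fair premium of $1,000. None of these figures come from an actual insurance market; they are chosen only to illustrate the basic fact and its unpalatable implication.

```python
import random

random.seed(0)

# Purely illustrative numbers: 10,000 policyholders, each with a
# 2% chance of suffering a $50,000 loss in a given year.
N, P_LOSS, LOSS = 10_000, 0.02, 50_000

# The actuarially fair premium is the expected payout per person,
# so total premiums collected roughly equal total benefits paid out.
premium = P_LOSS * LOSS  # $1,000

losses = [LOSS if random.random() < P_LOSS else 0 for _ in range(N)]

total_premiums = premium * N
total_benefits = sum(losses)
came_out_ahead = sum(1 for loss in losses if loss > premium)

print(f"Premiums collected: ${total_premiums:,.0f}")
print(f"Benefits paid out:  ${total_benefits:,.0f}")
print(f"Received more than they paid: {came_out_ahead:,} policyholders")
print(f"Paid more than they received: {N - came_out_ahead:,} policyholders")
```

In a typical run, roughly 200 of the 10,000 policyholders collect $50,000 against a $1,000 premium, while the other 9,800 or so pay in and collect nothing--exactly the pattern described above.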

For example, I'm a direct purchaser of insurance for my home, car, and (term) life, as well as an indirect purchaser of health and dental insurance through my employer. The best possible outcome for me is that I would pay premiums year after year, all my adult life, and never receive more than minimal insurance benefits. After all, receiving significant payments from an insurance company would mean that my family had experienced damage to our home, or a car accident, or sickness, injury, or death.

Any market where the good outcome experienced by a majority of buyers is to make payments all their lives in exchange for little or no benefit is going to be continually unpopular.

A lot of energy goes into trying to ignore or deny either the basic fact about insurance markets or the unpalatable implication. The expansions of what health insurance policies must cover that are required by the Patient Protection and Affordable Care Act offer a vivid example. The rules expanding what a typical health insurance plan must cover mean that the average person will need to pay higher premiums, because the benefits being paid out of the health insurance system need to equal what is paid in. Moreover, some people are going to draw on these additional coverage provisions much more than others, so many of those who are unlikely to draw upon such policies will find an even bigger gap between what they pay in insurance premiums and the insurance benefits they personally receive. Indeed, many of these all-pay, little-benefit households--and many people have a pretty good idea whether they fall into this category--are being required under the new law to pay for others to receive a level of health insurance coverage that they had not previously chosen to receive for themselves.

There will always be a political dynamic to promise that the majority should receive more from their insurance, but that no one should need to pay more in premiums. For example, the original Medicare legislation back in 1965 required that premiums paid by the elderly should cover 50% of the costs of Part B. But Medicare spending went up, premiums didn't, and in 1997 a law needed to be passed to ensure that premiums paid by the elderly would cover 25% of the costs of Part B.

Another way to quarrel with the basic fact about insurance is to point out that private insurance companies spend money on administrative costs and on profits. In addition, insurance companies earn revenue from investing reserves. One can argue that insurance companies could be more efficient, or their profits could be more regulated, and that in these ways benefits might be increased somewhat without paying more in premiums. I'm all for encouraging greater efficiency, and I think the government has been slow in pushing the private sector to coordinate on formats for electronic medical records and billing. But the National Association of Insurance Commissioners reports that in 2012, health insurance companies paid out 85.7% of the premiums they received to health care providers, while 11.8% went to administrative expenses and the other 2.7% was profit margin. In other words, the overwhelming majority of the money paid into health insurance in premiums is indeed paid out to health care providers. The existence of private-sector insurance companies may bend the basic fact that what is paid out in insurance benefits must equal what is paid in, but it does not sever the connection.
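As a back-of-the-envelope illustration--my own arithmetic, simply applying those reported 2012 shares to a hypothetical $1,000 annual premium--the split looks like this:

```python
# Apply the 2012 NAIC shares quoted above to a hypothetical
# $1,000 annual health insurance premium.
premium = 1_000
shares = {
    "paid out to health care providers": 0.857,
    "administrative expenses": 0.118,
    "profit margin": 0.027,
}
for item, share in shares.items():
    print(f"{item}: ${premium * share:,.2f}")  # $857.00, $118.00, $27.00
```

(The reported shares sum to slightly more than 100%; presumably the small gap reflects the investment income mentioned above.)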

Just to be clear, neither the fundamental fact about insurance nor the unpalatable implication goes away with a government-run or a single-payer insurance system. Whether it's Medicare or Medicaid or private-sector health insurance or government-run exchanges, it still holds true that benefits must be paid for. It's tempting, of course, to assert that the U.S. government could run a nonprofit health care system in an efficient and cost-effective manner that reduced administrative costs, but even if this assertion is true, as noted a moment ago the potential savings from greater administrative efficiency are a modest share of total health care spending. Also, given events in recent weeks, the assertion that the U.S. government could run an efficient and cost-effective health insurance system must be open to considerable dispute. There's a reason why, in a country as large and diverse as the United States, insurance regulation has typically been done at the state level rather than the federal level. Even with a government-run or single-payer system, it still holds true that because some people will experience very high costs, the typical person will pay into the health insurance system more than they ever get out--and should be perversely grateful to be lucky in this way.

I have for years favored having the government spend more to subsidize health insurance for those 50 million or so Americans who have lacked it. My preference has been to fund these subsidies by placing limits on the tax exemption given for employee compensation paid in the form of employer-paid health insurance. (For some estimates from the Congressional Budget Office of how such limits could raise $46-$101 billion per year in extra tax revenue when phased in 10 years from now, see Option 15 in Chapter Five of this recent CBO report.)

However, the Patient Protection and Affordable Care Act goes well beyond providing assistance to those without insurance. It has been promoted with a set of promises that everyone in both the individual and employer-provided insurance markets can have the same or more insurance coverage while everyone pays the same or lower private-sector insurance premiums--and while future increases in government health care spending are lower than they otherwise would have been. In seeking to carry out these promises in defiance of the basic economics of insurance markets, the law will necessarily disrupt the health care arrangements for a sizable share of the 200 million or so Americans who already have private health insurance.