Friday, November 30, 2012

Global Manufacturing: A McKinsey View

The McKinsey Global Institute has published an intriguing report, "Manufacturing the future: The next era of global growth and innovation." Here's a passage that to me captures much of the overall message, along with many of the controversies about manufacturing.

 "The role of manufacturing in the economy changes over time. Empirical evidence shows that as economies become wealthier and reach middle-income status, manufacturing’s share of GDP peaks (at about 20 to 35 percent of GDP). Beyond that point, consumption shifts toward services, hiring in services outpaces job creation in manufacturing, and manufacturing’s share of GDP begins to fall along an inverted U curve. Employment follows a similar pattern: manufacturing’s share of US employment declined from 25 percent in 1950 to 9 percent in 2008. In Germany, manufacturing jobs fell from 35 percent of employment in 1970 to 18 percent in 2008, and South Korean manufacturing went from 28 percent of employment in 1989 to 17 percent in 2008.

"As economies mature, manufacturing becomes more important for other attributes, such as its ability to drive productivity growth, innovation, and trade. Manufacturing also plays a critical role in tackling societal challenges, such as reducing energy and resource consumption and limiting greenhouse gas emissions. ...Manufacturing continues to make outsize contributions to research and development, accounting for up to 90 percent of private R&D spending in major manufacturing nations. The sector contributes twice as much to productivity growth as its employment share, and it typically accounts for the largest share of an economy’s foreign trade; across major advanced and developing economies, manufacturing generates 70 percent of exports."
In short, the manufacturing share of GDP and employment tends to follow an inverted-U shape, and the United States and other high-income countries are on the downward side of that inverted U, while China, India, and others are on the upward side of their own inverted U. But as we watch the relative decline in U.S. manufacturing, there is a natural concern that in the process of this economic adaptation, we might be losing an important ingredient in the broader mix of the economy, one that holds difficult-to-replace importance for productivity, innovation, and even for certain types of employment that emphasize a range of not-especially-academic skills.

Here are some figures with an international perspective on how manufacturing becomes less important as a share of the economy as the economy grows. The first figure looks at the value-added in manufacturing as a share of GDP. It's been falling for decades in high-income countries, falling more slowly in middle-income countries, and rising in low-income countries. The second figure shows the common overall pattern, without plotting points from individual countries, of the inverted-U shape for manufacturing as a share of GDP. The third figure shows the inverted-U shape for manufacturing employment as a share of total employment as per capita income rises in a sample of eight countries. Mexico and India are still on the upward part of the inverted-U trajectory, and while no two countries are exactly alike, what's happening in the U.S. is qualitatively quite similar to what is happening in the United Kingdom, Germany, and others.


When confronted with this sort of tectonic shift in the economy, one natural reaction is a nostalgic desire to hold on to the way things used to be, but this reaction (almost by definition) is unlikely to help an economy move toward a higher future standard of living. How can we think about this shift away from manufacturing in a more nuanced way?

 The McKinsey analysis builds on the idea that manufacturing isn't monolithic. The study divides manufacturing into five categories with different traits. Here's a figure from the report, where the gray circles next to each category report the share of total global manufacturing in that category. 


The report emphasizes that the top category--chemicals, automotive, machinery and equipment--is moderately to heavily reliant on research and development and on innovation. Regional processing industries like food processing tend to be highly automated and located near raw materials and demand, but not especially heavy on R&D. The third group of energy- and resource-intensive commodities is often linked closely to what's happening with energy or commodity prices. Global technology industries such as computers and electronics are heavy on R&D, and tend to involve products that are small in size but quite valuable (think mobile phones), which makes them important in global trade. The final category of labor-intensive tradables, like apparel, is highly tradable and tends to be produced where labor costs are low. These kinds of insights, spelled out in more detail in the report, suggest that an economy losing, say, output or jobs in apparel or in assembly of consumer electronics raises different concerns than losing output or jobs in chemicals or in computers.

The very fact that responses will differ across categories of manufacturing makes it hard to give extremely concrete advice, and the McKinsey report is full of terms like "granularity" and "flexibility." To me, these kinds of terms describe a world in which firms need to recognize that on the demand side, the world economy and national economies are actually composed of many different and smaller markets, which have somewhat different desires for products and even for what features should be prominent on a given product (like a mobile phone or a fast food meal). And on the production side, decisions are no longer about choosing a location to build the largest factory with the cheapest labor, but instead about how to draw upon a geographically separate network of resources, including not only direct production supply chains but also R&D, management, financial, marketing, and other resources as needed.


The McKinsey report makes the point this way: "The way footprint decisions have been made in the past, especially the herd-like reflex to chase low-cost labor, needs to be replaced with more nuanced, multifactor analyses. Companies must look beyond the simple math of labor-cost arbitrage to consider total factor performance across the full range of factor inputs and other forces that determine what it costs to build and sell products—including labor, transportation, leadership talent, materials and components, energy, capital, regulation, and trade policy."

The role of "manufacturing" in these kinds of far-flung networks becomes a little blurry. On one side, there is a rich array of new technologies and innovations that are likely to continue transforming manufacturing, and the U.S. economy certainly should seek to play a substantial role in many of these technologies. But on the other side, implementing many of these technologies involves a blending of creativity and skill among many workers, only some of whom will actually be in direct physical contact with machinery that produces goods and services. Here's the McKinsey report:


"A rich pipeline of innovations promises to create additional demand and drive further productivity gains across manufacturing industries and geographies. New technologies are increasing the importance of information, resource efficiency, and scale variations in manufacturing. These innovations include new materials such as carbon fiber components and nanotechnology, advanced robotics and 3-D printing, and new information technologies that can generate new forms of intelligence, such as big data and the use of data-gathering sensors in production machinery and in logistics (the so-called Internet of Things). ...

Important advances are also taking place in development, process, and production technologies. It is increasingly possible to model the performance of a prototype that exists only as a CAD drawing. Additive manufacturing techniques, such as 3-D printing, are making prototyping easier and opening up exciting new options to produce intricate products such as aerospace components and even replacement human organs. Robots are gaining new capabilities at lower costs and are increasingly able to handle intricate work. The cost of automation relative to labor has fallen by 40 to 50 percent in advanced economies since 1990."

The U.S. economy has some notable advantages in this emerging new manufacturing sector. Compared to many firms around the world, U.S. firms are used to the concept of responding to customers and being flexible over time in production processes. Many firms have good internal capabilities and external connections for new technology and innovation. Moreover, U.S. firms have the advantage of operating in an economy with a well-established rule of law, with a fair degree of transparency and openness, and with a background of functioning infrastructure for communications, energy, and transportation. As the constellations of global manufacturing shift and move, many U.S. firms are well-positioned to do well.


But the U.S. also faces a large and unusual challenge: the enormous scale of the U.S. economy has meant that many U.S. firms could stay in the U.S. economy and not try that hard to compete in world markets. However, especially for manufactured goods, most of the growth of consumption in the decades ahead will be happening in the "emerging market" economies of east and south Asia, Latin America, and even eastern Europe and Africa. When U.S. manufacturing firms focus on "granularity" and "flexibility," they need to look at locations for sales and production that are often outside their traditional geographic focus.


Thursday, November 29, 2012

The BP Spill: What's the Monetary Cost of Environmental Damage?

In April 2010, the BP Deepwater Horizon oil drilling rig suffered an explosion followed by an enormous oil spill. Here, I'll first lay out the question of how much BP is likely to end up paying as a result of the spill, a number which is gradually being clarified by the passage of time and the evolution of lawsuits. But beyond the question of what is going to happen, economists face a controversy about how best to place a dollar value on these kinds of environmental damages--and the most recent issue of my own Journal of Economic Perspectives has a three-paper symposium on the "contingent valuation" method.

A couple of weeks ago, Attorney General Eric Holder announced at a press conference in New Orleans: "BP has agreed to plead guilty to all 14 criminal charges – admitting responsibility for the deaths of 11 people and the events that led to an unprecedented environmental catastrophe. The company also has agreed to pay $4 billion in fines and penalties. This marks both the single largest criminal fine – more than $1.25 billion – and the single largest total criminal resolution – $4 billion – in the history of the United States."

But as Nathan Richardson of Resources for the Future points out, the criminal penalty is a small slice of what BP will end up paying: "But remember that this criminal settlement is only a small part of BP’s liability. Earlier this year, BP reached a preliminary $7.8b class settlement with a large number of private plaintiffs (fishermen, property owners, etc.) harmed by the spill. That agreement is currently under review by a federal district court judge. This is in addition to $8b in payments made to private parties who agreed not to litigate (from BP’s oil spill “fund”). Future payments to private parties are likely as claims on the fund are resolved or as those who were not part of the class settlement pursue separate claims. BP also claims to have paid out $14b in cleanup costs.
But that’s not all. BP still must face civil suit from the federal government (and states) over natural resources damages. ... BP also faces civil penalties under the Clean Water Act, which would quadruple from $5.5b to $21b if gross negligence is found. In other words, BP will pay out the largest criminal settlement in U.S. history and it will be only a small share of its total liability."

I don't have anything new to say about the parade of events leading up to the spill, nor about the halting efforts to stop the flow and start a clean-up. For details on what happened, a useful starting point is the report from the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling that was released in January 2011. From the Foreword of that report: "The explosion that tore through the Deepwater Horizon drilling rig last April 20 [2010], as the rig’s crew completed drilling the exploratory Macondo well deep under the waters of the Gulf of Mexico, began a human, economic, and environmental disaster. Eleven crew members died, and others were seriously injured, as fire engulfed and ultimately destroyed the rig. And, although the nation would not know the full scope of the disaster for weeks, the first of more than four million barrels of oil began gushing uncontrolled into the Gulf—threatening livelihoods, precious habitats, and even a unique way of life. ... There are recurring themes of missed warning signals, failure to share information, and a general lack of appreciation for the risks involved.... But that complacency affected government as well as industry. The Commission has documented the weaknesses and the inadequacies of the federal regulation and oversight, and made important recommendations for changes in legal authority, regulations, investments in expertise, and management."

In editing the Fall 2012 issue of my own Journal of Economic Perspectives, I found myself focused on a narrower issue: How does one put a meaningful economic number on widespread environmental damage? The issue has three papers focused on a method called "contingent valuation," which involves using survey results to estimate damages. Catherine L. Kling, Daniel J. Phaneuf and Jinhua Zhao offer an overview of the disputes and issues surrounding this method. Then, Richard Carson makes the case that contingent valuation methods have developed sufficiently to be an accurate estimating technique, while Jerry Hausman offers a skeptical view that contingent valuation surveys are so fundamentally flawed that their results should be completely disregarded. As usual, all JEP articles from the most recent back to the first issue in 1987 are freely available on-line, compliments of the American Economic Association.

From an economic perspective, the fundamental difficulty here is that not all the environmental damages affect economic output. A major oil spill, for example, directly affects production in industries like fishing and tourism, but it also affects birds and fish and beaches in ways that don't show up as a drop in economic output. In the economics literature, these losses are sometimes known as "passive use value." The notion is that even if I never visit the Gulf Coast around Louisiana and Mississippi, nor eat fish caught there, my utility can be affected by the environmental destruction that occurred. Thus, the argument goes that economic theory should take this "passive use" into account--roughly, the value that people place on the environmental damage that occurred--in thinking about lawsuits and policy choices.

The immediate objections to contingent valuation methods of setting such values are obvious: If people are just asked to place a value on environmental damage, isn't it plausible that their answers will be untethered from reality? Richard Carson, a strong advocate of these methods, faces such skepticism head-on. He writes: "Economists are naturally skeptical of data generated from responses to survey questions—and they should be! Many surveys, including contingent valuation surveys, are inadequate." He also argues, "The best contingent valuation surveys are among the best survey instruments currently being administered while the worst are among the worst."

Carson emphasizes that a high-quality contingent valuation survey takes considerable care to provide what can be several dozen pages of focus-group-tested information to consider, and emphasizes to the responders that the results of the survey are likely to help guide policy outcomes. In such a setting, he argues that people have the information and incentives to answer truthfully. Hausman responds that such surveys are plagued by difficulties: for example, the "hypothetical bias" that people tend to overstate their value when they aren't actually paying; or that valuations can vary according to how questions are phrased, like whether the question asks about willingness-to-pay to avoid environmental damage or willingness-to-accept that the same amount of environmental damage will be done; or that when people value, say, three projects separately or the combination of those three projects, their answers often don't add up. Carson discusses how those who carry out such surveys seek to deal with these issues and others. Hausman says that legislatures, regulatory agencies, and courts, relying on expert opinion, are by far a preferable way to take passive use value into account. Kling, Phaneuf, and Zhao point out that over 7,000 of these contingent valuation studies have been done in the last two decades, and they provide a background and framework for thinking about all of these issues. Of course, those who want all the ins and outs and gory details are encouraged to check out the articles themselves.



To my knowledge, no contingent valuation surveys of the costs of the BP oil spill have yet been published. But it is interesting that after the Exxon Valdez spill, the eventual settlement roughly matched the estimates of the contingent valuation study. As Richard Carson notes: "Soon after the Exxon Valdez spill in March 1989, the state of Alaska funded a contingent valuation study, contained in Carson, Mitchell, Hanemann, Kopp, Presser, and Ruud (1992), which estimated the American public’s willingness to pay to avoid an oil spill similar to the Exxon Valdez at about $3 billion. The results of the study were shared with Exxon and a settlement for approximately $3 billion was reached, thus avoiding a long court case." As contingent valuation studies of the BP spill are published, it will be interesting to compare them with the amounts that BP is paying in the aftermath of the Deepwater Horizon spill.

Wednesday, November 28, 2012

International Capital Flows Slow Down

I'm not sure why it's happening or what it means, but some OECD reports are showing that international investment flows are slowing down in late 2012, whether one looks at international merger and acquisition activity or at flows of foreign direct investment.

For example, the OECD Investment News for September 2012, written by Michael Gestrin, is titled "Global investment dries up in 2012." The main focus of the report is on international merger and acquisition activity, and Gestrin writes:

"After two years of steady gains, international investment is again falling sharply. After breaking $1 trillion in 2011, international mergers and acquisitions (IM&A) are projected to reach $675 billion in 2012, a 34% decline from 2011(figure 1) ... At the same time as IM&A has been declining, firms have also been increasingly divesting themselves of international assets. As a result, net IM&A (the difference between IM&A and international divestment) has dropped to $317 billion, its lowest level since 2004 ..."

"IM&A has declined more sharply than overall M&A activity. This is reflected in the projected drop in the share of IM&A in total M&A from 35% in 2011 to 29% in 2012 (figure 2). IM&A is declining three times faster than domestic M&A, suggesting that concerns and uncertainties specific to the international investment climate are behind the recent slide in IM&A, rather than IM&A simply following a broader downward trend."

The main exception to these downward trends seems to be a continuing rise in state-owned enterprises, particularly from China, carrying out more mergers and acquisitions, especially in transactions aimed at energy and mining operations in the Middle East and Africa. Here's one figure showing the drop in international investment, and another showing the drop as a share of total M&A activity.



In the October 2012 issue of FDI in Figures, a similar pattern emerges for foreign direct investment--which includes merger and acquisition activity. "According to preliminary estimates, global foreign direct investment (FDI) flows continued shrinking in the second quarter of 2012 and declined by -10% from the previous quarter (-14% from a year earlier) to around USD 303 billion, similar to the value of FDI flows recorded in Q2 2010. The stock of global FDI at end-2011 was estimated at USD 20.4 trillion." As with M&A activity, there is something of a bounce-back in FDI between 2010 and 2011--although there is some fluctuation as well--but the first two quarters of 2012 are showing a decline. Here are graphs showing inflows and outflows of FDI for the world as a whole, as well as for the OECD countries, the G-20, and the EU (which are subgroups with overlapping memberships!).



In absolute dollars, China and the U.S. economy dominate these FDI flows, with China receiving about twice as much FDI in 2012 as the U.S. economy: "As from the beginning of 2012, China became the first destination for FDI, recording USD 64 billion in Q1 2012 and USD 54 billion in Q2 2012. Corresponding figures for the United States are USD 22 billion and USD 33.5 billion, respectively." Other major countries for FDI inflows are France, the Netherlands, the United Kingdom, Brazil, and India. As for outflows: "Next to the United States outflows of USD 79 billion in Q2 2012 (-32% decrease from Q1 2012), the second largest investing economy was Japan at USD 37 billion (or 61% increase) followed by Belgium at USD 16 billion (or 130% increase), China at USD 13 billion (or -10% decrease), Italy at USD 12.2 billion (or 16% increase), France at USD 12.1 billion (or -29% decrease) and Germany at USD 12.1 billion (or -66% decrease)."

As I said at the start, I'm not sure what to make of these patterns. Perhaps they are just random fluctuations that will sort themselves out. But another plausible interpretation would involve "concerns and uncertainties specific to the international investment climate," as Gestrin put it. Without trying to itemize those concerns here across the euro area, the U.S., China, Russia, India, and elsewhere, we may be seeing a movement toward a situation in which exporting to and importing from other countries, without seeking a management interest in firms in those countries, looks relatively more attractive than it did a few years ago.


(For those shaky on their definitions, the report defines "foreign direct investment" this way: "Foreign Direct Investment (FDI) is a category of investment that reflects the objective of establishing a lasting interest by a resident enterprise in one economy (direct investor) in an enterprise (direct investment enterprise) that is resident in an economy other than that of the direct investor. The lasting interest implies the existence of a long-term relationship between the direct investor and the direct investment enterprise and a significant degree of influence (not necessarily control) on the management of the enterprise. The direct or indirect ownership of 10% or more of the voting power of an enterprise resident in one economy by an investor resident in another economy is the statistical evidence of such a relationship.")








Tuesday, November 27, 2012

The Lucas Critique

The Society for Economic Dynamics has a short and delightful interview with Robert Lucas in the November 2012 issue of its newsletter, Economic Dynamics. Lucas, of course, received the Nobel prize in economics in 1995 and is, among other distinctions, the originator of the eponymous "Lucas critique," which the Nobel committee described in this way:

 "The 'Lucas critique' - Lucas's contribution to macroeconometric evaluation of economic policy - has received enormous attention and been completely incorporated in current thought. Briefly, the 'critique' implies that estimated parameters which were previously regarded as 'structural' in econometric analysis of economic policy actually depend on the economic policy pursued during the estimation period (for instance, the slope of the Phillips curve may depend on the variance of non-observed disturbances in money demand and money supply). Hence, the parameters may change with shifts in the policy regime. This is not only an academic point, but also important for economic-policy recommendations. The effects of policy regime shifts are often completely different if the agents' expectations adjust to the new regime than if they do not. Nowadays, it goes without saying that the effects of changing expectations should be taken into account when the consequences of a new policy are assessed - for instance, a new exchange rate system, a new monetary policy, a tax reform, or new rules for unemployment benefits.

"When Lucas's seminal article (1976) was published, practically all existing macroeconometric models had behavioral functions that were in so-called reduced form; that is, the parameters in those functions might implicitly depend on the policy regime. If so, it is obviously problematic to use the same parameter values to evaluate other policy regimes. Nevertheless, the models were often used precisely in that way: Parameters estimated under a particular policy regime were used in simulations with other policy rules, for the purpose of predicting the effect on crucial macroeconomic variables. With regime-dependent parameters, the predictions could turn out to be erroneous and misleading."
Perhaps it's useful to add a specific example here. Say that we are trying to figure out how much the Federal Reserve can boost the economy during a recession by cutting interest rates. We try to calculate a "parameter," that is, an estimate of  how much cutting the interest rate will boost lending and the economy. But what if it becomes widely expected that if the economy slows, the Federal Reserve will cut interest rates? Then it could be, for example, that when the economy shows signs of slowing, everyone begins to expect lower interest rates, and slows down their borrowing immediately because they are waiting for the lower interest rates to arrive--thus bringing on the threatened recession. Or it may be that because borrowers are expecting the lower interest rates, they have already taken those lower rates into account in their planning, and thus don't need to make any change in plans when those lower interest rates arrive. The key insight is that the effects of policy depend on whether that policy is expected or unexpected--and in general how the policy interacts with expectations. The parameters for effects of policy estimated under one set of expectations may well not apply in a setting where expectations differ.

As the Nobel committee noted more than a decade ago, this general point has now been thoroughly absorbed into economics. Thus, I was intrigued to see Lucas note that the phrase "Lucas critique" has become detached from its original context in a way that can make it less useful as a method of argument. Here's Lucas in the recent interview:

"My paper, "Econometric Policy Evaluation: A Critique" was written in the early 70s. Its main content was a criticism of specific econometric models---models that I had grown up with and had used in my own work. These models implied an operational way of extrapolating into the future to see what the "long run" would look like. ... Of course every economist, then as now, knows that expectations matter but in those days it wasn't clear how to embody this knowledge in operational models. ... But the term "Lucas critique" has survived, long after that original context has disappeared. It has a life of its own and means different things to different people. Sometimes it is used like a cross you are supposed to use to hold off vampires: Just waving it at an opponent defeats him. Too much of this, no matter what side you are on, becomes just name calling."

Lucas offers some lively observations on dynamic stochastic general equilibrium models, differences across business cycles, and microfoundations in macroeconomic analysis. But his closing comment in particular gave me a smile. In answer to a question about the economy being in an "unusual state," Lucas answers:  "`Unusual state'? Is that what we call it when our favorite models don't deliver what we had hoped? I would call that our usual state."

Monday, November 26, 2012

Why Doesn't Someone Undercut Payday Lending?

A payday loan works like this: The borrower receives an amount that is typically between $100 and $500. The borrower writes a post-dated check to the lender, and the lender agrees not to cash the check for, say, two weeks. No collateral is required: the borrower often needs only to show an ID, a recent pay stub, and maybe a statement showing that they have a bank account. The lender charges a fee of about $15 for every $100 borrowed. Paying $15 for a two-week loan of $100 works out to an astronomical annual rate of about 390%. But because the payment is a "fee," not an "interest rate," it does not fall afoul of state usury laws. A number of states have passed legislation to limit payday loans, either by capping the maximum amount, capping the interest rate, or banning them outright.
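The roughly 390% figure comes from simple (non-compounded) annualization of the fee. Here is a minimal sketch of that arithmetic; the function name and the 365-day convention are my own choices for illustration, not anything from a particular statute or the sources discussed here:

```python
def annualized_rate(fee, principal, term_days):
    """Simple annualized rate implied by a fixed fee on a short-term loan:
    (fee / principal) scaled up to a 365-day year, without compounding."""
    return (fee / principal) * (365 / term_days)

# The example in the text: a $15 fee on a $100 loan for two weeks.
rate = annualized_rate(fee=15, principal=100, term_days=14)
print(f"{rate:.0%}")  # about 391%, consistent with the "about 390%" cited above
```

Compounding the fee every two weeks instead of simply scaling it would give a far higher effective annual rate, which is why quoted payday-loan APRs vary with the convention used.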

But for those who think like economists, complaints about price-gouging or unfairness in the payday lending market raise an obvious question: If payday lenders are making huge profits, then shouldn't we see entry into that market from credit unions and banks, which would drive down the prices of such loans for everyone? Victor Stango offers some argument and evidence on this point in "Are Payday Lending Markets Competitive?" which appears in the Fall 2012 issue of Regulation magazine.
Stango writes:

"The most direct evidence is the most telling in this case: very few credit unions currently offer payday loans. Fewer than 6 percent of credit unions offered payday loans as of 2009, and credit unions probably comprise less than 2 percent of the national payday loan market. This “market test” shows that credit unions find entering the payday loan market unattractive. With few regulatory obstacles to offering payday loans, it seems that credit unions cannot compete with a substantively similar product at lower prices.

"Those few credit unions that do offer a payday advance product often have total fee and interest charges that are quite close to (or even higher than) standard payday loan fees. Credit union payday loans also have tighter credit requirements, which generate much  lower default rates by rationing riskier borrowers out of the market. The upshot is that risk-adjusted prices on credit union payday loans might be no lower than those on standard payday loans."
The question of whether payday lending should be restricted can make a useful topic for discussions or even short papers in an economics class. The industry is far more prevalent than many people recognize. As Stango describes:

"The scale of a payday outlet can be quite small and startup costs are minimal compared to those of a bank. ... They can locate nearly anywhere and have longer business hours than banks. ... There are currently more than 24,000 physical payday outlets; by comparison there are roughly 16,000 banks and credit unions in total (with roughly 90,000 branches). Many more lenders offer payday loans online. Estimates of market penetration vary, but industry reports suggest that 5–10 percent of the adult population in the United States has used a payday loan at least once."


Payday lending fees do look uncomfortably high, but those with low incomes are often facing hard choices. Overdrawing a bank account often has high fees, as does exceeding a credit card limit. Having your electricity or water turned off for non-payment often leads to high fees, and not getting your car repaired for a couple of weeks can cost you your job.

Moreover, such loans are risky to make. Stango cites data that credit unions steer away from making payday loans because of their riskiness, and instead offer only much safer loans that have lower costs to the borrower, but also have many more restrictions, like credit checks, or a longer application period, or a requirement that some of the "loan" be immediately placed into a savings account. Credit unions may also charge an "annual" fee for such a loan--but for someone taking out a short-term loan only once or twice in a year, whether the fee is labelled as "annual" or not doesn't affect what they pay. Indeed, Stango cites a July 2009 report from the National Consumer Law Center that criticized credit unions for offering "false payday loan `alternatives'" that actually cost about as much as a typical payday loan.

Stango also cites evidence from his own small survey of payday loan borrowers in Sacramento, California, that many of them prefer the higher fees and looser restrictions on payday loans to the lower fees and tighter restrictions common on similar loans from credit unions. Those interested in a bit more background might begin with my post from July 2011, "Could Restrictions on Payday Lending Hurt Consumers?" and the links included there.



Friday, November 23, 2012

Was Curbside Recycling the Invention of Beverage Companies?

We often think of programs like curbside recycling as driven by a pure environmentalist agenda. But Bartow J. Elmore makes an intriguing argument that these programs were passed in substantial part because of pressure from U.S. beverage makers, who were trying to address a public relations nightmare and to increase their profits. His essay, "The American Beverage Industry and the Development of Curbside Recycling Programs, 1950-2000" appears in the Autumn 2012 issue of the Business History Review (vol. 86, number 3, pp. 477-501). This journal isn't freely available on-line, but many in academia will have access through library subscriptions. From the abstract:

"Many people today consider curbside recycling the quintessential model of eco-stewardship, yet this waste-management system in the United States was in many ways a polluter-sponsored initiative that allowed corporations to expand their productive capacity without fixing fundamental flaws in their packaging technology. For the soft-drink, brewing, and canning industries, the promise of recycling became a powerful weapon for combating mandatory deposit bills and other source-reduction measures in the 1970s and 1980s." 

As Elmore tells it, the story unfolds like this: For much of the 20th century, soft drink and beer companies shipped bottles to local bottling companies, which filled them with beverages. The bottles carried a deposit, often 1 or 2 cents, refunded when they were returned. Thus, the local bottling companies collected the empties, and washed and reused them. Elmore cites evidence that in the late 1940s, 96% of all soft drink bottles were returned, and a given bottle was often used 20-30 times before becoming chipped or broken.

But after Prohibition, as beer companies rebuilt their national sales networks, they started turning away from local bottlers, instead brewing at larger centralized breweries and shipping beer in steel cans. Pepsi-Cola started shipping soft drinks in steel cans in 1953, and Coca-Cola followed in 1955. For the soft drink companies, there was a long-standing belief that they could raise profits if they could find a way to reduce the number of local bottlers: sure enough, the number of local bottlers fell from 4,500 in 1960 to about 3,000 by 1972.

But there was a problem. The steel cans, and then aluminum cans, were "one-way"--that is, they weren't washed and reused. To put it another way, they were a high volume of long-lasting garbage. People protested, and state legislatures began to make ominous noises about taxing or banning nonreturnable drink containers. Industry banded together in the 1950s to create the first national anti-litter association: Keep America Beautiful. But promotional ads to encourage picking up litter weren't enough, and by the late 1960s and early 1970s, the U.S. Congress along with various states was again contemplating a ban on nonreturnable containers.

And so the beverage and canning companies, along with garbage giants like Waste Management and Browning-Ferris and scrap metal companies like Hugo Neu, formed a coalition with environmental groups like the Sierra Club and the National Wildlife Federation to push for federal grants that would help set up recycling programs. As Elmore writes: "The beverage industry positioned itself as the keystone of the recycling system." When anyone argued for reusable drink containers, a common response was that doing so would cripple the recycling system, and that it would cost jobs in the can-making industry.

States stopped passing laws requiring mandatory deposits on cans and bottles: since 1986, only Hawaii has passed such a law. Instead, taxpayers and ratepayers at the federal and state level paid for curbside recycling. Elmore writes: "Taxpayers were taking on the majority of the cost of collecting, processing, and returning corporate byproducts to producers, and industry remained exempt from disposal fees that might have been used to pay for expensive recycling systems. More critically, government-mandated source-reduction and polluter-pays programs had been discredited as viable methods for reducing the nation's pollution problem."

Compared to the 1940s when 96 percent of bottles were washed and reused, often a couple of dozen times, where do we stand today? Elmore cites evidence that in the mid-2000s, maybe 30-40 percent of cans and plastic bottles are recycled.

I wouldn't want to try to turn the clock back to the days of rewashing and reusing bottles. But it's not at all obvious to me that curbside recycling is doing the job. Ten states have laws requiring deposits on cans and bottles, according to the lobbyists at BottleBill.org. If we want people to be serious about recycling, a deposit of 5-10 cents for returning cans and bottles is likely to be a more effective tool than curbside recycling.


Wednesday, November 21, 2012

An Economist Chews Over Thanksgiving

(Originally appeared Thanksgiving 2011.)

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there's anything wrong with that.

The last time the U.S. Department of Agriculture did a detailed "Overview of the U.S. Turkey Industry" appears to be back in 2007. Some themes about the turkey market waddle out from that report on both the demand and supply sides.

On the demand side, the quantity of turkey consumed rose dramatically from the mid-1970s to the mid-1990s, but since then has declined somewhat. The figure below is from the USDA study, but more recent data from the Eatturkey.com website run by the National Turkey Federation report that U.S. producers raised 244 million turkeys in 2010, so the decline has continued in the last few years. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.



On the production side, the National Turkey Federation explains: "Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing - from breeding through delivery to retail." However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized--with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys.  Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:
"In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg  capacity per hatchery in 2007.

Turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent."
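The "efficiency gain" in that last sentence is just the percent change in average bird weight. A quick check of the arithmetic, using the USDA weights quoted above:

```python
def percent_change(old, new):
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100

# Average turkey weights from the USDA report quoted above
weight_1986 = 20.0  # pounds
weight_2006 = 28.2  # pounds

gain = percent_change(weight_1986, weight_2006)
print(f"Efficiency gain: {gain:.0f}%")  # prints "Efficiency gain: 41%"
```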


U.S. agriculture is full of these kinds of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a "turkey" as a product that doesn't have a lot of opportunity for technological development, but clearly I'm wrong. Here's a graph showing the rise in size of turkeys over time.


The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here's a list of top turkey producers in 2010 from the National Turkey Federation.




For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant "basket" or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and for 26 years, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:


The top line of this graph shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. One could use the underlying data here to calculate an inflation rate: that is, the increase in nominal prices for the same basket of goods was 13% from 2010 to 2011. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate in the last 26 years. But in 2011, the rise in the price of the Classic Thanksgiving Dinner, like the rise in food prices generally, has outstripped the overall rise in inflation.
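The price-index mechanics behind that 13% figure are simple: price the same fixed basket of goods in both years, then take the percent change in its total cost. Here is a minimal sketch; the three-item basket and the prices below are illustrative stand-ins, not the Farm Bureau survey's actual figures:

```python
def basket_cost(prices, quantities):
    """Total cost of a fixed basket of goods at the given prices."""
    return sum(prices[item] * qty for item, qty in quantities.items())

# Illustrative (not actual AFBF survey) basket and prices
quantities = {"turkey (16 lb)": 1, "stuffing (14 oz)": 1, "pumpkin pie mix (30 oz)": 2}
prices_2010 = {"turkey (16 lb)": 17.66, "stuffing (14 oz)": 2.64, "pumpkin pie mix (30 oz)": 2.92}
prices_2011 = {"turkey (16 lb)": 21.57, "stuffing (14 oz)": 2.88, "pumpkin pie mix (30 oz)": 3.03}

cost_2010 = basket_cost(prices_2010, quantities)
cost_2011 = basket_cost(prices_2011, quantities)
inflation = (cost_2011 / cost_2010 - 1) * 100
print(f"Basket inflation, 2010 to 2011: {inflation:.1f}%")
```

The Consumer Price Index works the same way in principle, just with a much larger basket reflecting what a typical household buys.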



Thanksgiving is my favorite holiday. Good food, good company, no presents--and all these good topics for conversation. What's not to like?


Tuesday, November 20, 2012

The Paperless Office: Headed that Way at Last?

As computers became widespread in the 1980s and into the early 1990s, a common prediction was that we were headed for the "paperless office." But that prediction went badly astray, as Abigail Sellen and Richard Harper pointed out in their 2002 book, The Myth of the Paperless Office.
For example, they cited evidence that consumption of common office paper rose by 14.7% from 1995 to 2000, and they argued that when organizations started using e-mail, their consumption of paper rose by 40%. In short, it appeared that information technology was not a substitute for the use of paper, but instead a complement to it--in other words, computerization was going to be the best thing that ever happened to the paper industry.

But although the transition took some time, it now appears that at least U.S. offices are becoming, if not quite paperless, much less paper-intensive. Here's a figure from the Environmental Paper Network's State of the Paper Industry 2011, which came out last year. Notice in particular the decline in paper use since about 2007 in North America and Western Europe.


The report describes U.S. paper consumption trends in this way (notes omitted):

"Consumption of paper and paperboard products has experienced significant decline in North America since 2007. This is attributable primarily to the aftermath of the financial crisis in the United States at the end of the decade. The poor economy motivated many companies to perform a close analysis of their paper use and inspired the adoption of innovative and more efficient systems. These new systems will remain in place into the economic recovery and likely have a lasting impact on printing and writing paper consumption. In addition, the shift in the patterns of consumption of news and other media from print to digital formats is also apparently having an irreversible effect in
some paper sectors such as newsprint.

"Total global consumption of paper is still rising, reaching 371 million tonnes in 2009. However, total paper consumption in North America has declined 24% between 2006 and 2009. Per capita consumption of paper in North America dropped from more than 652 lbs/year in 2005 to 504 lbs/year in 2009.

"North Americans still, however, consume almost 30 times more paper per capita than the average person in Africa and 6 times more than the average person in Asia. In 2009, total paper consumption in China eclipsed total North American consumption for the first time."
I remember stories from the old days of computerized offices, maybe 15-20 years ago, about executives who wanted all their e-mails and reports printed out. Those days are gone. But it's interesting to me that even for a change that seems as obvious as electronic communication leading to less paper, it took some years and the pressures of a recession for substantial change to take effect. Similarly, it wasn't until about 2006 that the volume of mail carried by the U.S. Postal Service took a nosedive. All the consequences of major technological changes can take decades to ripple through an economy.

 

Monday, November 19, 2012

China's Economic Growth: A Different Storyline

When I chat with people about China's economic growth, I often hear a story that goes like this: The main drivers behind China's growth are a combination of cheap labor and an undervalued exchange rate, which it uses to create huge trade surpluses. The most recent issue of my own Journal of Economic Perspectives includes a five-paper symposium on China's growth, and the papers make a compelling case that this received wisdom about China's growth is more wrong than right.

For example, start with the claim that China's economic growth has been driven by huge trade surpluses. China's major economic reforms started around 1978, and rapid growth took off not long after that. But China's balance of trade was essentially in balance until the early 2000s, and only then did it take off. Here's a figure generated using the ever-useful FRED website from the St. Louis Fed.

How does China's pattern of trade balances line up with the argument about China's undervalued exchange rate? Here's a graph showing China's exchange rate over time in yuan/dollar. Thus, an upward movement on the graph means that the yuan is weaker relative to the dollar, and a downward movement means that it is stronger relative to the dollar. The yuan does indeed get weaker relative to the U.S. dollar for much of the 1980s and first half of the 1990s--but this is the time period when China's trade balance is near-zero. China's exchange rate is pretty much unchanging for the five years or so before China's trade surplus takes off. Since 2006, the yuan has indeed been strengthening. Last week the yuan hit its highest level against the dollar since 1994.

What about China's purportedly cheap wages? Here's a figure from the article by Hongbin Li, Lei Li, Binzhen Wu, and Yanyan Xiong, called "The End of Cheap Chinese Labor." As the figure points out, China's wages were fairly flat during much of the 1980s and 1990s, which is the time when China's trade was nearly in balance. But whether the conversion is done using yuan/dollar exchange rates or adjusting for inflation in China (measured by the producer price index), wages in China have been rising at double-digit annual rates since the late 1990s. In other words, China's big trade surpluses of the last decade have co-existed with sharply rising wages.
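To get a feel for what sustained double-digit wage growth means, consider how quickly wages double under compounding. The 12% rate below is an assumption for illustration only; the Li, Li, Wu, and Xiong paper reports double-digit annual growth, not one fixed rate:

```python
# Illustrative: years for wages to double at a sustained double-digit growth rate.
# The 12% figure is an assumed rate for illustration, not a number from the paper.
rate = 0.12
wage = 1.0   # index: starting wage level
years = 0
while wage < 2.0:
    wage *= 1 + rate
    years += 1
print(f"At {rate:.0%} a year, wages double in about {years} years")
```

At that pace, a decade of growth roughly doubles real wages twice over, which is why "the end of cheap Chinese labor" is not an exaggeration.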

Clearly, China's pattern of economic growth since the start of its reforms needs a different storyline than the basic tale of low wages, a cheap currency, and big trade surpluses. After working with these authors, my own view is that it's useful to think of China's economy since about 1978 in two main stages--although there isn't a clean-and-clear break between them.

The first stage of China's growth that went through the 1980s and a bit into the early 1990s was really about rural areas.  Yasheng Huang makes this argument strongly in his JEP article "How Did China Take Off?" Huang writes: "China’s take-off in economic growth starting in the late 1970s and its poverty reduction for the next couple of decades was completely a function of its rural developments and its internal reforms in general. During the golden era of rural industry in the 1980s, China had none of what are often thought of as the requisite features of the China growth model, like massive state-controlled infrastructural investments and mercantilism." This was the time period when the agricultural sector was allowed to operate under a market framework, and as agricultural output exploded, rural workers moved to employment in the "township and village enterprises." Huang makes a strong argument that these enterprises should be thought of as privately owned firms, operating with what was in many ways a private-sector financial market.


But in the 1990s, the emphasis of China's economy began to change. New leaders favored urban development over rural development, and they cut the township and village enterprises down to size by re-nationalizing their sources of finance. They began to reform the money-losing state-owned enterprises that still dominated China's urban economy as of the early 1990s. They moved China toward joining the World Trade Organization, which happened in 2001. For a sense of the transition in China's urban areas to private sector employment, here's another useful figure from Li, Li, Wu and Xiong:

But this process of change brought an unexpected macroeconomic imbalance. As Dennis Tao Yang points out in his JEP paper, "Aggregate Savings and External Imbalances in China," China's 11th Five-Year Plan for the years from 2006-2010 called for trade to be in balance overall--clearly an expectation that was not close to being met. Yang looks at a variety of reasons why savings rates took off in China: for example, after China joined the WTO in 2001, exports surged, but firms lacked useful ways in China's underdeveloped financial system to pass these savings on to the household sector; as exports grew, China's government received an unexpectedly huge surplus, with budget surpluses upward of 8% of GDP; and households, concerned about retirement and health costs for themselves and their families, and with little access to loans for mortgages or consumer durables, continued to save at very high rates. Yang notes that in China, this combination of outcomes is sometimes criticized as the "Nation Rich, People Poor" policy.

Thus, although China's economy continues to grow rapidly, it is faced with many challenges. Along with the macroeconomic imbalances emphasized by Yang, Xin Meng raises another cluster of issues in her paper, "Labor Market Outcomes and Reforms in China": the extraordinary back-and-forth migration from rural to urban areas, now at well over 100 million people per year, and perhaps headed much higher; the growing inequalities in wages as labor markets move away from the administratively determined wages that were so common even just 20 years ago; the inequalities being created by the spread of education; and China's coming demographic bulge with many elderly and few young workers--a hangover of the one-child rules to limit population growth.

With little effort, one can compile quite a list of economic difficulties facing China: macroeconomic imbalances, an underdeveloped financial sector, inequalities in wages and across rural and urban areas, the demographic bulge, corruption, environmental problems, and more. Still, with all that said, it's worth remembering that China's economy still has enormous potential upside. China started from such a low per capita GDP back in 1978 that even now, productivity levels are only about 20% of the U.S. level. In yet another JEP paper, "Understanding China’s Growth: Past, Present, and Future," Xiaodong Zhu points out that when Japan and Korea and Taiwan had their rapid spurts of economic growth in the 1950s and 1960s and 1970s, they were essentially raising their productivity levels from 40-50% of the U.S. level up to 70-80% of the U.S. level. In other words, China is still far below the level that was the take-off point of rapid growth for countries like Japan, Korea and Taiwan. As Zhu points out, China is making enormous investments in education, physical capital investment, and research and development. In many ways, it is laying a framework for continued growth.

Surely, many things could go wrong for China's economy. For continued growth, it will need to transform its economy again and again. But it also seems to me that hundreds of millions of people in China have developed a sense of possibility, and of what their economic lives could hold for them. China's future growth is sure to have fits and starts, like every country, but its economy continues to have enormous momentum toward a much higher standard of living.

Friday, November 16, 2012

Marginal Tax Rates on the Poor and Lower Middle Class

There's always a lot of talk about how marginal tax rates affect the incentives of those with high incomes. But how high are marginal tax rates on those with low incomes? The question might seem peculiar. After all, don't we know for a fact that those in the bottom of the income distribution, at least on average, don't pay federal taxes? Instead, on average, they get "refundable" tax credits from the federal government for programs like the Earned Income Tax Credit and the child credit. As a result, the Congressional Budget Office has calculated that the bottom two quintiles of the income distribution pay a negative income tax rate. Even with payroll taxes for Social Security and Medicare and federal excise taxes on gasoline, cigarettes, and alcohol added in, the bottom quintile of the income distribution pays only 1% of its income in federal taxes.

But the marginal tax rate that someone owes is not the same as the average tax rate that they owe. Those with low incomes can often face a situation where, as their income rises, the amount that they receive from the Earned Income Tax Credit declines. Other non-tax programs like Food Stamps, Medicaid, Temporary Assistance for Needy Families (welfare), and the Children's Health Insurance Program (CHIP) also phase out as income increases. Thus, for each marginal $1 that someone with a low income earns, the gradual withdrawal of these benefits means that their after-tax income rises by less than $1. In addition, even those with low incomes pay Social Security and Medicare payroll taxes.

The Congressional Budget Office has taken on the task of calculating "Effective Marginal Tax Rates for Low- and Moderate-Income Workers."  Here's an illustrative figure showing before-tax and after-tax income for a hypothetical single parent with one child. Before-tax income is just a straight line for illustrative purposes. The line for after-tax income shows what after-tax income would be for this family, given the before-tax level of income. For example, with a before-tax income of zero, after-tax income would be approximately $20,000, due to various transfer payments. At a before-tax income of about $27,000, after-tax income is also about $27,000: that is, $27,000 is the break-even point where the subsidies available from the government at that income level are equal to the taxes being paid at that income level. In general, the after-tax income line has a flatter slope than the before-tax line, which is telling you that when you earn $1 of before-tax income, the gain to after-tax income is less than $1--even for those with low and moderate income levels.
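The effective marginal tax rate over a range of earnings is just one minus the slope of the after-tax income line. A minimal sketch using the two stylized points from the CBO illustration described above, and assuming, as a simplification, that the schedule is linear between them:

```python
def effective_marginal_tax_rate(before1, after1, before2, after2):
    """EMTR over an earnings range: 1 minus the change in after-tax income
    per dollar of change in before-tax income."""
    return 1 - (after2 - after1) / (before2 - before1)

# Two stylized points from the CBO-style illustration in the text:
# at $0 earned, after-tax income is roughly $20,000 (transfers);
# at roughly $27,000 earned, after-tax income is also $27,000 (break-even).
# Linearity between these points is an assumption for illustration.
emtr = effective_marginal_tax_rate(0, 20_000, 27_000, 27_000)
print(f"Average effective marginal tax rate over this range: {emtr:.0%}")
```

The actual schedule is not linear, so the true marginal rate varies along the way, but the calculation shows why benefit phase-outs can make implicit marginal rates at low incomes surprisingly high.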

The first graph is more-or-less real data, but for a hypothetical family. A second graph looks at actual marginal tax rates by household income level. At any given income level, of course, there is a range of marginal tax rates, depending on how many people are in the family, what programs they are eligible for, and even the state they live in (because benefit levels for many programs vary by state). The graph shows the range of marginal tax rates at any given income level, from the 10th percentile of marginal tax rates up to the 90th percentile. Earnings on the horizontal axis are shown as a percentage of the federal poverty line (FPL).

Two main patterns jump out at me from this graph. One pattern is that there is an enormous range of marginal tax rates at very low income levels, at and below the poverty line. This range of marginal tax rates reflects the enormous diversity in types of households in poverty, and what sort of government assistance each family is eligible for. The other pattern is that for those from about 200% of the poverty line up to about 600% of the poverty line, a sizeable proportion of households are facing marginal tax rates--considering federal income and payroll taxes, along with food stamps--in the range of 30-40%.

These high marginal tax rates on those with low and moderate levels of income raise some questions for those on all sides of the tax debates. For those who don't believe that high marginal tax rates have much effect on incentives to work at higher income levels, like households earning $250,000 or more per year, consistency would seem to suggest that they shouldn't worry too much about incentives to work at lower income levels, either. For those who express a lot of concern about how high marginal tax rates would injure incentives to work at the top income levels, consistency would seem to suggest that they express similar concern over the high marginal tax rates facing those at lower and moderate income levels, too--which means making programs like the Earned Income Tax Credit, food stamps, welfare, and others more generous, so that they can be phased down more slowly as people earn income.

Thursday, November 15, 2012

The Case For and Against Advertising

Advertising may be that rare case where economists are less cynical than the general public. For many people, advertising is the epitome of exploitative and wasteful capitalism, encouraging people to feel bad unless they spend money they don't have on goods and services they don't actually want or need. Many business firms spend their advertising dollars ruefully, while quoting the old line: "I know that half of the money I spend on advertising is wasted, but the problem is that I don't know which half."
 
But while many see this glass as totally empty, economists see it as half-full. Yes, advertising can be an arms race of expenditures that benefits no one, while creating consumer dissatisfaction. But it can also be a form of active competition leading to lower prices and better products for consumers.  Here, I'll give a few facts about advertising, review the classic arguments from Alfred Marshall's 1919 classic Industry and Trade, and point out a bit of recent evidence that consumers may well benefit from lower prices when advertising expands.
 
Kantar Media reports that total U.S. advertising spending was $144 billion in 2011--which is about 1% of GDP. That money is pretty well spread out across advertisers and across different kinds of media: for example, the top 10 advertisers account for only a bit more than 10% of total advertising spending.


At the global level, according to Nielsen's Global AdView Pulse Report, advertising totalled almost $500 billion in 2011.
 
The great economist Alfred Marshall laid out the social pros and cons of advertising in his 1919 book, Industry and Trade. On one side, he emphasizes the role of advertising in providing information and building reputations, names, and trademarks; on the other, he points out that it can be carried to excess. Here are a few of Marshall's comments. Even nearly a century ago, he was skeptical about how American advertising, in particular, was often being carried to excess.

 "The reputation acquired by large general advertising is easy of attainment, though expensive. It is indeed seldom of much value, unless accompanied by capable and honourable dealing: but, when attained, it extends in varying degrees to all products made or handled by the business: a name or a trade mark which has gained good fame in regard to one product is a great aid to the marketing of others." (p. 180)

"On the other hand, some sorts of private retail trade are spending lavishly on competitive advertisements, most of which waste much of their force in neutralizing the force of rivals. In America, where they have been developed with more energy and inventive force than anywhere else ..." (p. 195)

"Some of the implements of constructive advertisement are prominent in all large cities.  For instance a good frontage on a leading thoroughfare; adequate space for the convenience of employees and for customers; lifts and moving staircases, etc., are all constructive, so long as they do not exceed the requirements of the business. That is to say, the assistance, which they afford to customers by enabling them to satisfy their wants without inordinate fatigue or loss of time, would be appropriate, even if the business were not in strong rivalry with others. But eager rivalry often causes them to be carried to an excess, which involves social waste; and ultimately tends to raise the charges which the public have to meet without adequate return." (p. 200)

"On the other hand the combative force of mere capital obtrudes itself in the incessant  iteration of the name of a product, coupled perhaps with a claim that it is of excellent quality. Of course no amount of expenditure on advertising will enable any thing, which the customers can fairly test for themselves by experience (this condition excludes medicines which claim to be appropriate to subtle diseases, etc.), to get a permanent hold on the people, unless it is fairly good relatively to its price. The chief influence of such advertisement is exerted, not through the reason, but through the blind force of habit: people in general are, for good and for evil, inclined to prefer that which is familiar to that which is not." (p. 194)

"In conclusion it should be noted that academic students and professional advertising agents in America have united in applying modern methods of systematic and progressive analysis, observation, experiment, record, and provisional conclusion, in successive cycles to ascertaining the most effective forms of appeal. Psychology has been pressed into the service: the influence which repetition of an advertisement exerts has been subsumed as a special instance of the educative effect of repetition." (p. 201)
Having laid out the basic tradeoff of advertising, what does the empirical evidence show? Ferdinand Rauch describes some of his recent work on "Advertising and consumer prices" at the Vox blog. (I saw this study mentioned by Phil Izzo at the Real Time Economics blog.) Rauch points out that the evidence on how advertising affects prices seems to vary across industries:
 
"Existing empirical evidence has demonstrated that prices of various goods react to changes in advertising costs differently. For example, advertising seems to decrease prices for eyeglasses (Kwoka 1984), children’s breakfast cereals (Clark 2007) and drugs (Cady 1976), while it increases the supply price in brewing industries (Gallet and Euzent 2002)."
 
Rauch's recent work takes a different approach, looking at what happened in Austria when region-specific tax rates on advertising were replaced by a single national rate. As a result of this change, the tax rate on advertising rose in some regions and fell in others. He finds: 

"I first show that the taxation of advertising is indeed a powerful instrument to restrict advertising expenditures of firms. I also show that advertising increased consumer prices in some industries such as alcohol, tobacco and transportation, in which the persuasive effect dominates. But it also decreased consumer prices in other industries such as food. I use data from existing marketing studies which make it possible to relate different responses of market prices to characteristics of advertisements in industries. I can indeed show that those industries which exhibit the informative price include more information in their advertisements, consistent with the interpretation of informational and persuasive forces of advertising.
"The aggregate effect is informative, which means that, on average, advertising decreases consumer prices. This suggests that the Austrian advertising tax increases consumer prices and probably affects welfare adversely. I estimate that if the current 5% tax on advertising in Austria were abolished, consumer prices would decrease by about 0.25 percentage points on average."

Thus, the challenge for all of us as consumers of advertising is to consume the information that it provides without also swallowing the persuasion that it offers. In addition, whenever I feel annoyed that perhaps advertising has cost me some money, I try to remember that advertising pays essentially all of the production costs for my morning newspaper and for most of the television that I watch.
 
 

Wednesday, November 14, 2012

The Uncertain Future for Universities

Ernst & Young has produced an interesting report called "University of the Future: A thousand year old industry on the cusp of profound change." Although the report is aimed specifically at Australian universities, many of the insights apply all around the world. The tone of the report is summed up right at the start:

"Our primary hypothesis is that the dominant university model in Australia — a broad-based teaching and research institution, supported by a large asset base and a large, predominantly in-house back office — will prove unviable in all but a few cases over the next 10-15 years. At a minimum, incumbent universities will need to significantly streamline their operations and asset base, at the same time as incorporating new teaching and learning delivery mechanisms, a diffusion of channels to market, and stakeholder expectations for increased impact. At its extreme, private universities and possibly some incumbent public universities will create new products and markets that merge parts of the education sector with other sectors, such as media, technology, innovation, and venture capital."

Tertiary education is on the rise all around the world. This figure shows participation rates for 18-22 year-olds in tertiary education. Just from 2000 to 2010, the percentage has tripled in China, and more-or-less doubled in India, East Asia and the Pacific, and Latin America. (Side note: "MENA" in the table refers to the Middle East and North Africa region.)

Finishing a four-year college degree has historically been something that happens for well under half of students in high-income countries, and for only a tiny slice of students in low-income countries. With the dramatic expansion of attendance, the traditional model won't work well--it's just too costly on a per-student basis. How this industry will shake out in a world of digital technology and global mobility, along with research programs that are increasingly intertwined with industry, is not at all clear. But the E&Y report offers a glimpse of some of the possibilities. Here are a few thoughts, scattered through the report, that jumped out at me.

"The likely outcome over the next 10-15 years is the emergence of a small number of elite, truly global university ‘brands’. These global brands of the future will include some of the ‘usual suspects’ — a subset of Ivy League and Oxbridge institutions — as well as a number of elite institutions from China."

"The relationship between industry and the higher education sector is changing and deepening. Industry plays multiple roles: as customer and partner of higher education institutions and, increasingly, as a competitor. ... Research commercialisation will go from being a fringe activity to being a core source of funding for many universities’ research programs. ...  Finally, industry will increasingly compete with universities in a number of specialist professional programs. Accounting industry bodies already provide a range of specialised postgraduate programs (CPA, CA, CFA etc). Other industry groups, for example engineering associations and pharmacy guilds, may play an increased role as certifiers and deliverers of content."

"Organisations in other knowledge-based industries, such as professional services firms, typically operate with ratios of support staff to front-line staff of 0.3 to 0.5. That is, 2-3 times as many front-line staff as support staff. Universities may not reach these ratios in 10-15 years, but given the ‘hot breath’ of market forces and declining government funding, education institutions are unlikely to survive with ratios of 1.3, 1.4, 1.5 and beyond."

"Use of assets is also an area with scope for much greater efficiency. Most universities own and maintain a sizeable asset base, much of which is used only for four days per week over two 13-week semesters — not much more than 100 days per year."

"Incumbent public universities bring two critical assets to this model: credibility and academic capability. In an age of ubiquitous content, ‘content is king’ no longer applies. Credibility is king — and increasingly ‘curation is king’. Universities are uniquely positioned to bring credibility and to act as curators of content. The challenge for public universities in this world is to cut the right deal — a deal that builds in brand protection and a reasonable share of the value created." 

One university vice-chancellor is quoted as saying: "Our major competitor in ten years time will be Google ... if we're still alive!" Another is quoted as saying: "The traditional university model is the analogue of the print newspaper ... 15 years max, you've got the transformation." And yet another is quoted as: "Universities face their biggest challenge in 800 years."

I would only add that universities and colleges typically don't like to think of themselves as "businesses," but even nonprofit institutions have a "business model" in which revenue needs to match up to expenditures.  The higher education business model is going to be dramatically disrupted, and so far, we've only seen the front edge of those changes.


Tuesday, November 13, 2012

Raising U.S. Exports

The U.S. Census Bureau is out with the monthly trade data for September 2012. The trade deficit is down a bit since August, but has not changed much over the last few years. Here's the monthly trade picture of imports and exports over the last couple of years. But what I want to focus on here is the possibility of raising U.S. exports as part of helping to regenerate economic growth.


Graph of International Trade Balances

The U.S. GDP was $14.6 trillion in 2010. By World Bank estimates, with the conversions between currencies done at purchasing power parity exchange rates, U.S. GDP was essentially equal to the sum of the GDPs of China ($10.1 trillion) and India ($4.2 trillion). In the two years since then, the combined size of China's and India's economies has surely come to exceed that of the U.S. And looking ahead, the annual economic growth rates of China, India, and some other emerging market economies are likely to be a multiple of U.S. growth rates. For much of the last few decades, the U.S. consumer has been driving the world economy with buying and borrowing. But in the next few decades, emerging economies like China, India, and others will be the drivers of world economic growth.

So how well-prepared is the U.S. economy to find its niche in the global supply chains that are of increasing importance in the world economy? The World Economic Forum offers an answer in "The Global Enabling Trade Report 2012," published last spring. Chapter 1 of the report is called "Reducing Supply Chain Barriers: The Enabling Trade Index 2012," by Robert Z. Lawrence, Sean Doherty, and Margareta Drzeniek Hanouz. They sum up the growing importance of global supply chains in this way: 

"Traded commodities are increasingly composed of intermediate products. Reductions in transportation and communication costs and innovations in policies and management have allowed firms to operate global supply chains that benefit from differences in comparative advantage among nations, both through international intra-firm trade and through networks that link teams of producers located in different countries. Trade and foreign investment have become increasingly complementary activities. ... Increasingly, countries specialize in tasks rather than products. Value is now added in many countries before particular goods and services reach their final destination, and the traditional notion of trade as production in one country and consumption in another is
increasingly inaccurate."

Their "enabling trade" index ranks countries in how their institutions and infrastructure support global trade, along a number of categories. Here's the list of the top 25 countries in their ranking: the U.S. economy ranked 19th in 2010, and dropped to 23rd in the 2012 rankings.


I'm not especially bothered that the U.S. ranks behind national economies that are massively oriented to international trade, like Singapore and Hong Kong. But ranking behind other large high-income economies like Sweden, Canada, Germany, Japan and France is more troubling. Remember, countries that enable trade are the ones that will be prepared to participate in the main sources of future global economic growth. Here's how Lawrence, Doherty, and Hanouz sum up the U.S. situation.

"Dropping four places, the United States continues its downward trend since the last edition and is ranked 23rd this year. The country’s performance has fallen in international comparison in almost all areas assessed by the Index, bar the efficiency of its border procedures and the availability of logistics services. The regulatory environment appears less conducive to business than in previous years, falling by 10 ranks from 22nd to 32nd. Concerns regarding the protection of property rights, undue influence on government and judicial decisions, and corruption are on the rise. And as in previous years, protection from the threat of terrorism burdens the business sector with very high cost (112th), and US exporters face some of the highest trade barriers abroad. Yet overall the United States continues to benefit from hassle-free import and export procedures (17th) and efficient customs clearance (14th), thanks to excellent customs services to business (3rd). The country also boasts excellent infrastructure, including ICTs, providing a strong basis for enabling trade within the country."

The U.S. economy, with its enormous domestic market, has traditionally not had to treat exports and foreign trade as essential to its own growth. But the global economy is changing, and more U.S. producers need to start looking longer and harder beyond their national borders.

Monday, November 12, 2012

Hydraulic Models of the Economy: Phillips, Fisher, Financial Plumbing

Part of the lore of earlier economists as it was passed down to me around the campfire back in the Neolithic era is the story of how Alban William Housego (Bill) Phillips, the originator of the famous 1958 paper that drew the "Phillips curve" tradeoff between unemployment and inflation, also built a hydraulic economic model: that is, a physical model of the economy in which flows of consumption, saving, investment and other economic forces were represented by liquid moving through tubes and pipes. What I hadn't known until more recently is that Irving Fisher also created a hydraulic model of the economy.


As a starting point for background on Bill Phillips and his famous 1958 Phillips curve paper, I can recommend the article by A.G. Sleeman called "Retrospectives: The Phillips Curve: A Rushed Job?" which appeared in the Winter 2011 issue of my own Journal of Economic Perspectives. (Like all articles in JEP from the current issue back to the first issue in 1987, it is freely available on-line compliments of the American Economic Association.) The hydraulic computer is not the main focus of Sleeman's article, but he provides evidence that it was a major part of Phillips' career.

Apparently, Phillips completed his undergraduate degree at the London School of Economics in June 1949, specializing in sociology, and receiving only a "Pass." From there, Sleeman writes (footnotes and references omitted):

"In 1950, despite his poor degree, Phillips was appointed an Assistant Lecturer in the Department of Economics at LSE at the top of the pay scale, and simultaneously began his Ph.D. studies. The reason was that by 1949, Phillips had built the MONIAC: Monetary National Income Analogue Computer. (The name is a play on ENIAC, the Electronic Numerical Integrator and Computer, which had been announced in 1946 as the fifi rst general-purpose electronic computer.) MONIAC was a hydraulic machine, made of transparent plastic pipes and tanks fastened to a wooden board, about six feet high, four feet wide, and three feet deep. The MONIAC used colored water to represent the stocks and flows of an IS–LM style model and simulated how the model behaved as monetary and fiscal variables varied. The MONIAC brought Phillips to the attention of James Meade and, ultimately, to Lionel Robbins and other members of the LSE economics department. In 1950, Phillips published a paper on MONIAC in the August issue of Economica eight months after failing his Applied Economics and Economic History exams and passing Principles by a single point. In October 1951, the 36 year-old Phillips was promoted to Lecturer and tenured having published only one paper. Over the next two years Phillips completed his doctorate..."
Phillips was born in New Zealand, and a December 2007 article in the Reserve Bank of New Zealand Bulletin discusses the MONIAC. Here's a photo of Phillips standing beside the MONIAC at LSE, and a more recent photo of a MONIAC that the New Zealand central bank received from LSE and has restored to working order.



In the New Zealand central bank publication, Tim Ng and Matthew Wright describe the functioning of MONIAC this way: "Separate water tanks represent households, business, government, exporting and importing sectors of the economy. Coloured water pumped around the system measures income, spending and GDP. The system is programmable and capable of solving nine simultaneous equations in response to any change of the parameters, to reach a new equilibrium. A plotter can record changes in the trade balance, GDP and interest rates on paper. Simulation experiments with fiscal policy, monetary policy and exchange rates can be carried out. Although the MONIAC was conceived as a teaching tool, it is also capable of generating economic forecasts. Phillips himself used the MONIAC as a teaching tool at the London School of Economics. Around 14 machines were built ..." For those with an insatiable need to know more about the MONIAC, I recommend this special December 2011 issue of Economia Politica: Journal of Analytical and Institutional Economics, which includes about a dozen articles about the lasting influence of Phillips and the MONIAC.
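The idea of a machine that settles into the solution of simultaneous equations can be mimicked in a few lines of code. Below is a toy sketch, not the MONIAC's actual equations: a simple IS-LM system with invented parameter values, solved by letting output and the interest rate adjust step by step until the "water levels" stop moving.

```python
# Toy IS-LM system solved by relaxation, in the spirit of the MONIAC's
# hydraulic search for equilibrium. All parameter values are invented.
a, b, T = 200.0, 0.75, 200.0        # consumption: C = a + b*(Y - T)
i0, d = 300.0, 20.0                 # investment:  I = i0 - d*r
G = 300.0                           # government spending
M_over_P, k, h = 500.0, 0.5, 50.0   # money market: M/P = k*Y - h*r

Y, r = 1000.0, 5.0  # arbitrary starting "water levels"
for _ in range(1000):
    r = (k * Y - M_over_P) / h                   # rate that clears the money market
    Y_new = a + b * (Y - T) + (i0 - d * r) + G   # goods-market spending flow
    if abs(Y_new - Y) < 1e-9:                    # levels have stopped moving
        break
    Y = Y_new

print(f"equilibrium output Y = {Y:.2f}, interest rate r = {r:.2f}")
```

Each pass of the loop plays the role of one tick of the hydraulic machine: spending flows respond to the interest rate, the interest rate responds to output, and the system converges because each adjustment partially offsets the last.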


But until recently, I hadn't known that Irving Fisher had also built a hydraulic model of the economy as part of his doctoral dissertation back in 1891. I learned about it in the article by Robert W. Dimand and Rebeca Gomez Betancourt in the most recent issue of my own JEP. Their article is primarily focused, as the title indicates, on "Irving Fisher’s Appreciation and Interest (1896) and the Fisher Relation." But in their capsule overview of Fisher's life, they write (citations and footnotes omitted):

"His 1891 doctoral dissertation in mathematics and political economy (Yale’s first Ph.D. in political economy or economics), which was published as Mathematical Investigations in the Theory of Value and Prices (1892), brought general equilibrium analysis to North America; it was supervised jointly by the physicist and engineer J. Willard Gibb and the economist and sociologist William Graham Sumner. Paul Samuelson once described Fisher (1892) as the “greatest doctoral dissertation in economics ever written” ... because Fisher invented general equilibrium analysis for himself before his last minute discovery of the writings of Léon Walras and Francis Ysidro Edgeworth. Fisher’s thesis went beyond these writings in one striking respect: influenced by Gibbs’s work in mechanics, Fisher not only imagined but actually built a hydraulic mechanism to simulate the determination of equilibrium prices and quantities—in effect, a hydraulic computer in the days before electronic computers ..."

Oddly enough, Fisher also wrote a paper that is an early harbinger of the Phillips curve literature. As Dimand and Betancourt write: "In a series of articles, Fisher correlated distributed lags of price level changes with economic activity and unemployment. His article “A Statistical Relationship between Unemployment and Price Level Changes” (1926 [1973]), little noticed when first published by the International Labour Office, attracted rather more attention when reprinted almost 50 years later in the Journal of Political Economy as “Lost and Found: I Discovered the Phillips Curve—Irving Fisher.”"

I'm not aware of any working models of Fisher's hydraulic computer, nor of any photographs of a working model. But  back in 2000, William C. Brainard and Herbert E. Scarf took on the task of investigating how the model worked in "How to Compute Equilibrium Prices in 1891."  They reprint these sketches of Fisher's hydraulic computer from his dissertation. It apparently consisted of a series of cisterns, rods, floats, bellows, and tubes. It represents three consumers and three goods that they consume.





Apparently, Fisher used his hydraulic model of the economy as a teaching tool for 25 years. Brainard and Scarf write (references omitted):

"Fisher regarded his model as "the physical analog of the ideal economic market," with the virtue that "The elements which contribute to the determination of prices are represented each with its appropriate role and open to the scrutiny of the eye ..." providing a "clear and analytical picture of the interdependence  of the  many elements in the causation of prices ... " Fisher also saw the machine as a way of demonstrating comparative static results, "... to employ the mechanism as an instrument of investigation and by it, study some complicated variations which could scarcely be successfully followed without its aid. ... 

"Although we do not know what experiments Fisher actually ran with his machine, he does describe eight comparative static exercises. Some of these illustrate basic features of the system, for example that proportional increases in money incomes result in an equal proportional increase of each price, with no change in the allocation of goods. Another simple exercise discussed by Fisher examines whether proportional increases in the endowment of goods necessarily result in proportional decreases in prices, as was apparently, and incorrectly, believed by Mill. Some exercises illustrate less intuitive properties of exchange economies: increasing one individual's income may make some other individual better off, and also the possibility of `immiserating growth,' i.e. increasing an individuals endowment of a good may actually lower his welfare."


The idea of a hydraulic computer seems anachronistic in these days of electronic computation, but as an illustrative teaching tool, I can imagine that watching flows of liquid rebalance might be at least as useful as watching a professor sketch a supply and demand diagram. In addition, the notion of the economy as a hydraulic set of forces still has considerable rhetorical power. We talk about "liquidity" and "bubbles." The Federal Reserve publishes "Flow of Funds" accounts for the U.S. economy. When economists talk about the financial crisis of 2008 and 2009, they sometimes talk in terms of financial "plumbing." For example, here's Darrell Duffie:

"And there has been a lot of progress made, but I do feel that we’re looking at years of work to improve the plumbing, the infrastructure. And what I mean by that are institutional features of how our financial markets work that can’t be adjusted in the short run by discretionary behavior. They’re just there or they’re not. It’s a pipe that exists or it’s a pipe that’s not there. And if those pipes are too small or too fragile and therefore break, the ability of the financial system to serve its function in the macroeconomy ... is broken. If not well designed, the plumbing can get broken in any kind of financial crisis if the shocks are big enough. It doesn’t matter if it’s a subprime mortgage crisis or a eurozone sovereign debt crisis. If you get a big pulse of risk that has to go through the financial system and it can’t make it through one of these pipes or valves without breaking it, then the financial system will no longer function as it’s supposed to and we’ll have recession or possibly worse."


I find myself wondering what a hydraulic model of an economy would look like if it also included bubbles, runs on financial institutions, and credit crunches--along with tubes that could break. Sounds messy, and potentially quite interesting.