A Solid Choice for CBO Director <h5> Expert Commentary </h5> <p>With their selection of <a href="">Keith Hall</a> to direct the Congressional Budget Office (CBO), the incoming chairs of the House and Senate Budget Committees, Dr. Tom Price and Senator Mike Enzi, have passed an unusually rigorous test. Their choice should be expected not only to serve lawmakers well, but also to reflect well upon Congress in the years ahead.</p> <p>The end of previous director Doug Elmendorf’s term fostered a dynamic unseen in earlier CBO director appointments. <a href="">Some commentators</a> attempted to discredit any choice the budget chairmen might make unless they took the highly unusual step of reappointing the other party’s outgoing choice. <a href="">Much of this pressure</a> was applied well before Senator Enzi was even installed as Senate Budget Committee chairman. Though CBO under Elmendorf served Congress extremely well, the dynamic was unfair in the sense that it raised the bar for the next appointment higher than was ever previously the case.</p> <p>Despite this heightened standard, the selection of Keith Hall cleared it easily. The chairmen needed to find someone with impeccable academic credentials, and they did (Hall not only holds an economics PhD from Purdue, but also served as chief economist for the White House’s Council of Economic Advisers (CEA)). They needed to find someone with the demonstrated ability to manage CBO, and they did (Hall previously ran the Bureau of Labor Statistics). They needed someone manifestly even-tempered and evidence-driven, and they got that in spades. Hall will quickly come to be recognized more widely as the soft-spoken and objective analyst his associates already know him to be.
He has been particularly good as a witness delivering congressional testimony, where his “just the facts” style suits what Congress needs from CBO.</p> <p>The selection of Hall provides ancillary benefits that may or may not have played a role in his selection. His research emphasis has been on labor economics. This expertise will serve CBO well, as the agency will almost certainly need to increase its attention to labor market analysis in the years ahead. It is rapidly becoming apparent that the United States faces enormous challenges with respect to maintaining labor supply as more baby boomers reach their 60s. It is equally apparent that much of federal law, from retirement policy to disability policy to health insurance subsidies, is poorly designed in the sense of fostering labor market distortions. Policymakers will need to wrestle with these problems in the years ahead, and CBO will be pulled into many such questions. In this context it is a significant advantage that CBO has an esteemed labor economist at its helm.</p> <p>I have had the pleasure of working with Keith Hall, first at the White House, where he was chief economist for CEA, and later at the Mercatus Center. He has long been a reliable source of objective information for decision-makers in both the executive and legislative branches. He is fairly described as highly intelligent, well informed, collegial, and unflappable. If a member of Congress’s temperature rises, Hall’s will stay the same. I would not be surprised if he becomes increasingly known for his succinct reserve; I have been joking to friends that CBO’s pronouncements may no longer require abridging to circulate via Twitter.</p> <p>Of course, like all CBO directors, Hall will need to further develop important skills in his new position.
A CBO director must be a diplomat as well as an analyst; he must explain CBO’s decisions to the press and public and must quickly master the nuances of the various ways members of Congress will, as it were, “work the referee.” But Hall will have a relatively free hand to develop these communications skills because he is already in firm possession of the essential analytical and managerial skills required for the job.</p> <p>The choice of a new CBO director was both a challenge and an opportunity for the incoming budget committee chairmen, to demonstrate a commitment to professional, responsible governance. They have done so with the selection of Keith Hall.</p> Wed, 04 Mar 2015 10:24:09 -0500 Will Robots Take Our Jobs? <h5> Expert Commentary </h5> <p>Today's robots may lack emotions, but they have quite a knack for rousing heated passions within their prevailing meaty overlords. Ray Kurzweil and his devotees daydream of a singularity rapture, when benevolent machine intelligence will overtake human knowledge to saturate and awaken the universe. On the gloomier side of the existential spectrum, Elon Musk recently donated $10 million to the Future of Life Institute to fight the rise of killer robots. Either way, these thinking machines are expected to be a pretty big deal. But we can hardly wait for cosmic horror or transhumanist actualization to start asking the tough questions: Will the robots take our jobs?</p> <p>Robots are nothing new. Industrial robots have been employed in manufacturing for about as long as polyester has been belabored in fashion. But unlike synthetic fibers, synthetic laborers have gotten much better over time. Digital employees consistently become cheaper, smarter, and more prevalent with each doubling of the number of transistors crammed into microprocessors. At their most salient, robots look a lot more like Kiva’s dumb and deferent deliverybots shuttling packages along Amazon warehouse floors than Neill Blomkamp’s charming CHAPPiE. 
But let’s not be crass humanoid supremacists here. Digital workers are much more than mere metal reflections of ourselves.</p> <p><a href="">Continue reading</a></p> Tue, 03 Mar 2015 14:29:01 -0500 Come Together, Over the Fed? <h5> Expert Commentary </h5> <p>Federal Reserve Board Chair Dr. Janet Yellen offered largely predictable testimony during last week’s hearings before the Senate Banking and House Financial Services Committees. Deviations from predictions have a way of standing out. Addressing a question from Banking Committee Chair Sen. Richard Shelby (R-Ala.), Dr. Yellen may have left the door open to bipartisan proposals that would reform how the Federal Open Market Committee (FOMC) votes on monetary policy, and do so in a manner that could more consistently promote broad economic opportunity over narrow distributional interests.</p> <p>The senator asked Dr. Yellen about a recent proposal from Richard Fisher, the Dallas Fed’s president and a Democrat, to broaden the representation of district banks on the FOMC. Rep. Kevin Brady (R-Texas) has long championed a related proposal. Relative to her pointed concerns about a Congressional push to audit the Fed, Dr. Yellen offered a more measured assessment of the bipartisan idea to let more district bank presidents vote in each FOMC meeting, and vote more frequently across meetings.</p> <p>Almost six years after the Great Recession, monetary policy continues to maintain an exceptionally accommodative stance. The length and depth of this accommodation has, in turn, raised concerns about the potential for asset bubbles, as well as the Fed’s ability to return to a more normal policy rate without creating undue market volatility. Congress has offered a number of proposals to address such concerns, with Sen. Rand Paul’s (R-Ky.)
call to “<a href="">audit the Fed</a>” being a prominent example.</p> <p>Increased Congressional oversight may be fundamentally limited, however, and may also compromise monetary policy if pushed too far. By its nature, oversight requires a look back at what happened, and thus relies on information that is second-hand and otherwise removed from those who make decisions in real time. And while informational constraints may lower the ceiling on how much good oversight can do, politics can increase how much bad it can do. Importantly, whether oversight favors narrow interests over broad opportunity can vary with the political preferences and capabilities of those who control the oversight mechanism.</p> <p>Better aligning the incentives of monetary policymakers with productive policy objectives could establish a stronger first line of defense against monetary mischief. Proposals like those of Fisher and Brady might do just that.</p> <p>District bank presidents are nominated by boards of directors who, in turn, are elected in large part by member banks operating in their respective districts. To the extent that these commercial banks earn revenue from fixed rate loans, they are naturally concerned about inflation threatening their solvency. District presidents may thus embody an especially strong duty to serve the Fed’s most basic statutory goal—that is, price stability.</p> <p>More than just a story, this firmly grounded hypothesis enjoys support in the academic literature. FOMC voting records during numerous executive administrations and Congresses reveal a significantly stronger tendency for district bank presidents, relative to Federal Reserve Board governors, to guard against inflation.</p> <p>Bipartisan proposals to give district presidents a stronger voice in monetary policy deliberations may be more than academic. Addressing Sen. Shelby’s question about such proposals last week, Dr.
Yellen observed that the FOMC’s voting structure is “of course, something that Congress could, if it wished, revisit.” Members of Congress interested in effective reforms to the Fed could stand on firm economic ground in doing just that.</p> Tue, 03 Mar 2015 17:31:16 -0500 Export-Import Is Still Boeing’s Bank <h5> Publication </h5> <p>The US Export-Import Bank (Ex-Im) has been called “Boeing’s Bank” because of the overwhelming benefits that the aerospace conglomerate has received from the federal export credit agency over the years. This week’s charts, which were created using figures from Ex-Im’s 2014 annual report, show that Boeing remains the primary beneficiary of the bank’s taxpayer-backed financing.</p><p><a href=""><img height="327" width="585" src="" /></a></p> <p>In fiscal year 2014, Ex-Im authorized $10.8 billion in long-term loan guarantees, which is the bank’s largest program. The first chart (<i>left</i>) shows that $7.4 billion of that amount, or 68.3 percent, belonged to Boeing. The second chart (<i>center</i>) shows that, of the $20.5 billion in total authorizations across all programs made by Ex-Im last year, 40 percent belonged to Boeing.</p> <p>The bank’s defenders often tout Ex-Im’s support for small businesses, but as the final chart (<i>right</i>) shows, Boeing’s 40 percent share of total authorizations dwarfs the 25 percent share for all small businesses combined.</p> Tue, 03 Mar 2015 11:44:12 -0500 Financing the Future: The Role of the Financial System in Fostering Economic Growth <h5> Events </h5> <p>A well-functioning financial system is a key ingredient in a well-functioning economy. Whether an existing company wants to bring a new product to market or a new company wants to compete with an existing company, access to funding is essential. Financial regulation can encourage or inhibit financial markets’ ability to provide the financial products and services necessary to support a vigorous, dynamic economy.
A well-designed regulatory system enables the financial system to work efficiently, effectively, and creatively to support economic prosperity.</p> <p>Please join us for this important conference to discuss how well the financial system is serving entrepreneurs, businesses, and the American people.</p> <p>Our speakers and panelists will address questions such as:</p> <ul><li>What is the historical role that financial markets have played in building the American economy?</li><li>How effectively are the financial markets working now to foster economic growth?</li><li>How have regulatory developments, including those in Dodd-Frank, affected the economy?</li><li>What financial regulatory reforms could encourage innovation and investment?</li></ul> <p>Confirmed speakers include:</p> <ul><li><a href="">Richard Berner</a>, U.S. Department of the Treasury</li><li>James Brown, Iowa State University</li><li><a href="">Thomas Hoenig</a>, Federal Deposit Insurance Corporation</li><li><a href="">Stephen Miller</a>, Mercatus Center</li><li><a href="">Hester Peirce</a>, Mercatus Center</li><li><a href="">Michael Piwowar</a>, U.S. Securities and Exchange Commission</li><li><a href="">Hal Scott</a>, Harvard Law School</li><li><a href="">Betsey Stevenson</a>, Council of Economic Advisors</li></ul> <p>Questions about this event? Please contact Julie Burden at&nbsp;<a href=""></a>&nbsp;or (703) 344-3219.</p> Tue, 03 Mar 2015 15:30:24 -0500 Regressive Effects: Causes and Consequences of Selective Consumption Taxation <h5> Publication </h5> <p class="p1">Governments often impose selective taxes on goods deemed to be unhealthy or poor choices. These taxes are often referred to as “sin taxes” in the academic literature and in policy circles. They may target items such as sugary drinks, candy, alcohol, and tobacco, and are designed to mitigate the social cost of consuming these goods. 
The question for policymakers is whether these taxes are beneficial to consumers and to society as a whole.</p> <p class="p1">In a <a href="">new study </a>for the Mercatus Center at George Mason University, scholars Adam Hoffer, Rejeana Gvillo, William F. Shughart II, and Michael D. Thomas show that selective consumption taxes may do little to change behavior and that the poor spend a far greater percentage of their disposable budgets on these selective consumption taxes. The study concludes that selective consumption taxes are both ineffective and regressive, and that improving education and increasing the availability of healthier goods may be better steps than raising taxes on those who can least afford them.</p> <p class="p3"><b>DESIGN AND THEORY</b></p> <p class="p1">The study explores how consumption of 12 goods varies across incomes by calculating the goods’ income-expenditure elasticities (whether income affects consumers’ demand for the goods). The 12 goods are alcohol, cigarettes, fast food, items sold at vending machines, purchases of food away from home, cookies, cakes, chips, candy, donuts, bacon, and carbonated soft drinks. Data come from the Bureau of Labor Statistics Consumer Expenditure surveys for 2009 through 2012.</p> <p class="p4"><b>Price Elasticity vs. Price Inelasticity<br /></b><span style="font-size: 12px;">Goods can range from very price elastic (as prices increase, demand will decrease substantially) to very price inelastic (as prices increase, demand will stay relatively the same). Some of the goods examined simply do not have healthier substitutes, or consumers are not willing to substitute.</span></p> <p class="p4"><b>Theory Behind Selective Taxes<br /></b><span style="font-size: 12px;">Selective taxes will increase the market prices of the targeted goods. This being the case, the quantities of the taxed goods consumers buy will fall and the quantities of untaxed or lesser-taxed goods consumers buy will rise. 
Consumption of disfavored goods, such as sugary beverages or cookies, can lead to health-related problems. Theoretically, then, reducing the quantity of these goods that consumers buy could provide a health benefit to society. However, a review of the academic literature shows that the goods studied are price inelastic and consumers who switch to substitutes tend to switch to goods equally high in calories.</span><span style="font-size: 12px;">&nbsp;</span></p> <p class="p3"><b>KEY FINDINGS</b></p> <p class="p4"><b>Selective Taxes are Regressive<br /></b><span style="font-size: 12px;">People with lower incomes tend to spend a much larger percentage of their budgets on disfavored goods than wealthier people do.</span></p> <ul class="ul1"> <li class="li6">Searching for adequate substitutes for certain goods can be time-consuming, costly, or otherwise undesirable. For example, switching from potato chips to apple slices may sound like a great idea, but if for whatever reason the apple slices are more expensive or more difficult to get, consumers will not switch. Moreover, some substitutes may not be healthier than the original good. A good example of this would be switching from soft drinks to fruit juice even though such a switch may not reduce calories in the daily diet.</li> <li class="li6">Approximately 2.3 million American families live more than a mile away from a grocery store and do not own a car. Areas without a grocery store within walking distance are called “food deserts” and include many lower-income neighborhoods. 
Lack of access to the full range of food choices limits some consumers, particularly lower-income consumers, in their ability to choose healthier substitutes for taxed goods.</li> </ul> <p class="p4"><b>Selective Taxes are Ineffective<br /></b><span style="font-size: 12px;">Quantities purchased of the 12 considered goods decrease little in response to increases in their prices, including increases caused by imposing new selective sales or excise taxes or raising existing tax rates.</span></p> <p class="p1">The evidence shows that the link between selective taxes and consumption of alcohol, sugary drinks, snack food, and other elements of poor diets is weak. This being the case, selective consumption taxes are unlikely to slow or reverse the ongoing obesity “epidemic.”</p> <p class="p3"><b>CONCLUSION</b></p> <p class="p1">Individuals who continue to purchase “unhealthy” items after a tax has been levied or raised will see a decline in their disposable income—the money they have available for spending on other goods—making it more difficult for them to climb out of poverty. Stuck in poverty, these individuals will also be unable to adopt healthier diets, or to change their behaviors in the ways desired by the supporters of selective consumption taxes. Because the types of goods targeted by these taxes have relatively inelastic demand—meaning consumers will keep purchasing them regardless of increases in price—the taxes are regressive in nature. A better way to reduce unhealthy eating habits would be to introduce healthy alternatives in areas where none were available previously and to educate people regarding healthy choices in their daily diets. These are less intrusive ideas that do not involve raising taxes on the poor.</p> Wed, 04 Mar 2015 10:32:21 -0500 Junk Food Taxes Don't Work <h5> Expert Commentary </h5> <p>What if I told you there was an easy way to fix various health problems that had a variety of benefits and very little cost? 
Your first reaction could be "let’s do it." This is the promise that comes with taxing items to change consumer behavior. But, after many years of failed policy attempts, you should be a bit more skeptical of this approach.</p> <p>In a new research paper, “Regressive Effects: Causes and Consequences of Selective Consumption Taxation,” to be published tomorrow by the Mercatus Center at George Mason University, my colleagues and I explore the taxation of junk food in more detail. So many other people were talking about the health effects of bad diet that we wanted to know more about how to stop the high rates of heart disease, diabetes and other health concerns that come from eating foods high in fat, salt and sugar.</p> <p>What we found is that many people live in areas where little else besides this type of food is available, areas called <a href="">food deserts</a> – or what the U.S. Department of Agriculture defines as “urban neighborhoods and rural towns without ready access to fresh, healthy, and affordable food.” On top of this, salt and sugar are the most popular preservatives. That means food can wait around until you get ready to eat it, unlike a banana or an apple slice. Convenience is an important aspect of food purchases.
It’s no wonder so many people choose to buy cheap, convenient food that might ultimately leave them subject to chronic health problems.</p> <p><a href="">Continue reading</a></p> Wed, 04 Mar 2015 10:44:10 -0500 Five Myths about Net Neutrality <h5> Expert Commentary </h5> <p>In view of the Federal Communications Commission (FCC) vote on February 26 to regulate the Internet under Title II of the New Deal–era Communications Act, it is critical to understand what these “net neutrality” rules will and will not do.</p> <p>Columbia Business School professor Eli Noam <a href="">says</a> net neutrality has “at least seven different related but distinctive meanings….” The consensus is, however, that net neutrality is a principle for how an Internet Service Provider (ISP) or wireless carrier treats Internet traffic on “last mile” access — the connection between an ISP and its customer. Purists believe net neutrality requires ISPs to treat all last-mile Internet traffic the same. The FCC will not enforce that radical notion because networks are becoming more “intelligent” every year and, <a href="">as a Cisco network engineer recently put it</a>, equal treatment for all data packets “would be setting the industry back 20 years.”</p> <p>Nevertheless, because similar rules were twice struck down in federal court, the FCC is crafting new net neutrality rules for ISPs and technology companies. Many of these Title II provisions reined in the old Bell telephone monopoly and are the most intrusive rules available to the FCC. The net neutrality rules are garnering increased public scrutiny because they will apply to one of the few bright spots in the US economy — the technology and communications sector.</p> <p>As with many complex concepts, there are many myths about net neutrality.
Five of the most widespread ones are dispelled below.</p> <p><b>Myth #1: The Internet Has Always Been Neutral<br /><img src="*hlZxNAlt1vTBXXulnH3GSQ.png" width="585" height="263" style="font-size: 12px;" /></b></p><p><b>Reality</b>: <a href="">Prioritization has been built into Internet protocols for years</a>. MIT computer scientist and early Internet developer David Clark colorfully dismissed this first myth as “<a href="">happy little bunny rabbit dreams</a>,” and pointed out that “[t]he network is not neutral and never has been.” Experts such as tech entrepreneur and investor <a href="">Mark Cuban</a> and President Obama’s former chief technology officer <a href="">Aneesh Chopra</a> have observed that the need for prioritization of some traffic increases as Internet services grow more diverse. People speaking face-to-face online with doctors through new telemedicine video applications, for instance, should not be disrupted by once-a-day data backups. ISPs and tech companies should be free to experiment with new broadband services without <a href="">time-consuming regulatory approval</a> from the FCC. <a href=";">John Oliver</a>, <a href="">The Oatmeal</a>, and <a href="">net neutrality activists</a>, therefore, are simply wrong about the nature of the Internet.</p><p><b>Myth #2:&nbsp;Net Neutrality Regulations Are the Only Way to Promote an Open Internet&nbsp;<br /><img src="*Vpni1qalhVw2zPoCEhMoSg.png" width="585" height="263" style="font-size: 12px;" /></b></p><p><b>Reality</b>: Even while lightly regulated, the Internet will remain open because consumers demand an open Internet. <a href="">Recent Rasmussen polling</a> indicates the vast majority of Americans enjoy the open Internet they currently receive and rate their Internet service as good or excellent. 
(Only a small fraction, 5 percent, says their Internet quality is “poor.”) It is in ISPs’ interest to provide high-quality Internet just as it is in smartphone companies’ interest to provide great phones and automakers’ interest to build reliable cars. Additionally, <a href="">high-profile scholars</a> and activists are wrong when they say there is no “cop on the beat” overseeing Internet companies. As Federal Trade Commissioner Joshua Wright <a href="">testified to Congress</a>, existing federal competition laws and consumer protection laws — and strict penalties — protect Americans from harmful ISP behavior.</p> <p><b><b>Myth #3: Net Neutrality Regulations Improve Broadband Competition<br /></b><img height="263" width="585" src="*SP_2N0CDDSFg9IsNYyvr9g.png" /><br /></b></p><p><b>Reality</b>: The FCC’s net neutrality rules are not an effective way to improve broadband competition. Net neutrality is a <a href="">principle for ISP treatment of Internet traffic</a> on the “last mile” — the connection between an ISP and a consumer. The principle says nothing about broadband competition and will not increase the number of broadband choices for consumers. On the contrary, net neutrality as a policy goal was created because many scholars did not believe more broadband choices could ensure a “neutral” Internet.
Further, Supreme Court decisions lead scholars to <a href=";pg=PA375&amp;lpg=PA375&amp;dq=credit+suisse+trinko+net+neutrality&amp;source=bl&amp;ots=yUAcFZCn2V&amp;sig=c73nspUDGX80RCBLfIhsH-NaIHA&amp;hl=en&amp;sa=X&amp;ei=uTjmVK_PBK3HsQSkrYJg&amp;ved=0CDoQ6AEwBA#v=onepage&amp;q=credit%20suisse%20trinko%20net%20neutrality&amp;f=false">conclude</a> that “as prescriptive regulation of a field waxes, antitrust enforcement must wane.” Therefore, the FCC’s net neutrality rules would actually impede antitrust agencies from protecting consumers.</p><p><b>Myth #4: All Prioritized Internet Services Are Harmful to Users<br /><img height="263" width="585" src="*yfGrp91AIeGCjvJrcN8T6A.png" /><br /></b></p><p><b>Reality</b>: Intelligent management of Internet traffic and <a href="">prioritization provide useful services to consumers</a>. Net neutrality proponents <a href="">call</a> zero-rating — which is when carriers allow Internet services that don’t subtract from a monthly data allotment — and similar practices “dangerous,” “malignant,” and rights <a href="">violations</a>. This hyperbole arises from dogma, not facts. The real-world use of prioritization and zero-rating is encouraging and pro-consumer. <a href="">Studies show</a> that zero-rated applications are used by millions of people around the globe, including in the United States, and they are popular. In one instance, poor South African high school students petitioned their carriers for free — zero-rated — Wikipedia access because accessing Wikipedia frequently for homework was expensive. Upon hearing the students’ plight, Wikipedia and South African carriers <a href="">happily obliged</a>. 
Net neutrality rules like Title II would prohibit popular services like zero-rating and intelligent network management that makes more services available.</p> <p><b>Myth #5: Net Neutrality Rules Will Make Broadband Cheaper and Internet Services like Netflix Faster<br /><img height="263" width="585" src="*X9wzk_jfa1OOllJ3gyWBFw.png" /><br /></b></p><p><b>Reality</b>: First, the FCC’s rules will make broadband more expensive, not cheaper. The rules regulate Internet companies much like telephone companies and therefore federal and state telephone fees will eventually apply to Internet bills. According to preliminary estimates, <a href="">millions of Americans</a> will drop or never subscribe to an Internet connection because of these price hikes. Second, the FCC’s rules will not make Netflix and webpages faster. The FCC rules do not require ISPs to increase the capacity or speed of customers’ connections. Capacity upgrades require competition and ISP investment, which may be harmed by the FCC’s onerous new rules.</p> <p>To see more from Mercatus scholars on net neutrality, visit <a href=""></a>.</p> Mon, 02 Mar 2015 15:17:12 -0500 What Is the Sharing Economy? <h5> Publication </h5> <p class="p1">While the bulk of this conversation has focused on regulating the sharing economy, little time has been spent actually defining what the sharing economy is and is not. The lack of a shared definition is why <a href="">Matthew Feeney can call it</a> “a relatively new and increasingly popular peer-to-peer economic model,” and why <a href="">Avi Asher-Schapiro</a> can call it “propaganda,” and how they can both be correct. To perhaps help clarify some of these issues, I’d like to propose a simple definition for the sharing economy.</p> <p class="p1">I agree with Avi’s point that taking an Uber might not be sharing. 
And I would argue that there may be little sharing actually occurring in the sharing economy.&nbsp; But that really isn’t the point.&nbsp; Instead, it is helpful to think of the sharing economy as <a href="">my colleagues and I have defined it before</a>: any marketplace that uses the Internet to connect distributed networks of individuals to share or exchange otherwise underutilized assets.</p> <p class="p1">When people talk about the sharing economy, they are very rarely focused on remuneration. The term is simply being used as shorthand to describe firms that offer a platform to connect individuals who have something with those who need it.</p> <p class="p1">A cash-strapped homeowner may not have seen her spare bedroom as capital until the <a href="">Airbnb</a> platform provided a way for her to rent it out to vacationers. A college student with an extra hour between classes may not have viewed his time as a profit opportunity until <a href="">Instacart</a> and <a href="">TaskRabbit</a> allowed him to put that time to use for others. A young couple may not have been able to use their couch to connect with other travelers from around the world, but can now do so through <a href="">Couchsurfing</a>. 
A retiree with a workbench full of power equipment may not have viewed his tools as a way to supplement his income until <a href="">1000 Tools</a> connected him with people in his area wanting to borrow tools.&nbsp; This is the sharing economy.</p> <p class="p1">While some <a href="">may choose to call this</a> “the peer economy,” “peer production,” “the collaborative economy,” or “collaborative consumption,” each is simply a different attempt to describe the shifts taking place in the way individuals are choosing to transact and interact with one another.</p> <p class="p1">Regardless of what you are calling it, it has very real benefits.&nbsp; In <a href="">recent research</a>, Adam Thierer, Matthew Mitchell, and I highlight five distinct ways that the sharing economy is creating real value for both consumers and producers:</p> <ol class="ol1"> <li class="li2">By giving people an opportunity to use others’ cars, kitchens, apartments, and other property, it allows underutilized assets or “dead capital” to be put to more productive use.</li> <li class="li2">By bringing together multiple buyers and sellers, it makes both the supply and demand sides of its markets more competitive and allows greater specialization.</li> <li class="li2">By lowering the cost of finding willing partners, haggling over terms, and monitoring performance, it cuts transaction costs and expands the scope of trade.</li> <li class="li2">By aggregating the reviews of past consumers and producers and putting them at the fingertips of new market participants, it can significantly diminish the problem of asymmetric information between producers and consumers.</li> <li class="li2">By offering an “end-run” around regulators who have been captured by existing producers, it allows suppliers to create value for customers long underserved by incumbents that have become inefficient and unresponsive.</li> </ol> <p class="p1">Finally, it is important to remember that many of the policy
problems that <a href="">Avi Asher-Schapiro</a> and <a href="">Dean Baker</a> presented are often failings of particular firms and business models within the sharing economy and not problems with the entire industry. Viewed in this light, every one of these failures represents a profit opportunity for a new or rival firm interested in improving the customer experience. Preemptively regulating an entire class of firms based on these anecdotes can be dangerous. As Jim Dwyer of the <i>New York Times</i> <a href="">recently warned</a>, “Be careful around anecdotes; they are the black ice of reality.” The sharing economy is too diverse and too rapidly evolving, and these sorts of pixel-sized stories should not be mistaken for larger, universal truths.</p> <p class="p1">The real issues should not be lost in the noise.&nbsp; Are people sharing? Not always. But, then again, that really isn’t what the sharing economy is about. Instead, they are gaining from mutually beneficial interactions that would not be possible without the sharing economy’s platforms.</p> Mon, 02 Mar 2015 12:17:21 -0500 Agency Analysis Rarely Used to Inform Regulatory Decisions <h5> Publication </h5> <p class="p1">Since 2008, the Mercatus Center’s Regulatory Report Card series has evaluated the quality of executive branch agencies’ Regulatory Impact Analyses for major regulations and the extent to which agencies explain how they used the analyses in their decisions. A good Regulatory Impact Analysis raises the odds that regulators will create a regulation that solves a real problem at a reasonable cost.</p> <p class="p1">One Report Card criterion asks whether the agency claimed, or appeared to use, any part of the analysis to guide decisions.
As the chart below demonstrates, agencies often fail to explain how the Regulatory Impact Analysis helped inform their decisions.</p> <p class="p1"><a href=""> <img height="398" width="585" src="" /></a></p> <p class="p1">This disconnect between analysis and decisions occurs because Regulatory Impact Analysis is currently required only by executive orders, not statutes. If an administration decides that political priorities are more important than solving real problems at an acceptable cost, agencies can ignore their own analyses. A stronger enforcement mechanism may be needed. Legislation requiring economic analysis of proposed regulations, for example, could allow judges to invalidate regulations unaccompanied by a thorough economic analysis and an explanation of how the agency used that analysis.</p> <p class="p1">Reasonable people may disagree about how much and what type of regulation is justified, but we should all be able to agree that government owes the public a clear explanation of how it’s making regulatory decisions.</p> Tue, 03 Mar 2015 12:16:56 -0500 Certificate-of-Need Laws: Implications for Florida <h5> Publication </h5> <p class="p1">Thirty-six states and the District of Columbia currently limit entry or expansion of health care facilities through certificate-of-need (CON) programs.<sup>1</sup> These programs prohibit health care providers from entering new markets or making changes to their existing capacity without first gaining the approval of state regulators. 
Since 1973, Florida has been among the states that restrict the supply of health care in this way, with 11 devices and services—ranging from acute hospital beds to organ transplants to psychiatric services—requiring a certificate of need from the state before the device may be purchased or the service may be offered.<sup>2</sup>&nbsp;</p> <p class="p1">CON restrictions are in addition to the standard licensing and training requirements for medical professionals, but are neither designed nor intended to ensure public health or that medical professionals have the necessary qualifications to do their jobs. Instead, CON laws are specifically designed to limit the supply of health care, and are traditionally justified with the claim that they reduce and control health care costs.<sup>3</sup> The theory is that by restricting market entry and expansion, states might reduce overinvestment in facilities and equipment. In addition, many states—including Florida—justify CON programs as a way to cross-subsidize health care for the poor. Under these “charity care” requirements, providers that receive a certificate of need are typically required to increase the amount of care they provide to the poor. In effect, these programs intend to create quid pro quo arrangements: state governments restrict competition, increasing the cost of health care for some, and in return medical providers use these contrived profits to increase the care they provide to the poor.<sup>4</sup>&nbsp;</p> <p class="p1">However, these claimed benefits have failed to materialize as intended. Recent research by Thomas Stratmann&nbsp;<span style="font-size: 12px;">and Jacob Russ demonstrates that there is no relationship between CON programs and increased access to health care for the poor.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">5</sup><span style="font-size: 12px;"> There are, however, serious consequences for continuing to enforce CON regulations. 
In particular, for residents of Miami-Dade County (Florida’s most populous county), these programs result in approximately 3,428 fewer hospital beds, between 5 and 10 fewer hospitals offering magnetic resonance imaging (MRI) services, and 18 fewer hospitals offering computed tomography (CT) scans. For those seeking quality health care throughout Florida, this means less competition and fewer choices, without increased access to care for the poor.&nbsp;</span></p> <p class="p1"><b>THE RISE OF CON PROGRAMS&nbsp;<br /></b><span style="font-size: 12px;">CON programs were first adopted by New York in 1964 as a way to strengthen regional health planning programs. Over the following 10 years, 23 other states adopted CON programs.<sup>6</sup> Many of these programs were initiated as “Section 1122” programs, which provided federal Medicare and Medicaid reimbursement for certain approved capital expenditures. Florida enacted its first CON program in 1973. The passage of the National Health Planning and Resources Development Act of 1974, which made certain federal funds contingent on the enactment of CON programs, provided a strong incentive for the remaining states to implement CON programs.<sup>7</sup> In the seven years following this mandate, nearly every state without a CON program took steps to adopt certificate-of-need statutes. By 1982, every state except Louisiana had some form of a CON program.&nbsp;</span></p> <p class="p1">In 1987, the federal government repealed its CON program mandate when the ineffectiveness of CON regulations as a cost-control measure became clear. Twelve states rapidly followed suit and repealed their certificate-of-need laws in the 1980s.<sup>8</sup> By 2000, Indiana, North Dakota, and Pennsylvania had also repealed their CON programs. 
Since 2000, Wisconsin has been the only state to repeal its program.&nbsp;<span style="font-size: 12px;">&nbsp;</span></p> <p class="p1">Florida remains among the 36 states, along with the District of Columbia, that continue to limit entry and expansion within their respective health care markets through certificates of need. On average, states with CON programs regulate 14 different services, devices, and procedures. Florida’s CON program currently regulates 11, fewer than the national average.<span style="font-size: 12px;">&nbsp;</span></p> <p class="p1">As figure&nbsp;1 shows, Florida’s certificate-of-need program ranks as the 26th most restrictive in the United States.&nbsp;</p> <p class="p1"><a href=""><img src="" width="585" height="409" /></a></p> <p class="p1"><b>DO CON PROGRAMS CONTROL COSTS AND INCREASE THE POOR’S ACCESS TO CARE?&nbsp;<br /></b><span style="font-size: 12px;">Many early studies of CON programs found that these programs fail to reduce investment by hospitals.<sup>9</sup> These early studies also found that the programs fail to control costs.<sup>10</sup> Such findings contributed to the federal repeal of CON requirements. More recently, research into the effectiveness of remaining CON programs as a cost-control measure has shown mixed results. While some studies find that CON regulations may have some limited cost-control effect,<sup>11</sup> others find that strict CON programs may in fact increase costs by 5 percent.<sup>12</sup> The latter finding is not surprising, given that CON programs restrict competition and reduce the available supply of regulated services.&nbsp;</span></p> <p class="p1">While there is little evidence to support the claim that certificates of need are an effective cost-control measure, many states continue to justify these programs using the rationale that they increase the provision of health care for the poor. 
To achieve this, 14 states—including Florida—include some requirement for charity care within their respective CON programs.<sup>13</sup> This is what economists have come to refer to as a “cross subsidy.”<sup>14&nbsp;</sup><span style="font-size: 12px;">&nbsp;</span></p> <p class="p1">The theory behind cross-subsidization through these programs is straightforward. By limiting the number of providers that can enter a particular practice, and by limiting the expansion of incumbent providers, CON regulations effectively give a limited monopoly privilege to providers that receive approval in the form of a certificate of need. Approved providers are therefore able to charge higher prices than would be possible under truly competitive conditions. As a result, it is hoped that providers will use their enhanced profits to cover the losses from providing otherwise unprofitable, uncompensated care to the poor. In effect, those who can pay are charged higher prices to subsidize those who cannot.&nbsp;</p> <p class="p1">In reality, however, this cross-subsidization is not occurring. While early studies found some evidence of cross-subsidization among hospitals and nursing homes,<sup>15</sup> the more recent academic literature does not show this cross-subsidy taking place. The most comprehensive empirical study to date, conducted by Thomas Stratmann and Jacob Russ, finds no relationship between certificates of need and the level of charity care.<sup>16<span style="font-size: 12px;">&nbsp;</span></sup></p> <p class="p1"><b>THE LASTING EFFECTS OF FLORIDA’S CON PROGRAM&nbsp;<br /></b><span style="font-size: 12px;">While certificates of need neither control costs nor increase charity care, they continue to have lasting effects on the provision of health care services both in Florida and in the other states that continue to enforce them. 
However, these effects have largely come in the form of decreased availability of services and lower hospital capacity.&nbsp;</span></p> <p class="p1">In particular, Stratmann and Russ present several striking findings regarding the provision of health care in states implementing CON programs. First, CON programs are correlated with fewer hospital beds.<sup>17</sup> Throughout the United States, there are approximately 362 beds per 100,000 persons. However, in states such as Florida that regulate acute hospital beds through their CON programs, Stratmann and Russ find 131 fewer beds per 100,000 persons. For a county like Miami-Dade, with its population of approximately 2.62 million, this means that there are about 3,428 fewer hospital beds as a result of the state’s CON program.</p> <p class="p1">Moreover, several basic health care services that are used for a variety of purposes are limited because of Florida’s CON program. Across the United States, an average of six hospitals per 500,000 persons offer MRI services. In states such as Florida that regulate the number of hospitals with MRI machines, the number of hospitals that offer MRIs is reduced by between one and two per 500,000 persons.<sup>18</sup> As a result, in an area like Miami-Dade County there are approximately five to ten fewer hospitals offering MRI services. Florida’s CON program also affects the availability of CT services. While an average of nine hospitals per 500,000 persons offer CT scans, CON regulations are associated with a 37 percent decrease in these services. For the 2.62 million people living in Miami-Dade, this could mean about 18 fewer hospitals offering CT scans.</p> <p class="p1"><b>CONCLUSION&nbsp;<br /></b><span style="font-size: 12px;">CON programs were intended to limit the supply of health care services within a state; proponents claimed that these limits were necessary either to control costs or to increase the amount of charity care being provided. 
However, 40 years of evidence demonstrate that these programs do not achieve their intended outcomes but rather decrease the supply and availability of health care services by limiting entry and competition. For policymakers in Florida, this situation presents an opportunity to reverse course and open the market for greater entry, more competition, and ultimately more options for those seeking care.</span></p> Tue, 03 Mar 2015 09:59:39 -0500 Is It Fair to Tax Capital Gains at Lower Rates than Earned Income? <h5> Expert Commentary </h5> <p>Capital gains—and how big a bite the government should take out of them—have become a major point of contention in the past couple of months.</p> <p>In January, President Obama proposed tax changes designed to raise some $320 billion over 10 years, largely through higher levies on high-income Americans. The revenue would be used to cover $235 billion in tax breaks, mostly for moderate-income workers, along with other initiatives.</p> <p>Among the changes he proposed: boosting the capital-gains rate to 28% for the top 1% of taxpayers, up from the current 23.8%, as well as a new capital-gains tax on many inheritances.</p> <p>The GOP fired back that taxing investment income would harm economic growth by discouraging business investment and thereby hurt workers’ incomes.</p> <p>All of which points to a broader question that divides experts: Are capital gains so different from earned income that they should be taxed at a different rate?</p> <p>Below, two experts tackle that question. Scott Sumner is professor of economics at Bentley University and the Ralph G. Hawtrey chair of monetary policy at the Mercatus Center at George Mason University, where he is director of its program on monetary policy. Leonard E. 
Burman, director of the Urban-Brookings Tax Policy Center and the Paul Volcker chair in behavioral economics and professor of public administration and international affairs at Syracuse University’s Maxwell School, is author of “The Labyrinth of Capital Gains Tax Policy: A Guide for the Perplexed.” They can be reached at <a href=""></a>.</p><p class="p1"><b>YES: It Makes Sense for Individuals—and the Economy</b></p> <p class="p2"><b>By Scott Sumner</b></p> <p class="p2">To many people, investment income should obviously be taxed at the same rate as labor income. After all, income is income, right?</p> <p class="p2">But it’s not that simple. There are compelling reasons to treat capital gains differently from other earnings.</p> <p class="p2">For one thing, taxes on investment earnings effectively double-tax that income. Labor income is taxed when it is earned, and investments are generally made out of after-tax earnings—so capital-gains levies represent another bite out of an investor’s money.</p> <p class="p2">In effect, the system punishes those who put their money to work. Raising the capital-gains tax rate would just make the punishment that much more drastic.</p> <p class="p2">This question doesn’t simply affect people who invest—it affects the entire economy. Investment capital is one of the most important drivers of economic growth, and the promise of big capital gains is an important inducement to get people to put money into critical but risky fields like biotechnology. If we want more inventions, or a faster cure for cancer, then we should have lower capital-gains taxes.</p> <p><a href="">Continue reading</a></p> Wed, 04 Mar 2015 16:59:25 -0500 Regulatory Reform Can Amount to a Progressive Tax Refund, If Done Right <h5> Publication </h5> <p class="p1"><b>INTRODUCTION&nbsp;</b><br /><span style="font-size: 12px;">Chairman Marino, Ranking Member Johnson, and members of the committee: thank you for inviting me to testify today. 
As an economist and senior research fellow at the Mercatus Center at George Mason University, I focus my primary research on regulatory accumulation and the regulatory process, so it is my pleasure to testify on today’s topic. In previous research and testimony, I have highlighted the fact that regulatory accumulation creates substantial drag on economic growth by impeding innovation and entrepreneurship.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">1</sup><span style="font-size: 12px;"> Today, I have three main points that may help you to examine the reforms under consideration. First, I will discuss the regressive effects of regulatory accumulation—or, to put it another way, why retrospective analysis of regulations can result in what amounts to a progressive tax refund, with benefits going largely to lower-income Americans.</span></p> <p class="p4">Second, I will highlight how an increasingly long and complex regulatory code can actually make the task of achieving risk reduction in the workplace more difficult.</p> <p class="p4">Third, I will argue that not all attempts at regulatory reform are equal. In my research, I have found several factors that tend to contribute to meaningful and successful regulatory and governmental reform efforts. The most important of these is the use of an independent group or commission to identify regulations that need to be modified or eliminated. Any retrospective analysis effort that leaves this task in the hands of the same agencies that created the regulations in the first place is unlikely to succeed. 
I highlight some other important principles as well, but the independence of the reviewers is the most important.</p> <p class="p6"><b>REGRESSIVE EFFECTS OF REGULATIONS</b>&nbsp;<br /><span style="font-size: 12px;">Regulations can be regressive, particularly in their effects on prices paid by consumers.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">2</sup><span style="font-size: 12px;"> A regressive regulation is one whose burden disproportionately falls on lower-income individuals and households. When regulations force producers to use more expensive production processes or inputs, some of those production cost increases are passed along to consumers in the form of higher prices. For example, in 2005, the Food and Drug Administration banned the use of chlorofluorocarbons as propellants in medical inhalers, such as the inhalers that millions of Americans use to treat asthma.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">3</sup><span style="font-size: 12px;"> This ban was enacted because of environmental concerns rather than health or safety concerns. Since the implementation of that ban, the average price of asthma inhalers has tripled.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">4</sup><span style="font-size: 12px;"> While individuals with high incomes might be able to absorb this price increase, the higher price may force people with low incomes to forgo buying an inhaler and leave their asthma untreated—potentially leading to a real human cost if the person suffers an asthma attack without an inhaler available.</span></p> <p class="p4">When regulations cause the prices of goods and services to increase, lower-income households have to make a choice: no longer buy those goods, substitute them with something else if possible, or buy less of the more expensive good. 
This can have the unintended consequence of leaving lower-income families unable to purchase a good or service that is a medical necessity or that would have reduced their risk of accidental death or injury. I have attached a study by economist Diana Thomas that gives more details on the regressive effects of regulations.&nbsp;</p> <p class="p4">The cumulative cost of regulations amounts to a hidden, regressive burden. But it’s a burden that could be lightened. In fact, one way of viewing that burden is as an opportunity: retrospective analysis that eliminates a portion of the regulatory cost burden would act as a progressive tax refund. Let me explain with an example that will illustrate how reducing the regulatory burden is similar to a tax refund that primarily benefits poorer Americans.</p> <p class="p4">While economists have not yet reached consensus on how to calculate the total cost of regulation, several estimates exist. For example, economists John Dawson and John Seater estimate that regulatory accumulation slows economic growth by about 2 percent per year.<sup>5</sup> The latest OIRA report to Congress on the benefits and costs of regulations estimates that a small subset of regulations reviewed cost the economy between $57 billion and $84 billion in 2001 dollars.<sup>6 </sup>Converted to 2014 dollars, this range is from $76.19 billion to $112.29 billion.<sup>7 </sup>At the other end of the spectrum, Clyde Wayne Crews estimates the annual cost of regulations to be around $1.882 trillion.<sup>8</sup> For this example, I’ll use the midpoint between $57 billion and $1.882 trillion, which is $969 billion. Consider this the annual regulatory burden shared across all households in the economy. As of 2013, there were 115,610,216 households in the United States. We can estimate the regulatory burden per household by simply dividing the midpoint cost estimate, $969 billion, by the number of households. 
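This back-of-the-envelope arithmetic can be reproduced in a few lines of Python. This is only an illustrative sketch, not the testimony's exact methodology: all inputs are figures quoted in the testimony (the 15 percent scenario and the household income figures are discussed in the text that follows), while the variable names and rounding are my own.

```python
# Sketch of the per-household regulatory burden estimate described in the
# testimony. All inputs are figures quoted in the text; names and rounding
# are illustrative.

low_estimate = 57e9            # low end of OIRA's range (as quoted)
high_estimate = 1.882e12       # Crews's estimate of annual regulatory cost
households = 115_610_216       # US households as of 2013

midpoint = (low_estimate + high_estimate) / 2        # ~$969 billion
burden_per_household = midpoint / households         # ~$8,386
refund = 0.15 * burden_per_household                 # ~$1,258 per year

# Refund as a share of income for the three illustrative households
incomes = {"low": 28_252, "middle": 51_900, "high": 282_520}
shares = {name: refund / income for name, income in incomes.items()}

print(round(burden_per_household))                   # 8386
print(round(refund))                                 # 1258
print({name: f"{share:.1%}" for name, share in shares.items()})
```

The low-income household's share comes out to roughly 4.5 percent of annual income, versus about 0.4 percent for the high-income household, consistent with the figures quoted in the testimony.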
This division yields about $8,386 per household.</p> <p class="p4">Now consider a regulatory reform that would reduce this cost burden by 15 percent. If the regulatory cost burden per household is $8,386, then a 15 percent reduction would equal about $1,258 per household per year. This reduction in cost burden is effectively an annual regulatory cost refund and would have different impacts on low-, middle-, and high-income households. In this example, I define a low-income household as a family of five with three children under the age of 18 earning a household income exactly equal to the Census poverty threshold for 2014: $28,252. For the middle-income household, I use the median household income in 2013 (the latest year available): $51,900. For the high-income household, I follow Diana Thomas’s calculations and use a household income equal to 10 times the poverty threshold: $282,520. Table 1 shows what a reduction in regulatory costs of $1,258 would equal, relative to household income and in percentage terms.</p> <p class="p4"><img height="133" width="585" src="" /></p><p class="p1">As table 1 shows, a reduction in regulatory burden of $1,258 would have a much larger effect on the purchasing power of the low-income household than on the middle- or high-income households. To the low-income household, the regulatory cost refund would equal nearly 5 percent of one year’s household income. Conversely, to the high-income household, it would equal only 0.4 percent of one year’s income. This example shows that a regulatory cost refund of any amount would work just like a progressive tax cut, helping low- and middle-income households relatively more than high-income households. Even better, unlike one-time tax rebates, this regulatory cost refund would repeat every year.&nbsp;</p> <p class="p1"><b>INCREASING INABILITY TO PRIORITIZE COMPLIANCE&nbsp;<br /></b><span style="font-size: 12px;">One concern that accompanies regulatory accumulation is called regulatory overload. 
Firms are compelled by law to comply with regulations, regardless of whether the regulations are effective at solving a particular problem. In a 2011 study, psychologist Andrew Hale and his coauthors find that as the number of rules increases, the rules themselves become less effective.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">9</sup><span style="font-size: 12px;"> They also find that as the number of rules increases, companies tend to rely on rigid, checklist-style strategies that ensure compliance with the letter of the law rather than proactive risk management strategies that may be more effective at reducing health and safety risks in the workplace. They call these problems regulatory overload.&nbsp;</span></p> <p class="p1">Certainly, as regulations accumulate, risk managers’ attention will be spread across a greater number of rules. If any of those rules are not actually effective in reducing risk, the attention paid to those rules will detract from compliance with functional rules.&nbsp;</p> <p class="p1"><b>PRINCIPLES FOR SUCCESSFUL REFORM&nbsp;<br /></b><span style="font-size: 12px;">As I have previously testified,<sup>10</sup> the need to eliminate or modify nonfunctional regulations from the accumulated stock has been widely recognized by members of Congress and every president since Carter.<sup>11</sup> Functional rules address current, significant risks; mitigate some amount of those risks through compliance with the regulations; and do not have significant unintended effects or excessive compliance costs relative to their benefits. Nonfunctional rules are missing one or more of these features. 
The key to achieving significant amelioration of the problem of regulatory accumulation is first identifying as many nonfunctional rules as possible and then either eliminating them or changing them so that they become functional.&nbsp;</span></p> <p class="p1">Executive branch attempts to examine and revise or eliminate existing nonfunctional regulations have primarily relied on executive orders for review of the need for regulations rather than creating a streamlined, evidence-based analytical process that could accomplish large-scale reform. In a 2014 study I coauthored with economist Richard Williams (attached), we examine previous efforts at regulatory reform led by every president since Reagan and conclude that these episodes yielded only marginal improvements at best. Most notably, none of these efforts resulted in either substantial reductions relative to the total size of the Code of Federal Regulations (CFR) or sustained changes in the rate of adding new regulations to the CFR.<sup>12</sup>&nbsp;</p> <p class="p1">Figure 1 shows just how little the regulatory process has changed, despite these presidential efforts. Since 1975, the CFR has expanded in 30 of 37 years. In those 30 expansionary years, 117,294 pages were added to the CFR. In contrast, in the seven contractive years, 17,871 pages were subtracted from the CFR—for net growth of nearly 100,000 pages. Previous efforts to eliminate obsolete regulations have removed only very small percentages of existing regulations from the books.</p> <p class="p1"><a href=""><img height="415" width="585" src="" /></a></p> <p class="p1">The failure of past regulatory review efforts likely stems from a fundamental misalignment of incentives: agencies, despite direction from the president, have incentives to maintain and increase their regulations to maximize their budgets and control over their portion of the economy. 
In turn, to retain regulations that would be eliminated otherwise, agencies may either hide or fail to produce information that would help identify obsolete or ineffective regulations in the first place. We should not expect agencies to give any better assessments of their own rules than professors would expect of students grading their own tests.&nbsp;</p> <p class="p1">Similarly, individuals in agencies have little incentive to provide information that would lead to a rule’s elimination or the choice not to produce a rule.<sup>13</sup> In general, employees—including economists—are professionally rewarded for being part of teams that create new regulations or expand existing regulatory programs.<sup>14 </sup>Conversely, employees are rarely rewarded for deciding that a regulation should not be created. This is unfortunate, because specialists in agencies are likely to have some relevant information about which rules are nonfunctional.&nbsp;</p> <p class="p1">However, the issues that have plagued previous, executive branch–led efforts at regulatory reform can be overcome. In previous research, I identified 11 characteristics of successful regulatory reform, derived from lessons learned by studying the Base Realignment and Closure (BRAC) process, regulatory reform in other countries, and previous attempts at retrospective review in the United States.<sup>15</sup> I highlight a few of these below, for the purposes of assessing the reforms currently under consideration.&nbsp;<span style="font-size: 12px;">&nbsp;</span></p> <p class="p1">1. The process of identifying rules for modification or elimination should entail independent assessment of whether regulations are functional. To be classified as functional, a rule must&nbsp;</p> <p class="p1" style="padding-left: 30px;">1. address a current risk,&nbsp;</p> <p class="p1" style="padding-left: 30px;">2. address a significant risk,&nbsp;</p> <p class="p1" style="padding-left: 30px;">3. 
not result in ongoing costs (including unintended consequences) that more than offset the ongoing benefits of the rule, and&nbsp;</p> <p class="p1" style="padding-left: 30px;">4. not interfere with or duplicate other rules.&nbsp;</p> <p class="p1">It is vital that the assessment of a rule with respect to each of these criteria be performed objectively. If the body tasked with the analysis of a rule has incentive to find that the rule is functional or is nonfunctional, the review risks becoming an exercise in advocacy rather than objective analysis. The SCRUB Act, for example, creates a commission with the authority to hire analysts and experts necessary for such an assessment and to collect essential information for those purposes. The SCRUB Act sets forth criteria for regulatory assessment that are not very different from how I define “nonfunctional” rules in my own research. While it is wise to build in flexibility for the commission to devise new criteria in response to future lessons learned, it is equally important that any commission be required to publicly disclose its complete assessment criteria and take comments from the public on them.&nbsp;</p> <p class="p1">2. The identification process must be broad enough to identify potentially duplicative regulations.&nbsp;<br /><span style="font-size: 12px;">Duplication and redundancy across agencies may be a large source of nonfunctional rules. For example, multiple agencies through different regulations may address food safety. In light of this source of nonfunctional rules, analysis that is focused on individual rules or the rules of a single agency may not capture factors (e.g., conflicts, duplication) that indicate certain rules are in fact nonfunctional.</span></p> <p class="p1">3. The analysis of the functionality of rules should use a standard method of assessment that is difficult to subvert. 
Nobel Prize–winning economist Ronald Coase famously said, “If you torture the data long enough, it will confess to anything.” So it goes with any analysis: those who perform the analysis can choose the data to examine, how to analyze them, and the framework within which to present results. This is a primary reason why I recommend that retrospective analysis of regulations not be left in the hands of agencies that have incentive to find specific results. However, a similar logic applies to an independent body that analyzes regulations. In the long run, we would have to worry about whether the body can maintain its independence and whether political or other pressure would be exerted on the body to subvert its analyses to serve an agenda. The best way to prevent such subversion is to require a simple, transparent, and replicable methodology of assessment. Under the SCRUB Act, the commission is required to specify a methodology for assessment. Doing so publicly and before beginning the assessment will help achieve a transparent, objective end product.&nbsp;</p> <p class="p1">4. Whatever the procedure for assessment, assessments of specific regulations or regulatory programs should focus on whether and how they lead to the outcomes desired. The SCRUB Act lists as one of the criteria for assessment “whether the rule or set of rules is ineffective at achieving the rule or set’s purpose.” To meet my criteria, this phrase should mean achieving desired outcomes, as opposed to producing outputs. A rule may lead to an increase in an output, such as increased safety inspections, but that does not guarantee that there has been an increase in the outcome, safety.&nbsp;</p> <p class="p1">5. Congressional action—such as a joint resolution of disapproval—should be required to stop the recommendations, as opposed to a vote to enact or not enact. 
The SCRUB Act could be improved if it were modified to formally limit Congress’s ability to subvert the process of selecting rules for elimination or modification. As the creators of the BRAC process recognized, every base targeted for closure had a champion defending it in Congress: the member whose constituency would be affected by the closure. So it would likely be with regulations slated for revocation. A better solution would be to follow the BRAC experience and require that a SCRUB Act commission’s recommendations take effect automatically unless Congress were to enact a joint resolution of disapproval of the entire set of recommendations—with no amendments allowed.</p> <p class="p1">6. The review process should repeat indefinitely. The SCRUB Act provides for a dissolution of the commission by a specific date. Given the possibility that the commission cannot evaluate all regulations before that date, it may be worthwhile to extend the life of the commission until all regulations are evaluated at least once, or even have the commission continue on an ongoing basis. The regulatory process will lead to regulatory accumulation again. This commission could balance the tendency to accumulate regulations with a deliberate and streamlined process for eliminating nonfunctional regulations if and when they appear.&nbsp;</p> <p class="p1"><b>CONCLUSIONS&nbsp;<br /></b><span style="font-size: 12px;">Regulatory accumulation in the United States, which slows economic growth by impeding innovation and entrepreneurship, is now a widely recognized problem. Furthermore, the costs of regulation are disproportionately borne by low-income households, and the accumulation of regulations may make us less safe overall as compliance becomes more thinly spread between functional and nonfunctional rules. Regulatory reform that reduces the overall burden of regulations would act as a progressive tax refund for American households. 
Nonetheless, the problem has not been meaningfully addressed despite the efforts of several administrations.</span></p> <p class="p1">One reason it has been hard to address regulatory accumulation is the difficulty of identifying nonfunctional rules—rules that are obsolete, unnecessary, duplicative, or otherwise undesirable. An independent group or commission—not regulatory agencies—seems required to successfully identify nonfunctional rules.&nbsp;</p> <p class="p1">The SCRUB Act has several characteristics that make it more likely to succeed where previous attempts have failed. First, it appoints an independent commission to identify nonfunctional rules. Second, the act requires that the commission establish a methodology before beginning the assessment of rules, thereby minimizing opportunities for the assessment to be subverted by special interests. Third, the act establishes criteria that the commission would use to identify nonfunctional rules, and these criteria are primarily based on fundamental problem-solving and sound economic thinking.</p> Tue, 03 Mar 2015 12:13:14 -0500 Initial Thoughts on Obama Administration’s “Privacy Bill of Rights” Proposal <h5> Expert Commentary </h5> <p>The Obama Administration has just released a draft “<a href="">Consumer Privacy Bill of Rights Act of 2015</a>.” Generally speaking, the bill aims to translate fair information practice principles (FIPPs) — which have traditionally been flexible and voluntary guidelines — into a formal set of industry best practices that would be federally enforced on private sector digital innovators. 
This includes federally mandated Privacy Review Boards, approved by the Federal Trade Commission, the agency that will be primarily responsible for enforcing the new regulatory regime.</p> <p>Many of the principles found in the Administration’s draft proposal are quite sensible as best practices, but the danger here is that they could soon be converted into a heavy-handed, bureaucratized regulatory regime for America’s highly innovative, data-driven economy.</p> <p>No matter how well-intentioned this proposal may be, it is vital to recognize that restrictions on data collection could negatively impact innovation, consumer choice, and the competitiveness of America’s digital economy.</p> <p>Online privacy and security are vitally important, but we should pursue alternative, less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.</p> <p>That is why flexible data collection and use policies and evolving best practices will ultimately serve consumers better than one-size-fits-all, top-down regulatory edicts. Instead of imposing these FIPPs in a rigid regulatory fashion, privacy and security best practices will need to evolve gradually in response to new marketplace realities and be applied in a more organic and flexible fashion, often outside the realm of public policy.</p> <p>Regulatory approaches, like the Obama Administration’s latest proposal, will instead impose significant costs on consumers and the economy. Data is the fuel that powers our information economy. Privacy-related mandates that curtail the use of data to better target or personalize new services could raise costs for consumers. There is no free lunch. Something has to pay for all the wonderful free sites and services we enjoy today. 
If data can’t be used to cross-subsidize those services, prices will go up.</p> <p>Data regulations could also indirectly cost consumers by diminishing the abundance of content and culture now supported by the data-driven economy. In other words, even if prices and paywalls don’t go up, quantity or quality could suffer if data collection is restricted.</p> <p>Data regulations could also hurt the competitiveness of domestic markets and the global competitive advantage that America’s tech sector has in this space. That regulatory burden would fall hardest on smaller operators and new start-ups. Today’s “app economy” has given countless small innovators a chance to compete on even footing with the biggest players. Burdensome data collection restrictions could short-circuit the engine that drives entrepreneurial innovation among mom-and-pop companies if ad dollars get consolidated in the hands of only the larger companies that can afford to comply with new rules.</p> <p>We don’t want to go down the path the European Union charted in the 1990s with heavy-handed data directives. That suffocated high-tech entrepreneurialism and innovation there. America’s Internet sector came to be the envy of the world because our more flexible, light-touch regulatory regime leaves more breathing room for competition and innovation compared to Europe’s top-down regime. We should not abandon that approach now.</p> <p>Finally, the Obama Administration’s proposal deals exclusively with private sector data collection and has nothing to say about government surveillance activities. The Administration would be wise to channel its energies into that far more significant privacy problem first.</p> Fri, 27 Feb 2015 17:10:17 -0500 The Economic Situation, March 2015 <h5> Publication </h5> <p class="p1"><b>The US Locomotive Economy</b></p> <p class="p1">Has the US economy kicked off third quarter’s running cleats and slipped on bedroom shoes&nbsp;<span style="font-size: 12px;">with very soft soles? 
The running pace has changed abruptly. As the accompanying chart tells us,&nbsp;</span><span style="font-size: 12px;">the second estimate for growth in the fourth quarter of 2014 fell to 2.2 percent from the third&nbsp;</span><span style="font-size: 12px;">quarter’s hair-raising 5.0 percent. Is this the economic engine that is pulling the world economy?&nbsp;</span><span style="font-size: 12px;">Yes, it’s the best engine the system has. So why the sudden shift to second gear?&nbsp;</span><span style="font-size: 12px;">Weakness in the rest of the world is the major part of the story. Still seeking higher ground,&nbsp;</span><span style="font-size: 12px;">Europe is slowly backing away from the edge of recession. China is running in third gear with its&nbsp;</span><span style="font-size: 12px;">growth hitting 7 percent instead of its “normal” 10 percent. Canada and Mexico are moving&nbsp;</span><span style="font-size: 12px;">along at 2.5 percent growth. And Japan’s economy has launched again but is just beginning to&nbsp;</span><span style="font-size: 12px;">sail. The world economy is a mixed bag but still a decidedly weak one.</span></p> <p class="p1">Meanwhile, with the dollar getting as good as gold, while others cut interest rates in the hope of&nbsp;<span style="font-size: 12px;">stimulating growth, US exports are falling and imports have surged.&nbsp;</span><span style="font-size: 12px;">The chart’s white four-quarter moving average shows real GDP growth is averaging about 2.6&nbsp;</span><span style="font-size: 12px;">percent for the year. The gap between current growth and the 3.14 percent long-term average&nbsp;</span><span style="font-size: 12px;">may look like a permanent feature of the data landscape, but most forecasters are betting the gap&nbsp;</span><span style="font-size: 12px;">will be closed as 2015 progresses. As always, there are some special considerations. 
This time it&nbsp;</span><span style="font-size: 12px;">is energy, and this time the net effect is positive.</span></p> <p class="p1"><a href=""><img src="" width="585" height="427" /></a></p> <p class="p1"><b>More on the Energy Story</b></p> <p class="p1">The effects of the more than 50 percent decline in crude prices since June 2014 are now working&nbsp;<span style="font-size: 12px;">their way through the economy. US commentators cheered the explosive growth of shale oil&nbsp;</span><span style="font-size: 12px;">production that triggered the price decline, and they should have. As will be shown later, it was&nbsp;</span><span style="font-size: 12px;">growth in the shale oil states that propelled the US economy as it sailed out of the recession. But&nbsp;</span><span style="font-size: 12px;">folks on the other side of the pond—OPEC and its leader, Saudi Arabia—somehow felt&nbsp;</span><span style="font-size: 12px;">differently about the matter. Let’s face it, when prices fall, it matters whether you are a buyer or&nbsp;</span><span style="font-size: 12px;">seller, a producer or a consumer, and folks who have dominated a product market for decades&nbsp;</span><span style="font-size: 12px;">just don’t go quietly into the night. On balance, of course, the United States is a consumer.</span></p> <p class="p1">Lower energy prices are a boon to the economy, maybe adding as much as 0.50 percentage&nbsp;<span style="font-size: 12px;">points to GDP growth.&nbsp;</span><span style="font-size: 12px;">The decline in crude oil prices came when the Saudis targeted the United States and Asia with a&nbsp;</span><span style="font-size: 12px;">price cut, raised their price to Europe, and opened up the valves for more oil production. When&nbsp;</span><span style="font-size: 12px;">the price plummeted from $100 a barrel to $45, the Saudis responded with a smile. 
They are the world’s low-cost producer and have lots of loot in their sovereign fund for weathering a long price&nbsp;</span><span style="font-size: 12px;">war. Holding market share seems to be their current strategy.</span></p> <p class="p3"><span style="font-size: 12px;">While consumers overall can enjoy large savings in transportation costs, something on the order&nbsp;</span><span style="font-size: 12px;">of $750 a year for the average family, just where they live and work puts a different spin on that,&nbsp;</span><span style="font-size: 12px;">too.</span></p> <p class="p1"><a href="">Continue reading</a></p> Fri, 27 Feb 2015 14:32:38 -0500 The Questionable History of Regulatory Reform Since the APA <h5> Publication </h5> <p class="p1">The 114th Congress will likely consider many regulatory reform bills. Understanding how such bills pass is important for effective policymaking. While compromise is often key to legislative success, some kinds of compromise may undermine the future success of the intended regulatory reform. If the history of regulatory reform is any indication, the success of future reform will hinge on whether reform bills maintain the substantive intent of their sponsors or are watered down until they fulfill a merely symbolic purpose.</p> <p class="p1">A <a href="">new study</a> for the Mercatus Center at George Mason University examines the legislative histories and implementation of key regulatory reform statutes and finds that these bills passed after crucial but controversial provisions were weakened. Compromises included in the legislation to secure its passage have consistently undermined substantive reform objectives by maintaining broad agency discretion to interpret the law and by minimizing judicial review. 
To achieve regulatory reform objectives, legislators must be careful not to abandon core reform elements, or history will continue to repeat itself.</p> <p class="p1">To read the study in its entirety and learn more about its authors, Stuart Shapiro and Deanna Moran, please see “<a href="">The Questionable History of Regulatory Reform since the APA</a>.”</p> <p class="p3"><b>SUMMARY</b></p> <p class="p1">Since the passage of the Administrative Procedure Act (APA) in 1946, several pieces of legislation designed to reform the regulatory process have been enacted. The legislative histories of five of the most significant statutes—the Paperwork Reduction Act, the Regulatory Flexibility Act, the Unfunded Mandates Reform Act, the Small Business Regulatory Enforcement Fairness Act, and the Congressional Review Act—have never been mined with the purpose of understanding the implementation of these acts and why they were able to pass Congress.</p> <p class="p1">These laws were accompanied by strong rhetoric about the need to reduce the regulatory burden. However, none of the reforms has lived up to that rhetoric. The number of hours Americans spend providing information to the government continues to increase. Small businesses continue to be burdened by regulations. States and local governments still complain about unfunded mandates. By any measure, these reforms have failed, largely due to provisions in each reform that maximize agency discretion and minimize judicial review. Attempts to change the rulemaking process can be a very poor way to change the substance of regulations if agencies retain wide discretion to interpret the law and the judiciary has a minimal role in holding regulatory agencies accountable.</p> <p class="p3"><b>KEY FINDINGS IN THE LEGISLATIVE HISTORY</b></p> <p class="p1">Since passage of the APA, there have been two waves of regulatory reform: first in the late 1970s and then in the mid-1990s. 
Each of those periods saw the passage of significant regulatory reform statutes.</p> <p class="p4"><b>Regulatory Flexibility Act of 1980<br /></b><span style="font-size: 12px;">The Regulatory Flexibility Act requires an agency to conduct a regulatory flexibility analysis when it issues a rule that has a significant impact on small businesses. This analysis is not subject to judicial review, however. Moreover, agencies were assured throughout the legislative process that the law would not undermine existing regulatory statutes or their goals. The law gave a voice to the interests of small businesses, which often are disproportionately affected by regulation, but was drastically weakened by the lack of judicial review and by the discretion it gave agencies to determine whether their rules have an impact on small businesses. But these provisions were likely necessary for the law’s passage.</span></p> <p class="p4"><b>Paperwork Reduction Act of 1980<br /></b><span style="font-size: 12px;">The Paperwork Reduction Act created the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget. OIRA was designed to oversee the implementation of the act, which creates procedures for collecting information from the public. The law had widespread support across parties and within the business community since reducing the burden of providing information was a popular goal. 
But the trend of information collection over the last 15 years shows that despite OIRA’s existence, the burden on businesses and the public has increased—making it hard to argue that the statute has achieved its goals.</span></p> <p class="p4"><b>Unfunded Mandates Reform Act of 1995<br /></b><span style="font-size: 12px;">A string of complex laws and regulations from the 1970s and 1980s resulted in the enactment of the Unfunded Mandates Reform Act, which required agencies to disclose the cost of new rules imposed on the private sector when a rule’s cost is expected to exceed $100 million. While the new benefit-cost analysis requirement mollified a vocal constituency—state and local governments—the effects of the reform are not easily measured. The law provides numerous exemptions for regulations related to public health and waters down other requirements, and thus appears to be largely symbolic.</span></p> <p class="p4"><b>Small Business Regulatory Enforcement Fairness Act and Congressional Review Act of 1996<br /></b><span style="font-size: 12px;">The Small Business Regulatory Enforcement Fairness Act and the Congressional Review Act made significant changes to the Regulatory Flexibility Act. Once again, small business interests lobbied for changes to the regulatory process. The laws mandated an increase in judicial review for the regulatory process, small business participation in the process through panel review of proposed rules, and decreased punitive action against small businesses that seek redress for regulatory action. The law also included a provision for Congress to review and disapprove of federal agency rules. 
However, both laws are limited and provide few constraints on agency discretion.</span></p> <p class="p3"><b>CONCLUSION</b></p> <p class="p1">The history of these acts shows how compromises that placed a higher value on preserving broad agency discretion than on the stated reform objectives of the underlying bills achieved the political objective of passing popular “reforms.” Such compromises, however, shifted the legislation from substantive change to mere symbolic action. These legislative lessons suggest that the success or failure of future reform efforts will hinge on policymakers’ ability to maintain the link between their primary reform objectives and the substantive statutory provisions necessary to achieve them.</p> Wed, 04 Mar 2015 09:53:49 -0500 Brent Skorup Discusses the FCC's Vote on Net Neutrality on CNBC Asia <h5> Video </h5> <iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe> <p>Brent Skorup Discusses the FCC's Vote on Net Neutrality on CNBC Asia</p> Fri, 27 Feb 2015 11:52:20 -0500 Keith Hall to Direct Congressional Budget Office: Mercatus Center Commends Hall’s Continued Success <h5> Expert Commentary </h5> <p class="p1"><b>Arlington, Va.—</b>Starting April 1, Keith Hall will become the next Director of the Congressional Budget Office (CBO). 
Hall worked as a senior research fellow for the Mercatus Center at George Mason University from April 2012 to September 2014.&nbsp;</p> <p class="p1">“Keith Hall is a first-rate economist who understands fiscal responsibility and the importance of honest and accurate reporting of the numbers,” said <a href="">Tyler Cowen</a>, general director of the Mercatus Center.&nbsp; “I expect he will do a great job.”</p> <p class="p1">Now in its 40th year, the CBO&nbsp;supports the Congressional budget process by producing independent analyses of budgetary and economic issues. CBO is nonpartisan and does not make policy recommendations.&nbsp;</p> <p class="p1">Hall’s research at Mercatus focused on labor markets, labor market policy, and economic data.&nbsp; His Mercatus publications include&nbsp;<i><a href="">Opportunity, Mobility, and Inequality in Today’s Economy,</a>&nbsp;</i><a href=""><i>Dreams Deferred: Young Workers and Recent Graduates in the U.S. Economy</i></a><i>,&nbsp;</i>and<i>&nbsp;</i><a href=""><i>The Employment Costs of Regulations</i></a><i>.&nbsp;</i>Hall’s Mercatus Center biography, with links to his research and media appearances, is available at <a href=""></a>.&nbsp;</p> <p class="p1">Before his employment at the Mercatus Center, Hall served as the 13th Commissioner of the Bureau of Labor Statistics, the Chief Economist for the White House Council of Economic Advisors, and the Chief Economist for the US Department of Commerce.&nbsp;</p> Mon, 02 Mar 2015 11:26:11 -0500 Fundamentals of Budget Process <h5> Video </h5> <iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe> <p><span style="color: #333333; font-family: arial, sans-serif; font-size: 13px; font-style: normal;">In a Capitol Hill Campus event, David Primo, Associate Professor at the University of Rochester and senior scholar for the Mercatus Center, and Patrick Louis Knudsen, a former long-time policy director for the House Budget Committee, held a discussion on the 
congressional budget process.</span></p> Fri, 27 Feb 2015 11:25:10 -0500 Net Neutrality Rules Represent a Giant Step Backwards <h5> Expert Commentary </h5> <p>The Federal Communications Commission today voted, 3-2, that the Internet will be subject to many of the Title II regulatory provisions of the 1934 Communications Act. Applying Title II laws to broadband means regulating the Internet as a common carrier, akin to the telephone network, and gives significant control of the Internet to the FCC, lobbyists, and industry players.</p> <p>The Title II order and new net neutrality rules have not been released yet, but the thrust of the regulations is clear from commissioners’ statements and media reports. In short, the FCC’s rules represent a giant step backwards to the days of command-and-control of markets.</p> <p>The FCC’s actions derive in part from <a href="">the myth that the Internet is neutral</a>. In the evolving online world, the Internet gets less neutral—and better for consumers—every day. Through a hands-off approach from policymakers, the U.S. communications and technology sector has thrived as a supplier of innovation, but Title II rules effectively throw sand in the gears.</p> <p>If the FCC’s rules are not overturned by the courts, the days of permissionless innovation online will come to a close. The application of Title II means new broadband services must receive approval from this federal agency. 
Companies in Silicon Valley will therefore rely increasingly on their regulatory compliance officers, not their engineers and designers.</p> <p>If courts do strike down the FCC’s net neutrality rules for a third time, the FCC should abandon its campaign to regulate the Internet. Instead, the Commission should focus on increasing broadband competition across the nation, thereby reducing prices and increasing the availability of new broadband services. There is plenty of work to be done on this front, but pursuing Title II net neutrality rules distracts the Commission and Congress from spearheading a pro-consumer innovation agenda.</p> Thu, 26 Feb 2015 13:30:36 -0500