Mercatus Site Feed Financing the Future: The Role of the Financial System in Fostering Economic Growth <h5> Events </h5> <p>A well-functioning financial system is a key ingredient in a well-functioning economy. Whether an existing company wants to bring a new product to market or a new company wants to compete with an existing company, access to funding is essential. Financial regulation can encourage or inhibit financial markets’ ability to provide the financial products and services necessary to support a vigorous, dynamic economy. A well-designed regulatory system enables the financial system to work efficiently, effectively, and creatively to support economic prosperity.</p> <p>Please join us for this important conference to discuss how well the financial system is serving entrepreneurs, businesses, and the American people.</p> <p>Our speakers and panelists will address questions such as:</p> <ul><li>What is the historical role that financial markets have played in building the American economy?</li><li>How effectively are the financial markets working now to foster economic growth?</li><li>How have regulatory developments, including those in Dodd-Frank, affected the economy?</li><li>What financial regulatory reforms could encourage innovation and investment?</li></ul> <p>Confirmed speakers include:</p> <ul><li><a href="">Richard Berner</a>, U.S. Department of the Treasury</li><li><a href="">Thomas Hoenig</a>, Federal Deposit Insurance Corporation</li><li><a href="">Stephen Miller</a>, Mercatus Center</li><li><a href="">Hester Peirce</a>, Mercatus Center</li><li><a href="">Michael Piwowar</a>, U.S. Securities and Exchange Commission</li><li><a href="">Hal Scott</a>, Harvard Law School</li><li><a href="">Betsey Stevenson</a>, Council of Economic Advisers</li><li><a href="">Phil Swagel</a>, University of Maryland School of Public Policy</li></ul> <p>Questions about this event? 
Please contact Julie Burden at&nbsp;<a href=""></a>&nbsp;or (703) 344-3219.</p> Mon, 02 Mar 2015 17:01:35 -0500 Junk Food Taxes Don't Work <h5> Expert Commentary </h5> <p>What if I told you there was an easy way to fix various health problems, one with a variety of benefits and very little cost? Your first reaction could be "let’s do it." This is the promise that comes with taxing items to change consumer behavior. But, after many years of failed policy attempts, you should be a bit more skeptical of this approach.</p> <p>In a new research paper to be published tomorrow by the Mercatus Center at George Mason University on “Regressive Effects: Causes and Consequences of Selective Consumption Taxation,” my colleagues and I explore the taxation of junk food in more detail. With so many people talking about the health effects of a bad diet, we wanted to know more about how to reduce the high rates of heart disease, diabetes, and other health problems that come from eating foods high in fat, salt, and sugar.</p> <p>What we found is that many people live in areas where little else besides this type of food is available, areas called <a href="">food deserts</a> – or what the U.S. Department of Agriculture defines as “urban neighborhoods and rural towns without ready access to fresh, healthy, and affordable food.” On top of this, salt and sugar are the most popular preservatives. That means food can wait around until you get ready to eat it, unlike a banana or an apple slice. Convenience is an important aspect of food purchases. 
It’s no wonder so many people choose to buy cheap, convenient food that may ultimately leave them with chronic health problems.</p> <p><a href="">Continue reading</a></p> Mon, 02 Mar 2015 21:08:55 -0500 Five Myths about Net Neutrality <h5> Expert Commentary </h5> <p>In view of the Federal Communications Commission (FCC) vote on February 26 to regulate the Internet under Title II of the New Deal–era Communications Act, it is critical to understand what these “net neutrality” rules will and will not do.</p> <p>Columbia Business School professor Eli Noam <a href="">says</a> net neutrality has “at least seven different related but distinctive meanings….” The consensus is, however, that net neutrality is a principle for how an Internet Service Provider (ISP) or wireless carrier treats Internet traffic on “last mile” access — the connection between an ISP and its customer. Purists believe net neutrality requires ISPs to treat all last-mile Internet traffic the same. The FCC will not enforce that radical notion because networks are becoming more “intelligent” every year and, <a href="">as a Cisco network engineer recently put it</a>, equal treatment for all data packets “would be setting the industry back 20 years.”</p> <p>Nevertheless, because similar rules were twice struck down in federal court, the FCC is crafting new net neutrality rules for ISPs and technology companies under Title II. Many of Title II’s provisions reined in the old Bell telephone monopoly, and they are the most intrusive rules available to the FCC. The net neutrality rules are garnering increased public scrutiny because they will apply to one of the few bright spots in the US economy — the technology and communications sector.</p> <p>As with many complex concepts, there are many myths about net neutrality. 
Five of the most widespread ones are dispelled below.</p> <p><b>Myth #1: The Internet Has Always Been Neutral<br /><img src="*hlZxNAlt1vTBXXulnH3GSQ.png" width="585" height="263" style="font-size: 12px;" /></b></p><p><b>Reality</b>: <a href="">Prioritization has been built into Internet protocols for years</a>. MIT computer scientist and early Internet developer David Clark colorfully dismissed this first myth as “<a href="">happy little bunny rabbit dreams</a>,” and pointed out that “[t]he network is not neutral and never has been.” Experts such as tech entrepreneur and investor <a href="">Mark Cuban</a> and President Obama’s former chief technology officer <a href="">Aneesh Chopra</a> have observed that the need for prioritization of some traffic increases as Internet services grow more diverse. People speaking face-to-face online with doctors through new telemedicine video applications, for instance, should not be disrupted by once-a-day data backups. ISPs and tech companies should be free to experiment with new broadband services without <a href="">time-consuming regulatory approval</a> from the FCC. <a href=";">John Oliver</a>, <a href="">The Oatmeal</a>, and <a href="">net neutrality activists</a>, therefore, are simply wrong about the nature of the Internet.</p><p><b>Myth #2:&nbsp;Net Neutrality Regulations Are the Only Way to Promote an Open Internet&nbsp;<br /><img src="*Vpni1qalhVw2zPoCEhMoSg.png" width="585" height="263" style="font-size: 12px;" /></b></p><p><b>Reality</b>: Even while lightly regulated, the Internet will remain open because consumers demand an open Internet. <a href="">Recent Rasmussen polling</a> indicates the vast majority of Americans enjoy the open Internet they currently receive and rate their Internet service as good or excellent. 
(Only a small fraction, 5 percent, says their Internet quality is “poor.”) It is in ISPs’ interest to provide high-quality Internet just as it is in smartphone companies’ interest to provide great phones and automakers’ interest to build reliable cars. Additionally, <a href="">high-profile scholars</a> and activists are wrong when they say there is no “cop on the beat” overseeing Internet companies. As Federal Trade Commissioner Joshua Wright <a href="">testified to Congress</a>, existing federal competition laws and consumer protection laws — and strict penalties — protect Americans from harmful ISP behavior.</p> <p><b>Myth #3: Net Neutrality Regulations Improve Broadband Competition<br /><img height="263" width="585" src="*SP_2N0CDDSFg9IsNYyvr9g.png" /><br /></b></p><p><b>Reality</b>: The FCC’s net neutrality rules are not an effective way to improve broadband competition. Net neutrality is a <a href="">principle for ISP treatment of Internet traffic</a> on the “last mile” — the connection between an ISP and a consumer. The principle says nothing about broadband competition and will not increase the number of broadband choices for consumers. On the contrary, net neutrality as a policy goal was created because many scholars did not believe more broadband choices could ensure a “neutral” Internet. 
Further, Supreme Court decisions have led scholars to <a href=";pg=PA375&amp;lpg=PA375&amp;dq=credit+suisse+trinko+net+neutrality&amp;source=bl&amp;ots=yUAcFZCn2V&amp;sig=c73nspUDGX80RCBLfIhsH-NaIHA&amp;hl=en&amp;sa=X&amp;ei=uTjmVK_PBK3HsQSkrYJg&amp;ved=0CDoQ6AEwBA#v=onepage&amp;q=credit%20suisse%20trinko%20net%20neutrality&amp;f=false">conclude</a> that “as prescriptive regulation of a field waxes, antitrust enforcement must wane.” Therefore, the FCC’s net neutrality rules would actually impede antitrust agencies from protecting consumers.</p><p><b>Myth #4: All Prioritized Internet Services Are Harmful to Users<br /><img height="263" width="585" src="*yfGrp91AIeGCjvJrcN8T6A.png" /><br /></b></p><p><b>Reality</b>: Intelligent management of Internet traffic and <a href="">prioritization provide useful services to consumers</a>. Net neutrality proponents <a href="">call</a> zero-rating — the practice of exempting certain Internet services from a customer’s monthly data allotment — and similar practices “dangerous,” “malignant,” and rights <a href="">violations</a>. This hyperbole arises from dogma, not facts. The real-world use of prioritization and zero-rating is encouraging and pro-consumer. <a href="">Studies show</a> that zero-rated applications are used by millions of people around the globe, including in the United States, and they are popular. In one instance, poor South African high school students petitioned their carriers for free — zero-rated — Wikipedia access because accessing Wikipedia frequently for homework was expensive. Upon hearing the students’ plight, Wikipedia and South African carriers <a href="">happily obliged</a>. 
Net neutrality rules grounded in Title II would prohibit popular services like zero-rating and the intelligent network management that makes more services available.</p> <p><b>Myth #5: Net Neutrality Rules Will Make Broadband Cheaper and Internet Services like Netflix Faster<br /><img height="263" width="585" src="*X9wzk_jfa1OOllJ3gyWBFw.png" /><br /></b></p> <p><b>Reality</b>: First, the FCC’s rules will make broadband more expensive, not cheaper. The rules regulate Internet companies much like telephone companies, and federal and state telephone fees will therefore eventually apply to Internet bills. According to preliminary estimates, <a href="">millions of Americans</a> will drop or never subscribe to an Internet connection because of these price hikes. Second, the FCC’s rules will not make Netflix and webpages faster. The FCC rules do not require ISPs to increase the capacity or speed of customers’ connections. Capacity upgrades require competition and ISP investment, both of which may be harmed by the FCC’s onerous new rules.</p> <p>To see more from Mercatus scholars on net neutrality, visit <a href=""></a>.</p> Mon, 02 Mar 2015 15:17:12 -0500 What Is the Sharing Economy? <h5> Publication </h5> <p class="p1">While the bulk of this conversation has focused on regulating the sharing economy, little time has been spent actually defining what the sharing economy is and is not. The lack of a shared definition is why <a href="">Matthew Feeney can call it</a> “a relatively new and increasingly popular peer-to-peer economic model,” why <a href="">Avi Asher-Schapiro</a> can call it “propaganda,” and how they can both be correct. To perhaps help clarify some of these issues, I’d like to propose a simple definition for the sharing economy.</p> <p class="p1">I agree with Avi’s point that taking an Uber might not be sharing. 
And I would argue that there may be little sharing actually occurring in the sharing economy. But that really isn’t the point. Instead, it is helpful to think of the sharing economy as <a href="">my colleagues and I have defined it before</a>: any marketplace that uses the Internet to connect distributed networks of individuals to share or exchange otherwise underutilized assets.</p> <p class="p1">When people talk about the sharing economy, they are very rarely focused on remuneration. The term is simply being used as shorthand to describe firms that offer a platform to connect individuals who have something with those who need it.</p> <p class="p1">A cash-strapped homeowner may not have seen her spare bedroom as capital until the <a href="">Airbnb</a> platform provided a way for her to rent it out to vacationers. A college student with an extra hour between classes may not have viewed his time as a profit opportunity until <a href="">Instacart</a> and <a href="">TaskRabbit</a> allowed him to put that time to use for others. A young couple may not have been able to use their couch to connect with other travelers from around the world, but can now do so through <a href="">Couchsurfing</a>. 
A retiree with a workbench full of power equipment may not have viewed his tools as a way to supplement his income until <a href="">1000 Tools</a> connected him with people in his area wanting to borrow tools. This is the sharing economy.</p> <p class="p1">While some <a href="">may choose to call this</a> “the peer economy,” “peer production,” “the collaborative economy,” or “collaborative consumption,” each of these is simply a different attempt to describe the shifts taking place in the way individuals are choosing to transact and interact with one another.</p> <p class="p1">Regardless of what you are calling it, it has very real benefits. In <a href="">recent research</a>, Adam Thierer, Matthew Mitchell, and I highlight five distinct ways that the sharing economy is creating real value for both consumers and producers:</p> <ol class="ol1"> <li class="li2">By giving people an opportunity to use others’ cars, kitchens, apartments, and other property, it allows underutilized assets or “dead capital” to be put to more productive use.</li> <li class="li2">By bringing together multiple buyers and sellers, it makes both the supply and demand sides of its markets more competitive and allows greater specialization.</li> <li class="li2">By lowering the cost of finding willing partners, haggling over terms, and monitoring performance, it cuts transaction costs and expands the scope of trade.</li> <li class="li2">By aggregating the reviews of past consumers and producers and putting them at the fingertips of new market participants, it can significantly diminish the problem of asymmetric information between producers and consumers.</li> <li class="li2">By offering an “end-run” around regulators who have been captured by existing producers, it allows suppliers to create value for customers long underserved by incumbents that have become inefficient and unresponsive.</li> </ol> <p class="p1">Finally, it is also important to remember that many of the policy 
problems that <a href="">Avi Asher-Schapiro</a> and <a href="">Dean Baker</a> presented are often failings of particular firms and business models within the sharing economy, not problems with the entire industry. Viewed in this light, each and every one of these failures represents a profit opportunity for a new or rival firm interested in improving the customer experience. Preemptively regulating an entire class of firms based on these anecdotes can be dangerous. As Jim Dwyer of the <i>New York Times</i> <a href="">recently warned</a>, “Be careful around anecdotes; they are the black ice of reality.” The sharing economy is too diverse and too rapidly evolving, and these sorts of pixel-sized stories should not be mistaken for larger, universal truths.</p> <p class="p1">The real issues should not be lost in the noise. Are people sharing? Not always. But, then again, that really isn’t what the sharing economy is about. Instead, people are benefiting from mutually beneficial interactions that would not be possible without the sharing economy’s platforms.</p> Mon, 02 Mar 2015 12:17:21 -0500 Agency Analysis Rarely Used to Inform Regulatory Decisions <h5> Publication </h5> <p class="p1">Since 2008, the Mercatus Center’s Regulatory Report Card series has evaluated the quality of executive branch agencies’ Regulatory Impact Analyses for major regulations and the extent to which agencies explain how they used the analyses in their decisions. A good Regulatory Impact Analysis raises the odds that regulators will create a regulation that solves a real problem at a reasonable cost.</p> <p class="p1">One Report Card criterion asks whether the agency claimed, or appeared to use, any part of the analysis to guide decisions. 
As the chart below demonstrates, agencies often fail to explain how the Regulatory Impact Analysis helped inform their decisions.</p> <p class="p1"><a href=""> <img height="398" width="585" src="" /></a></p> <p class="p1">This disconnect between analysis and decisions occurs because Regulatory Impact Analysis is currently required only by executive orders, not statutes. If an administration decides that political priorities are more important than solving real problems at an acceptable cost, agencies can ignore their own analyses. A stronger enforcement mechanism may be needed. Legislation requiring economic analysis of proposed regulations, for example, could allow judges to invalidate regulations unaccompanied by a thorough economic analysis and an explanation of how the agency used that analysis.</p> <p class="p1">Reasonable people may disagree about how much and what type of regulation is justified, but we should all be able to agree that government owes the public a clear explanation of how it’s making regulatory decisions.</p> Mon, 02 Mar 2015 11:56:26 -0500 Is It Fair to Tax Capital Gains at Lower Rates Than Earned Income? <h5> Expert Commentary </h5> <p>Capital gains—and how big a bite the government should take out of them—have become a major point of contention in the past couple of months.</p> <p>In January, President Obama proposed tax changes designed to raise some $320 billion over 10 years, largely through higher levies on high-income Americans. 
The revenue would be used to cover $235 billion in tax breaks, mostly for moderate-income workers, along with other initiatives.</p> <p>Among the changes he proposed: boosting the capital-gains rate to 28% for the top 1% of taxpayers, up from the current 23.8%, as well as a new capital-gains tax on many inheritances.</p> <p>The GOP fired back that taxing investment income would harm economic growth by discouraging business investment and thereby hurt workers’ incomes.</p> <p>All of which points to a broader question that divides experts: Are capital gains so different from earned income that they should be taxed at a different rate?</p> <p>Below, two experts tackle that question. Scott Sumner is professor of economics at Bentley University and the Ralph G. Hawtrey chair of monetary policy at the Mercatus Center at George Mason University, where he is director of its program on monetary policy. Leonard E. Burman, director of the Urban-Brookings Tax Policy Center and the Paul Volcker chair in behavioral economics and professor of public administration and international affairs at Syracuse University’s Maxwell School, is author of “The Labyrinth of Capital Gains Tax Policy: A Guide for the Perplexed.” They can be reached at <a href=""></a>.</p><p class="p1"><b>YES: It Makes Sense for Individuals—and the Economy</b></p> <p class="p2"><b>By Scott Sumner</b></p> <p class="p2">To many people, investment income should obviously be taxed at the same rate as labor income. After all, income is income, right?</p> <p class="p2">But it’s not that simple. There are compelling reasons to treat capital gains differently than other earnings.</p> <p class="p2">For one thing, taxes on investment earnings effectively double-tax that income. 
Labor income is taxed when it is earned, and investments are generally made out of after-tax earnings—so capital-gains levies represent another bite out of an investor’s money.</p> <p class="p2">In effect, the system punishes those who put their money to work. Raising the capital-gains tax rate would just make the punishment that much more drastic.</p> <p class="p2">This question doesn’t simply affect people who invest—it affects the entire economy. Investment capital is one of the most important drivers of economic growth, and the promise of big capital gains is an important inducement to get people to put money into critical but risky fields like biotechnology. If we want more inventions, or a faster cure for cancer, then we should have lower capital-gains taxes.</p> <p><a href="">Continue reading</a></p> Mon, 02 Mar 2015 13:36:58 -0500 Regulatory Reform Can Amount to a Progressive Tax Refund, If Done Right <h5> Publication </h5> <p class="p1"><b>INTRODUCTION&nbsp;</b><br /><span style="font-size: 12px;">Chairman Marino, Ranking Member Johnson, and members of the committee: thank you for inviting me to testify today. As an economist and senior research fellow at the Mercatus Center at George Mason University, I focus my primary research on regulatory accumulation and the regulatory process, so it is my pleasure to testify on today’s topic. In previous research and testimony, I have highlighted the fact that regulatory accumulation creates substantial drag on economic growth by impeding innovation and entrepreneurship.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">1</sup><span style="font-size: 12px;"> Today, I have three main points that may help you to examine the reforms under consideration. 
First, I will discuss the regressive effects of regulatory accumulation—or, to put it another way, why retrospective analysis of regulations can result in what amounts to a progressive tax refund, with benefits going largely to lower-income Americans.</span></p> <p class="p4">Second, I will highlight how an increasingly long and complex regulatory code can actually make the task of achieving risk reduction in the workplace more difficult.</p> <p class="p4">Third, I will argue that not all attempts at regulatory reform are equal. In my research, I have found several factors that tend to contribute to meaningful and successful regulatory and governmental reform efforts. The most important of these is the use of an independent group or commission to identify regulations that need to be modified or eliminated. Any retrospective analysis effort that leaves this task in the hands of the same agencies that created the regulations in the first place is unlikely to succeed. I highlight some other important principles as well, but the independence of the reviewers is the most important.</p> <p class="p6"><b>REGRESSIVE EFFECTS OF REGULATIONS</b>&nbsp;<br /><span style="font-size: 12px;">Regulations can be regressive, particularly in their effects on prices paid by consumers.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">2</sup><span style="font-size: 12px;"> A regressive regulation is one whose burden disproportionately falls on lower-income individuals and households. When regulations force producers to use more expensive production processes or inputs, some of those production cost increases are passed along to consumers in the form of higher prices. 
For example, in 2005, the Food and Drug Administration banned the use of chlorofluorocarbons as propellants in medical inhalers, such as the inhalers that millions of Americans use to treat asthma.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">3</sup><span style="font-size: 12px;"> This ban was enacted because of environmental concerns rather than health or safety concerns. Since the implementation of that ban, the average price of asthma inhalers has tripled.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">4</sup><span style="font-size: 12px;"> While individuals with high incomes might be able to absorb this price increase, the higher price may force people with low incomes to choose not to buy an inhaler and instead leave their asthma untreated—potentially leading to a real human cost if the person suffers an asthma attack without an inhaler available.</span></p> <p class="p4">When regulations cause the prices of goods and services to increase, lower-income households have to make a choice: no longer buy those goods, substitute them with something else if possible, or buy less of the more expensive good. This can have the unintended consequence of preventing lower-income families from purchasing a good or service that is a medical necessity or that would have reduced the risk of accidental death or injury. I have attached a study by economist Diana Thomas that gives more details on the regressive effects of regulations.&nbsp;</p> <p class="p4">The cumulative cost of regulations amounts to a hidden, regressive burden. But it’s a burden that could be lightened. In fact, one way of viewing that burden is as an opportunity: retrospective analysis that eliminates a portion of the regulatory cost burden would act as a progressive tax refund. 
Let me explain with an example that will illustrate how reducing the regulatory burden is similar to a tax refund that primarily benefits poorer Americans.</p> <p class="p4">While economists have not yet reached consensus on how to calculate the total cost of regulation, several estimates exist. For example, economists John Dawson and John Seater estimate that regulatory accumulation slows economic growth by about 2 percent per year.<sup>5</sup> The latest OIRA report to Congress on the benefits and costs of regulations estimates that a small subset of regulations reviewed cost the economy between $57 billion and $84 billion in 2001 dollars.<sup>6 </sup>Converted to 2014 dollars, this range is from $76.19 billion to $112.29 billion.<sup>7 </sup>At the other end of the spectrum, Clyde Wayne Crews estimates the annual cost of regulations to be around $1.882 trillion.<sup>8</sup> For this example, I’ll use the midpoint between $57 billion and $1.882 trillion, which is about $969 billion. Consider this the annual regulatory burden shared across all households in the economy. As of 2013, there were 115,610,216 households in the United States. We can estimate the regulatory burden per household by simply dividing the midpoint cost estimate by the number of households. This division yields about $8,386 per household.</p> <p class="p4">Now consider a regulatory reform that would reduce this cost burden by 15 percent. If the regulatory cost burden per household is $8,386, then a 15 percent reduction would equal about $1,258 per household per year. This reduction in cost burden is effectively an annual regulatory cost refund and would have different impacts on low-, middle-, and high-income households. In this example, I define a low-income household as a family of five with three children under the age of 18 earning a household income exactly equal to the Census poverty threshold for 2014: $28,252. 
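The arithmetic behind this example is easy to check. Below is a minimal Python sketch using only figures quoted in the testimony (note that the exact midpoint of the two cost estimates is $969.5 billion, which the text rounds to $969 billion; the income levels are those used for table 1, and the variable names are my own):

```python
# Per-household regulatory burden and "refund" arithmetic from the testimony.
# All inputs are figures quoted in the text; variable names are illustrative.
low_cost_estimate = 57e9        # OIRA lower-bound annual cost estimate
high_cost_estimate = 1.882e12   # Crews annual cost estimate
households = 115_610_216        # US households as of 2013

midpoint = (low_cost_estimate + high_cost_estimate) / 2   # $969.5 billion
burden_per_household = midpoint / households              # about $8,386
refund = 0.15 * burden_per_household                      # 15% cut: about $1,258

# Income levels used for table 1.
incomes = {"low": 28_252, "middle": 51_900, "high": 282_520}
for label, income in incomes.items():
    print(f"{label}: refund is {100 * refund / income:.1f}% of annual income")
# Prints roughly 4.5% for low, 2.4% for middle, and 0.4% for high income.
```

The percentages fall as income rises, which is the sense in which the testimony calls the refund "progressive."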
For the middle-income household, I use the median household income in 2013 (the latest year available): $51,900. For the high-income household, I follow Diana Thomas’s calculations and use a household income equal to 10 times the poverty threshold: $282,520. Table 1 shows what a reduction in regulatory costs of $1,258 would equal, relative to household income and in percentage terms.</p> <p class="p4"><img height="133" width="585" src="" /></p><p class="p1">As table 1 shows, a reduction in regulatory burden of $1,258 would have a much larger effect on the purchasing power of the low-income household than on the middle- or high-income households. To the low-income household, the regulatory cost refund would equal nearly 5 percent of one year’s household income. Conversely, to the high-income household, it would equal only 0.4 percent of one year’s income. This example shows that a regulatory cost refund of any amount would work just like a progressive tax cut, helping low- and middle-income households relatively more than high-income households. Even better, unlike one-time tax rebates, this regulatory cost refund would repeat every year.&nbsp;</p> <p class="p1"><b>INCREASING INABILITY TO PRIORITIZE COMPLIANCE&nbsp;<br /></b><span style="font-size: 12px;">One concern that accompanies regulatory accumulation is called regulatory overload. Firms are compelled by law to comply with regulations, regardless of whether the regulations are effective at solving a particular problem. 
In a 2011 study, psychologist Andrew Hale and his coauthors find that as the number of rules increases, the rules themselves become less effective.</span><sup style="font-family: inherit; font-style: inherit; font-weight: inherit;">9</sup><span style="font-size: 12px;"> They also find that as the number of rules increases, companies tend to rely on rigid, checklist-style strategies that ensure compliance with the letter of the law rather than on proactive risk management strategies that may be more effective at reducing health and safety risks in the workplace. They call these problems regulatory overload.&nbsp;</span>
The key to achieving significant amelioration of the problem of regulatory accumulation is first identifying as many nonfunctional rules as possible and then either eliminating them or changing them so that they become functional.&nbsp;</span></p> <p class="p1">Executive branch attempts to examine and revise or eliminate existing nonfunctional regulations have primarily relied on executive orders calling for review of the need for regulations rather than on a streamlined, evidence-based analytical process that could accomplish large-scale reform. In a 2014 study I coauthored with economist Richard Williams (attached), we examine previous efforts at regulatory reform led by every president since Reagan and conclude that these episodes yielded only marginal improvements at best. Most notably, none of these efforts resulted in either substantial reductions relative to the total size of the Code of Federal Regulations (CFR) or sustained changes in the rate of adding new regulations to the CFR.<sup>12</sup>&nbsp;</p> <p class="p1">Figure 1 shows just how little the regulatory process has changed, despite these presidential efforts. Since 1975, the CFR has expanded in 30 of 37 years. In those 30 expansionary years, 117,294 pages were added to the CFR. In contrast, in the seven years of contraction, 17,871 pages were subtracted from the CFR—for net growth of nearly 100,000 pages. Previous efforts to eliminate obsolete regulations have removed only very small percentages of existing regulations from the books.</p> <p class="p1"><a href=""><img height="415" width="585" src="" /></a></p> <p class="p1">The failure of past regulatory review efforts likely stems from a fundamental misalignment of incentives: agencies, despite direction from the president, have incentives to maintain and increase their regulations to maximize their budgets and control over their portion of the economy. 
In turn, to retain regulations that would be eliminated otherwise, agencies may either hide or fail to produce information that would help identify obsolete or ineffective regulations in the first place. We should not expect agencies to give any better assessments of their own rules than professors would expect of students grading their own tests.&nbsp;</p> <p class="p1">Similarly, individuals in agencies have little incentive to provide information that would lead to a rule’s elimination or the choice not to produce a rule.<sup>13</sup> In general, employees—including economists—are professionally rewarded for being part of teams that create new regulations or expand existing regulatory programs.<sup>14 </sup>Conversely, employees are rarely rewarded for deciding that a regulation should not be created. This is unfortunate, because specialists in agencies are likely to have some relevant information about which rules are nonfunctional.&nbsp;</p> <p class="p1">However, the issues that have plagued previous, executive branch–led efforts at regulatory reform can be overcome. In previous research, I identified 11 characteristics of successful regulatory reform, derived from lessons learned by studying the Base Realignment and Closure (BRAC) process, regulatory reform in other countries, and previous attempts at retrospective review in the United States.<sup>15</sup> I highlight a few of these below, for the purposes of assessing the reforms currently under consideration.&nbsp;<span style="font-size: 12px;">&nbsp;</span></p> <p class="p1">1. The process of identifying rules for modification or elimination should entail independent assessment of whether regulations are functional. To be classified as functional, a rule must&nbsp;</p> <p class="p1" style="padding-left: 30px;">1. address a current risk,&nbsp;</p> <p class="p1" style="padding-left: 30px;">2. address a significant risk,&nbsp;</p> <p class="p1" style="padding-left: 30px;">3. 
not result in ongoing costs (including unintended consequences) that more than offset the ongoing benefits of the rule, and&nbsp;</p> <p class="p1" style="padding-left: 30px;">4. not interfere with or duplicate other rules.&nbsp;</p> <p class="p1">It is vital that the assessment of a rule with respect to each of these criteria be performed objectively. If the body tasked with the analysis of a rule has incentive to find that the rule is functional or is nonfunctional, the review risks becoming an exercise in advocacy rather than objective analysis. The SCRUB Act, for example, creates a commission with the authority to hire analysts and experts necessary for such an assessment and to collect essential information for those purposes. The SCRUB Act sets forth criteria for regulatory assessment that are not very different from how I define “nonfunctional” rules in my own research. While it is wise to build in flexibility for the commission to devise new criteria in response to future lessons learned, it is equally important that any commission be required to publicly disclose its complete assessment criteria and take comments from the public on them.&nbsp;</p> <p class="p1">2. The identification process must be broad enough to identify potentially duplicative regulations.&nbsp;<br /><span style="font-size: 12px;">Duplication and redundancy across agencies may be a large source of nonfunctional rules. For example, multiple agencies through different regulations may address food safety. In light of this source of nonfunctional rules, analysis that is focused on individual rules or the rules of a single agency may not capture factors (e.g., conflicts, duplication) that indicate certain rules are in fact nonfunctional.</span></p> <p class="p1">3. The analysis of the functionality of rules should use a standard method of assessment that is difficult to subvert. 
Nobel Prize–winning economist Ronald Coase famously said, “If you torture the data long enough, it will confess to anything.” So it goes with any analysis: those who perform the analysis can choose the data to examine, how to analyze them, and the framework within which to present results. This is a primary reason why I recommend that retrospective analysis of regulations not be left in the hands of agencies that have incentive to find specific results. However, a similar logic applies to an independent body that analyzes regulations. In the long run, we would have to worry about whether the body can maintain its independence and whether political or other pressure would be exerted on the body to subvert its analyses to serve an agenda. The best way to prevent such subversion is to require a simple, transparent, and replicable methodology of assessment. Under the SCRUB Act, the commission is required to specify a methodology for assessment. Doing so publicly and before beginning the assessment will help achieve a transparent, objective end product.&nbsp;</p> <p class="p1">4. Whatever the procedure for assessment, assessments of specific regulations or regulatory programs should focus on whether and how they lead to the outcomes desired. The SCRUB Act lists as one of the criteria for assessment “whether the rule or set of rules is ineffective at achieving the rule or set’s purpose.” To meet my criteria, this phrase should mean achieving desired outcomes, as opposed to producing outputs. A rule may lead to an increase in an output, such as increased safety inspections, but that does not guarantee that there has been an increase in the outcome, safety.&nbsp;</p> <p class="p1">5. Congressional action—such as a joint resolution of disapproval—should be required to stop the recommendations, as opposed to a vote to enact or not enact. 
The SCRUB Act could be improved if it were modified to formally limit Congress’s ability to subvert the process of selecting rules for elimination or modification. As the creators of the BRAC process recognized, every base targeted for closure had a champion defending it in Congress: the member whose constituency would be affected by the closure. So it would likely be with regulations slated for revocation. A better solution would be to follow the BRAC experience and require that a SCRUB Act commission’s recommendations take effect automatically unless Congress were to enact a joint resolution of disapproval of the entire set of recommendations—with no amendments allowed.</p> <p class="p1">6. The review process should repeat indefinitely. The SCRUB Act provides for a dissolution of the commission by a specific date. Given the possibility that the commission cannot evaluate all regulations before that date, it may be worthwhile to extend the life of the commission until all regulations are evaluated at least once, or even have the commission continue on an ongoing basis. The regulatory process will lead to regulatory accumulation again. This commission could balance the tendency to accumulate regulations with a deliberate and streamlined process for eliminating nonfunctional regulations if and when they appear.&nbsp;</p> <p class="p1">CONCLUSIONS<br /><span style="font-size: 12px;">Regulatory accumulation in the United States, with its adverse impact on economic growth by impeding innovation and entrepreneurship, is now a widely recognized problem. Furthermore, the costs of regulation are disproportionately borne by low-income households, and the accumulation of regulations may make us less safe overall as compliance becomes more thinly spread between functional and nonfunctional rules. Regulatory reform that reduces the overall burden of regulations would act as a progressive tax refund for American households. 
Nonetheless, the problem has not been meaningfully addressed despite the efforts of several administrations.</span></p> <p class="p1">One reason it has been hard to address regulatory accumulation is the difficulty of identifying nonfunctional rules—rules that are obsolete, unnecessary, duplicative, or otherwise undesirable. An independent group or commission—not regulatory agencies—seems required to successfully identify nonfunctional rules.&nbsp;</p> <p class="p1">The SCRUB Act has several characteristics that make it more likely to succeed where previous attempts have failed. First, it appoints an independent commission to identify nonfunctional rules. Second, the act requires that the commission establish a methodology before beginning the assessment of rules, thereby minimizing opportunities for the assessment to be subverted by special interests. Third, the act establishes criteria that the commission would use to identify nonfunctional rules, and these criteria are primarily based on fundamental problem-solving and sound economic thinking.</p> Mon, 02 Mar 2015 15:43:20 -0500 Initial Thoughts on Obama Administration’s “Privacy Bill of Rights” Proposal <h5> Expert Commentary </h5> <p>The Obama Administration has just released a draft “<a href="">Consumer Privacy Bill of Rights Act of 2015</a>.” Generally speaking, the bill aims to translate fair information practice principles (FIPPs) — which have traditionally been flexible and voluntary guidelines — into a formal set of industry best practices that would be federally enforced on private sector digital innovators. 
This includes federally mandated Privacy Review Boards, approved by the Federal Trade Commission, the agency that will be primarily responsible for enforcing the new regulatory regime.</p> <p>Many of the principles found in the Administration’s draft proposal are quite sensible as best practices, but the danger here is that they could soon be converted into a heavy-handed, bureaucratized regulatory regime for America’s highly innovative, data-driven economy.</p> <p>No matter how well-intentioned this proposal may be, it is vital to recognize that restrictions on data collection could negatively impact innovation, consumer choice, and the competitiveness of America’s digital economy.</p> <p>Online privacy and security are vitally important, but we should look to alternative and less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.</p> <p>That is why flexible data collection and use policies and evolving best practices will ultimately serve consumers better than one-size-fits-all, top-down regulatory edicts. Instead of imposing these FIPPs in a rigid regulatory fashion, privacy and security best practices will need to adapt gradually to new marketplace realities and be applied in a more organic and flexible fashion, often outside the realm of public policy.</p> <p>Regulatory approaches, like the Obama Administration’s latest proposal, will instead impose significant costs on consumers and the economy. Data is the fuel that powers our information economy. Privacy-related mandates that curtail the use of data to better target or personalize new services could raise costs for consumers. There is no free lunch. Something has to pay for all the wonderful free sites and services we enjoy today. 
If data can’t be used to cross-subsidize those services, prices will go up.</p> <p>Data regulations could also indirectly cost consumers by diminishing the abundance of content and culture now supported by the data-driven economy. In other words, even if prices and paywalls don’t go up, quantity or quality could suffer if data collection is restricted.</p> <p>Data regulations could also hurt the competitiveness of domestic markets and the global competitive advantage that America’s tech sector has in this space. That regulatory burden would fall hardest on smaller operators and new start-ups. Today’s “app economy” has given countless small innovators a chance to compete on even footing with the biggest players. Burdensome data collection restrictions could short-circuit the engine that drives entrepreneurial innovation among mom-and-pop companies if ad dollars get consolidated in the hands of only the larger companies that can afford to comply with new rules.</p> <p>We don’t want to go down the path the European Union charted in the 1990s with heavy-handed data directives. That suffocated high-tech entrepreneurialism and innovation there. America’s Internet sector came to be the envy of the world because our more flexible, light-touch regulatory regime leaves more breathing room for competition and innovation compared to Europe’s top-down regime. We should not abandon that approach now.</p> <p>Finally, the Obama Administration’s proposal deals exclusively with private sector data collection and has nothing to say about government surveillance activities. The Administration would be wise to channel its energies into that far more significant privacy problem first.</p> Fri, 27 Feb 2015 17:10:17 -0500 The Economic Situation, March 2015 <h5> Publication </h5> <p class="p1"><b>The US Locomotive Economy</b></p> <p class="p1">Has the US economy kicked off third quarter’s running cleats and slipped on bedroom shoes&nbsp;<span style="font-size: 12px;">with very soft soles? 
The running pace has changed abruptly. As the accompanying chart tells us,</span><span style="font-size: 12px;"> the second estimate for growth in the fourth quarter of 2014 fell to 2.2 percent from the third quarter’s hair-raising 5.0 percent. Is this the economic engine that is pulling the world economy? Yes, it’s the best engine the system has. So why the sudden shift to second gear? Weakness in the rest of the world is the major part of the story. Still seeking higher ground, Europe is slowly backing away from the edge of recession. China is running in third gear with its growth hitting 7 percent instead of its “normal” 10 percent. Canada and Mexico are moving along at 2.5 percent growth. And Japan’s economy has launched again but is just beginning to sail. The world economy is a mixed bag but still a decidedly weak one.</span></p> <p class="p1">Meanwhile, with the dollar getting good as gold, while others cut interest rates in the hope of <span style="font-size: 12px;">stimulating growth, US exports are falling and imports have surged. The chart’s white four-quarter moving average shows real GDP growth is averaging about 2.6 percent for the year. The gap between current growth and the 3.14 percent long-term average may look like a permanent feature of the data landscape, but most forecasters are betting the gap will be closed as 2015 progresses. As always, there are some special considerations. 
This time it&nbsp;</span><span style="font-size: 12px;">is energy, and this time the net effect is positive.</span></p> <p class="p1"><a href=""><img src="" width="585" height="427" /></a></p> <p class="p1"><b>More on the Energy Story</b></p> <p class="p1">The effects of the better than 50 percent decline in crude prices since June 2014 are now working&nbsp;<span style="font-size: 12px;">their way through the economy. US commentators cheered the explosive growth of shale oil&nbsp;</span><span style="font-size: 12px;">production that triggered the price decline, and they should have. As will be shown later, it was&nbsp;</span><span style="font-size: 12px;">growth in the shale oil states that propelled the US economy as it sailed out of the recession. But&nbsp;</span><span style="font-size: 12px;">folks on the other side of the pond—OPEC and its leader, Saudi Arabia—somehow felt&nbsp;</span><span style="font-size: 12px;">differently about the matter. Let’s face it, when prices fall, it matters whether you are a buyer or&nbsp;</span><span style="font-size: 12px;">seller, a producer or a consumer, and folks who have dominated a product market for decades&nbsp;</span><span style="font-size: 12px;">just don’t go quietly into the night. On balance, of course, the United States is a consumer.</span></p> <p class="p1">Lower energy prices are a boon to the economy, maybe adding as much as 0.50 percentage&nbsp;<span style="font-size: 12px;">points to GDP growth.</span><span style="font-size: 12px;">The decline in crude oil prices came when the Saudis targeted the United States and Asia with a&nbsp;</span><span style="font-size: 12px;">price cut, raised their price to Europe, and opened up the valves for more oil production. When&nbsp;</span><span style="font-size: 12px;">the price plummeted from $100 a barrel to $45, the Saudis responded with a smile. 
They are the world’s low-cost producer, and they have lots of loot in their sovereign fund for weathering a long price</span><span style="font-size: 12px;"> war. Holding market share seems to be their current strategy.</span></p> <p class="p3"><span style="font-size: 12px;">While consumers overall can enjoy large savings in transportation cost, something on the order of $750 a year for the average family, just where they live and work puts a different spin on that, too.</span></p> <p class="p1"><a href="">Continue reading</a></p> Fri, 27 Feb 2015 14:32:38 -0500 Brent Skorup Discusses the FCC's Vote on Net Neutrality on CNBC Asia <h5> Video </h5> <iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe> <p>Brent Skorup Discusses the FCC's Vote on Net Neutrality on CNBC Asia</p> Fri, 27 Feb 2015 11:52:20 -0500 Keith Hall to Direct Congressional Budget Office: Mercatus Center Commends Hall’s Continued Success <h5> Expert Commentary </h5> <p class="p1"><b>Arlington, Va.—</b>Starting April 1, Keith Hall will become the next Director of the Congressional Budget Office (CBO). 
Hall worked as a senior research fellow for the Mercatus Center at George Mason University from April 2012 to September 2014.&nbsp;</p> <p class="p1">“Keith Hall is a first-rate economist who understands fiscal responsibility and the importance of honest and accurate reporting of the numbers,” said <a href="">Tyler Cowen</a>, general director of the Mercatus Center.&nbsp; “I expect he will do a great job.”</p> <p class="p1">Now in its 40th year, the CBO&nbsp;supports the Congressional budget process by producing independent analyses of budgetary and economic issues. CBO is nonpartisan and does not make policy recommendations.&nbsp;</p> <p class="p1">Hall’s research at Mercatus focused on labor markets, labor market policy, and economic data.&nbsp; His Mercatus publications include&nbsp;<i><a href="">Opportunity, Mobility, and Inequality in Today’s Economy,</a>&nbsp;</i><a href=""><i>Dreams Deferred: Young Workers and Recent Graduates in the U.S. Economy</i></a><i>,&nbsp;</i>and<i>&nbsp;</i><a href=""><i>The Employment Costs of Regulations</i></a><i>.&nbsp;</i>Hall’s Mercatus Center biography, with links to his research and media appearances, is available at <a href=""></a>.&nbsp;</p> <p class="p1">Before his employment at the Mercatus Center, Hall served as the 13th Commissioner of the Bureau of Labor Statistics, the Chief Economist for the White House Council of Economic Advisors, and the Chief Economist for the US Department of Commerce.&nbsp;</p> Mon, 02 Mar 2015 11:26:11 -0500 Fundamentals of Budget Process <h5> Video </h5> <iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe> <p><span style="color: #333333; font-family: arial, sans-serif; font-size: 13px; font-style: normal;">In a Capitol Hill Campus event, David Primo, Associate Professor at the University of Rochester and senior scholar for the Mercatus Center, and Patrick Louis Knudsen, a former long-time policy director for the House Budget Committee, held a discussion on the 
congressional budget process.</span></p> Fri, 27 Feb 2015 11:25:10 -0500 Net Neutrality Rules Represent a Giant Step Backwards <h5> Expert Commentary </h5> <p>The Federal Communications Commission today voted, 3-2, that the Internet will be subject to many of the Title II regulatory provisions of the 1934 Communications Act. Applying Title II laws to broadband means regulating the Internet as a common carrier, akin to the telephone network, and gives significant control of the Internet to the FCC, lobbyists, and industry players.</p> <p>The Title II order and new net neutrality rules have not been released yet, but the thrust of the regulations is clear from commissioners’ statements and media reports. In short, the FCC’s rules represent a giant step backwards to the days of command-and-control of markets.</p> <p>The FCC’s actions derive in part from <a href="">the myth that the Internet is neutral</a>. In the evolving online world, the Internet gets less neutral—and better for consumers—every day. Through a hands-off approach from policymakers, the U.S. communications and technology sector has thrived as a supplier of innovation, but Title II rules effectively throw sand in the gears.</p> <p>If the FCC’s rules are not overturned by the courts, the days of permissionless innovation online come to a close. The application of Title II means new broadband services must receive approval from this federal agency. 
Companies in Silicon Valley will therefore rely increasingly on their regulatory compliance officers, not their engineers and designers.</p> <p>If courts do strike down the FCC’s net neutrality rules for a third time, the FCC should abandon its campaign to regulate the Internet. Instead, the Commission should focus on increasing broadband competition across the nation, thereby reducing prices and increasing the availability of new broadband services. There is plenty of work to be done on this front, but pursuing Title II net neutrality rules distracts the Commission and Congress from spearheading a pro-consumer innovation agenda.</p> Thu, 26 Feb 2015 13:30:36 -0500 Private Money in Virtual Worlds <h5> Expert Commentary </h5> <p><i>This article appears in the March edition of&nbsp;</i><a href=""><i>Reason Magazine</i></a></p><p>Edward Castronova initially started researching the economics of cybernetic worlds as a joke, whimsically gathering stats on buying and selling in the role-playing video game EverQuest starting in 2001. "Then," he says, "I saw how much money there is."</p> <p>For those who know where to look, virtual worlds contain many riches indeed, from in-game currencies to Amazon coins to frequent flyer rewards. Castronova, a professor of media at Indiana University, has distilled his years of observing human economic behavior in online environments into a new book, Wildcat Currency. Evoking the so-called "wildcat banking" period in the mid-19th century, when American banks were regulated only by state governments, the book's title refers to the burgeoning system of digital currencies proliferating in virtual worlds. 
Equal parts ethnography, prophecy, and pre-emptive funeral oration for ubiquitous state control of financial activity, Wildcat Currency outlines the unprecedented opportunities the author believes virtual currencies will afford to private individuals—and the problems this poses for the established world order.</p> <p>Could the mighty Federal Reserve one day be vanquished by the humble Linden Dollar? Linden Lab's official gig is developing the platform for Second Life, an online world intentionally designed without manufactured conflicts or set objectives. But for about a decade, the designers of this virtual world have stealthily moonlighted as a sort of central bank for its own ad hoc digital currency.</p> <p><a href="">Continue reading</a></p> Fri, 27 Feb 2015 17:30:14 -0500 No, Mr. Tarullo, We're Not All Macroprudentialists Now <h5> Expert Commentary </h5> <p>Federal Reserve Governor Daniel Tarullo began a <a href="">speech</a> last month by saying, "Standing in front of this audience I feel secure in observing that we are all macroprudentialists now." Having been a member of that audience, I can assure Mr. Tarullo that his statement was inaccurate. Macroprudentialists' intensifying focus on the asset management industry offers the latest glimpse into how such an approach could undermine financial stability.</p> <p>Mr. Tarullo explained that the macroprudential approach to regulation "focuses on the financial system as a whole, and not just the well-being of individual firms." Regulators are central to the macroprudential approach; only they have the breadth of vision to know how and when-for the good of the collective-to override careful decisions made by individual firms.</p> <p>The focus of Mr. Tarullo and other macroprudentialists has turned most recently to the asset management industry. Asset managers include the investment advisers and mutual fund companies that manage the investment portfolios of institutions and households. 
Asset managers control a lot of money—$63 trillion according to a recent <a href="">speech</a> by Mary Jo White, chair of the Securities and Exchange Commission, which oversees the asset management industry.</p> <p>Ms. White's colleagues on the Financial Stability Oversight Council—a collection of top financial regulators—are not confident that SEC oversight is adequate. The FSOC and its international cousin—the Financial Stability Board—are on the lookout for particular asset managers and asset management activities that might put the financial system at risk. Dodd-Frank gives the FSOC authority to make recommendations to the SEC about how it should regulate the asset management industry. The FSOC also can designate asset managers for regulation by the Federal Reserve.</p> <p>The FSOC is soliciting input on a <a href="!docketDetail;D=FSOC-2014-0001">document</a> that runs through worst-case scenarios in asset management. What if asset managers don't manage their funds "in a way that prevents or fully mitigates the risks to the investment vehicle and the broader financial system"? What if asset managers are forced to conduct fire sales, which could drive asset prices down? What if a key industry service provider goes out of business?</p> <p>The risks the FSOC described pale in comparison to the risks it could create by adding a new macroprudential regulatory layer to asset management. Attempts to centrally mitigate risk likely would create new risks by narrowing the differences in the way assets are managed. There are thousands of asset managers and mutual funds. Even very large mutual fund complexes employ many managers, each of whom takes her own approach to investing. More prescriptive regulation will eat away at that system-strengthening diversity.</p> <p>Mr. Tarullo envisions a macroprudential regime that "builds on the traditional investor protection and market functioning aims of securities regulation by incorporating a system-wide perspective." 
Asset managers will have the impossible task of balancing their fiduciary duties to their own funds and investors with regulatory obligations to do what's best for their competitors and the rest of the financial system.</p> <p>Using tools like stress tests and liquidity requirements, regulators would corral asset managers into similar strategies, assets, and risk management techniques. If regulators make bad choices, the entire industry will be affected. But even if regulators make good choices, making asset managers follow a single formula makes it more likely that the actions of one manager—such as asset sales to meet redemptions—would reverberate throughout the industry.</p> <p>Moreover, as bank regulators play an increasingly central role in regulating asset managers, the differences that distinguish the banking industry from the asset management industry will start to disappear. Shocks will more easily transmit across the entire financial sector. Imagine the scene as banks and asset managers all fight during a crisis for the safe assets that their common regulatory frameworks permit. When problems arise, taxpayer money will flow to all macroprudentially regulated corners as regulators seek to mask their mistakes.</p> <p>Regulators are not wrong to think about the stability of the whole financial system. They are wrong, however, to assume that centralized risk management will foster systemic stability. Instead, it will introduce new vulnerabilities into the financial system. These vulnerabilities likely will manifest themselves when the financial system is already under stress. Rather than seeking to extend macroprudential regulation, regulators should emphasize microprudential responsibility. Asset managers, governed by their legal responsibilities to their clients, need to plan for bad events. 
This is not a task that can be outsourced to government regulators.</p> Thu, 26 Feb 2015 12:58:02 -0500 The Official Unemployment Rate Isn’t the Complete Picture <h5> Publication </h5> <p>This week’s chart is <a href="">an updated comparison</a> of the different measurements of the unemployment rate from the Bureau of Labor Statistics (BLS). It includes <a href="">new data</a> on the official and alternative unemployment measurements for January 2015. The widely reported official unemployment rate, which remains the primary measure of labor market performance, is not the most realistic representation of the current state of the economy, because it fails to capture, among other things, individuals who have simply stopped looking for work. The limited perspective on the labor market offered by the official unemployment rate is readily apparent when compared to alternative measures of unemployment.</p><p>The chart displays the official unemployment rate, or U3 unemployment rate, alongside various alternative measures from 2005 to the most recent data in January 2015.</p> <p><a href=""><img src="" width="524" height="397" /></a></p><p>The most commonly reported unemployment rate—5.7 percent in January 2015—is defined as the number of people without jobs who are available to work and are actively seeking work in the four weeks preceding the survey as a percentage of the labor force (the sum of employed and unemployed persons in the economy). At first glance, the 5.7 percent official US unemployment rate appears to be good news. Indeed, the early data show that the economy did add 257,000 jobs in January.</p> <p>However, the official U3 number, represented as the blue area on the chart above, can be compared to the number of workers who are “officially” unemployed plus those categorized as “discouraged workers,” known as U4 unemployment. This is represented by the light blue portion of the graph. 
Discouraged workers are people who are able to work but cease searching for employment because they believe that no job opportunities exist for them. As of January 2015, the U4 unemployment rate was 6.1 percent, increasing by 0.1 percentage points from December 2014, the same increase as the official U3 unemployment rate. We would expect the U4 unemployment rate to decrease in healthy economic times as more workers have faith that they can find gainful employment. The U4 unemployment rate has not significantly decreased relative to the official U3 rate since the onset of the recent recession, suggesting persistent structural barriers to employment.</p> <p>Next, we can consider marginally attached workers with the U5 unemployment rate, represented as the orange portion on the graph. The BLS defines this group as persons who want and are available for work but who are not counted as unemployed under the official U3 measurement because they had not actively searched for work in the four weeks preceding the BLS survey. The U5 rate adds marginally attached workers to their measures of officially unemployed and discouraged workers. In December 2014, the U5 unemployment rate was 7.0 percent, rising by 0.1 percentage points from the month before.</p> <p>Finally, it is important to consider workers who are “underemployed.” This is represented by a final alternative measurement called the U6 unemployment rate, which adds part-time workers for economic reasons. Shown as the red portion on the graph above and totaling 11.3 percent in January 2015, it is considerably higher than the official unemployment rate and has risen by 0.1 from December 2014. This final measure provides the broadest picture of the current labor situation. 
At twice the official U3 unemployment rate, it has slightly declined since the onset of the recession but is still significantly higher than the pre-recession range of 8–9.3 percent from 2005 to 2007.</p> <p>Much of the decrease in the U3 unemployment rate is due to a decrease in the labor force participation rate—that is, fewer people of working age working or looking for work. If the labor force participation rate in January 2015 were the same as that in January 2014, the official U3 unemployment rate would be 5.8 percent. Adding in the alternative unemployment measures provides even less cause for optimism.</p> <p>This chart shows that many workers who do not fit the narrow criteria of the official unemployment measurement have struggled to find employment for years with limited success. It is important to remember these critical labor demographics in assessing the complete unemployment picture in the United States and in beginning a broader discussion about the institutional and other barriers to creating jobs.</p> Tue, 24 Feb 2015 17:27:43 -0500 Comprehensive Regulatory Impact Analysis: The Cornerstone of Regulatory Reform <h5> Publication </h5> <p>Good morning Chairman Johnson, Ranking Member Carper, and members of the committee. Thank you for inviting me to testify today.</p> <p>I am an economist and research fellow at the Mercatus Center, a 501(c)(3) research, educational, and outreach organization affiliated with George Mason University in Arlington, Virginia. I’ve previously served as a senior economist at the Joint Economic Committee and as deputy director of the Office of Policy Planning at the Federal Trade Commission. My principal research for the last 25 years has focused on the regulatory process, government performance, and the effects of government regulation. For these reasons, I’m delighted to testify on today’s topic.</p> <p>I work at a university. That means I’m for knowledge and against ignorance. 
I think that regulators have a moral responsibility to make decisions about regulations based on actual knowledge of a regulation’s likely effects—not just on hopes, intentions, or wishful thinking. A decision maker’s failure or refusal to acquire this knowledge before making decisions is a willful choice to act based on ignorance.</p> <p>Executive orders, and sometimes laws, seek to encourage regulatory agencies to act based on knowledge rather than ignorance. For more than three decades, presidents of both political parties have instructed executive branch agencies to conduct regulatory impact analysis when issuing significant regulations. Some independent agencies, such as the Securities and Exchange Commission, are required by law to assess the economic effects of their regulations. Executive orders and laws requiring economic analysis of regulations reflect a bipartisan consensus that economic analysis should inform, but not dictate, regulatory decisions. A good regulatory impact analysis also lays the groundwork for an effective retrospective review of the regulation by identifying the outcomes that should be tracked in order to assess whether the regulation accomplishes the desired goals.</p> <p>Unfortunately, agencies’ regulatory impact analyses are not nearly as informative as they ought to be, and there is often scant evidence that agencies have utilized any part of the analysis in making decisions. These problems have persisted through multiple administrations of both political parties. The problem is institutional, not partisan or personal. Improvement in the quality and use of regulatory impact analysis will likely occur only as a result of legislative reform of the regulatory process. 
To achieve improvement, all agencies should be required to conduct thorough and objective regulatory impact analysis for major regulations and to explain how the results of the analysis informed their decisions.</p> <p>Let me elaborate on each of these points.</p> <p><b>WHY REGULATORY IMPACT ANALYSIS IS NECESSARY</b></p> <p>We expect federal regulation to accomplish a lot of important things, such as protecting us from financial fraudsters, preventing workplace injuries, preserving clean air, and deterring terrorist attacks. Regulation also requires sacrifices; there is no free lunch. Depending on the regulation, consumers may pay more, workers may receive less, our retirement savings may grow more slowly due to reduced corporate profits, and we may have less privacy or less personal freedom. Regulatory impact analysis is the key tool that makes these tradeoffs more transparent to decision makers. So, understanding the effects of regulation has to start with sound regulatory impact analysis. A thorough regulatory impact analysis should do four things:</p> <ol> <li>Assess the nature and significance of the problem that the agency is trying to solve, so the agency knows whether there is a problem that could be solved through regulation. If there is, the agency can tailor a solution that will effectively solve the problem. </li><li>Identify a wide variety of alternative solutions. </li><li>Define the benefits that the agency seeks to achieve in terms of ultimate outcomes that affect citizens’ quality of life, and assess each alternative’s ability to achieve those outcomes. </li><li>Identify the good things that regulated entities, consumers, and other stakeholders must sacrifice in order to achieve the desired outcomes under each alternative. 
In economics jargon, these sacrifices are known as “costs,” but just like benefits, costs may involve far more than monetary expenditures.</li></ol> <p>Without all this information, regulatory decisions are likely to be based on hopes, intentions, and wishful thinking rather than on reality. Regulators should not adopt a regulation without knowing whether it will solve a significant problem at a reasonable cost. Given the enormous influence regulation has on our day-to-day lives, decision makers have a moral responsibility to act based on knowledge of regulation’s likely effects, not just good intentions.</p> <p>High-quality regulatory impact analysis is also essential for effective congressional oversight.</p> <p>Mechanisms that provide for congressional approval or disapproval of individual regulations, such as the Congressional Review Act or the proposed REINS Act, presume that members of Congress have thorough knowledge about the root cause of the problem that the regulation seeks to solve and about the benefits and costs of alternatives. After all, how can legislators make a responsible decision to approve or disapprove a regulation if they do not know whether the regulation solves a real problem or whether there is a better alternative solution than the proposed regulation? Oversight of existing regulatory programs also presumes that congressional committees have good information about the outcomes that the regulation is intended to achieve and the results that are expected. A high-quality regulatory impact analysis provides that information.</p> <p><a href="">Continue reading</a></p> Wed, 25 Feb 2015 10:56:45 -0500 Scrap Regulations that Thin the Ranks of Small Banks <h5> Expert Commentary </h5> <p>Regulatory burdens allow big banks to flourish at the expense of their smaller competitors. 
This has become so obvious in the aftermath of the Dodd-Frank Act that even Goldman Sachs chairman Lloyd Blankfein admits to it.</p> <p>The financial industry is an "expensive business to be in if you don't have the market share and scale," Blankfein <a href="">remarked</a> in a recent speech. Thanks to regulatory and technology demands, he said, "the barriers to entry [are] higher than at any other time in modern history."</p> <p>Blankfein's comments were made in the context of Goldman's institutional client services business. But these concerns are broadly applicable across the financial services industry. There is nothing wrong with big, established banks gaining market share because they offer the mix and quality of products and services that customers want. There is something wrong, however, with these large banks beating back their smaller, newer rivals with a club fashioned by Washington legislators and regulators.</p> <p>The Senate Banking Committee has considered the regulatory burden on community banks in two recent hearings. At <a href=";Hearing_ID=1f80703e-b15b-4392-88b0-5ae384e5377b">one of these hearings</a>, community bankers and credit union leaders testified about how their lending practices had been changed by Dodd-Frank.</p> <p>Before Dodd-Frank, financial institutions were able to make loans to customers with whom they had long relationships and whom they therefore had good reason to believe would be willing and able to repay. Now financial institutions are compelled to turn away these same customers because their loans would carry too much regulatory risk. Long-term, mutually beneficial banking relationships are crumbling as new regulations mount.</p> <p>The federal regulators who <a href=";Hearing_id=94d7e84d-3396-41cf-acdd-1d8aff386ca9">appeared</a> before the Banking Committee seemed unwilling to work to understand the depth of the problem. 
They complained that proposed legislative changes to help them better anticipate the costs and benefits of regulations before adopting them would make writing rules more difficult.</p> <p>It is true that rulemaking would be slowed if regulators were required to do the additional work upfront to answer the critical question of whether the benefits of a new rule outweigh the costs. But as I described in a 2012 <a href="">study</a>, the federal financial regulators charged with implementing Dodd-Frank are not doing the economic analysis that is required of other regulators. Doing so could save regulators the hassle of having to redo rules down the road when it becomes apparent that the rules are doing more harm than good.</p> <p>In a new Harvard Kennedy School <a href="">study</a> of the impact of regulations on community banks, authors Marshall Lux and Robert Greene urge policymakers to embrace economic analysis as a way to "ensure better-designed regulation in the future and avert unintended consequences that jeopardize lending market vitality." The paper discusses the important role that small banks play in the lending markets and finds that community banks' share of U.S. banking assets has dropped more than 12% since the passage of Dodd-Frank. Community banks, especially the smallest ones, also have seen their market share decline in a number of lending markets.</p> <p>Yet Sen. Elizabeth Warren posited at the Senate Banking Committee hearings that small banks' calls for regulatory relief are in fact veiled attempts by large banks to chip away at financial reform. After all, she argued, Dodd-Frank exempted community banks from certain provisions and authorized regulators to make additional adjustments.</p> <p>The problem is that exemptions don't always work. As Sen. Jeff Merkley explained, requirements can trickle down from big to small banks. 
He described how the following conversation often occurs between regulators and community bankers:</p> <p>"Well, you must do X."</p> <p>"Well, why is that?"</p> <p>"Well, it's a best practice, and so you really don't legally have to do it, but we expect you to do it."</p> <p>This scenario is consistent with the findings of a February 2014 <a href="">survey</a> of roughly 200 small banks published by the Mercatus Center at George Mason University. As one community bank respondent wrote, "Rules are written for the largest institutions in the country, yet the smaller institutions have to abide by the same rules."</p> <p>Even when small banks receive explicit exemptions, we found that rules nonetheless impose additional burdens upon them. For example, although the Bureau of Consumer Financial Protection does not directly supervise community banks, the new agency was one of the biggest concerns for the small banks in our survey. Similarly, nearly half the banks in our survey reported being affected by the Durbin Amendment, a price cap on debit card processing fees that expressly does not apply to small banks. Small banks also expressed considerable frustration that Dodd-Frank's regulatory burdens were not producing any offsetting benefits for bank customers.</p> <p>Poorly crafted regulations may warm some people's hearts because they enjoy sticking it to the financial industry. But the real victims of such regulatory vengeance are the individuals, companies, and communities that rely on banks of all sizes.</p> <p>We should get rid of regulations that cost more than they are worth. This is a sensible way to ensure that financial institutions — large, small, well-established and new — can serve customers effectively and affordably.</p> Tue, 24 Feb 2015 16:41:29 -0500 Dark Dollar Dealings <h5> Expert Commentary </h5> <p><em>Yes, bitcoin helps illicit business dealings. But so does the $100 bill.</em></p> <p>A U.S. 
District Court jury in Manhattan recently found Ross William Ulbricht guilty of seven charges, including narcotics conspiracy, engaging in a continuing criminal enterprise, conspiracy to commit computer hacking and money laundering conspiracy. Ulbricht is the founder of the Silk Road, an online marketplace where drugs and other illegal goods were listed for sale. The site was instrumental in the early success of the digital currency bitcoin. Ulbricht, who awaits sentencing in May, faces 20 years to life in prison.</p> <p>The Silk Road, which <a href="">Gawker described as “the Amazon of drugs,”</a> launched in February 2011. A <a href="">2013 study by Nicolas Christin</a>, assistant research professor at Carnegie Mellon University, confirmed that more than half of all items listed for sale on the site were illegal substances and that nearly half of all sellers were willing to ship goods anywhere in the world. Evidence presented at trial suggests the site generated more than $213 million in revenue. It was shut down in October 2013, after Ulbricht’s arrest.</p><p><a href="">Continue reading</a></p> Thu, 26 Feb 2015 14:20:01 -0500 Certificate-of-Need Laws: Implications for Virginia <h5> Publication </h5> <p>Thirty-six states and the District of Columbia currently limit entry or expansion of health care facilities through certificate-of-need (CON) programs. These programs prohibit health care providers from entering new markets or making changes to their existing capacity without first gaining the approval of state regulators. 
Since 1973, Virginia has been among the states that restrict the supply of health care in this way, with 19 devices and services—including acute hospital beds, magnetic resonance imaging (MRI) scanners, and computed tomography (CT) scanners—requiring a certificate of need from the state before the device may be purchased or the service may be offered.</p> <p>CON restrictions are in addition to the standard licensing and training requirements for medical professionals, but they are neither designed nor intended to ensure public health or to ensure that medical professionals have the necessary qualifications to do their jobs. Instead, CON laws are specifically designed to limit the supply of health care, and are traditionally justified with the claim that they reduce and control health care costs. The theory is that by restricting market entry and expansion, states might reduce overinvestment in facilities and equipment. In addition, many states—including Virginia—justify CON programs as a way to cross-subsidize health care for the poor. Under these “charity care” requirements, providers that receive a certificate of need are typically required to increase the amount of care they provide to the poor. In effect, these programs are intended to create quid pro quo arrangements: state governments restrict competition, increasing the cost of health care for some, and in return medical providers use these contrived profits to increase the care they provide to the poor.</p> <p>However, these claimed benefits have failed to materialize as intended. Recent research by Thomas Stratmann and Jacob Russ demonstrates that there is no relationship between CON programs and increased access to health care for the poor. There are, however, serious consequences for continuing to enforce CON regulations. In particular, for Virginia these programs could mean approximately 10,800 fewer hospital beds, 41 fewer hospitals offering MRI services, and 58 fewer hospitals offering CT scans. 
For those seeking quality health care throughout Virginia, this means less competition and fewer choices, without increased access to care for the poor.</p> <p><strong>The Rise of CON Programs</strong></p> <p>CON programs were first adopted by New York in 1964 as a way to strengthen regional health planning programs. Over the following 10 years, 23 other states adopted CON programs. Many of these programs were initiated as “Section 1122” programs, which were federally funded programs providing Medicare and Medicaid reimbursement for certain approved capital expenditures. Virginia enacted its first CON program in 1973. The passage of the National Health Planning and Resources Development Act of 1974, which made certain federal funds contingent on the enactment of CON programs, provided a strong incentive for the remaining states to implement CON programs. In the seven years following this mandate, nearly every state without a CON program took steps to adopt certificate-of-need statutes. By 1982 every state except Louisiana had some form of a CON program.</p> <p>In 1987, the federal government repealed its CON program mandate when the ineffectiveness of CON regulations as a cost-control measure became clear. Twelve states rapidly followed suit and repealed their certificate-of-need laws in the 1980s. By 2000, Indiana, North Dakota, and Pennsylvania had also repealed their CON programs. Since 2000, Wisconsin has been the only state to repeal its program.</p><p>Virginia remains among the 36 states, along with the District of Columbia, that continue to limit entry and expansion within their respective health care markets through certificates of need. On average, states with CON programs regulate 14 different services, devices, and procedures. Virginia’s CON program currently regulates 19 different services, devices, and procedures, which is more than the national average. 
As figure 1 shows, Virginia’s certificate-of-need program ranks 11th most restrictive in the United States.</p><p><strong><a href=""><img height="410" width="585" src="" /></a>Figure 1. Ranking of States by Number of Certificate-of-Need Laws</strong><br /> Note: Fourteen states either have no certificate-of-need laws or they are not in effect. In addition, Arizona is typically not counted as a certificate-of-need state, though it is included in this chart because it is the only state to regulate ground ambulance services.</p> <p><strong>Do CON Programs Control Costs and Increase the Poor’s Access to Care?</strong></p> <p>Many early studies of CON programs found that these programs fail to reduce investment by hospitals. These early studies also found that the programs fail to control costs. Such findings contributed to the federal repeal of CON requirements. More recently, research into the effectiveness of remaining CON programs as a cost-control measure has been mixed. While some studies find that CON regulations may have some limited cost-control effect, others find that strict CON programs may in fact increase costs by 5 percent. The latter finding is not surprising, given that CON programs restrict competition and reduce the available supply of regulated services.</p> <p>While there is little evidence to support the claim that certificates of need are an effective cost-control measure, many states continue to justify these programs using the rationale that they increase the provision of health care for the poor. To achieve this, 14 states—including Virginia—include some requirement for charity care within their respective CON programs. This is what economists have come to refer to as a “cross subsidy.”</p> <p>The theory behind cross-subsidization through these programs is straightforward. 
By limiting the number of providers that can enter a particular practice and by limiting the expansion of incumbent providers, CON regulations effectively give a limited monopoly privilege to providers that receive approval in the form of a certificate of need. Approved providers are therefore able to charge higher prices than would be possible under truly competitive conditions. As a result, it is hoped that providers will use their enhanced profits to cover the losses from providing otherwise unprofitable, uncompensated care to the poor. In effect, those who can pay are charged higher prices to subsidize those who cannot.</p> <p>In reality, however, this cross-subsidization is not occurring. While early studies found some evidence of cross-subsidization among hospitals and nursing homes, the more recent academic literature does not show evidence of this cross-subsidy taking place. The most comprehensive empirical study to date, conducted by Thomas Stratmann and Jacob Russ, finds no relationship between certificates of need and the level of charity care.</p> <p><strong>The Lasting Effects of Virginia’s CON Program</strong></p> <p>While certificates of need are neither controlling costs nor increasing charity care, they continue to have lasting effects on the provision of health care services both in Virginia and in the other states that continue to enforce them. However, these effects have largely come in the form of decreased availability of services and lower hospital capacity.</p> <p>In particular, Stratmann and Russ present several striking findings regarding the provision of health care in states implementing CON programs. First, CON programs are correlated with fewer hospital beds. Throughout the United States there are approximately 362 beds per 100,000 persons. However, in states such as Virginia that regulate acute hospital beds through their CON programs, Stratmann and Russ find 131 fewer beds per 100,000 persons. 
In the case of Virginia, with its population of approximately 8.26 million, this could mean about 10,800 fewer hospital beds throughout the state as a result of its CON program.</p> <p>Moreover, several basic health care services that are used for a variety of purposes are limited because of Virginia’s CON program. Across the United States, an average of six hospitals per 500,000 persons offer MRI services. In states such as Virginia that regulate the number of hospitals with MRI machines, the number of hospitals that offer MRIs is reduced by 2.5 per 500,000 persons. This could mean 41 fewer hospitals offering MRI services throughout Virginia. The state’s CON program also affects the availability of CT services. While an average of nine hospitals per 500,000 persons offer CT scans, CON regulations are associated with a 37 percent decrease in these services. For Virginia, this could mean about 58 fewer hospitals offering CT scans.</p> <p><strong>Conclusion</strong></p> <p>While CON programs were intended to limit the supply of health care services within a state, proponents claim that the limits were necessary to either control costs or increase the amount of charity care being provided. However, 40 years of evidence demonstrate that these programs do not achieve their intended outcomes, but rather decrease the supply and availability of health care services by limiting entry and competition. For policymakers in Virginia, this situation presents an opportunity to reverse course and open the market for greater entry, more competition, and ultimately more options for those seeking care.</p><p>&nbsp;</p> Thu, 26 Feb 2015 16:05:47 -0500
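<p>As a sanity check, the Virginia back-of-envelope figures in the certificate-of-need item above follow directly from the per-capita point estimates quoted in its text. The sketch below simply scales those estimates to Virginia’s approximate population; the CT figure is omitted because it depends on the exact point estimate in the underlying study.</p>

```python
# Scale the Stratmann and Russ per-capita point estimates (as quoted in
# the text above) to Virginia's approximate population.
va_population = 8_260_000     # "approximately 8.26 million"

beds_fewer_per_100k = 131     # fewer acute hospital beds per 100,000 persons
mri_fewer_per_500k = 2.5      # fewer hospitals offering MRI per 500,000 persons

fewer_beds = beds_fewer_per_100k * va_population / 100_000
fewer_mri = mri_fewer_per_500k * va_population / 500_000

print(f"Fewer hospital beds: about {fewer_beds:,.0f}")        # ~10,821; the text rounds to 10,800
print(f"Fewer hospitals offering MRI: about {fewer_mri:.0f}")  # ~41
```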