Jerry Ellig on Dynamic Competition and Rational Regulation: Selected Articles and Commentary
Ebook: 815 pages, 18 hours

About this ebook

In the United States, the express purpose of regulation is, according to a 1993 executive order, to "protect or improve the health and safety of the public, the environment, or the well-being of the American people." However well intentioned, all human action carries with it the potential for secondary, sometimes negative, consequences. In the c

Language: English
Publisher: Mercatus Center at George Mason University
Release date: December 15, 2021
ISBN: 9781942951674
Author

Jerry Ellig

In his academic research, Jerry Ellig (1962-2021) focused on regulatory impact analysis, regulation of network industries, and performance management in government. But to ensure his work would make a difference in people's lives, he never forgot the need for a "bridge" connecting the academy and the policy world. It was one he traversed, back and forth, throughout his career. An assistant professor of economics at George Mason University between 1989 and 1995, Ellig served as a senior economist for the Joint Economic Committee of the US Congress (1995-96) before returning to Mason to join the university's Mercatus Center as a senior research fellow. Between 2001 and 2003 Ellig was deputy director and acting director of the Office of Policy Planning at the Federal Trade Commission, after which he again returned to Mercatus and to a position as adjunct professor in Mason's School of Law (2005-08). In 2017 Ellig became chief economist at the Federal Communications Commission and a year later joined the George Washington University Regulatory Studies Center as a research professor. Ellig received his BA in economics from Xavier University and his MA and PhD in economics from Mason. He passed away suddenly in January 2021, leaving behind his wife of 28 years, Sandy Chiong, and their daughter, Katherine.


    Introduction

    Dynamic Competition, Rational Regulation, and Jerry Ellig

    Susan E. Dudley and Patrick A. McLaughlin

    We’ve been lucky to call Jerry Ellig a friend and colleague at the George Washington University Regulatory Studies Center and the Mercatus Center at George Mason University. His untimely death cut short a productive and generous career, and we hope this collection provides a glimpse of his creative mind and sense of humor to those who didn’t have the good fortune of knowing him.

    Jerry was a prolific writer, and this book captures only a fraction of his scholarship and popular writing. We hope the selected items reflect some of what made him such an impressive scholar and collaborator. Whereas Jerry was brilliant at applying economic concepts and empirical analysis to improve public policy through academic publishing, he was never content simply to write for other academics. He enjoyed not only studying but also finding solutions to real-world problems in government and academia. And because Jerry was a great communicator, he was able to take his academic work and translate it for different audiences, through congressional testimony, seminars, op-eds, short presentations, and classroom teaching. He was also a generous mentor, supporting and collaborating with graduate students and colleagues to publish prolifically in peer-reviewed economics, public administration, and political science journals, as well as law reviews and more popular publications.

    This book is divided into five parts, each reflecting an area where Jerry made significant contributions to the public policy, law, and economics literature. For each, in addition to scholarly writing, we have tried to include shorter pieces that communicated his research insights through more popular media. Because he was such a prolific writer, these selections only scratch the surface of his scholarship. A full bibliography of Jerry’s work can be found on the GW Regulatory Studies Center’s and Mercatus Center’s websites.

    Jerry published 4 books, 16 book chapters, 87 journal articles, and numerous op-eds, essays, and commentary. The breadth, depth, and sheer fecundity of his scholarship presented us with several difficult choices when assembling this book. Many excellent papers were not included, although we tried to at least include an op-ed or congressional testimony that contained some of the essential ideas of the papers that we had to omit from the volume. For example, Jerry wrote a report for the Administrative Conference of the United States (ACUS) titled “Agency Economists,” which discusses how the structure of a regulatory agency can affect the quality of economic analysis of regulations and the odds that economic analysis will be used in decision-making. Jerry’s expertise in this area wasn’t merely academic. He helped stand up the Office of Economics and Analytics at the Federal Communications Commission (FCC) in 2018 and made sure that the new office was structured in a way that incorporated the best practices that he wrote about in that ACUS report. Although we were not able to include the ACUS report in this collection, we did include one of Jerry’s op-eds about the new office at the FCC (“Improved Economic Analysis Should Be Lasting Part of Pai’s FCC Legacy”), which summarizes many of the points made in Jerry’s ACUS report.

    The first part of this book includes papers and commentary on the topic of regulatory process. It’s fitting for this to be the first of the five parts, because regulatory process reform permeated almost all of Jerry’s work. And understandably so: process and regulations go together like longboards and tiki drinks—that is, they’ve been linked from the beginning and appreciated as complements by those who dig deep into the subject. In the United States, key aspects of the regulatory process were defined in the Administrative Procedure Act of 1946, and that process has not changed substantially since then.

    But that’s not for a lack of trying. A significant portion of the ink that has been spilled at the intersection of administrative law and economics has focused on potential reforms to the regulatory process. Jerry was no exception. We doubt that any of the regulatory studies that Jerry wrote, co-wrote, or even commented on lacked some lesson about the potential for improving the regulatory process.

    Where Jerry was exceptional, however, was in his commitment to large-scale, multiyear research projects that could help inform regulatory process reform with novel empirics and insights. The Regulatory Report Card, for example, was initiated in 2008 and ran through 2013. It involved training and managing scores of students and at least nine other scholars (including one of us [Patrick McLaughlin], John Morrall, Richard Williams, Sean Mulholland, Todd Nesbit, Feler Bose, James Broughel, Anthony Dnes, and Michael Marlow). As Jerry wrote in a 2016 working paper, “Better analysis is an input into better regulation.” The Regulatory Report Card project and its outputs, like much of Jerry’s work, focused on how economic analysis of regulations could be better performed and used in regulatory decision-making. The scale and scope of the project make it stand above all other attempts to assess the quality and use of economic analysis in the regulatory process. While Jerry’s work has undoubtedly led to some improvements in the quality of analysis—that is, in how specific regulatory agencies perform economic analysis of regulations—his words on the actual use of economic analysis by policymakers themselves seem, unfortunately, timeless:

    Under the current regulatory process, ignoring analysis is any administration’s prerogative. Some argue that this is perfectly proper in a democratic society, but such ignorance has real consequences for real people. When administrations skimp on regulatory analysis, they issue regulations without knowing whether a significant problem exists, the root cause of the problem, alternative solutions that address the root cause, the effectiveness of each alternative in solving the problem, the benefits to society of each alternative, and the cost to society of each alternative. Citizens should question whether ignorance of these factors is acceptable for regulations that affect hundreds of millions of Americans and impose hundreds of millions of dollars in costs. (Jerry Ellig, Evaluating the Quality and Use of Regulatory Impact Analysis: The Mercatus Center’s Regulatory Report Card, 2008–2013, Mercatus Working Paper, 2016, 85)

    The second part, Lessons from Regulatory Reform, highlights both the empirical nature of Jerry’s work as well as the humility that Jerry always exhibited. Jerry was unafraid to give others credit or to learn from the experiences (and data) of the reforms of yesteryears. For example, the 1970s and 1980s witnessed an extraordinary phenomenon. Bipartisan efforts across all branches of government led to a wave of economic deregulation in industries that had previously been characterized by federal price controls, quality standards, and restrictions on entry and exit. This economic deregulation was fueled by theoretical and empirical academic research showing that regulations tended to benefit incumbent companies, at the expense of consumers and innovators; economic deregulation was then championed by policy entrepreneurs in think tanks and government who were able to translate those ideas into action.

    Jerry was too young to contribute to this remarkable deregulation movement, but he was most certainly influenced by it, and during his career, he embodied the roles of both the academic and the policy entrepreneur. One area where his contributions have been most significant is in documenting, retrospectively, the long-term effects of the economic deregulation of the 1970s and 1980s. This part of the book reproduces a few of his most influential articles that quantify the extent to which the dynamic competition unleashed by deregulation increased innovation, lowered prices, and enhanced consumer welfare and that draw epistemic lessons from that experience. At the time of this writing, those lessons of reform are being forgotten by actors across the political spectrum, so his insights are particularly valuable.

    Part three contains several pieces from an earlier stage in Jerry’s career, when he was focused on competition and regulation. The first selection, “A Taxonomy of Dynamic Competition Theories” (written with Daniel Lin), is an accessible yet deeply insightful examination of the nature of competition and the policy implications of different theories; it should be required reading for all regulators. In the early 2000s, he took a leave from George Mason University to work in the policy office at the Federal Trade Commission, eventually becoming the head of the office. From that perch, he and his team intervened in state policy issues, discouraging unnecessary licensing requirements and encouraging greater competition that would benefit consumers, workers, and businesses. For example, they showed empirically that state laws that allowed only funeral homes to sell caskets dramatically increased the costs of funerals by preventing bereaved parties from purchasing caskets online. His work on internet wine sales (with Alan Wiseman) helped pave the way for consumers to purchase wine online and was cited in a Supreme Court case on the topic.

    In part four, we have included a few papers that focus on network industries such as telecommunications, rail transportation, and passenger aviation. In some ways, this is the section of the book with which many traditional, academic economists will be most comfortable. For a large segment of economists, regulation means antitrust regulation—the narrow set of rules that ostensibly limit monopoly power in the name of competition. Of course, regulation went far beyond mere economic regulation of network industries and into social regulation many decades ago, but regulations applied to network industries remain an important aspect both of the state of regulation in the United States and of Jerry’s scholarship.

    Part five contains some of the research from the final years of Jerry’s career, when he began empirically exploring whether judicial review of regulatory impact analyses could have a positive effect on their quality and use. Collaborating with attorney Reeve Bull, he conducted empirical analysis that showed that reviewing courts were quite capable of evaluating the quality of agencies’ regulatory justifications and that judicial review improved future analysis conducted by agencies. Their research also showed that agencies conduct better analysis, and that courts more effectively review that analysis, when Congress provides clear statutory direction concerning how agencies are to account for the economic effects of their regulations.

    While we believe that we captured the essence of most of the major thrusts of Jerry’s writings, we also recognize that his work contains more than a single book can include. Our hope is that the volume will at least convey to the reader an appreciation of the methodical, rational approach that Jerry applied to various areas of regulatory economics and public policy and provide a road map to learning more about any of the topics.

    This book attempts to highlight some of Jerry’s scholarship, but it can’t capture what were perhaps his most outstanding qualities—his personal humility and good-natured approach to work and life. Never one to indulge in gossip or negativity, he endeared himself to people, even those who disagreed with him, because he was genuinely good and kind, unpretentious and honest, open-minded and laugh-out-loud funny. Everyone who worked with him will miss not only his intelligence and productivity but also his affability, Hawaiian shirts, trademark parrot tie, and wit.

    PART I

    Improving Regulatory Process

    Comprehensive Regulatory Impact Analysis

    The Cornerstone of Regulatory Reform

    Good morning Chairman Johnson, Ranking Member Carper, and members of the committee. Thank you for inviting me to testify today.

    I am an economist and research fellow at the Mercatus Center, a 501(c)(3) research, educational, and outreach organization affiliated with George Mason University in Arlington, Virginia. I’ve previously served as a senior economist at the Joint Economic Committee and as deputy director of the Office of Policy Planning at the Federal Trade Commission. My principal research for the last 25 years has focused on the regulatory process, government performance, and the effects of government regulation. For these reasons, I’m delighted to testify on today’s topic.

    I work at a university. That means I’m for knowledge and against ignorance. I think that regulators have a moral responsibility to make decisions about regulations based on actual knowledge of a regulation’s likely effects—not just on hopes, intentions, or wishful thinking. A decision maker’s failure or refusal to acquire this knowledge before making decisions is a willful choice to act based on ignorance.

    Executive orders, and sometimes laws, seek to encourage regulatory agencies to act based on knowledge rather than ignorance. For more than three decades, presidents of both political parties have instructed executive branch agencies to conduct regulatory impact analysis when issuing significant regulations. Some independent agencies, such as the Securities and Exchange Commission, are required by law to assess the economic effects of their regulations. Executive orders and laws requiring economic analysis of regulations reflect a bipartisan consensus that economic analysis should inform, but not dictate, regulatory decisions. A good regulatory impact analysis also lays the groundwork for an effective retrospective review of the regulation by identifying the outcomes that should be tracked in order to assess whether the regulation accomplishes the desired goals.

    Unfortunately, agencies’ regulatory impact analyses are not nearly as informative as they ought to be, and there is often scant evidence that agencies have utilized any part of the analysis in making decisions. These problems have persisted through multiple administrations of both political parties. The problem is institutional, not partisan or personal. Improvement in the quality and use of regulatory impact analysis will likely occur only as a result of legislative reform of the regulatory process. To achieve improvement, all agencies should be required to conduct thorough and objective regulatory impact analysis for major regulations and to explain how the results of the analysis informed their decisions.

    Let me elaborate on each of these points.

    WHY REGULATORY IMPACT ANALYSIS IS NECESSARY

    We expect federal regulation to accomplish a lot of important things, such as protecting us from financial fraudsters, preventing workplace injuries, preserving clean air, and deterring terrorist attacks. Regulation also requires sacrifices; there is no free lunch. Depending on the regulation, consumers may pay more, workers may receive less, our retirement savings may grow more slowly due to reduced corporate profits, and we may have less privacy or less personal freedom. Regulatory impact analysis is the key tool that makes these tradeoffs more transparent to decision makers. So, understanding the effects of regulation has to start with sound regulatory impact analysis.

    A thorough regulatory impact analysis should do four things:

    1. Assess the nature and significance of the problem that the agency is trying to solve, so the agency knows whether there is a problem that could be solved through regulation. If there is, the agency can tailor a solution that will effectively solve the problem.

    2. Identify a wide variety of alternative solutions.

    3. Define the benefits that the agency seeks to achieve in terms of ultimate outcomes that affect citizens’ quality of life, and assess each alternative’s ability to achieve those outcomes.

    4. Identify the good things that regulated entities, consumers, and other stakeholders must sacrifice in order to achieve the desired outcomes under each alternative. In economics jargon, these sacrifices are known as costs, but just like benefits, costs may involve far more than monetary expenditures.

    Without all this information, regulatory decisions are likely to be based on hopes, intentions, and wishful thinking rather than on reality. Regulators should not adopt a regulation without knowing whether it will solve a significant problem at a reasonable cost. Given the enormous influence regulation has on our day-to-day lives, decision makers have a moral responsibility to act based on knowledge of regulation’s likely effects, not just good intentions.

    High-quality regulatory impact analysis is also essential for effective congressional oversight.

    Mechanisms that provide for congressional approval or disapproval of individual regulations, such as the Congressional Review Act or the proposed REINS Act, presume that members of Congress have thorough knowledge about the root cause of the problem that the regulation seeks to solve and about the benefits and costs of alternatives. After all, how can legislators make a responsible decision to approve or disapprove a regulation if they do not know whether the regulation solves a real problem or whether there is a better alternative solution than the proposed regulation? Oversight of existing regulatory programs also presumes that congressional committees have good information about the outcomes that the regulation is intended to achieve and the results that are expected. A high-quality regulatory impact analysis provides that information.

    SHORTCOMINGS IN THE QUALITY AND USE OF REGULATORY IMPACT ANALYSIS

    Scholarly research demonstrates that regulatory impact analysis often falls short of the standards articulated in executive orders and in guidance from the Office of Management and Budget (OMB). More often than not, agencies appear not to use regulatory impact analyses to inform major decisions. Instead, regulatory impact analyses have often seemed to be advocacy documents written to justify decisions that were already made.¹

    [Figure 1: Average scores for the four major elements of regulatory impact analysis. Source: Author’s calculations based on data available at www.mercatus.org/reportcard.]

    The Mercatus Center’s Regulatory Report Card provides some of the most recent evidence on the quality and use of regulatory impact analysis.² The Regulatory Report Card is a qualitative evaluation of both the quality and use of regulatory analysis in federal agencies. The scoring process uses 12 criteria based on Executive Order 12866 and OMB guidance to agencies.³ For each criterion, trained evaluators assign a score ranging from 0 (no useful content) to 5 (comprehensive analysis with potential best practices).⁴ The Report Card assesses how well a notice of proposed rulemaking and the accompanying regulatory impact analysis comply with key principles in Executive Order 12866. The scores do not assess whether the evaluators agree with the results of the analysis or believe the regulation is a good idea.
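
    As a rough illustration only (the criterion names and scores below are hypothetical, not actual Report Card criteria or data), the averaging step behind the element scores discussed next might be sketched as follows:

        # Hypothetical illustration of averaging Report Card-style scores.
        # Criterion names and values are made up; the real Report Card uses
        # 12 criteria, each scored 0 (no useful content) to 5 (comprehensive).
        from statistics import mean

        scores = {
            "systemic problem": {"problem identified": 3, "root cause": 2, "supporting evidence": 2},
            "alternatives": {"range of alternatives": 3, "comparison of alternatives": 2},
            "benefits": {"outcomes defined": 3, "benefits estimated": 2},
            "costs": {"costs identified": 3, "costs estimated": 2},
        }

        for element, criteria in scores.items():
            print(f"{element}: average score {mean(criteria.values()):.1f} out of 5")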

    The online Report Card database now includes evaluations of every economically significant prescriptive regulation proposed between 2008 and 2012—a total of 108 regulations.⁵ Figure 1 shows average scores for the four major elements of regulatory impact analysis. None of the average scores exceed 3.2 out of 5 possible points. If I were assigning letter grades, every one of these regulatory impact analyses would earn an F.

    [Figure 2: Scores for agencies’ claimed or apparent use of regulatory impact analysis. Source: Author’s calculations based on data available at www.mercatus.org/reportcard.]

    The broadest Report Card criterion, which measures use of analysis, asks whether the agency claimed to use or appeared to use any part of the analysis to guide any of the decisions. As figure 2 demonstrates, agencies often fail to provide any significant evidence that any part of the regulatory impact analysis helped inform their decisions. Perhaps the analysis affects decisions more frequently than these statistics suggest, but agencies fail to document this, either in the notice of proposed rulemaking or in the regulatory impact analysis. If the analyses are informing agency decisions more than documented, then at a minimum there is a significant transparency problem.

    For each Report Card criterion, we have found a few examples that demonstrate reasonably good quality or use of analysis. These are documented in past testimony and in a series of short Mercatus on Policy publications.⁶ But best practices are not widespread.

    Unfortunately, these less-than-stellar Report Card results are consistent with prior published research on regulatory impact analysis. Case studies document instances in which regulatory impact analysis helped improve regulatory decisions by providing additional options that regulators could consider or by unearthing new information about benefits or costs of particular modifications to the regulation.⁷ But studies by the Government Accountability Office (GAO) and scholarly research reveal that in many cases, regulatory impact analyses are not sufficiently complete to serve as a guide for agency decisions. The quality of analysis varies widely, but even the most elaborate analyses have problems.⁸ Surveying the scholarly evidence as of 2008, Robert Hahn and Paul Tetlock concluded that economic analysis has not had much impact, and the general quality of regulatory impact analysis is low.⁹ The Mercatus Center’s Regulatory Report Card suggests that matters have not improved since then.

    What I have just said may seem to contradict a GAO report released in September 2014 that was prepared at the request of Senator Johnson and Senator Warner.¹⁰ The GAO report defined four major elements of regulatory analysis: discussion of the need for the regulatory action, analysis of alternatives, and assessment of both the benefits and the costs of the regulation. For economically significant regulations issued between July 1, 2011, and June 30, 2013, the GAO found that agencies always included a statement of the regulation’s purpose, discussed alternatives 81 percent of the time, always discussed benefits and costs, provided a monetized estimate of costs 97 percent of the time, and provided a monetized estimate of benefits 76 percent of the time. This sounds pretty good.

    A closer look at the report, however, reveals that GAO did not evaluate the quality of any of these aspects of agencies’ analysis. The report notes, “[O]ur analysis was not designed to evaluate the quality of the cost-benefit analysis in the rules. The presence of all key elements does not provide information regarding the quality of the analysis, nor does the absence of a key element necessarily imply a deficiency in a cost-benefit analysis.”¹¹ For example, GAO checked to see whether an agency included a statement of the purpose of the regulation, but it apparently accepted a statement that the regulation is required by law as a sufficient statement of purpose.¹² Citing a statute is not the same thing as articulating a goal or identifying the root cause of the problem that an agency seeks to solve. Similarly, an agency can provide a monetary estimate of some benefits or costs without necessarily addressing all major benefits or costs that the regulation is likely to create. GAO notes that it did not ascertain whether agencies addressed all relevant benefits or costs.¹³

    IMPROVEMENT IN THE QUALITY AND USE OF REGULATORY IMPACT ANALYSIS REQUIRES REFORM OF THE REGULATORY PROCESS

    The problems identified by the Report Card have occurred under both President Bush and President Obama. An econometric analysis that controls for other factors affecting the quality and use of analysis finds that there is no statistically significant difference in Report Card scores between the Bush and Obama administrations, although Bush administration regulations that cleared review by the Office of Information and Regulatory Affairs (OIRA) after June 1, 2008, tended to have lower Report Card scores.¹⁴ Previous research by other scholars also finds little variation in the quality of regulatory impact analysis across administrations of different parties.¹⁵ Another consistent—but disturbing—pattern is that administrations of both parties appear to require less thorough analyses from agencies that are more central to the administration’s policy priorities. The Bush administration, for example, permitted the Department of Homeland Security to proceed with a number of regulations that were accompanied by very incomplete regulatory impact analysis; the Obama administration did likewise with the first major regulations implementing the Patient Protection and Affordable Care Act.¹⁶ This same pattern appears to occur with other agencies.¹⁷

    The persistence of mediocre regulatory impact analysis across administrations is an institutional problem, not a personal or partisan problem. Executive branch agencies often produce mediocre regulatory analysis in spite of executive orders and OIRA review. This happens for two related reasons. First, since executive orders are the president’s instructions to agencies, agencies can ignore the analytical requirements when the White House decides that other priorities take precedence. Second, OIRA review essentially means that the administration reviews its own regulations. Since OIRA’s decision to block a regulation can be appealed to the vice president, the OIRA administrator can credibly threaten to block a regulation only if he knows he can win the ensuing political argument within the administration.

    Scholarly research has found that many independent agencies conduct even less thorough economic analysis than executive branch agencies.¹⁸ Independent agencies are not currently subject to the executive orders on regulatory analysis and review. Some, such as the Securities and Exchange Commission, are required by law to conduct economic analysis when determining whether their regulations are in the public interest. Others have no such requirement.

    A statutory requirement that all regulatory agencies conduct regulatory impact analysis and explain how it informed their decisions, combined with judicial review to ensure that the analysis and explanation meet minimum quality standards, would provide a clear legal requirement and a credible enforcement mechanism. Courts routinely weigh the strength of conflicting evidentiary claims, guided by statutory language specifying the standards for review. There is no reason courts could not perform the procedural task of checking to see that agencies adequately perform the analysis that the statute instructs them to perform and clearly explain how the analysis affected their decisions about regulations. The Securities and Exchange Commission, for example, has pledged to improve its economic analysis, employing the principles in Executive Order 12866, after losing several court cases due to insufficient analysis.¹⁹

    To enforce the law, judges would not need to engage in benefit-cost balancing nor would they need to second-guess agency policy decisions. They would merely need to check that an agency’s analysis covered the topics specified in the law (such as analysis of the systemic problem, development of alternatives, and assessment of benefits and costs of alternatives), ensure that the analysis included the quality of evidence required by the legislation, and ensure that the agency explained how the results of the analysis affected its decisions.

    Debates over regulatory process reform often take a distinctly partisan tone. But the fundamental conflict in the debate over regulatory process reform is not Republicans versus Democrats, liberals versus conservatives, or even business versus the public. It’s knowledge versus ignorance. Decision makers should choose knowledge over ignorance.

    Thank you for your time, and I look forward to your questions.

    NOTES

    1. Richard Williams, The Influence of Regulatory Economists in Federal Health and Safety Agencies (Working Paper No. 08-15, Mercatus Center at George Mason University, July 2008), http://mercatus.org/publication/influence-regulatory-economists-federal-health-and-safety-agencies; Wendy E. Wagner, The CAIR RIA: Advocacy Dressed Up as Policy Analysis, in Reforming Regulatory Impact Analysis, ed. Winston Harrington et al. (Washington, DC: Resources for the Future, 2009), 57.

    2. The Report Card methodology and 2008 scoring results are in Jerry Ellig and Patrick McLaughlin, The Quality and Use of Regulatory Analysis in 2008, Risk Analysis 32, no. 5 (May 2012): 855–80. For a summary of the 2008–2012 findings, see Jerry Ellig and Sherzod Abdukadirov, Regulatory Analysis and Regulatory Reform (Mercatus on Policy, Mercatus Center at George Mason University, Arlington, VA, January 2015), http://mercatus.org/publication/regulatory-analysis-and-regulatory-reform-update.

    3. Exec. Order No. 12866, 58 Fed. Reg. 51735–44 (1993). President Obama reaffirmed Executive Order 12866 in Executive Order 13563, Improving Regulation and Regulatory Review, 76 Fed. Reg. 3821–23 (2011).

    4. For the first several years, the evaluators were senior Mercatus Center regulatory scholars and graduate students trained in regulatory impact analysis. Since 2010, we have developed a nationwide team of economics professors who serve as evaluators in conjunction with senior Mercatus Center regulatory scholars. Biographical information on current evaluators is available at www.mercatus.org/reportcard.

    5. Prescriptive regulations are what most people think of when they think of regulations: they mandate or prohibit certain activities. This is distinct from budget regulations, which implement federal spending programs or revenue collection measures. The Report Card evaluated budget regulations in 2008 and 2009, then discontinued evaluating budget regulations in subsequent years because it was clear that the budget regulations have much lower-quality analysis. See Patrick A. McLaughlin and Jerry Ellig, Does OIRA Review Improve the Quality of Regulatory Impact Analysis? Evidence from the Bush II Administration, Administrative Law Review 63 (2011): 179–202; Jerry Ellig, Patrick A. McLaughlin, and John F. Morrall III, Continuity, Change, and Priorities: The Quality and Use of Regulatory Analysis across U.S. Administrations, Regulation & Governance 7, no. 2 (2013): 153–73.

    6. Jerry Ellig, Look Before You Leap: Improving Pre-proposal Regulatory Analysis (Testimony before the Committee on the Judiciary, Subcommittee on the Courts, Commercial, and Administrative Law, US House of Representatives, March 29, 2011); Jerry Ellig and James Broughel, Regulation: What’s the Problem? (Mercatus on Policy, Mercatus Center at George Mason University, Arlington, VA, November 2011); Jerry Ellig and James Broughel, Regulatory Alternatives: Best and Worst Practices (Mercatus on Policy, Mercatus Center at George Mason University, Arlington, VA, February 2012); Jerry Ellig and James Broughel, Baselines: A Fundamental Element of Regulatory Impact Analysis (Mercatus on Policy, Mercatus Center at George Mason University, Arlington, VA, June 2012).

    7. Winston Harrington, Lisa Heinzerling, and Richard D. Morgenstern, eds., Reforming Regulatory Impact Analysis (Washington, DC: Resources for the Future, 2009); Richard D. Morgenstern, ed., Economic Analyses at EPA: Assessing Regulatory Impact (Washington, DC: Resources for the Future, 1997); Thomas O. McGarity, Reinventing Rationality: The Role of Regulatory Analysis in the Federal Bureaucracy (New York: Cambridge University Press, 1991).

    8. See Art Fraas and Randall Lutter, The Challenges of Improving the Economic Analysis of Pending Regulations: The Experience of OMB Circular A-4, Annual Review of Resource Economics 3, no. 1 (2011): 71–85; Jamie Belcore and Jerry Ellig, Homeland Security and Regulatory Analysis: Are We Safe Yet? Rutgers Law Journal 40, no. 1 (Fall 2008): 1–96; Robert W. Hahn, Jason K. Burnett, Yee-Ho I. Chan, Elizabeth A. Mader, and Petrea R. Moyle, Assessing Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order 12,866, Harvard Journal of Law and Public Policy 23, no. 3 (2000): 859–71; Robert W. Hahn and Patrick M. Dudley, How Well Does the U.S. Government Do Cost–Benefit Analysis? Review of Environmental Economics and Policy 1, no. 2 (2007): 192–211; Robert W. Hahn and Robert E. Litan, Counting Regulatory Benefits and Costs: Lessons for the U.S. and Europe, Journal of International Economic Law 8, no. 2 (2005): 473–508; Robert W. Hahn, Randall W. Lutter, and W. Kip Viscusi, Do Federal Regulations Reduce Mortality? (Washington, DC: AEI-Brookings Joint Center for Regulatory Studies, 2000); Government Accountability Office, Regulatory Reform: Agencies Could Improve Development, Documentation, and Clarity of Regulatory Economic Analyses, Report GAO/RCED-98-142 (May 1998); Government Accountability Office, Air Pollution: Information Contained in EPA’s Regulatory Impact Analyses Can Be Made Clearer, Report GAO/RCED 97-38 (April 1997).

    9. Robert W. Hahn and Paul C. Tetlock, Has Economic Analysis Improved Regulatory Decisions? Journal of Economic Perspectives 22, no. 1 (Winter 2008): 67–84.

    10. Government Accountability Office, Federal Rulemaking: Agencies Included Key Elements of Cost-Benefit Analysis, but Explanations of Regulations’ Significance Could Be More Transparent, Report GAO-14-714 (September 2014).

    11. Government Accountability Office, Federal Rulemaking, 4.

    12. Government Accountability Office, Federal Rulemaking, 22.

    13. Government Accountability Office, Federal Rulemaking, 23.

    14. See Ellig et al., Continuity, Change, and Priorities; Jerry Ellig, Midnight Regulation: Decisions in the Dark? (Mercatus on Policy, Mercatus Center at George Mason University, Arlington, VA, August 2012).

    15. Hahn and Dudley, How Well Does the U.S. Government Do Cost–Benefit Analysis?

    16. Belcore and Ellig, Homeland Security and Regulatory Analysis; Jerry Ellig and Christopher J. Conover, Presidential Priorities, Congressional Control, and the Quality of Regulatory Impact Analysis: An Application to Health Care and Homeland Security, Public Choice 161, no. 3–4 (2014): 305–20; Christopher J. Conover and Jerry Ellig, Rushed Regulation Reform (Mercatus on Policy, Mercatus Center at George Mason University, Arlington, VA, January 2012).

    17. Ellig et al., Continuity, Change, and Priorities.

    18. Arthur Fraas and Randall L. Lutter, On the Economic Analysis of Regulations at Independent Regulatory Commissions, Administrative Law Review 63 (2011): 213–41; Jerry Ellig and Hester Peirce, SEC Regulatory Analysis: ‘A Long Way to Go and a Short Time to Get There,’ Brooklyn Journal of Corporate, Financial & Commercial Law 8, no. 2 (Spring 2014): 361–437.

    19. See Ellig and Peirce, SEC Regulatory Analysis.

    Originally published as testimony before the Senate Committee on Homeland Security and Governmental Affairs hearing on Toward a 21st-Century Regulatory System, Mercatus Center at George Mason University, February 25, 2015.

    Regulatory Process, Regulatory Reform, and the Quality of Regulatory Impact Analysis

    Jerry Ellig and Rosemarie Fike

    1. INTRODUCTION

    Regulatory impact analysis (RIA) has become a key element of the regulatory process in developed and developing nations alike. A thorough RIA identifies the potential market failure or other systemic problem a regulation is intended to solve, develops a variety of alternative solutions, and estimates the benefits and costs of those alternatives. Governments have outlined RIA requirements in official documents, such as Executive Order 12866 (Clinton 1993) and Office of Management and Budget (OMB) Circular A-4 (2003) in the United States and the Impact Assessment Guidelines in the European Union (European Commission 2009). More recently, President Obama’s Executive Order 13563 reaffirmed Executive Order 12866 and noted some additional values agencies could consider, such as fairness and human dignity (Obama 2011).

    Yet across the globe, evaluations of regulatory impact analysis have found that government agencies’ practice often falls far short of the principles outlined in scholarly research and governments’ own directives. Many RIAs in the United States lack basic information, such as monetized benefits and meaningful alternatives (Hahn et al. 2000; Hahn and Dudley 2007; Fraas and Lutter 2011a; Shapiro and Morrall 2012). Related analyses find that European Commission impact assessments have similar weaknesses (Renda 2006; Cecot et al. 2008; Hahn and Litan 2005). Case studies also find that RIAs have significant deficiencies (Harrington, Heinzerling, and Morgenstern 2009; Graham 2008; Morgenstern 1997; McGarity 1991; Fraas 1991). Some commentators have characterized individual RIAs as litigation support documents (Wagner 2009) or documents drafted to justify decisions already made for other reasons (Dudley 2011, 126; Keohane 2009). Interviews with agency economists indicate that this happens frequently (Williams 2008).

    In the United States, perceived deficiencies in the quality of regulatory analysis have led to calls for significant reforms of the regulatory process to motivate higher quality analysis (House Judiciary Committee 2013; President’s Jobs Council 2011; Harrington, Heinzerling, and Morgenstern 2009). One proposal would require that agencies publish an advance notice of proposed rulemaking (ANPRM) for all major regulations—typically regulations that have economic effects exceeding $100 million annually. Under current practice, an ANPRM may consist largely of a preliminary regulatory proposal or a request for information; reformers suggest that it should also include a preliminary RIA. Proponents believe that expanded use of ANPRMs to solicit comments on a preliminary RIA could improve the quality of regulatory analysis for three different reasons. First, public comment on a preliminary analysis provides the agency with more information; it allows the agency to benefit from critiques, feedback, and other public input (President’s Jobs Council 2011, 43). Second, requiring an agency to produce a preliminary analysis before it writes a proposed regulation helps counter the tendency of agencies to make regulatory decisions first and then task economists or other analysts with producing an analysis that simply supports the decision (Williams 2008; House Judiciary Committee 2013, 6–7). Third, public disclosure of a preliminary analysis alters incentives by crowdsourcing regulatory review, instead of leaving the review function solely to the Office of Information and Regulatory Affairs (OIRA) (Belzer 2009).

    Another proposal would require agencies to consult with affected private sector parties as early as possible, before proposing regulations that include significant mandates that affect the private sector. The Small Business Regulatory Enforcement Fairness Act already requires the Environmental Protection Agency and the Occupational Safety and Health Administration (OSHA) to create panels to advise them on how to craft certain types of regulations in a manner that minimizes impacts on small businesses. The Unfunded Mandates Reform Act requires agencies to consult with state, local, and tribal governments before writing regulations that affect them. Legislation that passed the House during the past several Congresses would extend this broader consultation requirement to cover the private sector as well (House Committee on Oversight and Government Reform 2015, 4). Proposed legislation would also require trial-like, formal rulemaking hearings for high impact regulations—generally, those that impose costs or other burdens exceeding $1 billion annually (House Judiciary Committee 2013).

    Some reforms would augment the resources and role of OIRA, the office within the OMB that reviews regulations and their accompanying RIAs for compliance with Executive Order 12866. Commentators have called for an expansion of OIRA’s staff (Shapiro and Morrall 2013; President’s Jobs Council 2011, 45) and for subjecting regulations from independent regulatory commissions to RIA requirements and OIRA review (Hahn and Sunstein 2002, 1531–37; House Judiciary Committee 2013, 24–26; President’s Jobs Council 2011, 45; Tozzi 2011, 68; Katzen 2011, 109; Fraas and Lutter 2011b). Shapiro and Morrall (2013) find that RIAs that underwent lengthier OIRA review contain a more thorough analysis. Based on this finding, they suggest that increasing OIRA’s staff to allow for more thorough review could improve the quality of analysis.

    In a sense, the reform proposals represent a continuation of a trend toward greater uniformity in administrative procedures that began with the Administrative Procedure Act (APA) of 1946. The APA instituted uniform procedures and established minimum standards for information gathering and disclosure across agencies (McCubbins, Noll, and Weingast 1987, 256). The RIA requirements in executive orders raised the standards by enunciating a series of substantive questions all executive branch regulatory agencies are supposed to address.¹ The proposed reforms would further standardize agency procedures for developing regulations and RIAs and apply these standards to independent agencies as well.

    Recent regulatory history provides a rich database of experience that can be used to test the prospective impact of proposed reforms. Many of the proposed reforms are similar to actions that agencies sometimes undertake voluntarily or are currently required by law for certain regulations. OIRA already subjects some regulations to a lengthier or more thorough review than others (McLaughlin and Ellig 2011). If more extensive effort by agencies and OIRA is correlated with higher quality analysis, then requiring such effort could improve the quality of regulatory analysis.

    This paper combines newly gathered data on the variation in regulatory processes with an extensive set of expert scores that evaluate the quality of regulatory impact analysis for proposed federal regulations to assess whether RIA quality varies systematically with the type of effort expended by agencies and OIRA. We find that many types of agency effort, such as a pre-proposal notice requesting comment from the public, consultation with state governments, and use of advisory committees, are associated with higher quality RIAs. The quality of regulatory analysis is positively correlated with the length of OIRA review time, and quality is lower when OIRA is headed by an acting administrator rather than a presidential appointee. For most of the explanatory variables, similar results occur using ordered logit, ordinary least squares (OLS), or a three-stage least squares estimator that includes instrumental variables. The results suggest that regulatory reforms designed to expand agency analytical activity and augment OIRA’s influence and resources could improve the quality of regulatory impact analysis.
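
    As a rough sketch of the estimation strategy described above, the following Python code (using synthetic data and illustrative regressor names, not the authors’ actual dataset or specification) shows an ordered logit and an OLS regression of RIA quality scores on process variables:

        # Illustrative sketch only: synthetic data and made-up regressor names
        # standing in for the process variables described in the text.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(0)
        n = 108  # roughly the number of evaluated proposed regulations

        df = pd.DataFrame({
            "anprm": rng.integers(0, 2, n),           # pre-proposal notice requesting comment?
            "consult_states": rng.integers(0, 2, n),  # consultation with state governments?
            "oira_days": rng.integers(10, 120, n),    # length of OIRA review in days
            "acting_admin": rng.integers(0, 2, n),    # OIRA headed by an acting administrator?
        })

        # Synthetic quality score on a 0-5 scale, generated only so the example runs.
        latent = (0.8 * df["anprm"] + 0.5 * df["consult_states"]
                  + 0.01 * df["oira_days"] - 0.5 * df["acting_admin"]
                  + rng.normal(0, 1, n))
        df["quality"] = pd.cut(latent, bins=6, labels=False)

        X = df[["anprm", "consult_states", "oira_days", "acting_admin"]]

        # Ordered logit: the quality score treated as an ordinal outcome
        # (no constant term; the model's estimated thresholds play that role).
        ologit = OrderedModel(df["quality"].to_numpy(), X.to_numpy(), distr="logit")
        print(ologit.fit(method="bfgs", disp=False).summary())

        # OLS on the same score, treated as continuous, for comparison.
        print(sm.OLS(df["quality"], sm.add_constant(X)).fit().summary())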

    2. THEORETICAL CONSIDERATIONS

    Elected leaders delegate significant decision-making authority to regulatory agencies. This makes accountability more difficult, due to the asymmetry of information between the agencies and the elected leaders. As a result, elected leaders may not get the amount or type of regulation they would have written if they were privy to the agency’s expert knowledge (McCubbins 1985; Abbott 1987, 180).

    From the perspective of elected policymakers, agencies may be over- or under-zealous about adopting new regulations. Issuing new regulations requires effort, which is costly (McCubbins, Noll, and Weingast 1987, 247). Hence, bureaucratic inertia may lead to fewer regulatory initiatives than elected leaders desire (Kagan 2001). Antiregulatory interests are also often well organized and well funded, and they may influence agencies to under-regulate (Bagley and Revesz 2006, 1282–304). A president can counter these incentives by appointing regulatory enthusiasts who will seek out information about new opportunities to regulate (Bubb and Warren 2014).

    The most obvious reason that regulators might choose an inefficiently high level of regulation is that some statutes instruct them to make decisions based on factors other than efficiency. In those cases, regulators would be following elected leaders’ wishes. However, several other incentives may prompt agencies to engage in more regulation, or more intensive regulation, than elected leaders may prefer. Most agency officials benefit from growth and expansion. Regulatory agency success is usually defined as success in creating regulations intended to achieve the agency’s specific mission, such as workplace safety (OSHA) or clean water (EPA), rather than thoroughly investigating the opportunity costs of alternative uses of social resources (DeMuth and Ginsburg 1986; Dudley 2011). In addition, the typical agency’s position as a monopoly supplier that exchanges a bundle of outputs for a budget can lead to levels of output and expenditures that exceed the levels elected leaders would desire if monitoring of the agency were costless (Niskanen 1994). Even if regulators are primarily concerned with the public interest, they may genuinely believe that the most effective way to advance the public interest is to advance their agency’s specific mission (Downs 1967, 102–52; Wilson 1989, 260–62).

    By adopting procedural requirements that compel agencies to publicize regulatory proposals in advance and disclose their likely consequences, Congress and the president mitigate information asymmetries and make it easier for affected constituencies to monitor and alert them about regulatory initiatives of concern (McCubbins, Noll, and Weingast 1987). As Horn and Shepsle (1989) note, this can increase the value of the legislative deal generating the regulation if constituents can monitor the effects of proposed regulations at lower cost than elected leaders can.

    Executive orders requiring agencies to conduct and publish RIAs and clear regulations through the OMB are examples of presidential initiatives that seek to reduce information asymmetries (Bubb and Warren 2014, 116).² Posner (2001) argues that elected leaders should find RIA requirements useful even when their goal is something other than economic efficiency, because the RIA is supposed to provide a structured and systematic way of identifying the regulation’s likely consequences. As if to confirm Posner’s hypothesis, seminal articles by DeMuth and Ginsburg (1986) and Kagan (2001) portray centralized regulatory review and RIAs as important tools for ensuring agency accountability under presidents Reagan and Clinton—the two US presidents who did the most to shape the current requirements and review process in the executive branch, despite their divergent attitudes toward regulation.³

    As a first approximation, we expect that regulatory reforms aimed at increasing agencies’ analytical activity and OIRA’s influence would lead to more thorough RIAs. After all, it is logical to expect that greater effort will produce more thorough analysis. Several complicating factors, however, could lead to different predictions under specific circumstances.

    2.1. Agency Effort

    Increased agency activity may not always improve the quality of the RIA. Agencies can also devote analytical effort to increasing information asymmetries by making the RIA more complex but less informative. Some RIAs spend an inordinate amount of time on less important benefit or cost calculations while missing more substantial issues, such as significant alternatives (Keohane 2009; Wagner 2009). Or an RIA may exhibit what Sinden (2015) terms false formality, providing an extensive presentation of quantified benefits and costs that is used to justify decisions, while ignoring important unquantified benefits or costs. If pre-proposal effort merely promotes complexity, it may not improve the quality of the analysis.

    Extra procedural steps could also reduce the quality or use of RIAs by giving interest groups greater influence over the regulatory process. Public meetings or other forums that gather stakeholders together may facilitate collusion among stakeholders at the expense of the general public, even if the purpose of the meeting is merely information gathering. To the extent that the agency is guided by agreement among stakeholders rather than the results of analysis, the RIA may be used less extensively. If analysts expect this to occur, they will likely put less effort into creating a high-quality analysis. Of course, greater responsiveness to stakeholders may be precisely the result elected leaders intend; nevertheless, it could lead to lower-quality analysis. Even if stakeholders wield no inappropriate influence, public meetings or other extensive discussions may lead agencies to document the analysis or its effects less extensively in the NPRM or RIA, since major stakeholders already heard this discussion in meetings where many topics relevant to regulatory analysis were aired.

    2.2. OIRA Influence and Resources

    Executive Order 12866 explicitly gives OIRA two distinct functions, which sometimes conflict (Arbuckle 2011; Dudley 2011). OIRA has a dual role of ensuring that the regulations embody the regulatory analysis principles enunciated in Executive Order 12866 and ensuring that they reflect the president’s policy views. If OIRA primarily enforces the principles of Executive Order 12866, then we would expect greater effort on OIRA’s part to improve the quality and use of RIAs. If OIRA primarily enforces the president’s policy views on agencies, then OIRA’s efforts may have ambiguous effects on the quality of RIAs, depending on how much the administration’s policy views diverge from the principles in the executive order.

    Prior research on OIRA’s effectiveness often finds that regulations do undergo change during the review process. A 2003 Government Accountability Office report found numerous instances in which OIRA review affected the content of an agency’s regulatory analysis or the agency’s explanation of how the analysis was related to the regulation (USGAO 2007). Haeder and Yackee (2015) find that the
