GAO-07-791, Reexamining Regulations: Opportunities Exist to Improve Effectiveness and Transparency of Retrospective Reviews
This is the accessible text file for GAO report number GAO-07-791
entitled 'Reexamining Regulations: Opportunities Exist to Improve
Effectiveness and Transparency of Retrospective Reviews' which was
released on August 15, 2007.
This text file was formatted by the U.S. Government Accountability
Office (GAO) to be accessible to users with visual impairments, as part
of a longer term project to improve GAO products' accessibility. Every
attempt has been made to maintain the structural and data integrity of
the original printed product. Accessibility features, such as text
descriptions of tables, consecutively numbered footnotes placed at the
end of the file, and the text of agency comment letters, are provided
but may not exactly duplicate the presentation or format of the printed
version. The portable document format (PDF) file is an exact electronic
replica of the printed version. We welcome your feedback. Please E-mail
your comments regarding the contents or accessibility features of this
document to Webmaster@gao.gov.
This is a work of the U.S. government and is not subject to copyright
protection in the United States. It may be reproduced and distributed
in its entirety without further permission from GAO. Because this work
may contain copyrighted images or other material, permission from the
copyright holder may be necessary if you wish to reproduce this
material separately.
Report to Congressional Requesters:
United States Government Accountability Office:
GAO:
July 2007:
Reexamining Regulations:
Opportunities Exist to Improve Effectiveness and Transparency of
Retrospective Reviews:
GAO-07-791:
GAO Highlights:
Highlights of GAO-07-791, a report to congressional requesters
Why GAO Did This Study:
Congress and presidents require agencies to review existing regulations
to determine whether they should be retained, amended, or rescinded,
among other things. GAO was asked to report the following for agency
reviews: (1) numbers and types completed from 2001 through 2006; (2)
processes and standards that guided planning, conducting, and
reporting; (3) outcomes; and (4) factors that helped or impeded agencies in
conducting and using them. GAO evaluated the activities of nine
agencies covering health, safety, environmental, financial, and
economic regulations and accounting for almost 60 percent of all final
regulations issued within the review period. GAO also reviewed
available documentation, assessed a sample of completed reviews, and
solicited perspectives on the conduct and usefulness of reviews from
agency officials and knowledgeable nonfederal parties.
What GAO Found:
From 2001 through 2006, the selected agencies completed over 1,300
reviews of existing regulations. The mix of reviews conducted, in terms
of impetus (mandatory or discretionary) and purpose, varied among
agencies. Mandatory requirements were sometimes the impetus for
reviews, but agencies more often exercised their own discretionary
authorities to review regulations. The main purpose of most reviews was
to examine the effectiveness of the implementation of regulations, but
agencies also conducted reviews to identify ways to reduce regulatory
burdens and to validate the original estimates of benefits and costs.
The processes and standards guiding reviews varied across agencies and
the impetus and phase of the review process. They varied by the extent
to which agencies applied a standards-based approach, incorporated
public participation, and provided complete and transparent
documentation. For example, while almost all agencies had standards for
conducting mandatory reviews, only about half of the agencies had such
standards for conducting discretionary reviews. The extent of public
involvement varied across review phases, with relatively more in the
selection process for discretionary reviews. Agencies more often
documented all phases of mandatory reviews compared to discretionary
reviews.
The outcomes of reviews included amendments to regulations, changes to
guidance and related documents, decisions to conduct additional
studies, and confirmation that existing rules achieved the intended
results. Mandated reviews, in particular, most often resulted in no
changes. Agencies noted that discretionary reviews generated additional
action more often than mandatory reviews. Agencies and nonfederal
parties generally considered all of the various review outcomes useful.
Multiple factors helped or impeded the conduct and usefulness of
retrospective reviews. Agencies identified time and resources as the
most critical barriers, but also cited factors such as data limitations
and overlapping or duplicative review requirements. Nonfederal parties
said that the lack of transparency was a barrier; they were rarely
aware of the agencies' reviews. Both agencies and nonfederal parties
identified limited public participation as a barrier. To help improve
the conduct and usefulness of reviews, agencies and nonfederal parties
suggested practices such as pre-planning to identify data needed to
conduct effective reviews, a prioritization process to address time and
resource barriers, high-level management support, grouping related
regulations together when conducting reviews, and making greater use of
diverse communication technologies and venues to promote public
participation.
What GAO Recommends:
GAO recommends that agencies incorporate various elements into their
policies and procedures to improve the effectiveness and transparency
of retrospective regulatory reviews and that they identify
opportunities for Congress to revise and consolidate existing
requirements. In commenting on a draft of this report, SBA's Office of
Advocacy agreed with the recommendations and provided updated guidance
in response. OMB reported having no comments on the draft. All
others provided technical comments.
[Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-791].
To view the full product, including the scope and methodology, click on
the link above. For more information, contact Mathew J. Scire at (202)
512-6806 or sciremj@gao.gov.
[End of section]
Contents:
Letter:
Results in Brief:
Background:
All Selected Agencies Reviewed Existing Regulations, but the Purposes
of the Reviews Varied:
Agencies Varied in the Extent to Which They Used Formal Processes and
Standards for Selecting, Conducting, and Reporting on Retrospective
Reviews:
Reviews Resulted in a Variety of Outcomes that Were Considered Useful,
but Mandatory Reviews Most Often Resulted in No Change:
Agencies and Nonfederal Parties Identified Barriers and Facilitators to
the Conduct and Usefulness of Retrospective Reviews:
Conclusions:
Recommendations for Executive Action:
Matters for Congressional Consideration:
Agency Comments:
Appendix I: Objectives, Scope, and Methodology:
Appendix II: Department of Agriculture Retrospective Reviews:
Appendix III: Department of Justice Retrospective Reviews:
Appendix IV: Department of Labor Retrospective Reviews:
Appendix V: Department of Transportation Retrospective Reviews:
Appendix VI: Consumer Product Safety Commission Retrospective Reviews:
Appendix VII: Environmental Protection Agency Retrospective Reviews:
Appendix VIII: Federal Communications Commission Retrospective Reviews:
Appendix IX: Federal Deposit Insurance Corporation Retrospective
Reviews:
Appendix X: Small Business Administration Retrospective Reviews:
Appendix XI: Department of Labor's Employee Benefit Security
Administration Example of a Documented Formal Review:
Appendix XII: Comments from the Small Business Administration Office of
Advocacy:
Appendix XIII: GAO Contact and Acknowledgments:
Tables:
Table 1: Impetus for Mandatory and Discretionary Reviews Conducted
Between 2001 and 2006:
Table 2: Retrospective Reviews for which Results Were Reported 2001-
2006, for Selected Agencies:
Table 3: Examples of Overlapping Timing and Review Factors in
Retrospective Review Requirements:
Table 4: Description of USDA Retrospective Reviews:
Table 5: Description of DOJ Retrospective Reviews:
Table 6: Description of DOL Retrospective Reviews:
Table 7: Description of DOT Retrospective Reviews:
Table 8: Description of CPSC Retrospective Reviews:
Table 9: Description of EPA Retrospective Reviews:
Table 10: Description of FCC Retrospective Reviews:
Table 11: Description of FDIC Retrospective Reviews:
Table 12: Description of SBA Retrospective Reviews:
Figures:
Figure 1: Agencies' Use of Standards-based Approaches in the Review
Process, for Discretionary and Mandatory Reviews:
Figure 2: Agencies' Incorporation of Public Involvement in the Review
Process, for Discretionary and Mandatory Reviews:
Figure 3: Agencies' Documentation of the Review Process, for
Discretionary and Mandatory Reviews:
Figure 4: Modifications to Rules Listed in the December 2006 Unified
Agenda by Type of Review for Selected Agencies:
Figure 5: USDA Review Process:
Figure 6: ATF Section 610 Review, Explosive Materials in the Fireworks
Industry:
Figure 7: EBSA Retrospective Review Process:
Figure 8: OSHA Retrospective Review Process:
Figure 9: Illustration of DOT's Review Program:
Figure 10: CPSC Retrospective Review Process:
Figure 11: EPA Retrospective Review Process:
Figure 12: FCC Retrospective Review Processes:
Figure 13: FDIC Retrospective Review Process:
Figure 14: SBA's General Retrospective Review Process:
Abbreviations:
AMS: Agricultural Marketing Service:
APHIS: Animal and Plant Health Inspection Service:
ATF: Bureau of Alcohol, Tobacco, Firearms, and Explosives:
CFR: Code of Federal Regulations:
CPSC: Consumer Product Safety Commission:
CRA: Congressional Review Act:
DEA: Drug Enforcement Administration:
DOJ: Department of Justice:
DOL: Department of Labor:
DOT: Department of Transportation:
EBSA: Employee Benefits Security Administration:
EGRPRA: Economic Growth and Regulatory Paperwork Reduction Act:
EPA: Environmental Protection Agency:
ETA: Employment and Training Administration:
FAA: Federal Aviation Administration:
FCC: Federal Communications Commission:
FDIC: Federal Deposit Insurance Corporation:
FFIEC: Federal Financial Institutions Examination Council:
FSIS: Food Safety and Inspection Service:
FTE: full-time equivalent:
GPRA: Government Performance and Results Act of 1993:
MSHA: Mine Safety and Health Administration:
NHTSA: National Highway Traffic Safety Administration:
OIRA: Office of Information and Regulatory Affairs:
OMB: Office of Management and Budget:
OSHA: Occupational Safety and Health Administration:
PART: Program Assessment Rating Tool:
PRA: Paperwork Reduction Act:
RFA: Regulatory Flexibility Act:
SBA: Small Business Administration:
SEISNOSE: significant economic impact upon a substantial number of
small entities:
USDA: Department of Agriculture:
United States Government Accountability Office:
Washington, DC 20548:
July 16, 2007:
The Honorable Joe Barton:
Ranking Member:
Committee on Energy and Commerce:
House of Representatives:
The Honorable Ed Whitfield:
Ranking Member:
Subcommittee on Oversight and Investigations:
Committee on Energy and Commerce:
House of Representatives:
Regulation is a basic tool of government. Each year, federal agencies
issue thousands of regulations to ensure public health and safety,
protect the environment, and facilitate the effective functioning of
financial markets, among other goals. The total costs of these
regulations are estimated to be in the hundreds of billions of dollars,
and the estimated benefits are even higher. Congress and presidents
have imposed many procedural and analytical requirements on the
regulatory process, in part, because of the substantial costs and
benefits of regulations. The requirements focus predominantly on
agencies' development of new rules, but some, such as Section 610
reviews required by the Regulatory Flexibility Act (RFA),[Footnote 1]
call for the evaluation of existing regulations. Such evaluations can
be done through a variety of activities that may be considered
retrospective regulatory reviews.[Footnote 2] For purposes of this
report, we generally use the term retrospective review to mean any
assessment of an existing regulation, primarily for purposes of
determining whether (1) the expected outcomes of the regulation have
been achieved; (2) the agency should retain, amend, or rescind the
regulation; and/or (3) the actual benefits and costs of the implemented
regulation correspond with estimates prepared at the time the
regulation was issued. We have reported that federal and nonfederal
stakeholders and experts identified such evaluations as highly
desirable and potentially useful but, at the same time, recognized that
they may be difficult to do.[Footnote 3]
To provide insights concerning how agencies evaluate existing
regulations, you requested that we examine agencies' implementation of
retrospective regulatory reviews and the results of such reviews. For
selected agencies, we are reporting on:
1) the magnitude of retrospective review activity and type of
retrospective reviews agencies completed from calendar year 2001
through 2006, including the frequency, impetus (mandatory or
discretionary), and purposes of the reviews;
2) the processes and standards that guide agencies' planning,
conducting, and reporting on reviews, and the strengths and limitations
of the various review processes and requirements;[Footnote 4]
3) the outcomes of reviews, including the perceived usefulness of the
reviews to agencies and the public and how they affected subsequent
regulatory activities; and:
4) the factors that appear to help or impede agencies in conducting or
using retrospective reviews, including which methods, if any,
agencies and we identified as most cost-effective for conducting
reviews.
We assessed the activities of nine agencies and their relevant
subagencies from 2001 through 2006, including the Departments of
Agriculture (USDA), Justice (DOJ), Labor (DOL), and
Transportation (DOT); Consumer Product Safety Commission (CPSC);
Environmental Protection Agency (EPA); Federal Communications
Commission (FCC); Federal Deposit Insurance Corporation (FDIC); and the
Small Business Administration (SBA).[Footnote 5] We selected these
agencies because they include Cabinet departments, independent
agencies, and independent regulatory agencies covering a wide variety
of regulatory activities in areas such as health, safety,
environmental, financial, and economic regulation.[Footnote 6] We were
not able to assess the activities of all regulatory agencies, due to
time and resource constraints, but, given the diversity and volume of
federal regulation conducted by the nine selected agencies, the results
of our assessment should provide a reasonable characterization of the
variety of retrospective regulatory reviews and the issues associated
with their implementation. Further, our Federal Rules Database showed
that the nine agencies accounted for almost 60 percent of all final
rules published from 2001 through 2006. The nine agencies accounted for
almost 36 percent of all major rules (for example, rules with at least
a $100 million impact on the economy) published during that time
period.[Footnote 7]
To address our four objectives, we reviewed documentation from the
selected agencies and interviewed agency officials. We administered and
collected responses to a structured data collection instrument that
solicited information on agencies' retrospective review activities and
lessons learned. We supplemented this data collection by obtaining
information from the Federal Register, Unified Agenda, and published
dockets and reports listed on agency Web sites, and by conducting a
more detailed assessment of a sample of studies completed between 2001
and 2006. We also solicited perspectives on the usefulness of agency
reviews and factors that help or impede the conduct of reviews from
regulatory oversight entities, such as the Office of Information and
Regulatory Affairs (OIRA) within the Office of Management and Budget
(OMB), the Office of Advocacy within SBA, and knowledgeable nonfederal
parties from a variety of sectors (academia, business, public advocacy,
and state government). We reviewed agency policies, executive orders,
and statutory requirements to identify what policies and procedures
agencies have in place to guide planning, conducting, and reporting on
reviews. Further, to identify the strengths and limitations of the
processes, we assessed agencies' use of three systematic review
practices that are important to the effectiveness and transparency of
agency reviews, including the (1) use of a standards-based approach,
(2) incorporation of public involvement, and (3) documentation of
review processes and results.[Footnote 8] In our more detailed
assessment of a limited sample of retrospective reviews completed
between 2001 and 2006, we also assessed the agencies' application of
research and economic standards and practices. The sample that we
assessed was too small to be generalizable to all agency retrospective
reviews, but this assessment illustrated some of the strengths and
limitations that exist in agency reviews. We conducted our work in
accordance with generally accepted government auditing standards from
May 2006 through April 2007. (See app. I for a more detailed
description of our objectives, scope, and methodology.)
Results in Brief:
Since 2001, all of the selected agencies conducted multiple
retrospective reviews of existing regulations. The nine agencies
reported completing over 1,300 reviews from 2001 through 2006. However,
it is not possible to compile a complete tally of all reviews that the
agencies completed, primarily because agencies reported that they did
not always document reviews that may have followed more informal review
processes. The mix of reviews conducted--in terms of impetus (mandatory
or discretionary) and purpose--varied among agencies. The impetus for
initiating reviews sometimes reflected various governmentwide or
agency/program-specific mandatory requirements. More often it reflected
how an agency exercises its own discretionary authorities, such as
responding to petitions or concerns raised by regulated entities or to
changes over time in an industry sector or technology. Agency officials
said that the primary purpose of most reviews was to examine the
effectiveness of the implementation of regulations, but they also
conducted reviews to identify opportunities to reduce regulatory
burdens and to validate the original estimates of benefits and costs
associated with regulations.
The processes and standards guiding retrospective regulatory reviews,
and the strengths and limitations of these processes, varied widely
across their three major phases, including the selection of regulations
to review, conduct of the review, and reporting of review results. We
identified three important practices that can have an impact on the
effectiveness and transparency of each phase of the review process and
assessed the extent to which they existed within agencies' processes
and standards, including: (1) use of a standards-based approach, (2)
incorporation of public involvement, and (3) documentation of review
processes and results. We found that, while some agencies have not
developed formal processes or standards that include these practices,
others have developed detailed, systematic processes. Further, whether
agencies used these practices often depended on whether they conducted
discretionary or mandatory reviews. Even for reviews that had
standards, agencies did not always apply them to their analysis. For
example, in a Section 610 review, an agency relied only on public
comments and did not assess the other four factors identified in the
mandatory review requirements.[Footnote 9] The extent to which agencies
involved the public varied across review phases, with relatively more
public involvement in the selection process for discretionary reviews.
Similarly, the level of documentation of agency reviews varied
depending on the impetus of the review and the phase of the review
process. For example, agencies appeared to document the selection,
conduct, and results of reviews that they initiated in response to
mandatory requirements more often than they did for reviews conducted
at their own discretion.
Agencies reported that their reviews of existing regulations resulted
in various outcomes, including changes to regulations, changes to
guidance and related documents, decisions to conduct additional
studies, and confirmation that existing rules achieved the intended
results. Agency officials and nonfederal parties whom we interviewed
reported that each of these outcomes could be useful to the agency and
the public. Agency officials reported that some types of reviews more
often resulted in subsequent action, such as prompting the agencies to
complete additional studies or to initiate rulemaking to amend the
existing rule. In particular, they reported that discretionary reviews
more often generated additional action than mandatory reviews. For the
mandatory reviews completed within our time frame, the most common
result was a decision by the agency that no changes were needed to the
regulation.[Footnote 10] Nonfederal parties whom we interviewed
generally considered reviews that addressed multiple purposes (such as
validation of prior estimates of benefits and costs, burden reduction,
and efficiency and effectiveness) more useful than reviews that focused
on a single purpose. One of the major findings in our review was the
difference in the perceived usefulness of mandatory versus
discretionary regulatory reviews. Generally, the agencies characterized
the results of discretionary reviews as more productive and more likely
to generate further action. A primary reason for this appears to be
that discretionary reviews may be better suited to addressing emerging
issues than mandatory reviews with a predetermined time frame. For
example, USDA's AMS reported completing 11 mandated Section 610 reviews
since 2001, which resulted in no regulatory changes. For 9 of these
reviews, the related published Section 610 reports stated that AMS made
no changes to the regulations because they were modified "numerous
times" in advance of the 10-year Section 610 review to respond to
changes in economic and other conditions affecting the industry.
Agencies and others reported multiple factors that impeded the conduct
and usefulness of retrospective reviews. Agencies reported that the
most critical barrier to their ability to conduct reviews was the
difficulty in devoting the time and staff resources required for
reviews while also carrying out other mission activities. Another
barrier was limits on their ability to obtain the information and
data needed to conduct reviews. Agencies and others also identified
factors that impeded the usefulness of retrospective reviews. Among
these impediments were overlapping and duplicative review schedules and
evaluation factors. While the predetermined review schedules prompt
agencies to periodically reexamine certain regulations, their timing
may either be too short to produce meaningful analysis or too long to
account for ongoing changes that agencies make in their discretionary
reviews. For example, both EPA and FCC officials reported that their
annual and/or biannual review requirements do not allow enough time to
most effectively complete reviews and/or observe new changes before
starting a subsequent review. Further, our examination of the timing
and evaluative factors in mandatory review requirements revealed that
there is considerable overlap, which agencies reported creates
challenges to efficiently allocating limited resources. Agencies and
nonfederal parties identified other factors that impeded the usefulness
of retrospective reviews, including difficulties in limiting reviews to
a manageable scope, and constraints in agencies' abilities to modify
some regulations without additional legislative action. Further, they
identified the lack of public participation as a barrier to the
usefulness of reviews, but differed in their explanations for the lack
of public participation. The nonfederal parties also identified the
lack of transparency in agency review and reporting processes as a
barrier to the usefulness of reviews to the public; they were rarely
aware of the retrospective review activities reported to us by the
agencies. This lack of awareness may be, in part, because of
limitations in agencies' documentation and reporting of discretionary
reviews.
Agency officials and nonfederal parties suggested a number of practices
to improve the conduct and usefulness of regulatory reviews, such as pre-
planning reviews to identify the data and analysis needed to conduct
effective reviews, developing a prioritization process to address time
and resource barriers, ensuring an appropriate level of independence in
the conduct of reviews to enhance credibility and effectiveness,
obtaining high-level management support to sustain commitment to a
review program and follow up on the review results, grouping related
regulations together when conducting reviews, making greater use of
informal networks to promote public participation, and targeting the
level of detail and type of product used to report results to meet the
needs of different audiences. Nonfederal parties also suggested that it
would be more useful if agencies selected a few (two to five) high-
priority regulations to substantively review each year rather than
conducting a cursory review of many regulations. We found that only a
few agencies tracked the costs associated with conducting their
reviews, so we were unable to identify the most cost-effective
approaches. However, agency officials told us that reviews have
resulted in cost savings to their agencies and to regulated parties.
For example, DOL's MSHA reported that it has a policy to review and
modify regulations for which regulated parties repeatedly petition for
waivers. As a result, MSHA officials believe that they save the time
and resources that would otherwise be spent on repetitively reviewing
and responding to similar petitions.
We are making recommendations to executive agencies to improve the
effectiveness and transparency of reviews by incorporating, where
appropriate, various elements into the policies and procedures that
govern their retrospective review activities, such as establishing
plans for measuring the performance of regulations, a prioritization
mechanism for review activities based upon defined evaluation criteria,
and minimum standards for documenting and reporting review results,
among other things. We also recommend that the Administrator of OIRA
and the Chief Counsel for Advocacy work with regulatory agencies to
identify opportunities for Congress to revise the timing and scope of
existing requirements and/or consolidate existing requirements. In
addition, we suggest that Congress authorize a pilot program with
selected agencies to test the effectiveness of allowing agencies to
satisfy various retrospective review requirements with similar
objectives and evaluation factors that can be consolidated into one
review that is reported to all relevant parties and
oversight bodies. In formal comments on the draft report, the SBA
Office of Advocacy concurred with the recommendations and, as an
attachment, provided a copy of draft guidance that they developed in
response to our recommendations. OMB told us that they reviewed our
draft report and had no comments. All other agencies provided technical
and editorial comments, which we incorporated as appropriate. In its
technical comments, one agency suggested that one recommendation to
identify opportunities to consolidate existing retrospective reviews
for statutory requirements could be broadened to include executive
requirements. However, we limit our recommendation to statutory
retrospective review requirements because they tend to be recurring
and/or on a predetermined review schedule.
Background:
The performance and accountability of government agencies and programs
have attracted substantial attention by Congress, the executive branch,
and others, including GAO. For example, the Government Performance and
Results Act of 1993 (GPRA) established a statutory framework designed
to provide congressional and executive decision makers with objective
information on the relative effectiveness and efficiency of federal
programs and spending.[Footnote 11] A central element of the current
administration's President's Management Agenda is the Program
Assessment Rating Tool (PART), designed by OMB to provide a consistent
approach to assessing federal programs in the executive budget
formulation process.[Footnote 12] Over the past 2 years, we have
emphasized that the long-term fiscal imbalance facing the United States
and other significant trends and challenges establish the need to
reexamine the base of the federal government and its existing programs,
policies, functions, and activities.[Footnote 13] We noted that a top-
to-bottom review of federal programs and policies is needed to
determine if they are meeting their objectives.[Footnote 14] To support
this reexamination, the policy process must have the capacity to
provide policymakers not only with information to analyze the
performance and results achieved by specific agencies and programs,
but also with information on the broad portfolios of programs and
tools (including regulation) that contribute to specific policy goals.
While initiatives such as GPRA and PART can evaluate regulatory
performance at the agency or program level, Congress and presidents
also have instituted requirements that focus on a more basic element,
the agencies' existing regulations.[Footnote 15] For example, through
Section 610, Congress requires agencies to review all regulations that
have or will have a "significant economic impact upon a substantial
number of small entities" (generally referred to as SEISNOSE) within 10
years of their adoption as final rules. The purpose of these reviews is
to determine whether such rules should be continued without change, or
should be amended or rescinded, consistent with the stated objectives
of applicable statutes, to minimize impacts on small entities. As
discussed later in this report, Congress also established other
requirements for agencies to review the effects of regulations issued
under specific statutes, such as the Clean Air Act.[Footnote 16] Every
president since President Carter has directed agencies to evaluate or
reconsider existing regulations. For example, President Carter's
Executive Order 12044 required agencies to periodically review existing
rules; one charge of President Reagan's task force on regulatory relief
was to recommend changes to existing regulations; President George H.W.
Bush instructed agencies to identify existing regulations to eliminate
unnecessary regulatory burden; and President Clinton, under section 5
of Executive Order 12866, required agencies to develop a program to
"periodically review" existing significant regulations.[Footnote 17] In
2001, 2002, and 2004, the administration of President George W. Bush
asked the public to suggest reforms of existing regulations.
The Office of Advocacy within SBA and OIRA within OMB have issued
guidance to federal agencies on the implementation of, respectively,
the RFA and Executive Order 12866.[Footnote 18] The available guidance
documents, including OMB Circular A-4 on regulatory analysis, focus
primarily on the procedural and analytical steps required for reviewing
draft regulations, but they are also generally applicable whenever
agencies analyze the benefits and costs of regulations, including those
of existing regulations. However, the documents provide limited
guidance focused specifically on retrospective reviews. In a short
discussion on periodic reviews of existing rules pursuant to Section
610, the Office of Advocacy's RFA guidance briefly touches on planning,
conducting, and reporting the results of reviews, but the OMB/OIRA
guidance does not specifically address the executive order's periodic
review requirement.[Footnote 19]
In our prior work on this subject, we found that agencies infrequently
performed certain types of reviews and identified potential challenges
and benefits of conducting retrospective reviews.[Footnote 20] In 1999,
we reported that assessments of the costs and benefits of EPA's
regulations after they had been issued had rarely been done.[Footnote
21] In a series of reports on agencies' compliance with Section 610
requirements, we again noted that reviews were not being
conducted.[Footnote 22] We identified a number of challenges to
conducting retrospective reviews. In general, these included the
difficulties that regulatory agencies face in demonstrating the results
of their work, such as identifying and collecting the data needed to
demonstrate results, the diverse and complex factors that affect
agencies' results (for example, the need to achieve results through the
actions of third parties), and the long time period required to see
results in some areas of federal regulation.[Footnote 23] We also
identified concerns about the balance of regulatory analyses, because
it may be more difficult to estimate the benefits of regulations than
it is to estimate the costs. Our report on EPA's retrospective studies
noted that such studies were viewed as hard to do because of the
difficulty in obtaining valid cost data from regulated entities and
quantifying actual benefits, among other reasons. Our work on agencies'
implementation of Section 610 requirements revealed that there was
confusion among the agencies regarding the meaning of key terms such as
SEISNOSE, what RFA considers to be a "rule" that must be reviewed, and
whether amending a rule within the 10-year period provided in Section
610 would "restart the clock," among other things.
However, our prior work also pointed out that retrospective evaluation
could help inform Congress and other policymakers about ways to improve
the design of regulations and regulatory programs, as well as play a
part in the overall reexamination of the base of the federal
government.[Footnote 24] For example, we reported that retrospective
studies provided insights on a market-based regulatory approach to
reduce emissions that cause acid rain and that the studies found that
the actual costs of reducing emissions were lower than initially
estimated. Experts and stakeholders whom we consulted during work on
federal mandates (including regulations) and economic and regulatory
analyses told us that they believe more retrospective analysis is
needed and, further, that there are ways to improve the quality and
credibility of the analyses that are done.[Footnote 25] One particular
reason cited for the usefulness of retrospective reviews was that
regulations can change behavior of regulated entities, and the public
in general, in ways that cannot be predicted prior to implementation.
All Selected Agencies Reviewed Existing Regulations, but the Purposes
of the Reviews Varied:
Since 2001, the nine selected agencies conducted multiple retrospective
reviews of their existing regulations to respond to mandatory and
discretionary authorities. Between 2001 and 2006, the nine agencies
reported completing over 1,300 mandatory or discretionary reviews, but
many could not tally the total number of reviews that they had
conducted. The reviews addressed multiple purposes, such as examining
the efficiency and effectiveness of regulations and identifying
opportunities to reduce regulatory burden. The mix of reviews
conducted--in terms of the impetus to start a review and the purpose of
the review--varied not only across but also within the agencies that we
examined.
Agencies Conducted Most Reviews at Their Own Discretion Rather than in
Response to Mandatory Requirements, but the Total Number Is Unknown:
Agencies conducted reviews in response to various mandatory
requirements, but most agencies indicated that they conducted the
majority of reviews at their own discretion. Agencies reported
conducting discretionary reviews in response to their own formal
policies and procedures, as well as to accidents or similar events,
changes in technology and market conditions, advances in science,
informal agency feedback, and petitions, among other things. Among the
main mandatory sources of
reviews were governmentwide statutory requirements (such as Section 610
of the RFA or the Paperwork Reduction Act (PRA)), agency-or program-
specific statutory requirements (such as Section 402 of the
Telecommunications Act of 1996 that requires FCC to review the
regulations promulgated under the act every 2 years),[Footnote 26]
Executive Order 12866 on Regulatory Planning and Review (which requires
agencies to periodically conduct reviews of their significant
regulations),[Footnote 27] and other executive branch directives (such
as the memorandum on plain language). In addition, agencies conducted
reviews in response to OMB initiatives to solicit nominations for
regulatory reexamination, which were not statutorily mandated reviews
or required by a specific executive order, but were a part of executive
branch regulatory reform efforts. The frequency of agency reviews
varied based on review requirements. In some cases, agencies were
required to conduct reviews every 2 years to 10 years. Available
information on completed reviews indicated that the numbers of reviews
completed by individual agencies in any given year varied from a few to
more than a hundred.
Agency officials reported that they conducted discretionary reviews
more often than mandated studies. These discretionary reviews were often
prompted by drivers such as informal suggestions or formal petitions
from regulated entities seeking regulatory changes, suggestions from
the agency personnel who implement and enforce the regulations,
departmentwide initiatives to review regulations, or changes in
particular technologies, industries, or markets. Although these
discretionary reviews were often undocumented, our review of publicly
available results for retrospective reviews confirmed that, in some
instances, agencies like those within DOL, DOT, and DOJ cited
discretionary drivers as the motivation for their reviews.
Among the various reasons for agency-initiated reviews, petitions were
a major review driver for almost all of the agencies. Agency officials
cited three types of petition activities that largely influenced them
to conduct regulatory reviews: (1) petitions for rulemaking, in which
regulated entities or the public requested that a rule be modified to
reflect new or updated information; (2) petitions for reconsideration,
in which regulated entities or the public requested that the agency
revisit an aspect of a rule; and (3) petitions for waivers, in which
regulated entities or the public requested waivers from regulatory
requirements. Some agencies, such as DOT and
MSHA, have policies to begin a more formal review of the entire
regulation when there are a substantial number of petitions.
Agencies also reported conducting reviews in response to various
mandatory requirements. However, whether they conducted them more often
in response to governmentwide requirements or requirements specific to
the policy area or sector that they regulate varied among and within
agencies. For example, DOT, USDA's APHIS, and SBA conducted many
mandatory reviews in response to Section 610 requirements, while
others, such as EPA and FCC, conducted most mandatory reviews in
response to statutes that applied specifically to areas that they
regulate. Specifically, all of the mandatory reviews completed by APHIS
and SBA since 2001 were conducted in response to Section 610
requirements. Similarly, DOT initiated a systematic 10-year plan for
reviewing all of its sections of the CFR in order to satisfy the
provisions of Section 610, along with other mandatory review
requirements.[Footnote 28] DOT reported completing over 400 reviews
between 2001 and 2006 under this 10-year plan. However, even within
DOT, variation existed with regard to which review requirements more
often prompted reviews. For example, unlike some other DOT agencies,
FAA conducted the majority of its mandatory reviews in response to
industry-specific review requirements rather than governmentwide
retrospective review mandates.
While EPA, FCC, and FDIC also conducted some reviews in response to
governmentwide requirements, such as Section 610, they most often
conducted mandatory reviews to comply with statutes that apply
specifically to areas that they regulate. EPA officials provided a list
of seven statutes that require the agency to conduct mandatory
retrospective reviews, including requirements in the Safe Drinking
Water Act, Clean Air Act, Clean Water Act, and Federal Food, Drug, and
Cosmetic Act, among others. Similarly, FCC conducts many of its
mandatory retrospective reviews to comply with the biennial and
quadrennial regulatory review requirements under the Communications
Act, as amended. One agency in our review, FDIC, conducted most of its
mandatory reviews in response to the financial-sector-specific Economic
Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA), which
requires federal financial regulatory agencies to identify outdated,
unnecessary, or unduly burdensome statutory or regulatory requirements
every 10 years.[Footnote 29] Finally, agencies also conducted single
comprehensive reviews of a regulation as a method to satisfy multiple
review requirements. For example, DOL's EBSA and OSHA, DOT, and FDIC
conducted reviews that incorporated Section 610 and other review
requirements into broader reviews initiated either as part of their
regular review programs or in response to industry feedback and
petitions, among other things. (Table 1 illustrates the
range of mandatory and discretionary reviews conducted by selected
agencies included in our scope.)
Table 1: Impetus for Mandatory and Discretionary Reviews Conducted
Between 2001 and 2006:
        Mandatory reviews             Discretionary reviews
Agency  Govern-   Agency-   Sector-   Agency    Petitions Sector    Internal
        mentwide  specific  specific  policy    /feedback changes   drivers
USDA    X         X         --        X         X         X         X
DOJ     X         --        --        --        X         X         X
DOL     X         --        --        X         X         X         X
DOT     X         X         --        X         X         X         X
CPSC    --        --        --        X         X         X         X
EPA     X         X         --        --        X         X         X
FCC     X         X         --        --        X         X         X
FDIC    X         --        X         X         X         X         X
SBA     X         --        --        X         X         X         X
Legend: X = reported impetus; -- = not reported. "Petitions/feedback"
= petitions or other forms of industry and consumer feedback; "Sector
changes" = sector changes such as new science, technology, or market
trends; "Internal drivers" = internal drivers such as improving
enforcement or compliance.
Source: GAO analysis of agency data and regulatory statutes.
Note: We were not able to verify the conduct of some discretionary
reviews reported by agencies because they were not documented.
[End of table]
The frequency with which agencies were required to conduct certain
reviews varied depending on the statutory requirement. For example,
under some statutory requirements, agencies must review certain
regulations every 2 or 3 years. Other requirements cover a longer
period, such as Section 610's requirement to revisit certain rules 10
years after their issuance, or specify no particular time. In addition,
agency policies on the frequency with which to conduct discretionary
reviews varied. For example, USDA has a departmentwide requirement
generally to review economically significant and major regulations
every 5 years, while FAA conducts its reviews on a 3-year cycle. DOT,
for its part, had a departmentwide requirement to review all of its
CFR regulations within 10 years of the creation of its Unified Agenda
Regulatory Plan, but gave its component agencies discretion on when to
conduct reviews during that period.
Despite a perception expressed by some federal and nonfederal parties
that agencies are not actively engaged in reviewing their existing
regulations, we found that agencies reported results for over 1,300
reviews completed from 2001 through 2006. However, even this number
may understate the total because it does not account for all of the
undocumented discretionary reviews conducted by agencies. In addition,
the available information reported by the agencies (and others, such as
OMB) may include some duplication or fail to capture additional follow-
up reviews by the agencies. It is also important to note that the units
of analysis that agencies used in their reviews, and the scope of
individual reviews, varied widely. Therefore, the level of agency
review activity cannot be compared by only assessing the number of
reviews completed. For example, a review might be as narrowly focused
as the review that DOT's NHTSA completed in 2005 on 49 CFR part 571.111
(regarding rearview mirrors) or as broad as a FDIC review that included
analyses of 131 regulations within a single review. DOT also pointed
out that because rules vary widely in complexity, even a narrowly
focused review can be a major undertaking. For example, NHTSA may
review a one-sentence rule that says, "all motor vehicles sold in the
U.S. must be equipped with airbags." In another review, they may look
at 131 regs that impose very minor requirements. It may well be a far
more time and resource intensive effort for NHTSA to review the effect
of the one-sentence airbag requirement. Further, because some agencies
produce many more regulations than others do, the number of reviews
that agencies reported conducting also should be considered within the
context of their volume of regulations and rulemaking
activity.[Footnote 30]
Table 2 lists the number of reviews that agencies reported completing
between 2001 and 2006.
Table 2: Retrospective Reviews for which Results Were Reported 2001-
2006, for Selected Agencies:
Agency: USDA;
Numbers and types of completed reviews that were reported: At least 531
reviews:
* APHIS completed 9 Section 610 reviews and numerous other reviews
resulting in 139 rule modifications;
* AMS completed 11 Section 610 and 300 discretionary reviews;
* FSIS completed 1 Section 610 review and 36 discretionary reviews in
response to petitions;
* USDA completed 17 reviews in response to OMB regulatory reform
nominations from 2001 to 2004.
Agency: DOJ;
Numbers and types of completed reviews that were reported: At least 20
reviews:
* Completed 1 Section 610 review;
* Completed 10 discretionary reviews and 2 other reviews for agency-
specific requirements;
* Completed 7 reviews in response to OMB regulatory reform nominations
from 2001 to 2004.
Agency: DOL;
Numbers and types of completed reviews that were reported: At least 60
reviews:
* EBSA completed 7 reviews, including 4 Section 610 reviews;
* OSHA completed 4 Section 610 reviews;
* MSHA completed 4 discretionary reviews;
* ETA completed 3 discretionary reviews;
* All DOL agencies conducted reviews under departmentwide initiatives
in 2005 and 2006, including those listed for the agencies above;
* Completed 42 reviews in response to OMB regulatory reform nominations
from 2001 to 2004.
Agency: DOT;
Numbers and types of completed reviews that were reported: At least 488
reviews:
* Completed 406 reviews per the departmentwide 10-year regulatory
review plan, which addresses mandatory requirements such as Section
610;
* Completed a stand-alone regulatory review initiative in 2005 (that
generated 120 public comments to which DOT responded);
* FAA conducted reviews of 3 regulations per requirements in the
Federal Aviation Authorization Act of 1996;
* NHTSA conducted 17 other reviews of existing regulations;
* Completed 61 reviews in response to OMB regulatory reform nominations
from 2001 to 2004.
Agency: CPSC;
Numbers and types of completed reviews that were reported: At least 4
reviews:
* Conducted as part of the agency's regulatory review program.
Agency: EPA;
Numbers and types of completed reviews that were reported: At least
156:
* Completed 14 Section 610 reviews;
* Completed 20 reviews in response to agency specific mandates;
* Completed 6 discretionary reviews;
* Completed 116 reviews in response to OMB regulatory reform
nominations from 2001 to 2004.
Agency: FCC;
Numbers and types of completed reviews that were reported: At least 47
reviews:
* Completed 1 Section 610 review (2006);
* Completed 15 biennial reviews of all telecommunications rules (2002,
2004, 2006);
* Completed 2 quadrennial reviews of all broadcast ownership rules
(2002, 2006);
* Completed 1 review of rules regarding the pricing of unbundled
network elements (2003);
* Completed 2 periodic reviews of digital television conversion
process;
* Completed 26 reviews in response to OMB regulatory reform nominations
from 2001 to 2004.
Agency: FDIC;
Numbers and types of completed reviews that were reported: At least 4
reviews:
* Completed 2 Section 610 reviews;
* Completed 1 EGRPRA review, in conjunction with the members of the
Federal Financial Institutions Examination Council (FFIEC);
* Completed 1 discretionary review.
Agency: SBA;
Numbers and types of completed reviews that were reported: At least 32
reviews;
* Completed 4 Section 610 reviews;
* Completed 27 reviews in response to petitions;
* Completed 1 review in response to OMB regulatory reform nominations
from 2001 to 2004.
Agency: Total;
Numbers and types of completed reviews that were reported: At least
1,300 reviews conducted in response to various requirements and
drivers.
Source: GAO analysis of agency and other publicly available data
sources.
[A] The numbers in this table represent individual reviews conducted
and not the number of regulations reviewed because, in some cases, one
review covered multiple regulations.
[B] The information presented in this table reflects only reviews that
the agencies reported as having been completed. The agencies also
reported initiating other reviews during this time period that are not
yet complete.
[End of table]
Agency Officials Reported that They Most Often Conducted Reviews to
Assess the Effectiveness of Their Regulations:
According to agency officials, they conducted reviews for various
purposes but most often focused on assessing the effectiveness of
regulations. Agency officials reported conducting reviews to evaluate
or identify (1) the results produced by existing regulations, including
assessments to validate the original estimates of projected benefits
and costs associated with the regulation; (2) ways to improve the
efficiency or effectiveness of regulatory compliance and enforcement;
and (3) options for reducing regulatory burdens on regulated
entities.[Footnote 31] Overall, agency officials reported that their
reviews more often focused on improving effectiveness, with burden
reduction as a secondary consideration, even for review requirements
that are geared toward identifying opportunities for burden reduction,
such as those within Section 610. The approaches that agencies reported
taking to assess "effectiveness" varied, including measuring whether
the regulations were producing positive outcomes, facilitating
industry compliance with relevant statutes, and/or assisting the
agency in accomplishing its goals. For example, DOL officials reported that the
agency attempts to assess "effectiveness" by measuring improvements
resulting from the regulation and by analyzing factors required by
Section 610 of RFA and Section 5 of Executive Order 12866, such as
whether there is new technology, excessive complexity, conflict with
other regulations, or whether cost-effectiveness can be improved.
However, the agency does not conduct what would be considered a
traditional benefit-cost economic analysis. Other agencies, such as
EPA, reported assessing "effectiveness" by determining whether the
regulation is achieving its intended goal as identified in related
statutes. For example, officials from EPA reported that their
retrospective review of the Clean Water Act helped them estimate the
extent to which toxic pollutants remained, thereby aiding them to
assess the effectiveness of each existing regulation in various
sections of the act. Since the goal of the Clean Water Act is zero
discharge of pollutants, the review was an indicator of the progress
the agency had made toward the statutory goals. Our limited review of
agency summaries and reports on completed retrospective reviews
revealed that agencies' reviews more often attempted to assess the
effectiveness of their implementation of the regulation rather than the
effectiveness of the regulation in achieving its goal.
Agencies Varied in the Extent to Which They Used Formal Processes and
Standards for Selecting, Conducting, and Reporting on Retrospective
Reviews:
The use of systematic evaluation practices and the development of
formal retrospective regulatory review processes varied among and even
within the agencies.[Footnote 32] To assess the strengths and
limitations of various agency review processes, we examined the three
phases of the process: the selection of regulations to review, conduct
of reviews, and reporting of review results. We identified three
practices that are important for credibility in all phases of the
review, including the extent to which agencies (1) employed a standards-
based approach, (2) involved the public, and (3) documented the process
and results of each phase of the review. Whether agencies used these
practices often varied according to whether they conducted
discretionary or mandatory reviews.[Footnote 33] Fewer agencies used
standards-based approaches or
documented the selection, conduct, or reporting of reviews when they
were discretionary. While more agencies incorporated public involvement
in the selection of regulations for review in discretionary reviews,
fewer included public involvement in the conduct of those reviews.
Generally, agencies incorporated the three practices we identified
into their discretionary reviews less consistently than into their
mandatory reviews. However, it is important to note
that some agencies have recently developed their review programs and
others are attempting to find ways to further develop and improve their
retrospective review processes (e.g., establishing units that focus on
retrospective reviews or seeking assistance with establishing
prioritization systems). For example, CPSC and EBSA recently
established their review programs, and ETA recently established a
centralized unit to develop retrospective review processes for the
agency. Furthermore, although the process has been delayed because of
other regulatory priorities, MSHA recently sought contractor assistance
with developing a more standard selection process for its reviews.
(Additional details on each agency's review process can be seen in
apps. II through XI).
Agencies Employed Standards-Based Approaches More Often in Mandatory
Reviews:
All of the agencies in our review reported that they have practices in
place to help them select which of their existing regulations to
review. Some of these agencies have established, or are establishing, standard policies, procedures, and guidance for this selection.
While almost all of the agencies reported having standards to select
regulations for their mandated reviews because the mandates identified
which regulations agencies must review or prescribed standards for
selecting regulations to review, fewer agencies had developed formal standards for selecting regulations to review at their own discretion. Agencies that had established such processes reported that
they were useful in determining how to prioritize agency review
activities. For example, DOL's EBSA and CPSC established detailed
standards for the selection of their discretionary reviews, which they
used to prioritize their retrospective review activities and the
corresponding use of agency resources. The officials reported that
their standards-based selection processes allowed them to identify
which regulations were most in need of review and to plan for
conducting those reviews. Furthermore, officials from both agencies
reported that their prioritization processes allowed them to focus on
more useful retrospective review activities, which resulted in
identifying important regulatory changes. We observed that this standards-based approach to selecting regulations also increased the transparency of this phase of the review process, enabling us to better determine how these agencies selected which regulations to review.
Further, as identified by CPSC officials and others, applying a
standards-based approach to selecting regulations to review can provide
a systematic method for agencies to assess which regulations they
should devote their resources toward, as they balance retrospective
review activities with other mission-critical priorities. Without a standards-based approach, agencies risk diverting attention from the regulations that criteria would identify as needing the most consideration and instead focusing their reviews on regulations that warrant less attention. In addition--for each phase of the review process--using a
standards-based approach can allow agencies to justify the
appropriateness of the criteria that they use (either because they are
objective, or at least mutually agreed upon), and thus gain credibility
for their review. Selecting a different criterion or set of standards
for each review could imply a subjective evaluation process and
possibly an arbitrary treatment of one regulation versus another.
Similarly, agencies varied in their use of a standards-based approach
when analyzing regulations in reviews. While most of the mandatory
review requirements identified by the selected agencies establish
standard review factors that agencies should consider when conducting
their reviews--which agencies reported following--only about half of the agencies within our review have formal policies and procedures that
establish a standards-based approach for conducting discretionary
reviews. Specifically, the five agencies that had formal procedures
that established standards defined the steps needed to conduct reviews
and review factors used to assess regulations. Other agencies had not
yet established written guidance or policies to guide their conduct of
reviews or define the analytical methods and standards they should use
to assess regulations. Where agencies did specify objectives and review factors for conducting reviews, those factors ranged from general (such as identifying whether regulations were still needed or could be simplified) to specific (such as accounting for new
developments in practices, processes, and control technologies when
assessing emission standards). We observed that the more specific sets
of evaluative factors that agencies considered in different types of
reviews shared several common elements, such as prompting agencies to
consider comments and complaints received by the public and the
regulation's impact on small entities.
Our assessment of a small sample of agency reviews revealed that, even
when relevant standards were available to agencies for conducting
reviews, they did not always apply them. In one case, the economic
analyses conducted in the agency review did not employ some of the
relevant best practices identified for these types of analyses in OMB's
guidance. In another case, in conducting a mandated Section 610 review,
the agency relied only on public comments to evaluate the regulation
and did not provide any additional assessment on the other four factors
identified in the mandatory review requirements.[Footnote 34] According to agency documentation, because the agency received no public comments, it (1) provided no further assessment of the factors and (2) concluded that the regulation did not need changes. Conversely, our review
of another agency's Section 610 assessment of regulations provided an
example of how agencies that apply standards can use the reviews to
produce substantive results. Because the agency relied both on public
comments and its own assessment of the five Section 610 review factors/
standards, the agency identified additional changes that would not have
been identified if it had relied on only one of these standards. When
we assessed whether agencies applied other generally accepted review standards--such as identifying how well they implemented the regulation, whether there was a pressing need for the regulation, or whether intended or unintended effects resulted from the regulation--we observed that some agencies' analyses did not address them.
Most of the agencies within our review had established policies on
reporting the results of their mandatory retrospective reviews to
senior management officials and to the public, although in some cases,
there was no requirement to do so. For example, Section 610 requires
federal agencies to report on the initiation of a review, but does not
require agencies to report the findings or policy decisions resulting
from the review. Nevertheless, as a matter of practice, all of the
agencies within our review reported information on the results of their
Section 610 reviews, though the content and level of detail varied.
Conversely, about half of the agencies in our review had not
established written policies or procedures for reporting the results of
their discretionary retrospective reviews to the public. As a result,
we found inconsistencies in practices within agencies on how and
whether they reported the results of these reviews to the public,
resulting in less transparency in this process. For example, agencies
within DOT, USDA, and some within DOL indicated that, at times, they
report review results of discretionary reviews in the preambles of
proposed rule changes or through other mechanisms, but they did not
consistently do so. Agencies also reported that they often do not
report the results of discretionary reviews at all, if they did not
result in a regulatory change. This lack of transparency may explain assertions from nonfederal parties that they were unaware that agencies conducted discretionary reviews of their existing regulations. Figure 1 illustrates the differences in agencies' use of a
standards-based approach in the three phases of the review process, for
discretionary and mandatory reviews.
Figure 1: Agencies' Use of Standards-based Approaches in the Review
Process, for Discretionary and Mandatory Reviews:
[See PDF for image]
Source: GAO analysis of agency data.
[A] One of the agencies within our review did not have any regulations
that were subject to mandatory review requirements.
[End of figure]
Level of Public Participation Varied by Impetus and Phase of the Review
Process:
Agency practices varied for soliciting and incorporating public input
during the selection of regulations to review. For example, in 2005 DOT
formally requested nominations from the public on which regulations it
should review. Further, in the 2006 semiannual Unified Agenda, the agency again sought public suggestions on regulations to review. However, agencies in our review more often reported that they
solicit public input on which regulations to review during informal
meetings with their regulated entities for their discretionary reviews.
Techniques used by some of the agencies to obtain public input were
informal networks of regulated entities, agency-sponsored listening
sessions, and participation in relevant conferences. For example, USDA
officials reported that they regularly meet with industry committees
and boards and hold industry listening sessions and public meetings to
obtain feedback on their regulations. DOJ's DEA reported holding yearly
conferences with industry representatives to obtain their feedback on
regulations. While almost all of the agencies in our review reported
soliciting public input in a variety of ways for their discretionary
reviews, SBA relied primarily on the Federal Register's Unified Agenda to inform the public about its reviews. For mandatory reviews,
agencies appeared to do less outreach to obtain public input on the
selection of regulations to review. However, it is important to note
that such public input into the selection phase of mandated reviews may
not be as appropriate because agencies have less discretion in choosing
which regulations to review under specific mandates.
Almost all agencies within our review reported soliciting public input
into the conduct of their mandatory reviews, either through the notices
in the Federal Register, or in more informal settings, such as
roundtable discussions with industries, or both. For these reviews,
agencies appeared to almost always place notices in the Federal
Register, including the semiannual Unified Agenda, soliciting public
comments, but nonfederal parties cited these tools as being ineffective
in communicating with the public because these sources are too
complicated and difficult to navigate.[Footnote 35] Our review of the
Federal Register confirmed that agencies often provided such notice and
opportunity for public comment on these regulatory reviews. In
addition, some agencies, such as DOJ's ATF, USDA's AMS, and FCC,
reported on the agencies' analysis of the public comments and their
effect on the outcome of the review. However, we were not always able
to track such notices or discussions of public input into the conduct
of discretionary reviews. Officials from DOL's MSHA and ETA stated
that, if internal staff generated the review and it was technical in
nature, the agency might not include the public when conducting a
review. However, we were able to observe that some agencies, such as
FCC, DOT, and EPA, did post comments received from petitioners and
solicited public comments on these types of discretionary reviews in
order to inform their analyses.
Most agencies within our review did not solicit or incorporate public
input on the reported results of their reviews. We are only aware of a
few agencies (NHTSA, FCC, and EPA) that provided opportunities for
additional public feedback on the analysis of their regulations before
making final policy decisions. Figure 2 illustrates the difference in
agencies' incorporation of public involvement in the three phases of
the review process, for discretionary and mandatory reviews.
Figure 2: Agencies' Incorporation of Public Involvement in the Review
Process, for Discretionary and Mandatory Reviews:
[See PDF for image]
Source: GAO analysis of agency data.
[A] One of the agencies within our review did not have any regulations
that were subject to mandatory review requirements.
[End of figure]
The Level of Documentation in Agencies' Reviews Varied Depending on the
Impetus and Phase of the Review, but Overall Could Be Improved:
Agency documentation for selecting regulations to review varied from
detailed documentation of selection criteria considered to no
documentation of the selection process. For example, DOL's EBSA
documented its selection process, including selection criteria used, in
detail in its Regulatory Review Program. However, agencies did not
always have written procedures for how they selected regulations for
discretionary reviews. SBA officials, for example, were not able to
verify the factors they considered during the selection of some
regulations that they reviewed because employees who conducted the
reviews were no longer with the agency and they did not always document
their review process. The officials indicated that the agency considered whether the regulations that it reviewed under Section 610 requirements were related to or complemented each other, but did not always document selection factors for discretionary reviews. This lack
of documentation was particularly important for SBA because the agency
reported having high staff turnover that affected maintaining
institutional knowledge about retrospective regulatory review plans.
For example, officials reported that within the 8(a) Business
Development program, there is a new Director almost every 4 years who
sets a new agenda for retrospective regulatory review needs. However,
because of other pressing factors, these reviews are often not
conducted. Consequently, we conclude that this lack of documentation may result in duplicative agency efforts to identify rules for review or in the omission of rules that the agency previously identified as needing review.
Agency documentation of the analyses conducted in reviews ranged from none at all to detailed descriptions of analysis steps in agency review reports. While some agencies reported the analysis
conducted in great detail in review reports, others summarized review
analysis in a paragraph or provided no documentation of review analysis
at all. Some agencies did not provide detailed reports because they did
not conduct detailed analyses. For example, SBA officials reported
that, for Section 610 reviews, they do not conduct any additional
analysis of the regulation if the public does not comment on the
regulation. Our assessment of a sample of agency review reports
revealed that, even for some reviews that provided a summary of their
analysis, we could not completely determine what information was used
and what analysis the agency conducted to form its conclusions.
Further, agencies in our review reported that they less often
documented the analysis of those reviews conducted on a discretionary
basis. One SBA official acknowledged that it would be helpful if the
agency better documented its reviews.
For each of the agencies we reviewed, we were able to find reports on
the results of some or all of their completed reviews. Nonetheless, the
content and detail of agency reporting varied, ranging from detailed
reporting to only one-sentence summaries of results. Some agencies told
us that they typically only document and report the results if their
reviews result in a regulatory change. Further, officials from many
agencies primarily reported conveying the results of only mandatory
reviews to the public. Agencies employed a variety of methods to report
review results to the public, but more often used the Federal Register
and Unified Agenda. Although agencies in our review often reported the
results of their mandatory reviews by posting them in the Federal
Register, agencies like OSHA, CPSC, FCC, and those within DOT also made
some or all their review reports available on their Web sites. During
our joint agency exit conference, officials indicated that agencies
could do more to report their review analysis and results to a wider
population of the public by using the latest information technology
tools. Specifically, they said that agencies could (1) use listservs to provide reports to identified interested parties, (2) make review
analysis and results more accessible on agency Web sites, and (3) share
results in Web-based forums, among other things.[Footnote 36]
Nonfederal parties also reported that agencies could improve their
efforts to report review results to the public and cited similar
communication techniques. Additionally, nonfederal parties reported
that agencies could improve communication by conducting more outreach
to broad networking groups that represent various stakeholders, such as
the Chamber of Commerce, the National Council of State Legislators, and
the Environmental Council of States, and tailoring their summary of the
review results to accommodate various audiences. Figure 3 illustrates
the differences in agencies' use of documentation in the three phases
of the review process, for discretionary and mandatory reviews.
Figure 3: Agencies' Documentation of the Review Process, for
Discretionary and Mandatory Reviews:
[See PDF for image]
Source: GAO analysis of agency data.
[A] One of the agencies within our review did not have any regulations
that were subject to mandatory review requirements.
[End of figure]
Reviews Resulted in a Variety of Outcomes that Were Considered Useful,
but Mandatory Reviews Most Often Resulted in No Change:
Agency reviews of existing regulations resulted in various outcomes--
from amending regulations to no change at all--that agencies and
knowledgeable nonfederal parties reported were useful. Mandatory
reviews most often resulted in no changes to regulations. Conversely,
agency officials reported that their discretionary reviews more often
generated additional action. Both agency officials and nonfederal
parties generally considered reviews that addressed multiple purposes
more useful than reviews that focused on a single purpose.
Among the various outcomes of retrospective reviews were changes to
regulations, changes or additions to guidance and other related
documents, decisions to conduct additional studies, and validation that
existing rules were working as planned. Agencies and nonfederal parties
that we interviewed reported that each of the outcomes could be
valuable to the agency and the public. Our review of agency documentation
confirmed that some reviews can prompt potentially beneficial
regulatory changes. For example, OSHA's review of its mechanical press
standard revealed that the standard had not been implemented since its
promulgation in 1988 because it required a validation that was not
available to companies. Consequently, OSHA is currently exploring ways
to revise its regulation to rely upon a technology standard that
industries can utilize and that will provide for additional
improvements in safety and productivity.[Footnote 37] Although some
reviews appeared to result in useful changes, the most common result
for mandatory reviews was a decision by the agency that no changes were
needed to the regulation. However, there was a general consensus among
officials across the agencies that such decisions are still sometimes
useful because they helped to confirm that existing regulations were
working as intended. Officials of some agencies further noted that,
even when mandatory reviews do not result in changes, they might have
already made modifications to the regulations. Our examinations of
selected completed reviews confirmed that this is sometimes the case.
Agency officials reported that their discretionary reviews resulted in
additional action--such as prompting the agencies to complete
additional studies or to initiate rulemaking to amend the existing
rule--more often than mandatory reviews. In particular, officials from
USDA's AMS and FSIS, FCC, SBA, EPA, DOJ, and DOT reported that Section
610 reviews rarely resulted in a change to regulations. Although AMS
has initiated 19 Section 610 reviews since 2001, AMS officials reported
that, because of their ongoing engagement with the regulated community,
these reviews did not identify any issues that the agency did not
previously know, and therefore resulted in no regulatory changes.
Similarly, none of the Section 610 reviews conducted by SBA and DOL's
EBSA resulted in changes to regulations, and few changes resulted from
Section 610 reviews conducted by EPA, FCC, DOJ, and DOT. The one
apparent outlier in our analysis was FDIC, which conducted many of its
reviews in response to the financial-sector-specific burden reduction
requirement in EGRPRA. According to FDIC officials and the agency's
2005 annual report, reviews conducted in response to this mandate
resulted in at least four regulatory changes by the agency since 2001
and over 180 legislative proposals for regulatory relief that FDIC and
other members of the FFIEC presented to Congress. The legislative
proposals led to the passage of the Financial Services Regulatory
Relief Act, which reduced excessive burden in nine areas in the
financial sector. In addition, our analyses of the December 2006
Unified Agenda revealed that FDIC attributed four of its nine proposed
or initiated modifications to existing regulations to statutory
mandates.
Most agencies' officials reported that reviews they conduct at their
own discretion--in response to technology and science changes, industry
feedback, and petitions--more often resulted in changes to regulations.
As one of many examples, EBSA officials reported that because the
reviews initiated and conducted by the agency to date have been
precipitated by areas for improvement identified by the regulated
community or the agency, virtually all the reviews have resulted in
changes to the reviewed rules. They reported that, in general, these
changes have tended to provide greater flexibility (e.g., the use of
new technologies to satisfy certain disclosure and recordkeeping
requirements) or the streamlining and/or simplifying of requirements
(e.g., reducing the amount of information required to be reported).
Similarly, DOT officials and other agencies' officials reported that
reviews that they conduct in response to industry and consumer feedback
and harmonization efforts also resulted in changes to regulations more
often than mandated reviews.
In addition, some agencies reported that reviews incorporating both the factors required in their mandatory reviews and factors identified by the agency in response to informal feedback often resulted in useful regulatory changes. For example, DOL's OSHA and EBSA
selected regulations for review based upon criteria that they
independently identified and selection criteria identified by Section
610 requirements. They also incorporated review factors listed in
Section 610 requirements into a broader set of evaluative factors
considered during their discretionary reviews, including assessing (1) whether the regulation overlaps, duplicates, or conflicts with other federal statutes or rules and (2) the nature of complaints against the regulation. As a result, they reported that these reviews generated
useful outcomes. Nonfederal parties also indicated that reviews that
focus on multiple review factors and purposes are more useful than reviews that focus on only one purpose (such as burden reduction or enforcement and compliance) or only one factor (such as public comments).
Because agencies did not always document discretionary reviews that
they conducted, it is not possible to measure the actual frequency with
which they resulted in regulatory change. However, we observed that,
for cases where agencies reported modifications to regulations, these
actions were most often attributed to factors that agencies addressed
at their own discretion, such as technology changes, harmonization
efforts, informal public feedback, and petitions. For example, although
EPA officials reported that they have many mandatory regulatory review
requirements, our review of proposed or completed modifications to
existing regulations reported in the December 2006 Unified Agenda
showed that 63 of the 64 modifications reported were attributed to reasons within the agency's own discretion. As illustrated in
figure 4, other agencies within our review had similar results.
Figure 4: Modifications to Rules Listed in the December 2006 Unified
Agenda by Type of Review for Selected Agencies:
[See PDF for image]
Source: GAO analysis of December 2006 Unified Agenda entries.
[End of figure]
Although agencies reported, and our analysis of the Unified Agenda
indicated, that agencies more often modify existing regulations for
reasons attributed to their own discretion, it is important to note
that mandatory reviews may serve other valuable purposes for Congress.
Such reviews may provide Congress with a means for ensuring that
agencies conduct reviews of regulations in policy areas that are
affected by rapidly changing science and technology and that agencies
practice due diligence in reviewing and addressing outdated,
duplicative, or inconsistent regulations. For example, Congress
required FCC to conduct reviews of its regulations that apply to the
operation or activity of telecommunication service providers to
"determine whether any such regulation is no longer necessary in the
public interest as the result of meaningful economic competition
between providers of such service."[Footnote 38]
Agencies' officials reported that reviews often had useful outcomes
other than changes to regulations, such as changes or additions to
guidance and other related documents, decisions to conduct additional
studies, and validation that existing rules were working as planned.
For example, OSHA officials reported that, outside of regulatory
changes, their reviews have resulted in recommended changes to guidance
and outreach materials and/or the development of new materials or
validation of the effectiveness of existing rules. Our review of OMB's
regulatory reform nominations process confirmed that at least four of
OSHA's reviews conducted in response to OMB's manufacturing reform
initiative resulted in changes to or implementation of final guidance
or the development of a regulatory report. We observed similar results
from OMB's regulatory reform process for EPA. Similarly, DOT officials
reported that their reviews also often led to changes in guidance or in
further studies, and our examination of review results reported by DOT
confirmed that this was often the case. Moreover, all of the agencies
within our review reported that reviews have resulted in validating
that specific regulations produced the intended results.
Agencies and Nonfederal Parties Identified Barriers and Facilitators to
the Conduct and Usefulness of Retrospective Reviews:
Agencies' officials reported that barriers to their ability to conduct
and use reviews included: (1) difficulty in devoting the time and staff
resources required for retrospective review requirements, (2)
limitations on their ability to obtain the information and data needed
to conduct reviews, and (3) constraints in their ability to modify some
regulations without additional legislative action, among other
important factors. Both agencies and nonfederal parties identified the
lack of public participation in the review process as a barrier to the
usefulness of reviews. The nonfederal parties also identified the lack
of transparency in agency review processes as a barrier to the
usefulness of reviews. Agency officials and nonfederal parties also
suggested a number of practices that could facilitate conducting and
improving the usefulness of regulatory reviews, including: (1)
development of a prioritization process to facilitate agencies' ability
to address time and resource barriers and allow them to target their
efforts at reviews of regulations that are more likely to need
modifications, (2) pre-planning for regulatory reviews to aid agencies
in identifying the data and analysis methodology that they will need to
conduct effective reviews, and (3) utilizing independent parties to
conduct the reviews to enhance the review's credibility and
effectiveness, among other things. While there was general consensus
among federal and nonfederal parties on the major facilitators and
barriers, there were a few clear differences of opinions between them
regarding public participation and the extent to which reviews should
be conducted by independent parties. Because only a few agencies track the costs associated with conducting their reviews, one cannot identify the type of and approach to retrospective review that may be most cost-effective. However, agency officials told us that the reviews have
resulted in cost savings to their agencies and to regulated parties,
for example by saving both the agency and the public the costs of
repeatedly dealing with petitions for change or waivers in response to
difficulties implementing particular regulatory provisions.
Time, Resources, and Information Were Among the Critical Barriers to
Conducting Reviews:
All of the agencies in our review reported that the lack of time and
resources are the most critical barriers to their ability to conduct
reviews. Specifically, they said that it is difficult to devote the
time and staff resources required to fulfill various retrospective
review requirements while carrying out other mission-critical
activities. Agencies' officials reported that, consequently, they had
to limit their retrospective review activities during times when they
were required to respond to other legislative priorities. For example,
officials from MSHA reported that they conducted fewer reviews in 2006
because they were heavily engaged in trying to implement the Mine
Improvement and New Emergency Response Act of 2006 (MINER Act), which
Congress passed in response to mining accidents that occurred in
2006.[Footnote 39] Prior to these events, MSHA was engaged in
soliciting a contractor to assist the agency in prioritizing its
retrospective review efforts. The officials reported that, because of
the need to develop regulations pursuant to the act, they stopped the
process of looking for a contractor, and conducted fewer reviews.
Officials from various agencies reported that retrospective reviews are
the first activities cut when agencies have to reprioritize based upon
budget shortfalls. A DOT official reported that, despite having high-
level management support for retrospective review activities, the
department has still experienced funding limitations that have affected its ability to conduct retrospective review activities. Our
examination of agency documents confirmed that several agencies
indicated that they did not complete all of the reviews that they
planned and scheduled for some years within the scope of our review
because sufficient resources were not available. In one example, we
found that FAA delayed conducting any planned reviews for an extended
period because, as reported in the Unified Agenda, it did not have
the resources to conduct them. Many of the agencies in our review did
not track the costs (usually identified in terms of full-time
equivalent (FTE) staff resources) associated with their reviews;
therefore, they could not quantify the costs of conducting reviews.
Information and Data Limitations Present Challenges to the Conduct of
Retrospective Reviews:
Most agencies' officials reported that they lack the information and
data needed to conduct reviews. Officials reported that a major data
barrier to conducting effective reviews is the lack of baseline data
for assessing regulations that they promulgated many years ago. Because
of this lack of data, agencies are unable to accurately measure the
progress or true effect of those regulations. Similar data collection
issues were also identified by agencies in the Eisner and Kaleta study
published in 1996, which concluded that, in order to improve reviews
for the future, agencies should collect data to establish a baseline
for measuring whether a regulation is achieving its goal, and identify
sources for obtaining data on ongoing performance.[Footnote 40]
Agencies and nonfederal parties also considered PRA requirements to be
a potential limiting factor in agencies' ability to collect sufficient
data to assess their regulations. For example, EPA officials reported
that obtaining data was one of the biggest challenges the Office of
Water faced in conducting its reviews of the effluent guideline and
pretreatment standard under the Clean Water Act, and that as a result
the Office of Water was hindered or unable to perform some analyses.
According to the officials, while EPA has the authority to collect such
data, the PRA requirements and associated information collection review
approval process take more time to complete than the Office of Water's
mandated schedule for annual reviews of the effluent guideline and
pretreatment standard allows. While one nonfederal party did not agree
that PRA restrictions posed a significant barrier to conducting
reviews, agencies and nonfederal parties generally agreed that the act
was an important consideration in agency data collection. However,
while agencies identified the potential limitations of PRA, it is
important to recognize that PRA established standards and an approval
process to ensure that agencies' information collections minimize the
federal paperwork burden on the public, among other purposes.[Footnote
41]
In general, data collection appeared to be an important factor that
either hindered or facilitated reviews. Some of the agencies in our
review that promulgate safety regulations, such as CPSC, NHTSA, and
those within DOJ, reported that having sufficient access to established
sources of safety data, such as death certificates or hospital
databases on deaths and injuries related to products, greatly
facilitated their ability to conduct retrospective reviews of their
regulations. Finally, agencies also reported facing limits on their
ability to obtain data on their regulations because of the length of
time it takes to see the impact of some regulations and the scarcity of
data related to areas that they regulate. Nonfederal parties also cited
this data limitation as a challenge to agency reviews.
Barriers to the Usefulness of Reviews Included Overlapping and Generic
Schedules and Review Factors, Scope, Statutory Limitations, Limited
Public Participation, and Transparency:
Overlapping Schedules and Review Factors:
To make efficient use of their time and resources, various agency
officials said that they consider all relevant factors, including
effectiveness and burden reduction, whenever they review an existing
regulation. Therefore, when reviews that have predetermined or generic
schedules and review factors (such as 10-year Section 610 reviews)
arise, the agency might have already reviewed and potentially modified
the regulation one or more times, based upon the same factors outlined
in Section 610. The officials reported that, although the subsequent
predetermined reviews are often duplicative and less productive, they
nevertheless expend the time and resources needed to conduct the
reviews in order to comply with statutory requirements. However, they
reported that these reviews were generally less useful than reviews
prompted by informal industry and public feedback, petitions, changes
in the market or technology, and other reasons.
Furthermore, agencies expressed concerns that predetermined schedules
may conflict with other priorities. DOT acknowledged this
issue even as it was establishing an agency policy to require
retrospective reviews. In response to a public suggestion that DOT
conduct reviews based upon a regular predetermined schedule, the agency
cautioned that arbitrary schedules might mean delaying other, more
important regulatory activities.
As examples of predetermined reviews that may be duplicative or
unproductive, officials from agencies within DOT, USDA, and DOL
reported that the regulations that most often apply to their industries
may need review sooner than the 10-year mark prescribed by Section 610.
To be responsive to the regulated community, the agencies regularly
review their regulations in response to public feedback, industry and
technology changes, and petitions, among other things, and make
necessary changes before a Section 610 review would be required. Our
assessment of reviews listed in the Unified Agenda confirmed that
agencies often noted that they had not made changes because of their
Section 610 reviews, but had made changes to these regulations earlier
because of other factors that emerged. For example,
USDA's AMS reported completing 11 mandated Section 610 reviews since
2001, which resulted in no regulatory changes. For 9 of these reviews,
the related published Section 610 reports stated that AMS made no
changes to the regulations because they were modified "numerous times"
in advance of the 10-year Section 610 review to respond to changes in
economic and other emerging conditions affecting the industry.
Similar to agency views on timing, an OMB official and some nonfederal
parties indicated that the period immediately after an agency
promulgates a rule may be a critical time for agencies to review
certain types of regulations, in part because once the regulated
community invests the resources to comply with the regulations and
integrates them into their operations, they are less likely to support
subsequent changes to the regulation. In addition, the immediate
effects of certain types of regulations, such as economic incentive
regulations, may be more apparent and changes, if needed, can be
brought about sooner.[Footnote 42] Nonfederal parties reported that
this may be especially important during the time that regulated
entities are facing challenges with the implementation of a regulation.
Some of these commenters noted that such immediate reviews might be
especially appropriate for rules that have a high profile, are
controversial, or involve a higher degree of uncertainty than usual.
Two agencies within our review that had predetermined deadlines set
only a few years apart also reported that these schedules affected
their ability to produce more useful reviews. The officials reported
that they do not have enough time to complete one review effectively
before beginning another. For example, EPA and FCC both stated that
agency-specific requirements to review their regulations every few
years leave them without enough time either to gather data effectively
or to observe new effects of a regulation between reviews. As a result,
the agencies may conduct less comprehensive reviews and have more
difficulty meeting their review deadlines.
For requirements that specify a predetermined schedule for conducting
reviews, agencies also identified, as a potential barrier, the lack of
clarity on when to "start the clock" for regulations that have been
amended over time. For example, as previously mentioned, in order to
satisfy Section 610 requirements, DOT initiated an extensive process
for reviewing its sections of the CFR every year. The agency's
officials reported that they adopted this extensive approach because
they were unable to determine whether to review a regulation 10 years
after its promulgation or 10 years after its last modification. Other
agencies included in our review did not take this approach to meeting
Section 610 requirements. Similarly, in our 1999 report on RFA, we
reported that agencies' varying interpretations of Section 610
requirements affected when they conducted reviews.[Footnote 43]
While agencies' officials reported that predetermined schedules can
sometimes be ineffective, it is important to note that such schedules
can also help ensure that reviews occur. Specifically, some parties
have noted that a benefit of prespecifying the timing of reviews is
that this provides Congress with a way to force agencies to
periodically reexamine certain regulations.
In general, as illustrated in table 3, our review of the timing of
reviews and the evaluative factors that agencies are supposed to assess
in those reviews revealed that there is considerable overlap in the
various mandatory and discretionary review requirements.
Table 3: Examples of Overlapping Timing and Review Factors in
Retrospective Review Requirements:
Impetus for review: Governmentwide RFA, Section 610 review
requirements;
Regulations covered by the review: All regulations that are at least 10
years old and have a significant economic impact on a substantial
number of small entities;
Frequency: Every 10 years;
Evaluation factors to be considered: Identify burden reduction
opportunities by assessing the:
* continued need for the rule;
* nature of complaints or comments received concerning the rule from
the public;
* complexity of the rule;
* extent to which the rule overlaps, duplicates, or conflicts with
other federal rules and, to the extent feasible, with state and local
governmental rules;[A]
* length of time since the rule has been evaluated or the degree to
which technology, economic conditions, or other factors have changed in
the area affected by the rule.
Impetus for review: EGRPRA review for Financial and Banking Regulatory
agencies;
Regulations covered by the review: All regulations prescribed by
financial regulatory agencies within FFIEC or by any such appropriate
federal banking agency;
Frequency: Every 10 years;
Evaluation factors to be considered: Identify and assess public
comments on areas of regulations that are:
* outdated;
* unnecessary;
* unduly burdensome.
Impetus for review: EBSA Regulatory Review Program;
Regulations covered by the review: Substantively review any EBSA
regulations that are selected as a priority, based upon 16 factors,
including regulations that are subject to the RFA, Section 610
requirements, and E.O. 12866;
Frequency: Annually;
Evaluation factors to be considered: Assess rules to determine whether:
* there is a continued need for the regulation;
* the regulation has been the subject of complaints or comments from
the public and the nature of those complaints;
* the regulation overlaps, duplicates, or conflicts with other federal
statutes or rules or with nonpreempted state or local statutes or
rules;
* the regulation is overly complex and could be simplified without
impairing its effectiveness;
* the regulation may be based upon outdated or superseded employment,
industrial, or economic practices or assumptions and whether
participants and/or beneficiaries of employee benefit plans may be
exposed to harm as a result;
* the regulation may impose significant economic costs on regulated
entities and whether the benefit(s) or purpose(s) of the regulation
could be achieved as effectively through an alternative regulatory
approach that would impose less economic burden on regulated
industries.
Impetus for review: FCC Communications Act Biennial Review;
Regulations covered by the review: All rules applicable to providers of
telecommunications service;
Frequency: Every 2 years;
Evaluation factors to be considered: Identify rules that are:
* no longer necessary in the public interest as a result of meaningful
competition.
Impetus for review: FCC Communication Act, Quadrennial Review;
Regulations covered by the review: Almost all broadcast ownership
rules;
Frequency: Every 4 years;
Evaluation factors to be considered: Determine whether any rules:
* are necessary in the public interest as the result of competition, or
* are no longer in the public interest.
Impetus for review: EPA's Drinking Water Program/Review of National
Drinking Water Regulation;
Regulations covered by the review: All drinking water regulations that
were promulgated under Section 1412(b)(9);
Frequency: Every 6 years;
Evaluation factors to be considered: To assess whether regulations
should be modified based upon contaminant occurrences in finished
drinking water.
Impetus for review: EPA Section 405(d)(2)(C) Clean Water Act Review;
Regulations covered by the review: All regulations on the Use and
Disposal of Sewage Sludge;
Frequency: Every 2 years;
Evaluation factors to be considered: To identify:
* whether the regulation should be updated to include additional toxic
pollutants.
Source: GAO analysis of agency data and regulatory statutes.
[A] Similar to this criterion, PRA requires agencies to consider
whether information that they propose to collect from the public--
including information requests per regulatory requirements--overlaps,
duplicates, or conflicts with other federal requests for information.
All proposed collections from 10 or more persons, other than agencies,
instrumentalities, or employees of the United States, must be approved
by OMB, and those approvals expire after 3 years, so they must be
renewed if the agency wishes to continue the collection.
[End of table]
Scope of Reviews:
Various agencies identified scoping issues as a barrier to the
usefulness of reviews. Agencies' officials reported significant delays
in completing reviews and making timely modifications, as well as
difficulty obtaining meaningful input, when reviews involved multiple
regulations as the unit of analysis. Some agencies, such as DOL's MSHA,
reported experiencing delays of up to 16 years in completing a review
because they scoped their review too broadly. Specifically, MSHA
officials reported that, during a comprehensive review of their
ventilation standards, the scope of the review increased due to input
from other departmental agencies. Because of this input and the
complexity of the rule itself, it took 16 years to complete the
modifications, resulting in a major rewrite of the ventilation
standards. In our assessment, the information resulting from this
review was not as timely as it could have been and therefore may
have been less useful.[Footnote 44] Similarly, officials from other
agencies reported that scoping reviews too broadly also affected their
ability to conduct expedient reviews. Agencies' officials suggested
that having a narrow and focused unit of analysis, such as a specific
standard or regulation, is a more effective approach to conducting
reviews. Specifically, officials from DOT and FDIC reported that, when
they conducted narrowly defined reviews, the public provided more
meaningful input on their regulations. Furthermore, one nonfederal
party emphasized that, when agencies choose to analyze a broad unit of
analysis, such as an act, it is difficult for the public to discern
which regulations are doing well and which are not. The positive
effects of one regulation under the legislation can overshadow the
negative effects of other regulations. Therefore, the performance
assessment of the relevant regulations is less transparent and,
consequently, less useful.
Statutory Provisions:
Agencies' officials reported that statutory requirements are a major
barrier to modifying or eliminating regulations in response to
retrospective regulatory reviews because some regulations are aligned
so closely with specific statutory provisions. Therefore, the agencies
may be constrained in the extent to which they can modify such
regulations without legislative action. For example, officials from
MSHA, FDIC, and SBA reported that many of their regulations mirror
their underlying statutes and cannot be modified without statutory
changes. During its retrospective reviews to reduce burden, FDIC, along
with other banking agencies within the FFIEC, identified 180 financial
regulations that would require legislative action to revise. Similarly,
in our 1999 report on regulatory burden, we found that agencies often
had no discretion, because of statutory provisions, when they imposed
requirements that businesses reported as most burdensome.[Footnote 45]
One approach FDIC took to address this issue was to identify, in its
review process, regulations that required legislative action and to
coordinate with Congress to address these potential regulatory
changes. Because of this approach, Congress is actively involved in
FDIC's regulatory burden relief efforts and has passed changes in
legislation to provide various forms of burden relief to the financial
sector.
Limited Public Participation:
Agencies and nonfederal parties identified the lack of public
participation in the review process as a barrier to the usefulness of
reviews. Agencies stated that despite extensive outreach efforts to
solicit public input, they receive very little participation from the
public in the review process, which hinders the quality of the reviews.
Almost all of the agencies in our review reported actively soliciting
public input into their formal and informal review processes. They
reported using public forums and industry meetings, among other things,
to solicit input into their discretionary reviews. For example, USDA
officials reported conducting referenda of growers to establish or
amend AMS marketing orders, and CPSC officials reported regularly
meeting with standard-setting consensus bodies, consumer groups, and
regulated entities to obtain feedback on their regulations. Other
agencies reported holding regular conferences, forums, or other public
meetings. For their mandatory reviews, such as Section 610 reviews,
most agencies reported primarily using the Federal Register and Unified
Agenda to solicit public comments. Despite these efforts, agency
officials reported receiving very little public input on their
mandatory reviews.
Nonfederal parties we interviewed were also concerned about the lack of
public participation in the retrospective review process and its impact
on the quality of agency data used in reviews. However, these
nonfederal parties questioned the adequacy and effectiveness of
agencies' outreach efforts. Specifically, 7 of the 11 nonfederal
parties cautioned that the Federal Register and Unified Agenda are not
sufficiently effective tools for informing the public about agency
retrospective review activities. In addition, most of the nonfederal
parties we interviewed were unaware of the extent to which agencies
conducted reviews under their own discretion, and most of those parties
reported that they were not aware of the outreach efforts agencies are
making to obtain input for these reviews. Limited public participation
in some review activities was cited by both agencies and nonfederal
parties as a barrier to producing quality reviews, in part because
agencies need the public to provide information on the regulations'
effects. Both agency officials and nonfederal parties identified
methods for improving communication, including using agency Web sites,
e-mail listserves, or other Web-based technologies (such as Web
forums), among other things.
Shortcomings in Transparency:
Nonfederal parties identified the lack of transparency in agency review
processes, results, and related follow-up activities as a barrier to
the usefulness of reviews to the public. Nonfederal parties were rarely
aware of the retrospective review activities reported to us by the
agencies in our review. Similarly, in our review of the Federal
Register and Unified Agenda, we were not always able to track
retrospective review activities, identify the outcome of the review, or
link review results to subsequent follow-up activities, including
initiation of rulemaking to modify the rule. As stated earlier, some
mandatory reviews do not require public reporting and many agencies did
not consistently report the results of their discretionary reviews,
especially if the reviews resulted in no changes to regulations. Some
nonfederal parties told us that lack of transparency was the primary
reason for the lack of public participation in agencies' review
processes.
Agencies and Nonfederal Parties Identified Lessons Learned and
Practices for Improving Retrospective Reviews:
Agencies and nonfederal parties identified pre-planning for regulatory
reviews as a practice that aids agencies in identifying the data and
analysis methodology that they need to conduct effective outcome-based
performance reviews. Some agencies within our review planned how they
would collect performance data on their regulations before or during
the promulgation of the relevant regulations or prior to the review.
They cited this technique as a method for reducing data collection
barriers. For example, OMB officials and nonfederal parties identified
DOT's NHTSA as an agency that appears to conduct effective
retrospective reviews of its regulations. NHTSA officials reported to
us that, to conduct effective reviews, they plan for how they will
review their regulations even before they issue them. Prior research on
regulatory reviews also cited the need for agencies to set a baseline
for their data analysis, in order to conduct effective reviews. In
addition, we have long advocated that agencies take an active approach
to measuring the performance of agency activities. Furthermore, we
observed that pre-planning for data collection could address some
challenges that agencies reported facing with PRA data collection
requirements, such as the length of time required to obtain approval.
Agencies reported that prioritizing which regulations to review
facilitated the conduct of and improved usefulness of their reviews.
Agencies that developed review programs with detailed processes for
prioritizing which regulations to review reported that this
prioritization facilitated their ability to address time and resource
barriers to conducting reviews and allowed them to target their efforts
at more useful reviews of regulations that were likely to need
modifications. As previously mentioned, DOL's EBSA and CPSC developed
detailed prioritization processes that allowed officials to identify
which regulations were most in need of review and to plan for
conducting those reviews. Furthermore, this process allowed CPSC to
prospectively budget for its reviews and to identify the number of
substantive reviews per year that the agency could effectively conduct,
while meeting its other agency priorities. Officials from both agencies
reported that their prioritization processes allowed them to focus on
the most useful retrospective review activities, which identified
important regulatory changes. Nonfederal parties that we interviewed
also asserted that it is not necessary or even desirable for agencies
to expend their time and resources reviewing all of their regulations.
Instead, they reported that it would be more efficient and valuable to
both agencies and the public for agencies to conduct substantive
reviews of a small number of regulations that agencies and the public
identify as needing attention. Nonfederal parties and agency officials
suggested that factors that agencies should consider when prioritizing
their review activities could include economic impact, risk, public
feedback, and length of time since the last review of the regulation,
among other things.
Nonfederal regulatory parties believed that reviews would be more
credible and effective if the parties that conduct them were
independent. For example, two different parties whom we interviewed
said that EPA's first report in response to Section 812 of the Clean
Air Act could have been improved by involving independent analysts.
However, they recognized that it is important to include input from
those who were involved in the day-to-day implementation of the
regulation and were responsible for producing initial benefit-cost
estimates for the regulations. Almost all of the nonfederal parties
that we interviewed expressed concern that agency officials who
promulgated and implemented regulations may be the same officials who
are responsible for evaluating the performance of these regulations.
Although the nonfederal parties acknowledged that it is important for
officials with critical knowledge about the program to provide input
into the review, they were concerned that officials placed in this
position may not be as objective as independent reviewers.
Nonfederal parties also expressed concerns about agencies' capacity to
conduct certain types of analyses for their reviews, such as benefit-
cost assessments. The nonfederal parties suggested that agencies could
consider having an independent body, such as another agency, an
Inspector General, or a centralized office within the agency, conduct
the reviews.
During our review, agencies' officials reported that they sometimes
contract out their reviews if they do not have the expertise needed to
conduct the analyses. However, during a discussion of this issue at our
joint agency exit meeting, agency officials pointed out the difficulty
in finding a knowledgeable independent review body to conduct
retrospective reviews, and they noted that even contracted reviewers
may be considered less independent, because they are paid by the agency
to conduct the study.
Agencies and nonfederal regulatory parties agreed that high-level
management support in the review process is important not only to the
successful implementation of individual reviews but also to sustaining
the agency's commitment to a review program and following up on review
results. As an example, officials from FDIC credited the
accomplishments of their review program largely to the support of high-
level managers who headed the FFIEC effort to reduce regulatory burden
on financial institutions. Officials reported that the leadership of
the Director of the Office of Thrift Supervision, who chaired the FFIEC
effort, helped to galvanize support for reviews at all of the FFIEC
agencies, including FDIC, and helped to free up resources to conduct
reviews at these agencies. Almost all of the selected agencies reported
involving high-level management in their reviews, but where and how
they used this involvement varied. For example, while almost all of the
agencies reported involving high-level management in the decision-
making processes that resulted from reviews, CPSC's and EBSA's review
programs also involved high-level managers early in their processes, in
order to determine which regulations to review.
Overall, agencies and nonfederal parties indicated that high-level
management attention is important to obtaining and sustaining both the
resources needed to conduct reviews and the credibility of agency
reviews.
Agency officials from DOT, DOL, SBA, and FDIC reported learning that
grouping related regulations together when conducting reviews is a
technique that more often generated meaningful comments and suggestions
from the public. For example, officials from FDIC
stated that categorizing regulations for review and soliciting input
over an extended time period proved to be a more effective way of
receiving public input. They reported that placing regulations into
smaller groups and soliciting feedback on these categories separately
over a 3-year period helped the members of the FFIEC to avoid
overwhelming the public with the regulatory reviews, and allowed the
agencies to receive more thoughtful participation and input. SBA
officials reported reviewing related regulations together because a
change to one rule can have an impact on the related rules. Similarly,
a DOT official reported that grouping similar regulations together to
solicit public input was an effective technique for FAA because the
agency regulates a broad policy area. FAA received 1,800 suggestions
for regulatory changes based upon one such review. However, the
official cautioned that while grouping regulations is an effective
technique for obtaining useful public input, defining the categories
too broadly can lead to an effort that is too intensive. In addition,
the practice may be less convenient and practical for agencies that
write very specific standards, such as NHTSA. For such agencies, it may
be more effective to group regulations for review by related
characteristics of the rules.
Nonfederal parties suggested that agencies need to be more aware of the
different audiences that might be interested in their reviews, and
target the level of detail and type of product used to report the
results to meet the needs of these various audiences. For example, a
product that focuses on specific details of implementing a regulation
may be less useful to those interested in the policy effects of a
regulation, and vice versa. Further, both agency officials and
nonfederal parties identified methods for improving communication,
including better use of information technology tools, such as agency
Web sites, electronic docket systems, e-mail listserves, Web-based
forums, or other Web-based technologies.
Some Agencies Believe Retrospective Reviews Can Result in Cost Savings:
Agencies have not estimated all of the costs and benefits associated
with conducting retrospective reviews, but they believe that
retrospective reviews have resulted in cost savings to their agencies.
For example, MSHA officials reported that their retrospective
regulatory reviews related to petitions for modification produce
savings for the agency because the reviews prompt the agency to review
and modify regulations that are heavily petitioned, which reduces costs
associated with reviewing similar petitions.[Footnote 46] They reported
that these reviews also save the mining industry from the costs
associated with repeatedly filing petitions. In addition to petition-
related cost savings, agencies could save costs by reviewing and
eliminating regulations that are no longer useful, thereby reducing the
costs associated with implementing and enforcing outdated or
unproductive regulations.
We found that only a few agencies track the costs associated with
conducting their reviews, so we were unable to identify which methods
are most cost effective. Some agency officials, such as those in MSHA,
reported that tracking direct costs associated with reviews is
difficult because reviews are conducted as part of the normal operation
of the agencies and in concert with other actions to fulfill the
agencies' missions. However, some agencies like CPSC establish budgets
for their reviews, and track the associated costs. As a result, CPSC
determined that conducting about four regulatory reviews per year was a
reasonable effort for the associated expense to the agency. OSHA also
tracks the costs associated with its reviews. The agency's officials
told us that each of its reviews typically requires 2/3 of a program
analyst FTE in the Office of Evaluations and Audit Analysis, about 1/5
of an attorney FTE in the Office of the Solicitor, 1/2 FTE for the
involvement of staff from other directorates, and approximately $75,000
to $100,000 of contractor support per review.
Although agencies did not always track the cost of their reviews,
officials reported that they know some reviews are not cost effective.
For example, a USDA official reported that the agency designs some
regulations to be reviewed regularly, so externally imposed reviews of
those regulations merely duplicate this effort. An example of
such reviews would be those conducted for regulations that are
consistently reviewed by industry committees that are appointed by the
Secretary of an agency. AMS officials reported that industry committees
appointed by the Secretary of Agriculture oversee many of the agency's
regulations and, as one of their main functions, regularly review AMS
regulations to identify needed changes. Therefore, regulations under
the purview of these committees are already constantly being reviewed
and updated, and thus may benefit less from a Section 610 review than
other regulations.
Conclusions:
Our review revealed that agencies are conducting more reviews, and a
greater variety of reviews, than is readily apparent, especially to the
public. To facilitate their reviews, agencies have, to varying extents,
been developing written procedures, processes, and
standards to guide how they select which rules to review, conduct
analyses of those rules, and report the results. Given the multiple
purposes and uses of reviews, we recognize that there is no "one size
fits all" approach. However, there are lessons to be learned from
ongoing regulatory reviews that could benefit both the agencies in our
scope and others that conduct retrospective regulatory reviews. Because
agencies are attempting to find ways to further develop and improve
their retrospective review processes (for example, establishing units
that focus on retrospective reviews and seeking assistance with
establishing prioritization systems), identifying ways to share
promising practices could collectively improve agency review
activities. Feedback from agency officials and nonfederal parties, as
well as our own analysis, indicates that there are procedures and
practices that may be particularly helpful for improving the
effectiveness and transparency of retrospective review processes. For
example, agencies can be better prepared to undertake reviews if they
have identified what data will be needed to assess the effectiveness of
a rule before they start a review and, indeed, before they promulgate
the rule. If agencies fail to plan for how they will measure the
performance of their regulations, and what data they will need to do
so, they may continue to be limited in their ability to assess the
effects of their regulations.
Given increasing budgetary constraints, both agency officials and
nonfederal parties emphasized the need to better prioritize agency
review activities, when possible, to more effectively use their limited
resources. Agency officials and nonfederal parties recognize that time
and resources are too limited to allow for a regular, systematic review
of all of their regulations, and that devoting excessive time and
scarce resources to such a formal review could result in insufficient
attention to other regulatory needs or statutory
mandates. As we have observed, some agencies are already using such
prioritization processes. Without a detailed prioritization system,
agencies may not be able to effectively target their reviews so that
they devote resources to conducting substantive and useful reviews of
the regulations that need the most attention.
Agencies and nonfederal parties also reported that reviews are more
credible and useful to all parties if agencies have assessed multiple
review factors in their analyses of the regulations, rather than
relying on a single factor, such as public comments. The failure of
agencies to do this could result in reviews that miss assessing crucial
information that could provide context to the results of the analysis,
such as weighing the benefits against the burdens of the regulation.
Further, our assessment of the strengths and limitations of agency
reviews revealed that agencies inconsistently apply a standards-based
approach to conducting discretionary reviews. Applying such an approach
more consistently could enhance the transparency and consistency of
reviews.
Agencies' reporting of reviews appears largely ineffective. None of the
nonfederal parties we contacted were aware of the extent of agency
retrospective review activities. Two factors might explain this lack
of awareness. First, agencies typically did not report
results for discretionary reviews, which account for most of agencies'
review activities. Therefore, the public cannot be expected to know
about these reviews. Second, when agencies do report on their
activities, the mode and content of these communications may not be
effective. For example, although we found that some agencies used
multiple modes of communication, for the most part agencies reported
that they rely heavily on the Federal Register. However, nonfederal
parties indicated that reliance on the Federal Register is not
sufficient. Further, the content that agencies do publish does not
always provide adequate information about the analysis and results of
the reviews. Our own assessment showed that it was sometimes difficult
to determine the outcomes of the reviews or the bases for the agencies'
conclusions. Some agencies have employed multiple communication modes
and provided detailed content in their reports, but still report
disappointing levels of public participation. Therefore, it is clear
that agencies need to continue to explore methods to more effectively
communicate and document information about their reviews and the
underlying analyses. According to agency officials and nonfederal
parties, such methods could include using agency Web sites, e-mail
listserves, or other Web-based technologies (such as Web forums). When
agencies do not effectively communicate the analysis and results of
their reviews, they miss the opportunity to obtain meaningful comments
that could affect the outcome of their reviews. Further, without
showing the underlying analysis of reviews, the agencies' conclusions
may lack credibility.
Agencies and nonfederal parties also emphasized the importance of
having high-level support for sustaining agency retrospective review
activities, and increasing their credibility with the public. Without
such attention, agencies will face difficulties in making retrospective
review a priority that receives the resources necessary for conducting
successful reviews. Agencies provided specific examples that
illustrated how high-level management support helped to ensure that
they followed through on the results of regulatory reviews. Although
agency officials cautioned that even high-level management support
might not be sufficient to overcome all budgetary constraints, having
such support may ensure that some retrospective review activity will be
sustained.
One of the most striking findings during our review was the disparity
in the perceived usefulness of mandatory versus discretionary
regulatory reviews. The agencies characterized the results of their
discretionary reviews as more productive and more likely to generate
further action. A primary reason for this appears to be that
discretionary reviews that address changes in technology, advances in
science, informal agency feedback, harmonization efforts, and
petitions, among other things, may be more closely attuned to
addressing issues as they emerge. While agencies' officials reported
that their discretionary reviews might be more useful than the
mandatory reviews, we cannot definitively conclude which reviews are
most valuable. We did not assess the content and quality of
discretionary reviews, and could not have done so because they often
were not documented. Although the officials reported that the bulk of
their review activity is associated with discretionary reviews, they
could not provide evidence to show definitively that this was so or
that discretionary reviews more often generated useful outcomes.
Further, one cannot dismiss the value that Congress anticipated when
establishing the mandatory requirements for agencies to conduct reviews
for particular purposes and on particular schedules.
The predetermined time frames of mandatory reviews can both help and
hinder. On one hand, predetermined schedules are one means by which
Congress can force agencies to periodically reexamine certain
regulations. However, the timing for some mandatory reviews may either
be too short or overlap with other review requirements, making it more
difficult for agencies to produce meaningful analysis from their
reviews. Conversely, from the cursory information that agencies
reported for some mandatory reviews that have review periods as long as
10 years, it appears that agencies may devote limited time and
resources to conducting these reviews, perhaps partly because the
required timelines do not recognize ongoing changes to regulations.
Further, the criteria used in mandatory and discretionary reviews may
be duplicative. In general, our review of the timing of reviews and the
evaluative factors that agencies are supposed to assess in those
reviews revealed that there is considerable overlap in the various
mandatory and discretionary review requirements. To make efficient use
of their time and resources, agency officials said that they consider
all relevant factors, including effectiveness and burden reduction,
whenever they review an existing regulation. Therefore, when there are
duplicative review factors (such as assessing whether the rule is still
needed, overly burdensome, or overlaps with other regulations), the
agency might have already reviewed and potentially modified the
regulation one or more times based upon the same factors. The officials
reported that, although the subsequent reviews are often duplicative
and less productive, they nevertheless expend the time and resources
needed to conduct the reviews in order to comply with statutory
requirements.
Given the long-term fiscal imbalance facing the United States and other
significant trends and challenges, Congress and the executive branch
need to carefully consider how agencies use existing resources. In
particular, overlapping or duplicative reviews may strain limited
agency resources. As agencies face trade-offs in allocating these
limited resources to conducting mandatory and discretionary reviews, as
well as conducting other mission-critical activities, they have to make
decisions about what activities will produce the most benefit. In some
cases, we observed that agencies like FAA delayed conducting any
planned reviews for an extended period because they reported that they
did not have the resources to conduct them. Given the trade-offs that
agencies face, it makes sense to consider the appropriate mix of
mandatory and discretionary reviews, and other mission-critical
activities, that agencies can and should conduct. More specifically,
our findings and analysis suggest that it may be useful to revisit the
scope and timing of some review requirements to see whether there are
opportunities to consolidate multiple requirements to enhance their
usefulness and make them more cost effective and easier to implement.
If the current state of review requirements remains unchanged, agencies
may continue to expend their limited time and resources on conducting
pro forma reviews that appear to produce less useful results. Further,
agencies may continue to produce less useful results for reviews that
they rush to complete; for example, EPA and FCC officials reported that
their annual or biannual review requirements do not provide enough time
to complete the reviews effectively or to observe new changes before
starting a subsequent review.
While we believe that employing the lessons learned by agencies may
improve the effectiveness of their retrospective reviews, we
acknowledge that the review of regulations is only one of the tools
that agencies will need to fully understand the implications of their
regulatory activities. In order to fully assess the performance of
regulatory activities, agencies will need to consider the performance
of the programs that implement their regulations and the statutes that
underlie the regulations. Considering any of these elements in
isolation, or neglecting any of them, will provide an incomplete
picture of the impact of regulations on the public.
Recommendations for Executive Action:
In order to ensure that agencies conduct effective and transparent
reviews, we recommend that both the Director of the Office of
Management and Budget, through the Administrator of the Office of
Information and Regulatory Affairs, and the Chief Counsel for Advocacy
take the following seven actions.
Specifically, we recommend that they develop guidance directing
regulatory agencies to consider or incorporate the following elements,
where appropriate, into the policies, procedures, or agency guidance
documents that govern their regulatory review activities:
1. Consideration, during the promulgation of certain new rules, of
whether and how they will measure the performance of the regulation,
including how and when they will collect, analyze, and report the data
needed to conduct a retrospective review. Such rules may include
significant rules, regulations that the agencies know will be subject
to mandatory review requirements, and any other regulations for which
the agency believes retrospective reviews may be appropriate.
2. Prioritization of review activities based upon defined selection
criteria. These criteria could take into account factors such as the
impact of the rule; the length of time since its last review; whether
changes to technology, science, or the market have affected the rule;
and whether the agency has received substantial feedback regarding
improvements to the rule, among other factors relevant to the
particular mission of the agency.
3. Specific review factors to be applied to the conduct of agencies'
analyses that include, but are not limited to, public input to
regulatory review decisions.
4. Minimum standards for documenting and reporting all completed review
results. For reviews that included analysis, these minimum standards
should include making the analysis publicly available.
5. Mechanisms to assess their current means of communicating review
results to the public and identify steps that could improve this
communication. Such steps could include considering whether the agency
could make better use of its agency Web site to communicate reviews and
results, establishing an e-mail listserve that alerts interested
parties about regulatory reviews and their results, or using other Web-
based technologies (such as Web forums) to solicit input from
stakeholders across the country.
6. Steps to promote sustained management attention and support to help
ensure progress in institutionalizing agency regulatory review
initiatives.
We further recommend that, in light of overlapping and duplicative
review factors in statutorily mandated reviews and the difficulties
identified by agencies in their ability to conduct useful reviews with
predetermined time frames, the Administrator of OIRA and Chief Counsel
for Advocacy take the following step.
7. Work with regulatory agencies to identify opportunities for Congress
to revise the timing and scope of existing regulatory review
requirements and/or consolidate existing requirements.
Matters for Congressional Consideration:
In order to facilitate agencies' conduct of effective and transparent
reviews, while making the best use of their limited time and resources, Congress
may wish to consider authorizing a pilot program with selected agencies
that would allow the agencies to satisfy various retrospective review
requirements with similar review factors that apply to the same
regulations by conducting one review that is reported to all of the
appropriate relevant parties and oversight bodies.
Agency Comments:
We provided a draft of this report to the Secretary of Agriculture, the
Attorney General, the Secretary of Labor, the Secretary of
Transportation, the Administrator of EPA, the Administrator of SBA, the
Acting Chairman of CPSC, the Chairman of FCC, the Chairman of FDIC, the
Director of OMB, and the Chief Counsel for Advocacy for their review
and comment.
We received formal comments from the SBA Office of Advocacy; they
concurred with the recommendations and, as an attachment, provided a
copy of draft guidance that they developed in response to our
recommendations (see app. XII). The Office of Advocacy also suggested
that it would be more appropriate to direct the recommendations to the
Chief Counsel for Advocacy rather than the Administrator of SBA. Because
the Chief Counsel for Advocacy is the official who would need to act
upon these recommendations, we made the change.
OMB told us that it reviewed our draft report and had no comments.
All other agencies provided technical and editorial comments, which we
incorporated as appropriate. In its technical comments, DOT suggested
that we expand the recommendation for agencies to identify
opportunities for Congress to examine the timing and scope of existing
requirements and/or consolidate existing requirements, to include
executive agency mandated reviews. However, the focus of the
recommendation is on statutory requirements because they tended to have
recurring and/or predetermined review schedules. Therefore, we did not
expand the recommendation.
As we agreed with your office, unless you publicly announce the
contents of this report earlier, we plan no further distribution of it
until 30 days from the date of this letter. We will then send copies of
this report to interested congressional committees, the Secretary of
Agriculture, the Attorney General, the Secretary of Labor, the
Secretary of Transportation, the Administrator of EPA, the
Administrator of SBA, the Acting Chairman of CPSC, the Chairman of FCC,
the Chairman of FDIC, the Director of OMB, the Administrator of OIRA,
and the Chief Counsel for Advocacy. Copies of this report will also be
available at no charge on our Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please
contact me at (202) 512-6806 or sciremj@gao.gov. Contact points for our
Offices of Congressional Relations and Public Affairs may be found on
the last page of this report. Key contributors to this report are
listed in appendix XIII.
Signed by:
Mathew J. Scire:
Director:
Strategic Issues:
[End of section]
Appendix I: Objectives, Scope, and Methodology:
Objectives:
To provide insights concerning how agencies assess existing
regulations, congressional requesters asked us to examine agencies'
implementation of retrospective regulatory reviews and the results of
such reviews. Accordingly, for selected agencies, we are reporting on:
1. the magnitude of retrospective review activity and type of
retrospective reviews agencies completed from calendar year 2001
through 2006, including the frequency, impetus (mandatory or
discretionary), and purposes of the reviews;
2. the processes and standards that guide agencies' planning, conduct,
and reporting on reviews, and the strengths and limitations of the
various review processes and requirements;
3. the outcomes of reviews, including the perceived usefulness of the
reviews and how they affected subsequent regulatory activities; and
4. the factors that appear to help or impede agencies in conducting or
using retrospective reviews, including which methods, if any, that
agencies and we identified as most cost-effective for conducting
reviews.
For purposes of this report, we generally use the term retrospective
reviews to mean any assessment of an existing regulation, primarily for
purposes of determining whether (1) the expected outcomes of the
regulation have been achieved; (2) the agency should retain, amend, or
rescind the regulation; and/or (3) the actual benefits and costs of the
implemented regulation correspond with estimates prepared at the time
the regulation was issued. We defined mandatory reviews as
retrospective reviews that agencies conducted in response to
requirements in statutes, executive orders, or executive branch
directives. We defined discretionary reviews as reviews that agencies
undertook based upon their own initiative.
Scope:
For calendar years 2001 through 2006, we assessed the retrospective
review activities of nine agencies and their relevant subagencies. The
nine agencies included the Departments of Agriculture, Justice, Labor,
and Transportation; Consumer Product Safety Commission (CPSC);
Environmental Protection Agency (EPA); Federal Communications
Commission (FCC); Federal Deposit Insurance Corporation (FDIC); and the
Small Business Administration (SBA). The subagencies covered in detail
by our review included USDA's Animal and Plant Health Inspection
Service, Agricultural Marketing Service, and Food Safety and Inspection
Service; Department of Justice's Bureau of Alcohol, Tobacco, Firearms,
and Explosives; Department of Labor's Employee Benefits Security
Administration, Occupational Safety and Health Administration, Mine
Safety and Health Administration, and Employment and Training
Administration; and the Department of Transportation's Federal Aviation
Administration and National Highway Traffic Safety Administration.
We selected these agencies because they include Cabinet departments,
independent agencies, and independent regulatory agencies covering a
wide variety of regulatory activities in areas such as health, safety,
environmental, financial, and economic regulation.[Footnote 47]
Further, we selected these agencies because they were actively
conducting regulatory reviews or were responsible for responding to
multiple review requirements. We were not able to assess the activities
of all regulatory agencies, due to time and resource constraints, but
given the diversity and volume of federal regulation conducted by the
nine selected agencies, we believe that the results of our assessment
should provide a reasonable characterization of the variety of
retrospective regulatory reviews and the issues associated with their
implementation. GAO's Federal Rules Database, which is used to compile
information on all final rules, showed that the nine agencies accounted
for almost 60 percent of all final regulations published from 2001
through 2006.[Footnote 48] However, the volume and distribution of
reviews covered in this report are not generalizable to all regulatory
reviews governmentwide. To supplement our assessment of these agencies'
activities, we also solicited the perspectives of regulatory oversight
entities and nonfederal parties knowledgeable about regulatory issues,
such as the Office of Information and Regulatory Affairs within the
Office of Management and Budget, the Office of Advocacy within SBA, and
11 nonfederal parties that represented a variety of sectors (academia,
business, public advocacy, and state government).
Methodology:
To address our first objective, we interviewed and obtained
documentation from agency officials as well as other knowledgeable
regulatory parties on agency retrospective reviews. We administered and
collected responses to a structured data collection instrument that
solicited information on agencies' retrospective review activities and
lessons learned. We supplemented this data collection by obtaining
information from the Federal Register, Unified Agenda, and published
dockets and agency reports. We used information obtained to describe
the "types" of reviews that agencies conducted--in terms of impetus
(mandatory or discretionary) and purpose (for example, burden reduction
or effectiveness). We compared agency review activities in terms of
impetus and purpose because important differences can be seen in the
processes used, outcomes derived, and lessons learned, based upon these
characteristics, which we further analyze in objectives two through
four. Although we note that reviews can be described and compared using
other characteristics, such as policy area assessed in the review (such
as health, safety, or economic) or type of analyses conducted (such as
economic benefit-cost analysis, other quantitative, and qualitative),
we believe our selection of characteristics in describing the types of
reviews conducted was most useful and relevant for addressing our
objectives.
To address our second objective, we interviewed and obtained
documentation from agency officials as well as other knowledgeable
regulatory parties on agency retrospective reviews. We collected
responses to the aforementioned structured data collection instrument
that solicited information on agencies' retrospective review activities
and lessons learned. We supplemented this data collection by obtaining
information from the Federal Register, Unified Agenda, and published
dockets and agency reports. We also reviewed agency policies, executive
orders, and statutory requirements to identify policies and procedures
that guide the planning, conduct, and reporting of agencies' reviews.
Further, to identify the strengths and limitations of agency review
processes, we assessed agencies' use of three research and economic
practices and standards that are important to the effectiveness and
transparency of agency reviews: (1) the use of a standards-based
approach, (2) the incorporation of public involvement, and (3) the
documentation of review processes and results. In prior work, we
identified some overall strengths or benefits associated with
regulatory process initiatives, including increasing expectations
regarding the analytical support for proposed rules, encouraging and
facilitating greater public participation in rulemaking, and improving
the transparency of the rulemaking process. Because these strengths or
benefits are also relevant and useful for assessing agency
retrospective review initiatives, we considered them in our selection
of assessment criteria for this review. Other practices that could
improve the effectiveness and transparency of reviews may exist and
could be considered when developing retrospective review processes.
However, we believe that the three practices that we assessed are among
the most important. While we did not assess whether agencies employed
these practices all the time, to the extent possible we did seek
documentation and evidence that they were applied. Further, while we
assessed whether agencies employed standards-based approaches in their
retrospective review processes--within the scope of our review--we did
not attempt to assess the quality of such standards. We compared the
strengths and limitations of review processes across agencies, types of
reviews, and phases of the review process. In our more detailed
assessment of a limited sample of retrospective reviews completed
between 2001 and 2006, we also evaluated the use of research and
economic practices and standards. The sample that we assessed was too
small to generalize to all agency retrospective reviews, but this
assessment illustrated some of the strengths and limitations that exist
in the agencies we reviewed.
To address the third objective, we interviewed and obtained
documentation from agency officials and collected responses on the
usefulness of various types of retrospective reviews using the
structured data collection instrument identified in objective one. To
obtain the perspectives of nonfederal parties on the usefulness of
agency reviews, we identified and interviewed 11 parties that represent
a variety of sectors (academia, business, public advocacy, and state
government) and points of view.[Footnote 49] The parties were selected
based on their contributions to prior GAO work on regulatory issues and
our assessment of their recent publications on regulatory issues. The
opinions expressed by agency officials and these nonfederal parties may
be subjective and may not capture the views of all regulatory agencies,
experts, and stakeholders on the usefulness of reviews. However, we
believe that our selection represents a reasonable range of
knowledgeable perspectives on retrospective reviews. We supplemented
our data collection on the outcomes of agency reviews by reviewing the
Federal Register, Unified Agenda, and published dockets and reports.
For mandatory and discretionary reviews, we identified the reported
results of reviews, including whether the review prompted any change to
existing regulations. We also synthesized and described the usefulness
of different types of reviews, as determined by agency officials and
nonfederal parties knowledgeable about regulatory issues.
To address the fourth objective, we interviewed and obtained
documentation from agency officials, collected responses to a
structured data collection instrument, and reviewed existing research
on agency regulatory review initiatives. Further, we solicited
perspectives of the selected oversight and nonfederal parties on the
facilitating factors and barriers to the usefulness of agency reviews.
Based on our analysis of agency responses and documentation, we
described the lessons learned from the different agencies, and the
views of oversight and nonfederal parties on facilitating and impeding
practices. To supplement the lessons identified and to identify the
most prevalent and/or critical facilitators or barriers for the conduct
and usefulness of reviews, as well as options to overcome any barriers
identified, we hosted a joint agency exit conference. During this joint
exit conference, we discussed the collective responses of agencies and
nonfederal parties, and similarities and differences in experiences and
views.[Footnote 50] We found general consensus among the federal
agencies on the points discussed during this exit conference, and we
report on areas where agency and nonfederal parties' views did not align.
We conducted our work from May 2006 through April 2007 in accordance
with generally accepted government auditing standards.
[End of section]
Appendix II: Department of Agriculture Retrospective Reviews:
The three Department of Agriculture (USDA) agencies examined in this
study actively reviewed their existing regulations under both mandatory
and discretionary authorities. The Animal and Plant Health Inspection
Service (APHIS), the Agricultural Marketing Service (AMS), and the Food
Safety and Inspection Service (FSIS) conducted reviews to reduce burden
on small entities under Section 610. USDA conducted discretionary
reviews to respond to industry petitions or informal feedback, to meet
recommendations from committees, to address new risks in regulated
environments, or to update rules due to advances in technology or
scientific knowledge. The agencies use both centralized and
decentralized review processes that rely on the input of outside
parties to inform their reviews.
USDA Retrospective Reviews:
The three USDA agencies examined in this study actively reviewed their
existing regulations under both mandatory and discretionary
authorities, with reviews conducted at their own discretion more common
than mandated reviews. For example, during the 2001 through 2006 period
covered in our review, APHIS reported conducting 18 Section 610 reviews
and completing rulemakings for 9, with 8 others currently in progress.
APHIS also reported that since 2001, it has completed a total of 139
regulatory reviews, which resulted in 139 rule modifications across 12
broad content areas. AMS officials reported initiating 19 and
completing 11 Section 610 reviews since 2001. However, AMS also
reported that it has issued approximately 300 modifications to 30
regulations based on interaction with Industry Committees between
fiscal years 2002 and 2006. In addition, AMS reported that since
2001, the agency has conducted 18 independent assessments of its
commodity promotion programs, as required of AMS under 7 U.S.C. § 7401.
FSIS reported initiating 1 Section 610 review since 2001; however,
during the same time period, the agency has conducted 36 reviews of its
rules as a result of industry petitions. The agencies' officials
reported that discretionary reviews more often resulted in regulatory
changes. Our analysis of the December 2006 Unified Agenda confirmed
that most modifications to the department's regulations were attributed
to reasons under USDA's own discretion rather than because of mandates.
Of the 132 rule changes listed in the Unified Agenda, 113 resulted from
decisions made at agency discretion, while 19 resulted from mandated
actions.
Table 4: Description of USDA Retrospective Reviews:
Review authorities;
Mandatory reviews:
* Section 610;
* AMS: Commodity Promotion and Evaluation under 7 U.S.C. § 7401;
Discretionary reviews:
* Agency discretion;
* USDA internal policy to review significant rules every 5 years.
Frequency;
Mandatory reviews:
* 10 years (Section 610);
* 5 years (AMS Commodity Promotion and Evaluation);
Discretionary reviews:
* As needed based on industry feedback or petition;
* 5 years (USDA internal policy).
Purposes;
Mandatory reviews:
* Reduce economic impact on small entities;
* Reduce complexity of regulation;
* Consolidate duplicative rules;
* Respond to changes over time in technology, economic conditions;
* Evaluate program effectiveness and ensure that the objectives of the
program are being met (AMS Commodity Promotion and Evaluation);
Discretionary reviews:
* Respond to industry petitions, recommendations from committees;
* Advances in technology, science;
* Meet emerging risks (disease, pests);
* Align with international standards;
* Requests from trading partners, states;
* Feedback from regulated communities;
* Respond to new, revised legislation;
* Reduce complexity of regulation;
* Consolidate duplicative rules;
* Respond to changes over time in technology, economic conditions.
General outcomes;
Mandatory reviews:
* APHIS: 9 of 18 reviews have resulted in completed rulemaking
proceedings. Five other proceedings are pending completion;
* AMS: All 11 completed Section 610 reviews have resulted in no
changes;
* FSIS: The only Section 610 review conducted has yet to be reported;
Discretionary reviews:
* APHIS: Rule changes varied in stringency, with some more and some
less stringent;
* FSIS: According to officials, 8 of the 36 petition-driven reviews
resulted in changes, generally to less stringent rules;
* 113 of the 132 rule changes listed in the December 2006 Unified
Agenda resulted from decisions made under agency discretion.
Agency views on;
usefulness of reviews;
Mandatory reviews:
* APHIS officials reported that periodic reviews are less useful
because small modifications are made to regulations on an ongoing
basis;
* AMS officials reported that Section 610 reviews have not identified
issues that were not previously known through other oversight
mechanisms;
Discretionary reviews:
* FSIS officials reported that reviews focus the agency's efforts on
measuring a regulation's effectiveness and determining whether a
particular policy is still consistent with the agency's overall policy
direction.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
USDA Retrospective Review Processes:
The processes employed for review varied by agency, with AMS program
offices conducting reviews of their own regulations, APHIS program
offices conducting reviews in concert with centralized offices within
the agency, and centralized offices within the agency conducting FSIS
reviews. However, all three agencies relied on the input of regulated
communities to inform their processes. As an example of a centralized
approach: APHIS' technical and policy program staff work with the
agency's Policy and Program Development (PPD) unit to conduct reviews,
and PPD works with the Deputy Administrators for each regulatory
program to set regulatory priorities. AMS reviews, by contrast, are
conducted in-house by the program staff that oversees the regulation. All
three agencies reported that they rely on outside parties to inform
their review process. For example, AMS reported that the agency
conducts periodic referenda of regulated growers of fruit and
vegetables to amend agency marketing orders and to identify programs
for discontinuance. APHIS reported that its review decisions are
influenced by ongoing discussions with industry, state and tribal
authorities, and foreign governments regarding setting international
trade standards. APHIS also reported that it has acted on
recommendations made by outside reviews of its programs conducted by
the National Plant Board and the National Association of State
Departments of Agriculture. FSIS reported that it holds industry
listening sessions and public meetings to inform its rulemaking and
affect the day-to-day implementation of regulations. Figure 5 depicts
USDA's general process for regulatory review.
Figure 5: USDA Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix III: Department of Justice Retrospective Reviews:
While the Department of Justice (DOJ) is not primarily a regulatory
agency, during the 2001 through 2006 period covered in our review, DOJ
component agencies have conducted reviews of their existing regulations
under both mandatory review requirements and under their own
discretionary authorities.[Footnote 51] Most DOJ reviews were
discretionary and in response to such drivers as changes in technology
or feedback from regulated entities, among other factors. The
three mandatory reviews conducted by DOJ since 2001 were driven by
separate statutory requirements to review regulations or set
enforceable standards for others to follow. While DOJ has few formal
processes or standards to guide the planning, conduct, and reporting of
its internally conducted discretionary reviews, DOJ followed statutory
standards in the one Section 610 review that GAO evaluated.
DOJ Retrospective Review Activity:
DOJ is not primarily a regulatory agency and officials reported that
most of its primary activities, including antiterrorism, investigation,
and law enforcement do not involve the department's regulatory process.
Officials reported that few of the department's regulations are subject
to Section 610 review, and one official reported that regulatory
review, as a whole, is not a major priority within the agency, compared
to its other functions. However, since 2001 DOJ agencies reported
completing at least 13 reviews of existing regulations. Based on
published documents in the Federal Register or Unified Agenda, 10 of
these reviews were conducted under DOJ's own discretion, while 3
reviews were in response to mandatory review requirements or to comply
with statutory requirements to revise regulations.
The drivers for the discretionary reviews conducted by DOJ included
responding to changes in technology or feedback from regulated
entities, among other factors. For example, FBI officials reported that
the Bureau has reviewed and is revising a rule preventing the FBI from
retaining or exchanging the fingerprints and criminal history record
information for nonserious offenses in the FBI's Fingerprint
Identification Records System. According to the proposed rule change
resulting from this review, the existing regulations were originally
implemented in 1974 and based on the data-processing capabilities of a
manual record-keeping environment. Officials reported that advances in
information technology precipitated a review of these regulations,
which once revised, will enhance the FBI's search capability for
fingerprint and criminal history background checks. DOJ also cited
feedback from regulated entities as an important driver of
discretionary reviews. DEA, for example, reported that the controlled
substance manufacturer and distributor industries requested that DEA
provide an electronic method to satisfy the legal requirements for
ordering Schedule I and II controlled substances, which previously
could only be ordered through a triplicate form issued by DEA.
According to officials, DEA reviewed its regulations and worked with
industry to develop a pilot program to update its system. After notice-
and-comment rulemaking, DEA published a Final Rule revising its
regulations on April 1, 2005.[Footnote 52]
In addition to these reviews, ATF conducted five discretionary reviews
since 2001, including a reorganization of Title 27 in the transition of
ATF functions from the Department of the Treasury to DOJ after the
creation of the Department of Homeland Security.[Footnote 53]
Additionally, OJP conducted two discretionary reviews since 2001 and
the BOP reported that it conducts annual, ongoing reviews of its
Program Statements, many of which correspond with its regulations in
the CFR,
to ensure that they are current.
GAO was able to identify three mandatory regulatory reviews completed
by DOJ since 2001, and the impetuses for these reviews varied. For
example, ATF in 1997 initiated a Section 610 review evaluating the
impact of changes to its fireworks storage and record-keeping
requirements on small entities.[Footnote 54] This review, concluded in
a January 29, 2003, Federal Register notice, certified that the revised
rule will have a minimal economic impact on the explosives industry,
and will no longer have a significant economic impact on a substantial
number of small entities.[Footnote 55] The review also identified other
areas of concern to the public, precipitating further actions. CRT
conducted a review pursuant to Executive Order 12250, which requires
the Attorney General to establish and implement a schedule for the
review of executive branch agencies' regulations implementing various
federal nondiscrimination laws, including the Civil Rights Act of 1964,
among others.[Footnote 56] According to officials, this "Cureton Review
Project" included an evaluation of the regulations of 23 agencies,
including DOJ, which resulted in clarified statutory language to
promote consistent compliance with the various nondiscrimination
statutes.[Footnote 57] In a third review, CRT published an Advance
Notice of Proposed Rulemaking (ANPRM) to update regulations
implementing Title II and Title III of the Americans with Disabilities
Act of 1990 (ADA), including the ADA Standards for Accessible
Design.[Footnote 58] According to the ANPRM, the ADA requires DOJ to
adopt accessibility standards that are ''consistent with the minimum
guidelines and requirements issued by the Architectural and
Transportation Barriers Compliance Board," which were revised in July
2004. DOJ has also reported that it may conduct a Regulatory Impact
Analysis on the revised ADA standards, including a benefit-cost
analysis pursuant to Executive Order 12866, OMB Circular A-4, and the
Regulatory Flexibility Act.
Table 5: Description of DOJ Retrospective Reviews:
Review authorities;
Mandatory reviews:
* Section 610;
* Executive Order 12250 (implementation of federal nondiscrimination
laws);
* Revision of ADA Accessible Design Standards;
Discretionary reviews:
* BOP internal policy to annually review its Program Statements.
Frequency;
Mandatory reviews:
* 10 years (Section 610);
* Executive Order 12250 (Attorney General sets evaluation schedule);
* ADA Standards are revised as needed to enforce standards revised by
the Architectural and Transportation Barriers Compliance Board;
Discretionary reviews:
* As needed, based on the feedback of regulated communities;
* Annually (BOP).
Purposes;
Mandatory reviews:
* Reduce economic impact on small entities;
* Reduce complexity of regulation;
* Consolidate duplicative rules;
* Respond to changes over time in technology, economic conditions;
Discretionary reviews:
* Respond to petitions, feedback from regulated communities, law
enforcement agencies, advisory boards;
* Changes in technology, statutes, standards;
* Ensure that regulations are current (BOP).
General outcomes;
Mandatory reviews:
* In the ATF Section 610 review, ATF certified that the revised rule
will have a minimal economic impact on the explosives industry, and
will no longer have a SEISNOSE. However, the review identified
additional issues for further action;
* CRT's Executive Order 12250 review resulted in clarification of
regulatory language within federal agency programs;
Discretionary reviews:
* ATF reported four discretionary reviews since 2001 which resulted in
rulemaking proceedings.
Agency views on usefulness of reviews;
Mandatory reviews: ATF officials reported that reviews ensure that ATF
is meeting the needs of government, industry, and the law enforcement
community while meeting legislative mandates.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
DOJ Review Process:
Department officials stated that much of DOJ's regulatory review
process was "informally" structured, and without formal procedures and
standards. Professional judgment, officials stated, was used in some
cases in lieu of documented practices. However, a GAO evaluation of the
recent ATF Explosive Materials in the Fireworks Industry review
indicates that DOJ followed the statutorily defined process for its
completion. As required by Section 610, the review must describe (a)
the continued need for the rule; (b) the nature of complaints or
comments received concerning the rule from the public; (c) the
complexity of the rule; (d) the extent to which the rule overlaps,
duplicates, or conflicts with other federal rules and, to the extent
feasible, with state and local governmental rules; and (e) the length
of time since the rule has been evaluated or the degree to which
technology, economic conditions, or other factors have changed in the
area affected by the rule. GAO's evaluation of this proceeding
concluded that ATF addressed the requirements for responding to public
comments, complaints, and the rule's complexity. ATF's analysis was
primarily in response to public comments and a review of its own
experience implementing the rule. In a few cases, ATF responded to
comments by referencing published experts' opinions and scientific
tests. However, ATF provided no overall analysis of the cost of these
storage regulations or of their effectiveness in promoting public
safety, or law enforcement's ability to trace fireworks to their
manufacturer--a specific desired outcome referred to in the
notice.[Footnote 59] Figure 6 depicts the general process for
regulatory review in one ATF Section 610 review.
Figure 6: ATF Section 610 Review, Explosive Materials in the Fireworks
Industry:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix IV: Department of Labor Retrospective Reviews:
During the 2001 through 2006 period covered in our review, agencies
within the Department of Labor (DOL) have actively reviewed their
existing regulations in response to both mandatory and discretionary
drivers. Specifically, the Employee Benefits Security Administration
(EBSA), Occupational Safety and Health Administration (OSHA), Mine
Safety and Health Administration (MSHA), and Employment and Training
Administration (ETA) have conducted various retrospective reviews of
their regulations. The types of reviews--in terms of impetus and
purpose--outcomes of reviews, and processes used to conduct the reviews
varied among the agencies. Specifically, while EBSA has established a
formal and documented regulatory review program, OSHA, MSHA, and ETA
have somewhat less formal review programs, but MSHA and ETA were in the
process of developing more standardized processes. Furthermore, while
all of the agencies reported that their discretionary reviews more
often resulted in subsequent regulatory action, the outcomes of
mandatory reviews varied slightly among the agencies.
DOL Retrospective Review Activity:
All of the DOL agencies within our review reported actively conducting
reviews of their regulations. However, the types of reviews--in terms
of impetus and purpose--and outcomes of reviews varied slightly among
the agencies. All of the DOL agencies reported that they conducted
ongoing reviews of their regulations, at their own discretion. However,
two of the agencies--OSHA and EBSA--also incorporated requirements from
mandatory reviews into these discretionary reviews. Furthermore, EBSA
conducts its discretionary reviews more formally as part of its
Regulatory Review Program. According to documentation that we reviewed
on this program, EBSA formally conducted reviews of its existing
regulations in response to specific developments and/or changes in the
administration of group health, pension, or other employee benefit
programs, changes in technology and industries, and legislation. EBSA
also reviewed regulations in response to identified enforcement
problems or the need to further the agency's compliance assistance
efforts through improved guidance. Furthermore, the review program
incorporates Section 610 reviews as part of the program. While OSHA did
not have a program that was as formalized and documented as EBSA's, the
officials reported and our review of their analyses confirmed that the
agency also incorporated Section 610 criteria into broader review
initiatives that the agency undertook to address informal feedback from
industry, stakeholders, and staff. MSHA and ETA also reported
initiating reviews in response to either stakeholder input, technology
or policy updates, petitions, or internal identification of needed rule
changes. However, the agencies' officials reported that they have not
conducted any Section 610 reviews (which focus on burden reduction)
during the period covered in our review because they have not had any
regulations within the last 10 years that had a SEISNOSE effect.
Outcomes of reviews varied slightly among the agencies. While it was
not possible to account for all of the reviews conducted by all of the
agencies because the agencies did not document some informal reviews,
collectively the agencies reported completing at least 60 reviews since
2001. According to EBSA documentation, the agency completed at least 7
of its 13 formal retrospective reviews, including 4 Section 610
reviews. All of the discretionary reviews resulted in subsequent
regulatory changes, including changes to the regulation, guidance, or
related materials. None of EBSA's Section 610 reviews resulted in
regulatory changes. OSHA completed 4 reviews in response to both
discretionary and Section 610 requirements which resulted in regulatory
changes or changes to guidance documents or related materials.
According to OSHA documentation, 2 of their completed Section 610
reviews and 2 of their Standards Improvement Project Reviews
recommended regulatory changes, including clarifications to standards
or additional outreach or compliance assistance materials. MSHA
officials reported engaging in a 2004 MSHA Strategic Initiative Review
(a review of all Title 30 CFR regulations) and a review conducted
according to an MSHA initiative to improve and eliminate regulations
that were frequently the subject of petitions for modification. Both of
these reviews resulted in changes to regulations. ETA officials
reported that, in 2002, the agency conducted a regulatory cleanup
initiative that resulted in updates to individual regulations and that
ETA has updated individual regulations when the agency's program
offices identified a need to do so through their course of business.
The agencies also reported making regulatory changes based upon
departmentwide regulatory cleanup initiatives in 2002 and 2005-2006,
which the department's Office of the Assistant Secretary for Policy
spearheaded. Additionally, the department completed 42 reviews in
response to Office of Management and Budget (OMB) regulatory reform
nominations from 2001 to 2004, which resulted in subsequent regulatory
action.[Footnote 60]
Table 6: Description of DOL Retrospective Reviews:
Review authorities;
Mandatory reviews:
* Section 610;
Discretionary reviews:
* Agency initiatives in response to technology changes, petitions,
enforcement and compliance issues;
* Department Regulatory Clean Up Initiative (all agencies, including
OSHA, EBSA, and ETA);
* EBSA: Reviews also conducted under the agency's formal Regulatory
Review Program.
Frequency;
Mandatory reviews:
* Every 10 years;
* MSHA and ETA: No reviews conducted under this authority;
Discretionary reviews:
* Discretionary reviews are completed as needed;
* Departmentwide Review Initiatives occurred at least twice;
* EBSA: Regulatory Review Program reviews are conducted annually.
Purposes;
Mandatory reviews:
* Reduce burden on small entities;
* Reduce complexity of regulation;
* Consolidate duplicative rules;
* Respond to changes over time in technology, economic conditions;
Discretionary reviews:
* Effectiveness (for example, performance measurement or other
measurements of efficiency);
* Improved enforcement and compliance;
* Burden reduction.
General outcomes;
Mandatory reviews:
* EBSA: No changes to regulations;
* OSHA: When Section 610 and discretionary reviews were combined,
reviews resulted in some changes;
Discretionary reviews:
* Subsequent regulatory action (e.g., changes to regulations, guidance,
or related materials).
Agency views on usefulness of reviews;
Mandatory reviews:
* EBSA officials reported Section 610 mandatory review requirements are
less comprehensive than their discretionary reviews because they are
limited to regulations with SEISNOSE and focus primarily on
deregulation;
* OSHA officials reported Section 610 mandatory review criteria were
reasonably clear and relevant, while Executive Order 12866 criteria are
less concise;
Discretionary reviews:
* Burden reduction;
* Officials reported that reviews are useful for identifying whether:
(1) the agency is achieving its goals, or (2) regulation or guidance
changes are needed. MSHA also reported that reviews help avoid the
costs associated with reviewing several petitions for a rule that needs
modification.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
DOL Retrospective Review Processes:
The development of review processes for DOL agencies ranged from
processes that were documented and formal with established review
structures and procedures, to informal undocumented review processes
with structures and procedures that were still developing. For example,
EBSA established a formal review program that defined a structure for
reviews, including identification of what resources (staff) would be
involved, criteria that the agency would use to select
and assess regulations, and the method for reporting results. While
OSHA did not have a documented formal review program, the agency
described a somewhat formal structure that it uses to conduct its
reviews. Similarly, ETA officials reported that they recently
established a more formal structure for their review process, including
the creation of a Regulations Unit that will coordinate the development
of regulations for ETA legislative responsibilities and formalize
regulatory procedures within the agency. According to the officials,
the Regulations Unit will establish time frames and/or internal
triggers for reviews to ensure the agency properly reviews and updates
regulations. However, they noted that, given the recent establishment
of this unit, it might take some time to implement these procedures.
MSHA did not appear to have a documented formal review process or
structure for its discretionary and mandatory reviews. However, the
agency reported that it had been soliciting contractors to develop a
more formal process for prioritizing which regulations the agency would
review. Figures 7 and 8 illustrate an example of the
variation in the agencies' review processes. To facilitate sharing
practices, in appendix XI we provide a more detailed description of
practices within EBSA's review process, which was the most formalized
and documented review process that we examined within the scope of our
review.
Figure 7: EBSA Retrospective Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
Figure 8: OSHA Retrospective Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix V: Department of Transportation Retrospective Reviews:
Between 2001 and 2006, Department of Transportation (DOT) agencies
within the scope of our evaluation actively reviewed their existing
regulations under both mandatory and discretionary
authorities.[Footnote 61] The mandatory reviews conducted by DOT
agencies addressed governmentwide, departmentwide, and agency-specific
review requirements. DOT conducted discretionary reviews in response to
formal petitions and informal feedback from the public and in response
to accidents or similar events and changes in specific industries,
technologies, or underlying standards. Additionally, DOT conducted
reviews in response to Office of Management and Budget (OMB) regulatory
reform initiatives as well as a stand-alone initiative to review all
rules under the department's authority. DOT has written policies and
procedures guiding the planning, conduct, and reporting of reviews.
While review processes may vary somewhat within DOT agencies, overall
these agencies follow DOT guidelines in the conduct of their reviews.
DOT Retrospective Review Activity:
DOT has conducted a number of initiatives to systematically review
existing regulations to comply with federal mandates and DOT's own
policies and procedures for regulatory review. In order to satisfy
Section 610 and other review requirements, DOT initiated a 10-year plan
in 1998 to systematically review some of its sections of the Code of
Federal Regulations every year, with the objective of reviewing all of
its regulations over a 10-year cycle. DOT also maintains a
departmentwide review requirement, instituted in 1979, to periodically
review existing regulations to determine whether they continue to meet
the needs for which they originally were designed or whether reviewed
rules should be revised or revoked. More recently, in 2005, acting
under its own discretion, DOT initiated and completed a special stand-
alone regulatory review in which the department sought public comment
on all rules and regulations under DOT's authority. DOT also reviewed
regulations in response to OMB initiatives in 2001, 2002, and 2004,
which solicited nominations from the general public for federal
regulations and guidance documents for reform. The agency completed 61
reviews in response to these reform initiatives, and the department
took subsequent action on 43 of the regulations it reviewed. Overall,
during the 2001 through 2006 period covered in our review, DOT has
reported conducting over 400 reviews of existing regulations to meet
governmentwide review requirements, including those under Executive
Order 12866 on Regulatory Planning and Review, Section 610, and the
Executive Memorandum of June 1, 1998, on Plain Language in Government
Writing.[Footnote 62]
In addition to reviews conducted under departmentwide requirements,
various agencies within DOT have reviewed regulations within the
specific statutes under their purview. For example, since 2001 FAA has
reviewed three regulations pursuant to requirements in the Federal
Aviation Reauthorization Act of 1996. According to agency officials,
these reviews included post-implementation cost-benefit assessments of
three high-cost FAA rules. FMCSA reported that it also reviews any
regulation impacted by the Motor Carrier Act of 1980; the Motor Carrier
Safety Improvement Act; and the Safe, Accountable, Flexible, Efficient
Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Although
not within the time frame for this study, FTA announced in the December
2006 Unified Agenda that it will undertake a review of its regulations
to bring them into conformity with the SAFETEA-LU statute.
In addition to these more formal regulatory review efforts, DOT
officials reported that the department also reviews its existing
regulations at its own discretion as a function of its daily, ongoing
activities. According to officials, such reviews are often the result
of petitions from or consultations with parties affected by DOT
regulations or based on the experience of agency staff members in light
of changes in specific industries, technologies, or underlying
standards. DOT officials said that, for some of their agencies,
reviewing petitions for rulemaking or regulatory waivers is the most
productive way to obtain public input on a review of that rule. An
evaluation of NHTSA's entries in DOT's December 2005 Unified Agenda
indicated 10 rule change proceedings in various stages of completion
which were the result of petitions from regulatory stakeholders. NHTSA
also reported that, since 2001, it has conducted 17 reviews of Federal
Motor Vehicle Safety Standards (FMVSS), including a few studies
evaluating the benefits and costs of various standards. PHMSA reported
that the granting of numerous waivers of a regulation is a particular
signal that new technology or conditions may render that regulation
obsolete or in need of amendment.
Table 7: Description of DOT Retrospective Reviews:
Review authorities;
Mandatory reviews:
* Section 610;
* Executive Order 12866;
Discretionary reviews:
* Responses to petitions, requests for interpretations;
* Responses to changes in specific industries, technologies, or
underlying standards.
Frequency;
Mandatory reviews:
* DOT has a formal program for reviewing all of its regulations over a
10-year cycle. Every year, DOT reviews some of its sections of the CFR
in that 10-year effort;
Discretionary reviews:
* As needed, as a function of daily activities;
* As determined by the office initiating the review, under DOT
departmentwide policy to periodically review existing regulations.
Purposes;
Mandatory reviews:
* Reduce burden on small entities;
* Reduce complexity of regulation;
* Consolidate duplicative rules;
* Respond to changes over time in technology, economic conditions;
Discretionary reviews:
* Meet complaints or suggestions of the public;
* Simplify or clarify language, particularly in response to petitions or
problems evidenced in the enforcement of the regulation;
* Eliminate overlapping, duplicative regulations;
* Eliminate conflicts and inconsistencies in its own regulations or
those of other initiating offices or other agencies;
* Ensure continued relevance of the problem the regulations were
originally intended to solve;
* Address changes in technology, economic conditions, or other factors;
and
* Monitor rules receiving a number of requests for exemption from a
regulation.
General outcomes;
Mandatory reviews:
* According to DOT review summary reports, 102 of the 406 reviews
conducted between 2001 and 2006 resulted in further action, such as
making substantive or plain language amendments as well as identifying
regulations for further study;
* DOT reported 5 rulemakings resulting from Section 610 reviews;
* FAA reported that none of the agency's Section 610 reviews have
resulted in any changes to the regulations;
Discretionary reviews:
* 43 of the 61 DOT rules nominated in the OMB reform initiatives
resulted in subsequent actions;
* According to NHTSA officials, the majority of NHTSA reviews do not
result in changes to regulations because they confirmed that the
regulations were effective;
* According to FAA officials, no changes have been promulgated or
proposed for reviews conducted in response to the Federal Aviation
Reauthorization Act of 1996;
* DOT received and responded to 120 public comments through its 2005
stand-alone review process. DOT identified 36 comments as warranting further
action and 21 as warranting further consideration.
Agency views on usefulness of reviews;
Mandatory reviews:
* DOT officials characterized regulatory reviews as generally valuable
because they ensure the agency focuses on whether the rule is meeting
its objectives, whether changed circumstances warrant revisions or
revocation, or whether the rule is as cost effective as originally
thought;
* DOT officials reported that when formal reviews do not result in
changes, it may be because changes were already made as a result of
informal or discretionary reviews.[A]
Source: GAO analysis of agency data and regulatory statutes.
[A] Informal reviews generally refer to those conducted as a routine
occurrence during daily general operations of an agency, when problems
are identified with existing rules that might warrant further action.
See Eisner and Kaleta at page 7.
[End of table]
DOT Retrospective Review Process:
DOT has written policies and procedures guiding the planning, conduct,
and reporting of reviews. While the processes employed by DOT agencies
may vary somewhat, overall these agencies follow DOT guidelines in the
conduct of their reviews. For example, DOT's Policies and Procedures
provide guidance for prioritizing regulations for review, including the
extent of complaints or suggestions received from the public; the
degree to which technology or economic factors have changed; and the
length of time since the regulations were last reviewed, among other
factors.[Footnote 63] DOT's procedures also provide agencies with
discretion in applying the procedures. For example, NHTSA reported that
it gives highest priority to the regulations with the highest costs,
potential benefits, and public interest, while PHMSA reported that it
gives highest priority to initiatives it deems most likely to reduce
risk and improve safety. Additionally, while DOT officials reported
that DOT considers OMB Circular A-4 on "Regulatory Analysis" as a guide
for cost/benefit analysis of regulatory outcomes, FAA reported that it
uses a set of flexible procedures recommended by an outside consultant
to conduct ex post evaluations of some rules. With regard to public
participation in the review process, Appendix D to the department's
Unified Agenda announces the complete schedule for all reviews,
requests public comments on reviews in progress, and reports the
results of completed reviews. DOT agencies also
pointed out that they regularly interact with stakeholders, such as
regulated industries, consumers, and other interested parties to obtain
feedback on regulations. For example, FAA officials stated that the
agency holds conferences with industry and consumer groups to identify
regulatory issues for review. In terms of the reporting of review
results, DOT publishes brief summaries of completed reviews in Appendix
D of its Unified Agenda. However, agencies may report review results in
other ways. For example, FMCSA documents the results of its Section 610
reviews in an annual internal report, while NHTSA publishes the
technical reports of its reviews in the Federal Register, requesting
public comments on its determinations. Figure 9 depicts DOT's general
process for regulatory review.
Figure 9: Illustration of DOT's Review Program[A]:
[See PDF for image]
Source: GAO analysis of agency data.
[A] This flow chart presents a summary of what DOT describes as its
formal review process, which includes reviews conducted in its 10-year
review plan. This illustration may not convey the actual process used
in a given situation, which may vary to accommodate complexities not
included in this depiction.
[End of figure]
[End of section]
Appendix VI: Consumer Product Safety Commission Retrospective Reviews:
Since 2001, the Consumer Product Safety Commission (CPSC or the
Commission) has systematically reviewed its regulations under its own
discretion, but has not conducted any mandatory reviews because none of
its rules triggered Section 610 or other mandatory review requirements.
Agency officials noted that because of its reliance on voluntary
consensus standards, the agency does not promulgate as many rules as
other regulatory agencies. The primary purpose of
CPSC discretionary reviews is to assess whether the regulations that
CPSC promulgates remain consistent with the objectives of the
Commission. CPSC has created systematic processes for the planning,
conduct, and reporting of its reviews.
Through this process, the Commission prospectively budgets for its
reviews. Because CPSC's review program is so new, the agency has not
completed most of the reviews that it has initiated, but the Commission
has proposed changes to at least two existing regulations. In addition,
the officials reported that their review program has been useful to the
Commission.
CPSC Retrospective Review Activity:
CPSC actively conducted reviews of its existing regulations under its
own discretion. Specifically, the Commission implemented a pilot review
program in 2004, with annual follow-up efforts in 2005 and 2006, which
resulted in the initiation of 14 retrospective reviews. CPSC initiated
this review process partly because of an Office of Management and
Budget (OMB) Program Assessment Rating Tool (PART) recommendation that
the agency develop a plan to systematically review its current
regulations to ensure consistency among them in accomplishing program
goals. The primary purpose of CPSC reviews is to assess the degree to
which the regulations under review remain consistent with the
Commission's program policies and program goals. CPSC also assesses
whether it can streamline regulations to minimize regulatory burdens,
especially on small entities. Officials reported that, because their
review process is so new, they have not yet completed all of the
reviews they have initiated; however, they have completed at least 3 of
the 14 initiated reviews.
Officials reported that, while some of the regulations they reviewed
did not need a revision, they have proposed regulatory changes for two
regulations, including standards for flammability of clothing textiles
and surface flammability of carpets and rugs. They reported that their
reviews could focus on opportunities to either expand or streamline
existing regulations. Thus, their reviews could lead to increases or
decreases in the scope of CPSC regulations. As examples, CPSC officials
reported that during their review of their existing bicycle regulation
they identified that the regulation did not reflect new technology and
materials, and therefore needed to be modified and updated. Conversely,
their review of their cigarette lighter rule revealed that the agency
needed to promote greater compliance and more effective enforcement,
which increased the agency's regulatory oversight. Table 8 provides
additional detail on the CPSC retrospective reviews.
Table 8: Description of CPSC Retrospective Reviews:
Review authorities;
Discretionary reviews:
* Agency-initiated reviews conducted under the agency's formal
regulatory review program, in response to OMB PART recommendation;
* Technology changes, petitions, test lab input, new industry
standards, obsolescence of test equipment.
Frequency;
Discretionary reviews:
* Under the new program, about 4 regulations reviewed per year;
* Other discretionary reviews conducted as needed.
Purposes;
Discretionary reviews:
* Effectiveness in meeting agency goals (such as performance
measurement);
* Burden reduction.
General outcomes;
Discretionary reviews:
* Process new, but completed reviews have led to subsequent regulatory
action (e.g., proposed changes to regulations).
Agency views on usefulness of reviews;
Discretionary reviews:
* Officials reported that reviews are generally useful in identifying
and responding to needed regulatory changes, confirming that some rules
are producing intended results, and improving enforcement. However,
review of regulations is not as prominent for CPSC because of its
reliance on voluntary standards.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
CPSC Review Process:
CPSC established a formal review program that prospectively budgets for
the substantive reviews that the agency will conduct. Officials
reported that they have conducted about four substantive reviews per
year using this process, while still managing other agency priorities.
The process consists of three phases: (1) prioritization and
selection of regulations to substantively review, (2) substantive
review of the selected regulations, and (3) reporting results to the
Commissioners and, for certain reviews, to the public. As part of this
process, CPSC staff prioritize which regulations need review by
considering: (1) which rules have the oldest effective dates, (2) which
rules were adopted under various statutes under CPSC's authority, and
(3) which rules staff identified as good candidates for change (from
their experience working with the regulation). As resources allow, the
agency selects one substantive regulation from each of its statutory
areas (with the exception of the Refrigerator Safety Act), starting
with its earliest regulations.[Footnote 64] As part of this
prioritization process, the agency considers input from CPSC's
technical staff and outside groups. CPSC staff initiates substantive
review of regulations that the Commission chooses for review. In this
process, the agency solicits public comments using the Federal
Register, assesses the comments received, conducts an internal
technical review of the regulation, and reports the results to the
Commissioners. The Commissioners make a policy decision on actions the
agency will take based upon staff recommendations. If the agency
decides to conduct a follow-on activity to update a rule, it
subsequently notifies the public via the Federal Register. For rule
reviews that result in Commission-approved projects for certain
rulemaking activities (such as developing revisions to a rule for
Commission consideration), CPSC makes the briefing packages available
on its Web site. Other rule reviews (such as reviews for which staff
suggests no action) are given to the Commissioners, but are not posted
on the Web site. Figure 10 illustrates the general review process.
Figure 10: CPSC Retrospective Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix VII: Environmental Protection Agency Retrospective Reviews:
During the 2001 through 2006 period covered in our review, program
offices within the Environmental Protection Agency (EPA) conducted
numerous retrospective reviews of EPA's existing regulations and
standards. The mix of reviews conducted by EPA--in terms of
authorities--varied across the agency, but the purposes of these
reviews--effectiveness, efficiency, and burden reduction--were similar
across the agency. EPA's retrospective reviews produced distinct
outcomes. While the agency conducts many reviews under its mandates,
reviews conducted at its own discretion yielded more changes
to existing regulations than mandated reviews. The review processes
within EPA's program offices, though different, typically shared
similar elements in the planning, conduct, and reporting of results.
Overall, EPA reported that its retrospective reviews have proven to be
useful to the agency.
EPA Retrospective Review Activities:
The Office of Air and Radiation (OAR), the Office of Prevention,
Pesticides, and Toxic Substances (OPPTS), the Office of Solid Waste and
Emergency Response (OSWER), and the Office of Water within EPA each
conduct mandatory retrospective reviews under their guiding statutes
and focus the reviews on what is stated in statute or developed by the
agency. Thus, the frequency of mandated reviews varies across EPA's
program offices. For instance, the frequency of reviews
required by the Safe Drinking Water Act (SDWA), conducted by the Office
of Water, ranges from every 3 years to every 7 years, depending on the
review requirement, while the Clean Air Act requires OAR to conduct
reviews ranging from every 3 years to every 8 years. Mandated
reviews, such as those required by agency-specific statutes, mainly
focused on effectiveness, while Section 610 reviews and Office of
Management and Budget (OMB) Regulatory Reform Nominations were focused
on burden reduction. According to EPA officials, mandatory
retrospective reviews have generally resulted in limited or no changes
to regulations, while reviews conducted under discretionary authority
usually resulted in more changes.[Footnote 65] For instance, of the 14
Section 610 reviews conducted by the program offices since 2001, only 1
resulted in change.[Footnote 66] Moreover, OAR noted that most of its
reviews validated the need for the regulation or standard. However,
EPA's review of regulations in response to OMB's Manufacturing
Regulatory Reform initiative resulted in 19 regulatory changes and 19
nonregulatory actions, including the development of regulatory guidance
and reports. In addition, GAO's review of EPA's December 2006 Unified
Agenda entries also revealed that 63 out of 64 rules identified as
changed or proposed for changes were the result of decisions made under
EPA's discretionary authority. Though the use of discretionary
authority produced more rule changes, officials reported that
retrospective reviews, in general, were valuable in (1) determining
whether new information exists which indicates the need for revisions
and (2) enabling the agency to gain new insights about its analytical
methods. In addition, officials noted that retrospective reviews were
useful in determining whether the rule was working as intended and
helping to achieve the agency's or statute's goals.
Table 9: Description of EPA Retrospective Reviews:
Review Authorities;
Mandatory reviews: In response to:
* Clean Air Act;
* Clean Water Act;
* Federal Food, Drug, & Cosmetic Act;
* Federal Insecticide, Fungicide, and Rodenticide Act;
* Comprehensive Environmental Response, Compensation, and Liability
Act;
* Food Quality Protection Act;
* Safe Drinking Water Act;
* Section 610 of the Regulatory Flexibility Act;
* Court decisions;
Discretionary reviews: In response to:
* OMB's Manufacturing Regulatory Reform Nominations;
* Formal petitions;
* Industry changes;
* New scientific studies;
* Informal comments from the public;
* Emergencies or disasters.
Frequency;
Mandatory reviews: Ranges from every 2 years to every 10 years
depending on the review requirement;
Discretionary reviews: Depends largely on the nature of the source.
Purposes;
Mandatory reviews: Effectiveness, burden reduction, and efficiency;
Discretionary reviews: Depends largely on the nature of the source.
General outcomes;
Mandatory reviews: Reviews generally resulted in no changes, a
validation that existing regulations are working. Sometimes reviews
resulted in the development of guidance materials in lieu of a rule
modification;
Discretionary reviews: Reviews generally resulted in changes,
particularly in response to OMB nominations.
Agency views on usefulness of reviews;
Mandatory reviews: Officials reported that the frequency of some
mandated reviews limits the usefulness of the reviews. However, reviews
were useful to determine whether the regulations were working as
intended;
Discretionary reviews: Officials did not differentiate between the
usefulness of discretionary reviews versus mandatory reviews.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
EPA Review Process:
EPA's review process varied by program office and by review
requirement; however, most mandatory and discretionary reviews
contained consistent elements. The four EPA program offices included in
our review perform various functions of the agency that rarely overlap
into other program offices' duties. For example, OAR exclusively
oversees the air and radiation protection activities of the agency,
while the Office of Water solely manages the agency's water quality
activities. These two offices have different guiding statutes that
require them to conduct reviews and, within those statutes, processes
are sometimes outlined for how the agency should conduct the reviews.
Therefore, the processes for these program offices varied. However,
three elements were similar across the offices: formal
or informal notification of the public; involvement of the public in
the conduct of the review, mainly through requests for public comments
and science, risk, or policy assessments of the regulation; and
release of the results to the public, primarily through the Federal
Register and the EPA Web site. In addition, mandatory and discretionary
regulatory reviews that were high profile in nature (e.g., because they
were conducted in response to emergencies, were contentious, or
received heavy attention from the public, Congress, or regulatory
experts) had the aforementioned elements as well as high-level
management attention from the Assistant Administrator of the program
office or the EPA Administrator. For example, the review processes of
the mandatory National Ambient Air Quality Standards and the Lead and
Copper review, which was initiated after elevated levels of lead were
found in the District of Columbia, were defined, documented, and
included extensive public involvement and high-level management
attention. Figure 11 illustrates the general review process for the
different review drivers.
Figure 11: EPA Retrospective Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix VIII: Federal Communications Commission Retrospective Reviews:
The Federal Communications Commission (FCC or the Commission) actively
reviews its existing regulations to meet congressionally mandated
review requirements, and to respond to petitions from regulated
entities and changes in technology and market conditions, under its own
discretionary authority. While FCC's retrospective review processes
vary depending on the review requirement the agency is addressing,
FCC's biennial and quadrennial review processes provide opportunities
for public participation and transparency. According to FCC officials,
the frequency of the biennial review requirement presents staffing
challenges to the agency, while the 10-year requirement for the Section
610 review presents a challenge to the usefulness of this review, as
regulations may already have been modified under other requirements
before the review occurs.
FCC Retrospective Review Activity:
FCC actively reviews its existing regulations to meet congressionally
mandated review requirements and to respond to petitions from regulated
entities and changes in technology and market conditions under its own
discretionary authority. Under the Communications Act, as amended, the
Commission is subject to two agency-specific mandated reviews: (1) the
biennial regulatory review of FCC telecommunications rules,[Footnote
67] and (2) the quadrennial regulatory review of the broadcast and
media ownership rules.[Footnote 68] FCC officials reported that these
reviews are guided by the deregulatory tenor of the Telecommunications
Act, which instructed the Commission to promote competition and reduce
regulation in the telecommunications and broadcast industries. The
purpose of these reviews is to identify rules no longer necessary in
the public interest so that they may be modified or repealed. In the
2002 biennial review, the Commission conducted and reported 89 separate
review analyses of its telecommunications regulations and made more
than 35 recommendations to open proceedings to consider modifying or
eliminating rules. The Commission is also subject to the governmentwide
review requirement to minimize significant economic impact on small
entities under Section 610. FCC has initiated 3 multiyear Section 610
review projects (1999, 2002, 2005), plus 1 single-year review (2006),
issuing public notices listing all rules subject to Section 610 review.
Officials pointed out that these reviews rarely result in rulemaking
proceedings and cited only one proceeding which resulted in the
elimination of obsolete rules as a result of the Section 610
process.[Footnote 69]
In addition to these mandatory requirements, FCC officials reported
that the Commission reviews existing regulations at its own discretion
in response to rapid changes in technology and market conditions and to
petitions from regulated entities. A GAO analysis of the December 2006
Unified Agenda indicated that most of FCC's proposed and final rule
changes for that year were the result of decisions made under FCC's
discretionary authority. Of the 39 rule changes listed in the Unified
Agenda, 33 were the result of decisions made at the Commission's own
discretion, while 6 of those changes were the result of mandated
actions. This informal analysis indicates that, in addition to its
mandatory review requirements, FCC does make efforts to review and
amend regulations under its own discretion.
Table 10: Description of FCC Retrospective Reviews:
Review authorities;
Mandatory reviews:
* Section 610;
* Section 11 of Communications Act;
* Section 202(h) of Telecommunications Act;
Discretionary reviews:
* Agency discretion.
Frequency;
Mandatory reviews:
* 10 years (Section 610);
* 2 years (Section 11);
* 4 years (Section 202(h));
Discretionary reviews:
* As needed, based on industry developments, issues of interest to the
general public.
Purposes;
Mandatory reviews: Section 610 Review:
* Reduce burden on small entities;
* Reduce complexity of regulation;
* Consolidate duplicative rules;
* Respond to changes over time in technology, economic conditions;
Biennial review (Section 11):
* Repeal or modify telecommunications rules that are no longer
necessary in the public interest as a result of meaningful economic
competition;
Quadrennial review (Section 202(h)):
* Repeal or modify broadcast ownership rules determined to be no longer
in the public interest;
Discretionary reviews:
* Respond to petitions for reconsideration, rulemaking, waiver, or
forbearance;
* Changes in technology or economic conditions in regulated sectors;
* Respond to interactions with regulated entities, general public.
General outcomes;
Mandatory reviews: Section 610 Review:
* FCC reported that most reviews result in no changes, and cited only
one proceeding which resulted in the elimination of rules;
Biennial review:
* Most biennial reviews result in some changes to the Commission's
regulations, either to modify or remove rules that are no longer
necessary;
Quadrennial review:
* The 2002 review order would have relaxed most rules under review;
Discretionary reviews:
* Officials stated that the outcomes of reviews varied based upon the
regulation reviewed, purpose of the review, and impetus of the review.
Agency views on usefulness of reviews;
Mandatory reviews:
* Officials reported that review requirements enable FCC to conduct
notice-and-comment reviews of existing regulations more often than it
might otherwise;
Discretionary reviews:
* Officials reported that discretionary reviews enable FCC to
prioritize reviews and provide timely response to industry
developments.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
FCC Review Process:
FCC's retrospective review processes vary somewhat depending on the
review requirement the agency is addressing. However, in its conduct of
the biennial and quadrennial review, FCC follows a notice-and-comment
review process which provides opportunities for public participation
and transparency. For example, in the 2002 biennial review, FCC Bureaus
and Offices issued public notices listing rules for review under their
purview and requesting comments regarding the continued necessity of
rule parts under review. The Bureaus and Offices published Staff
Reports on the FCC Web site summarizing public comments and making
determinations as to whether the Commission should open proceedings to
modify or eliminate any of the reviewed rules. The Commission released
Notices of Proposed Rulemaking, seeking further public comments.
Officials reported that if the Commission modifies or eliminates any
regulations as a result of its proceeding, that decision is announced
in a rulemaking order, which is published in the Federal Register.
Similarly, in the 2006 quadrennial review (which was in process at the
time this report was written) the Commission released a Further Notice
of Proposed Rulemaking (FNPR) and posted a Web page providing
background information and hyperlinks to FCC documents relevant to the
review. The FNPR requests public comment on the media ownership rules
and factual data about their impact on competition, localism, and
diversity. The Commission reported that it will hold six public
hearings in locations around the country and make available for public
comment 10 economic studies commissioned by FCC on issues related to
the media ownership rules.
Despite the opportunities for public participation in these regulatory
reviews, the mandated structure of some review processes presents a
challenge to the usefulness of FCC reviews. For example, according to
an FCC official, the requirement to review the Commission's
telecommunications rules every 2 years forces Bureau staff to be
constantly reviewing regulations. This official reported that the
quadrennial requirement is a more appropriate time period for review,
as it provides greater opportunity for regulatory changes to take hold.
Additionally, an official reported that too much time between reviews
can be problematic. For example, rules that require Section 610 review
every 10 years may have been modified or previously reviewed as part of
an overlapping review requirement or as part of a discretionary review
occurring prior to the 10-year review requirement.
Figure 12: FCC Retrospective Review Processes:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix IX: Federal Deposit Insurance Corporation Retrospective
Reviews:
During the 2001 through 2006 period covered in our review, the Federal
Deposit Insurance Corporation (FDIC) has performed numerous
retrospective reviews of its existing regulations in response to
mandatory authorities, such as Section 610 of the Regulatory Flexibility
Act and the Economic Growth and Regulatory Paperwork Reduction Act of
1996 (EGRPRA), and at its own discretion. The focus of FDIC's reviews
has been on burden reduction, which is part of the agency's strategic
goals. The process that FDIC used to plan, conduct, and report its
reviews was coordinated by a larger organizational body. The
centralized review effort helped to leverage the agencies' resources
and facilitate the regulatory changes recommended as a result of the
EGRPRA reviews.
FDIC Retrospective Reviews:
FDIC, along with members of the Federal Financial Institutions
Examination Council (FFIEC), has examined 131 regulations under
EGRPRA.[Footnote 70] FDIC conducted two Section 610 reviews after 2001,
but before the initiation of the EGRPRA reviews in 2003. Because the
EGRPRA review affected almost all of FDIC's regulations, the agency
subsequently included Section 610 reviews within the EGRPRA review
effort. Also, the agency has conducted discretionary reviews in
response to petitions and external emergencies, such as natural
disasters. For instance, the agency reported reviewing its regulations
to reduce burden for businesses affected by Hurricane Katrina. In doing
so, the agency made 8 temporary regulatory changes to ease the burden
on affected entities. FDIC also made changes to 4 regulatory areas,
which included changes to 3 regulations, as a result of the EGRPRA
reviews. Additionally, GAO's review of the December 2006 Unified
Agenda, indicated FDIC made changes to 5 regulations as a result of
decisions under its own discretion and 4 changes as result of mandates.
FDIC and the other banking agencies also worked with Congressional
staff regarding legislative action as a result of the EGRPRA reviews.
For example, the agencies reviewed over 180 legislative initiatives for
burden relief in 2005. Furthermore, the agencies testified before the
Senate Banking Committee and House Financial Services Committee on a
variety of burden reduction measures and upon request, agency
representatives offered technical assistance in connection with the
development of legislation to reduce burden. Congress later passed and
the President signed the Financial Services Regulatory Relief Act of
2006 on October 13, 2006.
Table 11: Description of FDIC Retrospective Reviews:
Review authorities;
Mandatory reviews:
* EGRPRA;
* Section 610;
Discretionary reviews:
* Formal petitions;
* Emergencies or disasters.
Frequency;
Mandatory reviews:
* Every 10 years for both EGRPRA and Section 610 after adoption;
Discretionary reviews:
* Depends largely on the nature of the source.
Purposes;
Mandatory reviews:
* Both acts focus on reducing burden;
Discretionary reviews:
* Depends largely on the nature of the source.
General outcomes;
Mandatory reviews:
* Reviews generally resulted in changes and recommendations for
legislative action;
Discretionary reviews:
* Reviews generally resulted in changes.
Agency views on usefulness of reviews;
Mandatory reviews:
* Officials reported that these reviews help the agency to accomplish
its goals to reduce burden while also being valuable because there is a
need for regulatory decisions to reflect current realities;
Discretionary reviews:
* Officials did not differentiate between the usefulness of
discretionary reviews versus mandatory reviews.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
FDIC Retrospective Review Process:
FDIC and other financial regulatory agencies that are members of the
FFIEC decided to use the FFIEC as the coordinating body for the EGRPRA
review process because the act affected all of the agencies and the
agencies wanted to: (1) establish a centralized process for selecting,
conducting, and reporting its reviews; and (2) leverage the expertise
and resources of all of the member agencies. EGRPRA required the
agencies to categorize their rules, solicit public comment, and publish
the comments in the Federal Register. The act also required the
agencies to report to Congress no later than 30 days after publishing
the final summarized comments in the Federal Register. The FFIEC
established additional processes for planning, conducting, and
reporting of retrospective reviews conducted under EGRPRA outside of
these specified requirements, such as providing 90-day public comment
periods, holding outreach meetings with regulated entities as well as
consumer groups across the United States, and establishing a Web site
dedicated to the EGRPRA reviews. Within all of the processes developed
by the FFIEC, a high level of management attention was maintained. For
instance, the Director of the Office of Thrift Supervision, who is also a
member of FDIC's Board of Directors, headed the interagency effort. In
this capacity, a political appointee was involved in planning,
conducting, and reporting the reviews. As illustrated by figure 13, the
process involved interagency coordination and review activities within
each individual agency, including FDIC.
Figure 13: FDIC Retrospective Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix X: Small Business Administration Retrospective Reviews:
During the 2001 through 2006 period covered in our review, the Small
Business Administration (SBA) reviewed its existing regulations in
accordance with Section 610 of the Regulatory Flexibility Act and on its
own discretion. While the purpose of the Section 610 reviews was to
reduce burden, the purpose of every discretionary review was to
increase effectiveness. SBA had written procedures to plan, conduct,
and report its Section 610 reviews. However, the agency did not have
written processes to guide planning, conduct, and reporting of
discretionary reviews. Overall, SBA's discretionary reviews have
resulted more often in regulatory changes than reviews mandated by
statute.
SBA Retrospective Review Activities:
Officials reported that SBA has conducted discretionary reviews based
on congressional interest or industry petitions. Specifically,
officials from the HUBZone program indicated that their office receives
attention from Congress about the workings of their regulations,
thereby prompting them to review their existing regulations. In
addition, SBA's Division of Size Standards completed 27 reviews in
response to industry petitions and congressional requests. SBA also
completed 4 Section 610 reviews in 2005. While the purpose of the
Section 610 reviews was to reduce burden, officials from one division
in SBA said that they focused many of their retrospective reviews on
examining the effectiveness of their regulations by evaluating their
progress on outcomes. However, they stated that because some of their
regulations are linked to the regulatory activity of other agencies,
they are not always able to achieve the intended outcome of the
regulation. Of the reviews conducted by SBA, discretionary reviews
yielded more changes to existing regulations than mandated reviews. For
instance, the 4 completed Section 610 reviews produced no changes, while
industry petitions prompted 23 final or proposed changes to regulations.
In addition, GAO's examination of SBA's
December 2006 Unified Agenda entries indicated that 22 rule changes
were the result of the agency's discretionary authority rather than
statutory mandates.
Table 12: Description of SBA Retrospective Reviews:
Review authorities;
Mandatory reviews:
* Section 610;
Discretionary reviews:
* Formal petitions;
* Congressional interest.
Frequency;
Mandatory reviews:
* Every 10 years;
Discretionary reviews:
* Depends largely on the nature of the source.
Purposes;
Mandatory reviews:
* Burden reduction;
Discretionary reviews:
* Effectiveness.
General outcomes of reviews;
Mandatory reviews:
* Reviews generally resulted in limited or no changes. According to the
December 2006 Unified Agenda, SBA made changes to two regulations as a
result of an agency-specific mandate;
Discretionary reviews:
* SBA made changes to 23 out of 27 regulations resulting from
petitions.
Agency views on usefulness of reviews;
Mandatory reviews:
* Officials stated that retrospective reviews are useful because they
provide an opportunity for the public and the agency to revisit rules
to ensure continued effectiveness and achievement of goals and
objectives;
Discretionary reviews:
* Officials did not differentiate between the usefulness of
discretionary reviews versus mandatory reviews.
Source: GAO analysis of agency data and regulatory statutes.
[End of table]
SBA Retrospective Review Process:
SBA's Section 610 Plan in the May 2006 Unified Agenda described
procedures for conducting Section 610 reviews. The plan specifies that
SBA will consider the factors identified in Section 610. The plan also
specifies that the conduct of the review will be performed by the
program office of jurisdiction, which entails reviewing any comments
received from the public, in consultation with the Office of General
Counsel (OGC) and the Office of Advocacy. The document notes that the
program office may contact associations that represent affected small
entities in order to obtain information on impacts of the rules.
Although Section 610 does not require agencies to report the results of
the reviews, SBA reported its results in the Unified Agenda.
Under SBA's standard operating procedures each program office is
responsible for evaluating the adequacy and sufficiency of existing
regulations that fall within its assigned responsibilities. However,
according to the officials, the agency does not have a uniform way to
plan, conduct, and report these discretionary reviews. In general, the
agency conducts reviews in an informal manner; therefore, documentation
does not exist for the procedures or standards used to conduct these
reviews. However, officials described considering these factors to
prioritize their reviews: (1) the level of congressional interest in a
specific review, (2) OGC's input on which rules should be reviewed, and
(3) the number of petitions and appeals SBA has received regarding a
particular rule. Reviews are conducted differently in the various
program offices within SBA. Moreover, the agency described a high
turnover of employees, which makes it important to document SBA reviews
and processes. Currently, it does not appear that the agency documents
its reviews and processes.
Figure 14: SBA's General Retrospective Review Process:
[See PDF for image]
Source: GAO analysis of agency data.
[End of figure]
[End of section]
Appendix XI: Department of Labor's Employee Benefits Security
Administration: Example of a Documented Formal Review:
Employee Benefits Security Administration's (EBSA) retrospective
regulatory review process was the most documented and detailed formal
review process included in our review. According to EBSA officials and
our review of EBSA documentation on its Regulatory Review Program (the
Program), the agency established its program as a continuing and
systematic process that allows the agency to periodically review its
regulations to determine whether they need to be modified or updated.
The Program takes into account technology, industry, economic,
compliance and other factors that may adversely affect a rule's
continued usefulness, viewed with respect to either costs or benefits.
According to program documentation, through the integration of
prescribed review criteria, regulatory reviews conducted under the
Program would also help EBSA to satisfy the Section 610 requirement for
periodic reviews of agency regulations. In addition, the Program
provides information and data that assists EBSA in conducting
regulatory reviews of EBSA regulations in accordance with the
requirements of Executive Order 12866.
EBSA's regulatory review process is conducted annually by a Regulatory
Review Committee (RRC) composed of the Counsel for Regulation, Office
of the Solicitor's Plan Benefits and Security Division (or his
delegate), and the Directors of the following offices (or their
respective delegates): Office of Regulations and Interpretations,
Office of Policy and Research, Office of Enforcement, Office of Health
Plan Standards and Compliance Assistance, and Office of Exemption
Determinations. The Director of Regulations and Interpretations (or his
delegate) chairs the RRC. EBSA's review process consists of three
formal phases: (1) selection of regulations for further review, (2)
substantive review of the selected regulations, and (3) reporting
review results to high-level management and the public.
During phase 1 of EBSA's review process, the selection and
recommendation of regulations for substantive review, the RRC conducts
a preliminary review of EBSA's regulations and assigns a value for the
need for additional review on a scale of 1 (lowest value) to 5 (highest
value), based upon several factors (listed below). This ranking is
based on the RRC's "expert" judgment and knowledge of the regulated
community. For purposes of this type of review, the RRC may consider a
series of closely related regulations as one regulation. The RRC
subsequently compiles the results of the review in chart or outline
form in order to compare the regulations and recommends to the
Assistant Secretary at least three regulations as candidates for review
under the Program. At least one of the three regulations recommended is
subject to review under Section 610. The RRC presents its
recommendations for regulations that need further review in a written
report to the Assistant Secretary, including an explanation of the
reasons for its recommendations. The factors that the RRC considers
when preliminarily reviewing the regulations are:
* whether the regulation is subject to review under the RFA;
* whether the regulation is subject to review under statutory or
Executive Order requirements other than the RFA;
* absolute age of regulation (time elapsed since promulgation);
* time elapsed since regulation was last amended and nature of
amendment (major/minor);
* court opinions adjudicating issues arising under regulation;
* number of EBSA investigations that have found violations of
regulation;
* number of public requests received for interpretation of regulation;
* type(s) of plans affected by the regulation;
* number of plans affected;
* cumulative number of participants and beneficiaries affected by
regulation;
* cumulative amount of plan assets affected by regulation;
* relative difficulty of compliance with regulation for the regulated
entities (complexity, understandability);
* potential for cost burden as compared with intended benefits of the
regulation;
* extent to which development of new technology or industry practice
since promulgation may reduce effectiveness of regulation;
* extent to which legal changes (statutory, regulatory, executive
order) since promulgation of the regulation may affect its validity;
* significance of the regulation with respect to EBSA's goals;
* significance of the regulation with respect to enforcement,
compliance assistance and voluntary compliance efforts; and:
* availability of information pertinent to evaluating the regulation.
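The phase 1 process described above can be summarized in pseudocode form. The sketch below is illustrative only, assuming each regulation has already been scored on the report's 1 (lowest) to 5 (highest) scale; the function name, data shapes, and tie-handling are not EBSA's.

```python
# Illustrative sketch of the RRC's phase-1 selection logic: rank
# regulations by their need-for-review score and recommend the top
# three, ensuring at least one is subject to review under the RFA.
# All names and structures here are hypothetical.

def recommend_candidates(scores, rfa_flags, n=3):
    """scores:    dict mapping regulation -> score in [1, 5]
    rfa_flags: dict mapping regulation -> True if subject to the RFA
    Returns the n regulations recommended for substantive review."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    picks = ranked[:n]
    # Per the Program, at least one recommended regulation must be
    # subject to review under Section 610 of the RFA.
    if not any(rfa_flags.get(r, False) for r in picks):
        for r in ranked[n:]:
            if rfa_flags.get(r, False):
                picks[-1] = r
                break
    return picks
```

In this sketch, when none of the top-scoring regulations is subject to the RFA, the lowest-ranked pick is swapped for the highest-scoring RFA-covered rule, reflecting the Program's requirement that at least one Section 610 candidate be recommended.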
EBSA's review program also outlines detailed criteria for the selection
of regulations to review under Section 610. The program sets a clear
definition of what constitutes a "significant impact on a substantial
number of small entities" for its regulations, a step which, as we have
noted in the past, many agencies have not yet taken.[Footnote 71]
Specifically, the program sets threshold criteria for what constitutes
"significant impact" and "substantial number of entities." GAO has
reported on numerous occasions that the lack of clarity about these
terms is a barrier to agency conduct of reviews and has resulted in
fewer reviews being conducted. Therefore, this step in the review
program appears particularly useful. Under EBSA's approach for
measuring these thresholds, the rules to be reviewed each year are
first subjected to quantitative analysis to determine whether they are
considered to have a significant economic impact on a substantial
number of small entities. For its initial Section 610 reviews, EBSA has
adopted a uniform standard of $25 per plan participant to measure the
discretionary impact of regulations reviewed under Section 610, and
whether it constitutes a "significant economic impact." EBSA's
definition of a small entity as an employee pension or welfare plan
with fewer than 100 participants is grounded in sections 104(a)(2) and
(3) of the Employee Retirement Income Security Act (ERISA),[Footnote
72] which permit the Secretary to prescribe simplified annual reports
for pension and welfare plans with fewer than 100 participants.
Additional details on these definitions and how they were derived can
be found in the agency's Regulatory Review Program guidance.
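EBSA's threshold screening can be illustrated with a short sketch. It uses the two criteria the report describes (a small entity is a plan with fewer than 100 participants; a $25-per-participant impact marks a "significant economic impact"), but the "substantial number" cutoff below is an assumption for illustration; EBSA's actual criterion is set out in its Regulatory Review Program guidance.

```python
# Illustrative screen for whether a rule warrants a Section 610 review
# under EBSA-style thresholds. SMALL_PLAN_LIMIT follows ERISA secs.
# 104(a)(2) and (3); the substantial_share default is a hypothetical
# placeholder, not EBSA's published criterion.

SMALL_PLAN_LIMIT = 100          # participants; plans below are "small"
IMPACT_PER_PARTICIPANT = 25.0   # dollars; EBSA's uniform standard

def needs_section_610_review(plans, substantial_share=0.2):
    """plans: list of (participants, cost_per_participant) tuples.
    Returns True if a substantial share of small plans faces a
    significant per-participant impact."""
    small = [(p, c) for p, c in plans if p < SMALL_PLAN_LIMIT]
    if not small:
        return False
    affected = sum(1 for _, c in small if c >= IMPACT_PER_PARTICIPANT)
    return affected / len(small) >= substantial_share
```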
During phase two of EBSA's review process, the substantive review of
regulations, the RRC substantively analyzes each regulation selected by
the Assistant Secretary, and considers the following factors, among
others, in determining whether the agency should amend, rescind, or
retain a regulation:
* whether there appears to be a continued need for the regulation;
* whether the regulation has been the subject of complaints or comments
from the public and the nature of those complaints;
* whether the regulation overlaps, duplicates, or conflicts with other
federal statutes or rules or with nonpreempted state or local statutes
or rules;
* whether the regulation is overly complex and could be simplified
without impairing its effectiveness;
* whether the regulation may be based on outdated or superseded
employment, industrial, or economic practices or assumptions and
whether participants and/or beneficiaries of employee benefit plans may
be exposed to harm as a result;
* whether the regulation may impose significant economic costs on
regulated entities and whether the benefit(s) or purpose(s) of the
regulation could be achieved as effectively through an alternative
regulatory approach that would impose less economic burden on regulated
industries;
* whether an alternative regulatory approach that does not increase the
compliance burden for regulated industries could better serve the
purpose(s) of the regulation or provide better protection(s) to
participants and beneficiaries of employee benefit plans; and:
* whether it would be in the public interest to initiate particular
actions (e.g., contracting a research study, promulgating a Request for
Information, conducting a public hearing) within the authority of EBSA
to develop information or expertise pertinent to the regulation and
relevant to consideration of the above issues.
In phase 3 of the review process, reporting review results, the RRC
provides the Assistant Secretary with a report that includes (1) a
summary of the RRC's conclusions concerning its evaluation of the
regulation, based upon the factors described above; and (2)
recommendations to the Assistant Secretary concerning regulatory
actions that could be taken regarding the regulation. According to
Program documentation, the recommendations could include, but are not
limited to, issuing a Request for Information to supplement EBSA's
knowledge concerning the effect of the regulation, developing a
proposal to amend or withdraw a regulation, or retaining the regulation
in its current form. If the RRC recommends changes to a rule, the
agency analyzes these options in accordance with Executive Order 12866,
RFA, or other requirements, and publishes determined changes for public
comment in the Federal Register, typically as a Notice of Proposed
Rulemaking. If the RRC recommends issuing a Request for Information to
supplement EBSA's knowledge concerning the effect of the regulation,
the agency publishes this Request for Information in the Federal
Register, before it issues a notice for proposed rulemaking.[Footnote
73] (For an illustration of this process, see fig. 7 in app. IV.)
[End of section]
Appendix XII: Comments from the Small Business Administration Office of
Advocacy:
SBA:
Office of Advocacy:
www.sba.gov/advo:
Advocacy: the voice of small business in government:
By Electronic Mail:
June 21, 2007:
Mr. Mathew Scire:
Director, Strategic Issues:
U.S. Government Accountability Office:
Mail Stop 2440C:
Room 2912:
441 G Street, NW:
Washington, D.C. 20548-0001:
RE: GAO's Examination of Retrospective regulatory reviews:
Dear Mr. Scire:
The Office of Advocacy (Advocacy) has reviewed the Government
Accountability Office's draft report, Reexamining Regulations:
Opportunities Exist to Improve Effectiveness and Transparency of
Retrospective Reviews. We believe that the draft report is well-
researched and accurately portrays federal agencies' experiences with
retrospective rule reviews.
Our only substantive comment on the draft report relates to the final
section, "Recommendations for Executive Action," in which GAO
recommends on page 55 that "the Administrator of the U.S. Small
Business Administration instruct the Chief Counsel for Advocacy to take
the following sec en actions." Because the Office of Advocacy is an
independent office, it would he inappropriate for the SBA Administrator
to instruct the Chief Counsel to take a particular action. Instead, we
recommend that GAO simply "recommend that the Administrator of the
Office of Management and Budget instruct the Administrator of the
Office of Information and Regulatory Affairs, together with the Chief
Counsel for Advocacy, to take the following seven actions."
Advocacy previously communicated to GAO that we are in the process of
developing supplemental guidance to federal agencies on how to conduct
retrospective reviews of existing regulations under section 610 of the
Regulatory Flexibility Act. An initial draft of this guidance, which
is being circulated for informal interagency review and comment,
is attached. This guidance is intended to supplement our May 2003
guidance, "How to Comply with the Regulatory Flexibility Act,"[Footnote
74] and covers several aspects of section 610's requirements, such as
the timing, scope, and purpose of a periodic rule review, and when
other retrospective reviews can be considered functionally equivalent
to a section 610 review.
Thank you for giving us the opportunity to review the draft report and
share our views. Please do not hesitate to call me or Keith Holman
(keith.holman@sba.gov or (202) 205-6936) if we can be of further
assistance.
Sincerely,
Signed by:
Shawne C. McGibbon:
Deputy Chief Counsel for Advocacy:
Enclosure:
[This is the initial draft of supplemental guidance on compliance with
section 610 of the Regulatory Flexibility Act. This guidance will be
issued by the Office of Advocacy following interagency review and
receipt of comments. Input from other agencies will help Advocacy in
developing this guidance.]
How to Comply with Section 610 of the Regulatory Flexibility Act: A
Guide for Federal Agencies:
Introduction:
Section 610 of the Regulatory Flexibility Act (RFA)[Footnote 75]
requires federal agencies to review regulations that have a significant
impact on a substantial number of small entities within 10 years of
their adoption as final rules. This review is intended to assess the
impact of existing rules on small entities and to determine whether the
rules should be continued without change, or should be amended or
rescinded (consistent with the objectives of applicable statutes).
In practice, compliance with section 610's retrospective review
requirement has varied substantially from agency to agency.[Footnote
76] Although the Office of Advocacy of the U.S. Small Business
Administration (Advocacy) issued guidance in 2003 on how to comply with
the RFA, including section 610,[Footnote 77] further clarification is
warranted. The following provides Advocacy's interpretation of section
610 of the RFA and answers common questions about conducting
retrospective reviews of existing regulations.
The text of 5 U.S.C. § 610, Pub. L. No. 96-354, 94 Stat. 1164 (1980):
§ 610. Periodic review of rules:
(a) Within one hundred and eighty days after the effective date of this
chapter, each agency shall publish in the Federal Register a plan for
the periodic review of the rules issued by the agency which will have a
significant economic impact upon a substantial number of small
entities. Such a plan may be amended by the agency at any time by
publishing the revision in the Federal Register. The purpose of the
review shall be to determine whether such rules should be continued
without change, or should be amended or rescinded, consistent with the
stated objectives of applicable statutes, to minimize any significant
economic impact of the rules upon a substantial number of such small
entities. The plan shall provide for the review of all such agency
rules existing on the effective date of this chapter within ten years
of the publication of such rules as the final rule. If the head of the
agency determines that completion of the review of existing rules is
not feasible by the established date, he shall so certify in a
statement published in the Federal Register and may extend the
completion date by one year at a time for a total of not more than five
years.
(b) In reviewing rules to minimize any significant economic impact of
the rule on a substantial number of small entities in a manner
consistent with the stated objectives of applicable statutes, the
agency shall consider the following factors:
(1) the continued need for the rule;
(2) the nature of complaints or comments received concerning the rule
from the public;
(3) the complexity of the rule;
(4) the extent to which the rule overlaps, duplicates or conflicts with
other Federal rules, and, to the extent feasible, with State and local
governmental rules; and
(5) the length of time since the rule has been evaluated or the degree
to which technology, economic conditions, or other factors have changed
in the area affected by the rule.
(c) Each year, each agency shall publish in the Federal Register a list
of the rules which have a significant economic impact on a substantial
number of small entities, which are to be reviewed pursuant to this
section during the succeeding twelve months. The list shall include a
brief description of each rule and the need for and legal basis of such
rule and shall invite public comment upon the rule.
Legislative history relating to section 610.
Statements made during floor debate on the Regulatory Flexibility Act
in 1980 suggest that Congress meant section 610 to be a tool for
agencies to periodically re-examine their rules in light of changing
circumstances and address increased regulatory burdens vis-a-vis small
entities.[Footnote 78] Similarly, the section-by-section analysis of
the periodic review provision of S. 299, which became the RFA, notes
that the elements of a section 610 review mirror the evaluative factors
in President Carter's Executive Order 12,044, Improving Government
Regulations.[Footnote 79] Pursuant to that Executive Order, President
Carter issued a Memorandum to the Heads of Executive Departments and
Agencies in 1979, further instructing federal agencies:
As you review existing regulatory and reporting requirements, take
particular care to determine where, within statutory limits, it is
possible to tailor those requirements to fit the size and nature of the
businesses and organizations subject to them.[Footnote 80]
This view was also reflected in Advocacy's 1982 guidance explaining the
RFA, which stated that:
The RFA requires agencies to review all existing regulations to
determine whether maximum flexibility is being provided to accommodate
the unique needs of small businesses and small entities. Because
society is not static, changing environments and technology may
necessitate modifications of existing, anachronistic regulations to
assure that they do not unnecessarily impede the growth and development
of small entities.[Footnote 81]
Put simply, the objective of a section 610 review is like the goal of
most other retrospective rule reviews[Footnote 82]: to determine whether
an existing rule is actually working as it was originally intended and
whether revisions are needed. Has the problem the rule was designed to
address been solved? Are regulated entities (particularly small
entities) able to comply with the rule as expected? Are the costs of
compliance in line with the agency's initial estimates? Are small
businesses voicing continuing concerns about their inability to comply?
The section 610 review is an excellent way to address these questions.
Scope of the review: What should be included?
At a minimum, once an agency has determined that an existing rule now
has a significant economic impact on a substantial number of small
entities, the agency's section 610 review should address each of the
five elements listed in section 610(b)(1)-(5):
* Whether or not there is a continuing need for this rule;
* Whether the public has ever submitted comments or complaints about
this rule;
* The degree of complexity of this rule;
* Whether some other federal or state requirement accomplishes the same
regulatory objective as this rule; and:
* The length of time since the agency has reviewed this rule, and/or
the extent to which circumstances have changed which may affect
regulated entities.
Particular attention should be paid to changes in technology, economic
circumstances, competitive forces, and the cumulative burden faced by
regulated entities. Has the impact of the rule on small entities
remained the same? Even if an agency was originally able to properly
certify under section 605 of the RFA that a rule would not have a
significant economic impact on a substantial number of small
entities,[Footnote 83] changed conditions may mean that the rule now
has a significant impact and therefore should be reviewed under section
610. If there is evidence (such as new cost or burden data) that a rule
is now having a significant economic impact on a substantial number of
small entities, including small communities or small non-profit
associations, the RFA requires the agency to conduct the section 610
review.
Section 610(b) requires an agency to evaluate and minimize "any
significant economic impact of a rule on a substantial number of small
entities." The analysis best suited for a section 610 review is
similar to that required in an Initial Regulatory Flexibility Analysis
(IRFA).[Footnote 84] Agencies certainly have the discretion to place
significant weight
on other relevant factors besides the types of economic data required
by an IRFA. These other factors include an agency's experience in
implementing the rule, as well as the views expressed over time by the
public, regulated entities, and Congress. With the benefit of actual
experience with a rule, the agency and other interested parties are in
a good position to evaluate potential improvements to the rule.
Particular attention should be paid to such factors as unintended
market effects and market distortions, unusually high firm mortality
rates in specific industry sub-sectors, and widespread noncompliance
with reporting and other "paperwork" requirements. Thus, the focus of
the review should go beyond merely ensuring that regulatory
requirements are expressed in plain language and that paperwork can be
filed electronically. The analysis should be aimed at understanding and
reducing unnecessary burdens that impact small entities.
The section 610 analysis must be based on facts. To the extent that an
agency relies on specific data to reach a conclusion about the
continuing efficacy of a rule, the agency must be able to provide that
data. Similarly, the agency must be prepared to explain any assumptions
that it makes in the analysis. The review process must not be a "black
box" that stakeholders cannot see or understand.
Timing of the review: When does the agency have to start and finish?
The language of section 610 specifies that the review should take place
no more than 10 years after a rule is promulgated, but an agency can
choose to review a rule earlier, before 10 years have elapsed. While it
is recommended that agencies gain some experience with a rule before
undertaking a retrospective review, the review may take place at any
time during the 10-year period. If an agency substantially revises a
rule after its initial promulgation, the 10-year trigger for the
section 610 review may properly be delayed to correspond to the
revision date. However, minor revisions to a rule would be inadequate
as a basis for delaying the review of the rule. If there is confusion
regarding whether a revision is minor or major, an agency should seek
input from Advocacy. The RFA allows an agency to delay the initiation
of a section 610 review by one-year increments, up to a total of five
years.
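The timing rules above reduce to simple arithmetic: the review clock starts at promulgation (or resets on a substantial revision), runs 10 years, and may be extended one year at a time up to five years total. The sketch below is illustrative; the function name and calling convention are not from the statute.

```python
# Illustrative computation of a Section 610 review deadline under the
# RFA timing rules described above: 10 years from promulgation (or from
# a substantial revision, which resets the clock), extendable in
# one-year increments to a maximum of five extra years.

def section_610_deadline(promulgated, substantial_revision=None,
                         extensions=0):
    """promulgated: year the final rule was published
    substantial_revision: year of a substantial revision, if any
    extensions: number of one-year extensions certified (0 to 5)
    Returns the year by which the review must be completed."""
    if not 0 <= extensions <= 5:
        raise ValueError("RFA permits at most five one-year extensions")
    start = substantial_revision if substantial_revision else promulgated
    return start + 10 + extensions
```

For example, a rule finalized in 1995 and never substantially revised would ordinarily be due for review by 2005; a substantial revision in 2000 would push the deadline to 2010; minor revisions would not.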
Section 610 does not specifically set a limit on the amount of time for
a rule review. Some agencies have reported that they spend more than a
year on each section 610 review. It is within an agency's discretion to
determine how much time it needs to spend on retrospective rule
reviews. Advocacy does not anticipate, however, that a section 610
review should require substantially more time to complete than an
Initial Regulatory Flexibility Analysis. As long as an agency addresses
the required elements of a section 610 review and provides an adequate
factual basis for its conclusions, Advocacy believes that many section
610 analyses are capable of completion within a shorter timeframe.
Agencies may also wish to take advantage of the opportunity afforded in
section 605(c) of the RFA to consider a series of "closely related
rules" as one rule for retrospective review purposes. An agency can
accomplish a programmatic or comprehensive section 610 review of
closely related rules, satisfying the requirements of the RFA while
minimizing the resources needed for the review.
How should agencies communicate with interested entities about section
610 reviews they are conducting?
Section 610(c) of the RFA requires agencies to publish in the Federal
Register a list of the rules which they plan to review in the upcoming
year. This listing requirement is intended to give small entities early
notice of the section 610 reviews so that they will be ready and able
to provide the agency with comments about the rule under review. As a
practical matter, however, agencies often give stakeholders no other
information about the ongoing status of a section 610 review, what
factors an agency is considering in conducting the review, how comments
can be submitted to the agency, or the factual basis on which the
agency made its section 610 review findings.
At a minimum, agencies should develop a mechanism for communicating
with interested entities about the section 610 reviews that they are
conducting, as well as those they have completed. Experience has shown
that the Federal Register is not necessarily the most effective tool
for conveying up-to-date information about the status of ongoing
retrospective reviews. This information may be better communicated via
an agency website or other electronic media, and should inform
interested parties of their ability to submit comments and the agency's
commitment to consider those comments. Several agencies already utilize
web-based communications as an outreach tool during section 610
reviews;[Footnote 85] every agency should achieve at least this level
of accessibility.
Insights about an existing regulation received from regulated entities
and other interested parties who live with the rule should be a key
component of a retrospective rule review. By making the review process
transparent and accessible, agencies are more likely to identify
improvements that will benefit all parties at the conclusion of the
review.
Can other agency retrospective rule reviews satisfy the section 610
requirement?
Agencies that undertake retrospective rule reviews for other reasons
may satisfy the section 610 requirement, as long as the rule review is
functionally equivalent. For example, reviews of regulations conducted
pursuant to the Office of Management and Budget's 2002
publicly-nominated rule reform process[Footnote 86] or OMB's
manufacturing rule reform process[Footnote 87] would qualify as section
610 reviews. Similarly, retrospective reviews that agencies undertook
of their regulatory programs because of complaints from regulated
entities would likely qualify as section 610 reviews, as long as the
review includes the minimum elements required by section 610 and the
agency adequately communicates with stakeholders.
Examples. In Advocacy's view, what recent retrospective rule reviews
have been successful?
* Federal Railroad Administration's Section 610 Review of Railroad
Workplace Safety - On December 1, 2003, the Department of
Transportation's Federal Railroad Administration completed a section
610 review of its railroad workplace safety regulations. After
determining that the workplace safety regulations had a significant
economic impact on a substantial number of small entities, the FRA
examined the rules in light of section 610's review elements. The FRA
provided a good description of its analysis of the workplace safety
regulations under each review element and the agency's conclusions. See
http://www.fra.dot.gov/downloads/safety/railroad_workplace_safety.pdf:
* EPA's RCRA Burden Reduction Initiative - As a result of public
nominations for reforms to the Environmental Protection Agency's waste
management program under the Resource Conservation and Recovery Act
(RCRA), EPA evaluated the program and identified duplicative
requirements, such as forcing filers to submit reports to multiple
locations when one location is adequate. By reducing or eliminating
these procedures after public notice and comment, EPA enabled regulated
entities to save up to $3 million per year while preserving the
protections of the RCRA program. The retrospective review was
successful because it involved a detailed review of the program's
requirements and their costs, based on years of practical experience.
The agency considered technical changes such as computerization that
have made some of the older paperwork requirements redundant, and found
ways to modernize the program to reflect current realities. See 71 Fed.
Reg. 16,862 (April 4, 2006).
* OSHA Excavations Standard - In March 2007, the Occupational Safety
and Health Administration (OSHA) completed a section 610 review of its
rules governing excavations and trenches. These standards had been in
place since 1989, and were designed to ensure that trenches do not
collapse on workers and that excavated material does not fall back into
a trench and bury workers. In the review, OSHA did a good job of
seeking public input on how and whether the rule should be changed.
While the agency ultimately decided that no regulatory changes to the
standard were warranted, it did determine that additional guidance and
worker training would help continue the downward trend in deaths and
injuries from trench and excavation work. OSHA concluded that its
current Excavations standard had reduced deaths from approximately 90
per year to about 70 per year. See 72 Fed. Reg. 14,727 (March 29,
2007).
* FCC Section 610 Review of 1993-1995 Rules - In May 2005, the Federal
Communications Commission undertook a section 610 review of rules the
Commission adopted in 1993, 1994, and 1995 which have, or might have, a
significant economic impact on a substantial number of small entities.
The FCC solicited public comment on the rules under review, explained
the criteria it was using to review the rules, and gave instructions
where to file comments. This approach was transparent because the
agency allowed adequate time for comments (three months) and gave
interested parties sufficient information to prepare useful comments.
See 70 Fed. Reg. 33,416 (June 8, 2005).
The Office of Advocacy is ready to assist agencies that are planning a
retrospective review of their regulations to ensure that the review
fully meets the requirements of section 610. Discussions with the
Office of Advocacy are confidential interagency communications, and the
Advocacy staff is ready to assist you. For more information about this
guidance, or for other questions about compliance with section 610,
please contact Advocacy at (202) 205-6533.
[End of section]
Appendix XIII: GAO Contact and Acknowledgments:
GAO Contact:
Mathew J. Scire, Director, Strategic Issues (202) 512-6806,
sciremj@gao.gov.
Acknowledgments:
Tim Bober, Assistant Director, and Latesha Love, Analyst-in-Charge,
managed this assignment. Other staff who made key contributions to this
assignment were Matt Barranca, Jason Dorn, Tim Guinane, Andrea Levine,
Shawn Mongin, Bintou Njie, Joe Santiago, Stephanie Shipman, Michael
Volpe, and Greg Wilmoth.
FOOTNOTES
[1] Pub. L. No. 96-354, 94 Stat. 1164, 1169 (Sept. 16, 1980) codified
at 5 U.S.C. § 610. These reviews are referred to as Section 610
reviews.
[2] There is no one standard term or definition for the variety of
activities that might be considered retrospective regulatory reviews.
In different contexts, these have been referred to as look-backs, ex
post (postregulation) studies, retrospective studies, validation
studies, and sometimes just as "reviews."
[3] See, for example, GAO, Regulatory Reform: Prior Reviews of Federal
Regulatory Process Initiatives Reveal Opportunities for Improvements,
GAO-05-939T (Washington, D.C.: July 27, 2005), and Environmental
Protection: Assessing the Impacts of EPA's Regulations Through
Retrospective Studies, GAO/RCED-99-250 (Washington, D.C.: Sept. 14,
1999).
[4] For purposes of this report, we define mandatory reviews as
retrospective reviews that agencies conducted in response to
requirements in statutes, executive orders, or executive branch
directives. We define discretionary reviews as reviews that agencies
undertook based upon their own initiative.
[5] We report findings from the following subagencies in the body of
our report: USDA's Agricultural Marketing Service (AMS), Animal and
Plant Health Inspection Service (APHIS), and Food Safety and Inspection
Service (FSIS); Department of Justice's Bureau of Alcohol, Tobacco,
Firearms, and Explosives (ATF) and Drug Enforcement Administration
(DEA); Department of Labor's Employee Benefits Security Administration
(EBSA), Occupational Safety and Health Administration (OSHA), Mine
Safety and Health Administration, and Employment and Training
Administration (ETA); and the Department of Transportation's Federal
Aviation Administration (FAA) and National Highway Traffic Safety
Administration (NHTSA). Additional information on these and other
subagencies included in our review is reported in appendices II-XI.
[6] The term "independent regulatory agencies" refers to the boards and
commissions identified as such in the Paperwork Reduction Act (44
U.S.C. § 3502(5)), including CPSC, FCC, and FDIC. "Independent
agencies" refers to agencies that answer directly to the President but
are not part of Cabinet departments. Some statutory regulatory
requirements apply to Cabinet departments and independent agencies,
while others apply to those agencies as well as the independent
regulatory agencies. Unless otherwise indicated, executive orders in
this report apply to agencies other than those considered to be
independent regulatory agencies, although the independent regulatory
agencies may be asked by the Office of Management and Budget (OMB) to
voluntarily comply.
[7] The Congressional Review Act (CRA) (5 U.S.C. § 801) requires
agencies to file final rules with both Congress and GAO before the
rules can become effective. To compile information on all the rules
submitted to us under CRA, GAO established a database and created a
standardized submission form to allow more consistent information
collection. The Federal Rules Database is publicly available at
http://www.gao.gov under Legal Products.
[8] In prior work, we identified overall strengths or benefits
associated with regulatory process initiatives, including: (1)
increasing expectations regarding the analytical support for proposed
rules, (2) encouraging and facilitating greater public participation in
rulemaking, and (3) improving the transparency of the rulemaking
process. Because these strengths or benefits are also relevant and
useful for assessing agency retrospective review initiatives, we
considered them in our selection of evaluation criteria. Other
practices that could improve the effectiveness and transparency of
reviews may exist and could be considered when developing retrospective
review processes. However, we believe that the three practices that we
assessed are among the most important.
[9] Section 610 requires agencies to consider: (1) the continued need
for the rule; (2) the nature of complaints or comments received
concerning the rule from the public; (3) the complexity of the rule;
(4) the extent to which the rule overlaps, duplicates, or conflicts
with other federal rules, and, to the extent feasible, with state and
local government rules; and (5) the length of time since the rule has
been evaluated or the degree to which technology, economic conditions,
or other factors have changed in the area affected by the rule.
[10] Outside of DOT, we identified 88 mandatory reviews completed by
our selected agencies. DOT has an agency policy that requires its
agencies to conduct reviews of all its regulations in the Code of
Federal Regulations (CFR) to ensure the department's compliance with
Section 610 and other mandatory requirements. DOT completed 406 reviews
under this policy between 2001 and 2006.
[11] GPRA (Pub. L. No. 103-62, 107 Stat. 285 (Aug. 3, 1993)) required
federal agencies to develop strategic plans with long-term, outcome-
oriented goals and objectives, annual goals linked to achieving the
long-term goals, and annual reports on the results achieved. See GAO,
Results-Oriented Government: GPRA Has Established a Solid Foundation
for Achieving Greater Results, GAO-04-38 (Washington, D.C.: Mar. 10,
2004).
[12] PART is a standard series of questions meant to serve as a
diagnostic tool, drawing on available program performance and
evaluation information to form conclusions about program benefits and
recommend adjustments that may improve results. See GAO, Program
Evaluation: OMB's PART Reviews Increased Agencies' Attention to
Improving Evidence of Program Results, GAO-06-67 (Washington, D.C.:
Oct. 28, 2005).
[13] See, for example, GAO, United States Government Accountability
Office: Supporting the Congress through Oversight, Insight, and
Foresight, GAO-07-644T (Washington, D.C.: Mar. 21, 2007), and 21st
Century Challenges: Reexamining the Base of the Federal Government, GAO-
05-325SP (Washington, D.C.: February 2005).
[14] See GAO, Forces That Shape America's Future: Themes from GAO's
Strategic Plan 2007-2012, GAO-07-467SP (Washington, D.C.: March 2007).
[15] The underlying statutes also play an important role and contribute
to differences in agencies' regulatory activities. See, for example,
GAO, Chemical Risk Assessment: Selected Federal Agencies' Procedures,
Assumptions, and Policies, GAO-01-810 (Washington, D.C.: Aug. 6, 2001).
[16] See, for example, 42 U.S.C. §§ 7411(b)(1)(B), 7412(d)(6).
[17] Executive Order 12866 of September 30, 1993, as amended by
Executive Order 13258 of February 26, 2002, and Executive Order 13422
of January 18, 2007, remains in effect.
[18] OIRA has primary responsibility for implementing Executive Order
12866, reviewing and coordinating regulatory activities of Cabinet
departments and independent agencies, and issuing guidance on the
regulatory review process. SBA's Office of Advocacy monitors and
provides guidance regarding the compliance of all agencies, including
independent regulatory agencies, with the RFA.
[19] See U.S. Small Business Administration, Office of Advocacy, A
Guide for Government Agencies: How to Comply with the Regulatory
Flexibility Act (Washington, D.C.: May 2003).
[20] Other studies about retrospective reviews of federal regulations
have also recognized the potential usefulness of such evaluations and
identified problems associated with effectively reviewing existing
regulations. See, for example: Neil R. Eisner and Judith S. Kaleta,
Federal Agency Reviews of Existing Regulations, 48 Admin. L. Rev.
(1996) pp. 139-174; Office of Management and Budget, Validating
Regulatory Analysis: 2005 Report to Congress on the Costs and Benefits
of Federal Regulations and Unfunded Mandates on State, Local, and
Tribal Entities (Washington, D.C.: 2005); and Michael R. See, Willful
Blindness: Federal Agencies' Failure to Comply with the Regulatory
Flexibility Act's Periodic Review Requirement--and Current Proposals to
Invigorate the Act, 33 Fordham Urb. L.J. (2006) pp. 1199-1255.
[21] See GAO, Environmental Protection: Assessing the Impacts of EPA's
Regulations Through Retrospective Studies, GAO/RCED-99-250 (Washington,
D.C.: Sept. 14, 1999).
[22] See, for example, GAO, Regulatory Flexibility Act: Congress Should
Revisit and Clarify Elements of the Act to Improve Its Effectiveness,
GAO-06-998T (Washington, D.C.: July 20, 2006).
[23] GAO, Managing for Results: Regulatory Agencies Identified
Significant Barriers to Focusing on Results, GAO/GGD-97-83 (Washington,
D.C.: June 24, 1997).
[24] See GAO-05-939T.
[25] See GAO, Economic Performance: Highlights of a Workshop on
Economic Performance Measures, GAO-05-796SP (Washington, D.C.: July 18,
2005); Unfunded Mandates: Views Vary About Reform Act's Strengths,
Weaknesses, and Options for Improvement, GAO-05-454 (Washington, D.C.:
Mar. 31, 2005); GAO-05-939T ; and GAO/RCED-99-250.
[26] Section 402 of the Telecommunications Act of 1996 amended the
Communications Act to add a new Section 11. These reviews are referred
to as Section 11 reviews. See 47 U.S.C. § 161.
[27] A significant regulatory action under the executive order is
defined as any regulatory action that, for example, may have an annual
effect on the economy of $100 million or more, or create a serious
inconsistency or otherwise interfere with an action taken or planned by
another agency, among other things.
[28] Each year of DOT's plan coincides with the fall-to-fall schedule
for publication of the department's regulatory agenda. Year 1 of this
process began in the fall of 1998.
[29] 12 U.S.C. § 3311.
[30] For example, during the 2001 through 2006 time period covered by
our report, USDA published 1,132 final rules, DOJ 199, DOL 151, DOT
6,415, CPSC 12, EPA 2,948, FCC 1,044, FDIC 40, and SBA 47.
[31] OMB requested input on guidance and regulations, respectively. For
this review, we count only agencies' reforms of their existing
regulations.
[32] Appendix IV provides an illustration of the variation in review
processes even within agencies, as demonstrated by agencies within DOL.
[33] To illustrate how an agency can apply these characteristics in a
detailed formal program that defines key terms and identifies important
evaluation factors that agencies can consider in each phase, we provide
a detailed example of EBSA's regulatory review program in app. XI.
EBSA's program was the most comprehensive documented review process
that we examined within the scope of our review.
[34] Section 610 requires agencies to assess: (1) the continued need
for the rule; (2) the nature of complaints or comments received
concerning the rule from the public; (3) the complexity of the rule;
(4) the extent to which the rule overlaps, duplicates, or conflicts
with other federal rules, and, to the extent feasible, with state and
local government rules; and (5) the length of time since the rule has
been evaluated or the degree to which technology, economic conditions,
or other factors have changed in the area affected by the rule.
[35] Some statutory mandates, such as Section 610, require agencies to
provide notice of retrospective reviews and solicit public comments on
the relevant regulations.
[36] A listserve is a program used to manage e-mail for members of a
discussion group. For example, a listserve program can be used as an
automatic mailing list server: when e-mail is addressed to a listserve
mailing list, it is automatically broadcast to everyone on the list.
However, listserves can take multiple forms. For example, DOT pointed
out that within its Docket Management System, users sign up to be
notified when the agency places something in a docket of interest.
[37] When a press is equipped with presence sensing device initiation
(PSDI), the press cycle will not initiate until the PSDI system senses
that the danger zone is clear. OSHA reviewed its PSDI standard for
mechanical power presses and found that companies were not using PSDI
systems because the standard requires an OSHA-approved third party to
validate the systems at installation and annually thereafter, but no
party has pursued OSHA approval since the standard was implemented in
1988. OSHA is currently attempting to revise the regulation to rely
upon a technology standard that industries could utilize and that would
provide for additional improvements in safety and productivity.
[38] 47 U.S.C. § 161(a)(2).
[39] Pub. L. No. 109-231, 120 Stat. 493 (June 15, 2006).
[40] See Eisner and Kaleta at pages 9 and 14.
[41] For a discussion of the purposes, standards, and approval process
established by PRA, see GAO, Paperwork Reduction Act: New Approach May
Be Needed to Reduce Government Burden on Public, GAO-05-424
(Washington, D.C.: May 20, 2005).
[42] Economic incentive regulations are rules that encourage behavior
through price signals rather than through explicit instructions on how
to meet standards, such as pollution control levels. An example of such
a regulation would be an EPA rule that uses tradable permits and
pollution charges administered by the agency to achieve pollution
control goals.
[43] GAO, Regulatory Flexibility Act: Agencies' Interpretations of
Review Requirements Vary, GAO/GGD-99-55 (Washington, D.C.: Apr. 2,
1999).
[44] This review was conducted between 1980 and 1996--outside of the
time period of our report--so we did not include it in our summary data
on the number of reviews conducted by MSHA. However, agency officials
identified it as a relevant example of a "lesson learned" about
conducting retrospective reviews.
[45] GAO, Regulatory Burden: Some Agencies' Claims Regarding Lack of
Rulemaking Discretion Have Merit, GAO/GGD-99-20 (Washington, D.C.: Jan.
8, 1999).
[46] MSHA officials pointed out that, when modifying regulations, the
agency is prohibited from reducing the protection afforded miners by an
existing mandatory health or safety standard. See 30 U.S.C. §
811(a)(9).
[47] The term "independent regulatory agencies" refers to the boards
and commissions identified as such in the Paperwork Reduction Act (44
U.S.C. § 3502(5)), including CPSC, FCC, and FDIC. "Independent
agencies" refers to agencies that answer directly to the President but
are not part of Cabinet departments.
[48] The Congressional Review Act (CRA) requires agencies to file final
rules with both Congress and GAO before the rules can take effect. To
compile information on all the rules submitted to us under CRA, GAO
established a database and created a standardized submission form to
allow more consistent information collection. We determined that the
data in the database were sufficiently reliable for the purpose for
which it was used. The Federal Rules Database is publicly available at
www.gao.gov under Legal Products.
[49] We interviewed 15 individuals from 11 separate groups that
represented the various sectors listed. Specifically, we interviewed 3
groups from academia, 3 groups that represent public advocacy, 3 groups
that represent business advocacy, and 2 groups that represent state and
local government.
[50] We held a separate exit meeting with the Department of Justice and
discussed the same information presented at the joint agency exit
meeting.
[51] The DOJ agencies included in our review were the Bureau of
Alcohol, Tobacco, Firearms, and Explosives (ATF); Civil Rights Division
(CRT); Federal Bureau of Investigation (FBI); Drug Enforcement
Administration (DEA); Bureau of Prisons (BOP); Executive Office for
Immigration Review; and the Office of Justice Programs (OJP).
[52] 70 Fed. Reg. 16,902 (Apr. 1, 2005).
[53] 68 Fed. Reg. 3744 (Jan. 24, 2003).
[54] 62 Fed. Reg. 1386 (Jan. 10, 1997).
[55] 68 Fed. Reg. 4406 (Jan. 29, 2003).
[56] 45 Fed. Reg. 72,995 (Nov. 2, 1980).
[57] 68 Fed. Reg. 51,334 (Aug. 26, 2003).
[58] 69 Fed. Reg. 58,768 (Sept. 30, 2004).
[59] 68 Fed. Reg. 4406 (Jan. 29, 2003).
[60] OSHA completed 24 of the 42 DOL reviews conducted in response to
the OMB initiatives.
[61] The DOT includes the Office of the Secretary (OST) and the
following operating administrations: Federal Aviation Administration
(FAA); Federal Highway Administration (FHWA); Federal Motor Carrier
Safety Administration (FMCSA); Federal Railroad Administration (FRA);
Federal Transit Administration (FTA); Maritime Administration (MARAD);
National Highway Traffic Safety Administration (NHTSA); Pipeline and
Hazardous Materials Safety Administration (PHMSA); Research and
Innovative Technology Administration (RITA); and St. Lawrence Seaway
Development Corporation (SLSDC). The components of DOT have changed
since 2001. For example, with the establishment of the Department of
Homeland Security (DHS) in 2003, the U.S. Coast Guard and the
Transportation Security Administration were transferred from DOT to
DHS.
[62] 63 Fed. Reg. 31,883 (June 10, 1998). The Executive Memorandum
directed agencies to consider rewriting existing rules in plain
language when the opportunity and resources permit.
[63] 44 Fed. Reg. 11,033 (Feb. 26, 1979).
[64] CPSC officials reported that their review process typically
involves a modest amount of staff resources per year (less than 10
staff months), and involves work by staff who specialize in a variety
of consumer product hazard categories. They pointed out that the costs
of their review program are built into CPSC's operating plan and budget
justification. Each portion of the plan and activities must be
accounted for in this budget plan. If agency priorities or resources
do not permit revising regulations that CPSC staff identified for
revision, CPSC officials may forgo initiating a Phase 2 update until a
later date.
[65] The Office of Prevention, Pesticides and Toxic Substances and the
Office of Solid Waste and Emergency Response reported that their
reviews in response to the OMB Manufacturing Regulatory Reform
initiative all resulted in changes.
[66] OPPTS reported two changes as a result of its Section 610 reviews;
however, one change may have occurred prior to the years included
within the scope of our review.
[67] The biennial review is conducted under Section 11 of the
Communications Act, as amended by Section 402 of the Telecommunications
Act of 1996, which created the requirement.
[68] Section 202(h) of the Communications Act required a biennial
review of broadcast ownership rules. Congress, through the Consolidated
Appropriations Act of 2004, changed this review requirement to every 4
years (it is now referred to as the quadrennial review).
[69] See FCC 06-129, In the Matter of Amendment of Parts 13 and 80 of
the Commission's Rules, PR Docket No. 92-257, Memorandum Opinion and
Order, 2006.
[70] The rule changes from the 2002 quadrennial review never went into
effect. In 2004, the U.S. Court of Appeals remanded to the FCC for
further review its rules for cross-media ownership, local television
multiple ownership, and local radio multiple ownership (Prometheus
Radio v. F.C.C., 373 F.3d 372 (3d Cir. 2004)). Additionally, Congress
overturned FCC's national television ownership rule which would have
allowed a broadcast network to own and operate local broadcast stations
reaching 45 percent of U.S. television households. Through the
Consolidated Appropriations Act of 2004, Congress set the national
television ownership limit at 39 percent (Pub. L. No. 108-199, 118
Stat. 3, 100 (Jan. 23, 2004)).
[71] EBSA's document "Regulatory Review Program" notes that Congress
did not prescribe specific standards for defining "substantial numbers"
or "significance of impact" to be used in threshold testing for Section
610 purposes. However, the document outlines, in great detail, guidance
for determining whether regulations meet this threshold. The document
also acknowledges that, while the agency considers its threshold
standards appropriate for purposes of RFA Section 610 reviews, the
agency also considers it advisable to maintain some flexibility as
the standard is applied going forward, should application of the
threshold test produce a result that appears inconsistent with the
intent of RFA.
[72] The Employee Retirement Income Security Act of 1974 (ERISA) is a
federal law that sets minimum standards for most voluntarily
established pension and health plans in private industry to provide
protection for individuals in these plans. See, e.g., 29 U.S.C. § 1001.
[73] In the event that the RRC is unable to complete the review of a
regulation before the end of the year following the year in which it is
selected for review, it reports this to the Assistant Secretary in its
next report and provides the reason(s) for the delay and an estimate of
when the RRC expects to complete the review.
[74] Office of Advocacy, A Guide for Government Agencies: How to Comply
with the Regulatory Flexibility Act (May 2003) available at [hyperlink,
http://www.sba.gov/advo/laws/rfaguide.pdf].
[75] Pub. L. No. 96-354, 94 Stat. 1164 (1980) codified at 5 U.S.C. §
610 (2000).
[76] See, for example, Michael R. See, Willful Blindness: Federal
Agencies' Failure to Comply with the Regulatory Flexibility Act's
Periodic Review Requirement--and Current Proposals to Invigorate the
Act, 33 Fordham Urb. L.J. 1199-1255 (2006).
[77] Office of Advocacy, How to Comply with the Regulatory Flexibility
Act: A Guide for Government Agencies (May 2003) available at
http://www.sba.gov/advo/laws/rfaguide.pdf.
[78] House Debate on the Regulatory Flexibility Act, 126 Cong. Rec.
H24,575, H24,583-585 (daily ed. Sept. 8, 1980) ("At least once every 10
years, agencies must assess regulations currently on the books, with a
view toward modification of those which unduly impact on small
entities." (Statement of Rep. McDade)) ("[Agencies must review all
regulations currently on the books and determine the continued need for
any rules which have a substantial impact on small business."
(Statement of Rep. Ireland)).
[79] Exec. Order No. 12,044, 43 Fed. Reg. 12,661 (March 24, 1978).
[80] President Jimmy Carter, Memorandum to the Heads of Executive
Departments and Agencies, November 16, 1979.
[81] Office of Advocacy, The Regulatory Flexibility Act (October 1982).
[82] Common retrospective regulatory reviews include post-hoc
validation studies, reviews initiated pursuant to petitions for
rulemaking or reconsideration, paperwork burden reviews, and reviews
conducted pursuant to agency policy.
[83] See 5 U.S.C. § 605(b).
[84] See 5 U.S.C. § 603. In the context of a section 610 review, the
elements of an IRFA analysis that should be present include: a
discussion of the number and types of small entities affected by the
rule, a description of the compliance requirements of the rule and an
estimate of their costs, identification of any duplicative or
overlapping requirements, and a description of possible alternative
regulatory approaches.
[85] See, e.g., www.osha.gov, www.epa.gov, and www.dot.gov, and search
each site for "RFA section 610."
[86] See Table 9, "New Reforms Planned or Underway - Regulations" and
Table 10, "New Reforms Planned or Underway - Guidance Documents" in
Informing Regulatory Decisions: 2003 Report to Congress on the Costs
and Benefits of Federal Regulations and Unfunded Mandates on State,
Local, and Tribal Entities (September 2003) at 26-34; available at
http://www.whitehouse.gov/omb/inforeg/2003_cost_ben_final_rept.pdf.
[87] See Regulatory Reform of the U.S. Manufacturing Sector (2005).
GAO's Mission:
The Government Accountability Office, the audit, evaluation and
investigative arm of Congress, exists to support Congress in meeting
its constitutional responsibilities and to help improve the performance
and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and
policies; and provides analyses, recommendations, and other assistance
to help Congress make informed oversight, policy, and funding
decisions. GAO's commitment to good government is reflected in its core
values of accountability, integrity, and reliability.
Obtaining Copies of GAO Reports and Testimony:
The fastest and easiest way to obtain copies of GAO documents at no
cost is through GAO's Web site (www.gao.gov). Each weekday, GAO posts
newly released reports, testimony, and correspondence on its Web site.
To have GAO e-mail you a list of newly posted products every afternoon,
go to www.gao.gov and select "Subscribe to Updates."
Order by Mail or Phone:
The first copy of each printed report is free. Additional copies are $2
each. A check or money order should be made out to the Superintendent
of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or
more copies mailed to a single address are discounted 25 percent.
Orders should be sent to:
U.S. Government Accountability Office 441 G Street NW, Room LM
Washington, D.C. 20548:
To order by Phone: Voice: (202) 512-6000 TDD: (202) 512-2537 Fax: (202)
512-6061:
To Report Fraud, Waste, and Abuse in Federal Programs:
Contact:
Web site: www.gao.gov/fraudnet/fraudnet.htm E-mail: fraudnet@gao.gov
Automated answering system: (800) 424-5454 or (202) 512-7470:
Congressional Relations:
Gloria Jarmon, Managing Director, JarmonG@gao.gov (202) 512-4400 U.S.
Government Accountability Office, 441 G Street NW, Room 7125
Washington, D.C. 20548:
Public Affairs:
Paul Anderson, Managing Director, AndersonP1@gao.gov (202) 512-4800
U.S. Government Accountability Office, 441 G Street NW, Room 7149
Washington, D.C. 20548: